What is The Artificial Intelligence (AI) We're Living With

#artificialintelligence

Artificial intelligence has always been a hot topic. The problem of creating a machine that can think and make its own decisions, yet lacks human limitations, has always aroused controversy. Because what is artificial intelligence, and what do we know about it? Is it going to be humanity's best friend or perhaps its worst enemy? We all remember what the infamous Skynet did to our planet in the Terminator series, right? We can also recall the nasty feeling of a cold shiver running down our spines when Neo woke up from the Matrix. After watching such films, we may wonder why scientists want to build intelligent machines in the first place.


Discourse on the Philosophy of Artificial Intelligence and the Future Role of Humanity

#artificialintelligence

Artificial intelligence can be defined as "the ability of an artifact to imitate intelligent human behavior" or, more simply, the intelligence exhibited by a computer or machine that enables it to perform tasks that appear intelligent to human observers (Russell & Norvig 2010). AI can be broken down into two categories: Artificial Narrow Intelligence (ANI) and Artificial General Intelligence (AGI). ANI refers to the ability of a machine or computer program to perform one particular task at an extremely high level, or to learn how to perform that task faster than any other machine; the most famous example of ANI is Deep Blue, which played chess against Garry Kasparov in 1997. AGI refers to the idea that a computer or machine could one day exhibit intelligent behavior equal to that of humans across any given field, such as language, motor skills, and social interaction; this would be similar in scope and complexity to natural intelligence. A typical benchmark offered for AGI is the intelligence of an educated seven-year-old child.


Artificial Intelligence: Robot Technology And The Danger Of Human Extinction!

#artificialintelligence

It was the organizers' belief that "significant advances can be made" in at least one, if not several, of these specific areas of concern through the joint effort of a "carefully selected group of scientists". What's more, this advancement could come more quickly than many thought possible. While there had already been significant developments in automata – machines that can carry out preprogrammed and predetermined functions – it was the conference organizers' belief, especially McCarthy's, that there was a mountain of potential for the development of truly "intelligent" machines that could, essentially, think for themselves. It was his further belief that a joint effort of like-minded people willing to "devote time to it…could make real progress". It would turn out he was correct. In the years that followed the crucial conference in the summer of 1956, advances in artificial intelligence became ever more rapid.


Creating robots capable of moral reasoning is like parenting – Regina Rini | Aeon Essays

#artificialintelligence

Intelligent machines, long promised and never delivered, are finally on the horizon. Sufficiently intelligent robots will be able to operate autonomously from human control. They will be able to make genuine choices. And if a robot can make choices, there is a real question about whether it will make moral choices. But what is moral for a robot? Is this the same as what's moral for a human? Philosophers and computer scientists alike tend to focus on the difficulty of implementing subtle human morality in literal-minded machines. But there's another problem, one that really ought to come first. It's the question of whether we ought to try to impose our own morality on intelligent machines at all. In fact, I'd argue that doing so is likely to be counterproductive, and even unethical. The real problem of robot morality is not the robots, but us. Can we handle sharing the world with a new type of moral creature? We like to imagine that artificial intelligence (AI) will be similar to humans, because we are the only advanced intelligence we know. But we are probably wrong.