The Future of Artificial Intelligence


Last week we covered the past and current state of artificial intelligence -- what modern AI looks like, the differences between weak and strong AI, AGI, and some of the philosophical ideas about what constitutes consciousness. Weak AI is already all around us, in the form of software dedicated to performing specific tasks intelligently. Strong AI is the ultimate goal, and a true strong AI would resemble what most of us have grown familiar with through popular fiction. Artificial General Intelligence (AGI) is a nearer-term goal that many AI researchers are devoting their careers to in an effort to bridge that gap. While AGI wouldn't necessarily possess any kind of consciousness, it would be able to handle any data-related task put before it.

The philosophy that could save the world - Jamie Woodhouse


There is a little-known philosophical position that, for me at least, is well founded in reality, provides a strong basis for compassionate ethics, and will eventually become humanity's predominant way of thinking. Most people disagree with it and with me. Sentientism is an ethical philosophy that applies evidence and reason and grants moral consideration to all sentient beings. Sentient beings have sentience -- the capacity to experience both suffering and flourishing. Things that can't experience might be important in other ways, but they don't need our moral consideration.

Researchers are already building the foundation for sentient AI


Few sci-fi tropes are more reliable at enthralling audiences than the plot of an artificial intelligence betraying mankind. Perhaps this is because AI makes us confront the idea of what makes us human at all. From HAL 9000, to Skynet, to Westworld's robot uprising, fears of sentient AI feel very real. Even Elon Musk worries about what AI is capable of.

What Does It Mean For Us To Live With Computers That Appear Sentient?

Forbes Technology

"How should we be thinking about machine learning and AI?" originally appeared on Quora -- the knowledge-sharing network where compelling questions are answered by people with unique insights. The big issue that I am interested in is: what does it mean for us to live with machines that are not sentient but appear as if they are? Let's start by quickly defining sentience and intelligence (at least for this discussion). We'll say that intelligence is the ability to solve complex problems, handle varied input, and achieve goals. An abacus is not intelligent.