As technology gets smarter, humanity faces major questions about the future. Will computers become superhuman — so intelligent that they can tackle any task and do it better than a human counterpart? Not in the next few decades, but rapid advancement in the fields of Artificial Intelligence (AI) means that such a scenario could eventually happen. Humans could, in fact, create their own obsolescence: by some estimates, approximately half of all current jobs could be automated, and that is without a superintelligence to oversee disparate AI-driven technologies.
These capabilities will in many cases be integrated into our living systems. We thus face the question: if AIs become better and faster than unenhanced Homo sapiens at nearly everything, what might remain uniquely human? And does this question even matter?
MIT has launched a new project, the MIT Intelligence Quest, whose aim is to "advance the science and engineering of both human and machine intelligence." The project seeks to discover the foundations of human intelligence and to apply that knowledge to the burgeoning science and technology of Artificial Intelligence. The notion of Artificial Intelligence as "the simulation of human intelligence as processed by machines" has lived in the collective consciousness of humanity since time immemorial. One needs to look no further than mythology to see how Artificial Intelligence can run the ethical gamut from sinister to benign. Thus, Artificial Intelligence — the ability to give mind to matter — is ingrained in the human psyche.
What do you think of when someone asks you about empathy? Do you struggle to find its meaning, or does it come to you naturally? In the age of artificial intelligence, do our AI systems need empathy? If so, in what use cases would empathy be most helpful? When we read a book aloud to our children, they can hear the emotions we imbue into the passages.
There's no doubt Artificial Intelligence (AI) — machines that reproduce human thought and actions — is on the rise, both in the scientific community and in the news. And along with AI comes "emotional AI": systems that can detect users' emotions and adjust their responses accordingly, learning programs that provide emotional analysis, and devices, such as smart speakers and virtual assistants, that mimic human interactions. As the pace of AI development and implementation accelerates — with the potential to change the ways we live and work — the ethics and empathy that guide those designing the technology of our future will have far-reaching consequences. It is this moral dimension that concerns me most: do the organizations and software developers creating these programs have an ethical rudder? Long before the concept of AI became commonplace, science fiction writer Isaac Asimov introduced the "Three Laws of Robotics" in his 1942 short story "Runaround" (later included in his 1950 collection, I, Robot):

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Much of Asimov's robot-based fiction hinges upon robots finding loopholes in their interpretations of the laws, which are programmed into them as a safety measure that cannot be bypassed.