Hitting the Books: Do we really want our robots to have consciousness?


From Star Trek's Data and 2001's HAL to Columbus Day's Skippy the Magnificent, pop culture is chock full of fully conscious AIs who, in many cases, are more human than the humans they serve alongside. But is all that self-actualization really necessary for these synthetic life forms to carry out their essential duties? In his new book, How to Grow a Robot: Developing Human-Friendly, Social AI, author Mark H. Lee examines the social shortcomings of today's AI and delves into the promises and potential pitfalls surrounding deep learning techniques, currently believed to be our most effective tool for building robots capable of doing more than a handful of specialized tasks. In the excerpt below, Lee argues that the robots of tomorrow don't necessarily need -- nor should they particularly seek out -- the feelings and experiences that make up the human condition.

Although I argue for self-awareness, I do not believe that we need to worry about consciousness.
