Social & Ethical Issues


Miko 2 and robots like it want to be friends

#artificialintelligence

It was almost ten years ago that Sherry Turkle warned the world was headed toward a place where humans would interact socially with machines such as robots. Turkle is an MIT professor and social scientist who studies human-technology interaction and what it will mean for the human race. She is the author of several books, including Alone Together and Reclaiming Conversation, which explore technology's impact on some of the qualities that make humans human. Over the years, through her books and numerous talks, Turkle has explained the dangers of people trying to replace one another with machines, including smartphones and robots, but the world seems to have taken little heed: today companies are building robots for all sorts of tasks, and even for human relationships. Remember the Chinese inventor who married a female robot he built in 2017?



Human beings are unable to connect with artificial intelligence: Pranav Mistry - ETtech

#artificialintelligence

Neon, the artificial-human prototype conceptualized by computer scientist and inventor Pranav Mistry, created waves recently. The president and CEO of Samsung's STAR Labs told ET in an exclusive interview that he created Neon because human beings are unable to connect with artificial intelligence (AI) assistants such as Apple's Siri. The Palanpur (Gujarat)-born Mistry, considered one of the most innovative minds in the world right now, said Neon will be a companion to the elderly and the lonely, and could even work as a fashion model or news anchor. The 38-year-old also spoke about the dangers posed by AI, echoing Google parent Alphabet Inc's chief Sundar Pichai, who recently called on governments to regulate AI. Edited excerpts: When you started thinking about Neon, what was the problem you were trying to solve?


Artificial Intelligence Needs Private Markets for Regulation--Here's Why

#artificialintelligence

A regulatory market approach would enable the dynamism needed for AI to flourish in a way consistent with safety and public trust. It seems the White House wants to ramp up America's artificial intelligence (AI) dominance. Earlier this month, the U.S. Office of Management and Budget released its "Guidance for Regulation of Artificial Intelligence Applications" for federal agencies, to oversee AI's development in a way that protects innovation without making the public wary. The noble aims of these principles respond to the need for a coherent American vision for AI development, complete with transparency, public participation and interagency coordination. But the government is missing something key.


A New AI Ethics Center Investigates Growing Anxiety About Intelligent Machines - IntelligentHQ

#artificialintelligence

The pace of progress in artificial intelligence is scaring many people, who feel threatened by the huge impact automation might have on employment and on other areas such as the development of autonomous weapons. A question growing among experts deeply aware of AI's impact on society is how to understand and predict what can happen when increasingly automated, complex systems fail or go off track. As John Danaher wrote in the Institute for Ethics & Emerging Technologies, "Artificial intelligence is a classic risk/reward technology. If developed safely and properly, it could be a great boon." Trying to deliver some answers to this and other questions, Carnegie Mellon University just launched a new center, the K&L Gates Endowment for Ethics and Computational Technologies.


Regulation will 'stifle' AI and hand the lead to Russia and China, warns Garry Kasparov

#artificialintelligence

Garry Kasparov has warned that any attempts by the Government to regulate artificial intelligence (AI) could "stifle" its development and give Russia and China an advantage. The former world chess champion has become an advocate for AI development following his retirement from professional chess in 2005. He told The Telegraph that "the government should be involved" in helping researchers and private firms develop AI in order to "pave the road" for the technology. However, he cautioned against governments attempting to regulate the technology too closely. "It's too early for the government to interfere," he said.


The Road to Artificial Intelligence: An Ethical Minefield

#artificialintelligence

The term "Artificial Intelligence" conjures, in many, an image of an anthropomorphized, Terminator-esque killer-robot apocalypse. Hollywood movies of recent decades have only furthered this notion. Physicists and moral philosophers like Max Tegmark and Sam Harris, however, argue that we need not fear a runaway superintelligence to worry about the deleterious effects endemic to the AI space: mere machine competence is a sufficiently frightening springboard from which an irreversibly harmful future could be launched. Beyond that, there are currently a number of far more insidious and immediately relevant ethical dilemmas that warrant our attention. In a world increasingly controlled by automated processes, a time is rapidly approaching in which adaptive, self-improving algorithms guide, or even dictate, most of the decisions that define human experience.


Top Quotes about AI, Automation and Robotics - Supply Chain Today

#artificialintelligence

"Artificial intelligence is growing up fast, as are robots whose facial expressions can elicit empathy and make your mirror neurons quiver." "In the long term, artificial intelligence and automation are going to be taking over so much of what gives humans a feeling of purpose." "I predict that, because of artificial intelligence and its ability to automate certain tasks that in the past were impossible to automate, not only will we have a much wealthier civilization, but the quality of work will go up very significantly and a higher fraction of people will have callings and careers relative to today." "Let's start with the three fundamental Rules of Robotics…. We have: one, a robot may not injure a human being, or, through inaction, allow a human being to come to harm. Two, a robot must obey the orders given it by human beings except where such orders would conflict with the First Law. And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws." "In 30 years, a robot will likely be on the cover of Time magazine as the best CEO. Machines will do what human beings are incapable of doing. Machines will partner and cooperate with humans, rather than become mankind's biggest enemy."



Five Ways Companies Can Adopt Ethical AI

#artificialintelligence

Does your company have an AI ethics officer? In 2014, Stephen Hawking said that AI would be humankind's best or last invention. Six years later, as we welcome 2020, companies are looking at how to use artificial intelligence (AI) in their business to stay competitive. The question they face is how to evaluate whether the AI products they use will do more harm than good. Many public- and private-sector leaders worldwide are thinking about how to address these questions around safety, privacy, accountability, transparency and bias in algorithms.