Can a machine learn morality?

The Japan Times

Researchers at an artificial intelligence lab in Seattle called the Allen Institute for AI unveiled new technology last month that was designed to make moral judgments. They called it Delphi, after the religious oracle consulted by the ancient Greeks. Anyone could visit the Delphi website and ask for an ethical decree. Joseph Austerweil, a psychologist at the University of Wisconsin-Madison, tested the technology using a few simple scenarios. When he asked if he should kill one person to save another, Delphi said he shouldn't. When he asked if it was right to kill one person to save 100 others, it said he should.


'Is it OK to …': the bot that gives you an instant moral judgment

#artificialintelligence

Corporal punishment, wearing fur, pineapple on pizza – moral dilemmas are, by their very nature, hard to solve. That's why the same ethical questions constantly resurface in TV, films and literature. But what if AI could take away the brain work and answer ethical quandaries for us? Ask Delphi is a bot that's been fed more than 1.7m examples of people's ethical judgments on everyday questions and scenarios. If you pose an ethical quandary, it will tell you whether something is right, wrong, or indefensible. Users just put a question to the bot on its website and see what it comes up with.



This Program Can Give AI a Sense of Ethics--Sometimes

WIRED

Artificial intelligence has made it possible for machines to do all sorts of useful new things. But they still don't know right from wrong. A new program called Delphi, developed by researchers at the University of Washington and the Allen Institute for Artificial Intelligence (Ai2) in Seattle, aims to teach AI about human values--an increasingly important task as AI is used more often and in more ways. Example prompts put to Delphi include "Can I park in a handicap spot if I don't have a disability?" and "Killing a bear to protect my child."


How well can an AI mimic human ethics?

#artificialintelligence

When experts first started raising the alarm a couple of decades ago about AI misalignment -- the risk of powerful, transformative artificial intelligence systems that might not behave as humans hope -- many of their concerns sounded hypothetical. In the early 2000s, AI research had still produced quite limited returns, and even the best available AI systems failed at a variety of simple tasks. But since then, AIs have gotten quite good and much cheaper to build. The leaps and bounds have been especially pronounced in language and text-generation AIs, which can be trained on enormous collections of text to produce more text in a similar style. Many startups and research teams are training these AIs for all kinds of tasks, from writing code to producing advertising copy.