Experts are worried that advancements in AI could threaten humanity
[Image: A Barbie doll that uses artificial intelligence to communicate interactively.]

Oren Etzioni, a well-known AI researcher, complains about news coverage of potential long-term risks arising from future success in AI research (see "No, Experts Don't Think Superintelligent AI is a Threat to Humanity"). After pointing the finger squarely at Oxford philosopher Nick Bostrom and his recent book, Superintelligence, Etzioni objects that Bostrom's "main source of data on the advent of human-level intelligence" consists of surveys of the opinions of AI researchers. He then conducts his own survey of AI researchers, arguing that its results refute Bostrom's.

It's important to understand that Etzioni is not even addressing the reason Superintelligence has had the impact he decries: its clear explanation of why superintelligent AI may have arbitrarily negative consequences and why it's important to begin addressing the issue well in advance. Bostrom does not base his case on predictions that superhuman AI systems are imminent.
Nov-7-2016, 20:30:19 GMT