Decades of sci-fi flicks have instilled a fear of AI in some people – and a new report suggests many remain unconvinced it will benefit humanity. The report, from the Center for the Governance of AI based at Oxford University, reveals concerns that artificial intelligence may harm or endanger humankind. Baobao Zhang and Allan Dafoe, authors of the report, wrote in its summary: "Public sentiments have shaped many policy debates, including those about immigration, free trade, international conflicts, and climate change mitigation. As in these other policy domains, we expect the public to become more influential over time. It is thus vital to have a better understanding of how the public thinks about AI and the governance of AI." 41% of respondents 'strongly' or 'somewhat' support the continued development of AI, compared to 22% who oppose it to some degree.
Americans consider many AI governance challenges to be important; they prioritize data privacy and preventing AI-enhanced cyber attacks, surveillance, and digital manipulation. We sought to understand how Americans prioritize policy issues associated with AI. Respondents were asked to consider five AI governance challenges, randomly selected from a set of 13 (see Appendix B for the text); the order in which these five were presented to each respondent was also randomized. After considering each governance challenge, respondents were asked how likely they think the challenge will affect large numbers of people 1) in the U.S. and 2) around the world within 10 years. We use scatterplots to visualize our survey results. In Figure 3.1, the x-axis is the perceived likelihood of the problem happening to large numbers of people in the U.S. In Figure 3.2, the x-axis is the perceived likelihood of the problem happening to large numbers of people around the world.
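The randomization scheme described above (pick 5 of the 13 governance challenges per respondent, in random presentation order) can be sketched as follows. The challenge labels here are placeholders, not the report's actual wording, which appears in its Appendix B:

```python
import random

# Placeholder labels for the 13 governance challenges (hypothetical names;
# the survey's real text is given in Appendix B of the report).
GOVERNANCE_CHALLENGES = [f"challenge_{i}" for i in range(1, 14)]

def draw_challenges(rng: random.Random, k: int = 5) -> list[str]:
    """Select k challenges uniformly at random without replacement.

    random.sample returns the selected items in random order, so a single
    call covers both the random selection and the randomized presentation
    order described in the survey design.
    """
    return rng.sample(GOVERNANCE_CHALLENGES, k)

# Example: the set shown to one respondent.
respondent_set = draw_challenges(random.Random())
```

Using a seeded `random.Random` instance (rather than the module-level functions) makes each respondent's assignment reproducible for analysis.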
Advances in artificial intelligence (AI) could impact nearly all aspects of society: the labor market, transportation, healthcare, education, and national security. AI's effects may be profoundly positive, but the technology entails risks and disruptions that warrant attention. While technologists and policymakers have begun to discuss AI and applications of machine learning more frequently, public opinion has not shaped much of these conversations. In the U.S., public sentiments have shaped many policy debates, including those about immigration, free trade, international conflicts, and climate change mitigation. As in these other policy domains, we expect the public to become more influential over time.
Why are so many AI researchers so worried about lethal autonomous weapons? What makes autonomous weapons so much worse than any other weapons we have today? And why is it so hard for countries to come to a consensus about autonomous weapons? Not surprisingly, the short answer is: it's complicated. In this month's podcast, Ariel spoke with experts from a variety of perspectives on the current status of LAWS, where we are headed, and the feasibility of banning these weapons. Guests include ex-Pentagon advisor Paul Scharre (3:40), artificial intelligence professor Toby Walsh (40:51), Article 36 founder Richard Moyes (53:30), Campaign to Stop Killer Robots founders Mary Wareham and Bonnie Docherty (1:03:38), and ethicist and co-founder of the International Committee for Robot Arms Control, Peter Asaro (1:32:39). You can listen to the podcast above, and read the full transcript below. You can check out previous podcasts on SoundCloud, iTunes, GooglePlay, and Stitcher. If you work with ...
AI is applicable in a wide variety of areas--everything from agriculture to cybersecurity. However, most of our work has been on the long-term impact of AI in business. We're not talking about next quarter, or even next year, but about the decades to come. As AI becomes more powerful, we expect it to have a larger impact on our world, including your organization. So, we decided to do what we do best: a deep analysis of AI applications and implications.