
Postdoc in the study of Policy, Responsible Innovation and the Future of AI (Fixed Term)

#artificialintelligence

The Leverhulme Centre for the Future of Intelligence (CFI) invites applications for a postdoctoral Research Associate for the project 'Policy, Responsible Innovation and the Future of AI'. The appointment will be for three years and is based in Cambridge. CFI is an exciting new interdisciplinary research centre addressing the challenges and opportunities posed by artificial intelligence (AI). Funded by the Leverhulme Trust, CFI is based at the University of Cambridge, with partners at the University of Oxford, Imperial College London, and UC Berkeley, and close links with industry partners and policymakers. This project examines the prospects for a robust safety and benefits culture within the AI industry, in anticipation of the development of increasingly powerful AI systems that will present ever-greater real-world opportunities and challenges.


Global AI Experts Sound The Alarm

#artificialintelligence

Twenty-six experts on the security implications of emerging technologies have jointly authored a ground-breaking report sounding the alarm about the potential malicious use of artificial intelligence (AI) by rogue states, criminals, and terrorists. Forecasting rapid growth in cyber-crime and the misuse of drones during the next decade, as well as an unprecedented rise in the use of 'bots' to manipulate everything from elections to the news agenda and social media, the report is a clarion call for governments and corporations worldwide to address the clear and present danger inherent in the myriad applications of AI. However, the report, "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation", also recommends interventions to mitigate the threats posed by the malicious use of AI. The co-authors come from a wide range of organizations and disciplines, including Oxford University's Future of Humanity Institute; Cambridge University's Centre for the Study of Existential Risk; OpenAI, a leading non-profit AI research company; the Electronic Frontier Foundation, an international non-profit digital rights group; the Center for a New American Security, a U.S.-based bipartisan national security think-tank; and other organizations. The 100-page report identifies three security domains (digital, physical, and political security) as particularly relevant to the malicious use of AI. It suggests that AI will disrupt the trade-off between scale and efficiency and allow large-scale, finely targeted, and highly efficient attacks.


Stephen Hawking launches AI research center with opening speech

AITopics Original Links

Theoretical physicist and cosmologist Stephen Hawking has repeatedly warned of the dangers posed by out-of-control artificial intelligence (AI). But on Wednesday, as the professor opened the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge, he remarked on its potential to bring positive change – if developed correctly. "Success in creating AI could be the biggest event in the history of our civilisation. But it could also be the last, unless we learn how to avoid the risks," Professor Hawking said at the launch, according to a University of Cambridge press release. Representing a collaboration between the universities of Oxford and Cambridge, Imperial College London, and the University of California, Berkeley, the CFI will bring together a multidisciplinary team of researchers, as well as tech leaders and policymakers, to ensure that societies can "make the best of the opportunities of artificial intelligence," as its website states.


Predicting the future of artificial intelligence has always been a fool's game

AITopics Original Links

From the Dartmouth Conference to the Turing test, prophecies about AI have rarely hit the mark. In 1956, a group of the top brains in their field thought they could crack the challenge of artificial intelligence over a single hot New England summer. Almost 60 years later, the world is still waiting. The "spectacularly wrong prediction" of the Dartmouth Summer Research Project on Artificial Intelligence led Stuart Armstrong, research fellow at the Future of Humanity Institute at the University of Oxford, to start thinking about why our predictions about AI are so inaccurate. The Dartmouth Conference had predicted that over two summer months ten of the brightest people of their generation would solve some of the key problems faced by AI developers, such as getting machines to use language, form abstract concepts, and even improve themselves.


Interview: Artificial Intelligence: Thinking Outside the Box (Part One)

#artificialintelligence

Artificial intelligence (AI) is no longer the stuff of science fiction. While robot maids may not yet be a reality, researchers are working hard to create reasoning, problem-solving machines whose "brains" might rival our own. Seán Ó hÉigeartaigh (anglicized as Sean O'Hegarty), while enthusiastic about the benefits that AI can bring, is also wary of the technology's dark side. He holds a doctorate in genomics from Trinity College Dublin and is now executive director of the Centre for the Study of Existential Risk at the University of Cambridge. He has played a central role in international research on the long-term impacts and risks of AI.