US and UK announce formal partnership on artificial intelligence safety
The United States and Britain on Monday announced a new partnership on the science of artificial intelligence safety, amid growing concerns about upcoming next-generation AI models. The US commerce secretary, Gina Raimondo, and British technology secretary, Michelle Donelan, signed a memorandum of understanding in Washington to work jointly on developing advanced AI model testing, following commitments announced at the AI safety summit at Bletchley Park in November. "We all know AI is the defining technology of our generation," Raimondo said. "This partnership will accelerate both of our institutes' work across the full spectrum to address the risks of our national security concerns and the concerns of our broader society." Under the formal partnership, Britain and the United States plan to perform at least one joint testing exercise on a publicly accessible model and are exploring personnel exchanges between the institutes.
SafeAI 2019 – AAAI's Workshop on Artificial Intelligence Safety
SafeAI aims to explore new ideas on AI safety engineering, ethically aligned design, regulation and standards for AI-based systems. You are invited to submit short position papers (2-4 pages), full technical papers (6-8 pages) or proposals for technical talks (one-page abstract). The workshop proceedings will be published on CEUR-WS.org. SafeAI will be a memorable event with two top keynote speakers and three great invited talks. The Program is now available!
The Truth about Artificial Intelligence Safety
When we heard that two Facebook artificial intelligence programs had to be shut down because they were talking to each other in a language we couldn't understand, things got weird. Is the age of true artificial intelligence upon us? It has become a hot topic in recent years, with entrepreneurs such as Mark Zuckerberg arguing that it will be wholly beneficial for humanity, while others, such as Elon Musk, warn that it could be one of the greatest threats to humanity. Let's find out the real story behind artificial intelligence. Follow me on Twitter: https://twitter.com/mikeohhello
Elon Musk: "If you're not concerned about Artificial Intelligence safety, you should be"
Tesla and SpaceX CEO Elon Musk recently emphasized the threat Artificial Intelligence (AI) poses to the human race. In an open letter to the United Nations, Musk and other robotics experts said that artificial super intelligence would lead to "lethal autonomous weapons" that would bring the "third revolution in warfare". The letter was signed by 115 robotics experts who feel the need to raise the alarm on artificially intelligent robots, which they state are a present danger and not merely a threat in the distant future. If you're not concerned about AI safety, you should be. The letter states that this Pandora's box, once opened, will be hard to close.
Researchers collaborate with Google DeepMind on artificial intelligence safety
Academics from the Future of Humanity Institute (FHI), part of the Oxford Martin School, are teaming up with Google DeepMind to make artificial intelligence safer. Stuart Armstrong, the Alexander Tamas Fellow in Artificial Intelligence and Machine Learning at FHI, and Laurent Orseau, of Google DeepMind, will present their research on reinforcement learning agent interruptibility at the UAI 2016 conference in New York City later this month. Orseau and Armstrong's research explores a method to ensure that reinforcement learning agents can be repeatedly and safely interrupted by human or automatic overseers. This ensures that the agents do not "learn" about these interruptions, and do not take steps to avoid or manipulate the interruptions. As an approach to control, interruptibility has several advantages over previous methods.
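The idea behind safe interruptibility can be illustrated with a toy sketch (a hypothetical example, not Orseau and Armstrong's implementation): an off-policy Q-learning agent walks a short corridor while an overseer randomly interrupts it. Because the Q-update bootstraps from the greedy next action rather than the action actually taken, and interrupted steps here simply skip the learning update, the interruptions do not bias what the agent learns, so it has no incentive to avoid them. All names and parameters below are illustrative assumptions.

```python
import random

# Toy corridor: cells 0..4, reward 1.0 for reaching the right end.
N_STATES = 5
ACTIONS = [-1, +1]            # move left / move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
INTERRUPT_PROB = 0.3          # chance the overseer interrupts a step (assumed)

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Move within the corridor; reward only at the rightmost cell."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

for _ in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action choice.
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        if random.random() < INTERRUPT_PROB:
            # Overseer interrupts: the agent is held in place this step and
            # no learning update occurs, so the agent cannot "learn" about
            # the interruption or come to resist it.
            continue
        s2, r = step(s, a)
        # Off-policy (Q-learning) update: bootstraps on the greedy action.
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# Despite frequent interruptions, the learned policy still heads right.
policy = [max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES - 1)]
print(policy)
```

Running this prints a policy of `+1` moves in every non-terminal cell: the interruptions slow the agent down but leave its learned behaviour unchanged, which is the essence of the "safely interruptible" property.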