
Deep Learning


To avoid AI doom, learn from nuclear safety

MIT Technology Review

Last week, a group of tech company leaders and AI experts pushed out another open letter, declaring that mitigating the risk of human extinction due to AI should be as much of a global priority as preventing pandemics and nuclear war. So how do companies themselves propose we avoid AI ruin? One suggestion comes from a new paper by researchers from Oxford, Cambridge, the University of Toronto, the University of Montreal, Google DeepMind, OpenAI, Anthropic, several AI research nonprofits, and Turing Award winner Yoshua Bengio. They suggest that AI developers should evaluate a model's potential to cause "extreme" risks at the very early stages of development, even before starting any training. These risks include the potential for AI models to manipulate and deceive humans, gain access to weapons, or find cybersecurity vulnerabilities to exploit.


AI may have an 'eye' on growing babies: Could predict premature birth as early as 31 weeks

FOX News

Fox News medical contributor Dr. Marc Siegel joins 'Fox & Friends' to discuss the benefits of artificial intelligence in the medical industry if used with caution. About 10% of all infants born in the U.S. in 2021 were preterm, meaning they were delivered before 37 weeks of pregnancy, per the Centers for Disease Control and Prevention (CDC). Preterm births also account for about 16% of infant deaths. Now, researchers from Washington University in St. Louis, Missouri, are looking to improve those odds through the use of artificial intelligence. They developed a deep learning model that can predict preterm births by analyzing electrical activity in the woman's uterus during pregnancy, then tested the model in a study published in the medical journal PLOS One.
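The article does not describe the model's architecture, so the following is only a minimal sketch of the general approach, assuming a small 1D convolutional network that classifies a single-channel uterine electrical (electrohysterogram) recording as term or preterm. The layer sizes, channel counts, and signal length are placeholders for illustration, not the published model.

import torch
import torch.nn as nn

class EHGClassifier(nn.Module):
    """Toy 1D CNN mapping a uterine electrical (EHG) recording to a
    preterm-vs-term logit. Illustrative only; not the published model."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),           # collapse the time axis
        )
        self.head = nn.Linear(32, 1)           # single logit for "preterm"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).squeeze(-1)       # (batch, 32)
        return self.head(z)                    # (batch, 1) raw logits

# Example: a batch of 4 recordings, 1 channel, 2,000 samples each (placeholder length).
model = EHGClassifier()
signals = torch.randn(4, 1, 2000)
probs = torch.sigmoid(model(signals))          # predicted preterm probabilities
print(probs.shape)                             # torch.Size([4, 1])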


PeSTo: an AI tool for predicting protein interactions

AIHub

Image: the geometric deep-learning method PeSTo being used to predict protein binding interfaces, with the amino acids involved in the binding interface highlighted in red. Proteins are essential to the biological functions of most living organisms. They have evolved to interact with other proteins, nucleic acids, lipids and other molecules, and those interactions form large, "supra-molecular" complexes. This means that understanding protein interactions is crucial for understanding many cellular processes.
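The excerpt describes what PeSTo predicts rather than how; the underlying model is a geometric deep network over protein atoms and is not reproduced here. As a small sketch of the target quantity itself, the snippet below labels interface residues using a common geometric definition, assuming a 5 angstrom distance cutoff between the two chains (the exact threshold varies between studies).

import numpy as np

def interface_residues(coords_a, coords_b, cutoff=5.0):
    """Label residues of chain A that lie at the binding interface with chain B.

    coords_a, coords_b: lists of (n_atoms, 3) arrays, one array per residue.
    A residue counts as an interface residue if any of its atoms is within
    `cutoff` angstroms of any atom of the partner chain. This is a standard
    geometric labeling, not the PeSTo model itself.
    """
    atoms_b = np.vstack(coords_b)                       # all partner-chain atoms
    labels = []
    for res in coords_a:
        dists = np.linalg.norm(res[:, None, :] - atoms_b[None, :, :], axis=-1)
        labels.append(bool((dists < cutoff).any()))
    return labels

# Tiny synthetic example: two "residues" in chain A, one near chain B, one far away.
chain_a = [np.array([[0.0, 0.0, 0.0]]), np.array([[30.0, 0.0, 0.0]])]
chain_b = [np.array([[3.0, 0.0, 0.0]])]
print(interface_residues(chain_a, chain_b))             # [True, False]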


The Morning After: Industry leaders say AI presents 'risk of extinction' on par with nuclear war

Engadget

With the rise of AI language models and tools like ChatGPT and Bard, we've heard warnings from people involved, such as Elon Musk, about the risks posed by AI. Now, a group of high-profile industry leaders has issued a one-sentence statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." It was posted on the website of the Center for AI Safety, an organization with the mission "to reduce societal-scale risks from artificial intelligence." Signatories include OpenAI chief executive Sam Altman and Google DeepMind head Demis Hassabis. Turing Award-winning researchers Geoffrey Hinton and Yoshua Bengio, two of the godfathers of modern AI, also put their names to it.


AI leaders sign an open letter to openly acknowledge the dangers of AI

ZDNet

In March, an open letter spearheaded by tech industry experts sought to halt the development of advanced AI models out of fear the technology could pose a "profound risk to society and humanity". This week, a statement cosigned by OpenAI CEO Sam Altman, the "godfather" of AI Geoffrey Hinton, and others seeks to reduce the risk that AI drives humanity to extinction. The statement's preface encourages industry leaders to discuss AI's most severe threats openly. According to the statement, AI's risk to humanity is so severe that it is comparable to global pandemics and nuclear war. Other cosigners include researchers from Google DeepMind; Kevin Scott, Microsoft's chief technology officer; and Bruce Schneier, an internet security pioneer.


AI should be 'a global priority alongside pandemics and nuclear war', new letter states

Daily Mail - Science & tech

A new open letter calling for regulation to mitigate 'the risk of extinction from AI' has been signed by more than 350 industry experts, including several developing the tech. The 22-word statement reads: 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.' The short letter was signed by OpenAI CEO Sam Altman, whose company created ChatGPT and who has called on Congress to establish regulations for AI. While the document does not provide details, the statement likely aims to convince policymakers to create plans for the event that AI goes rogue, just as there are plans in place for pandemics and nuclear wars. Altman was joined by other well-known leaders in AI, including Demis Hassabis of Google DeepMind, Dario Amodei of Anthropic, and executives from Microsoft and Google.


AI Is as Risky as Pandemics and Nuclear War, Top CEOs Say, Urging Global Cooperation

TIME - Tech

The CEOs of the world's leading artificial intelligence companies, along with hundreds of other AI scientists and experts, made their most unified statement yet about the existential risks to humanity posed by the technology, in a short open letter released Tuesday. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the letter, released by California-based non-profit the Center for AI Safety, says in its entirety. The CEOs of what are widely seen as the three most cutting-edge AI labs (Sam Altman of OpenAI, Demis Hassabis of DeepMind, and Dario Amodei of Anthropic) are all signatories to the letter. So is Geoffrey Hinton, a man widely acknowledged to be a "godfather of AI," who made headlines last month when he stepped down from his position at Google and warned of the risks AI posed to humanity.


AI presents 'risk of extinction' on par with nuclear war, industry leaders say

Engadget

With the rise of ChatGPT, Bard and other large language models (LLMs), we've been hearing warnings from people involved, such as Elon Musk, about the risks posed by artificial intelligence (AI). Now, a group of high-profile industry leaders has issued a one-sentence statement effectively confirming those fears: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." It was posted on the website of the Center for AI Safety, an organization with the mission "to reduce societal-scale risks from artificial intelligence." Signatories are a who's who of the AI industry, including OpenAI chief executive Sam Altman and Google DeepMind head Demis Hassabis.


No 10 acknowledges 'existential' risk of AI for first time

The Guardian

The "existential" risk of artificial intelligence has been acknowledged by No 10 for the first time, after the prime minister met the heads of the world's leading AI research groups to discuss safety and regulation. Rishi Sunak and Chloe Smith, the secretary of state for science, innovation and technology, met the chief executives of Google DeepMind, OpenAI and Anthropic AI on Wednesday evening and discussed how best to moderate the development of the technology to limit the risks of catastrophe. "They discussed safety measures, voluntary actions that labs are considering to manage the risks, and the possible avenues for international collaboration on AI safety and regulation," the participants said in a joint statement. "The lab leaders agreed to work with the UK government to ensure our approach responds to the speed of innovations in this technology both in the UK and around the globe. "The PM and CEOs discussed the risks of the technology, ranging from disinformation and national security, to existential threats … The PM set out how the approach to AI regulation will need to keep pace with the fast-moving advances in this technology." It is the first time the prime minister has acknowledged the potential "existential" threat of developing a "superintelligent" AI without appropriate safeguards, a risk that contrasts with the UK government's generally positive approach to AI development.


To PiM or Not to PiM

Communications of the ACM

A 20nm 6GB function-in-memory DRAM, based on HBM2 with a 1.2 TFLOPS programmable computing unit using bank-level parallelism, for machine learning applications.
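The one-sentence summary packs in the key idea of processing in memory: small compute units sit next to individual DRAM banks, so each bank can reduce its own slice of the data and only the partial results cross the memory interface. Below is a minimal sketch of that idea, assuming a dot product striped across a hypothetical set of banks; real function-in-memory hardware is driven through device-specific commands, not a Python API.

import numpy as np

NUM_BANKS = 16  # hypothetical bank count used only for this illustration

def pim_dot(weights: np.ndarray, activations: np.ndarray) -> float:
    """Dot product computed as per-bank partial sums to mimic bank-level parallelism.

    Each "bank" reduces its local slice of the operands; only the small
    partial sums travel back to the host to be combined.
    """
    bank_w = np.array_split(weights, NUM_BANKS)
    bank_x = np.array_split(activations, NUM_BANKS)
    partial_sums = [float(w @ x) for w, x in zip(bank_w, bank_x)]   # "in-bank" work
    return sum(partial_sums)                                         # host-side reduce

w = np.random.rand(1024)
x = np.random.rand(1024)
assert np.isclose(pim_dot(w, x), float(w @ x))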