Goto

Collaborating Authors

 Nature


Two-faced AI language models learn to hide deception

Nature

Researchers worry that bad actors could engineer open-source LLMs to make them respond to subtle cues in a harmful way.Credit: Smail Aslanda/Anadolu Just like people, artificial-intelligence (AI) systems can be deliberately deceptive. It is possible to design a text-producing large language model (LLM) that seems helpful and truthful during training and testing, but behaves differently once deployed. And according to a study shared this month on arXiv1, attempts to detect and remove such two-faced behaviour are often useless -- and can even make the models better at hiding their true nature. The finding that trying to retrain deceptive LLMs can make the situation worse "was something that was particularly surprising to us … and potentially scary", says co-author Evan Hubinger, a computer scientist at Anthropic, an AI start-up company in San Francisco, California. Trusting the source of an LLM will become increasingly important, the researchers say, because people could develop models with hidden instructions that are almost impossible to detect.


This robot grows like a vine -- and could help navigate disaster zones

Nature

The vine-like Filobot was inspired by plants.Credit: Del Dottore et al., Sci. Researchers have demonstrated a robot that grows like a vine in response to stimuli such as light and pressure. The machine -- named FiloBot -- has a head that prints its body by melting and extruding plastic, which then solidifies as it cools. The robot's head is connected to a base by a thin hose, through which it receives a fresh supply of plastic from a spool. FiloBot's growth rate is slow -- its body elongates by just a few millimeters each minute.


'Set it and forget it': automated lab uses AI and robotics to improve proteins

Nature

Proteins were made in a laboratory by a completely autonomous robot.Credit: Panther Media GmbH/Alamy A'self-driving' laboratory comprising robotic equipment directed by a simple artificial intelligence (AI) model successfully reengineered enzymes without any input from humans -- save for the occasional hardware fix. "It is cutting-edge work," says Héctor García Martín, a physicist and synthetic biologist at Lawrence Berkeley National Laboratory in Berkeley, California. "They are fully automating the whole process of protein engineering." Self-driving labs meld robotic equipment with machine-learning models capable of directing experiments and interpreting results to design new procedures. The hope, say researchers, is that autonomous labs will turbo-charge the scientific process and come up with solutions that humans might not have thought of on their own.


What the OpenAI drama means for AI progress -- and safety

Nature

OpenAI fired its charismatic chief executive, Sam Altman, on 17 November -- but has now reinstated him.Credit: Justin Sullivan/Getty OpenAI -- the company behind the blockbuster artificial intelligence (AI) bot ChatGPT -- has been consumed by frenzied changes for almost a week. On 17 November, the company fired its charismatic chief executive, Sam Altman. Five days, and much drama, later, OpenAI announced that Altman would return with an overhaul of the company's board. The debacle has thrown the spotlight on an ongoing debate about how commercial competition is shaping the development of AI systems, and how quickly AI can be deployed ethically and safely. "The push to retain dominance is leading to toxic competition. It's a race to the bottom," says Sarah Myers West, managing director of the AI Now Institute, a policy-research organization based in New York City.


ChatGPT generates fake data set to support scientific hypothesis

Nature

The artificial-intelligence model that powers ChatGPT can create superficially plausible scientific data sets.Credit: Mateusz Slodkowski/SOPA Images/LightRocket via Getty Researchers have used the technology behind the artificial intelligence (AI) chatbot ChatGPT to create a fake clinical-trial data set to support an unverified scientific claim. In a paper published in JAMA Ophthalmology on 9 November1, the authors used GPT-4 -- the latest version of the large language model on which ChatGPT runs -- paired with Advanced Data Analysis (ADA), a model that incorporates the programming language Python and can perform statistical analysis and create data visualizations. The AI-generated data compared the outcomes of two surgical procedures and indicated -- wrongly -- that one treatment is better than the other. "Our aim was to highlight that, in a few minutes, you can create a data set that is not supported by real original data, and it is also opposite or in the other direction compared to the evidence that are available," says study co-author Giuseppe Giannaccare, an eye surgeon at the University of Cagliari in Italy. The ability of AI to fabricate convincing data adds to concern among researchers and journal editors about research integrity.


How to 3D print fully-formed robots

Nature

To overcome this, a team has combined inkjet printing with an error-correction system guided by machine vision, to allow them to print sophisticated multi-material objects. They used this method to make a bio-inspired robotic hand that combines soft and rigid plastics to make mechanical bones, ligaments, and tendons, as well as a pump based on a mammalian heart. Citizen-scientists help identify an astronomical object that blurs the line between asteroid and comet, and how a Seinfeld episode helped scientists to distinguish the brain regions involved in understanding and appreciating humour. Type 2 diabetes affects hundreds of millions of people around the world and represents a significant burden on healthcare systems. But behaviour change programmes -- also known as lifestyle interventions -- could potentially play a large role in preventing people from developing type 2 diabetes. This week in Nature a new paper assesses how effective this kind of intervention might be.


ChatGPT has entered the classroom: how LLMs could transform education

Nature

Last month, educational psychologist Ronald Beghetto asked a group of graduate students and teaching professionals to discuss their work in an unusual way. As well as talking to each other, they conversed with a collection of creativity-focused chatbots that Beghetto had designed and that will soon be hosted on a platform run by his institute, Arizona State University (ASU). The bots are based on the same artificial-intelligence (AI) technology that powers the famous and conversationally fluent ChatGPT. Beghetto prompts the bots to take on various personas to encourage creativity -- for example, by deliberately challenging someone's assumptions. One student discussed various dissertation topics with the chatbots. Lecturers talked about how to design classes.


AI mathematician, tumour fungi and Africa's coronavirus genomes

Nature

AlphaTensor was designed to perform matrix multiplications, but the same approach could be used to tackle other mathematical challenges.Credit: DeepMind An artificial intelligence (AI) developed by machine-learning company DeepMind in London has tackled a type of calculation called matrix multiplication. The system -- called AlphaTensor -- leverages the skills that DeepMind's game-playing AIs use to beat human players at games such as Go and chess. Matrix multiplication is a widely used mathematical technique that involves multiplying numbers arranged in grids, or matrices, that might represent sets of pixels in images, air conditions in a weather model or the internal workings of an artificial neural network. AlphaTensor broke ground by finding shortcuts to solve these problems with fewer steps (A. The same general approach could have applications in other kinds of mathematical operation, its developers say, such as decomposing complex waves or other mathematical objects into simpler ones.

  AI-Alerts: 2022 > 2022-10 > AAAI AI-Alert for Oct 18, 2022 (1.00)
  Country:
  Genre: Research Report (0.54)
  Industry: Health & Medicine > Therapeutic Area > Oncology (1.00)

Open-source language AI challenges big tech's models

Nature

Researchers have warned against possible harms from AI that processes and generates text.Credit: Getty An international team of around 1,000 largely academic volunteers has tried to break big tech's stranglehold on natural-language processing and reduce its harms. Trained with US$7-million-worth of publicly funded computing time, the BLOOM language model will rival in scale those made by firms Google and OpenAI, but will be open-source. BLOOM will also be the first model of its scale to be multilingual. The collaboration, called BigScience, launched an early version of the model on 17 June, and hopes that it will ultimately help to reduce harmful outputs of artificial intelligence (AI) language systems. Models that recognize and generate language are increasingly used by big tech firms in applications from chat bots to translators, and can sound so eerily human that a Google engineer this month claimed that the firm's AI model was sentient (Google strongly denies that the AI possesses sentience).

  AI-Alerts: 2022 > 2022-06 > AAAI AI-Alert for Jun 29, 2022 (1.00)
  Country:
  Industry: Law (0.31)

Cloud labs: where robots do the research

Nature

As a chemistry PhD student, Dmytro Kolodieznyi was used to running experiments. But in early 2018, his research advisers asked him to take part in one run by robots instead. They wanted Kolodieznyi, who was developing intracellular fluorescent probes at Carnegie Mellon University in Pittsburgh, Pennsylvania, to spend a month attempting to recreate his research at Emerald Cloud Lab (ECL). The biotechnology company in South San Francisco, California, enables scientists to perform wet-laboratory experiments remotely in an automated research environment known as a cloud lab. If the trial went well, it would help pave the way to the wider use of cloud labs at the university.