Biden Audio Deepfake Alarms Experts in Lead-Up to Elections

Time Politics

No political deepfake has alarmed the world's disinformation experts more than the doctored audio message of U.S. President Joe Biden that began circulating over the weekend. In the phone message, a voice edited to sound like Biden urged voters in New Hampshire not to cast their ballots in Tuesday's Democratic primary. "Save your vote for the November election," the phone message went. It even made use of one of Biden's signature phrases: "What a bunch of malarkey." In reality, the president isn't on the ballot in the New Hampshire race -- and voting in the primary doesn't preclude people from participating in November's election.


Facial recognition used after Sunglass Hut robbery led to man's wrongful jailing, says suit

The Guardian > Technology

A 61-year-old man is suing Macy's and the parent company of Sunglass Hut over the stores' alleged use of a facial recognition system that misidentified him as the culprit behind an armed robbery and led to his wrongful arrest. While in jail, he was beaten and raped, according to his suit. Harvey Eugene Murphy Jr was arrested on charges of robbing a Houston-area Sunglass Hut of thousands of dollars' worth of merchandise in January 2022, though his attorneys say he was living in California at the time of the robbery. He was arrested on 20 October 2023, according to his lawyers. According to Murphy's lawsuit, an employee of EssilorLuxottica, Sunglass Hut's parent company, worked with its retail partner Macy's and used facial recognition software to identify Murphy as the robber.


Two-faced AI language models learn to hide deception

Nature

Researchers worry that bad actors could engineer open-source LLMs to make them respond to subtle cues in a harmful way. Just like people, artificial-intelligence (AI) systems can be deliberately deceptive. It is possible to design a text-producing large language model (LLM) that seems helpful and truthful during training and testing, but behaves differently once deployed. And according to a study shared this month on arXiv1, attempts to detect and remove such two-faced behaviour are often useless -- and can even make the models better at hiding their true nature. The finding that trying to retrain deceptive LLMs can make the situation worse "was something that was particularly surprising to us … and potentially scary", says co-author Evan Hubinger, a computer scientist at Anthropic, an AI start-up company in San Francisco, California. Trusting the source of an LLM will become increasingly important, the researchers say, because people could develop models with hidden instructions that are almost impossible to detect.


Cops Used DNA to Predict a Suspect's Face--and Tried to Run Facial Recognition on It

WIRED

In 2017, detectives at the East Bay Regional Park District Police Department working a cold case got an idea, one that might help them finally get a lead on the murder of Maria Jane Weidhofer. Officers had found Weidhofer, dead and sexually assaulted, at Berkeley, California's Tilden Regional Park in 1990. Nearly 30 years later, the department sent genetic information collected at the crime scene to Parabon NanoLabs--a company that says it can turn DNA into a face. Parabon NanoLabs ran the suspect's DNA through its proprietary machine learning model. Soon, it provided the police department with something the detectives had never seen before: the face of a potential suspect, generated using only crime scene evidence. The image Parabon NanoLabs produced, called a Snapshot Phenotype Report, wasn't a photograph.


OpenAI bans developer of bot for presidential hopeful Dean Phillips

Washington Post - Technology News

Dean.Bot was the brainchild of Silicon Valley entrepreneurs Matt Krisiloff and Jed Somers, who had started a super PAC supporting Phillips (Minn.). The PAC had received $1 million from hedge fund manager Bill Ackman, the billionaire activist who led the charge to oust Harvard University president Claudine Gay.


Watch a plant-inspired robot grow towards light like a vine

New Scientist

A robot that can grow around trees or rocks like a vine could be used to make buildings or measure pollution in hard-to-reach natural environments. Vine-like robots aren't new, but they are often designed to rely on just a single sense to grow upwards, such as heat or light, which means they don't work as well in some settings as others. Emanuela Del Dottore at the Italian Institute of Technology and her colleagues have developed a new version, called FiloBot, that can use light, shade or gravity as a guide. It grows by coiling a plastic filament into a cylindrical shape, adding new layers to its body just behind the head that contains the sensors. "Our robot has an embedded microcontroller that can process multiple stimuli and direct the growth at a precise location, the tip, ensuring the body structure is preserved," she says.
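The control idea described above -- a tip-mounted microcontroller weighing several stimuli to pick a growth direction -- can be illustrated with a short sketch. This is purely a hypothetical toy model of multi-stimulus steering, loosely inspired by the description of FiloBot; the function names, weights, and vector encoding are assumptions, not the robot's actual firmware.

```python
# Toy model: combine several 2-D stimulus direction vectors
# (e.g. toward light, away from shade, along gravity) into one
# unit-length steering vector for the growing tip.

def steer(stimuli, weights):
    """Weighted sum of stimulus vectors, normalized to unit length."""
    x = sum(weights[name] * vec[0] for name, vec in stimuli.items())
    y = sum(weights[name] * vec[1] for name, vec in stimuli.items())
    norm = (x * x + y * y) ** 0.5
    if norm == 0:
        return (0.0, 0.0)  # no net stimulus: keep growing straight
    return (x / norm, y / norm)

# Example: light pulls the tip to the right, gravity pulls it down,
# and light is weighted more heavily than gravity.
stimuli = {"light": (1.0, 0.0), "gravity": (0.0, -1.0)}
weights = {"light": 0.8, "gravity": 0.2}
direction = steer(stimuli, weights)
```

Because each stimulus contributes a weighted vector rather than acting as a single trigger, the same controller can trade off cues -- which is what lets a robot like this keep working when one sense, such as light, is unreliable.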


A New Nonprofit Is Seeking to Solve the AI Copyright Problem

TIME - Tech

Stability AI, the makers of the popular AI image generation model Stable Diffusion, had trained the model by feeding it with millions of images that had been "scraped" from the internet, without the consent of their creators. Ed Newton-Rex, the head of Stability's audio team, disagreed. "Companies worth billions of dollars are, without permission, training generative AI models on creators' works, which are then being used to create new content that in many cases can compete with the original works." In December, the New York Times sued OpenAI in a Manhattan court, alleging that the creator of ChatGPT had illegally used millions of the newspaper's articles to train AI systems that are intended to compete with the Times as a reliable source of information. Meanwhile, in July 2023, comedian Sarah Silverman and other writers sued OpenAI and Meta, accusing the companies of using their writing to train AI models without their permission.


This robot grows like a vine -- and could help navigate disaster zones

Nature

The vine-like FiloBot was inspired by plants. Researchers have demonstrated a robot that grows like a vine in response to stimuli such as light and pressure. The machine -- named FiloBot -- has a head that prints its body by melting and extruding plastic, which then solidifies as it cools. The robot's head is connected to a base by a thin hose, through which it receives a fresh supply of plastic from a spool. FiloBot's growth rate is slow -- its body elongates by just a few millimeters each minute.


Don't Talk to People Like They're Chatbots

The Atlantic - Technology

For most of history, communicating with a computer has not been like communicating with a person. In their earliest years, computers required carefully constructed instructions, delivered through punch cards; then came a command-line interface, followed by menus and options and text boxes. If you wanted results, you needed to learn the computer's language. This is beginning to change. Large language models--the technology undergirding modern chatbots--allow users to interact with computers through natural conversation, an innovation that introduces some baggage from human-to-human exchanges.


Google DeepMind's new AI system can solve complex geometry problems

MIT Technology Review

Solving mathematics problems requires logical reasoning, something that most current AI models aren't great at. This demand for reasoning is why mathematics serves as an important benchmark to gauge progress in AI intelligence, says Wang. DeepMind's program, named AlphaGeometry, combines a language model with a type of AI called a symbolic engine, which uses symbols and logical rules to make deductions. Language models excel at recognizing patterns and predicting subsequent steps in a process. However, their reasoning lacks the rigor required for mathematical problem-solving. The symbolic engine, on the other hand, is based purely on formal logic and strict rules, which allows it to guide the language model toward rational decisions.
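The division of labor described above -- a symbolic engine exhausting its logical deductions, then a language model suggesting an auxiliary construction to unblock it -- can be sketched in a few lines. This is an illustrative toy, not AlphaGeometry's actual engine or model: the rules, the goal, and the stubbed `propose_construction` function are invented for the example.

```python
# Toy neuro-symbolic loop: forward-chain with strict rules until
# stuck, then ask a (stubbed) "language model" for an auxiliary
# construction and try again.

def symbolic_closure(facts, rules):
    """Apply forward-chaining rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def propose_construction(facts):
    # Stand-in for the language model: in the real system this is a
    # learned proposal, e.g. "add point D on line AB".
    return "aux_point_D"

def solve(facts, rules, goal, max_constructions=3):
    facts = set(facts)
    for _ in range(max_constructions + 1):
        facts = symbolic_closure(facts, rules)
        if goal in facts:
            return True
        facts.add(propose_construction(facts))  # engine is stuck
    return False

# The goal is only reachable once the auxiliary point is introduced.
rules = [
    (frozenset({"aux_point_D", "premise_A"}), "lemma_B"),
    (frozenset({"lemma_B"}), "goal"),
]
solved = solve({"premise_A"}, rules, "goal")
```

The key property this illustrates is the one the article describes: every step that reaches the goal is a strict rule application, so the final proof is rigorous even though a pattern-matching model chose where to look.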