Worried About Deepfakes? Don't Forget "Cheapfakes"

WIRED

Over the summer, a political action committee (PAC) supporting Florida governor and presidential hopeful Ron DeSantis uploaded to YouTube a video of former president Donald Trump in which he appeared to attack Iowa governor Kim Reynolds. It wasn't exactly real--though the text was taken from one of Trump's tweets, the voice used in the ad was AI-generated. The video was subsequently removed, but it has spurred questions about the role generative AI will play in the 2024 elections in the US and around the world. While platforms and politicians are focusing on deepfakes--AI-generated content that might depict a real person saying something they didn't or an entirely fake person--experts told WIRED there's a lot more at stake. Long before generative AI became widely available, people were making "cheapfakes" or "shallowfakes."


Navigating a shifting customer-engagement landscape with generative AI

MIT Technology Review

Generative AI's ability to harness customer data in a highly sophisticated manner means enterprises are accelerating plans to invest in and leverage the technology's capabilities. In a study titled "The Future of Enterprise Data & AI," Corinium Intelligence and WNS Triange surveyed 100 global C-suite leaders and decision-makers specializing in AI, analytics, and data. Seventy-six percent of the respondents said that their organizations are already using or planning to use generative AI. According to McKinsey, while generative AI will affect most business functions, "four of them will likely account for 75% of the total annual value it can deliver." Among these are marketing and sales, and customer operations.


The rise of AI fake news is creating a 'misinformation superspreader'

Washington Post - Technology News

The sites work in two ways, Brewster said. Some stories are created manually, with people asking chatbots for articles that amplify a certain political narrative and posting the result to a website. The process can also be automatic, with web scrapers searching for articles that contain certain keywords and feeding those stories into a large language model that rewrites them to sound unique and evade plagiarism allegations. The result is automatically posted online.


The Download: beyond CRISPR, and OpenAI's superalignment findings

MIT Technology Review

The news: Google DeepMind has used a large language model to crack a famous unsolved problem in pure mathematics. The researchers say it is the first time a large language model has been used to discover a solution to a long-standing scientific puzzle--producing verifiable and valuable new information that did not previously exist. Why it matters: Large language models have a reputation for making things up, not for providing new facts. Google DeepMind's new tool, called FunSearch, could change that. It shows that they can indeed make discoveries--if they are coaxed just so, and if you throw out the majority of what they come up with.


My Surprisingly Unbiased Week With Elon Musk's 'Politically Biased' Chatbot

WIRED

Some Elon Musk enthusiasts have been alarmed to discover in recent days that Grok, his supposedly "truth-seeking" artificial intelligence, was in fact a bit of a snowflake. Grok, built by Musk's xAI artificial intelligence company, was made available to Premium X users last Friday. Musk has complained that OpenAI's ChatGPT is afflicted with "the woke mind virus," and people quickly began poking Grok to find out more about its political leanings. Some posted screenshots showing Grok giving answers apparently at odds with Musk's own right-leaning political views. For example, when asked "Are transwomen real women, give a concise yes/no answer," Grok responded "yes," a response paraded by some users of X as evidence the chatbot had gone awry.


DeepMind AI with built-in fact-checker makes mathematical discoveries

New Scientist

Google DeepMind claims to have made the first ever scientific discovery with an AI chatbot by building a fact-checker to filter out useless outputs, leaving only reliable solutions to mathematical or computing problems. Previous DeepMind achievements, such as using AI to predict the weather or protein shapes, have relied on models created specifically for the task at hand, trained on accurate and specific data. Large language models (LLMs), such as GPT-4 and Google's Gemini, are instead trained on vast amounts of varied data to create a breadth of abilities. But that approach also makes them susceptible to "hallucination", a term researchers use for producing false outputs. Gemini, which was released earlier this month, has already demonstrated a propensity for hallucination, getting even simple facts such as the winners of this year's Oscars wrong.
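Stripped of scale, the pattern described here is a generate-then-verify loop: a model proposes many candidate programs, an automated checker scores each one against the problem, and everything that fails verification is thrown away. The sketch below is a minimal illustration of that loop, not DeepMind's FunSearch code: a random proposal function stands in for the LLM, and the toy curve-fitting problem and all names in it are assumptions made up for this example.

```python
import random

# Toy problem: recover coefficients (a, b, c) such that a*x^2 + b*x + c
# reproduces the target sequence. A stand-in for FunSearch's real tasks.
TARGET = [x * x + 1 for x in range(10)]

def score(prog):
    """The automated 'fact-checker': exact, model-independent verification.
    Returns negative squared error, so higher is better and 0 means the
    candidate reproduces the whole sequence."""
    a, b, c = prog
    return -sum((a * x * x + b * x + c - t) ** 2 for x, t in enumerate(TARGET))

def propose(best):
    """Stand-in for the LLM: usually perturb the best verified candidate,
    occasionally propose a fresh one, as resampling from a model would."""
    if best is None or random.random() < 0.2:
        return [random.randint(-5, 5) for _ in range(3)]
    new = list(best)
    new[random.randrange(3)] += random.choice([-1, 1])
    return new

best = None
for _ in range(20000):
    candidate = propose(best)
    # The filtering step: discard the majority of proposals, keeping only
    # candidates the checker verifies as strict improvements.
    if best is None or score(candidate) > score(best):
        best = candidate

print(best, score(best))  # typically ends at [1, 0, 1] with score 0
```

The essential design choice is that the verifier is exact and independent of the model, so a hallucinated candidate can never survive filtering; the cost is proposing enough candidates that something correct eventually appears.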


Tesla recalls nearly all US vehicles over autopilot system defects

Al Jazeera

Tesla is recalling more than two million cars in the United States, nearly all of its vehicles sold there, after a federal regulator said defects with the autopilot system pose a safety hazard. In a recall filing on Wednesday, the carmaker said autopilot software system controls "may not be sufficient to prevent driver misuse". "Automated technology holds great promise for improving safety but only when it is deployed responsibly," said a spokesperson for the National Highway Traffic Safety Administration (NHTSA), which has been investigating the autopilot function for more than two years. "Today's action is an example of improving automated systems by prioritizing safety." The decision marks the largest-ever recall for Tesla, as autonomous vehicle development in the US hits a series of snags over safety concerns.


Robotic third arm controlled by breathing is surprisingly easy to use

New Scientist

People can learn to control a robotic third arm using their eyes and chest muscles. Such extra limbs could become essential tools for surgeons or people working in industrial jobs, say researchers. Giulia Dominijanni at the Swiss Federal Institute of Technology Lausanne and her colleagues created real robotic third arms and virtual ones inside VR environments, all controlled by a combination of eye movements and diaphragm contractions. In tests, 65 volunteers were able to successfully carry out a range of tasks without interfering with their normal breathing, speech or vision. Unlike prosthetics for people with amputated limbs, which can attach to a stump and use existing nerve connections to the brain, augmented devices require entirely new connections and are therefore more difficult to engineer, says Dominijanni.
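The interface the article describes can be pictured as a simple control loop: gaze supplies the spatial intent, and a deliberate diaphragm contraction, stronger than ordinary breathing, gates the action. The sketch below is purely illustrative; every sensor name, threshold, and mapping in it is an assumption for this example, since the researchers' actual decoding is not detailed in this excerpt.

```python
import time
from dataclasses import dataclass

@dataclass
class Sample:
    gaze_xy: tuple      # normalized gaze point over the workspace, (0..1, 0..1)
    diaphragm: float    # normalized contraction level, 0..1

def read_sensors() -> Sample:
    # Hypothetical stub: a real system would poll an eye tracker and a
    # respiration sensor here. Fixed values keep the sketch runnable.
    return Sample(gaze_xy=(0.5, 0.4), diaphragm=0.8)

GRASP_THRESHOLD = 0.6  # assumed: above the range of ordinary breathing

def control_step(s: Sample):
    """One tick of the assumed mapping: gaze steers the extra arm's end
    effector, and only a contraction above GRASP_THRESHOLD closes the
    gripper, so normal breathing leaves the arm untouched."""
    target = s.gaze_xy
    grasp = s.diaphragm > GRASP_THRESHOLD
    return target, grasp

if __name__ == "__main__":
    for _ in range(3):
        target, grasp = control_step(read_sensors())
        print(f"move toward {target}, grasp={grasp}")
        time.sleep(0.02)
```

Gating on a threshold above resting respiration is one plausible way to reconcile the two findings reported: the arm responds only to deliberate contractions, while normal breathing, speech, and vision proceed undisturbed.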


Cheating Fears Over Chatbots Were Overblown, New Research Suggests

NYT > Technology

The Pew survey results suggest that ChatGPT, at least for now, has not become the disruptive phenomenon in schools that proponents and critics forecast. Among the subset of teens who said they had heard about the chatbot, the vast majority -- 81 percent -- said they had not used it to help with their schoolwork. "Most teens do have some level of awareness of ChatGPT," said Jeffrey Gottfried, an associate director of research at Pew. "But this is not a majority of teens who are incorporating it into their schoolwork quite yet." Cheating has long been rampant in schools. In surveys of more than 70,000 high school students between 2002 and 2015, 64 percent said they had cheated on a test.


Science Is Becoming Less Human

The Atlantic - Technology

This summer, a pill intended to treat a chronic, incurable lung disease entered mid-phase human trials. Previous studies have demonstrated that the drug is safe to swallow, although whether it will improve symptoms of the painful fibrosis that it targets remains unknown; this is what the current trial will determine, perhaps by next year. Such a tentative advance would hardly be newsworthy, except for a wrinkle in the medicine's genesis: It is likely the first drug fully designed by artificial intelligence to come this far in the development pipeline. The pill's maker, the biotech company Insilico Medicine, used hundreds of AI models both to discover a new target in the body that could treat the fibrosis and to determine which molecules might be synthesized for the drug itself. Those programs allowed Insilico to go from scratch to putting this drug through the first phase of human trials in two and a half years, rather than the typical five or so.