
What Happened When Computers Learned How to Read

TIME - Tech

They flag offensive content on social networks and delete spam from our inboxes. At the hospital, they help convert patient-doctor conversations into insurance billing codes. Sometimes, they alert law enforcement to potential terrorist plots and predict (poorly) the threat of violence on social media. Legal professionals use them to hide or discover evidence of corporate fraud. Students are writing their next school paper with the aid of a smart word processor, capable not just of completing sentences but of generating entire essays on any topic.


Biden Economic Adviser Elizabeth Kelly Picked to Lead AI Safety Testing Body

TIME - Tech

Elizabeth Kelly, formerly an economic policy adviser to President Joe Biden, has been named director of the newly formed U.S. Artificial Intelligence Safety Institute (USAISI), U.S. Commerce Secretary Gina Raimondo announced Wednesday. "For the United States to lead the world in the development of safe, responsible AI, we need the brightest minds at the helm," said Raimondo. "Thanks to President Biden's leadership, we're in a position of power to meet the challenges posed by AI, while fostering America's greatest strength: innovation." Kelly previously contributed to the Biden Administration's efforts to regulate AI through the AI Executive Order, which an Administration official tells TIME she was involved in developing from the beginning. Kelly was "a driving force behind the domestic components of the AI executive order, spearheading efforts to promote competition, protect privacy, and support workers and consumers, and helped lead Administration engagement with allies and partners on AI governance," according to a press release announcing her appointment. Previously, Kelly was special assistant to the President for economic policy at the White House National Economic Council.


An Ancient Roman Scroll on Pleasure Was Just Decoded Using AI

TIME - Tech

A Roman scroll, partially preserved when it was buried in the eruption of Mount Vesuvius in A.D. 79, has been virtually unwrapped and decoded using artificial intelligence. The feat was achieved by three contestants in the Vesuvius Challenge, a competition launched in March 2023 in which people around the world raced to read the ancient Herculaneum papyri. Papyrologists working with the Vesuvius Challenge believe the scroll contains "never-before-seen text from antiquity," and the text in question is a piece of Epicurean philosophy on the subject of pleasure. The winning submission shows ancient Greek letters on a large patch of scroll, and the author seems to be discussing the question: are things that are scarce more pleasurable as a result? The author, whose identity is unconfirmed, doesn't think so: "As too in the case of food, we do not right away believe things that are scarce to be absolutely more pleasant than those which are abundant," one passage from the scroll reads.


Inside OpenAI's Plan to Make AI More 'Democratic'

TIME - Tech

He was surrounded by seven staff from the world's leading artificial intelligence lab, which had launched ChatGPT a few months earlier. One of them was Wojciech Zaremba, an OpenAI co-founder. For over a decade, Megill had been toiling in relative obscurity as the co-founder of Polis, a nonprofit open-source tech platform for carrying out public deliberations. Democracy, in Megill's view, had barely evolved in hundreds of years even as the world around it had transformed unrecognizably. Each voter has a multitude of beliefs they must distill down into a single signal: one vote, every few years. The heterogeneity of every individual gets lost and distorted, with the result that democratic systems often barely reflect the will of the people and tend toward polarization.


Meta Oversight Board Warns of 'Incoherent' Rules After Fake Biden Video

TIME - Tech

Meta Platforms Inc.'s independent Oversight Board agreed with the company's recent decision to leave up a misleading video of U.S. President Joe Biden, but criticized its policies on content generated by artificial intelligence as "incoherent" and too narrow. The board, which was set up in 2020 by management to independently review some of the company's most significant content moderation decisions, on Monday urged Meta to update its policies quickly ahead of the 2024 U.S. general election. "The Board is concerned about the manipulated media policy in its current form, finding it to be incoherent, lacking in persuasive justification and inappropriately focused on how content has been created, rather than on which specific harms it aims to prevent, such as disrupting electoral processes," the organization said in a statement. The criticism from the board came after reviewing Meta's decision to leave up a manipulated video of Biden, which was edited to make it look like he was inappropriately touching his adult granddaughter's chest. The video included a caption that referred to Biden as a "pedophile."


AI Learns to Speak Like a Baby

TIME - Tech

Imagine seeing the world through the eyes of a six-month-old child. You don't have the words to describe anything. How could you possibly begin to understand language, when each sound that comes out of the mouths of those around you has an almost infinite number of potential meanings? This question has led many scientists to hypothesize that humans must have some intrinsic language facility to help us get started in acquiring language. But a paper published in Science this week found that a relatively simple AI system fed with data filmed from a baby's-eye view began to learn words.


How a New Bill Could Protect Against Deepfakes

TIME - Tech

A day before the Senate Judiciary Committee grilled CEOs from tech companies about internet child safety, bipartisan lawmakers introduced a bill that would allow victims to sue people who create and distribute sexually explicit deepfakes under certain circumstances. The Disrupt Explicit Forged Images and Non-Consensual Edits Act, or DEFIANCE Act, allows victims to sue if those who created the deepfakes knew, or "recklessly disregarded," that the victim did not consent to their making. The federal bill, introduced on Tuesday, came nearly a week after deepfake pornographic images of Taylor Swift flooded X. The social media platform temporarily removed the ability to search for Swift's name on X after the explicit content was viewed tens of millions of times. Only ten states currently have criminal laws against this form of manipulated media.


As Tech CEOs Are Grilled Over Child Safety Online, AI Is Complicating the Issue

TIME - Tech

The CEOs of five social media companies including Meta, TikTok and X (formerly Twitter) were grilled by Senators on Wednesday about how they are preventing online child sexual exploitation. The Senate Judiciary Committee called the meeting to hold the CEOs to account for what members said was a failure to prevent the abuse of minors, and to ask whether they would support the laws that members of the Committee had proposed to address the problem. It is an issue that is getting worse, according to the National Center for Missing and Exploited Children, which says reports of child sexual abuse material (CSAM) reached a record high last year of more than 36 million, as reported by the Washington Post. The National Center for Missing and Exploited Children's CyberTipline, a centralized system in the U.S. for reporting online CSAM, was alerted to more than 88 million files in 2022, with almost 90% of reports coming from outside the country. Mark Zuckerberg of Meta, Shou Chew of TikTok, and Linda Yaccarino of X appeared alongside Evan Spiegel of Snap and Jason Citron of Discord to answer questions from the Senate Judiciary Committee.


X Reactivates Search Function for Taylor Swift After Surge of Deepfakes Spurred Crackdown

TIME - Tech

Elon Musk's X has reactivated the ability to search its social network for musician Taylor Swift, after disabling queries for her name in response to a flood of explicit deepfake images. "Search has been re-enabled and we will continue to be vigilant for attempts to spread this content and will remove it wherever we find it," said Joe Benarroch, head of business operations at X. Last week, explicit artificial intelligence-generated images of Swift amassed tens of millions of views on X, the website formerly known as Twitter. X's efforts to curb their spread included disabling the search. She wasn't alone in being a recent high-profile target of the technology: U.S. President Joe Biden was also the victim of a fake audio clip spreading online, created with the help of widely available AI tools.


AI Companies Will Be Required to Report Safety Tests to U.S. Government

TIME - Tech

The Biden Administration will start implementing a new requirement for the developers of major artificial intelligence systems to disclose their safety test results to the government. The White House AI Council is scheduled to meet Monday to review progress made on the executive order that President Joe Biden signed three months ago to manage the fast-evolving technology. Chief among the 90-day goals from the order was a mandate under the Defense Production Act that AI companies share vital information with the Commerce Department, including safety tests. Ben Buchanan, the White House special adviser on AI, said in an interview that the government wants "to know AI systems are safe before they're released to the public -- the president has been very clear that companies need to meet that bar." The software companies have committed to a set of categories for the safety tests, but they do not yet have to comply with a common standard on those tests.