
Inside the First Major U.S. Bill Tackling AI Harms--and Deepfake Abuse

TIME - Tech

Here's what the bill aims to achieve, and how it crossed many hurdles en route to becoming law. The Take It Down Act was borne out of the suffering--and then activism--of a handful of teenagers. In October 2023, 14-year-old Elliston Berry of Texas and 15-year-old Francesca Mani of New Jersey each learned that classmates had used AI software to fabricate nude images of them and female classmates. The tools that had been used to humiliate them were relatively new: products of the generative AI boom in which virtually any image could be created with the click of a button. Pornographic and sometimes violent deepfake images of Taylor Swift and others soon spread across the internet.


AI Is Using Your Likes to Get Inside Your Head

WIRED

What is the future of the like button in the age of artificial intelligence? Max Levchin--the PayPal cofounder and Affirm CEO--sees a new and hugely valuable role for liking data: training AI to arrive at conclusions more in line with those a human decision-maker would make. It's a well-known quandary in machine learning that a computer presented with a clear reward function will engage in relentless reinforcement learning to improve its performance and maximize that reward--but that this optimization path often leads AI systems to very different outcomes than would result from humans exercising human judgment. To introduce a corrective force, AI developers frequently use what is called reinforcement learning from human feedback (RLHF). Essentially, they are putting a human thumb on the scale as the computer arrives at its model, training it on data reflecting real people's actual preferences.
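Levchin's point maps onto a standard piece of machinery: a reward model trained on pairwise human preferences, the same kind of signal a like button produces. The sketch below is a minimal, hypothetical illustration of that idea (toy data, invented names, not any company's actual pipeline): a small scorer learns to rank the response a person preferred above the one they passed over, and that learned score can then stand in for a crude proxy reward during reinforcement learning.

```python
# Minimal sketch of the "human thumb on the scale" behind RLHF:
# a reward model trained on pairwise human preferences (which of two
# outputs a person liked more). Purely illustrative, not a real pipeline.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Tiny scorer mapping a response's feature vector to a scalar reward."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style objective: push the "liked" response to score
    # higher than the one the human passed over.
    return -torch.log(torch.sigmoid(r_chosen - r_rejected)).mean()

# Toy training loop on random stand-in features for (chosen, rejected) pairs.
model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
chosen, rejected = torch.randn(64, 16), torch.randn(64, 16)
for _ in range(100):
    loss = preference_loss(model(chosen), model(rejected))
    opt.zero_grad()
    loss.backward()
    opt.step()
# The learned reward model can then steer further training toward outputs
# that better match human judgment, rather than a raw optimization target.
```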


The AI Hype Index: AI agent cyberattacks, racing robots, and musical models

MIT Technology Review

That's why we've created the AI Hype Index--a simple, at-a-glance summary of everything you need to know about the state of the industry. AI agents are the AI industry's hypiest new product--intelligent assistants capable of completing tasks without human supervision. But while they can be theoretically useful--Simular AI's S2 agent, for example, intelligently switches between models depending on what it's been told to do--they could also be weaponized to execute cyberattacks. Elsewhere, OpenAI is reported to be throwing its hat into the social media arena, and AI models are getting more adept at making music. Oh, and if the results of the first half-marathon pitting humans against humanoid robots are anything to go by, we won't have to worry about the robot uprising any time soon.
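Simular has not published how S2 decides which model to use, so the sketch below is only a generic illustration of the routing idea mentioned above: an agent inspects the task it has been given and dispatches it to a different model accordingly. All model names and routing rules here are invented for the example.

```python
# Illustrative sketch of task-based model routing, loosely in the spirit of
# agents that pick a model per task. Names and rules are hypothetical;
# this is not Simular's actual S2 logic.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Route:
    name: str                      # hypothetical model identifier
    matches: Callable[[str], bool]  # rule deciding whether this model handles the task

ROUTES = [
    Route("small-fast-model", lambda t: any(k in t.lower() for k in ("click", "scroll", "type"))),
    Route("vision-model", lambda t: any(k in t.lower() for k in ("screenshot", "image", "chart"))),
    Route("large-reasoning-model", lambda t: True),  # fallback for open-ended planning
]

def pick_model(task: str) -> str:
    """Return the first model whose rule matches the task description."""
    for route in ROUTES:
        if route.matches(task):
            return route.name
    return ROUTES[-1].name

if __name__ == "__main__":
    print(pick_model("Click the submit button on the page"))  # small-fast-model
    print(pick_model("Summarize this chart screenshot"))      # vision-model
    print(pick_model("Plan a multi-step research workflow"))  # large-reasoning-model
```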


Why Even Try if You Have A.I.?

The New Yorker

A couple of years ago, my wife bought my then four-year-old son a supercool set of wooden ramps, which could be combined with our furniture to create courses through which little balls could run. Building the first course was easy, but, as our ambitions grew, the difficulty level rose. Could we make the balls turn corners? What about generating enough momentum for them to go briefly uphill? Two or three times a month for the next two years, we tweaked our techniques or incorporated new elements into complicated routes.


#ICLR2025 social media round-up

AIHub

This episode, Ben chats to Pinar Guvenc about co-design, whether AI is ready for society and society is ready for AI, what design is, co-creation with AI as a stakeholder, bias in design, small language models, and more.


Here's why we need to start thinking of AI as "normal"

MIT Technology Review

Instead, according to the researchers, AI is a general-purpose technology whose application might be better compared to the drawn-out adoption of electricity or the internet than to nuclear weapons--though they concede this is in some ways a flawed analogy. The core point, Kapoor says, is that we need to start differentiating between the rapid development of AI methods--the flashy and impressive displays of what AI can do in the lab--and what comes from the actual applications of AI, which, in historical examples of other technologies, have lagged behind development by decades. "Much of the discussion of AI's societal impacts ignores this process of adoption," Kapoor told me, "and expects societal impacts to occur at the speed of technological development." In other words, the adoption of useful artificial intelligence, in his view, will be less of a tsunami and more of a trickle. In the essay, the pair make some other bracing arguments: terms like "superintelligence" are so incoherent and speculative that we shouldn't use them; AI won't automate everything but will birth a category of human labor that monitors, verifies, and supervises AI; and we should focus more on the likelihood that AI will worsen current problems in society than on the possibility of it creating new ones.


Elon Musk's Doge conflicts of interest worth $2.37bn, Senate report says

The Guardian

Elon Musk and his companies face at least $2.37bn in legal exposure from federal investigations, litigation and regulatory oversight, according to a new report from Senate Democrats. The report attempts to put a number to Musk's many conflicts of interest through his work with his so-called "department of government efficiency" (Doge), warning that he may seek to use his influence to avoid legal liability. The report, which was published on Monday by Democratic members of the Senate homeland security committee's permanent subcommittee on investigations, looked at 65 actual or potential actions against Musk across 11 separate agencies. Investigators calculated the financial liabilities Musk and his companies, such as Tesla, SpaceX and Neuralink, may face in 45 of those actions. Since Donald Trump won re-election last year and Musk took on the role of de facto head of Doge in January, ethics watchdogs and Democratic officials have warned that the Tesla CEO could use his power to oust regulators and quash investigations into his companies.


Mycopunk is an upbeat love letter to extraction shooters

Engadget

The extraction-shooter genre is getting a little more crowded and a lot more stylish with the announcement of Mycopunk, a four-player, first-person romp from indie studio Pigeons at Play and publisher Devolver Digital. Mycopunk is coming to Steam in early access this year. Mycopunk stars four eccentric robots who've been hired by an intergalactic megacorporation to exterminate an invasive, violent fungus that's taken root on a valuable planet. Each robot has a specific class and moveset, but players can use any weapon or loadout with any character -- and that's a huge benefit, because there are a ton of wacky guns, upgrades and ammo options in this game. For example, there are bouncing shotgun pellets, bullets that hover in place and then dive down when you press the trigger again, and a rocket launcher move that also makes you fly.


OpenAI Adds Shopping to ChatGPT

WIRED

OpenAI announced today that users will soon be able to buy products through ChatGPT. The rollout of shopping buttons for AI-powered search queries will come to everyone, whether they are a signed-in user or not. Shoppers will not be able to check out inside of ChatGPT; instead they will be redirected to the merchant's website to finish the transaction. In a prelaunch demo for WIRED, Adam Fry, the ChatGPT search product lead at OpenAI, demonstrated how the updated user experience could be used to help people using the tool for product research decide which espresso machine or office chair to buy. The product recommendations shown to prospective shoppers are based on what ChatGPT remembers about a user's preferences as well as product reviews pulled from across the web.


A Philosopher Released an Acclaimed Book About Digital Manipulation. The Author Ended Up Being AI

WIRED

When Italian philosopher and essayist Andrea Colamedici released Ipnocrazia: Trump, Musk e La Nuova Architettura Della Realtà (Hypnocracy: Trump, Musk, and the New Architecture of Reality), he wanted to make a statement about the existence of truth in the digital age. The book, published in December, was described as "a crucial book for understanding how control is currently exercised not by repressing truth but by multiplying narratives, making it impossible to locate any fixed point," according to a description by Tlon, a publishing house Colamedici cofounded. While the book attracted buzz in philosophy circles, Italian magazine L'Espresso revealed in April that the book's purported author, Jianwei Xun, did not exist, after one of its editors tried and failed to interview him. Xun, initially described as a Hong Kong–born philosopher based in Berlin, turned out to be a hybrid human-algorithmic creation. Colamedici, listed on the book as translator, used AI to generate concepts and then critique them.