The Atlantic - Technology
Why Does AI Art Look Like That?
This week, X launched an AI-image generator, allowing paying subscribers of Elon Musk's social platform to make their own art. So--naturally--some users appear to have immediately made images of Donald Trump flying a plane toward the World Trade Center; Mickey Mouse wielding an assault rifle, and another of him enjoying a cigarette and some beer on the beach; and so on. Some of the images that people have created using the tool are deeply unsettling; others are just strange, or even kind of funny. They depict wildly different scenarios and characters. But somehow they all kind of look alike, bearing unmistakable hallmarks of AI art that have cropped up in recent years thanks to products such as Midjourney and DALL-E.
AI Can't Make Music
The first concert I bought tickets to after the pandemic subsided was a performance of the British singer-songwriter Birdy, held last April in Belgium. I've listened to Birdy more than to any other artist; her voice has pulled me through the hardest and happiest stretches of my life. I know every lyric to nearly every song in her discography, but that night Birdy's voice had the same effect as the first time I'd listened to her, through beat-up headphones connected to an iPod over a decade ago--a physical shudder, as if a hand had reached across time and grazed me, somehow, just beneath the skin. Countless people around the world have their own version of this ineffable connection, with Taylor Swift, perhaps, or the Beatles, Bob Marley, or Metallica. My feelings about Birdy's music were powerful enough to propel me across the Atlantic, just as tens of thousands of people flocked to the Sphere to see Phish earlier this year, or some 400,000 went to Woodstock in 1969.
AI Has Become a Technology of Faith
An important thing to realize about the grandest conversations surrounding AI is that, most of the time, everyone is making things up. This isn't to say that people have no idea what they're talking about or that leaders are lying. But the bulk of the conversation about AI's greatest capabilities is premised on a vision of a theoretical future. It is a sales pitch, one in which the problems of today are brushed aside or softened as issues of now, which surely, leaders in the field insist, will be solved as the technology gets better. What we see today is merely a shadow of what is coming.
We Need to Control AI Agents Now
In 2010--well before the rise of ChatGPT and Claude and all the other sprightly, conversational AI models--an army of bots briefly wiped out $1 trillion of value across the NASDAQ and other stock exchanges. Lengthy investigations were undertaken to figure out what had happened and why--and how to prevent it from happening again. The Securities and Exchange Commission's report on the matter blamed high-frequency-trading algorithms unexpectedly engaging in a mindless "hot potato" buying and selling of contracts back and forth to one another. A "flash crash," as the incident was called, may seem quaint relative to what lies ahead. That's because, even amid all the AI hype, a looming part of the AI revolution is under-examined: "agents." Agents are AIs that act independently on behalf of humans.
Hot AI Jesus Is Huge on Facebook
Jesus is punching the devil on Facebook. The two are in a boxing ring. Jesus is wearing a pair of white boxing shorts with his name embroidered on the waistband. He is ripped beyond belief; not only does he have six-pack abs, but every muscle on his body is bulging. Jesus is hitting the devil directly on the chin, a knockout blow.
Generative AI Can't Cite Its Sources
Silicon Valley appears, once again, to be getting the better of America's newspapers and magazines. Tech companies are injecting every corner of the web with AI language models, which may pose an existential threat to journalism as we currently know it. After all, why go to a media outlet if ChatGPT can deliver the information you think you need? A growing number of media companies--the publishers of The Wall Street Journal, Business Insider, New York, Politico, The Atlantic, and many others--have signed licensing deals with OpenAI that will formally allow the start-up's AI models to incorporate recent partner articles into their responses. OpenAI is just the beginning, and such deals may soon be standard for major media companies: Perplexity, which runs a popular AI-powered search engine, has had conversations with various publishers (including The Atlantic's business division) about a potential ad-revenue-sharing arrangement, the start-up's chief business officer, Dmitry Shevelenko, told me yesterday.
Google Is Turning Into a Libel Machine
A few weeks ago, I witnessed Google Search make what could have been the most expensive error in its history. In response to a query about cheating in chess, Google's new AI Overview told me that the young American player Hans Niemann had "admitted to using an engine," or a chess-playing AI, after defeating Magnus Carlsen in 2022--implying that Niemann had confessed to cheating against the world's top-ranked player. Suspicion about the American's play against Carlsen that September indeed sparked controversy, one that reverberated even beyond the world of professional chess, garnering mainstream news coverage and the attention of Elon Musk. Except, Niemann admitted no such thing. Quite the opposite: He has vigorously defended himself against the allegations, going so far as to file a $100 million defamation lawsuit against Carlsen and several others who had accused him of cheating or punished him for the unproven allegation--Chess.com, for example, had banned Niemann from its website and tournaments.
Where Does Photoshop Go From Here?
In 2017, Rihanna posted a photo of herself on Instagram in which she appeared to have an extra thumb. It was, in retrospect, the thumb-shaped canary in the coal mine. Although far from the first celebrity "Photoshop fail," it just so happened to predict the era of faux-finger drama we now live in: AI image generators are universally, horrifically bad at rendering human hands. Today, an extra finger is a telltale sign of digital manipulation. Flaws aside, faking it has never been easier.
Watch Apple Trash-Compact Human Culture
Here is a nonexhaustive list of objects Apple recently pulverized with a menacing hydraulic crusher: a trumpet, a piano, a turntable, a sculpted bust, lots and lots of paint, video-game controllers. These are all shown being demolished in the company's new iPad commercial, a minute-long spot titled "Crush!" The items are arranged on a platform beneath a slowly descending enormous metal block, then trash-compacted out of existence in a violent symphony of crunching. Once the destruction is complete, the press lifts back up to reveal that the items have been replaced by a slender, shimmering iPad. The notion behind the commercial is fairly obvious. Apple wants to show you that the bulk of human ingenuity and history can be compressed into an iPad, and thereby wants you to believe that the device is a desirable entry point to both the consumption of culture and the creation of it.
ElevenLabs Is Building an Army of Voice Clones
I'd been waiting, compulsively checking my inbox. I opened the email and scrolled until I saw a button that said, plainly, "Use voice." I considered saying something aloud to mark the occasion, but that felt wrong. The computer would now speak for me. I had thought it'd be fun, and uncanny, to clone my voice. I'd sought out the AI start-up ElevenLabs, paid $22 for a "creator" account, and uploaded some recordings of myself. A few hours later, I typed some words into a text box, hit "Enter," and there I was: all the nasal lilts, hesitations, pauses, and mid-Atlantic-by-way-of-Ohio vowels that make my voice mine.