
Shh, ChatGPT. That's a Secret.

The Atlantic - Technology

This past spring, a man in Washington State worried that his marriage was on the verge of collapse. "I am depressed and going a little crazy, still love her and want to win her back," he typed into ChatGPT. With the chatbot's help, he wanted to write a letter protesting her decision to file for divorce and post it to their bedroom door. "Emphasize my deep guilt, shame, and remorse for not nurturing and being a better husband, father, and provider," he wrote. In another message, he asked ChatGPT to write his wife a poem "so epic that it could make her change her mind but not cheesy or over the top." The man's chat history was included in the WildChat data set, a collection of 1 million ChatGPT conversations gathered consensually by researchers to document how people are interacting with the popular chatbot.


The Next Big Thing Is Still … Smart Glasses

The Atlantic - Technology

Last week, Mark Zuckerberg stood on a stage in California holding what appeared to be a pair of thick black eyeglasses. His baggy T-shirt displayed Latin text that seemed to compare him to Julius Caesar--aut Zuck aut nihil, "either Zuck or nothing"--and he offered a bold declaration: These are Orion, "the most advanced glasses the world has ever seen." Those glasses, just a prototype for now, allow users to take video calls, watch movies, and play games in so-called augmented reality, where digital imagery is overlaid on the real world. Demo videos at Meta Connect, the company's annual conference, showed people playing Pong on the glasses, their hands functioning as paddles, as well as using the glasses to project a TV screen onto an otherwise blank wall. "A lot of people have said that this is the craziest technology they've ever seen," Zuckerberg said.


The Playwright in the Age of AI

The Atlantic - Technology

Ayad Akhtar's brilliant new play, McNeal, currently at the Lincoln Center Theater, is transfixing in part because it tracks without flinching the disintegration of a celebrated writer, and in part because Akhtar goes to a place that few writers have visited so effectively--the very near future, in which large language models threaten to undo our self-satisfied understanding of creativity, plagiarism, and originality. And also because Robert Downey Jr., performing onstage for the first time in more than 40 years, perfectly embodies the genius and brokenness of the title character. Check out more from this issue and find your next story to read. I've been in conversation for quite some time with Akhtar, whose play Disgraced won the Pulitzer Prize in 2013, about generative artificial intelligence and its impact on cognition and creation. He's one of the few writers I know whose position on AI can't be reduced to the (understandable) plea For God's sake, stop threatening my existence! In McNeal, he not only suggests that LLMs might be nondestructive utilities for human writers; he also deployed LLMs as he wrote it (he's used many of them, ChatGPT, Claude, and Gemini included). To my chagrin and astonishment, they seem to have helped him make an even better play. As you will see in our conversation, he doesn't believe that this should be controversial. In early September, Akhtar, Downey, Bartlett Sher--the Tony Award winner who directed McNeal--and I met at Downey's home in New York for what turned out to be an amusing, occasionally frenetic, and sometimes even borderline profound discussion of the play, its origins, the flummoxing issues it raises, and, yes, Avengers: Age of Ultron. We were joined intermittently by Susan Downey, Robert's wife (and producing partner), and the person who believed that Akhtar's play would tempt her husband to return to the stage.
The conversation that follows is a condensed and edited version of our sprawling discussion, but I think it captures something about art and AI, and it certainly captures the exceptional qualities of three people--writer, director, and actor--who are operating at the pinnacle of their trade, without fear--perhaps without enough fear--of what is inescapably coming.


Does AI Actually Understand Language?

The Atlantic - Technology

This article was originally published by Quanta Magazine. A picture may be worth a thousand words, but how many numbers is a word worth? The question may sound silly, but it happens to be the foundation that underlies large language models, or LLMs--and through them, many modern applications of artificial intelligence. Every LLM has its own answer. In Meta's open-source Llama 3 model, words are split into tokens represented by 4,096 numbers; for one version of GPT-3, it's 12,288.
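The excerpt's core idea--that an LLM answers "how many numbers is a word worth?" with a fixed-length vector per token--can be sketched as a simple lookup table. This is a toy illustration only: the vocabulary, the 8-dimensional vectors, and the random values are invented for the example, whereas real models like Llama 3 use 4,096 numbers per token and one version of GPT-3 uses 12,288.

```python
import random

random.seed(0)

# A tiny, made-up vocabulary mapping each token to a row index.
vocab = {"a": 0, "word": 1, "is": 2, "worth": 3}

# Real models use thousands of dimensions; 8 keeps the toy readable.
embedding_dim = 8

# The embedding table: one list of numbers per token in the vocabulary.
embedding_table = [
    [random.gauss(0.0, 1.0) for _ in range(embedding_dim)]
    for _ in vocab
]

def embed(token: str) -> list[float]:
    """Look up the vector of numbers that represents a token."""
    return embedding_table[vocab[token]]

print(len(embed("word")))  # 8: in this toy model, a word is "worth" 8 numbers
```

In a trained model these numbers are learned rather than random, so that tokens with related meanings end up with nearby vectors.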


AI Is a Language Microwave

The Atlantic - Technology

Nearly two years ago, I wrote that AI would kill the undergraduate essay. That reaction came in the immediate aftermath of ChatGPT, when the sudden appearance of its shocking capabilities seemed to present endless vistas of possibility--some liberating, some catastrophic. Since then, the potential of generative AI has felt clear, although its practical applications in everyday life have remained somewhat nebulous. Academia remains at the forefront of this question: Everybody knows students are using AI--but how, and to what effect? The answers will, at least to some extent, reveal the place that AI will find for itself in society at large.


High School Is Becoming a Cesspool of Sexually Explicit Deepfakes

The Atlantic - Technology

For years now, generative AI has been used to conjure all sorts of realities--dazzling paintings and startling animations of worlds and people, both real and imagined. This power has brought with it a tremendous dark side that many experts are only now beginning to contend with: AI is being used to create nonconsensual, sexually explicit images and videos of children. And not just in a handful of cases--perhaps millions of kids nationwide have been affected in some way by the emergence of this technology, either directly victimized themselves or made aware of other students who have been. This morning, the Center for Democracy and Technology, a nonprofit that advocates for digital rights and privacy, released a report on the alarming prevalence of nonconsensual intimate imagery (or NCII) in American schools. In the past school year, the center's polling found, 15 percent of high schoolers reported hearing about a "deepfake"--or AI-generated image--that depicted someone associated with their school in a sexually explicit or intimate manner.


OpenAI Takes Its Mask Off

The Atlantic - Technology

There's a story about Sam Altman that has been repeated often enough to become Silicon Valley lore. In 2012, Paul Graham, a co-founder of the famed start-up accelerator Y Combinator and one of Altman's biggest mentors, sat Altman down and asked if he wanted to take over the organization. The decision was a peculiar one: Altman was only in his late 20s, and at least on paper, his qualifications were middling. He had dropped out of Stanford to found a company that ultimately hadn't panned out. After seven years, he'd sold it for roughly the same amount that his investors had put in.


AI Could Still Wreck the Presidential Election

The Atlantic - Technology

For years now, AI has undermined the public's ability to trust what it sees, hears, and reads. The Republican National Committee released a provocative ad offering an "AI-generated look into the country's possible future if Joe Biden is re-elected," showing apocalyptic, machine-made images of ruined cityscapes and chaos at the border. Fake robocalls purporting to be from Biden urged New Hampshire residents not to vote in the 2024 primary election. This summer, the Department of Justice cracked down on a Russian bot farm that was using AI to impersonate Americans on social media, and OpenAI disrupted an Iranian group using ChatGPT to generate fake social-media comments. It's not altogether clear what damage AI itself may cause, though the reasons for concern are obvious--the technology makes it easier for bad actors to construct highly persuasive and misleading content.


OpenAI's Big Reset

The Atlantic - Technology

After weeks of speculation about a new and more powerful AI product in the works, OpenAI today announced its first "reasoning model." The program, known as o1, may in many respects be OpenAI's most powerful AI offering yet, with problem-solving capacities that resemble those of a human mind more than any software before. Or, at least, that's how the company is selling it. As with most OpenAI research and product announcements, o1 is, for now, something of a tease. The start-up claims that the model is far better at complex tasks but released very few details about the model's training.


Ted Chiang Is Wrong About AI Art

The Atlantic - Technology

Artists and writers all over the world have spent the past two years engaged in an existential battle. Generative-AI programs such as ChatGPT and DALL-E are built on work stolen from humans, and machines threaten to replace the artists and writers who made the material in the first place. Their outrage is well warranted--but their arguments don't always make sense or substantively help defend humanity. Over the weekend, the legendary science-fiction writer Ted Chiang stepped into the fray, publishing an essay in The New Yorker arguing, as the headline says, that AI "isn't going to make art." Chiang writes not simply that AI's outputs frequently lack value, but that AI cannot be used to make art, really ever, leaving no room for the many different ways someone might use the technology.