The Atlantic - Technology
The Great Language Flattening
In at least one crucial way, AI has already won its campaign for global dominance. An unbelievable volume of synthetic prose is published every moment of every day--heaping piles of machine-written news articles, text messages, emails, search results, customer-service chats, even scientific research. Chatbots learned from human writing. Now the influence may run in the other direction. Some people have hypothesized that as generative-AI tools such as ChatGPT proliferate, their style will seep into human communication--that the terse language we use when prompting a chatbot may lead us to dispose of any niceties or writerly flourishes when corresponding with friends and colleagues.
American Panopticon
If you have tips about DOGE and its data collection, you can contact Ian and Charlie on Signal at @ibogost.47 and @cwarzel.92. If you were tasked with building a panopticon, your design might look a lot like the information stores of the U.S. federal government--a collection of large, complex agencies, each making use of enormous volumes of data provided by or collected from citizens. The federal government is a veritable cosmos of information, made up of constellations of databases: The IRS gathers comprehensive financial and employment information from every taxpayer; the Department of Labor maintains the National Farmworker Jobs Program (NFJP) system, which collects the personal information of many workers; the Department of Homeland Security amasses data about the movements of every person who travels by air commercially or crosses the nation's borders; the Drug Enforcement Administration tracks license plates scanned on American roads. More obscure agencies, such as the recently gutted Consumer Financial Protection Bureau, keep records of corporate trade secrets, credit reports, mortgage information, and other sensitive data, including lists of people who have fallen on financial hardship. A fragile combination of decades-old laws, norms, and jungly bureaucracy has so far prevented repositories such as these from assembling into a centralized American surveillance state. But that appears to be changing. Since Donald Trump's second inauguration, Elon Musk and the Department of Government Efficiency have systematically gained access to sensitive data across the federal government, and in ways that people in several agencies have described to us as both dangerous and disturbing.
AI Executives Promise Cancer Cures. Here's the Reality
To hear Silicon Valley tell it, the end of disease is well on its way. Demis Hassabis, a Nobel laureate for his AI research and the CEO of Google DeepMind, said on Sunday that he hopes that AI will be able to solve important scientific problems and help "cure all disease" within five to 10 years. Earlier this month, OpenAI released new models and touted their ability to "generate and critically evaluate novel hypotheses" in biology, among other disciplines. These are all executives marketing their products, obviously, but is there even a kernel of possibility in these predictions? If generative AI could contribute in the slightest to such discoveries--as has been promised since the start of the AI boom--where would the technology and scientists using it even begin?
The Great AI Lock-In Has Begun
There are really two OpenAIs. One is the creator of world-bending machines--the start-up that unleashed ChatGPT and in turn the generative-AI boom, surging toward an unrecognizable future with the rest of the tech industry in tow. This is the OpenAI that promises to eventually bring about "superintelligent" programs that exceed humanity's capabilities. The other OpenAI is simply a business. This is the company that is reportedly working on a social network and considering an expansion into hardware; it is the company that offers user-experience updates to ChatGPT, such as an "image library" feature announced last week and the new ability to "reference" past chats to provide personalized responses.
The Gen Z Lifestyle Subsidy
Finals season looks different this year. Across college campuses, students are slogging their way through exams with all-nighters and lots of caffeine, just as they always have. Through the end of May, OpenAI is offering students two months of free access to ChatGPT Plus, which normally costs $20 a month. It's a compelling deal for students who want help cramming--or cheating--their way through finals: Rather than firing up the free version of ChatGPT to outsource essay writing or work through a practice chemistry exam, students are now able to access the company's most advanced models, as well as its "deep research" tool, which can quickly synthesize hundreds of digital sources into analytical reports. The OpenAI deal is just one of many such AI promotions going around campuses.
A Disaster for American Innovation
Nearly three months into President Donald Trump's term, the future of American AI leadership is in jeopardy. Basically any generative-AI product you have used or heard of--ChatGPT, Claude, AlphaFold, Sora--depends on academic work or was built by university-trained researchers in the industry, and frequently both. Today's AI boom is fueled by the use of specialized computer-graphics chips to run AI models--a technique pioneered by researchers at Stanford who received funding from the Department of Defense. These models rely on a training method called "reinforcement learning," the foundations of which were developed with National Science Foundation (NSF) grants. "I don't think anybody would seriously claim that these [AI breakthroughs] could have been done if the research universities in the U.S. didn't exist at the same scale," Rayid Ghani, a machine-learning researcher at Carnegie Mellon University, told me.
Elon Musk Lost His Big Bet
Last night, X's "For You" algorithm served me what felt like a dispatch from an alternate universe. It was a post from Elon Musk, originally published hours earlier. "This is the first time humans have been in orbit around the poles of the Earth!" he wrote. Underneath his post was a video shared by SpaceX--footage of craggy ice caps, taken by the company's Dragon spacecraft during a private mission. Taken on its own, the video is genuinely captivating.
The Gleeful Cruelty of the White House X Account
On March 18, the official White House account on X posted two photographs of Virginia Basora-Gonzalez, a woman who was arrested earlier this month by U.S. Immigration and Customs Enforcement. The post described her as a "previously deported alien felon convicted of fentanyl trafficking," and celebrated her capture as a win for the administration. In one photograph, Basora-Gonzalez is shown handcuffed and weeping in a public parking lot. The White House account posted about Basora-Gonzalez again yesterday--this time, rendering her capture in the animated style of the beloved Japanese filmmaker Hayao Miyazaki, who co-founded the animation company Studio Ghibli. Presumably, whoever runs the account had used ChatGPT, which has been going viral this week for an update to its advanced "4o" model that enables it to transform photographs in the style of popular art, among other things.
The Unbelievable Scale of AI's Pirated-Books Problem
Editor's note: This analysis is part of The Atlantic's investigation into the Library Genesis data set. The Atlantic has also published a search tool for the data set, as well as a separate tool for the movie and television writing used to train AI. When employees at Meta started developing their flagship AI model, Llama 3, they faced a simple ethical question. The program would need to be trained on a huge amount of high-quality writing to be competitive with products such as ChatGPT, and acquiring all of that text legally could take time.