

America Isn't Ready for What AI Will Do to Jobs

The Atlantic - Technology

This story appears in the March 2026 print edition.

Does anyone have a plan for what happens next?

In 1869, a group of Massachusetts reformers persuaded the state to try a simple idea: counting. The Second Industrial Revolution was belching its way through New England, teaching mill and factory owners a lesson most M.B.A. students now learn in their first semester: that efficiency gains tend to come from somewhere, and that somewhere is usually somebody else. The machines were operating at speeds that the human body--an elegant piece of engineering designed over millions of years for entirely different purposes--simply wasn't built to match. The owners knew this, just as they knew that there's a limit to how much misery people are willing to tolerate before they start setting fire to things. Still, the machines pressed on.

So Massachusetts created the nation's first Bureau of Statistics of Labor, hoping that data might accomplish what conscience could not. By measuring work hours, conditions, wages, and what economists now call "negative externalities" but were then called "children's arms torn off," policy makers figured they might be able to produce reasonably fair outcomes for everyone. A few years later, with federal troops shooting at striking railroad workers and wealthy citizens funding private armories--leading indicators that things in your society aren't going great--Congress decided that this idea might be worth trying at scale and created the Bureau of Labor Statistics.

Measurement doesn't abolish injustice; it rarely even settles arguments. But the act of counting--of trying to see clearly, of committing the government to a shared set of facts--signals an intention to be fair, or at least to be caught trying. It's one way a republic earns the right to be believed in. The BLS remains a small miracle of civilization.


The Economist Breaking Ranks to Warn of AI's Transformative Power

TIME - Tech

Technologists tend to predict that the economic impacts of their creations will be unprecedented--and this is especially true when it comes to artificial intelligence. Last year, Elon Musk predicted that continued advances in AI would render human labor obsolete. OpenAI CEO Sam Altman has written that AI will inevitably continue the shift in economic power from labor to capital and create "phenomenal wealth." Jensen Huang, CEO of semiconductor design firm Nvidia, has compared AI's development and deployment to a "new industrial revolution." But while the technologists are bullish on the economic impacts of AI, members of that other technocratic priesthood with profound influence over public life--the economists--are not.


Amazon's Partnership With Anthropic Shows Size Matters in the AI Industry

TIME - Tech

As part of the deal, Amazon, the world's largest provider of cloud infrastructure services through its AWS unit, will become the primary provider of computational processing power, also called compute, for Anthropic. The process of training and running state-of-the-art AI models requires vast amounts of compute, and many analysts expect future AI models to require increasing amounts of it. In return, Amazon will acquire a minority ownership position in Anthropic, and Amazon's engineers will be able to incorporate Anthropic's AI models into the company's products and services, such as its voice assistant, Alexa. Anthropic has also committed to offering its models via Bedrock, Amazon's online platform for hosting foundation models--broadly capable AI models that can be adapted for different tasks. Anthropic was founded in 2021, after a group of OpenAI employees left over differences in their approach to AI safety.


U.K. Competition Watchdog Signals Cautious Approach to AI Regulation

TIME - Tech

A report published this week by the U.K.'s Competition & Markets Authority (CMA) has raised concerns about the potential ways the artificial intelligence industry could become monopolized or harm consumers in the future, but stressed that it is too soon to tell whether these scenarios will materialize. The issues raised by the report highlight the difficulties policymakers face in governing AI, a source of both huge potential commercial value and many risks. Rishi Sunak, the British Prime Minister, is pushing for the U.K. to occupy a central role in international AI policy discussions, with a particular focus on risks from advanced AI systems. If the U.K. competition watchdog decides to start taking action against AI developers, tech companies around the world could be affected. The report, published on Monday, focuses on foundation models, which the CMA defines as "a type of AI technology that are trained on vast amounts of data that can be adapted to a wide range of tasks and operations." Examples include text-generating AI models, such as GPT-3.5, the model that powers OpenAI's ChatGPT, as well as image-generating AI models, such as Stable Diffusion.


How Google's Search Rival Could Use ChatGPT to Get a Leg Up - CNET

#artificialintelligence

Have you ever found yourself trawling through endless pages of results on a search engine to find the answer to a complex question? Say you want to find out if a vegetarian diet is suitable for your dog. Your research journey might begin by hopping onto Google, typing "is a veg diet good for dogs" into the search box, and then having to make sense of the legion of generated links. By the time you find an answer, you've sunk far more time than you'd budgeted into poring over articles, reports, and their sources. In the not-so-distant future, finding the answer to a complex question might not be such a tedious, mind-numbing process.


What if humans are no longer earth's most intelligent beings?

#artificialintelligence

In his final, posthumously published book, famed physicist Stephen Hawking raises an alarm about the dangers of artificial intelligence, or AI, and the existential threat it could pose to humanity. In "Brief Answers to the Big Questions," Hawking writes, "a super-intelligent AI will be extremely good at accomplishing goals, and if those goals aren't aligned with ours, we're in trouble." University of Virginia economist Anton Korinek could not agree more, and he believes that the kind of AI that Hawking refers to – "general artificial intelligence" that can equal or surpass human intelligence – could be just a few decades away. "I believe that, by the second half of this century, AI – robots and programs – will be better than us humans at nearly everything," said Korinek, who holds a joint appointment in UVA's Economics Department and the Darden School of Business. "The fundamental question becomes, 'What will happen to humans if we are no longer the most generally intelligent beings on Earth?'" Korinek has written and co-written several published and forthcoming papers on the economic impact of increasing artificial intelligence, including a paper published by the National Bureau of Economic Research and several works in progress.