AI Wrapped: The 14 AI terms you couldn't avoid in 2025
From "superintelligence" to "slop," here are the words and phrases that defined another year of AI craziness.

If the past 12 months have taught us anything, it's that the AI hype train is showing no signs of slowing. It's hard to believe that at the beginning of the year, DeepSeek had yet to turn the entire industry on its head, Meta was better known for trying (and failing) to make the metaverse cool than for its relentless quest to dominate superintelligence, and vibe coding wasn't a thing.

If that's left you feeling a little confused, fear not. As we near the end of 2025, our writers have taken a look back over the AI terms that dominated the year, for better or worse. Make sure you take the time to brace yourself for what promises to be another bonkers year.
How social media encourages the worst of AI boosterism
The era of hype first, think later.

Demis Hassabis, CEO of Google DeepMind, summed it up in three words: "This is embarrassing." Hassabis was replying on X to an overexcited post by Sébastien Bubeck, a research scientist at the rival firm OpenAI, announcing that two mathematicians had used OpenAI's latest large language model, GPT-5, to find solutions to 10 unsolved problems in mathematics. "Science acceleration via AI has officially begun," Bubeck crowed.

Put your math hats on for a minute, and let's take a look at what this beef from mid-October was about. Bubeck was excited that GPT-5 seemed to have somehow solved a number of puzzles known as Erdős problems.
China figured out how to sell EVs. Now it has to bury their batteries.
As early electric cars age out, hundreds of thousands of used batteries are flooding the market, fueling a gray recycling economy even as Beijing and big manufacturers scramble to build a more orderly system.

In August 2025, Wang Lei decided it was finally time to say goodbye to his electric vehicle. Wang, who is 39, had bought the car in 2016, when EVs still felt experimental in Beijing. It was a compact Chinese brand.
Creating psychological safety in the AI era
Trust in AI begins when leaders admit what they do not know, address fears, and help people adapt.

Rolling out enterprise-grade AI means climbing two steep cliffs at once: first, solving the technical challenges of implementation, and second, creating the cultural conditions where employees can maximize its value. While the technical hurdles are significant, the human element can be even more consequential; fear and ambiguity can stall the momentum of even the most promising initiatives.

Psychological safety--feeling free to express opinions and take calculated risks without worrying about career repercussions--is essential for successful AI adoption. In psychologically safe workplaces, employees are empowered to challenge assumptions and raise concerns about new tools without fear of reprisal.
Why it's time to reset our expectations for AI
The hype we have been sold for the past few years has been overwhelming. Hype Correction is the antidote.

Can I ask you a question: How do you feel about AI right now? Are you still excited? When you hear that OpenAI or Google just dropped a new model, do you still get that buzz? Or has the shine come off it, maybe just a teeny bit? Come on, you can be honest with me.
Quantum navigation could solve the military's GPS jamming problem
The rise of GPS vulnerability is putting more resilient, atom-based navigational tools on the map. The Royal Navy partnered with Infleqtion to test a quantum clock on the uncrewed submarine XV Excalibur.

In late September, a Spanish military plane carrying the country's defense minister to a base in Lithuania was reportedly the subject of a kind of attack--not by a rocket or anti-aircraft rounds, but by radio transmissions that jammed its GPS system. The flight landed safely, but it was one of thousands that have been affected by a far-reaching Russian campaign of GPS interference since the 2022 invasion of Ukraine.

The growing inconvenience to air traffic and risk of a real disaster have highlighted the vulnerability of GPS and focused attention on more secure ways for planes to navigate the gauntlet of jamming and spoofing, the term for tricking a GPS receiver into thinking it's somewhere else. US military contractors are rolling out new GPS satellites that use stronger, cleverer signals, and engineers are working on providing better navigation information based on other sources, like cellular transmissions and visual data.
AI might not be coming for lawyers' jobs anytime soon
Generative AI might have aced the bar exam, but an LLM still can't think like a lawyer.

When the generative AI boom took off in 2022, Rudi Miller and her law school classmates were suddenly gripped with anxiety. "Before graduating, there was discussion about what the job market would look like for us if AI became adopted," she recalls. So when it came time to choose a specialty, Miller--now a junior associate at the law firm Orrick--decided to become a litigator, the kind of lawyer who represents clients in court. She hoped the courtroom would be the last human stage. "Judges haven't allowed ChatGPT-enabled robots to argue in court yet," she says.
The great AI hype correction of 2025
Four ways to think about this year's reckoning.

When OpenAI released a free web app called ChatGPT in late 2022, it changed the course of an entire industry--and several world economies. Millions of people started talking to their computers, and their computers started talking back. We were enchanted, and we expected more. Technology companies scrambled to stay ahead, putting out rival products that outdid one another with each new release: voice, images, video. With nonstop one-upmanship, AI companies have presented each new product drop as a major breakthrough, reinforcing a widespread faith that this technology would just keep getting better. Boosters told us that progress was exponential.
What even is the AI bubble?
Everyone in tech agrees we're in a bubble. They just can't agree on what it looks like--or what happens when it pops.

In July, a widely cited MIT study claimed that 95% of organizations that invested in generative AI were getting "zero return." While the study itself was more nuanced than the headlines, for many it still felt like the first hard data point confirming what skeptics had muttered for months: Hype around AI might be outpacing reality. Then, in August, OpenAI CEO Sam Altman said what everyone in Silicon Valley had been whispering.
The AI doomers feel undeterred
But they certainly wish people were still taking their warnings seriously.

It's a weird time to be an AI doomer. This small but influential community of researchers, scientists, and policy experts believes, in the simplest terms, that AI could get so good it could be bad--very, very bad--for humanity. Though many of these people would be more likely to describe themselves as advocates for AI safety than as literal doomsayers, they warn that AI poses an existential risk to humanity. They argue that absent more regulation, the industry could hurtle toward systems it can't control. They commonly expect such systems to follow the creation of artificial general intelligence (AGI), a slippery concept generally understood as technology that can do whatever humans can do, and better.

Though this is far from a universally shared perspective in the AI field, the doomer crowd has had some notable success over the past several years: helping shape AI policy coming from the Biden administration, organizing prominent calls for international "red lines" to prevent AI risks, and getting a bigger (and more influential) megaphone as some of its adherents win science's most prestigious awards. But a number of developments over the past six months have put them on the back foot.