AI Safety Meets the War Machine

WIRED

Anthropic doesn't want its AI used in autonomous weapons or government surveillance. Those carve-outs could cost it a major military contract. When Anthropic last year became the first major AI company cleared by the US government for classified use--including military applications--the news didn't make a major splash. But this week a second development hit like a cannonball: The Pentagon is reconsidering its relationship with the company, including a $200 million contract, ostensibly because the safety-conscious AI firm objects to participating in certain deadly operations. The so-called Department of War might even designate Anthropic a "supply chain risk," a scarlet letter usually reserved for companies that do business with countries scrutinized by federal agencies, such as China. That designation would mean the Pentagon stops doing business with any firm that uses Anthropic's AI in its defense work.


'We May Have a Crisis on Our Hands': The Unregulated Rise of Emotionally Intelligent AI

TIME - Tech

At least once a month, two-thirds of people who regularly use AI turn to their bots for advice on sensitive personal issues and emotional support. Many people now report trusting their chatbots more than their elected representatives, civil servants, faith leaders--and the companies building AI. That's according to data from 70 countries, gathered by the Collective Intelligence Project (CIP).


Microsoft has a new plan to prove what's real and what's AI online

MIT Technology Review

A new proposal calls on social media and AI companies to adopt strict verification, but the company hasn't committed to following its own recommendations. There are the high-profile cases you may easily spot, like when White House officials recently shared a manipulated image of a protester in Minnesota and then mocked those asking about it. Other times, manipulated media slips quietly into social media feeds and racks up views, like the videos that Russian influence campaigns are currently spreading to discourage Ukrainians from enlisting. It is into this mess that Microsoft has put forward a blueprint, shared with MIT Technology Review, for how to prove what's real online. An AI safety research team at the company recently evaluated how methods for documenting digital manipulation are faring against today's most worrying AI developments, like interactive deepfakes and widely accessible hyperrealistic models. It then recommended technical standards that AI companies and social media platforms can adopt.


Why are experts sounding the alarm on AI risks?

Al Jazeera

In recent months, artificial intelligence has been in the news for the wrong reasons: deepfakes used to scam people, AI systems used in cyberattacks, and chatbots encouraging suicide, among others. Experts are already warning that the technology could spin out of control. Researchers at some of the most prominent AI companies have quit their jobs in recent weeks and publicly sounded the alarm about fast-paced technological development posing risks to society. The recent slew of public resignations by those tasked with ensuring AI remains safe for humanity is making conversations about how to regulate the technology and slow its development more urgent, even as billions of dollars pour into AI investment.


The Download: AI-enhanced cybercrime, and secure AI assistants

MIT Technology Review

Plus: Instagram CEO Adam Mosseri has denied claims that social media is "clinically addictive."

AI is already making online crimes easier. It could get much worse. Just as software engineers are using artificial intelligence to help write code and check for bugs, hackers are using these tools to reduce the time and effort required to orchestrate an attack, lowering the barriers for less experienced attackers to try something out. Some in Silicon Valley warn that AI is on the brink of being able to carry out fully automated attacks. But most security researchers argue that we should be paying closer attention to the much more immediate risks posed by AI, which is already speeding up scams and increasing their volume. Criminals are increasingly exploiting the latest deepfake technologies to impersonate people and swindle victims out of vast sums of money.


Elon Musk's SpaceX has acquired his AI company, xAI

Engadget

Last year, Musk's AI company bought his social media company, X. The merger will "form the most ambitious, vertically-integrated innovation engine on (and off) Earth, with AI, rockets, space-based internet, direct-to-mobile device communications and the world's foremost real-time information and free speech platform," Musk wrote in an update. The AI company, currently best known for its CSAM-generating chatbot, might seem like a strange fit for a rocket company. But SpaceX is key to Musk's latest scheme to build AI data centers in space. In his update, Musk wrote that "global electricity demand for AI simply cannot be met with terrestrial solutions" and that moving the resource-intensive operations to space is "the only logical solution."


6 Graphs That Show Where the U.S. Leads China on AI--and Where It Doesn't

TIME - Tech

Two important things happened on January 20, 2025. In Washington, D.C., Donald Trump was inaugurated as President of the United States. In Hangzhou, China, a little-known Chinese firm called DeepSeek released R1, an AI model that industry watchers called a "Sputnik moment" for the country's AI industry. "Whether we like it or not, we're suddenly engaged in a fast-paced competition to build and define this groundbreaking technology that will determine so much about the future of civilization," said Trump later that year, as he announced his administration's AI action plan, which was titled "Winning the Race." There are many interpretations of what AI companies and their governments are racing towards, says AI policy researcher Lennart Heim: to deploy AI systems in the economy, to build robots, to create human-like artificial general intelligence.


Where Tech Leaders and Students Really Think AI Is Going

WIRED

We asked tech CEOs, journalists, entertainers, students, and more about the promise and peril of artificial intelligence. The future never feels fully certain. But in this time of rapid, intense transformation--political, technological, cultural, scientific--it's as difficult as it ever has been to get a sense of what's around the next corner. Here at WIRED, we're obsessed with what comes next. Our pursuit of the future most often takes the form of vigorously reported stories, in-depth videos, and interviews with the people helping define it.


Why chatbots are starting to check your age

MIT Technology Review

Confirming which users are kids is politically fraught and a technical nightmare. Here's what recent moves from OpenAI and the FTC tell us. How do tech companies check whether their users are kids? This question has taken on new urgency thanks to growing concern about the dangers that can arise when children talk to AI chatbots. For years, Big Tech companies asked users for a birthday (which anyone could fake) to avoid violating child privacy laws, but they weren't required to moderate content accordingly. Two developments over the past week show how quickly things are changing in the US, and how this issue is becoming a new battleground, even among parents and child-safety advocates.


The Lawsuit That Could Reshape the AI Industry Is Going to Trial

TIME - Tech

Welcome back to TIME's new twice-weekly newsletter about AI. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox?

What to Know: Musk v. Altman

Two artificial intelligence heavyweights will face off in court this spring, in a case that could have far-reaching consequences for the future of AI. A judge ruled on Thursday that Elon Musk's lawsuit against Sam Altman, Microsoft, and other OpenAI co-founders can proceed to a jury trial, rejecting OpenAI's attempts to get the case thrown out. The lawsuit relates to the early days of OpenAI, which started as a nonprofit funded by around $38 million in donations from Musk.