'Time Is Running Out': New Open Letter Calls for Ban on Superintelligent AI Development

TIME - Tech

The home page of the ChatGPT application displayed on a smartphone screen.

An open letter calling for a prohibition on the development of superintelligent AI was announced on Wednesday, carrying the signatures of more than 700 celebrities, AI scientists, faith leaders, and policymakers. Among the signatories are five Nobel laureates; two so-called "Godfathers of AI"; Steve Wozniak, a co-founder of Apple; Steve Bannon, a close ally of President Trump; Paolo Benanti, an adviser to the Pope; and even Harry and Meghan, the Duke and Duchess of Sussex. "We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in," the letter reads.

The letter was coordinated and published by the Future of Life Institute (FLI), a nonprofit that in 2023 published a different open letter calling for a six-month pause on the development of powerful AI systems. Although widely circulated, that letter did not achieve its goal. Organizers said they decided to mount a new campaign, with a more specific focus on superintelligence, because they believe the technology--which they define as a system that can surpass human performance on all useful tasks--could arrive in as little as one to two years.

"Time is running out," says Anthony Aguirre, the FLI's executive director, in an interview with TIME. The only thing likely to stop AI companies barreling toward superintelligence, he says, "is for there to be widespread realization among society at all its levels that this is not actually what we want." Polling released alongside the letter showed that 64% of Americans believe superintelligence "shouldn't be developed until it's provably safe and controllable," while only 5% believe it should be developed as quickly as possible.

"It's a small number of very wealthy companies that are building these, and a very, very large number of people who would rather take a different path," says Aguirre. Actors Joseph Gordon-Levitt and Stephen Fry, rapper will.i.am, and Susan Rice, the national security advisor in Barack Obama's Administration, also signed. So did Leo Gao, a serving member of technical staff at OpenAI--an organization described by its CEO, Sam Altman, as a "superintelligence research company." Aguirre expects more people to sign as the campaign unfolds. "The beliefs are already there," he says. "What we don't have is people feeling free to state their beliefs out loud."

"The future of AI should serve humanity, not replace it," said Prince Harry, Duke of Sussex, in a message accompanying his signature. "I believe the true test of progress will be not how fast we move, but how wisely we steer."


The U.K. Lacks the Ability to Respond to AI Disasters, New Report Warns

TIME - Tech

Welcome back to TIME's twice-weekly newsletter about AI. A major AI-enabled disaster is becoming increasingly likely as AI capabilities advance. But a new report from a London-based think tank warns that the British government does not have the emergency powers necessary to respond to such disasters, like the disruption of critical infrastructure or a terrorist attack. The U.K. must give its officials new powers, including the ability to compel tech companies to share information and to restrict public access to their AI models in an emergency, argues the report, which was shared exclusively with TIME ahead of its publication on Tuesday by the Centre for Long-Term Resilience (CLTR).


Exclusive: Every AI Datacenter Is Vulnerable to Chinese Espionage, Report Says

TIME - Tech

The unredacted report was circulated inside the Trump White House in recent weeks, according to its authors. TIME viewed a redacted version ahead of its public release. The White House did not respond to a request for comment. Today's top AI datacenters are vulnerable to both asymmetrical sabotage--where relatively cheap attacks could disable them for months--and exfiltration attacks, in which closely guarded AI models could be stolen or surveilled, the report's authors warn. "You could end up with dozens of datacenter sites that are essentially stranded assets that can't be retrofitted for the level of security that's required," says Edouard Harris, one of the authors of the report.


US indicts Chinese, Taiwanese firms for trade espionage

Al Jazeera

The US Justice Department indicted two companies based in China and Taiwan, along with three individuals, alleging they conspired to steal trade secrets from US semiconductor company Micron relating to its research and development of memory storage devices. The charges against Taiwan-based United Microelectronics Corp, China's state-owned Fujian Jinhua Integrated Circuit Co., Ltd., and the three individuals mark the fourth case brought since September as part of a broader crackdown on alleged Chinese espionage against US companies. US Attorney General Jeff Sessions told a news conference that Chinese espionage has been "increasing rapidly," adding that the "cheating must stop." He said the government is launching a new initiative to crack down on Chinese trade espionage cases. In addition to the criminal case, the Justice Department also filed a civil lawsuit seeking to prevent the two companies from exporting any products created using the stolen trade secrets.