Report says
'Time Is Running Out': New Open Letter Calls for Ban on Superintelligent AI Development
An open letter calling for a prohibition on the development of superintelligent AI was announced on Wednesday, with the signatures of more than 700 celebrities, AI scientists, faith leaders, and policymakers. Among the signatories are five Nobel laureates; two so-called "Godfathers of AI"; Steve Wozniak, a co-founder of Apple; Steve Bannon, a close ally of President Trump; Paolo Benanti, an adviser to the Pope; and even Harry and Meghan, the Duke and Duchess of Sussex. "We call for a prohibition on the development of superintelligence, not lifted before there is..." the letter reads. It was coordinated and published by the Future of Life Institute, a nonprofit that in 2023 published a different open letter calling for a six-month pause on the development of powerful AI systems. Although widely circulated, that letter did not achieve its goal. Organizers said they decided to mount a new campaign, with a more specific focus on superintelligence, because they believe the technology--which they define as a system that can surpass human performance on all useful tasks--could arrive in as little as one to two years. "Time is running out," says Anthony Aguirre, the FLI's executive director, in an interview with TIME. The only thing likely to stop AI companies barreling toward superintelligence, he says, "is for there to be widespread realization among society at all its levels that this is not actually what we want." Polling released alongside the letter showed that 64% of Americans believe that superintelligence "shouldn't be developed until it's provably safe and controllable," while only 5% believe it should be developed as quickly as possible. "It's a small number of very wealthy companies that are building these, and a very, very large number of people who would rather take a different path," says Aguirre. Actors Joseph Gordon-Levitt and Stephen Fry, rapper will.i.am, and Susan Rice, the national security advisor in Barack Obama's Administration, also signed. So did Leo Gao, a serving member of technical staff at OpenAI--an organization described by its CEO, Sam Altman, as a "superintelligence research company." Aguirre expects more people to sign as the campaign unfolds. "The beliefs are already there," he says. "What we don't have is people feeling free to state their beliefs out loud." "The future of AI should serve humanity, not replace it," said Prince Harry, Duke of Sussex, in a message accompanying his signature. "I believe the true test of progress will be not how fast we move, but how wisely we steer."
- North America > United States (1.00)
- Europe > United Kingdom (0.56)
- Asia > China (0.05)
Exclusive: Every AI Datacenter Is Vulnerable to Chinese Espionage, Report Says
The unredacted report was circulated inside the Trump White House in recent weeks, according to its authors. TIME viewed a redacted version ahead of its public release. The White House did not respond to a request for comment. Today's top AI datacenters are vulnerable to both asymmetrical sabotage--where relatively cheap attacks could disable them for months--and exfiltration attacks, in which closely guarded AI models could be stolen or surveilled, the report's authors warn. "You could end up with dozens of datacenter sites that are essentially stranded assets that can't be retrofitted for the level of security that's required," says Edouard Harris, one of the authors of the report.
- North America > United States (0.82)
- Asia > China (0.40)
Employees at Top AI Labs Fear Safety Is an Afterthought, Report Says
Workers at some of the world's leading AI companies harbor significant concerns about the safety of their work and the incentives driving their leadership, a report published on Monday claimed. The report, commissioned by the State Department and written by employees of the company Gladstone AI, makes several recommendations for how the U.S. should respond to what it argues are significant national security risks posed by advanced AI. Read More: Exclusive: U.S. Must Move 'Decisively' To Avert 'Extinction-Level' Threat from AI, Government-Commissioned Report Says. The report's authors spoke with more than 200 experts, including employees at OpenAI, Google DeepMind, Meta and Anthropic--leading AI labs that are all working towards "artificial general intelligence," a hypothetical technology that could perform most tasks at or above the level of a human. The authors included excerpts of concerns that employees from some of these labs shared with them privately, without naming the individuals or the specific company they work for. OpenAI, Google, Meta and Anthropic did not immediately respond to requests for comment. "We have served, through this project, as a de-facto clearing house for the concerns of frontier researchers who are not convinced that the default trajectory of their organizations would avoid catastrophic outcomes," Jeremie Harris, the CEO of Gladstone and one of the authors of the report, tells TIME. One individual at an unspecified AI lab told the report's authors that the lab takes what the report characterized as a "lax approach to safety," stemming from a desire not to slow down its work on building more powerful systems.
Apple Has Created Its Own AI Chatbot, Report Says - CNET
Apple has created its own generative artificial intelligence tools to compete with ChatGPT, according to a Bloomberg report Wednesday. Apple built its own framework, called "Ajax," for creating large language models, as well as a chatbot service that internal engineers are calling Apple GPT, according to Bloomberg, which cited unnamed sources. It's part of the iPhone giant's bid to compete in the AI space, the report said. Apple didn't immediately respond to a request for comment. Large language models are what power generative artificial intelligence chatbots like OpenAI's ChatGPT and Google's Bard.
ChatGPT Maker OpenAI Faces FTC Probe Over Risks to Consumers, Report Says - CNET
The US Federal Trade Commission has reportedly launched an investigation into whether OpenAI, the company behind popular AI chatbot ChatGPT, has violated consumer protection laws. The FTC sent OpenAI a 20-page request for documents covering concerns related to data privacy and reputational harm, according to a report Thursday from The Washington Post. The agency also asked for details on OpenAI's large language model, the technology behind its generative AI chatbot, including all sources used to train the model and how data was obtained, according to the request, which was shared by the Post. CNET hasn't independently verified the request. The FTC declined to comment.
- Law > Business Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
ChatGPT Caused 'Code Red' at Google, Report Says - CNET
ChatGPT, an AI chatbot developed by OpenAI that went viral because it can give people direct answers to just about any query, apparently has alarm bells ringing at Google, according to a report by the New York Times Wednesday. A Google executive the Times spoke to but didn't name said AI chatbots like ChatGPT could upend the search giant's business, which relies heavily on ads and e-commerce found in Google Search. Citing a memo and audio recording it obtained, the Times reports that CEO Sundar Pichai has been in meetings to "define Google's AI strategy" and has "upended the work of numerous groups inside the company to respond to the threat that ChatGPT poses." Google didn't immediately respond to a request for comment. ChatGPT uses data available online to give users conversational answers to a host of questions.
Artificial Intelligence, Automation Aren't Killing Labor Market, Report Says
Concerns that emerging technologies like artificial intelligence and automation could wipe out wide swaths of American jobs aren't backed up by data, according to a Sept. 13 report released by the nonprofit, nonpartisan Information Technology and Innovation Foundation. The report examines decades' worth of data from the U.S. Bureau of Labor Statistics across 10 industries--construction, leisure and hospitality, professional and business services, retail trade, transportation and warehousing, wholesale trade, financial activities, information, education and health services, and manufacturing. The report found rates of job loss in each industry were lower in the third quarter of 2020 than in 1995. The third quarter of 2020 represented a stabilization of the American job market following a significant spike in job losses due to the COVID-19 pandemic, which reached as high as 45% in the leisure and hospitality industry. According to the report, U.S. workers have about a 5.8% chance of losing their jobs across those industries in any given quarter, down from 7.3% in 1995. "The prevailing narrative of accelerating job loss due to new technology is just a myth," ITIF President Robert Atkinson, who co-authored the report, said in a statement.
- North America > United States (0.59)
- Europe (0.07)
- Health & Medicine (1.00)
- Banking & Finance > Economy (1.00)
- Government > Regional Government > North America Government > United States Government (0.59)
Room for Improvement in Data Quality, Report Says
A new study commissioned by Trifacta shines a light on the costs of poor data quality, particularly for organizations implementing AI initiatives. The study found that dirty and disorganized data are linked to AI projects that take longer, are more expensive, and do not deliver the anticipated results. As more firms ramp up AI initiatives, the consequences of poor data quality are expected to grow. The relatively sorry state of data quality is not a new phenomenon. Ever since humans started recording events, we've had to deal with errors.
- Information Technology > Data Science > Data Quality (1.00)
- Information Technology > Artificial Intelligence (1.00)
'ARREST BY ALGORITHM': China Uses Artificial Intelligence To Flag Entire Groups Of People For Arrest, Report Says
A new trove of highly classified leaked documents from the Chinese communist government shows how Beijing operates its widespread concentration camps, where millions of Muslims and other minorities are reportedly locked up. The International Consortium of Investigative Journalists (ICIJ) reports that the leaked documents reveal that "Chinese police are guided by a massive data collection and analysis system that uses artificial intelligence to select entire categories of Xinjiang residents for detention." The manual obtained by ICIJ gives detailed instructions on everything from deciding when to let detainees use the toilet to how to keep the camps' existence totally secret. "The China Cables reveal how the system is able to amass vast amounts of intimate personal data through warrantless manual searches, facial recognition cameras, and other means to identify candidates for detention, flagging for investigation hundreds of thousands merely for using certain popular mobile phone apps. The documents detail explicit directives to arrest Uighurs with foreign citizenship and to track Xinjiang Uighurs living abroad, some of whom have been deported back to China by authoritarian governments. Among those implicated as taking part in the global dragnet: China's embassies and consulates."
- Government (1.00)
- Media > News (0.38)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Mobile (0.56)
AI Voice Assistants Reinforce Gender Biases, U.N. Report Says
Artificial intelligence voice assistants with female voices reinforce existing gender biases, according to a new United Nations report. The report from UNESCO, entitled "I'd Blush If I Could," looks at the impact of female voice assistants, from Amazon's Alexa to Apple's Siri, being projected in a way that suggests women are "subservient and tolerant of poor treatment." The report takes its title from the response Siri used to give when a human told her, "Hey Siri, you're a b-tch." Further, the researchers argue that tech companies have failed to take protective measures against abusive or gendered language from users. "Because the speech of most voice assistants is female, it sends a signal that women are obliging, docile and eager-to-please helpers, available at the touch of a button or with a blunt voice command like 'hey' or 'OK,'" the researchers write.