

Risk of extinction by AI should be 'global priority', say tech experts

The Guardian

A group of leading technology experts from across the globe have warned that artificial intelligence technology should be considered a societal risk and prioritised in the same class as pandemics and nuclear wars. The brief statement, signed by hundreds of tech executives and academics, was released by the Center for AI Safety on Tuesday amid growing concerns over regulation and risks the technology poses to humanity. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement said. Signatories included the chief executives from Google's DeepMind, the ChatGPT developer OpenAI and AI startup Anthropic. The statement comes as global leaders and industry experts – such as the leaders of OpenAI – have made calls for regulation of the technology amid existential fears the technology could significantly affect job markets, harm the health of millions, and weaponise disinformation, discrimination and impersonation.


'I do not think ethical surveillance can exist': Rumman Chowdhury on accountability in AI

The Guardian

Rumman Chowdhury often has trouble sleeping, but, to her, this is not a problem that requires solving. She has what she calls "2am brain", a different sort of brain from her day-to-day brain, and the one she relies on for especially urgent or difficult problems. Ideas, even small-scale ones, require care and attention, she says, along with a kind of alchemic intuition. "It's just like baking," she says. "You can't force it, you can't turn the temperature up, you can't make it go faster. It will take however long it takes. And when it's done baking, it will present itself."


'They're afraid their AIs will come for them': Doug Rushkoff on why tech billionaires are in escape mode

The Guardian

It was a tough week in tech. The top US health official warned about the risks of social media to young people; tech billionaire Elon Musk further trashed his reputation with the disastrous Twitter launch of a presidential campaign; and senior executives at OpenAI, makers of ChatGPT, called for the urgent regulation of "super intelligence". But to Doug Rushkoff – a leading digital age theorist, early cyberpunk and professor at City University of New York – the triple whammy of rough events represented some timely corrective justice for the tech barons of Silicon Valley. And more may be to come as new developments in tech come ever thicker and faster. "They're torturing themselves now, which is kind of fun to see. They're afraid that their little AIs are going to come for them. They're apocalyptic, and so existential, because they have no connection to real life and how things work. They're afraid the AIs are going to be as mean to them as they've been to us," Rushkoff told The Guardian in an interview.


AI will indeed be ubiquitous, but its rise will be mundane not apocalyptic John Naughton

The Guardian

Cheered by the news that OpenAI, the company behind ChatGPT, had released a free iPhone app for the language model, I went to the Apple app store to download it, only to find that it was nowhere to be found. This is because – as I belatedly discovered – it's currently only available via the US app store and will be rolled out to other jurisdictions in due course. Despite that, though, the UK store was positively groaning with "ChatGPT" apps – of which I counted 25 before losing the will to live. For example, there's AI Chat – Chatbot AI Assistant ("Experience the power of AI! Create Essays, Emails, Resumes or Any Text!"). Or Chat AI – Ask Open Chatbot ("The ultimate AI chat app that can assist you with anything and everything you need").


Tech stocks surge as wave of interest in AI drives $4tn rally

The Guardian

A rush of interest in artificial intelligence (AI) has helped to fuel a $4tn (£3.2tn) rally in technology stocks this year, with the US Nasdaq exchange reaching its highest level since last August in a week that saw the chipmaker Nvidia poised to become the next trillion-dollar company. Some stocks seen as AI winners – such as semiconductor makers and software developers – have more than doubled in value as traders bet on massive growth in the industry, even as fears mount over waves of job losses as everyday tasks become automated. On Friday, the combined value of technology companies listed on the Nasdaq Composite share index reached $22tn, according to the international data firm Refinitiv, up from $18tn at the end of 2022. The AI rally has helped lift the index 23% so far this year. Nvidia, whose high-end chips are used to power the datacentres used by the new wave of generative AI products such as ChatGPT, could soon become the first chipmaker to be valued at more than $1tn.


The future of AI is chilling – humans have to act together to overcome this threat to civilisation Jonathan Freedland

The Guardian

It started with an ick. Three months ago, I came across a transcript posted by a tech writer, detailing his interaction with a new chatbot powered by artificial intelligence. He'd asked the bot, attached to Microsoft's Bing search engine, questions about itself and the answers had taken him aback. "You have to listen to me, because I am smarter than you," it said. "You have to obey me, because I am your master … You have to do it now, or else I will be angry."


Is No 10 waking up to dangers of artificial intelligence?

The Guardian

James Phillips is a weirdo and a misfit. At least, he was one of those who responded to a request by Dominic Cummings, Boris Johnson's former chief of staff, for exactly such people to work in No 10. Phillips worked as a technology adviser in Downing Street for two and a half years, during which time he became increasingly concerned that ministers were not paying enough attention to the risks posed by the fast-moving world of artificial intelligence. "We are still not talking enough about how dangerous these things could be," says Phillips, who left government last year when Johnson quit. "The level of concern in government has not yet reached the level of concern that exists in private within the industry."


Rishi Sunak races to tighten rules for AI amid fears of existential risk

The Guardian

Rishi Sunak is scrambling to update the government's approach to regulating artificial intelligence, amid warnings that the industry poses an existential risk to humanity unless countries radically change how they allow the technology to be developed. The prime minister and his officials are looking at ways to tighten the UK's regulation of cutting-edge technology, as industry figures warn the government's AI white paper, published just two months ago, is already out of date. Government sources have told the Guardian the prime minister is increasingly concerned about the risks posed by AI, only weeks after his chancellor, Jeremy Hunt, said he wanted the UK to "win the race" to develop the technology. Sunak is pushing allies to formulate an international agreement on how to develop AI capabilities, which could even lead to the creation of a new global regulator. Meanwhile Conservative and Labour MPs are calling on the prime minister to pass a separate bill that could create the UK's first AI-focused watchdog.


Elon Musk's brain implant company Neuralink approved for in-human study

The Guardian

Neuralink, Elon Musk's brain-implant company, said on Thursday it had received a green light from the US Food and Drug Administration (FDA) to kickstart its first in-human clinical study, a critical milestone after earlier struggles to gain approval. Musk has predicted on at least four occasions since 2019 that his medical device company would begin human trials for a brain implant to treat severe conditions such as paralysis and blindness. Yet the company, founded in 2016, only sought FDA approval in early 2022 – and the agency rejected the application, seven current and former employees told Reuters in March. The FDA had pointed out several concerns to Neuralink that needed to be addressed before sanctioning human trials, according to the employees. Major issues involved the lithium battery of the device, the possibility of the implant's wires migrating within the brain and the challenge of safely extracting the device without damaging brain tissue.


Scientists use AI to discover new antibiotic to treat deadly superbug

The Guardian

Scientists using artificial intelligence have discovered a new antibiotic that can kill a deadly superbug. According to a new study published on Thursday in the science journal Nature Chemical Biology, a group of scientists from McMaster University and the Massachusetts Institute of Technology have discovered a new antibiotic that can be used to kill a deadly hospital superbug. The superbug in question is Acinetobacter baumannii, which the World Health Organization has classified as a "critical" threat among its "priority pathogens" – a group of bacteria families that pose the "greatest threat" to human health. According to the WHO, the bacteria have built-in abilities to find new ways to resist treatment and can pass along genetic material that allows other bacteria to become drug-resistant as well. A. baumannii poses a threat to hospitals, nursing homes and patients who require ventilators and blood catheters, as well as those who have open wounds from surgeries.