existential threat
AI Must Not Be Fully Autonomous
Tosin Adewumi, Lama Alkhaled, Florent Imbert, Hui Han, Nudrat Habib, Karl Löwenmark
Autonomous Artificial Intelligence (AI) has many benefits, but it also carries many risks. In this work, we identify three levels of autonomous AI. We take the position that AI must not be fully autonomous because of these risks, especially as artificial superintelligence (ASI) is speculated to be just decades away. Fully autonomous AI, which can develop its own objectives, sits at level 3 and operates without responsible human oversight. Such oversight, however, is crucial for mitigating the risks. To argue for our position, we discuss theories of autonomy, AI, and agents. We then offer 12 distinct arguments and 6 counterarguments with rebuttals. We also present 15 pieces of recent evidence of misaligned AI values and other risks in the appendix.
- Europe > Sweden > Norrbotten County > Luleå (0.40)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- (4 more...)
- Law (1.00)
- Information Technology > Security & Privacy (0.67)
- Government > Military (0.46)
- Government > Regional Government (0.46)
The End of Publishing as We Know It
When tech companies first rolled out generative-AI products, some critics immediately feared a media collapse. Every bit of writing, imagery, and video became suspect. But for news publishers and journalists, another calamity was on the horizon. Chatbots have proved adept at keeping users locked into conversations. They do so by answering every question, often through summarizing articles from news publishers.
- Law (1.00)
- Information Technology (0.91)
- Media > News (0.70)
AI data scrapers are an existential threat to Wikipedia
Wikipedia is one of the greatest knowledge resources ever assembled, containing crowdsourced contributions from millions of humans worldwide – and it faces a growing threat from artificial intelligence developers. The non-profit Wikimedia Foundation, which operates Wikipedia, says since January 2024 it has seen a 50 per cent increase in network traffic requesting image and video downloads from its catalogue. That surge mostly comes from automated data scraper programs, which developers use to collect training data for their AI models.…
Where has the left's technological audacity gone? (Leigh Phillips)
Techno-optimism – the belief that technology will usher in a golden age for humanity – is in vogue once more. In 2022, a clutch of pseudonymous San Francisco artificial intelligence (AI) scenesters published a Substack post entitled "Effective Accelerationism", which argued for maximum acceleration of technological advancement. The 10-point manifesto, which proclaimed that "the next evolution of consciousness, creating unthinkable next-generation lifeforms and silicon-based awareness" was imminent, quickly went viral, as did follow-up posts. Effective accelerationism, or "e/acc", exploded from being a fringe movement dedicated to pushing back against AI extinction-fearing "doomers" to being namechecked by major Silicon Valley CEOs such as Garry Tan, the CEO of start-up accelerator Y Combinator; Sam Altman, head of OpenAI; Marc Andreessen, the billionaire software engineer; and Elon Musk. In 2023, Andreessen issued his Techno-Optimist Manifesto, expanding beyond the e/acc's focus on AI to encompass all questions of technological progress.
- North America > United States > California > San Francisco County > San Francisco (0.24)
- Asia > Japan > Honshū > Kansai > Kyoto Prefecture > Kyoto (0.04)
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
- Health & Medicine > Therapeutic Area (1.00)
- Law (0.95)
- (2 more...)
'Godfather of AI' shortens odds of the technology wiping out humanity over next 30 years
The British-Canadian computer scientist often touted as a "godfather" of artificial intelligence has shortened the odds of AI wiping out humanity over the next three decades, warning the pace of change in the technology is "much faster" than expected. Prof Geoffrey Hinton, who this year was awarded the Nobel prize in physics for his work in AI, said there was a "10% to 20%" chance that AI would lead to human extinction within the next three decades. Previously Hinton had said there was a 10% chance of the technology triggering a catastrophic outcome for humanity. Asked on BBC Radio 4's Today programme if he had changed his analysis of a potential AI apocalypse and the one in 10 chance of it happening, he said: "Not really, 10% to 20%." Hinton's estimate prompted Today's guest editor, the former chancellor Sajid Javid, to say "you're going up", to which Hinton replied: "If anything. You see, we've never had to deal with things more intelligent than ourselves before."
- Media > Radio (0.57)
- Leisure & Entertainment (0.57)
- Law > Statutes (0.37)
Canva Revolutionized Graphic Design. Will It Survive the Age of AI?
Design platform Canva launched in 2013 with the aim of democratizing visual creation through features like templates and drag-and-drop graphics. It focused on ease, offering a design suite less daunting for nonprofessionals than tools like Adobe's Photoshop, and simplified access with a web platform and freemium model. Since then, the Sydney-headquartered company has grown to 170 million monthly active users and an 11-figure valuation. But with the advent of generative AI, it's having to innovate to keep its place. Cofounder and CEO Melanie Perkins insists she never saw AI as an existential threat and is excited to embrace it: this year, Canva acquired the text-to-image generator Leonardo.ai.
Is AI Really an Existential Threat to Humanity?
Blaise Agüera y Arcas speaks at the Aspen Ideas Festival. Artificial intelligence, we have been told, is all but guaranteed to change everything. Often, it is foretold as bringing a series of woes: "extinction," "doom," AI "killing us all." US lawmakers have warned of potential "biological, chemical, cyber, or nuclear" perils associated with advanced AI models, and a study commissioned by the State Department on "catastrophic risks" urged the federal government to intervene and enact safeguards against the weaponization and uncontrolled use of this rapidly evolving technology. Employees at some of the main AI labs have made their safety concerns public, and experts in the field, including the so-called "godfathers of AI," have argued that "mitigating the risk of extinction from AI" should be a global priority. Advancements in AI capabilities have heightened fears of the possible elimination of certain jobs and the misuse of the technology to spread disinformation and interfere in elections.
Shaping New Norms for AI
It is likely that 2023 will be remembered as the year of Artificial Intelligence (AI). ChatGPT [2] was the fastest internet service to reach 100 million users to date (May 2023) [3], and the technology of Large Language Models (LLMs) at its core is a fundamental element of sister apps for images such as DALL-E 2, Midjourney and many others. One of the most fascinating aspects of LLMs is that they exhibit unpredicted emergent features. While the media excitedly reported how AI art generators have developed their own taste [4] or how chatbots are able to pass school-level exams in a growing number of disciplines [5], only in 2023 was it revealed that, for the past two years, GPT models had consistently improved their performance on tests designed to measure theory of mind in children [6]. For anyone familiar with complexity science, observing emergent properties in a complex system made of billions of artificial neurons is perhaps not surprising, but the growth in human-, or even superhuman-, like capabilities has attracted huge attention from the media and the public, sparking a hectic debate between the apocalyptic and the integrated [7]. While it is clear that AI could bring us spectacular benefits, from better medical diagnosis to drug discovery, the risks have so far catalysed most of the public attention. Perils associated with narrow AI include increasing opportunities for manipulation of people, enhancing and dehumanising weapons, and rendering human labour increasingly obsolescent [8]. On the other hand, self-improving "artificial general intelligence" (AGI) could pose an existential threat to humanity itself.
- Europe > United Kingdom > England > Greater London > London (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Italy (0.04)
- (5 more...)
- Transportation (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (0.94)
- Government > Regional Government (0.68)
- North America > United States > Illinois > Cook County > Chicago (0.26)
- North America > United States > New Mexico > Los Alamos County > Los Alamos (0.05)
- North America > United States > New York (0.05)
- Asia > Japan (0.05)
The latest industry upset with the use of AI: Fashion
New York City, USA – Last week, the fashion world descended on New York City for New York Fashion Week (NYFW). The biannual event celebrated the best in the industry and showcased the hottest trends for the season. NYFW is a massive moneymaker for the city and the fashion industry at large; on average, the event brings in a staggering $600m annually. But regardless of the stark economic and cultural value the event brings, it is overshadowed by the same existential threat hitting sectors like media and tech – artificial intelligence eroding existing jobs and limiting work opportunities in the future.
- North America > United States > New York (0.69)
- South America > Brazil (0.05)
- Asia > Singapore (0.05)
- Information Technology > Artificial Intelligence > Applied AI (0.41)
- Information Technology > Communications > Social Media (0.31)