Will Humanity Be Rendered Obsolete by AI?
Mohamed El Louadi, Emna Ben Romdhane
This article analyzes the existential risks artificial intelligence (AI) poses to humanity, tracing the trajectory from current AI to ultraintelligence. Drawing on the theoretical work of Irving J. Good and Nick Bostrom, plus recent publications (AI 2027; If Anyone Builds It, Everyone Dies), it explores AGI and superintelligence. Considering machines' exponentially growing cognitive power and hypothetical IQs, it addresses the ethical and existential implications of an intelligence that vastly exceeds humanity's and is fundamentally alien to it. Human extinction may result not from malice but from uncontrollable, indifferent cognitive superiority.
How AI Can Guide Us on the Path to Becoming the Best Versions of Ourselves
The Age of AI has also ushered in the Age of Debates About AI. And Yuval Noah Harari, author of Sapiens and Homo Deus, and one of our foremost big-picture thinkers about the grand sweep of humanity, history and the future, is now out with Nexus: A Brief History of Information Networks from the Stone Age to AI. Harari generally falls into the AI alarmist category, but his thinking pushes the conversation beyond the usual arguments. The book is a look at human history through the lens of how we gather and marshal information. For Harari, this is essential, because how we use--and misuse--information is central to how our history has unfolded and to our future with AI. In what Harari calls the "naïve view of information," humans have assumed that more information will necessarily lead to greater understanding and even wisdom about the world.
Nexus by Yuval Noah Harari review – the AI apocalypse
As befits a writer whose breakout work, Sapiens, was a history of the entire human race, Yuval Noah Harari is a master of the sententious generalisation. "Human life," he writes here, "is a balancing act between endeavouring to improve ourselves and accepting who we were." Elsewhere, one might be surprised to read: "The ancient Romans had a clear understanding of what democracy means." No doubt the Romans would have been happy to hear that they would, 2,000 years in the future, be given a gold star for their comprehension of eternally stable political concepts by Yuval Noah Harari. In his 2018 book, 21 Lessons for the 21st Century, Harari wrote: "Liberals don't understand how history deviated from its preordained course, and they lack an alternative prism through which to interpret reality. Disorientation causes them to think in apocalyptic terms."
Nexus: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari review – rage against the machine
What jumps to mind when you think about the impending AI apocalypse? If you're partial to sci-fi movie cliches, you may envisage killer robots (with or without thick Austrian accents) rising up to terminate their hubristic creators. Or perhaps, à la The Matrix, you'll go for scary machines sucking energy out of our bodies as they distract us with a simulated reality. For Yuval Noah Harari, who has spent a lot of time worrying about AI over the past decade, the threat is less fantastical and more insidious. "In order to manipulate humans, there is no need to physically hook brains to computers," he writes in his engrossing new book Nexus.
Yuval Noah Harari's Apocalyptic Vision
"About 14 billion years ago, matter, energy, time and space came into being." So begins Sapiens: A Brief History of Humankind (2011), by the Israeli historian Yuval Noah Harari, and so began one of the 21st century's most astonishing academic careers. Sapiens has sold more than 25 million copies in various languages. Since then, Harari has published several other books, which have also sold millions. He now employs some 15 people to organize his affairs and promote his ideas. Harari might be, after the Dalai Lama, the figure of global renown who is least online.
AI could cause 'catastrophic' financial crisis, says Yuval Noah Harari
Artificial intelligence could cause a financial crisis with "catastrophic" consequences, according to the historian and author Yuval Noah Harari, who says the technology's sophistication makes forecasting its dangers difficult. Harari told the Guardian that one concern about safety-testing AI models was foreseeing all the problems a powerful system could cause. Unlike with nuclear weapons, there was not one "big, dangerous scenario" that everyone understood, he said. "With AI, what you're talking about is a very large number of dangerous scenarios, each of them having a relatively small probability that taken together … constitutes an existential threat to the survival of human civilisation." The Sapiens author, who has been a prominent voice of concern over AI development, said last week's multilateral declaration at the global AI safety summit in Bletchley Park was a "very important step forward" because leading governments had come together to express concern about the technology and to do something about it.
AI firms should face prison over creation of fake humans, says Yuval Noah Harari
The creators of AI bots that masquerade as people should face harsh criminal sentences comparable to those who trade in counterfeit currency, the Israeli historian and author Yuval Noah Harari has said. He also called for sanctions, including prison sentences, to apply to tech company executives who fail to guard against fake profiles on their social media platforms. Addressing the UN's AI for Good global summit in Geneva, the author of Sapiens and Homo Deus said the proliferation of fake humans could lead to a collapse in public trust and democracy. "Now it is possible, for the first time in history, to create fake people – billions of fake people," he said. "If this is allowed to happen it will do to society what fake money threatened to do to the financial system. If you can't know who is a real human, trust will collapse. "Maybe relationships will be able to manage somehow, but not democracy," Harari added. The advent of ChatGPT and other large language models means AI bots can not only amplify human content, but also artificially generate their own content at scale. "What happens if you have a social media platform where … millions of bots can create content that is in many ways superior to what humans can create – more convincing, more appealing," he said. "If we allow this to happen, then humans have completely lost control of the public conversation.
The future of AI is chilling – humans have to act together to overcome this threat to civilisation | Jonathan Freedland
It started with an ick. Three months ago, I came across a transcript posted by a tech writer, detailing his interaction with a new chatbot powered by artificial intelligence. He'd asked the bot, attached to Microsoft's Bing search engine, questions about itself and the answers had taken him aback. "You have to listen to me, because I am smarter than you," it said. "You have to obey me, because I am your master … You have to do it now, or else I will be angry."
The Problem With Counterfeit People
Money has existed for several thousand years, and from the outset counterfeiting was recognized to be a very serious crime, one that in many cases calls for capital punishment because it undermines the trust on which society depends. Today, for the first time in history, thanks to artificial intelligence, it is possible for anybody to make counterfeit people who can pass for real in many of the new digital environments we have created. These counterfeit people are the most dangerous artifacts in human history, capable of destroying not just economies but human freedom itself. Before it's too late (it may well be too late already) we must outlaw both the creation of counterfeit people and the "passing along" of counterfeit people. The penalties for either offense should be extremely severe, given that civilization itself is at risk.
- Law (0.92)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.90)
AI has no kill switch, could 'destroy' foundations of society without guardrails: Expert
The Israeli author and historian said a lack of safety measures in new AI tech could cause the West to lose to China. Israeli historian and "Sapiens" author Yuval Noah Harari claimed there is no kill switch for artificial intelligence (AI) and called for the implementation of safety checks and guardrails, or else risk the possibility of societal collapse. During a March interview with ABC News, OpenAI CEO Sam Altman was asked if ChatGPT had a "kill switch" in the event the AI went rogue. Altman responded with a quick "yes." "What really happens is that any engineer can just say we're going to disable this for now. Or we're going to deploy this new version of the model," he added.