Why AI will never rule the world

#artificialintelligence

Call it the Skynet hypothesis, Artificial General Intelligence, or the advent of the Singularity -- for years, AI experts and non-experts alike have fretted over (and, for a small group, celebrated) the idea that artificial intelligence may one day become smarter than humans. According to the theory, advances in AI -- specifically of the machine learning type that's able to take in new information and rewrite its code accordingly -- will eventually catch up with the wetware of the biological brain. In this interpretation of events, every AI advance, from Jeopardy-winning IBM machines to the massive AI language model GPT-3, is taking humanity one step closer to an existential threat. Except that it will never happen. Co-authors Barry Smith, a philosophy professor at the University at Buffalo, and Jobst Landgrebe, founder of the German AI company Cognotekt, argue that human intelligence won't be overtaken by "an immortal dictator" any time soon -- or ever.


Why Machines will Never Rule the World

#artificialintelligence

The book's core argument is that an artificial intelligence that could equal or exceed human intelligence--sometimes called artificial general intelligence (AGI)--is impossible for mathematical reasons. In supporting this claim, the authors, Jobst Landgrebe and Barry Smith, marshal evidence from mathematics, physics, computer science, philosophy, linguistics, and biology, structuring their book around three central questions: What are the essential marks of human intelligence? What is it that researchers try to do when they attempt to achieve "artificial intelligence" (AI)? And why, after more than 50 years, are our most common interactions with AI, for example with our bank's computers, still so unsatisfactory? Landgrebe and Smith show how a widespread fear about AI's potential to bring about radical changes in the nature of human beings and in the human social order is founded on an error.


Robots can never rule the world - Why?

#artificialintelligence

Is it even valid to assume that robots will be evil in the future and would seek to control humans? It is likely that in the future we will see different types of intelligent robots with different allegiances (just like human beings). AI is already being developed by many countries and tech companies. Thus, robots allied with different human groups may fight each other, but there is no chance that all robots will unite against all humans. Indeed, it is not even certain that robots will fight at all.


The epic robot fails that say AI will never rule the world

@machinelearnbot

WE ALL know how it ends: the machines rise up to enslave their puny masters. Robots and artificial intelligences may so far have confined themselves to blameless pursuits such as vacuum cleaning, beating us at board games and recommending products we might also like. But as they continue their inexorable rise, entering a "singularity" of runaway self-improvement, they will inevitably turn their attention to the robopocalypse. Stephen Hawking says AI could spell the end for humanity. Elon Musk thinks it could lead to World War Three. Vladimir Putin says whoever controls AI will control the world.