Elon Musk


Elon Musk is right: we should all be worried about killer robots

#artificialintelligence

Tesla and SpaceX CEO Elon Musk, along with 115 other artificial intelligence and robotics specialists, has signed an open letter urging the United Nations to recognize the dangers of lethal autonomous weapons and to ban their use internationally. There are already numerous weapons, like automatic anti-aircraft guns and drones, that can operate with minimal human oversight; advanced tech will eventually help them to carry out military functions entirely autonomously. To illustrate why this is a problem, consider the argument the UK government made when it opposed a ban on lethal autonomous weapons in 2015: it said that "international humanitarian law already provides sufficient regulation for this area," and that all weapons employed by UK armed forces would be "under human oversight and control." I signed the open letter because the use of AI in autonomous weapons offends my sense of ethics, because it would be likely to lead to a very dangerous escalation, because it would harm the further development of AI's beneficial applications, and because it is a matter that needs to be handled by the international community, as has been done in the past for other morally abhorrent weapons (biological, chemical, nuclear).


Artificial Intelligence Explained

#artificialintelligence

The scope of Artificial Intelligence is much broader, including technologies like Virtual Agents, Natural Language Processing, Machine Learning Platforms and many others. The main focus at GE is on making machines smarter, leveraging machine learning to create "digital twins" – a digital replica, or data-based representation, of an industrial machine. Unfortunately, Salesforce's Connected Small Business Report notes that only 21% of small businesses are currently using business intelligence and analytics. The world's top technology leaders Stephen Hawking and Elon Musk are on the sceptical side of this debate, while Microsoft, Apple, Google and many others are already eagerly taking advantage of AI technology.
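As a rough illustration of the "digital twin" idea, here is a minimal sketch of a data-based stand-in for an industrial pump: it accumulates the machine's sensor readings and exposes a simple derived health indicator. The class name, fields, and thresholds are hypothetical, not GE's actual software.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class PumpTwin:
    """Hypothetical digital twin: a data-based mirror of one physical pump."""
    asset_id: str
    temperature_c: list = field(default_factory=list)
    vibration_mm_s: list = field(default_factory=list)

    def ingest(self, temperature_c: float, vibration_mm_s: float) -> None:
        """Record one reading streamed from the physical machine."""
        self.temperature_c.append(temperature_c)
        self.vibration_mm_s.append(vibration_mm_s)

    def health_score(self) -> float:
        """Crude 0-1 health estimate from average temperature and vibration."""
        if not self.temperature_c:
            return 1.0
        temp_penalty = max(0.0, mean(self.temperature_c) - 70.0) / 30.0
        vib_penalty = max(0.0, mean(self.vibration_mm_s) - 4.5) / 5.0
        return max(0.0, 1.0 - temp_penalty - vib_penalty)

twin = PumpTwin("pump-17")
twin.ingest(72.5, 3.1)
twin.ingest(78.0, 5.2)
print(round(twin.health_score(), 2))
```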


Elon Musk: Artificial intelligence battle 'most likely cause' of WWIII

#artificialintelligence

Elon Musk says the global race for artificial intelligence will cause World War III. A race for "superiority" in artificial intelligence between countries will be the most likely cause of World War III, warns entrepreneur Elon Musk. The conflict, he suggested, may be initiated not by national leaders but by one of the AIs itself, if it decides that a preemptive strike is the most probable path to victory. Musk has emerged as a vocal advocate of AI safety, seeking ways for governments to regulate the technology before it gets out of control. Last month, Musk warned that the risks posed by AI are greater than the threat of nuclear war with North Korea.


Automation Nightmare: Philosopher Warns We Are Creating a World Without Consciousness

#artificialintelligence

Recently, a conference on artificial intelligence, tantalizingly titled "Superintelligence: Science or Fiction?", was hosted by the Future of Life Institute, which works to promote "optimistic visions of the future". The conference offered a range of opinions on the subject from a variety of experts, including Elon Musk of Tesla Motors and SpaceX, futurist Ray Kurzweil, Demis Hassabis of Google's DeepMind, neuroscientist and author Sam Harris, philosopher Nick Bostrom, philosopher and cognitive scientist David Chalmers, Skype co-founder Jaan Tallinn, as well as computer scientists Stuart Russell and Bart Selman. The discussion was led by MIT cosmologist Max Tegmark. The conversation centered on the future benefits and risks of artificial superintelligence, with the participants generally agreeing that it is only a matter of time before AI becomes paramount in our lives, eventually surpassing human intelligence, with all the risks and transformations that entails.


Elon Musk's OpenAI has unveiled an unusual approach to building smarter machines

#artificialintelligence

In 2013 a British artificial-intelligence startup called DeepMind surprised computer scientists by showing off software that could learn to play classic Atari games better than an expert human player. DeepMind was soon acquired by Google, and the technique that beat the Atari games, reinforcement learning, has become a hot topic in the field of AI and robotics. Google used reinforcement learning to create software that beat a champion Go player last year. Now OpenAI, a nonprofit research institute cofounded and funded by Elon Musk, says it has discovered that an easier-to-use alternative to reinforcement learning can achieve comparable results at playing games and performing other tasks. At MIT Technology Review's EmTech Digital conference in San Francisco on Monday, OpenAI's research director, Ilya Sutskever, said this could allow researchers to make progress in machine learning faster.
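The alternative OpenAI published around this time, under the name evolution strategies, works by randomly perturbing an agent's parameters, scoring each perturbed copy, and nudging the parameters toward the copies that score best, with no backpropagation through the agent. Below is a minimal sketch of that update rule on a toy objective; the reward function and hyperparameters are illustrative stand-ins, not OpenAI's code or benchmark.

```python
import numpy as np

# Toy objective: reward is higher the closer the parameters are to a fixed target
# (a hypothetical stand-in for a game score).
TARGET = np.array([0.5, -0.3, 0.8])

def reward(params):
    return -np.sum((params - TARGET) ** 2)

def evolution_strategies(steps=300, population=50, sigma=0.1, lr=0.02, seed=0):
    rng = np.random.default_rng(seed)
    params = np.zeros(3)
    for _ in range(steps):
        # Sample random perturbations and score each perturbed copy of the parameters.
        noise = rng.standard_normal((population, params.size))
        rewards = np.array([reward(params + sigma * n) for n in noise])
        # Normalize the scores and move the parameters toward the better-scoring copies.
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        params += lr / (population * sigma) * noise.T @ rewards
    return params

print(evolution_strategies())  # should end up close to TARGET
```

Because each perturbed copy can be evaluated independently, this kind of loop parallelizes across many machines with very little communication, which is part of why it is often described as easier to use than reinforcement learning.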



Intel Unveils Upcoming Xeon Phi Chip for AI Workloads

#artificialintelligence

What could possibly go wrong? Elon Musk's AI set to try and learn the art of human conversation
Intel's secretive Knights Mill mega-chip will challenge GPUs for AI domination



Devouring Reddit threads might help artificial intelligence understand language better

#artificialintelligence

The new machine, called a DGX-1, is optimized for the form of machine learning known as deep learning, which involves feeding data to a large network of crudely simulated neurons and has resulted in great strides in artificial intelligence in recent years. Language remains a very tricky problem for artificial intelligence, but researchers have recently made progress in applying deep learning to it (see "AI's Language Problem"). "This will allow us to train models on larger data sets, which we have found leads to progress in AI." OpenAI hopes to use reinforcement learning to build robots capable of performing useful chores around the home, although this may prove a time-consuming challenge (see "This Is the Robot Maid Elon Musk Is Funding" and "The Robot You Want Most Is Far from Reality").
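To make "feeding data to a large network of crudely simulated neurons" concrete, here is a minimal sketch of the idea at toy scale: a two-layer network of sigmoid units trained by gradient descent on the XOR function. It is an illustrative example only, not the DGX-1 workload or OpenAI's models described above.

```python
import numpy as np

# Toy training data: the XOR function, a classic task a single linear unit cannot fit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.standard_normal((2, 8))   # weights of the hidden layer of "simulated neurons"
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1))   # weights of the single output neuron
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: feed the data through the network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the mean squared error with respect to each weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

# Predictions should approach [0, 1, 1, 0] (convergence depends on the random init).
print(np.round(out, 2))
```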


This is why your fears about artificial intelligence are wrong

#artificialintelligence

Artificial intelligence will take over the world! "There's very smart people, whether it's Elon Musk or Jeff Bezos or Bill Gates or Stephen Hawking, who have said, 'Oh my gosh, this is really dangerous,'" Hawkins said. Hawkins stressed that Numenta is specifically trying to reverse engineer only part of the human brain: the neocortex, which is what lets us learn and create a model of the world based on our environments and experiences.