Elon Musk and many of the world's most respected artificial intelligence researchers have committed not to build autonomous killer robots. The public pledge not to make any "lethal autonomous weapons" comes amid increasing concern about how machine learning and AI will be used on the battlefields of the future. The signatories to the new pledge – who include the founders of DeepMind, a founder of Skype, and leading academics from across the industry – promise that they will not allow the technology they create to be used to help create killing machines.
The topic of AI has been a primary focus for Intel's Brian Krzanich as he works to expand the chipmaker's scope from PCs to the next generation of technology breakthroughs. Intel's chief executive will be joining us on stage at TechCrunch Disrupt San Francisco 2017 in September to discuss the company's recent massive investments in AI, from multibillion-dollar acquisitions to the formation of the Artificial Intelligence Products Group, which reports directly to Krzanich. Intel's CEO has been extremely bullish about forward-facing technologies since taking the helm in 2013. Along with AI, under Krzanich's watch the silicon juggernaut has become a leader in developing the underlying technologies that power 5G networks, self-driving cars, drones and cloud computing. It marks a strong contrast with the Intel Krzanich inherited as chief, which was still reeling from a failure to fully embrace mobile.
The technology industry is facing up to the world-shaking ramifications of artificial intelligence. There is now a recognition that AI will disrupt how societies operate, from education and employment to how data will be collected about people. Machine learning, a form of advanced pattern recognition that enables machines to make judgments by analysing large volumes of data, could greatly supplement human thought. But such soaring capabilities have stirred almost Frankenstein-like fears about whether developers can control their creations. Failures of autonomous systems – like the death last year of a US motorist in a partially self-driving car from Tesla Motors – have led to a focus on safety, says Stuart Russell, a professor of computer science and AI expert at the University of California, Berkeley.
Movies like Blade Runner and Her have popularised the idea of fully conscious computers, and with AI (Artificial Intelligence) technology like Apple's Siri or Amazon's Alexa increasingly present in our lives, it'd be easy to believe that what you see on the silver screen is just around the corner. Whilst I enjoy a sci-fi epic as much as the next person, in my dual role as Professor of Computer Science at the University of San Francisco and Chief Scientist at data integration software provider SnapLogic, I investigate the practical applications of AI and am tasked with explaining and teaching the realities of what can be achieved. In other words, I separate the fact from the fiction, which is what I aim to do today. What many people call AI is actually a subfield called machine learning (ML). It's not self-aware or able to generate original thoughts.
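To ground the distinction between science-fiction AI and the machine learning described above, here is a minimal sketch of what "learning from data" actually means: a 1-nearest-neighbour classifier in Python. The fruit measurements and labels are invented purely for illustration; real systems use far larger datasets and richer models, but the principle – judging new inputs by the patterns in past examples – is the same.

```python
# A minimal sketch of pattern recognition from data: a 1-nearest-
# neighbour classifier. The "training" data below is made up for
# illustration only.
def nearest_neighbour(train, query):
    """Return the label of the training example closest to `query`."""
    def sq_dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda item: sq_dist(item[0], query))[1]

# Toy examples: (weight in grams, diameter in cm) -> fruit label.
train = [
    ((150, 7.0), "apple"),
    ((170, 7.5), "apple"),
    ((120, 6.0), "orange"),
    ((110, 5.8), "orange"),
]

# A new, unlabelled measurement is classified by its closest example.
print(nearest_neighbour(train, (160, 7.2)))  # -> apple
```

Nothing here is self-aware: the program simply memorises examples and measures distances, which is why practitioners describe ML as advanced pattern recognition rather than artificial consciousness.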
Elon Musk wears many hats. So it's no surprise that over the course of an interview at California's Code Conference, Musk revealed a number of things we didn't know before. Musk is no stranger to the work of philosopher Nick Bostrom, who has warned before that superintelligent AI might wipe out humanity. Musk cited that fear as a reason for investing in AI company DeepMind before it was bought by Google. But now he's introduced the world to another concept popularised by Bostrom: the simulation problem.