An old Chinese proverb says, "The best time to plant a tree was 20 years ago. The second-best time is now." This seems to be the thinking of some very smart people when it comes to protecting humanity from the possible dangers of artificial intelligence (AI). Sure, it might be 20, 50 or even 100 years before AI becomes more intelligent than humans, posing an existential problem for today's sapiens. But luminaries like Elon Musk, Bill Gates and the late Stephen Hawking have warned that failing to prepare for this eventuality now would guarantee our demise in the decades to come.
During her freshman year, Stephanie Tena, a 16-year-old programmer, was searching the internet for coding programs and came across a website for an organization called AI4All, which runs an artificial-intelligence summer camp for high-schoolers. On the site, a group of girls her age was gathered around an autonomous car in front of the iconic arches of Stanford's campus. "AI will change the world," the text read. "Who will change AI?" Tena thought maybe she could. She lives in a trailer park in California's Central Valley; her mom, a Mexican immigrant from Michoacán, picks strawberries in the nearby fields.
Frankfurt Airport's concierge robot is one of Furhat's creations; the company also helped develop a pilot program in Swedish schools and partnered with Honda on the development of a "smart" care home. "Human beings are social creatures and hardwired to respond emotionally to almost everything around us," says co-founder and chief executive Samer Al Moubayed. "Building a social robot has the promise to tap into the emotional intelligence system we have built-in and to communicate with us on our own terms, so that robots can engage with us in much more fundamental and impactful ways."
Artificial Intelligence, or AI, is empowering people with physical disabilities, allowing them to take charge of their own lives, but it's also having a surprising impact on people with neuro-diverse conditions like autism. It's easy to generalise about people on the autism spectrum: they like consistency, take things literally and value routine. AI systems, for their part, are built to provide consistency; they don't (yet) understand sarcasm and they like logic, a lot. But it's important to remember that although people on the autism spectrum will share certain difficulties, everyone's experience of the condition will be very different.
The hysteria about the future of artificial intelligence (AI) is everywhere. There seems to be no shortage of sensationalist news about how AI could cure diseases, accelerate human innovation and improve human creativity. Just looking at the media headlines, you might think that we are already living in a future where AI has infiltrated every aspect of society. While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". This is the philosophy that, given enough data, machine learning algorithms can solve all of humanity's problems.
From Daniel Eran Dilger's "More companies need to temper their Artificial Intelligence with authentic ethics": When Apple outlined that its new HomePod didn't initiate phone calls on its own, nobody jumped to the conclusion that this was because having a device in your home that anyone's voice could use to place a telephone call from your personal mobile number might be a bad idea. Instead, the company was generally lambasted for "again" failing to match one of the many features of Amazon's Alexa Echo always-listening appliances. This week, Alexa got famous for recording a private conversation and automatically sending it to a random contact of the owner. That's something HomePod doesn't do, not because Apple doesn't know how, but because Apple chose not to rush to make it possible to do things that might not be a good idea in the long run. My take: Does HomePod really know how to initiate a phone call?
Eric Schmidt, the former chairman of Google's parent company Alphabet and now its technical adviser, on Friday joined the list of people who oppose Tesla and SpaceX CEO Elon Musk's views about the future of artificial intelligence. Musk has warned that AI, if unregulated, will eventually become an existential threat to humanity, and his opinion has both famous supporters, like the late Stephen Hawking, and dissenters like Schmidt. Speaking at the VivaTech conference in Paris on Friday, Schmidt responded to a question about Musk's dire warnings about AI. "I think Elon is exactly wrong. The fact of the matter is that AI and machine learning are so fundamentally good for humanity," Schmidt said, adding that he shared Musk's concerns about the potential for misuse of the technology.
Every day brings considerable AI news, from breakthrough capabilities to dire warnings. A quick read of recent headlines shows both: an AI system that claims to predict dengue fever outbreaks up to three months in advance, and an opinion piece from Henry Kissinger arguing that AI will end the Age of Enlightenment. Then there's the father of AI who doesn't believe there's anything to worry about. Meanwhile, Robert Downey, Jr. is in the midst of developing an eight-part documentary series about AI to air on Netflix. AI is more than just "hot"; it's everywhere.