Artificial Intelligence System

CES 2017: Why your home and car will soon talk to each other


Your car may already feel like an extension of your home, given the hodgepodge of stuff collecting in the back seat, but technology will soon make the two more connected than ever. No one has proposed blurring the line more than Hyundai, which last week unveiled a futuristic concept car that literally connects to the home through a hole in the wall. Hyundai envisions the car becoming a lounge-like extension of the living space that provides air conditioning and entertainment and acts as a back-up generator. "By seamlessly blending features from the car with home and work environments, the user experience is uninterrupted whether socializing, working at home, or on the move," Hak Su Ha, Hyundai's design center director, said in a news release. But even if cars never plug into the home physically, they will digitally.

AI: The Future is Now


Comedian TJ Miller, of HBO's Silicon Valley, performs a standup routine about an entertaining yet extremely terrifying period in which he suffered a life-threatening brain malformation. He was in the middle of pitching a movie idea when he collapsed to the floor in a seizure and was rushed to the hospital. He goes on to explain that he had suffered an arteriovenous malformation (AVM) hemorrhage, essentially an abnormal connection between the veins and arteries. When Miller awoke from his coma in the Cedars-Sinai ICU neurology ward, he found a nurse standing over him saying, "Your doctor cannot be here, but a proxy will be here in just a bit." He was then given little to no information about his condition.

Mark Zuckerberg's artificial intelligence system


At home with Mark Zuckerberg and Jarvis, the AI assistant he built for his family. Facebook's CEO still loves to code. Here's an exclusive peek at his new project, which plays music, makes toast, and occasionally annoys his wife.

MIT's latest breakthrough? Getting AIs to explain their decisions


Artificial intelligence systems are increasingly running more and more of our world and its digital fabric, but in many cases just how they make their decisions is a "black box". This research aims to develop a new AI architecture that will help AIs explain their decisions.

Whether you like it or not, artificial intelligence (AI) is here to stay, and it is inevitably going to play a greater role in all of our lives, whether that's as benign as helping you optimise your route to work in the morning, or as important as diagnosing disease, controlling autonomous vehicles and warships, creating new encryption schemes, or running the global financial system. And there's no denying that brain-inspired deep-learning neural networks have proven capable of making significant advances in a number of AI-related fields over the past decade. But, like us, they're not perfect, and we've seen time and time again how AI systems can quickly become biased, sexist and even racist.

At the moment it could be argued that it is easy enough for us to take these "flawed" AIs offline, in a similar way to how Microsoft took its Hitler-loving chatbot offline earlier this year, and Google is even developing an AI "kill switch". But one day it is going to get harder for us to disentangle these AIs from the infrastructure of the internet, and all-powerful AIs that will one day control energy grids and air traffic control systems, to say nothing of everything else they'll be plugged into, yet cannot explain how they came to a decision, are already unnerving a lot of people. Just how AIs "do their thing" is increasingly becoming a black box, one that even the systems' designers don't understand, and this is already becoming a problem.
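One common way researchers probe a black-box model is to perturb its inputs and watch how the output moves. The sketch below is purely illustrative and is not the MIT architecture described above: the "model" is a stand-in weighted sum, and the feature names are invented for the example.

```python
# Illustrative sketch only: estimating which features drove a black-box
# model's decision by perturbing them one at a time. The model and the
# feature names ("income", "debt", "age") are hypothetical.

def black_box_score(features):
    # Stand-in for an opaque model: in reality we could not see inside.
    weights = {"income": 0.6, "debt": -0.8, "age": 0.1}
    return sum(weights[name] * value for name, value in features.items())

def explain(features):
    """Estimate each feature's influence by zeroing it out and
    measuring how much the model's score changes."""
    baseline = black_box_score(features)
    influence = {}
    for name in features:
        perturbed = dict(features, **{name: 0.0})
        influence[name] = baseline - black_box_score(perturbed)
    return influence

applicant = {"income": 1.0, "debt": 0.5, "age": 0.3}
print(explain(applicant))
# Each value shows how much that feature pushed the final score
# up or down, which is one crude form of "explanation".
```

Techniques like this only approximate an explanation from the outside; the research discussed here aims to build models that can account for their decisions directly.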

Vatican weighs in on power, limits of artificial intelligence


Vatican City, Dec 4, 2016 / 03:03 am (CNA/EWTN News).- This week the Vatican hosted a high-level discussion in the world of science, gathering experts to discuss the progress, benefits and limits of advances in artificial intelligence. The conference drew experts in various fields of science and technology for a two-day dialogue on the "Power and Limits of Artificial Intelligence," hosted by the Pontifical Academy of Sciences. Among the scheduled speakers were several prestigious scientists, including Stephen Hawking, a prominent British professor at the University of Cambridge and a self-proclaimed atheist, as well as a number of leading tech figures such as Demis Hassabis, CEO of Google DeepMind, and Yann LeCun of Facebook. The event, which ran from Nov. 30-Dec.

Artificial Intelligence That Can Learn Is A Step Closer To Reality


There are many companies working on artificial intelligence. The vast majority of these are looking at how artificial intelligence systems can recognise things and tell them apart – differentiate between an apple and a mango for example. But what do these artificial intelligence systems do with that information once they've perfected a method of gathering and defining it? This is where decision trees come in. Decision trees are used to map various consequences and possible actions an artificial intelligence system can take.
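The decision-tree idea described above can be sketched in a few lines: each branch tests one feature and leads either to another test or to a final label. This is a minimal hand-rolled illustration, not any company's production system, and the features and thresholds are invented for the apple-versus-mango example.

```python
# A minimal, hand-rolled decision tree: an illustrative sketch of the
# idea described above. Feature names and thresholds are invented.

def classify(fruit):
    """Walk a tiny decision tree: each branch tests one feature and
    leads either to another test or to a leaf (a final label)."""
    if fruit["skin"] == "smooth":
        if fruit["weight_g"] > 250:
            return "mango"   # leaf: large smooth fruit
        return "apple"       # leaf: small smooth fruit
    return "mango"           # leaf: rough-skinned fruit in this toy set

print(classify({"skin": "smooth", "weight_g": 150}))  # apple
print(classify({"skin": "smooth", "weight_g": 300}))  # mango
```

Real systems learn both the tests and their order from data rather than hard-coding them, but the resulting structure, a cascade of feature tests mapping to actions or labels, is the same.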

OT: Google Quickdraw


Just inaugurating my off-topic section with this post, which doesn't talk about virtual reality or startup life. Today I just want to talk about my little experience with Google Quickdraw. Google Quickdraw is like playing Draw Something (or Pictionary, for those who, like me, spent some entertaining evenings playing the board game), but against an Artificial Intelligence system… the one of Google… and you know how smart Google is with neural networks. The game is pretty simple: you are given 6 words, one at a time, and you have to draw each one so that the AI will understand what you're drawing. You have only 20 seconds, but usually the system is so smart that it detects what you're drawing in just a few seconds!

Google's 'Show and Tell' AI can tell you what's in a photo with nearly 94% accuracy

Daily Mail

Artificial intelligence systems have recently begun to try their hand at writing picture captions, often producing hilarious, and even offensive, blunders. But Google's Show and Tell algorithm has almost perfected the craft. According to the firm, the AI can now describe images with nearly 94 percent accuracy and may even 'understand' the context and deeper meaning of a scene. Google has released the open-source code for its image captioning system, allowing developers to take part, the firm revealed on its research blog.

Alphabet Inc (GOOGL) Ensures AI Safety With Implementation Of 5 Golden Rules


In a blog post, Google researcher Chris Olah stated five ways for the company to ensure that AI systems never pose a threat to the human race. The first rule is "Avoiding Negative Side Effects," which means that an artificial intelligence should complete its tasks as it was designed to and not disturb its environment in the process. The second rule is "Avoiding Reward Hacking," which means that artificial intelligence systems should complete their tasks properly and not look for shortcuts that game the reward. The tech giant has outlined five unsolved challenges that it will tackle in order to perfect its artificial intelligence systems and make any future domestic robot safe.
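The "avoiding negative side effects" idea can be sketched as a reward signal that subtracts a penalty for any change the agent makes to parts of the environment unrelated to its task. This is a toy illustration under invented assumptions, not Google's actual formulation; all names and numbers here are hypothetical.

```python
# Illustrative sketch of "avoiding negative side effects": the task
# reward is discounted by a penalty for each unrelated environment
# variable the agent disturbed. Purely hypothetical, not Google's
# real safety framework.

def shaped_reward(task_reward, env_before, env_after, penalty_weight=1.0):
    """Subtract a penalty proportional to how many environment
    variables changed while the agent was earning its task reward."""
    side_effects = sum(
        1 for key in env_before if env_before[key] != env_after[key]
    )
    return task_reward - penalty_weight * side_effects

# An agent that earns reward 10 but knocks over a vase on the way:
before = {"vase": "upright", "door": "closed"}
after = {"vase": "broken", "door": "closed"}
print(shaped_reward(10, before, after))  # 9.0
```

A penalty like this also discourages one kind of reward hacking: a shortcut that trashes the environment to finish faster now costs more than doing the task as designed.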