Google's AI chief says forget Elon Musk's killer robots, and worry about bias in AI systems instead

#artificialintelligence

Google's AI chief isn't fretting about super-intelligent killer robots. Instead, John Giannandrea is concerned about the danger that may be lurking inside the machine-learning algorithms used to make millions of decisions every minute. "The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased," Giannandrea said before a recent Google conference on the relationship between humans and AI systems. The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries, and that almost no one is making an effort to identify or correct it (see "Biased Algorithms Are Everywhere, and No One Seems to Care").
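
To make the quoted concern concrete, here is a minimal, hypothetical sketch (not Google's method or any specific system) of the kind of audit that can surface bias a model has absorbed from skewed training data: compare error rates across groups rather than looking only at overall accuracy. The group labels and records are invented for illustration.

```python
# Illustrative sketch only: per-group error rates as a simple bias check.
from collections import defaultdict

def per_group_error_rate(records):
    """records: iterable of (group, true_label, predicted_label)."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        if y_true != y_pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

if __name__ == "__main__":
    # Hypothetical audit data: (group, actual outcome, model decision).
    records = [
        ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
        ("B", 1, 0), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
    ]
    print(per_group_error_rate(records))
    # {'A': 0.0, 'B': 0.5} -- a large gap between groups is a red flag
    # that the model has learned bias present in its training data.
```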


The Future of Productivity: AI and Machine Learning

#artificialintelligence

I wanted to know what the future of artificial intelligence in project management would look like, so I reached out to founders, productivity experts and futurists who work in this space every day to ask for their predictions for the next five and ten years. One recurring theme: "We need to think of productivity systems as supporting systems for our decision process." Mark Mader, CEO at Smartsheet, thinks that picturing AI as roving robots misses the point, saying, "Looking further out, there's no doubt that automation -- don't think robots, think removing mundane and unproductive work steps from your day -- will increase." On decision support, productivity expert Carl Pullein expects that "machine learning and artificial intelligence [will move] towards creating productivity tools that can schedule your meetings and tasks for you and to be able to know what needs to be done based on your context, where you are and what needs to be done."


Rise of the robot workforce: Machine learning to shake up traditional employment - BizNews.com

#artificialintelligence

The so-called 'fourth industrial revolution' is set to displace many career fields, introducing what could be the best of times and the worst of times for workers, according to Dr. Roze Phillips, Managing Director for Accenture Consulting. "We used to think it's just a low-level administrative task; now machines are beating humans at Jeopardy, and they're beating humans at poker. So, be digital, embrace digital, embrace connectivity, use the data that is now available, and absolutely embrace learning, because that's how you're going to be change strong. I believe the World Economic Forum, to be honest, is diversity in motion, that's what I call it."


When to Trust Robots with Decisions, and When Not To

#artificialintelligence

Moving to the right of the predictability spectrum, credit card fraud detection and spam filtering are more predictable problems, but current-day systems still generate significant numbers of false positives and false negatives. Consider two of the relatively higher-predictability problems mentioned earlier, spam filtering and driverless cars. In contrast, above the frontier, even the best current diabetes prediction systems still generate too many false positives and false negatives, each with a cost that is too high to justify purely automated use. On the other hand, the availability of genomic and other personal data could improve prediction accuracy dramatically and create trustworthy robotic healthcare professionals in the future.
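
To make the false-positive/false-negative trade-off concrete, here is a minimal, hypothetical sketch (not from the article) that counts both kinds of mistakes and weights them by separate costs. The labels and cost figures are invented; the point is that the same error rates can be acceptable for a spam filter yet unacceptable for a medical prediction, because the cost of each mistake differs.

```python
# Illustrative sketch only: error rates and cost-weighted errors for an
# automated decision system (e.g., a spam filter vs. a medical screen).

def error_rates(y_true, y_pred):
    """Return (false_positive_rate, false_negative_rate) for binary labels."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / max(negatives, 1), fn / max(positives, 1)

def expected_error_cost(y_true, y_pred, cost_fp, cost_fn):
    """Average cost per decision, weighting each kind of mistake separately."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return (fp * cost_fp + fn * cost_fn) / len(y_true)

if __name__ == "__main__":
    # Toy labels: 1 = positive case (spam / disease), 0 = negative case.
    y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
    y_pred = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]

    fpr, fnr = error_rates(y_true, y_pred)
    print(f"false positive rate: {fpr:.2f}, false negative rate: {fnr:.2f}")

    # For spam, a false positive (losing real mail) costs more than a false
    # negative (one spam slips through); for a disease screen the asymmetry
    # is far larger, so identical error rates may not justify automation.
    print("spam-filter cost per decision:",
          expected_error_cost(y_true, y_pred, cost_fp=5.0, cost_fn=1.0))
    print("disease-screen cost per decision:",
          expected_error_cost(y_true, y_pred, cost_fp=50.0, cost_fn=500.0))
```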