
Doctors, data, and diseases: How AI is transforming health care

#artificialintelligence

Health care doesn't have a big data problem. It has a big data opportunity, thanks to artificial intelligence. Think about the number of inefficiencies in your daily life -- long lines, traffic jams, a reliance on "snail mail" for certain bills or communications. Those inefficiencies are inconvenient and annoying, yes, but they are usually not a matter of life and death. The need for productivity in health care is different.


The real risks of artificial intelligence

#artificialintelligence

This story is part of a series inspired by the subjects and speakers appearing at BBC Future's World-Changing Ideas Summit in Sydney on 15 November. Find out more about the inspiring people coming to the meeting, including: researcher Alex Gillespie on what artificial intelligence means for us; researcher Helen Christensen on how tech can spot and treat mental health issues; Alan Finkel, Australia's chief scientist, on the future of energy; BBC TV presenter Michael Mosley on the science of food and health; Uber's Kevin Corti on the hidden patterns of city transport; researcher and TV presenter Emma Johnston on the impact of cities on oceans; and experimental architect Rachel Armstrong on interstellar travel. If you believe some AI-watchers, we are racing towards the Singularity – a point at which artificial intelligence outstrips our own and machines go on to improve themselves at an exponential rate. If that happens – and it's a big if – what will become of us? In the last few years, several high-profile voices, from Stephen Hawking to Elon Musk and Bill Gates, have warned that we should be more concerned about possible dangerous outcomes of supersmart AI. And they've put their money where their mouth is: Musk is among several billionaire backers of OpenAI, an organisation dedicated to developing AI that will benefit humanity.


Google's AI researchers say these are the five key problems for robot safety

#artificialintelligence

Google is worried about artificial intelligence. No, not that it will become sentient and take over the world, but that, say, a helpful house robot might accidentally skewer its owner with a knife. The company's latest AI research paper delves into this issue under the title "Concrete Problems In AI Safety." Really, though, that's just a fancy way of saying "How Are We Going To Stop These Terror-Bots Killing Us All In Our Sleep." To answer this brain-tickler, Google's researchers have landed on five "practical research problems" -- key issues that programmers will need to consider before they start creating the next Johnny Five.


Technical challenges in machine ethics

#artificialintelligence

Machine ethics offers an alternative approach to artificial intelligence (AI) safety governance. To mitigate risks in human-robot interactions, robots will have to comply with humanity's ethical and legal norms once they are integrated into our daily lives with highly autonomous capabilities. On the technical side, many open questions remain in machine ethics. For example, what is deontic logic, and how can it be used to improve AI safety? How do we fashion the knowledge representation for ethical robots? These are significant questions to investigate. In this interview, we invite Prof. Ronald C. Arkin to share his insights on robot ethics, with a focus on its technical aspects.
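To make the idea concrete, the norms deontic logic deals with -- obligations, permissions, and prohibitions -- can be sketched as rules a robot checks before acting. The sketch below is illustrative only and is not drawn from the interview or from Prof. Arkin's work; every action name and helper function here is a hypothetical stand-in.

```python
# Illustrative sketch: deontic-style norms (prohibitions and obligations)
# represented as simple rule sets a robot consults before executing a plan.
# All action names are hypothetical placeholders.

FORBIDDEN = {"harm_human", "deceive_user"}   # actions that must never occur
OBLIGATORY = {"report_malfunction"}          # actions that must occur

def permitted(action: str) -> bool:
    """An action is permitted iff it is not explicitly forbidden."""
    return action not in FORBIDDEN

def check_plan(plan: list[str]) -> bool:
    """A plan is acceptable if every step is permitted and every
    obligatory action appears somewhere in the plan."""
    return all(permitted(a) for a in plan) and OBLIGATORY.issubset(plan)

print(check_plan(["fetch_tool", "report_malfunction"]))  # True
print(check_plan(["harm_human", "report_malfunction"]))  # False
```

Real deontic logics are far richer than set membership -- they handle conditional and conflicting norms, for instance -- but even this toy version shows why knowledge representation is the hard part: the rules are only as good as the vocabulary of actions the robot can reason over.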