Social Issues
How This One Woman Is Powerfully Shaping The Future Of Artificial Intelligence
As developments, standards and controversy around Artificial Intelligence (AI) explode, compellin...
How AI will disrupt sports entertainment networks
Whether you're training to run a marathon or gearing up for a marathon of binge-watching TV, advances in sports video can benefit athletes and casual sports fans alike. Due to its widespread appeal, high demand, and abundance of related data, sports video is a prime candidate for innovation. Cog...
Artificial intelligence doesn't require burdensome regulation
One of the most important issues that Congress will face in 2018 is how and when to regulate our growing dependence on artificial intelligence (AI). During the U.S. National Governors Association summer meetings, Elon Musk urged the group to push forward with regulation "before it's too late," stati...
Google's AI chief says forget Elon Musk's killer robots, and worry about bias in AI systems instead
Google's AI chief isn't fretting about super-intelligent killer robots. Instead, John Giannandrea is concerned about the danger that may be lurking inside the machine-learning algorithms used to make millions of decisions every minute. "The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased," Giannandrea said before a recent Google conference on the relationship between humans and AI systems. The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries, and that almost no one is making an effort to identify or correct it (see "Biased Algorithms Are Everywhere, and No One Seems to Care").
May seeks 'safe and ethical' AI tech
The prime minister says she wants the UK to lead the world in deciding how artificial intelligence can be deployed in a safe and ethical manner. In a speech at the World Economic Forum in Davos, Theresa May said a new advisory body, previously announced in the Autumn Budget, will co-ordinate efforts with other countries. In addition, she confirmed that the UK would join the Davos forum's own council on artificial intelligence. But others may have stronger claims. Earlier this week, Google picked France as the base for a new research centre dedicated to exploring how AI can be applied to health and the environment.
How AI is transforming the future of fintech
WIRED Money takes place in Studio Spaces, London on May 18, 2017. For more details and to purchase your ticket visit wiredevent.co.uk "Breaking: Two Explosions in the White House and Barack Obama is injured." At the time of the tweet, AP's account had around two million followers. The post was favourited, retweeted, and spread. At 13:13, AP confirmed the tweet was fake.
The Future of AI -- A Manifesto
This is still not directly definable, although we still know of human abilities that even the best present programs on the fastest computers have not been able to emulate, such as playing master-level go and learning science from the Internet. Basic researchers in AI should measure their work by the extent to which it advances this goal. AI research should not be dominated by near-term applications. DARPA should recall the extent to which its applied goals benefited from basic research. NSF should not let itself be seduced by impatience.