A UK parliamentary committee has urged the government to act proactively -- and to act now -- to tackle "a host of social, ethical and legal questions" arising from the growing use of autonomous technologies such as artificial intelligence. "While it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now," says the committee. "Not only would this help to ensure that the UK remains focused on developing 'socially beneficial' AI systems, it would also represent an important step towards fostering public dialogue about, and trust in, such systems over time." The committee kicked off an inquiry into AI and robotics this March, going on to take 67 written submissions and hear from 12 witnesses in person, in addition to visiting Google DeepMind's London office. Publishing its report into robotics and AI today, the Science and Technology Committee flags up several issues that it says need "serious, ongoing consideration". "[W]itnesses were clear that the ethical and legal matters raised by AI deserved attention now and that suitable governance frameworks were needed," it notes in the report.
The UK has a shot at leading the world in artificial intelligence and robotics governance, were it not for Brexit. Britain's impending exit from the EU has cast doubt over crucial legal provisions for AI and robots, according to the results of an inquiry by the British parliament published today. To date, the companies that stand to benefit from AI developments have been the ones leading the development of ethical guidelines around AI and robotics. Governments have lagged behind, although the White House also released its own long-awaited report on AI's impact today. The Science and Technology Committee of the House of Commons, the British parliament's lower chamber, published its report after six months of gathering evidence from academics, companies such as Google DeepMind and Microsoft, and other experts on AI and robotics.
Moving to the right, credit-card fraud detection and spam filtering have higher levels of predictability, though current-day systems still generate significant numbers of false positives and false negatives. Consider two of the relatively higher-predictability problems mentioned earlier -- spam filtering and driverless cars. In contrast, above the frontier, even the best current diabetes-prediction systems still generate too many false positives and false negatives, each carrying a cost too high to justify purely automated use. On the other hand, the availability of genomic and other personal data could improve prediction accuracy dramatically (long orange horizontal arrow) and create trustworthy robotic healthcare professionals in the future.
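The cost asymmetry behind this frontier can be made concrete with a small sketch. The error rates, prevalences, and costs below are hypothetical illustrations, not figures from the text: they simply show why the same error rates that are tolerable for spam filtering can rule out purely automated use for diabetes screening.

```python
def expected_cost_per_decision(fp_rate, fn_rate, cost_fp, cost_fn, prevalence):
    """Expected error cost of one automated decision.

    fp_rate: P(predict positive | actually negative)
    fn_rate: P(predict negative | actually positive)
    prevalence: P(actually positive)
    """
    return (1 - prevalence) * fp_rate * cost_fp + prevalence * fn_rate * cost_fn

# Hypothetical numbers: a spam filter's mistakes are cheap...
spam_cost = expected_cost_per_decision(
    fp_rate=0.01, fn_rate=0.05, cost_fp=1.0, cost_fn=0.1, prevalence=0.5)

# ...while a diabetes screener with the *same* error rates is dominated by
# the cost of missed diagnoses (false negatives).
diabetes_cost = expected_cost_per_decision(
    fp_rate=0.01, fn_rate=0.05, cost_fp=50.0, cost_fn=10000.0, prevalence=0.1)

print(spam_cost)      # small expected cost per message
print(diabetes_cost)  # far larger expected cost per screening decision
```

Under these assumed numbers the screener's expected cost per decision is thousands of times the filter's, which is the sense in which errors "above the frontier" are too expensive to automate away.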
This week, a self-driving Tesla was involved in a fatal crash. Other than that -- a lot about robots, whether AI can create art, cloning animals, and more! Ray Kurzweil and people like him believe the Singularity is just around the corner and promise a perfect new world. They are very optimistic about the future. But sometimes you should listen to the other side to better understand the problem, or the vision.
As the automation of physical and knowledge work advances, many jobs will be redefined rather than eliminated -- at least in the short term. The potential of artificial intelligence and advanced robotics to perform tasks once reserved for humans is no longer confined to spectacular demonstrations by the likes of IBM's Watson, Rethink Robotics' Baxter, DeepMind, or Google's driverless car. Just head to an airport: automated check-in kiosks now dominate many airlines' ticketing areas. Pilots actively steer aircraft for just three to seven minutes of many flights, with autopilot guiding the rest of the journey. Passport-control processes at some airports can place more emphasis on scanning document bar codes than on observing incoming passengers.