AI accountability needs action now, say UK MPs

#artificialintelligence

A UK parliamentary committee has urged the government to act proactively -- and to act now -- to tackle "a host of social, ethical and legal questions" arising from growing usage of autonomous technologies such as artificial intelligence. "While it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now," says the committee. "Not only would this help to ensure that the UK remains focused on developing 'socially beneficial' AI systems, it would also represent an important step towards fostering public dialogue about, and trust in, such systems over time." The committee kicked off an enquiry into AI and robotics this March, going on to take 67 written submissions and hear from 12 witnesses in person, in addition to visiting Google DeepMind's London office. Publishing its report into robotics and AI today, the Science and Technology Committee flags up several issues that it says need "serious, ongoing consideration." "[W]itnesses were clear that the ethical and legal matters raised by AI deserved attention now and that suitable governance frameworks were needed," it notes in the report.


Tim Cook Talks Artificial Intelligence, iPhone's Future - InformationWeek

#artificialintelligence

Apple CEO Tim Cook was the subject of an in-depth interview published over the weekend in The Washington Post, in which he talked about a wide range of issues, including augmented reality (AR) and artificial intelligence (AI), the future of the iPhone, and the company's North Star. In the interview, Cook dismissed the idea that the iPhone's accounting for two-thirds of Apple's revenue is a problem, calling the smartphone's dominance a privilege and expressing his belief that one day, every person on earth will own a smartphone. Cook also defended the company's progress in AI technology, pointing to the expanding capabilities of Siri, the digital assistant that Apple launched in 2011. Apple is opening up Siri to third-party developers so the technology can be used by other applications -- such as Uber or Lyft, as Cook pointed out -- to help users complete tasks faster and more efficiently. Earlier this month, the company reportedly bought Turi, a Seattle-based startup and the latest in a string of acquisitions aimed at bolstering its machine learning and AI capabilities.


Artificial intelligence explained

#artificialintelligence

When it comes to the future of artificial intelligence, the ultimate battle between man and machine may come to mind -- but that's really the stuff of science fiction. AI actually has a presence in our daily lives on a much more useful and less apocalyptic level. Think personal assistant devices and apps like Alexa, Cortana and Siri, web search predictions, movie suggestions on Netflix and self-driving cars. The term "artificial intelligence" was coined back in 1956. It describes a machine's ability to perform intelligent behavior such as decision-making or speech recognition.


Designing AI Systems that Obey Our Laws and Values

#artificialintelligence

Operational AI systems (for example, self-driving cars) need to obey both the law of the land and our values. We propose AI oversight systems ("AI Guardians") as an approach to addressing this challenge, and to responding to the potential risks associated with increasingly autonomous AI systems. These AI oversight systems serve to verify that operational systems do not stray unduly from the guidelines of their programmers, and to bring them back into compliance if they do stray. The introduction of such second-order oversight systems is not meant to suggest strict, powerful, or rigid (from here on, 'strong') controls. Operational systems need a great degree of latitude in order to follow the lessons of their learning from additional data mining and experience, and to be able to render at least semi-autonomous decisions (more about this later).
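The oversight pattern described above can be illustrated with a minimal sketch: a second-order "guardian" component reviews each decision an operational system proposes, verifies it against a fixed guideline, and brings the system back into compliance when it strays. All class names, method names, and the speed-limit example below are illustrative assumptions, not details from the proposal itself.

```python
# Illustrative sketch of the "AI Guardian" oversight pattern.
# The operational system, its learning latitude, and the guideline
# (a speed limit) are all hypothetical examples, not from the source.

class OperationalSystem:
    """A semi-autonomous system that adapts its behavior from experience."""
    def __init__(self):
        self.speed = 0.0

    def propose_speed(self, learned_adjustment: float) -> float:
        # The operational system has latitude to adjust itself freely.
        self.speed += learned_adjustment
        return self.speed


class AIGuardian:
    """Second-order oversight: verifies decisions and corrects violations."""
    def __init__(self, speed_limit: float):
        self.speed_limit = speed_limit
        self.interventions = 0

    def review(self, system: OperationalSystem, proposed: float) -> float:
        # Verify the operational system has not strayed from the guideline...
        if proposed > self.speed_limit:
            # ...and bring it back into compliance if it has.
            system.speed = self.speed_limit
            self.interventions += 1
            return self.speed_limit
        return proposed


car = OperationalSystem()
guardian = AIGuardian(speed_limit=70.0)

# Every proposed decision passes through the guardian before taking effect.
actual = [guardian.review(car, car.propose_speed(adj)) for adj in (30.0, 25.0, 40.0)]
print(actual)                  # [30.0, 55.0, 70.0] -- third proposal was capped
print(guardian.interventions)  # 1
```

Note the division of labor the passage calls for: the guardian does not micromanage (no 'strong' control of every adjustment); it only intervenes at the compliance boundary, leaving the operational system its latitude to learn.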