AI watchdog needed to regulate automated decision-making, say experts

The Guardian

An artificial intelligence watchdog should be set up to make sure people are not discriminated against by the automated computer systems making important decisions about their lives, say experts. The rise of artificial intelligence (AI) has led to an explosion in the number of algorithms used by employers, banks, police forces and others, but the systems can, and do, make bad decisions that seriously impact people's lives. But because technology companies are so secretive about how their algorithms work – to prevent other firms from copying them – they rarely disclose any detailed information about how AIs have made particular decisions. In a new report, Sandra Wachter, Brent Mittelstadt and Luciano Floridi, a research team at the Alan Turing Institute in London and the University of Oxford, call for a trusted third-party body that can investigate AI decisions for people who believe they have been discriminated against. "What we'd like to see is a trusted third party, perhaps a regulatory or supervisory body, that would have the power to scrutinise and audit algorithms, so they could go in and see whether the system is actually transparent and fair," said Wachter.


Towards Moral Autonomous Systems

arXiv.org Artificial Intelligence

Both the ethics of autonomous systems and the problems of their technical implementation have by now been studied in some detail. Less attention has been given to the areas in which these two separate concerns meet. This paper, written by both philosophers and engineers of autonomous systems, addresses a number of issues in machine ethics that are located at precisely the intersection between ethics and engineering. We first discuss the main challenges which, in our view, machine ethics poses to moral philosophy. We then consider different approaches towards the conceptual design of autonomous systems and their implications for the implementation of ethics in such systems. Then we examine problematic areas regarding the specification and verification of ethical behavior in autonomous systems, particularly with a view towards the requirements of future legislation. We discuss transparency and accountability issues that will be crucial for any future wide deployment of autonomous systems in society. Finally, we consider the often overlooked possibility of intentional misuse of AI systems and the possible dangers arising out of deliberately unethical design, implementation, and use of autonomous robots.


Predicting the future of artificial intelligence has always been a fool's game

AITopics Original Links

From the Dartmouth Conferences to Turing's test, prophecies about AI have rarely hit the mark. In 1956, a bunch of the top brains in their field thought they could crack the challenge of artificial intelligence over a single hot New England summer. Almost 60 years later, the world is still waiting. The "spectacularly wrong prediction" of the Dartmouth Summer Research Project on Artificial Intelligence made Stuart Armstrong, research fellow at the Future of Humanity Institute at the University of Oxford, start to think about why our predictions about AI are so inaccurate. The Dartmouth Conference had predicted that over two summer months ten of the brightest people of their generation would solve some of the key problems faced by AI developers, such as getting machines to use language, form abstract concepts and even improve themselves.


Can Artificial Intelligence Replace The Content Writer?

#artificialintelligence

You don't have to look far to find statistics and predictions on the future impact of artificial intelligence (AI). But while self-driving cars and augmented reality headsets have excited consumers, enterprise headlines have focused more on the risk that AI poses to workers. Analyst giant Forrester have claimed that 16% of jobs in the U.S. will be lost to artificial intelligence by 2025. Meanwhile, a recent report from PwC stated that 30% of jobs in the UK were under threat from AI breakthroughs, putting 10 million British workers at risk of being 'replaced by robots' in the next 15 years. We shouldn't expect a wide-scale revolution of robot workers across the entire workplace, of course.


The Future of Artificial Intelligence (AI)

#artificialintelligence

His research focuses on the intersection of computer vision, AI, machine learning, and graphics, with particular emphasis on systems that allow people to interact naturally with computers. His projects include the UK's biometric matching system and the International Technology Alliance research programme into novel sensor networks. Dr Waggett has extensive experience of innovative IT systems, including research into image processing at University College London and the Marconi Research Centre. His work includes responsibility for the delivery of innovative systems for a range of government and commercial organisations, and he has been the Big Data subject matter expert for a range of projects and clients, including the UK's biometric visa matching system.