Artificial Intelligence (AI): What Could Go Wrong?

#artificialintelligence

Elon Musk has warned about the long-term possibility that artificial intelligence could be seriously harmful to humans. How can artificial intelligence be used responsibly? Can things like decency, fairness and morals be programmed into AI algorithms? I think the answer to this is clearly 'Yes', but the real question is how we can guarantee that the algorithms we create will be decent, fair and moral. What is the incentive to build responsibility into an algorithm?


Social Responsibility of Algorithms – SRA 2017

#artificialintelligence

The workshop is jointly organized by LAMSADE and DIMACS with the support of the House of Public Affairs, the chair on Governance and Regulation of Université Paris Dauphine, the GDR Policy Analytics and the GDRI Algorithmic Decision Theory of the CNRS.


Islamic State Claims Responsibility for London Blast: Amaq News Agency

U.S. News

CAIRO (Reuters) - Islamic State has claimed responsibility for a blast on Friday that injured 22 people on a packed commuter train on the London underground network, the militant group's Amaq news agency said.


Dutch, Russia in Talks About Responsibility in MH17 Downing

U.S. News

The Netherlands is in diplomatic discussions with Russia about the European country's assertion that Moscow bears legal responsibility for its role in the 2014 downing of a passenger jet over Ukraine, the Dutch foreign minister said Thursday.


AI as a Black Box: How Did You Decide That?

#artificialintelligence

One of the biggest legal problems in protecting AI users in the coming years will be accountability – dealing with the opacity of the black box and explaining decisions made by machine thinking. Understanding the logic behind an AI finding is not an issue where AI is assisting in spotting real-world risks that affect individuals – such as the current use of AI in radiology, where failure to use AI radiology analysis may soon be considered malpractice. As long as the AI is accurate and productive in showing where cancer may exist, we don't care how the machine picked that specific spot on the x-ray; we are just happy to have another tool that helps save lives. But where the AI proposes treatments or outcomes, your clients – healthcare and otherwise – will need to be ready to defend those decisions. This means an entirely different baseline organization and feature set than the AI currently envisioned or in use.