Elon Musk has warned about the long-term possibility that artificial intelligence could be seriously harmful to humans. How can artificial intelligence be used responsibly? Can qualities like decency, fairness, and morality be programmed into AI algorithms? I think the answer is clearly 'Yes', but the harder question is how we can guarantee that the algorithms we create will be decent, fair, and moral. What is the incentive to build responsibility into an algorithm?
One of the biggest legal problems in protecting AI users in the coming years will be accountability – dealing with the opacity of the black box and explaining decisions made by machine thinking. Understanding the logic behind an AI finding is not an issue where AI merely assists in spotting real-world risks that affect individuals – such as the current use of AI in radiology, where failure to use AI radiology analysis may soon be considered malpractice. As long as the AI is accurate and productive in showing where cancer may exist, we don't care how the machine picked that specific spot on the x-ray; we are just happy to have another tool that helps save lives. But where the AI proposes treatments or outcomes, your clients – in healthcare and otherwise – will need to be ready to defend those decisions. This means an entirely different baseline organization and feature set than the AI currently envisioned or in use.