For the good of humanity, AI needs to know when it's incompetent


Everyone's had that coworker: the one who never asks for help even when fully out of their depth, unaware of their own incompetence. But what happens when your colleague isn't a human suffering from the Dunning-Kruger effect but an artificial intelligence? That's a question Vishal Chatrath has had to consider as CEO and co-founder of Prowler.io, an AI platform for generalised business decision-making that aims to augment human work with machine learning. "The decision-making process can be quite similar [across different businesses], if abstracted at a low-enough level," he says. "In some cases, the decisions are fully automated; in some cases, there's a human in the loop."

Keeping a human as part of the process is partly down to a lack of trust in machine-based decision-making, but it's also an admission by Chatrath that we remain in the early years of AI. Such systems aren't perfect, and likely never will be, and one failing of AI is that it doesn't inherently understand its own competency. If a human worker needs help, they can ask for it. But how do you build an understanding of personal limitations into code? Chatrath points to two crashes involving autopilot systems. "In both crashes, the commonality was that the autopilot did not understand its own incompetence," he says.

Prowler.io built an awareness of incompetence into its system, teaching its AI not only to understand its limitations but to forecast when it's approaching a situation where it has no experience or background. "Then it gently taps the human on the shoulder, so to speak, for the human to take control," he says. The system can learn from those interactions, and after enough training may eventually be able to stop asking for help; the first code sketch below shows one way such a deferral rule could work.

Such limits on an AI's autonomy could be set by regulators, as in the financial industry, where levels of risk are carefully weighed, or by the business itself. Another consideration is how we can be sure the AI is asking the right questions in the first place. "There is no cookie-cutter answer to these," he says. If there's a 10 per cent chance a logistics scheduler is wrong, and a lorry is therefore a bit late, that's okay. If there's a 10 per cent chance that the shape in front of a driverless car is a human, the car should stop: the risks are too high for any uncertainty. "Rather than doing stupid things like running someone over, it brings the human into the [process]," Chatrath explains, as it's been told when the risks of a mistake are too high; the second sketch below makes that trade-off explicit.

That's important, says Taha Yasseri, a researcher at the Oxford Internet Institute and the Alan Turing Institute for Data Science, because while we can delegate decision-making to machines, we can't delegate responsibility. "The ultimate responsibility in implementing the decisions made by machines is on us," he says. "In practice, whenever the expected accuracy of a human is higher than a machine's, it is practically justified to use human judgment to overrule machine decisions."
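Prowler.io hasn't published how its "tap on the shoulder" mechanism works, but the idea can be made concrete. Here is a minimal sketch in Python, assuming a system that approximates competence as similarity to previously seen situations: when an input is too far from anything in its experience, it defers to a human and keeps the human's answer as new training data, so it asks less often over time. The `DeferringPredictor` class, the distance threshold, and the `ask_human` callback are all illustrative assumptions, not Prowler.io's actual design.

```python
import numpy as np

class DeferringPredictor:
    """Toy 'knows-when-it-doesn't-know' wrapper (illustrative only;
    Prowler.io's real system is not public). Competence is approximated
    by distance to the nearest previously seen situation."""

    def __init__(self, distance_threshold=1.0):
        self.distance_threshold = distance_threshold
        self.X = []  # situations seen so far, as numpy vectors
        self.y = []  # the decision made in each of those situations

    def _nearest(self, x):
        # Index and distance of the most similar past situation.
        dists = [np.linalg.norm(x - xi) for xi in self.X]
        i = int(np.argmin(dists))
        return i, dists[i]

    def decide(self, x, ask_human):
        """Return (decision, mode). Defers to ask_human when the
        situation is unfamiliar, and learns from the answer."""
        unfamiliar = not self.X or self._nearest(x)[1] > self.distance_threshold
        if unfamiliar:
            decision = ask_human(x)   # gently tap the human on the shoulder
            self.X.append(x)          # remember the situation...
            self.y.append(decision)   # ...and what the human decided
            return decision, "deferred"
        return self.y[self._nearest(x)[0]], "automated"

# The first unfamiliar situation is deferred; a near-identical one later is
# handled automatically, mirroring how the system may eventually stop asking.
model = DeferringPredictor(distance_threshold=0.5)
print(model.decide(np.array([0.0, 0.0]), ask_human=lambda x: "reroute"))  # ('reroute', 'deferred')
print(model.decide(np.array([0.1, 0.0]), ask_human=lambda x: "reroute"))  # ('reroute', 'automated')
```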
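The lorry-versus-pedestrian contrast is, underneath, an expected-cost calculation: the same 10 per cent doubt is tolerable or intolerable depending on what a mistake would cost. A minimal illustration of that threshold logic, with made-up costs (none of these figures appear in the article):

```python
def should_defer(p_wrong, cost_of_error, cost_of_deferring):
    """Hand control to a human whenever the expected cost of acting on
    an uncertain prediction exceeds the cost of interrupting a person."""
    return p_wrong * cost_of_error > cost_of_deferring

# A slightly late lorry is cheap; hitting a pedestrian is catastrophic.
# Identical 10% uncertainty yields opposite decisions once costs differ.
print(should_defer(0.10, cost_of_error=200, cost_of_deferring=50))          # False: deliver anyway
print(should_defer(0.10, cost_of_error=10_000_000, cost_of_deferring=50))   # True: stop, ask the human
```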
