How explainable artificial intelligence can help humans innovate

AIHub

The field of artificial intelligence (AI) has created computers that can drive cars, synthesize chemical compounds, fold proteins and detect high-energy particles at a superhuman level. However, these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and also tells researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation. Therefore, AI researchers like me are now turning our efforts toward developing AI algorithms that can explain themselves in a manner that humans can understand. If we can do this, I believe that AI will be able to uncover and teach people facts about the world that have not yet been discovered, leading to new innovations.


This AI can explain how it solves Rubik's Cube--and that's a big deal

#artificialintelligence

One field of AI, called reinforcement learning, studies how computers can learn from their own experiences.
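To make "learning from experience" concrete, here is a minimal sketch of tabular Q-learning, the textbook form of reinforcement learning, on a hypothetical five-state corridor. The environment, rewards and hyperparameters are all invented for the example; this is not code from the article.

```python
# Minimal sketch: tabular Q-learning on a hypothetical 5-state corridor.
# The agent must walk right to reach the goal; all values are illustrative.
import random

N_STATES = 5          # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: estimated return for each (state, action), learned from experience.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what we know, sometimes explore.
        a = (random.choice(ACTIONS) if random.random() < EPSILON
             else max(ACTIONS, key=lambda x: Q[(s, x)]))
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Update the estimate from this one experience (the core of RL).
        best_next = max(Q[(s_next, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next

# The learned policy should step right (+1) from every non-goal state.
print({s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES - 1)})
```

Note that the learned Q-table is exactly the kind of artifact the article is concerned with: it encodes what the agent knows, but it does not by itself explain that knowledge to a person.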


An introduction to Explainable Artificial Intelligence or xAI

#artificialintelligence

A few years ago, when I was still working for IBM, I managed an AI project for a bank. During the final phase, my team and I went to the steering committee to present the results. As the proud project leader, I showed that the model had achieved 98 percent accuracy in detecting fraudulent transactions. I could see general panic in the managers' eyes when I explained that we had used an artificial neural network, one that worked with a system of synapses and weight adjustments. Although the model was very efficient, there was no way to understand its logic objectively. Even though it was based on real results, this raw explanation put the project's continuity in question unless we could provide a full explanation that the senior executives could understand and trust.
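One common xAI remedy for exactly this situation is a global surrogate: a small, readable model trained to mimic the black box's predictions. The sketch below assumes scikit-learn and synthetic data; the feature names are hypothetical stand-ins for transaction attributes, not anything from the project described above.

```python
# Sketch of a global surrogate: approximate an opaque neural network with a
# shallow decision tree whose rules a committee can actually read.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=4, random_state=0)
feature_names = ["amount", "hour", "n_recent_txns", "distance_from_home"]

# The "black box": accurate, but its weights explain nothing by themselves.
blackbox = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
blackbox.fit(X, y)

# The surrogate mimics the black box's *predictions*, not the raw labels,
# so its rules describe the model's behavior rather than the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, blackbox.predict(X))

print("Surrogate fidelity: %.2f" % surrogate.score(X, blackbox.predict(X)))
print(export_text(surrogate, feature_names=feature_names))
```

The printed rules will be less accurate than the network, but the fidelity score says how faithfully they summarize it, which is often what a steering committee actually needs.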


Why it's vital that AI is able to explain the decisions it makes

#artificialintelligence

Currently, our algorithm can consider a human plan for solving the Rubik's Cube, suggest improvements to the plan, recognize plans that do not work, and find alternatives that do. In doing so, it gives feedback that leads to a step-by-step plan for solving the Rubik's Cube that a person can understand. Our team's next step is to build an intuitive interface that will allow our algorithm to teach people how to solve the Rubik's Cube. Our hope is to generalize this approach to a wide range of pathfinding problems.
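To illustrate the plan-feedback idea, here is a hedged sketch on a toy grid maze (the Rubik's Cube state space is far too large for a short example): replay a human plan, report the first step where it breaks, and search for a working alternative. The maze, the plan, and all function names are hypothetical, not the authors' actual algorithm.

```python
# Toy plan checking and plan repair on a grid maze; all details invented.
from collections import deque

MAZE = ["....#",
        ".##.#",
        "....."]          # '.' free, '#' wall; start (0,0), goal (2,4)
MOVES = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}

def step(pos, move):
    r, c = pos[0] + MOVES[move][0], pos[1] + MOVES[move][1]
    ok = 0 <= r < len(MAZE) and 0 <= c < len(MAZE[0]) and MAZE[r][c] == "."
    return (r, c) if ok else None

def check_plan(plan, start=(0, 0), goal=(2, 4)):
    """Replay a human plan; report the first step where it fails."""
    pos = start
    for i, move in enumerate(plan):
        nxt = step(pos, move)
        if nxt is None:
            return False, f"step {i} ({move}) hits a wall at {pos}"
        pos = nxt
    return pos == goal, "ok" if pos == goal else f"ends at {pos}, not the goal"

def find_alternative(start=(0, 0), goal=(2, 4)):
    """Breadth-first search for a working plan when the human plan fails."""
    queue, seen = deque([(start, "")]), {start}
    while queue:
        pos, plan = queue.popleft()
        if pos == goal:
            return plan
        for move in MOVES:
            nxt = step(pos, move)
            if nxt and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, plan + move))

human_plan = "RRRR"                      # walks into the wall at (0, 4)
ok, why = check_plan(human_plan)
print("human plan works:", ok, "-", why)
if not ok:
    print("suggested plan:", find_alternative())
```

The key property mirrored from the article is that the feedback is stated in the human's own terms, a step-by-step plan and the exact step where the old one broke, rather than as opaque model internals.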

