How explainable artificial intelligence can help humans innovate

#artificialintelligence

The field of artificial intelligence (AI) has created computers that can drive cars, synthesize chemical compounds, fold proteins and detect high-energy particles at a superhuman level. However, these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and also tells researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation. Therefore, AI researchers like me are now turning our efforts toward developing AI algorithms that can explain themselves in a manner that humans can understand. If we can do this, I believe that AI will be able to uncover and teach people new facts about the world that have not yet been discovered, leading to new innovations.


This AI can explain how it solves Rubik's Cube--and that's a big deal

#artificialintelligence

However, these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and also tells researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation. Therefore, AI researchers like me are now turning our efforts toward developing AI algorithms that can explain themselves in a manner that humans can understand. If we can do this, I believe that AI will be able to uncover and teach people new facts about the world that have not yet been discovered, leading to new innovations. One field of AI, called reinforcement learning, studies how computers can learn from their own experiences.
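The excerpt above only names reinforcement learning, so here is a rough, hypothetical illustration of what "learning from its own experiences" can mean in code: a tabular Q-learning agent trained on an invented five-state corridor task. The environment, the reward of 1 for reaching the goal, and all hyperparameters are assumptions made purely for this sketch; none of it comes from the systems described in these articles.

```python
import random

# Toy "corridor" task: states 0..4, actions 0 = left, 1 = right.
# Reaching state 4 ends the episode and pays a reward of 1.
N_STATES, GOAL = 5, 4
q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table, filled in purely from experience
alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount, exploration rate

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: explore occasionally (and whenever the estimates are tied),
        # otherwise exploit the action with the higher current estimate.
        if random.random() < epsilon or q[state][0] == q[state][1]:
            action = random.randrange(2)
        else:
            action = q[state].index(max(q[state]))
        nxt, reward, done = step(state, action)
        # Q-learning update: move the estimate toward reward + discounted future value.
        q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
        state = nxt

print("Learned Q-values:", [[round(v, 2) for v in row] for row in q])
```

Running it prints Q-values that grow toward the goal state: the agent's experience distilled into numbers, with no labeled examples and no human supervision beyond the reward signal.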


An introduction to Explainable Artificial Intelligence or xAI

#artificialintelligence

A few years ago, when I was still working for IBM, I managed an AI project for a bank. During the final phase, my team and I went to the steering committee to present the results. Proud as the project leader, I showed that the model had achieved 98 percent accuracy in detecting fraudulent transactions. I could see a general panic in my manager's eyes when I explained that we had used an artificial neural network, that it worked with a system of synapses and weight adjustments. Although the model was very efficient, there was no way to understand its logic objectively. Even though it was based on real facts, this raw explanation put the project's continuity at risk, unless we could provide a full explanation that the senior executives could understand and trust.
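To make the phrase "weight adjustments" concrete, here is a deliberately tiny, hypothetical sketch: a single artificial neuron trained with the classic perceptron rule on made-up transaction features. The data, features, and labels are invented for illustration, and the bank's actual model was a far larger network; the point is simply that training produces numeric weights, which is also why the resulting logic is hard to explain on its own.

```python
import random

# Hypothetical toy data: two invented "transaction features" -> fraud label (0 or 1).
data = [([0.1, 0.2], 0), ([0.9, 0.8], 1), ([0.2, 0.1], 0), ([0.8, 0.9], 1)]

weights = [random.uniform(-0.5, 0.5) for _ in range(2)]  # the adjustable "synapses"
bias = 0.0
lr = 0.1  # learning rate: how strongly each mistake adjusts the weights

def predict(x):
    """Weighted sum of the inputs passed through a hard threshold."""
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

for epoch in range(20):
    for x, label in data:
        error = label - predict(x)           # +1, 0, or -1
        for i in range(2):
            weights[i] += lr * error * x[i]  # perceptron rule: nudge weights to fix the error
        bias += lr * error

print("Learned weights:", [round(w, 3) for w in weights], "bias:", round(bias, 3))
```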


Self-Taught AI Masters Rubik's Cube in Just 44 Hours

#artificialintelligence

Incredibly, the system learned to master the classic 3D puzzle in just 44 hours and without any human intervention. "A generally intelligent agent must be able to teach itself how to solve problems in complex domains with minimal human supervision," write the authors of the new paper, published online on the arXiv preprint server. Indeed, if we're ever going to achieve a general, human-like machine intelligence, we'll have to develop systems that can learn and then apply what they learn to real-world applications. Recent breakthroughs in machine learning have produced systems that, without any prior knowledge, have learned to master games like chess and Go. But these approaches haven't translated very well to the Rubik's Cube.

