
An introduction to Explainable Artificial Intelligence or xAI

#artificialintelligence

A few years ago, when I was still working for IBM, I managed an AI project for a bank. During the final phase, my team and I went to the steering committee to present the results. Proud as the project leader, I showed that the model had achieved 98 percent accuracy in detecting fraudulent transactions. In my manager's eyes, I could see a general panic when I explained that we had used an artificial neural network, that it worked with a system of synapses and weight adjustments and, although very efficient, offered no way to understand its logic objectively. Even though it was based on real facts, this raw explanation put the project's continuity in question at the time, unless we could provide a full explanation that the senior executives could understand and trust.


A deep learning technique to solve Rubik's cube and other problems step-by-step

#artificialintelligence

Colin G. Johnson, an associate professor at the University of Nottingham, recently developed a deep-learning technique that can learn a so-called "fitness function" from a set of sample solutions to a problem. This technique, presented in a paper published in Wiley's Expert Systems journal, was initially trained to solve the Rubik's cube, the popular 3-D combination puzzle invented by Hungarian sculptor Ernő Rubik. "The aim of our paper was to use machine learning to learn to solve the Rubik's cube," Colin G. Johnson, one of the researchers who carried out the study, told TechXplore. "Rubik's cube is a very complex puzzle, but any of the vast number of combinations is at most 20 steps from a solution. So the approach we take here is to try and solve the problem by learning to do each of those steps individually."
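The quoted approach can be illustrated with a greedy search guided by a fitness function. Everything below is a stand-in sketch: the toy state, the hand-written fitness function, and the swap-based moves are illustrative assumptions, not the paper's trained network or actual cube representation.

```python
def fitness(state):
    # Toy proxy for a learned fitness function: fraction of positions
    # already matching the solved (sorted) arrangement.
    solved = sorted(state)
    return sum(a == b for a, b in zip(state, solved)) / len(state)

def apply_move(state, move):
    # Toy "move": swap two positions. A real cube move would permute
    # stickers according to the cube's mechanics.
    i, j = move
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def greedy_solve(state, moves, max_steps=20):
    """At each step, take the move whose resulting state scores highest."""
    path = []
    for _ in range(max_steps):
        if fitness(state) == 1.0:
            break
        state = max((apply_move(state, m) for m in moves), key=fitness)
        path.append(state)
    return state, path

scrambled = (3, 1, 2, 0)                    # tiny stand-in "puzzle" state
moves = [(0, 1), (1, 2), (2, 3), (0, 3)]
final, path = greedy_solve(scrambled, moves)
print(fitness(final))  # 1.0 on this toy instance
```

The cap of 20 iterations mirrors the fact quoted above that any cube position is at most 20 moves from solved; on a real cube the learned fitness function replaces the simple position-counting proxy used here.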


How explainable artificial intelligence can help humans innovate

AIHub

The field of artificial intelligence (AI) has created computers that can drive cars, synthesize chemical compounds, fold proteins and detect high-energy particles at a superhuman level. However, these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and also tells researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation. Therefore, AI researchers like me are now turning our efforts toward developing AI algorithms that can explain themselves in a manner that humans can understand. If we can do this, I believe that AI will be able to uncover and teach people new facts about the world that have not yet been discovered, leading to new innovations.


Why it's vital that AI is able to explain the decisions it makes

#artificialintelligence

Currently, our algorithm is able to consider a human plan for solving the Rubik's Cube, suggest improvements to the plan, recognize plans that do not work and find alternatives that do. In doing so, it gives feedback that leads to a step-by-step plan for solving the Rubik's Cube that a person can understand. Our team's next step is to build an intuitive interface that will allow our algorithm to teach people how to solve the Rubik's Cube. Our hope is to generalize this approach to a wide range of pathfinding problems.


Efficient Black-Box Planning Using Macro-Actions with Focused Effects

arXiv.org Artificial Intelligence

The difficulty of classical planning increases exponentially with search-tree depth. Heuristic search can make planning more efficient, but good heuristics can be expensive to compute or may require domain-specific information, and such information may not even be available in the more general case of black-box planning. Rather than treating a given planning problem as fixed and carefully constructing a heuristic to match it, we instead rely on the simple and general-purpose "goal-count" heuristic and construct macro-actions to make it more accurate. Our approach searches for macro-actions with focused effects (i.e. macros that modify only a small number of state variables), which align well with the assumptions made by the goal-count heuristic. Our method discovers macros that dramatically improve black-box planning efficiency across a wide range of planning domains, including Rubik's cube, where it generates fewer states than the state-of-the-art LAMA planner with access to the full SAS$^+$ representation.
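The "goal-count" heuristic named in the abstract can be sketched in a few lines; the state encoding and the `focus` measure below are my illustrative assumptions, not the paper's implementation.

```python
def goal_count(state, goal):
    """Goal-count heuristic: number of state variables not yet at their goal value."""
    return sum(s != g for s, g in zip(state, goal))

def focus(macro_effect):
    """How many variables a macro modifies (None = untouched); lower is more 'focused'."""
    return sum(delta is not None for delta in macro_effect)

state = (1, 0, 2, 0)
goal  = (0, 0, 0, 0)
print(goal_count(state, goal))  # 2: variables 0 and 2 are wrong

# A macro that sets only variable 0, leaving the others untouched:
macro = (0, None, None, None)
print(focus(macro))  # 1
```

The intuition, as I read the abstract: a macro that touches only one wrong variable changes the goal count by a small, predictable amount, so the cheap heuristic becomes a much better guide to true distance-to-goal.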


Predicting Sim-to-Real Transfer with Probabilistic Dynamics Models

arXiv.org Artificial Intelligence

We propose a method to predict the sim-to-real transfer performance of RL policies. Our transfer metric simplifies the selection of training setups (such as algorithm, hyperparameters, randomizations) and policies in simulation, without the need for extensive and time-consuming real-world rollouts. A probabilistic dynamics model is trained alongside the policy and evaluated on a fixed set of real-world trajectories to obtain the transfer metric. Experiments show that the transfer metric is highly correlated with policy performance in both simulated and real-world robotic environments for complex manipulation tasks. We further show that the transfer metric can predict the effect of training setups on policy transfer performance.
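One plausible reading of the metric described above is the predictive log-likelihood of a probabilistic dynamics model on held-out real-world transitions; the 1-D Gaussian model and toy data below are assumptions for illustration only, not the paper's method.

```python
import math

def gaussian_log_likelihood(x, mean, std):
    # Log-density of x under a 1-D Gaussian N(mean, std^2).
    return -0.5 * math.log(2 * math.pi * std**2) - (x - mean)**2 / (2 * std**2)

def transfer_metric(model, real_transitions):
    """Mean predictive log-likelihood over real (state, action, next_state) triples."""
    total = 0.0
    for s, a, s_next in real_transitions:
        mean, std = model(s, a)  # probabilistic one-step prediction
        total += gaussian_log_likelihood(s_next, mean, std)
    return total / len(real_transitions)

# Toy 1-D dynamics model: predicts next state = state + action, fixed noise.
toy_model = lambda s, a: (s + a, 0.1)
real = [(0.0, 1.0, 1.05), (1.0, -0.5, 0.48)]
print(round(transfer_metric(toy_model, real), 3))
```

A model whose predictions match the real trajectories closely scores a higher average log-likelihood, which is the sense in which such a metric could rank training setups without extensive real-world rollouts.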


New Rubik's Official Cube App Solves the World's Favourite Puzzle

#artificialintelligence

And if this was not enough, the new Rubik's Official App, available initially on iOS, also includes games with a virtual Cube, so you can learn and have fun simply by swiping a finger, even if you don't own the twisty Cube. You can select your choice of game from Rubik's Mini (2×2) or the original Rubik's Cube (3×3), to the more challenging Rubik's Master (4×4) or the Rubik's Professor (5×5). The app will also let you keep track of your solving times and share them digitally. Christoph Bettin, the CEO for Rubik's Brand, said, "The Cube has fascinated fans for four decades and I'm the first to admit that the puzzle can be challenging. Research supports the view that solving a Cube links brilliantly with the teaching of science, technology, engineering and maths (STEM), and this high-tech app captures this, while creating fun and excitement."


This Week in AI - Issue #10 Rubik's Code

#artificialintelligence

Rubik's Code is a boutique data science and software service company with more than 10 years of experience in machine learning, artificial intelligence and software development. Check out the services we provide. Eager to learn how to build deep learning systems using TensorFlow 2 and Python? Get our 'Deep Learning for Programmers' ebook here!