The Metacognitive Loop: An Architecture for Building Robust Intelligent Systems
Shahri, Hamid Haidarian (University of Maryland) | Dinalankara, Wikum (University of Maryland) | Fults, Scott (University of Maryland) | Wilson, Shomir (University of Maryland) | Perlis, Donald (University of Maryland) | Schmill, Matt (University of Maryland Baltimore County) | Oates, Tim (University of Maryland Baltimore County) | Josyula, Darsana (Bowie State University) | Anderson, Michael (Franklin and Marshall College)
What commonsense knowledge do intelligent systems need in order to recover from failures or deal with unexpected situations? It is impractical to represent predetermined solutions for every unanticipated situation or predetermined fixes for all the different ways in which a system may fail. We contend that intelligent systems require only a finite set of anomaly-handling strategies to muddle through anomalous situations. We describe a generalized metacognition module that implements such a set of strategies and that, in principle, can be attached to any host system to improve that system's robustness. We report several implemented studies that support this contention.
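The idea of a detachable metacognition module lends itself to a compact illustration. Below is a minimal Python sketch of how such a module might monitor a host system through declared expectations and choose from a finite repertoire of anomaly-handling strategies; the class names, the particular strategy set, and the observation-dictionary interface are illustrative assumptions, not details taken from the paper.

```python
from enum import Enum, auto
from typing import Callable, Iterable

class Response(Enum):
    """A finite repertoire of anomaly-handling strategies; the specific
    entries here are illustrative, not the paper's actual strategy set."""
    IGNORE = auto()        # note the anomaly but carry on
    RETRAIN = auto()       # relearn the violated expectation
    AMEND_KB = auto()      # revise the knowledge base
    ASK_FOR_HELP = auto()  # defer to a human operator

class Expectation:
    """An observable condition the host system expects to hold."""
    def __init__(self, name: str, check: Callable[[dict], bool], on_fail: Response):
        self.name, self.check, self.on_fail = name, check, on_fail

class MetacognitionModule:
    """Domain-general monitor meant to attach to any host that can
    report its observations as dictionaries (a hypothetical interface)."""
    def __init__(self, expectations: Iterable[Expectation]):
        self.expectations = list(expectations)

    def monitor(self, observation: dict) -> list[tuple[str, Response]]:
        # Note every violated expectation and pick a handling strategy.
        return [(e.name, e.on_fail)
                for e in self.expectations
                if not e.check(observation)]

# Example: a navigation host expects forward progress after a "move" command.
mcl = MetacognitionModule([
    Expectation("moved_forward",
                lambda obs: obs.get("delta_pos", 0) > 0,
                Response.RETRAIN),
])
print(mcl.monitor({"delta_pos": 0}))  # [('moved_forward', <Response.RETRAIN: 2>)]
```

Because the module depends only on the declared expectations and not on the host's internals, the same monitor could in principle be wrapped around any system that exposes its observations, which is the attachability claim the abstract makes.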
A Self-Help Guide For Autonomous Systems
Anderson, Michael L. (Franklin & Marshall College) | Fults, Scott (University of Maryland) | Josyula, Darsana P. (Bowie State University) | Oates, Tim (University of Maryland Baltimore County) | Perlis, Don (University of Maryland) | Wilson, Shomir (University of Maryland) | Wright, Dean (University of Maryland)
Humans learn from their mistakes. When things go badly, we notice that something is amiss, figure out what went wrong and why, and attempt to repair the problem. Artificial systems depend on their human designers to program in responses to every eventuality and therefore typically don’t even notice when things go wrong, following their programming over the proverbial, and in some cases literal, cliff. This article describes our past and current work on the Meta-Cognitive Loop, a domain-general approach to giving artificial systems the ability to notice, assess, and repair problems. The goal is to make artificial systems more robust and less dependent on their human designers.
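The notice-assess-repair cycle the article describes can be sketched as a simple driver loop. The following Python fragment is a rough illustration under stated assumptions: the host.step() and host.apply() interfaces and the toy assess() function are hypothetical stand-ins, not the Meta-Cognitive Loop's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Diagnosis:
    cause: str
    repair: str

def assess(name: str) -> Diagnosis:
    # Toy assessment: map a violated expectation straight to a repair.
    # A real MCL would reason over an ontology of failure and response types.
    return Diagnosis(cause=f"expectation '{name}' violated",
                     repair=f"relearn:{name}")

def metacognitive_loop(host,
                       expectations: list[tuple[str, Callable[[dict], bool]]],
                       max_steps: int = 100) -> None:
    """Notice/assess/repair cycle wrapped around a host system.

    host.step() (returning an observation dict) and host.apply(repair)
    are hypothetical interfaces assumed for this sketch.
    """
    for _ in range(max_steps):
        obs = host.step()
        violated = [n for n, check in expectations if not check(obs)]  # notice
        for name in violated:
            host.apply(assess(name).repair)  # assess, then attempt repair
```

The point of the loop is that the host keeps running while the monitor watches for expectation violations, so the system notices trouble rather than following its programming over the proverbial cliff.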