Emulating a Brain System
Mbale, Kenneth M. (Bowie State University) | Josyula, Darsana (Bowie State University)
Can brain-mapping data be used to reverse engineer a brain system in silico? This is actually the question of whether consciousness is fully contained within the physical structure that is the brain. Do the brain and its supporting systems fully account for consciousness, or are there other components that transcend the body that are also at play? If metaphysical components play a role, then the answer is negative, since mapping just the anatomical aspects of the consciousness system would leave a critical component unaccounted for.

Noam Chomsky discusses the evolution of the field of artificial intelligence from 1956, when John McCarthy defined the science, until today (Ramsay, 2012). The goal of AI was to study intelligence by implementing its essential features using man-made technology. This goal has resulted in several practical applications people use every day. The field has produced significant advances in search engines, data mining, speech recognition, image processing, and expert systems, to name a few.
Integrating Metacognition into Artificial Agents
Mbale, Kenneth M. (Bowie State University) | Josyula, Darsana (Bowie State University)
Artificial agents need to adapt in order to perform effectively in situations outside of their normal operation specifications. Agents that do not have the capability to adapt to unanticipated situations cannot recover from unforeseen failures and hence are brittle systems. One approach to deal with the brittleness problem is to have a metacognitive component that watches the performance of a host agent and suggests corrective actions to recover from failures. This paper presents the architecture of a metacognitive agent that can be integrated with any host cognitive agent so that the resulting system can dynamically create expectations about observations from a host agent's sensors, and make use of these expectations to notice expectation violations, assess the cause of a violation, and guide a correction if required to deal with the violation. The agent makes use of the metacognitive loop (MCL) and three generic ontologies: indications of failures, causes of failures, and responses to deal with failures. This paper describes the work undertaken to enhance the current version of an MCL-based agent with the ability to automatically generate expectations.
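The expectation-generation and violation-detection behavior described in this abstract can be illustrated with a minimal sketch. All class and method names below are illustrative assumptions, not taken from the paper's implementation:

```python
# Sketch of a metacognitive loop (MCL) that dynamically derives
# expectations from a host agent's sensor stream and notices violations.

class Expectation:
    """An expected value range for one sensor, learned from past readings."""
    def __init__(self, sensor, lo, hi):
        self.sensor, self.lo, self.hi = sensor, lo, hi

    def violated_by(self, value):
        return not (self.lo <= value <= self.hi)


class MetacognitiveLoop:
    def __init__(self):
        self.history = {}       # sensor name -> list of past readings
        self.expectations = {}  # sensor name -> current Expectation

    def observe(self, sensor, value):
        """Record a reading and (re)generate the expectation for this sensor."""
        readings = self.history.setdefault(sensor, [])
        readings.append(value)
        lo, hi = min(readings), max(readings)
        margin = 0.1 * (hi - lo or 1.0)  # tolerate slight drift
        self.expectations[sensor] = Expectation(sensor, lo - margin, hi + margin)

    def notice(self, sensor, value):
        """Return True if a new reading violates the current expectation."""
        exp = self.expectations.get(sensor)
        return exp is not None and exp.violated_by(value)


mcl = MetacognitiveLoop()
for reading in [20.1, 20.4, 19.8, 20.0]:
    mcl.observe("temperature", reading)

print(mcl.notice("temperature", 20.2))  # within expectation -> False
print(mcl.notice("temperature", 35.0))  # expectation violation -> True
```

In the paper's architecture, a noticed violation would then feed the assess and guide phases; here only the notice phase is sketched.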
Reports of the AAAI 2010 Conference Workshops
Aha, David W. (Naval Research Laboratory) | Boddy, Mark (Adventium Labs) | Bulitko, Vadim (University of Alberta) | Garcez, Artur S. d'Avila (City University London) | Doshi, Prashant (University of Georgia) | Edelkamp, Stefan (TZI, Bremen University) | Geib, Christopher (University of Edinburgh) | Gmytrasiewicz, Piotr (University of Illinois, Chicago) | Goldman, Robert P. (Smart Information Flow Technologies) | Hitzler, Pascal (Wright State University) | Isbell, Charles (Georgia Institute of Technology) | Josyula, Darsana (University of Maryland, College Park) | Kaelbling, Leslie Pack (Massachusetts Institute of Technology) | Kersting, Kristian (University of Bonn) | Kunda, Maithilee (Georgia Institute of Technology) | Lamb, Luis C. (Universidade Federal do Rio Grande do Sul (UFRGS)) | Marthi, Bhaskara (Willow Garage) | McGreggor, Keith (Georgia Institute of Technology) | Nastase, Vivi (EML Research gGmbH) | Provan, Gregory (University College Cork) | Raja, Anita (University of North Carolina, Charlotte) | Ram, Ashwin (Georgia Institute of Technology) | Riedl, Mark (Georgia Institute of Technology) | Russell, Stuart (University of California, Berkeley) | Sabharwal, Ashish (Cornell University) | Smaus, Jan-Georg (University of Freiburg) | Sukthankar, Gita (University of Central Florida) | Tuyls, Karl (Maastricht University) | Meyden, Ron van der (University of New South Wales) | Halevy, Alon (Google, Inc.) | Mihalkova, Lilyana (University of Maryland) | Natarajan, Sriraam (University of Wisconsin)
The AAAI-10 Workshop program was held Sunday and Monday, July 11–12, 2010 at the Westin Peachtree Plaza in Atlanta, Georgia. The AAAI-10 workshop program included 13 workshops covering a wide range of topics in artificial intelligence. The titles of the workshops were AI and Fun, Bridging the Gap between Task and Motion Planning, Collaboratively-Built Knowledge Sources and Artificial Intelligence, Goal-Directed Autonomy, Intelligent Security, Interactive Decision Theory and Game Theory, Metacognition for Robust Social Systems, Model Checking and Artificial Intelligence, Neural-Symbolic Learning and Reasoning, Plan, Activity, and Intent Recognition, Statistical Relational AI, Visual Representations and Reasoning, and Abstraction, Reformulation, and Approximation. This article presents short summaries of those events.
The Metacognitive Loop: An Architecture for Building Robust Intelligent Systems
Shahri, Hamid Haidarian (University of Maryland) | Dinalankara, Wikum (University of Maryland) | Fults, Scott (University of Maryland) | Wilson, Shomir (University of Maryland) | Perlis, Donald (University of Maryland) | Schmill, Matt (University of Maryland Baltimore County) | Oates, Tim (University of Maryland Baltimore County) | Josyula, Darsana (Bowie State University) | Anderson, Michael (Franklin and Marshall College)
What commonsense knowledge do intelligent systems need in order to recover from failures or deal with unexpected situations? It is impractical to represent predetermined solutions to deal with every unanticipated situation or provide predetermined fixes for all the different ways in which systems may fail. We contend that intelligent systems require only a finite set of anomaly-handling strategies to muddle through anomalous situations. We describe a generalized metacognition module that implements such a set of anomaly-handling strategies and that in principle can be attached to any host system to improve the robustness of that system. Several implemented studies that support our contention are reported.
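The idea of a finite, domain-general repertoire of anomaly-handling strategies could be sketched as follows. The strategy names and the cheapest-first ordering are assumptions for illustration, not the paper's actual strategy set:

```python
# Sketch: a generalized metacognition module holding a small, fixed
# set of anomaly-handling strategies, tried in order until one succeeds.

def try_again(host, anomaly):
    return host.retry(anomaly)

def adjust_parameters(host, anomaly):
    return host.tune(anomaly)

def ask_for_help(host, anomaly):
    return host.escalate(anomaly)

STRATEGIES = [try_again, adjust_parameters, ask_for_help]

def handle_anomaly(host, anomaly):
    """Muddle through: apply generic strategies until one reports success."""
    for strategy in STRATEGIES:
        if strategy(host, anomaly):
            return strategy.__name__
    return "unresolved"

# A toy host system: retrying fails, but parameter tuning succeeds.
class ToyHost:
    def retry(self, anomaly):
        return False
    def tune(self, anomaly):
        return True
    def escalate(self, anomaly):
        return True

print(handle_anomaly(ToyHost(), "sensor dropout"))  # -> adjust_parameters
```

Because the strategies only assume a host that exposes a few generic hooks, the same module can in principle be attached to different host systems, which is the portability claim the abstract makes.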
Metacognition for Detecting and Resolving Conflicts in Operational Policies
Josyula, Darsana (Bowie State University) | Donahue, Bette (Bowie State University) | McCaslin, Matthew (Bowie State University) | Snowden, Michelle (Bowie State University) | Anderson, Michael (Franklin and Marshall College) | Oates, Timothy (University of Maryland Baltimore County) | Schmill, Matthew (University of Maryland Baltimore County) | Perlis, Donald (University of Maryland, College Park)
Informational conflicts in operational policies cause agents to run into situations where responding based on the rules in one policy violates the same or another policy. Static checking of these conflicts is infeasible and impractical in a dynamic environment. This paper discusses a practical approach to handling policy conflicts in real-time domains within the context of a hierarchical military command and control simulated system that consists of a central command, squad leaders and squad members. All the entities in the domain function according to preset communication and action protocols in order to perform successful missions. Each entity in the domain is equipped with an instance of a metacognitive component to provide on-board/on-time analysis of actions and recommendations during the operation of the system. The metacognitive component is the Metacognitive Loop (MCL) which is a general purpose anomaly processor designed to function as a cross-domain plugin system. It continuously monitors expectations and notices when they are violated, assesses the cause of the violation and guides the host system to an appropriate response. MCL makes use of three ontologies—indications, failures and responses—to perform the notice, assess and guide phases when a conflict occurs. Conflicts in the set of rules (within a policy or between policies) manifest as expectation violations in the real world. These expectation violations trigger nodes in the indication ontology which, in turn, activate associated nodes in the failure ontology. The responding failure nodes then activate the appropriate nodes in the response ontology. Depending on which response node gets activated, the actual response may vary from ignoring the conflict to prioritizing, modifying or deleting one or more conflicting rules.
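The indication-failure-response activation chain described above can be sketched as a small linked-node structure. The ontology node names here are invented for illustration; the paper's actual ontologies are richer:

```python
# Sketch of MCL's three-ontology chain: an expectation violation triggers
# an indication node, which activates linked failure nodes, which in turn
# activate candidate response nodes.

indications = {"conflicting-order": ["policy-conflict"]}
failures = {"policy-conflict": ["prioritize-rule", "modify-rule",
                                "delete-rule", "ignore-conflict"]}

def respond(violation):
    """Follow links from an indication through failures to responses."""
    responses = []
    for failure in indications.get(violation, []):
        responses.extend(failures.get(failure, []))
    return responses

print(respond("conflicting-order"))
# -> ['prioritize-rule', 'modify-rule', 'delete-rule', 'ignore-conflict']
```

As in the abstract, the candidate responses range from ignoring the conflict to prioritizing, modifying, or deleting a conflicting rule; selecting among them would be the guide phase.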