Choi, Dongkyu
A Probabilistic-Logic based Commonsense Representation Framework for Modelling Inferences with Multiple Antecedents and Varying Likelihoods
Jaiswal, Shantanu, Yan, Liu, Choi, Dongkyu, Kwok, Kenneth
Commonsense knowledge-graphs (CKGs) are important resources for building machines that can 'reason' on text or environmental inputs and make inferences beyond perception. While current CKGs encode world knowledge for a large number of concepts and have been effectively utilized for incorporating commonsense in neural models, they primarily encode declarative or single-condition inferential knowledge and assume all conceptual beliefs to have the same likelihood. Further, these CKGs utilize a limited set of relations shared across concepts and lack a coherent knowledge organization structure, resulting in redundancies as well as sparsity across the larger knowledge graph. Consequently, today's CKGs, while useful for a first level of reasoning, do not adequately capture deeper human-level commonsense inferences, which can be more nuanced and influenced by multiple contextual or situational factors. Accordingly, in this work, we study how commonsense knowledge can be better represented by (i) utilizing a probabilistic logic representation scheme to model composite inferential knowledge and represent conceptual beliefs with varying likelihoods, and (ii) incorporating a hierarchical conceptual ontology to identify salient concept-relevant relations and organize beliefs at different conceptual levels. Our resulting knowledge representation framework can encode a wider variety of world knowledge and represent beliefs flexibly using grounded concepts as well as free-text phrases. As a result, the framework can be utilized as both a traditional free-text knowledge graph and a grounded logic-based inference system more suitable for neuro-symbolic applications. We describe how we extend the PrimeNet knowledge base with our framework through crowd-sourcing and expert-annotation, and demonstrate its application for more interpretable passage-based semantic parsing and question answering.
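The representation scheme described above can be illustrated with a minimal sketch: beliefs as weighted rules with multiple antecedents, organized under concept nodes in a hierarchical ontology. All names below (`Belief`, `ConceptNode`, `add_belief`, `infer`) are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of composite inferential knowledge with varying
# likelihoods. Names here are hypothetical, for illustration only.
from dataclasses import dataclass, field
from typing import Optional

@dataclass(frozen=True)
class Belief:
    antecedents: tuple      # grounded concepts or free-text phrases
    consequent: str
    likelihood: float       # beliefs need not share the same likelihood

@dataclass
class ConceptNode:
    name: str
    parent: Optional[str] = None        # hierarchical conceptual ontology
    beliefs: list = field(default_factory=list)

kb = {}

def add_belief(concept, antecedents, consequent, likelihood):
    node = kb.setdefault(concept, ConceptNode(concept))
    node.beliefs.append(Belief(tuple(antecedents), consequent, likelihood))

# A composite inference with two antecedents and an attached likelihood:
add_belief("ice", ["temperature is high", "ice is outdoors"], "ice melts", 0.9)

def infer(concept, observed):
    """Return (consequent, likelihood) pairs whose antecedents all hold."""
    node = kb.get(concept)
    if node is None:
        return []
    return [(b.consequent, b.likelihood) for b in node.beliefs
            if all(a in observed for a in b.antecedents)]

print(infer("ice", {"temperature is high", "ice is outdoors"}))
# → [('ice melts', 0.9)]
```

Because antecedents may be either grounded concepts or free-text phrases, the same store can back both a free-text knowledge graph and a grounded inference system, as the abstract describes.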
Creating and Using Tools in a Hybrid Cognitive Architecture
Choi, Dongkyu (University of Kansas) | Langley, Pat (Institute for the Study of Learning and Expertise) | To, Son Thanh (Institute for the Study of Learning and Expertise)
People regularly use objects in the environment as tools to achieve their goals. In this paper, we report extensions to the ICARUS cognitive architecture that let it create and use combinations of objects in this manner. These extensions include the ability to represent virtual objects composed of simpler ones and to reason about their quantitative features. They also include revised modules for planning and execution that operate over this hybrid representation, taking into account both relational structures and numeric attributes. We demonstrate the extended architecture's behavior on a number of tasks that involve tool construction and use, after which we discuss related research and plans for future work.
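The hybrid representation described above can be sketched as a virtual object that is composed of simpler objects (a relational structure) and exposes derived numeric attributes. This is a loose illustration in the spirit of the abstract; all names are my own assumptions, not ICARUS code.

```python
# Hedged sketch: a virtual object composed of simpler parts, with a
# derived quantitative feature a planner could reason over.
from dataclasses import dataclass

@dataclass(frozen=True)
class Obj:
    name: str
    length: float          # a quantitative feature

@dataclass(frozen=True)
class VirtualObj:
    name: str
    parts: tuple           # relational structure: composed of simpler objects

    @property
    def length(self):
        # derived numeric attribute: parts joined end to end
        return sum(p.length for p in self.parts)

stick = Obj("stick", 0.8)
hook = Obj("hook", 0.3)
tool = VirtualObj("hooked_stick", (stick, hook))

# Planning can consider both the composition and the combined numeric
# attribute, e.g. whether the constructed tool can reach 1.0 m:
print(tool.length >= 1.0)   # → True
```

The point of the sketch is that both kinds of information live on one structure: the `parts` tuple carries the relational composition, while `length` carries the numeric attribute.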
Explainable Agency for Intelligent Autonomous Systems
Langley, Pat (University of Auckland) | Meadows, Ben (University of Auckland) | Sridharan, Mohan (University of Auckland) | Choi, Dongkyu (University of Kansas)
As intelligent agents become more autonomous, sophisticated, and prevalent, it becomes increasingly important that humans interact with them effectively. Machine learning is now used regularly to acquire expertise, but common techniques produce opaque content whose behavior is difficult to interpret. Before they will be trusted by humans, autonomous agents must be able to explain their decisions and the reasoning that produced their choices. We will refer to this general ability as explainable agency. This capacity for explaining decisions is not an academic exercise. When a self-driving vehicle takes an unfamiliar turn, its passenger may desire to know its reasons. When a synthetic ally in a computer game blocks a player's path, he may want to understand its purpose. When an autonomous military robot has abandoned a high-priority goal to pursue another one, its commander may request justification. As robots, vehicles, and synthetic characters become more self-reliant, people will require that they explain their behaviors on demand. The more impressive these agents' abilities, the more essential that we be able to understand them.
Dynamic Goal Recognition Using Windowed Action Sequences
Menager, David (University of Kansas) | Choi, Dongkyu (University of Kansas) | Floyd, Michael W. (Knexus Research Corporation) | Task, Christine (Knexus Research Corporation) | Aha, David W. (Naval Research Laboratory)
Recent advances in robotics and artificial intelligence have brought a variety of assistive robots designed to help humans accomplish their goals. However, many have limited autonomy and lack the ability to seamlessly integrate with human teams. One capability that can facilitate such human-robot teaming is the robot's ability to recognize its teammates' goals and react appropriately. This function permits the robot to actively assist the team and avoid performing redundant or counterproductive actions. In goal recognition, the basic problem domain consists of the following:
- a set E of environment fluents;
- a state S that is a value assignment to those fluents;
- a set A of actions that describe potential transitions between states (with preconditions and effects defined over E, and parameterized over a set of environment objects O); and
- ...
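The problem-domain elements above (fluents E, state S as a value assignment, actions A with preconditions and effects) can be sketched as a small data structure. This is an illustrative Python rendering; the names are assumptions, not from the paper.

```python
# Sketch of the goal-recognition problem domain: fluents, a state as a
# value assignment over them, and actions with preconditions/effects.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset   # fluent assignments required to execute
    effects: frozenset         # fluent assignments produced by executing

# E: environment fluents; S: a value assignment to those fluents
E = {"door_open", "holding_key"}
S = frozenset({("door_open", False), ("holding_key", True)})

open_door = Action(
    name="open_door",
    preconditions=frozenset({("holding_key", True), ("door_open", False)}),
    effects=frozenset({("door_open", True)}),
)

def applicable(action, state):
    """An action may fire when all its preconditions hold in the state."""
    return action.preconditions <= state

def apply(action, state):
    """Transition to the next state by overwriting the changed fluents."""
    changed = {f for f, _ in action.effects}
    return frozenset({(f, v) for f, v in state if f not in changed}) | action.effects

print(applicable(open_door, S))   # → True
print(("door_open", True) in apply(open_door, S))   # → True
```

A goal recognizer would observe a window of such state transitions and ask which goal best explains the actions taken.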
ActorSim, A Toolkit for Studying Cross-Disciplinary Challenges in Autonomy
Roberts, Mark (Naval Research Laboratory) | Hiatt, Laura M. (Naval Research Laboratory) | Coman, Alexandra (Naval Research Laboratory) | Choi, Dongkyu (University of Kansas) | Johnson, Benjamin (Naval Research Laboratory) | Aha, David W. (Naval Research Laboratory)
We introduce ActorSim, the Actor Simulator, a toolkit for studying situated autonomy. As background, we review three goal-reasoning projects implemented in ActorSim: one project that uses information metrics in foreign disaster relief and two projects that learn subgoal selection for sequential decision making in Minecraft. We then discuss how ActorSim can be used to address cross-disciplinary gaps in several ongoing projects. To varying degrees, the projects integrate concerns within distinct specializations of AI and between AI and other more human-focused disciplines. These areas include automated planning, learning, cognitive architectures, robotics, cognitive modeling, sociology, and psychology.
Interoperating Learning Mechanisms in a Cognitive Architecture
Choi, Dongkyu (University of Illinois at Chicago) | Ohlsson, Stellan (University of Illinois at Chicago)
People acquire new knowledge in various ways, and this helps them adapt properly to changing environments. In this paper, we investigate the interoperation of multiple learning mechanisms within a single system. We extend a cognitive architecture, ICARUS, to have three different modes of learning. Through experiments in a modified Blocks World and a route generation domain, we test and demonstrate the system's ability to get synergistic effects from these learning mechanisms.