If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Pillai, Parvathy Sudhir, Leong, Tze-Yun
Early detection of Alzheimer's disease (AD) and identification of potential risk/beneficial factors are important for planning and administering timely interventions or preventive measures. In this paper, we learn a disease model for AD that combines genotypic and phenotypic profiles, and cognitive health metrics of patients. We propose a probabilistic generative subspace that describes the correlative, complementary, and domain-specific semantics of the dependencies in multi-view, multi-modality medical data. Guided by domain knowledge and using the latent consensus between abstractions of multi-view data, we model the fusion as a data generating process. We show that our approach can potentially lead to i) explainable clinical predictions and ii) improved AD diagnoses.
We propose a new method for transferring a policy from a source task to a target task in model-based reinforcement learning. Our work is motivated by scenarios where a robotic agent operates in similar but challenging environments, such as hospital wards, differentiated by structural arrangements or obstacles, such as furniture. We address problems that require fast responses adapted from incomplete, prior knowledge of the agent in new scenarios. We present SEAPoT, an efficient selective exploration strategy that maximally reuses the source task policy. Reuse efficiency is achieved by identifying sub-spaces that differ in the target environment, thus limiting the exploration needed in the target task. We empirically show that SEAPoT performs better in terms of jump starts and cumulative average rewards, as compared to existing state-of-the-art policy reuse methods.
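The selective-exploration idea can be illustrated with a minimal sketch. This is not the paper's implementation: the grid maps, the single-obstacle difference, and the local random exploration are all invented for illustration, standing in for the sub-space identification and exploration strategy described above. The sketch reuses the source policy everywhere except in and around cells where the target map differs from the source map.

```python
import random

# Hypothetical 5x5 maps; 1 marks an obstacle (e.g. moved furniture).
SOURCE_MAP = [[0] * 5 for _ in range(5)]
TARGET_MAP = [[0] * 5 for _ in range(5)]
TARGET_MAP[2][3] = 1  # the only structural difference in the target task

def changed_cells(src, tgt):
    """Identify the sub-space where source and target environments differ."""
    return {(i, j) for i in range(len(src)) for j in range(len(src[0]))
            if src[i][j] != tgt[i][j]}

# Placeholder for a policy already learned on the source task.
SOURCE_POLICY = {(i, j): "right" for i in range(5) for j in range(5)}

def act(state, changed):
    """Explore only near changed cells; reuse the source policy elsewhere."""
    i, j = state
    neighbors = [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    if state in changed or any(n in changed for n in neighbors):
        return random.choice(["up", "down", "left", "right"])  # local exploration
    return SOURCE_POLICY[state]  # maximal reuse of the source policy
```

Because only one cell differs, exploration is confined to that cell and its neighbors; every other state keeps the source action, which is what produces the jump-start behavior the abstract refers to.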
We propose a model-based approach to hierarchical reinforcement learning that exploits shared knowledge and selective execution at different levels of abstraction, to efficiently solve large, complex problems. Our framework adopts a new transition dynamics learning algorithm that identifies the common action-feature combinations of the subtasks, and evaluates the subtask execution choices through simulation. The framework is sample efficient, and tolerates uncertain and incomplete problem characterization of the subtasks. We test the framework on common benchmark problems and complex simulated robotic environments. It compares favorably against the state-of-the-art algorithms, and scales well in very large problems.
Leong, Tze-Yun (Singapore Management University)
We envision an integrated framework for supporting the development and deployment of human-aware, general artificial intelligence (AI) that needs to collaborate in uncertain, changing environments. We examine the technology and system requirements of building assistive care agents for patients with dementia or cognitive impairment through the continuum of care. We summarize the new AI capabilities and show examples of how an evolving, adaptive development approach would be able to support the basic functionalities and applications in a sound, practical, and scalable manner. We highlight the challenges and the opportunities involved in realizing the proposed framework, and call for future research and development efforts from the AI community to work in this challenging and important domain.
Finding optimal policies for Markov Decision Processes with large state spaces is in general intractable. Nonetheless, simulation-based algorithms inspired by Sparse Sampling (SS), such as Upper Confidence Bound applied in Trees (UCT) and Forward Search Sparse Sampling (FSSS), have been shown to perform reasonably well in both theory and practice, despite the high computational demand. To improve the efficiency of these algorithms, we adopt a simple enhancement technique with a heuristic policy to speed up the selection of optimal actions. The general method, called Aux, augments the look-ahead tree with auxiliary arms that are evaluated by the heuristic policy. In this paper, we provide theoretical justification for the method and demonstrate its effectiveness on two experimental benchmarks, showcasing faster convergence to a near-optimal policy for both SS and FSSS. Moreover, to further speed up the convergence of these algorithms at the early stage, we present a novel mechanism for combining them with UCT, so that the resulting hybrid algorithm is superior to both of its components.
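The auxiliary-arm idea can be sketched in a few lines. The toy chain MDP, the always-move-right heuristic, and the function names below are all assumptions made for illustration, not the paper's algorithm or benchmarks: each real action is scored by a shallow Sparse Sampling estimate, one extra (auxiliary) arm is scored cheaply by rolling out the heuristic policy, and the heuristic's action is taken whenever its arm dominates.

```python
# Toy chain MDP: states 0..N, actions -1/+1, reward 1.0 upon reaching state N.
N = 10
GAMMA = 0.95
ACTIONS = (-1, +1)

def step(s, a):
    s2 = max(0, min(N, s + a))
    return s2, (1.0 if s2 == N else 0.0)

def heuristic_policy(s):
    return +1  # assumed domain heuristic: always move toward the goal

def rollout(s, policy, depth):
    """Cheap evaluation of a policy: one discounted trajectory."""
    total, discount = 0.0, 1.0
    for _ in range(depth):
        s, r = step(s, policy(s))
        total += discount * r
        discount *= GAMMA
    return total

def sparse_sample_value(s, depth, width):
    """Shallow Sparse Sampling estimate of V(s) (deterministic toy dynamics)."""
    if depth == 0:
        return 0.0
    best = float("-inf")
    for a in ACTIONS:
        q = 0.0
        for _ in range(width):
            s2, r = step(s, a)
            q += r + GAMMA * sparse_sample_value(s2, depth - 1, width)
        best = max(best, q / width)
    return best

def aux_select(s, depth, width, aux_depth):
    """Select an action: SS scores per arm, plus one heuristic auxiliary arm."""
    scores = {}
    for a in ACTIONS:
        s2, r = step(s, a)
        scores[a] = r + GAMMA * sparse_sample_value(s2, depth - 1, width)
    aux = rollout(s, heuristic_policy, aux_depth)  # the auxiliary arm
    if aux >= max(scores.values()):
        return heuristic_policy(s)  # heuristic arm dominates: follow it
    return max(scores, key=scores.get)
```

From state 0 a depth-2 search sees no reward at all, so both real arms score zero; the auxiliary arm's deeper heuristic rollout does reach the goal and steers the agent correctly, which is the early-stage speed-up the abstract describes.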
This paper outlines a methodology for analyzing the representational support for knowledge-based decision-modeling in a broad domain. A relevant set of inference patterns and knowledge types are identified. By comparing the analysis results to existing representations, some insights are gained into a design approach for integrating categorical and uncertain knowledge in a context sensitive manner.
Automated decision making is often complicated by the complexity of the knowledge involved. Much of this complexity arises from the context sensitive variations of the underlying phenomena. We propose a framework for representing descriptive, context-sensitive knowledge. Our approach attempts to integrate categorical and uncertain knowledge in a network formalism. This paper outlines the basic representation constructs, examines their expressiveness and efficiency, and discusses the potential applications of the framework.
Nguyen, Truong-Huy Dinh (National University of Singapore) | Hsu, David (National University of Singapore) | Lee, Wee-Sun (National University of Singapore) | Leong, Tze-Yun (National University of Singapore) | Kaelbling, Leslie Pack (Massachusetts Institute of Technology) | Lozano-Perez, Tomas (Massachusetts Institute of Technology) | Grant, Andrew Haydn (Singapore-MIT GAMBIT Game Lab)
We apply decision theoretic techniques to construct non-player characters that are able to assist a human player in collaborative games. The method is based on solving Markov decision processes, which can be difficult when the game state is described by many variables. To scale to more complex games, the method allows decomposition of a game task into subtasks, each of which can be modelled by a Markov decision process. Intention recognition is used to infer the subtask that the human is currently performing, allowing the helper to assist the human in performing the correct task. Experiments show that the method can be effective, giving near-human level performance in helping a human in a collaborative game.
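The decomposition-plus-intention-recognition loop can be sketched as follows. The two subtasks, the action-likelihood table, and the placeholder helper policies below are invented for illustration (the paper's games and models are richer): the helper maintains a Bayesian belief over which subtask the human is performing, updates it from observed human actions, and then acts according to the policy of the most probable subtask's Markov decision process.

```python
# Hypothetical two-subtask collaborative game: the human is either
# "fetching" an item or "guarding" the base. Each subtask induces an
# assumed likelihood over observable human actions.
LIKELIHOOD = {
    "fetch": {"move_to_item": 0.7, "move_to_base": 0.2, "attack": 0.1},
    "guard": {"move_to_item": 0.1, "move_to_base": 0.3, "attack": 0.6},
}

def infer_subtask(belief, observed_action):
    """Intention recognition: Bayesian update of the belief over subtasks."""
    posterior = {t: belief[t] * LIKELIHOOD[t][observed_action] for t in belief}
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

# One helper policy per subtask MDP (placeholders for solved policies).
HELPER_POLICY = {"fetch": "carry_item", "guard": "defend_base"}

def assist(belief):
    """Act according to the policy of the most probable subtask."""
    task = max(belief, key=belief.get)
    return HELPER_POLICY[task]

# Start uncertain, then observe the human heading for the item twice.
belief = {"fetch": 0.5, "guard": 0.5}
for obs in ["move_to_item", "move_to_item"]:
    belief = infer_subtask(belief, obs)
```

After two item-directed observations the belief concentrates on the fetching subtask, so the helper switches to the corresponding policy; decomposing the game this way keeps each MDP small even when the joint game state has many variables.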
Model minimization in Factored Markov Decision Processes (FMDPs) is concerned with finding the most compact partition of the state space such that all states in the same block are action-equivalent. This is an important problem because it can potentially transform a large FMDP into an equivalent but much smaller one, whose solution can be readily used to solve the original model. Previous model minimization algorithms are iterative in nature, making opaque the relationship between the input model and the output partition. We demonstrate that given a set of well-defined concepts and operations on partitions, we can express the model minimization problem in an analytic fashion. The theoretical results developed can be readily applied to solving problems such as estimating the size of the minimum partition, refining existing algorithms, and so on.
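The block-splitting view of model minimization can be illustrated with a small partition-refinement sketch. The four-state deterministic toy MDP below is an assumption for illustration, and iterative refinement is shown only as the familiar baseline the abstract contrasts with its analytic formulation: states stay in the same block only if they agree, for every action, on reward and on the block their successors fall into.

```python
# Toy deterministic MDP: T[state][action] -> next state, R[state] -> reward.
T = {
    "s0": {"a": "s2"}, "s1": {"a": "s3"},
    "s2": {"a": "s2"}, "s3": {"a": "s3"},
}
R = {"s0": 0.0, "s1": 0.0, "s2": 1.0, "s3": 1.0}

def signature(s, partition):
    """A state's behavior relative to a partition: reward + successor blocks."""
    block_of = {st: i for i, blk in enumerate(partition) for st in blk}
    return (R[s], tuple(sorted((a, block_of[s2]) for a, s2 in T[s].items())))

def minimize(states):
    """Refine the trivial partition until all blocks are action-equivalent."""
    partition = [set(states)]  # start with a single block
    changed = True
    while changed:
        changed = False
        new_partition = []
        for block in partition:
            groups = {}
            for s in block:
                groups.setdefault(signature(s, partition), set()).add(s)
            if len(groups) > 1:
                changed = True  # block split: states disagreed on behavior
            new_partition.extend(groups.values())
        partition = new_partition
    return partition
```

Here s0/s1 are action-equivalent (zero reward, successors in the same block), as are s2/s3, so the four-state model minimizes to two blocks; the resulting quotient MDP can be solved in place of the original.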
Anderson, Michael L., Barkowsky, Thomas, Berry, Pauline, Blank, Douglas, Chklovski, Timothy, Domingos, Pedro, Druzdzel, Marek J., Freksa, Christian, Gersh, John, Hegarty, Mary, Leong, Tze-Yun, Lieberman, Henry, Lowe, Ric, Luperfoy, Susann, Mihalcea, Rada, Meeden, Lisa, Miller, David P., Oates, Tim, Popp, Robert, Shapiro, Daniel, Schurr, Nathan, Singh, Push, Yen, John
The Association for the Advancement of Artificial Intelligence presented its 2005 Spring Symposium Series on Monday through Wednesday, March 21-23, 2005, at Stanford University in Stanford, California. The topics of the eight symposia in this symposium series were (1) AI Technologies for Homeland Security; (2) Challenges to Decision Support in a Changing World; (3) Developmental Robotics; (4) Dialogical Robots: Verbal Interaction with Embodied Agents and Situated Devices; (5) Knowledge Collection from Volunteer Contributors; (6) Metacognition in Computation; (7) Persistent Assistants: Living and Working with AI; and (8) Reasoning with Mental and External Diagrams: Computational Modeling and Spatial Assistance.