Op-Ed: Prevent future L.A. City Council scandals by fixing our planning system

Los Angeles Times

Corruption has again been exposed at Los Angeles City Hall, with one council member under indictment in a development scandal and another having pleaded guilty to his part in it. The transgressions highlight the real-world consequences of failing to modernize outdated planning codes and leaving decision-making power over development projects in the hands of City Council members. To try to prevent future corruption, the city needs to fix what's broken about L.A. planning -- by fully updating planning and zoning laws according to the recommendations of an outside commission, not the council. Some City Council members have proposed incremental reforms in reaction to the indictment of council member Jose Huizar, who has been charged with running a "pay-to-play" scheme to shake down real estate developers for cash bribes and campaign donations in exchange for his help getting high-rise development projects approved. Former council member Mitch Englander pleaded guilty to falsifying material facts related to the scheme.


A Hierarchical Architecture for Human-Robot Cooperation Processes

arXiv.org Artificial Intelligence

In this paper we propose FlexHRC+, a hierarchical human-robot cooperation architecture designed to provide collaborative robots with an extended degree of autonomy when supporting human operators in high-variability shop-floor tasks. The architecture encompasses three levels, namely perception, representation, and action. Building on previous work, here we focus on (i) an in-the-loop decision-making process for the operations of collaborative robots coping with the variability of actions carried out by human operators, and (ii) the representation level, integrating a hierarchical AND/OR graph whose online behaviour is formally specified using First Order Logic. The architecture is validated in experiments including collaborative furniture assembly and object positioning tasks.
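
As a rough illustration of the representation level (a minimal sketch only, assuming an AND/OR structure over assembly actions; the node names and helper functions are invented here, not taken from FlexHRC+):

from dataclasses import dataclass, field

# Minimal hierarchical AND/OR graph sketch (illustrative, not the FlexHRC+ code).
# An inner node is solved when at least one of its AND hyperarcs has all
# children solved; a leaf models a primitive action and is solved once done.

@dataclass
class Node:
    name: str
    hyperarcs: list = field(default_factory=list)  # each hyperarc = list of child Nodes (AND)

    def is_leaf(self):
        return not self.hyperarcs

def solved(node, done):
    if node.is_leaf():
        return node.name in done
    return any(all(solved(child, done) for child in arc) for arc in node.hyperarcs)

def feasible_leaves(node, done):
    """Primitive actions that are not done yet and still reachable via some hyperarc."""
    if node.is_leaf():
        return set() if node.name in done else {node.name}
    result = set()
    for arc in node.hyperarcs:
        for child in arc:
            result |= feasible_leaves(child, done)
    return result

# Toy task: assemble a table either by attaching both legs and the top,
# or by using a pre-assembled frame and attaching the top.
table = Node("assemble_table", hyperarcs=[
    [Node("attach_leg_1"), Node("attach_leg_2"), Node("attach_top")],
    [Node("use_preassembled_frame"), Node("attach_top")],
])

print(feasible_leaves(table, done={"attach_leg_1"}))
print(solved(table, done={"use_preassembled_frame", "attach_top"}))

In this toy example, feasible_leaves returns the primitive actions still open along some hyperarc given what the human or robot has already completed, which is the kind of query an online decision-making layer would pose.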


Machine Reasoning Explainability

arXiv.org Artificial Intelligence

As a field of AI, Machine Reasoning (MR) uses largely symbolic means to formalize and emulate abstract reasoning. Studies in early MR notably started inquiries into Explainable AI (XAI) -- arguably one of the biggest concerns of the AI community today. Work on explainable MR, as well as on MR approaches to explainability in other areas of AI, has continued ever since. It is especially prominent in modern MR branches such as argumentation, constraint programming, logic programming, and planning. We aim to provide a selective overview of MR explainability techniques and studies, in the hope that insights from this long track of research will complement the current XAI landscape well. This document reports our work in progress on MR explainability.
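
To make the link between symbolic reasoning and explanation concrete, here is a small, self-contained sketch (not from the survey; the rules and facts are invented) of one common form of explanation in logic programming: a forward-chaining reasoner that records, for every derived fact, which rule body justified it.

# Tiny forward-chaining reasoner that records why each fact was derived.
# Purely illustrative; the rules and facts are made up.

rules = [
    ({"bird", "not_penguin"}, "can_fly"),
    ({"has_feathers", "lays_eggs"}, "bird"),
]
facts = {"has_feathers", "lays_eggs", "not_penguin"}

def derive_with_explanations(facts, rules):
    explanation = {f: "given" for f in facts}
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in explanation and body <= set(explanation):
                explanation[head] = f"{head} because {sorted(body)}"
                changed = True
    return explanation

exp = derive_with_explanations(facts, rules)
print(exp["can_fly"])   # e.g. "can_fly because ['bird', 'not_penguin']"
print(exp["bird"])

The recorded justifications form a derivation trace, which is the raw material that explainable MR systems typically condense into user-facing explanations.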


Report on the First and Second ICAPS Workshops on Hierarchical Planning

Interactive AI Magazine

Hierarchical planning has attracted renewed interest in the last couple of years. As a consequence, the time was right to establish a workshop devoted entirely to hierarchical planning – an insight shared by many supporters. In this paper we report on the first ICAPS workshop on Hierarchical Planning held in Delft, The Netherlands, in 2018 as well as on the second workshop held in Berkeley, CA, USA, in 2019. Hierarchical planning approaches incorporate hierarchies in the domain model. In the most common form, the hierarchy is defined among tasks, leading to the distinction between primitive and abstract tasks.
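
As a minimal, generic illustration of that distinction (a sketch under assumed task names, not tied to any workshop contribution), an abstract task is refined by decomposition methods until only primitive tasks remain:

# Generic HTN-style decomposition sketch: abstract tasks are refined by
# methods into subtasks until only primitive tasks remain. Task names and
# methods are illustrative.

methods = {
    "serve_coffee": [["brew_coffee", "deliver_cup"]],
    "brew_coffee": [["grind_beans", "run_machine"]],
}
primitive = {"grind_beans", "run_machine", "deliver_cup"}

def decompose(task):
    """Return one fully primitive refinement of `task` (first method wins)."""
    if task in primitive:
        return [task]
    plan = []
    for subtask in methods[task][0]:
        plan.extend(decompose(subtask))
    return plan

print(decompose("serve_coffee"))
# ['grind_beans', 'run_machine', 'deliver_cup']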


Proof-Carrying Plans: a Resource Logic for AI Planning

arXiv.org Artificial Intelligence

Recent trends in AI verification and Explainable AI have raised the question of whether AI planning techniques can be verified. In this paper, we present a novel resource logic, the Proof-Carrying Plans (PCP) logic, that can be used to verify plans produced by AI planners. The PCP logic takes inspiration from existing resource logics (such as linear logic and separation logic) as well as from Hoare logic in its modelling of states and resource-aware plan execution. It also capitalises on the Curry-Howard approach to logics in its treatment of plans as functions and of plan pre- and post-conditions as types. This paper presents two main results. From the theoretical perspective, we show that the PCP logic is sound relative to the standard possible-world semantics used in AI planning. From the practical perspective, we present a complete Agda formalisation of the PCP logic and of its soundness proof. Moreover, we showcase the Curry-Howard, or functional, value of this implementation by supplementing it with a library that automatically parses AI plans into Agda proofs. We provide an evaluation of this library and of the resulting Agda functions.
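
The actual formalisation is in Agda, where pre- and post-conditions really are types; as a loose, dynamically checked analogy only (invented action names, not the PCP library), each action can be seen as consuming the resources in its precondition and producing those in its postcondition, and verification as checking that every step's precondition is available in the state built so far:

# Loose analogy to resource-aware plan verification (not the PCP/Agda code):
# each action consumes the resources it requires and adds the resources it
# produces; a plan "checks" if every action's precondition holds when it runs.

actions = {
    "pick_up_block": ({"hand_empty", "block_on_table"}, {"holding_block"}),
    "stack_block":   ({"holding_block"}, {"block_on_tower", "hand_empty"}),
}

def verify(plan, state):
    state = set(state)
    for name in plan:
        pre, post = actions[name]
        if not pre <= state:
            return False, f"precondition of {name} not satisfied: missing {pre - state}"
        state = (state - pre) | post  # linear-logic flavour: resources are used up, then produced
    return True, state

print(verify(["pick_up_block", "stack_block"], {"hand_empty", "block_on_table"}))
print(verify(["stack_block"], {"hand_empty"}))

In the PCP setting this check is not performed at run time but is encoded in types, so a plan that fails it simply does not type-check.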


Neural Manipulation Planning on Constraint Manifolds

arXiv.org Artificial Intelligence

The presence of task constraints poses a significant challenge to motion planning. Despite recent advances, existing algorithms remain computationally expensive for most planning problems. In this paper, we present Constrained Motion Planning Networks (CoMPNet), the first neural planner for multimodal kinematic constraints. Our approach comprises the following components: (i) constraint and environment perception encoders; (ii) a neural robot configuration generator that outputs configurations on or near the constraint manifold(s); and (iii) a bidirectional planning algorithm that uses the generated configurations to create a feasible robot motion trajectory. We show that CoMPNet solves practical motion planning tasks involving both unconstrained and constrained problems. Furthermore, it generalizes with high success rates to object locations not seen during training in the given environments. Compared to state-of-the-art constrained motion planning algorithms, CoMPNet is an order of magnitude faster with significantly lower variance.
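
A very rough sketch of the bidirectional loop in (iii), with the neural configuration generator replaced by a random stub and collision/constraint checks omitted; the function names and the distance-based connection test are assumptions for illustration, not the CoMPNet implementation:

import random

# Two trees are grown, one from the start and one from the goal; at each step
# a generator proposes a configuration that moves the active tree toward the
# other one, and the trees are connected when their newest nodes are close.
# generator_stub stands in for the learned, constraint-conditioned model.

def generator_stub(from_node, toward_node):
    return tuple(f + 0.5 * (t - f) + random.uniform(-0.05, 0.05)
                 for f, t in zip(from_node, toward_node))

def close(a, b, eps=0.1):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5 < eps

def bidirectional_plan(start, goal, iters=1000):
    tree_a, tree_b = [start], [goal]
    for _ in range(iters):
        new = generator_stub(tree_a[-1], tree_b[-1])
        tree_a.append(new)                  # constraint projection and collision
        if close(new, tree_b[-1]):          # checking would happen here
            return tree_a + tree_b[::-1]
        tree_a, tree_b = tree_b, tree_a     # alternate which tree grows
    return None

path = bidirectional_plan((0.0, 0.0), (1.0, 1.0))
print("waypoints:", len(path) if path else "no path found")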


On Controllability of AI

arXiv.org Artificial Intelligence

The unprecedented progress in Artificial Intelligence (AI) [1-6] over the last decade came alongside multiple AI failures [7, 8] and cases of dual use [9], causing a realization [10] that it is not sufficient to create highly capable machines; it is even more important to make sure that intelligent machines are beneficial [11] to humanity. This led to the birth of a new subfield of research commonly known as AI Safety and Security [12], with hundreds of papers and books published annually on different aspects of the problem [13-31]. All such research is done under the assumption that the problem of controlling highly capable intelligent machines is solvable, which has not been established by any rigorous means. However, it is standard practice in computer science to first show that a problem does not belong to a class of unsolvable problems [32, 33] before investing resources in trying to solve it or deciding which approaches to try. Unfortunately, to the best of our knowledge, no mathematical proof or even rigorous argumentation has been published demonstrating that the AI control problem may be solvable, even in principle, much less in practice. Or, as Gans puts it citing Bostrom: "Thusfar, AI researchers and philosophers have not been able to come up with methods of control that would ensure [bad] outcomes did not take place …" [34].


Learning Combinatorial Optimization on Graphs: A Survey with Applications to Networking

arXiv.org Artificial Intelligence

Combinatorial optimization problems arise in various and heterogeneous domains such as routing, scheduling, planning, decision-making processes, transportation and telecommunications, and therefore have a direct impact on practical scenarios [1]. Existing approaches suffer from certain limitations when applied to practical problems: forbidding execution time and the need to hand-engineer solutions to combinatorial challenges. We note that the inherent structure of the problems in numerous fields, or the data itself, is that of a graph [2]. In this light, it is of paramount interest to examine the potential of machine learning for addressing combinatorial optimization problems on graphs and, in particular, for overcoming the limitations of the traditional approaches.
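
As a schematic example of the general recipe such methods follow (greedy solution construction driven by a learned scoring function; here the score is a hand-written stand-in and the graph is a toy instance, neither taken from the survey):

# Schematic "learn to construct" loop for a graph problem (minimum vertex
# cover as a toy example). A trained model would score candidate nodes from
# a graph embedding; here score() is a hand-written stand-in.

graph = {  # adjacency sets of a small undirected graph
    "a": {"b", "c"}, "b": {"a", "c", "d"}, "c": {"a", "b"}, "d": {"b", "e"}, "e": {"d"},
}

def score(node, uncovered):
    """Stand-in for a learned scoring function: number of still-uncovered
    edges this node would cover."""
    return sum(1 for nbr in graph[node] if frozenset((node, nbr)) in uncovered)

def greedy_cover(graph):
    uncovered = {frozenset((u, v)) for u in graph for v in graph[u]}
    cover = []
    while uncovered:
        best = max(graph, key=lambda n: score(n, uncovered))
        cover.append(best)
        uncovered = {e for e in uncovered if best not in e}
    return cover

print(greedy_cover(graph))  # e.g. ['b', 'a', 'd'] depending on tie-breaking

Replacing the hand-written score with a model trained over graph embeddings is, in essence, what the learning-based approaches surveyed here do.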


Deployment and Evaluation of a Flexible Human-Robot Collaboration Model Based on AND/OR Graphs in a Manufacturing Environment

arXiv.org Artificial Intelligence

The Industry 4.0 paradigm promises shorter development times, improved ergonomics, higher flexibility, and resource efficiency in manufacturing environments. Collaborative robots are an important tangible technology for implementing such a paradigm. A major bottleneck to effectively deploying collaborative robots in manufacturing is the development of task planning algorithms that enable them to recognize and naturally adapt to varying and even unpredictable human actions while simultaneously ensuring overall efficiency in terms of production cycle time. In this context, an architecture encompassing task representation, task planning, sensing, and robot control has been designed, developed, and evaluated in a real industrial environment. A pick-and-place palletization task, which requires collaboration between humans and robots, is investigated. The architecture uses AND/OR graphs for representing and reasoning upon human-robot collaboration models online. Furthermore, objective measures of the overall computational performance and subjective measures of naturalness in human-robot collaboration have been evaluated in experiments with production-line operators. The results of this user study demonstrate how human-robot collaboration models like the one we propose can improve the flexibility and comfort of operators in the workplace. In this regard, an extensive comparison study among recent models has been carried out.
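
To give a flavour of the online allocation step (a toy sketch with invented action names and costs, not the deployed system), one way to pick the next step is to compare per-agent costs over the actions that the collaboration model currently marks as feasible:

# Toy online allocation step for a pick-and-place collaboration: given the
# actions that the representation layer currently marks as feasible, assign
# the cheapest (action, agent) pair. Costs are invented for illustration.

costs = {  # estimated execution cost per agent (e.g., seconds)
    "place_box_on_pallet": {"robot": 6.0, "human": 9.0},
    "inspect_box":         {"robot": 20.0, "human": 4.0},
    "fetch_next_box":      {"robot": 8.0, "human": 7.5},
}

def next_assignment(feasible, busy):
    """Return the cheapest (cost, action, agent) among feasible actions and idle agents."""
    options = [(costs[a][agent], a, agent)
               for a in feasible for agent in ("robot", "human")
               if agent not in busy]
    return min(options, default=None)

print(next_assignment({"place_box_on_pallet", "inspect_box"}, busy={"human"}))
# (6.0, 'place_box_on_pallet', 'robot')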


Current Advancements on Autonomous Mission Planning and Management Systems: an AUV and UAV perspective

arXiv.org Artificial Intelligence

Analyzing the surrounding situation is the most crucial part of autonomous adaptation. Since the real environment contains many unknown and constantly changing factors, continuous adjustment to these changing circumstances is essential for autonomy. To respond properly to a changing environment, a fully autonomous vehicle must be able to comprehend its own position and its surroundings. However, such vehicles still rely heavily on human involvement to resolve complex missions that cannot be precisely characterized in advance, which restricts their applications and accuracy. Dependence on human supervision can be reduced by improving the level of autonomy. Over the previous decades, autonomy and mission planning have been extensively researched for different vehicle structures and under diverse conditions; nevertheless, aiming at robust mission planning in extreme conditions, here we provide an exhaustive study of unmanned vehicle (UV) autonomy and its related properties in internal and external situation awareness. In the following, the different difficulties faced by AUVs and UAVs are discussed.