Planning & Scheduling


China Overtakes The US In AI Journal Citation - Stanford AI Index Report

#artificialintelligence

Stanford publishes its AI Index Report, which has tracked developments in the complex artificial intelligence landscape since 2017. The latest report -- 2021 -- sheds light on the impact of COVID-19 on AI research, the countries leading the research race, and more. Across a total of 7 chapters, the Stanford AI Index Report also covers aspects like AI education, research and development, diversity in AI, and AI policy. One of the most surprising revelations, for many, is that China has overtaken the US in terms of journal citations, underscoring the advancement of its research. This comes after China surpassed the US in the number of artificial intelligence research publications in 2017, having briefly overtaken it in 2004.


Artificial intelligence research continues to grow as China overtakes US in AI journal citations

#artificialintelligence

That's a higher percentage growth than from 2018 to 2019, when the volume of publications increased by 19.6 percent. China continues to be a growing force in AI R&D, overtaking the US for overall journal citations in artificial intelligence research last year. The country already publishes more AI papers than any other country, but the United States still has more cited papers at AI conferences -- one indicator of the novelty and significance of the underlying research. These figures come from the fourth annual AI Index, a collection of statistics, benchmarks, and milestones meant to gauge global progress in artificial intelligence. The report is collated with the help of Stanford University, and you can read all 222 pages here.


Appointments pushed back, confusion reigns over 2nd COVID-19 vaccine dose

Los Angeles Times

The instructions upon getting a first dose of COVID-19 vaccine are clear: People should get the second shot three or four weeks later. But things get a lot murkier when it comes to actually getting an appointment to meet that deadline. As more Los Angeles County residents than ever receive their first doses, tightening vaccine supplies and online scheduling problems are hampering their ability to finish the two-dose vaccination process. On Thursday, potentially thousands of people had their vaccine appointments postponed after the Ralphs supermarket chain -- a large vaccine distributor -- said the county's Department of Public Health, at the request of state officials, had "recovered" 10,000 doses previously intended for scheduled appointments, according to emails obtained by The Times. A Ralphs spokesperson said only first-dose customers were affected, but it only added to the confusion.


Solving QSAT problems with neural MCTS

arXiv.org Artificial Intelligence

Recent achievements from AlphaZero using self-play have shown remarkable performance on several board games. It is plausible to think that self-play, starting from zero knowledge, can gradually approximate a winning strategy for certain two-player games after a sufficient amount of training. In this paper, we try to leverage the computational power of neural Monte Carlo Tree Search (neural MCTS), the core algorithm of AlphaZero, to solve Quantified Boolean Formula Satisfaction (QSAT) problems, which are PSPACE-complete. Since every QSAT problem is equivalent to a QSAT game, the game outcome can be used to derive the solution of the original QSAT problem. We propose a way to encode Quantified Boolean Formulas (QBFs) as graphs and apply a graph neural network (GNN) to embed the QBFs into the neural MCTS. After training, an off-the-shelf QSAT solver is used to evaluate the performance of the algorithm. Our results show that, for problems within a limited size, the algorithm learns to solve them correctly merely from self-play.
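To make the graph encoding concrete, the following minimal Python sketch turns a prenex-CNF QBF into a bipartite graph of variable and clause nodes with polarity-labelled edges, the kind of structure a GNN could then embed. The node and edge layout here is illustrative, not the paper's exact construction.

    # Minimal sketch (assumption: the QBF is given as a quantifier prefix and
    # a CNF clause list; the graph layout is illustrative, not the paper's).

    def qbf_to_graph(prefix, clauses):
        """Encode a QBF as a bipartite graph of variable and clause nodes.

        prefix  : list of (quantifier, variable) pairs, e.g. [('forall', 1), ('exists', 2)]
        clauses : list of clauses, each a list of signed ints, e.g. [[1, -2], [-1, 2]]
        Returns (nodes, edges): node features and polarity-labelled edges.
        """
        nodes = {}
        for q, v in prefix:                       # one node per quantified variable
            nodes[('var', v)] = {'quant': q}
        for i, clause in enumerate(clauses):      # one node per clause
            nodes[('cls', i)] = {'quant': None}

        edges = []
        for i, clause in enumerate(clauses):
            for lit in clause:                    # connect each clause to its variables,
                polarity = 1 if lit > 0 else -1   # labelling positive/negative occurrence
                edges.append((('cls', i), ('var', abs(lit)), polarity))
        return nodes, edges

    # Example: forall x1 exists x2 . (x1 or not x2) and (not x1 or x2)
    nodes, edges = qbf_to_graph([('forall', 1), ('exists', 2)], [[1, -2], [-1, 2]])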


Hierarchical Width-Based Planning and Learning

arXiv.org Artificial Intelligence

Width-based search methods have demonstrated state-of-the-art performance in a wide range of testbeds, from classical planning problems to image-based simulators such as Atari games. These methods scale independently of the size of the state space, but exponentially in the problem width. In practice, running Iterated Width (IW) with a width larger than 1 is computationally intractable, prohibiting IW from solving higher-width problems. In this paper, we present a hierarchical algorithm that plans at two levels of abstraction. A high-level planner uses abstract features that are incrementally discovered from low-level pruning decisions. We illustrate this algorithm in classical planning PDDL domains as well as in pixel-based simulator domains. In classical planning, we show how IW(1) at two levels of abstraction can solve problems of width 2. For pixel-based domains, we show how, in combination with a learned policy and a learned value function, the proposed hierarchical IW can outperform current flat IW-based planners in Atari games with sparse rewards.
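As a reference point for the low-level search, here is a minimal sketch of IW(1)'s novelty-based pruning, assuming a simulator interface with placeholder hooks state.atoms(), successors(state), and is_goal(state); the hierarchical algorithm in the paper builds on top of this kind of search rather than replacing it.

    # Minimal sketch of IW(1): breadth-first search that prunes any state
    # containing no atom (fact) that has not been seen before.
    # state.atoms(), successors, and is_goal are assumed placeholder hooks.

    from collections import deque

    def iw1(initial_state, successors, is_goal):
        seen_atoms = set(initial_state.atoms())
        queue = deque([(initial_state, [])])
        while queue:
            state, plan = queue.popleft()
            if is_goal(state):
                return plan
            for action, nxt in successors(state):
                novel = set(nxt.atoms()) - seen_atoms   # atoms never seen so far
                if novel:                               # keep only novelty-1 states
                    seen_atoms |= novel
                    queue.append((nxt, plan + [action]))
        return None                                     # not solvable at width 1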


Stabilized Nested Rollout Policy Adaptation

arXiv.org Artificial Intelligence

Nested Rollout Policy Adaptation (NRPA) is a Monte Carlo search algorithm for single-player games. In this paper we propose to modify NRPA in order to improve the stability of the algorithm. Experiments show that the modification improves results across different application domains: SameGame, the Traveling Salesman Problem with Time Windows, and Expression Discovery.
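For context, a compact sketch of standard NRPA (the baseline being stabilized, not the modified algorithm proposed in the paper) is shown below; the game interface methods root, legal_moves, play, terminal, score, and code are assumed placeholders.

    # Minimal sketch of standard NRPA for a single-player game exposed
    # through a placeholder `game` object (root, legal_moves, play,
    # terminal, score, code). Not the stabilized variant from the paper.

    import math, random
    from collections import defaultdict

    def playout(game, policy):
        state, seq = game.root(), []
        while not game.terminal(state):
            moves = game.legal_moves(state)
            weights = [math.exp(policy[game.code(state, m)]) for m in moves]
            move = random.choices(moves, weights)[0]      # softmax (Gibbs) sampling
            seq.append(move)
            state = game.play(state, move)
        return game.score(state), seq

    def adapt(game, policy, seq, alpha=1.0):
        pol, state = policy.copy(), game.root()
        for move in seq:                                  # shift weight toward best sequence
            moves = game.legal_moves(state)
            z = sum(math.exp(policy[game.code(state, m)]) for m in moves)
            for m in moves:
                pol[game.code(state, m)] -= alpha * math.exp(policy[game.code(state, m)]) / z
            pol[game.code(state, move)] += alpha
            state = game.play(state, move)
        return pol

    def nrpa(game, level, policy, iterations=100):
        if level == 0:
            return playout(game, policy)
        best_score, best_seq = float('-inf'), []
        for _ in range(iterations):
            s, seq = nrpa(game, level - 1, policy)
            if s >= best_score:
                best_score, best_seq = s, seq
            policy = adapt(game, policy, best_seq)
        return best_score, best_seq

    # e.g. nrpa(my_game, level=2, policy=defaultdict(float))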


Improving non-deterministic uncertainty modelling in Industry 4.0 scheduling

arXiv.org Artificial Intelligence

The latest industrial revolution has helped industries achieve very high rates of productivity and efficiency. It has introduced data aggregation and cyber-physical systems to optimize planning and scheduling. However, uncertainty in the environment and the imprecise nature of human operators are not accurately accounted for in the decision-making process. This leads to delays in consignments and imprecise budget estimations. This widespread practice in industrial models is flawed and requires rectification. Various other articles have approached this problem through stochastic or fuzzy-set modelling methods. This paper presents a comprehensive method to logically and realistically quantify non-deterministic uncertainty through probabilistic uncertainty modelling. The method is applicable to virtually all industrial data sets, as the model is self-adjusting and uses epsilon-contamination to cater to limited or incomplete data sets. The results are numerically validated on an industrial data set from Flanders, Belgium. The data-driven results achieved through this robust scheduling method illustrate the improvement in performance.
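To illustrate the epsilon-contamination idea in isolation, the sketch below (hypothetical variable names, far simpler than the paper's full model) bounds an expected task duration when the empirical distribution may be mixed with an arbitrary contaminating distribution of weight epsilon.

    # Minimal sketch of an epsilon-contamination bound on an expected task
    # duration; illustrative only, not the paper's scheduling model.

    def contaminated_expectation_bounds(samples, eps, lo, hi):
        """Lower/upper expected value when the empirical distribution P may be
        contaminated by an arbitrary distribution Q with weight eps, and the
        quantity of interest is known to lie in [lo, hi]."""
        nominal = sum(samples) / len(samples)            # empirical mean under P
        lower = (1 - eps) * nominal + eps * lo           # adversarial Q puts all mass at lo
        upper = (1 - eps) * nominal + eps * hi           # adversarial Q puts all mass at hi
        return lower, upper

    # Example: observed processing times (minutes), 10% contamination allowance.
    print(contaminated_expectation_bounds([42, 45, 50, 47], eps=0.1, lo=30, hi=90))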


Cost-optimal Planning, Delete Relaxation, Approximability, and Heuristics

Journal of Artificial Intelligence Research

Cost-optimal planning is a very well-studied topic within planning, and it has proven to be computationally hard both in theory and in practice. Since cost-optimal planning is an optimisation problem, it is natural to analyse it through the lens of approximation. An important reason for studying cost-optimal planning is heuristic search; heuristic functions that guide the search in planning can often be viewed as algorithms solving or approximating certain optimisation problems. Many heuristic functions (such as the ubiquitous h+ heuristic) are based on delete relaxation, which ignores the negative effects of actions. Planning for instances where the actions have no negative effects is often referred to as monotone planning. The aim of this article is to analyse the approximability of cost-optimal monotone planning, and thus the performance of relevant heuristic functions. Our findings imply that it may be beneficial to study these kinds of problems within the framework of parameterised complexity, and we initiate work in this direction.
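As a concrete example of a delete-relaxation heuristic, the sketch below computes the polynomial-time h_max estimate, a lower bound on the NP-hard h+ value discussed in the article; the encoding of actions as (preconditions, add effects, cost) triples is an illustrative assumption.

    # Minimal sketch of h_max on the delete relaxation: delete effects are
    # ignored, and fact costs are propagated to a fixpoint.

    import math

    def h_max(facts_init, goal, actions):
        """actions: list of (preconds, adds, cost) with set-like preconds/adds."""
        cost = {f: 0.0 for f in facts_init}              # initially true facts cost 0
        changed = True
        while changed:                                    # Bellman-Ford-style fixpoint
            changed = False
            for pre, adds, c in actions:
                if all(p in cost for p in pre):
                    reach = c + max((cost[p] for p in pre), default=0.0)
                    for a in adds:
                        if reach < cost.get(a, math.inf):
                            cost[a] = reach
                            changed = True
        if not all(g in cost for g in goal):
            return math.inf                               # goal unreachable even when relaxed
        return max(cost[g] for g in goal)

    # Example: two relaxed actions reaching goal fact 'g' from 'a'.
    acts = [(frozenset({'a'}), frozenset({'b'}), 1.0),
            (frozenset({'b'}), frozenset({'g'}), 1.0)]
    print(h_max({'a'}, {'g'}, acts))                      # -> 2.0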


Argument Schemes and Dialogue for Explainable Planning

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) is being increasingly deployed in practical applications. However, there is a major concern about whether AI systems will be trusted by humans. In order to establish trust in AI systems, users need to understand the reasoning behind their solutions. Therefore, systems should be able to explain and justify their output. In this paper, we propose an argument scheme-based approach to providing explanations in the domain of AI planning. We present novel argument schemes to create arguments that explain a plan and its key elements, and a set of critical questions that allow interaction between the arguments and enable the user to obtain further information regarding the key elements of the plan. Furthermore, we present a novel dialogue system that uses the argument schemes and critical questions to provide interactive dialectical explanations.
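A minimal sketch of how an argument scheme with attached critical questions might be represented is given below; the dataclass fields and the example scheme are illustrative placeholders, not the paper's actual scheme definitions.

    # Minimal, illustrative representation of an argument scheme for plan
    # explanation with its critical questions.

    from dataclasses import dataclass, field

    @dataclass
    class ArgumentScheme:
        name: str
        premises: list            # statements the argument relies on
        conclusion: str           # what the argument asserts about the plan
        critical_questions: list = field(default_factory=list)

        def challenge(self, question_index):
            """Return a critical question a user can pose against this argument."""
            return self.critical_questions[question_index]

    # Example: an argument explaining why an action appears in the plan.
    arg = ArgumentScheme(
        name="ActionJustification",
        premises=["goal G requires fact F", "action A achieves F"],
        conclusion="action A belongs in the plan",
        critical_questions=["Is there another action that achieves F at lower cost?",
                            "Does A have side effects that undermine another goal?"])
    print(arg.challenge(0))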


Explainable AI for Robot Failures: Generating Explanations that Improve User Assistance in Fault Recovery

arXiv.org Artificial Intelligence

With the growing capabilities of intelligent systems, the integration of robots into our everyday life is increasing. However, when interacting in such complex human environments, the occasional failure of robotic systems is inevitable. The field of explainable AI has sought to make complex decision-making systems more interpretable, but most existing techniques target domain experts. In contrast, in many failure cases, robots will require recovery assistance from non-expert users. In this work, we introduce a new type of explanation that explains the cause of an unexpected failure during an agent's plan execution to non-experts. In order for error explanations to be meaningful, we investigate what types of information within a set of hand-scripted explanations are most helpful to non-experts for failure and solution identification. Additionally, we investigate how such explanations can be autonomously generated, extending an existing encoder-decoder model, and generalized across environments. We study these questions in the context of a robot performing a pick-and-place manipulation task in a home environment. Our results show that explanations capturing the context of a failure and the history of past actions are the most effective for failure and solution identification among non-experts. Furthermore, through a second user evaluation, we verify that our model-generated explanations can generalize to an unseen office environment and are just as effective as the hand-scripted explanations.
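For readers curious what an encoder-decoder explanation generator looks like in code, here is a minimal generic GRU sequence-to-sequence sketch in PyTorch; the class name, vocabulary sizes, and tokenized inputs are assumptions, and this is not the specific model the paper extends.

    # Minimal, generic seq2seq sketch: encode a tokenized failure context,
    # decode a natural-language explanation (teacher forcing during training).

    import torch
    import torch.nn as nn

    class ExplanationSeq2Seq(nn.Module):
        def __init__(self, in_vocab, out_vocab, hidden=128):
            super().__init__()
            self.embed_in = nn.Embedding(in_vocab, hidden)
            self.embed_out = nn.Embedding(out_vocab, hidden)
            self.encoder = nn.GRU(hidden, hidden, batch_first=True)
            self.decoder = nn.GRU(hidden, hidden, batch_first=True)
            self.project = nn.Linear(hidden, out_vocab)

        def forward(self, context_tokens, explanation_tokens):
            # Encode the failure context (state, action history) into a summary vector.
            _, summary = self.encoder(self.embed_in(context_tokens))
            # Decode the explanation conditioned on that summary.
            out, _ = self.decoder(self.embed_out(explanation_tokens), summary)
            return self.project(out)          # logits over the explanation vocabulary

    # e.g. logits = ExplanationSeq2Seq(500, 300)(ctx_batch, expl_batch)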