A Logic-Based Explanation Generation Framework for Classical and Hybrid Planning Problems

Journal of Artificial Intelligence Research

In human-aware planning systems, a planning agent might need to explain its plan to a human user when that plan appears to be infeasible or sub-optimal. A popular approach, called model reconciliation, has been proposed as a way to bring the model of the human user closer to the agent’s model. To do so, the agent provides an explanation that can be used to update the model of the human user such that the agent’s plan is feasible or optimal to the human user. Existing approaches to solve this problem have been based on automated planning methods and have been limited to classical planning problems only. In this paper, we approach the model reconciliation problem from a different perspective, that of knowledge representation and reasoning, and demonstrate that our approach can be applied not only to classical planning problems but also to hybrid systems planning problems with durative actions and events/processes. In particular, we propose a logic-based framework for explanation generation, where, given a knowledge base KBa (of an agent) and a knowledge base KBh (of a human user), each encoding their knowledge of a planning problem, and given that KBa entails a query q (e.g., that a proposed plan of the agent is valid), the goal is to identify an explanation ε ⊆ KBa such that when it is used to update KBh, the updated KBh also entails q. More specifically, we make the following contributions in this paper: (1) We formally define the notion of logic-based explanations in the context of model reconciliation problems; (2) We introduce a number of cost functions that can be used to reflect preferences between explanations; (3) We present algorithms to compute explanations for both classical planning and hybrid systems planning problems; and (4) We empirically evaluate their performance on such problems. Our empirical results demonstrate that, on classical planning problems, our approach is faster than the state of the art when the explanations are long or when the size of the knowledge base is small (e.g., when the plans to be explained are short). They also demonstrate that our approach is efficient for hybrid systems planning problems. Finally, we evaluate the real-world efficacy of explanations generated by our algorithms through a controlled human user study, where we develop a proof-of-concept visualization system and use it as a medium for explanation communication.
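
As a rough illustration of the problem statement only (not the paper's algorithm), the sketch below treats both knowledge bases as small sets of propositional formulas, checks entailment by brute force, and searches for a smallest subset of KBa whose addition to KBh makes the query entailed. The road_open/plan_valid example is hypothetical.

```python
# Minimal sketch: propositional model reconciliation where "updating" KB_h simply
# means adding formulas from KB_a. Entailment is checked by enumerating all truth
# assignments, so this only scales to toy knowledge bases.
from itertools import combinations, product

def entails(kb, query, variables):
    """True if every assignment satisfying all formulas in kb also satisfies query."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(f(env) for f in kb) and not query(env):
            return False
    return True

def explanation(kb_a, kb_h, query, variables):
    """Smallest subset of KB_a whose addition to KB_h makes the query entailed."""
    candidates = [f for f in kb_a if f not in kb_h]
    for size in range(len(candidates) + 1):
        for subset in combinations(candidates, size):
            if entails(list(kb_h) + list(subset), query, variables):
                return list(subset)
    return None  # KB_a itself does not entail the query

if __name__ == "__main__":
    # Hypothetical example: the agent knows "road_open" and "road_open -> plan_valid";
    # the human only knows the implication, so the fact "road_open" is the explanation.
    variables = ["road_open", "plan_valid"]
    road_open = lambda env: env["road_open"]
    rule = lambda env: (not env["road_open"]) or env["plan_valid"]
    plan_valid = lambda env: env["plan_valid"]
    eps = explanation(kb_a=[road_open, rule], kb_h=[rule],
                      query=plan_valid, variables=variables)
    print(len(eps))  # -> 1 (the single fact "road_open")
```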


Agent-Based Modeling for Predicting Pedestrian Trajectories Around an Autonomous Vehicle

Journal of Artificial Intelligence Research

This paper addresses modeling and simulating pedestrian trajectories when interacting with an autonomous vehicle in a shared space. Most pedestrian–vehicle interaction models are not suitable for predicting individual trajectories. Data-driven models yield accurate predictions but lack generalizability to new scenarios, usually do not run in real time, and produce results that are poorly explainable. Current expert models do not deal with the diversity of possible pedestrian interactions with the vehicle in a shared space and lack microscopic validation. We propose an expert pedestrian model that combines the social force model with a new decision model for anticipating pedestrian–vehicle interactions. The proposed model integrates different observed pedestrian behaviors, as well as the behaviors of social groups of pedestrians, in diverse interaction scenarios with a car. We calibrate the model by fitting the parameter values on a training set. We validate the model and evaluate its predictive potential through qualitative and quantitative comparisons with ground-truth trajectories. The proposed model reproduces observed behaviors that have not been replicated by the social force model, and outperforms the social force model at predicting pedestrian behavior around the vehicle on the dataset used. The model generates explainable, real-time trajectory predictions. Additional evaluation on a new dataset shows that the model generalizes well to new scenarios and can be applied to embedded prediction on an autonomous vehicle.
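
For readers unfamiliar with the social force model the paper builds on, the following is a minimal sketch of the classic Helbing-Molnár update (goal attraction plus exponential repulsion). The parameter values and the stopped-car example are illustrative assumptions, not the paper's calibrated decision model.

```python
import numpy as np

def social_force_step(pos, vel, goal, others, dt=0.1,
                      desired_speed=1.3, tau=0.5, A=2.0, B=0.3):
    """One Euler step of a basic social force model for a single pedestrian.
    pos, vel, goal are 2D arrays; others is an iterable of other agents' positions.
    Parameter values here are illustrative, not calibrated ones."""
    # Driving force toward the goal at the desired walking speed.
    direction = goal - pos
    direction = direction / (np.linalg.norm(direction) + 1e-9)
    f_goal = (desired_speed * direction - vel) / tau
    # Exponential repulsion from other agents (pedestrians or the vehicle).
    f_rep = np.zeros(2)
    for other in others:
        diff = pos - other
        dist = np.linalg.norm(diff) + 1e-9
        f_rep += A * np.exp(-dist / B) * diff / dist
    acc = f_goal + f_rep
    new_vel = vel + acc * dt
    new_pos = pos + new_vel * dt
    return new_pos, new_vel

# Example: a pedestrian heading toward (10, 0) while a stopped car sits at (5, 0.5).
p, v = social_force_step(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                         np.array([10.0, 0.0]), [np.array([5.0, 0.5])])
```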


Giving Zebrafish Psychotropic Drugs to Train AI Algorithms - Neuroscience News

#artificialintelligence

Summary: Researchers trained an AI to determine which psychotropic agent a zebrafish had been exposed to based on the animal's behaviors and locomotion patterns. Neuroscientists from St. Petersburg University, led by Professor Allan V. Kalueff, in collaboration with an international team of IT specialists, have become the first in the world to apply artificial intelligence (AI) algorithms to phenotyping zebrafish psychoactive drug responses. They trained the AI to determine, from the fish's responses, which psychotropic agents were used in the experiment. The research findings are published in the journal Progress in Neuro-Psychopharmacology and Biological Psychiatry. The zebrafish (Danio rerio) is a freshwater bony fish that is presently the second most widely used model organism in biomedical research, after mice.
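
The article does not specify the features or learning algorithm used; the sketch below only illustrates the general task of classifying drug exposure from summary locomotion features, using synthetic data and a scikit-learn classifier.

```python
# Generic sketch, not the study's method: classify which drug a fish was exposed
# to from hypothetical per-trial locomotion features (e.g., mean speed, turn rate,
# time near the bottom, freezing bouts). The data here is random placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))            # 300 trials, 4 locomotion features
y = rng.integers(0, 3, size=300)         # three hypothetical drug classes

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # chance level here; real data required
```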


Gaming the Known and Unknown via Puzzle Solving With an Artificial Intelligence Agent

#artificialintelligence

Researchers design multiple strategies for an artificial intelligence (AI) agent to solve a stochastic puzzle like Minesweeper. For decades, efforts in solving games focused almost exclusively on two-player games (i.e., board games such as checkers and chess-like games), where the game outcome can be correctly and efficiently predicted by applying an artificial intelligence (AI) search technique and collecting a massive amount of gameplay statistics. However, such methods and techniques cannot be applied directly to the puzzle-solving domain, since puzzles are generally played alone (single-player) and have unique characteristics (such as stochasticity or hidden information). This raises the question of how an AI technique can retain the performance it achieves on two-player games when it is instead applied to a single-agent puzzle. For years, puzzles and games had been regarded as interchangeable, or as one being a part of the other.
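
As a concrete illustration of decision-making under hidden information in a Minesweeper-like puzzle, the sketch below estimates each hidden cell's mine probability from adjacent revealed counts and opens the safest one. This naive local heuristic is an assumption for illustration, not the researchers' strategies.

```python
# Toy heuristic for a single-agent stochastic puzzle: pick the hidden cell with the
# lowest locally estimated mine probability.
from itertools import product

def neighbors(r, c, rows, cols):
    for dr, dc in product((-1, 0, 1), repeat=2):
        if (dr, dc) != (0, 0) and 0 <= r + dr < rows and 0 <= c + dc < cols:
            yield r + dr, c + dc

def safest_cell(board):
    """board[r][c] is an int count for revealed cells, or None for hidden cells."""
    rows, cols = len(board), len(board[0])
    prob = {(r, c): 0.0 for r, c in product(range(rows), range(cols))
            if board[r][c] is None}  # unconstrained cells default to 0 in this toy estimate
    for r, c in product(range(rows), range(cols)):
        if board[r][c] is not None:
            hidden = [n for n in neighbors(r, c, rows, cols) if board[n[0]][n[1]] is None]
            if hidden:
                p = board[r][c] / len(hidden)
                for n in hidden:
                    prob[n] = max(prob[n], p)  # keep the most pessimistic local estimate
    return min(prob, key=prob.get) if prob else None

# 2x3 toy position with a single "1" clue in the corner.
print(safest_cell([[1, None, None],
                   [None, None, None]]))  # -> (0, 2), a cell not adjacent to the clue
```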


The Application of Machine Learning Techniques for Predicting Match Results in Team Sport: A Review

Journal of Artificial Intelligence Research

Predicting the results of matches in sport is a challenging and interesting task. In this paper, we review a selection of studies from 1996 to 2019 that used machine learning for predicting match results in team sport. Considering both invasion sports and striking/fielding sports, we discuss commonly applied machine learning algorithms, as well as common approaches related to data and evaluation. Our study considers accuracies that have been achieved across different sports, and explores whether evidence exists to support the notion that outcomes of some sports may be inherently more difficult to predict. We also uncover common themes of future research directions and propose recommendations for future researchers. Although there remains a lack of benchmark datasets (apart from in soccer), and the differences between sports, datasets and features make between-study comparisons difficult, it is possible, as we discuss, to evaluate accuracy performance in other ways. Artificial Neural Networks were commonly applied in early studies; however, our findings suggest that a range of models should instead be compared. Selecting and engineering an appropriate feature set appears to be more important than having a large number of instances. For feature selection, we see potential for greater inter-disciplinary collaboration between sport performance analysis, a sub-discipline of sport science, and machine learning.
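
The review's recommendation to compare a range of models rather than defaulting to a neural network can be illustrated with a short scikit-learn sketch; the feature matrix below is synthetic and merely stands in for real match features.

```python
# Sketch of comparing several model families by cross-validation on a match-outcome
# task. The data is random placeholder data, not a real match dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12))      # e.g., recent form, home advantage, team ratings
y = rng.integers(0, 2, size=400)    # match outcome: home win vs. not

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "mlp": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```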


Researchers claim biometric deepfake detection method improves state-of-the-art

#artificialintelligence

Biometrics can effectively be used to detect deepfakes, according to a paper from a team of Italian and German researchers reported by Unite.AI, and could be a less "unwieldy" approach than methods based on detecting synthetic artefacts. The framework for the method specifies the use of at least ten genuine videos of the subject to train the biometric model, write the researchers from the University of Naples Federico II and the Technical University of Munich. The research, 'Audio-Visual Person-of-Interest DeepFake Detection', has been posted to arXiv and describes what the authors say is a new state of the art in deepfake detection. In testing against well-known datasets, the researchers improved area under the curve (AUC) scores by 3 and 10 percent for accuracy in identifying genuine high- and low-quality videos, respectively, and by 7 percent for deepfake videos. Interestingly, on high-quality videos, the worst-performing system delivered deepfake detection accuracy above 69 percent.
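
The article does not detail the authors' pipeline; the sketch below only illustrates the general idea of distance-based biometric verification for deepfake detection, with synthetic vectors standing in for identity embeddings learned from the genuine reference videos, and an AUC computed with scikit-learn.

```python
# Generic sketch (not the authors' method): score a test video by its distance to
# the person-of-interest's genuine reference embeddings, then measure ranking
# quality with the area under the ROC curve. Embeddings here are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

def fake_score(test_embedding, reference_embeddings):
    """Higher score = farther from the subject's genuine reference embeddings."""
    refs = np.asarray(reference_embeddings)
    return np.linalg.norm(refs - test_embedding, axis=1).min()

rng = np.random.default_rng(0)
references = rng.normal(0.0, 1.0, size=(10, 128))   # ten genuine reference videos
genuine = rng.normal(0.0, 1.0, size=(50, 128))      # same identity distribution
fakes = rng.normal(1.5, 1.0, size=(50, 128))        # shifted (impersonated) identity

scores = [fake_score(e, references) for e in np.vstack([genuine, fakes])]
labels = [0] * len(genuine) + [1] * len(fakes)
print(roc_auc_score(labels, scores))  # AUC on the toy data
```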


Multiobjective Tree-Structured Parzen Estimator

Journal of Artificial Intelligence Research

Practitioners often encounter challenging real-world problems that involve a simultaneous optimization of multiple objectives in a complex search space. To address these problems, we propose a practical multiobjective Bayesian optimization algorithm, called Multiobjective Tree-structured Parzen Estimator (MOTPE), which extends the widely used Tree-structured Parzen Estimator (TPE) algorithm. Our numerical results demonstrate that MOTPE approximates the Pareto fronts of a variety of benchmark problems, as well as of a convolutional neural network design problem, better than existing methods. We also investigate empirically how the configuration of MOTPE affects its behavior and performance, and how effective asynchronous parallelization of the method is.
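
A usage sketch, assuming a recent version of Optuna, which ships a TPE-based sampler with multiobjective support (the MOTPE algorithm; the exact sampler class name depends on the Optuna version). The two-objective toy problem is illustrative only.

```python
# Multiobjective optimization with a TPE-based sampler in Optuna (version-dependent:
# older releases expose a dedicated MOTPE sampler, newer ones fold it into TPESampler).
import optuna

def objective(trial):
    x = trial.suggest_float("x", 0.0, 5.0)
    y = trial.suggest_float("y", 0.0, 5.0)
    # Two competing objectives to be minimized simultaneously.
    f1 = x ** 2 + y ** 2
    f2 = (x - 2.0) ** 2 + (y - 2.0) ** 2
    return f1, f2

study = optuna.create_study(directions=["minimize", "minimize"],
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=100)
print(len(study.best_trials))  # Pareto-optimal trials found so far
```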


Global Big Data Conference

#artificialintelligence

A multimodal neural network can be used to predict user sentiment from features such as text, audio, and visual data. Speech and language recognition technology is a rapidly developing field, which has led to the emergence of novel speech dialog systems, such as Amazon Alexa and Siri. A significant milestone in the development of dialog artificial intelligence (AI) systems is the addition of emotional intelligence. A system able to recognize the emotional states of the user, in addition to understanding language, would generate a more empathetic response, leading to a more immersive experience for the user. "Multimodal sentiment analysis" is a group of methods that constitute the gold standard for an AI dialog system with sentiment detection.
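
The article does not describe a specific architecture; the sketch below is a generic late-fusion design in PyTorch, assuming each modality arrives as a fixed-size feature vector (the dimensions are illustrative).

```python
# Generic late-fusion multimodal sentiment sketch: encode each modality separately,
# concatenate the representations, and classify the fused vector.
import torch
import torch.nn as nn

class LateFusionSentiment(nn.Module):
    def __init__(self, text_dim=300, audio_dim=74, visual_dim=35, n_classes=3):
        super().__init__()
        self.text = nn.Sequential(nn.Linear(text_dim, 64), nn.ReLU())
        self.audio = nn.Sequential(nn.Linear(audio_dim, 32), nn.ReLU())
        self.visual = nn.Sequential(nn.Linear(visual_dim, 32), nn.ReLU())
        self.head = nn.Linear(64 + 32 + 32, n_classes)

    def forward(self, text, audio, visual):
        fused = torch.cat([self.text(text), self.audio(audio), self.visual(visual)], dim=-1)
        return self.head(fused)

model = LateFusionSentiment()
logits = model(torch.randn(8, 300), torch.randn(8, 74), torch.randn(8, 35))
print(logits.shape)  # torch.Size([8, 3]) -> per-example sentiment class scores
```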


Fracture Detection: Study Suggests AI Assessment May Be as Effective as Clinician Assessment

#artificialintelligence

Could artificial intelligence (AI) assessment have comparable diagnostic accuracy to clinician assessment for fracture detection? In a recently published meta-analysis of 42 studies, the study authors noted 92 percent sensitivity and 91 percent specificity for AI in comparison to 91 percent sensitivity and 92 percent specificity for clinicians based on internal validation test sets. For the external validation test sets, clinicians had 94 percent specificity and sensitivity in comparison to 91 percent specificity and sensitivity for AI, according to the study. In essence, the study authors found no statistically significant differences between AI and clinician diagnosis of fractures. "The results from this meta-analysis cautiously suggest that AI is noninferior to clinicians in terms of diagnostic performance in fracture detection, showing promise as a useful diagnostic tool," wrote Dominic Furniss, DM, MA, MBBCh, FRCS(Plast), a professor of plastic and reconstructive surgery in the Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences at the Botnar Research Centre in Oxford, United Kingdom, and colleagues.
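
For reference, the reported sensitivity and specificity figures are derived from confusion-matrix counts, as in the toy calculation below (the counts are illustrative and are not the meta-analysis data).

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
def sensitivity_specificity(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts chosen to reproduce the 92%/91% figures for illustration.
sens, spec = sensitivity_specificity(tp=92, fn=8, tn=91, fp=9)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.92, 0.91
```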


Predicting Decisions in Language Based Persuasion Games

Journal of Artificial Intelligence Research

Sender-receiver interactions, and specifically persuasion games, are widely researched in economic modeling and artificial intelligence, and serve as a solid foundation for powerful applications. However, in the classic persuasion games setting, the messages sent from the expert to the decision-maker are abstract or well-structured application-specific signals rather than natural (human) language messages, although natural language is a very common communication signal in real-world persuasion setups. This paper addresses the use of natural language in persuasion games, exploring its impact on the decisions made by the players and aiming to construct effective models for the prediction of these decisions. For this purpose, we conduct an online repeated interaction experiment. At each trial of the interaction, an informed expert aims to sell an uninformed decision-maker a vacation in a hotel, by sending her a review that describes the hotel. While the expert is exposed to several scored reviews, the decision-maker observes only the single review sent by the expert, and her payoff in case she chooses to take the hotel is a random draw from the review score distribution available to the expert only. The expert’s payoff, in turn, depends on the number of times the decision-maker chooses the hotel. We also compare the behavioral patterns in this experiment to the equivalent patterns in similar experiments where the communication is based on the numerical values of the reviews rather than the reviews’ text, and observe substantial differences which can be explained through an equilibrium analysis of the game. We consider a number of modeling approaches for our verbal communication setup, differing from each other in the model type (deep neural network (DNN) vs. linear classifier), the type of features used by the model (textual, behavioral or both) and the source of the textual features (DNN-based vs. hand-crafted). Our results demonstrate that given a prefix of the interaction sequence, our models can predict the future decisions of the decision-maker, particularly when a sequential modeling approach and hand-crafted textual features are applied. Further analysis of the hand-crafted textual features allows us to make initial observations about the aspects of text that drive decision making in our setup.
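
As a simplified stand-in for the prediction task (not the paper's models or features), the sketch below predicts the decision-maker's next accept/decline choice from a few hypothetical hand-crafted features of the interaction prefix and of the sent review, using a linear classifier on synthetic data.

```python
# Toy illustration of predicting decisions from prefix features in a persuasion game.
# Features and labels are synthetic; the paper uses richer textual, behavioral and
# sequential (DNN-based) models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0, 1, n),       # acceptance rate so far in the interaction prefix
    rng.uniform(0, 10, n),      # payoff realized on the last accepted hotel
    rng.integers(20, 200, n),   # length of the current review in words
    rng.uniform(-1, 1, n),      # hand-crafted positivity score of the current review
])
y = rng.integers(0, 2, n)       # 1 = take the hotel, 0 = decline

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())  # chance level on synthetic data
```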