
Collaborating Authors

 Cohen, Robin


Autonomous Vehicle Visual Signals for Pedestrians: Experiments and Design Recommendations

arXiv.org Artificial Intelligence

Autonomous Vehicles (AVs) will transform transportation, and with it the interaction between vehicles and pedestrians. In the absence of a driver, it is not clear how an AV can communicate its intention to pedestrians. One option is to use visual signals. To advance their design, we conduct four human-participant experiments and evaluate six representative AV visual signals for visibility, intuitiveness, persuasiveness, and usability at pedestrian crossings. Based on the results, we distill twelve practical design recommendations for AV visual signals, with a focus on signal pattern design and placement. Moreover, the paper advances the methodology for experimental evaluation of visual signals, including lab, closed-course, and public-road tests using an autonomous vehicle. The paper also reports insights on pedestrian crosswalk behaviours and the impact of pedestrian trust in AVs on those behaviours. We hope that this work will constitute valuable input to the ongoing development of international standards for AV lamps, and thus help mature automated driving in general.


Towards Provably Moral AI Agents in Bottom-Up Learning Frameworks

AAAI Conferences

We examine moral decision making in autonomous systems, inspired by a central question posed by Rossi with respect to moral preferences: can AI systems based on statistical machine learning (which do not provide a natural way to explain or justify their decisions) be used for embedding morality into a machine in a way that allows us to prove that nothing morally wrong will happen? We argue for an evaluation held to the same standard as a human agent, removing the demand that ethical behavior always be achieved. We introduce four key meta-qualities desired for our moral standards, and then clarify how we can prove that an agent will correctly learn to perform moral actions given a set of samples within certain error bounds. Our group-dynamic approach enables us to demonstrate that the learned models converge to a common function, achieving stability. We further explain a valuable intrinsic consistency check made possible by deriving logical statements from the machine learning model. In all, this work proposes an approach for building ethical AI systems from the perspective of artificial intelligence research, and sheds light on how much learning is required for an intelligent agent to behave morally with negligible error.


An Architecture for a Military AI System with Ethical Rules

AAAI Conferences

The current era of computer science has seen a significant increase in the application of machine learning (ML) and knowledge representation (KR). The problem with the current situation regarding ethics and AI is the weakness of ML and KR when used separately. ML will “learn” ethical behaviour as it is observed and may therefore disagree with human morals. KR, on the other hand, is too rigid and can only process scenarios that have been predefined. This paper proposes a solution to the question posed by Rossi (2016): “How to combine bottom-up learning approaches with top-down rule-based approaches in defining ethical principles for AI systems?” The system focuses on potentially unethical behaviours caused by human nature rather than on ethical dilemmas caused by technological insufficiency in wartime scenarios. Our solution is an architecture that combines a classifier, which identifies targets in wartime scenarios, with a rule-based system, in the form of ontologies, that guides an AI agent's behaviour in the given circumstance.
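The described architecture, a bottom-up classifier whose proposals can be vetoed by top-down rules, can be sketched as a thin control loop. The sketch below is an illustration only, not the paper's implementation; the function names and the sample rule are hypothetical.

```python
def decide_action(classify, rules, observation):
    """Hybrid decision sketch: a learned classifier proposes an action,
    then top-down ethical rules may veto or replace it."""
    proposal = classify(observation)
    for rule in rules:
        verdict = rule(proposal, observation)
        if verdict is not None:
            return verdict  # a rule fires and overrides the proposal
    return proposal  # no rule objected; follow the learned proposal

# Hypothetical rule: never engage when civilians are present.
no_civilian_harm = lambda proposal, obs: "hold" if obs.get("civilians") else None
```

For example, `decide_action(lambda obs: "engage", [no_civilian_harm], {"civilians": True})` returns `"hold"`: the rigid KR layer overrides the learned behaviour exactly where learned behaviour may disagree with human morals.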


Conventional Machine Learning for Social Choice

AAAI Conferences

Deciding the outcome of an election when voters have provided only partial orderings over their preferences requires voting rules that accommodate missing data. While existing techniques, including considerable recent work, work around missing data rather than model it, we propose the novel application of conventional machine learning techniques to predict the missing components of ballots via latent patterns in the information that voters are able to provide. We show that suitable predictive features can be extracted from the data, and demonstrate the high performance of our new framework on ballots from many real-world elections, including comparisons with existing techniques for voting with partial orderings. Our technique offers a new and interesting conceptualization of the problem, with stronger connections to machine learning than conventional social choice techniques.
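To make the idea of predicting missing ballot components concrete: even a simple nearest-neighbour imputation over pairwise comparisons captures the flavour of learning from patterns in what voters do provide. This is a toy sketch, not the paper's framework; all names are hypothetical.

```python
from itertools import combinations

def pairwise_agreements(ranking_a, ranking_b):
    """Count candidate pairs ordered the same way in both rankings
    (only pairs of candidates present in both are compared)."""
    pos_a = {c: i for i, c in enumerate(ranking_a)}
    pos_b = {c: i for i, c in enumerate(ranking_b)}
    common = set(pos_a) & set(pos_b)
    return sum(1 for x, y in combinations(common, 2)
               if (pos_a[x] < pos_a[y]) == (pos_b[x] < pos_b[y]))

def complete_ballot(partial, complete_ballots):
    """Nearest-neighbour imputation: extend `partial` with the missing
    candidates, ordered as in the most similar complete ballot."""
    best = max(complete_ballots, key=lambda b: pairwise_agreements(partial, b))
    missing = [c for c in best if c not in partial]
    return list(partial) + missing  # crude: appends all missing at the end
```

For instance, given complete ballots `["A", "B", "C"]` and `["C", "B", "A"]`, the partial ballot `["A", "B"]` is completed to `["A", "B", "C"]`, because it agrees with the first voter on the one pair they share.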


On Manipulability of Random Serial Dictatorship in Sequential Matching with Dynamic Preferences

AAAI Conferences

We consider the problem of repeatedly matching a set of alternatives to a set of agents in the absence of monetary transfers. We propose a generic framework for evaluating sequential matching mechanisms with dynamic preferences, and show that, unlike in single-shot settings, the random serial dictatorship mechanism is manipulable.
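For readers unfamiliar with the mechanism, one round of random serial dictatorship is easy to state in code: agents pick in a uniformly random order, each taking their most-preferred alternative still available. The sketch below is a generic single-shot RSD, not the paper's sequential variant, and all names are hypothetical.

```python
import random

def random_serial_dictatorship(preferences, rng=None):
    """One round of RSD.

    preferences: dict mapping agent -> list of alternatives, best first.
    Returns a dict mapping agent -> assigned alternative (or None if
    every alternative the agent ranked is already taken).
    """
    rng = rng or random.Random()
    order = list(preferences)
    rng.shuffle(order)  # the random "dictatorship" order
    available = {a for prefs in preferences.values() for a in prefs}
    assignment = {}
    for agent in order:
        choice = next((a for a in preferences[agent] if a in available), None)
        assignment[agent] = choice
        if choice is not None:
            available.discard(choice)
    return assignment
```

In a single round, reporting true preferences is a dominant strategy; the paper's point is that when this mechanism is repeated over rounds with dynamic preferences, an agent can gain by misreporting in early rounds.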


Matching with Dynamic Ordinal Preferences

AAAI Conferences

We consider the problem of repeatedly matching a set of alternatives to a set of agents with dynamic ordinal preferences. Despite a recent focus on designing one-shot matching mechanisms in the absence of monetary transfers, little work has studied the strategic behavior of agents in sequential assignment problems. We formulate a generic dynamic matching problem via a sequential stochastic matching process. We design a mechanism based on random serial dictatorship (RSD) that, given any history of preferences and matching decisions, guarantees global stochastic strategyproofness while satisfying desirable local properties. We further investigate the notion of envy-freeness in such sequential settings.


A Market-Based Coordination Mechanism for Resource Planning Under Uncertainty

AAAI Conferences

Multiagent Resource Allocation (MARA) distributes a set of resources among a set of intelligent agents in order to respect the preferences of the agents and to maximize some measure of global utility, such as minimizing total cost or maximizing total return. We are interested in MARA solutions that provide optimal or close-to-optimal allocations, in terms of maximizing a global welfare function, with low communication and computation cost, while respecting the priority of agents and temporal dependencies between resources. We propose an MDP approach for resource planning in multiagent environments: our approach formulates the internal preference model and success of each individual agent as a single MDP, and then, to optimize global utility, applies a market-based solution to coordinate these decentralized MDPs.
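The market-based coordination step can be illustrated with a toy one-shot, Vickrey-style (second-price) allocation: each agent's valuations stand in for the values its MDP assigns to resources. This ignores the MDP dynamics and the temporal dependencies the paper handles, and every name below is hypothetical.

```python
def second_price_allocation(valuations):
    """Assign each resource to the agent who values it most, charging the
    second-highest valuation (a toy Vickrey-style market).

    valuations: dict mapping agent -> {resource: value}.
    Returns (allocation: resource -> agent, prices: resource -> price).
    """
    resources = {r for v in valuations.values() for r in v}
    allocation, prices = {}, {}
    for r in sorted(resources):
        # Bids for this resource, highest first; absent agents bid 0.
        bids = sorted(((v.get(r, 0.0), a) for a, v in valuations.items()),
                      reverse=True)
        _, winner = bids[0]
        allocation[r] = winner
        prices[r] = bids[1][0] if len(bids) > 1 else 0.0
    return allocation, prices
```

With valuations `{"a": {"r1": 5.0, "r2": 1.0}, "b": {"r1": 3.0, "r2": 4.0}}`, resource `r1` goes to `a` at price 3.0 and `r2` to `b` at price 1.0; in a full MARA setting the per-agent MDPs would generate such bids round by round.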


Exploring the Effects of Errors in Assessment and Time Requirements of Learning Objects in a Peer-Based Intelligent Tutoring System

AAAI Conferences

We revisit a framework for designing peer-based intelligent tutoring systems motivated by McCalla's ecological approach, where learning is facilitated by the previous experiences of peers with a corpus of learning objects. Prior research demonstrated the value of a proposed algorithm for modeling student learning and for selecting the most beneficial learning objects to present to new students. In this paper, we first adjust the validation of this approach to demonstrate its ability to cope with errors in assessing the learning of student peers. We then deepen the representation of learning objects to reflect the expected time to completion and demonstrate how this may lead to more effective selection of learning objects for students, and thus more effective learning. As part of our exploration of these new adjustments, we offer insights into how the size of learning object repositories may affect student learning, suggesting future extensions for the model and its validation.
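The object-selection idea (pick whichever learning object most benefited similar peers in the past) can be sketched in a few lines. The names, the scalar "level" model of a student, and the tolerance parameter below are hypothetical simplifications, not the paper's student model.

```python
def select_learning_object(history, candidate_objects, student_level,
                           tolerance=1.0):
    """Ecological-approach flavour: choose the candidate object with the
    highest mean learning gain among peers at a similar level.

    history: list of (object_id, peer_level, learning_gain) records.
    """
    def expected_gain(obj):
        gains = [g for o, lvl, g in history
                 if o == obj and abs(lvl - student_level) <= tolerance]
        return sum(gains) / len(gains) if gains else 0.0
    return max(candidate_objects, key=expected_gain)
```

The paper's extension that weighs expected time to completion could be folded in by, e.g., dividing each gain by the peer's recorded completion time, and errors in peer assessment show up here as noise in the recorded gains.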