Kraus, Sarit


Advice Provision for Energy Saving in Automobile Climate-Control System

AI Magazine

Reducing the energy consumption of climate control systems is important in order to reduce the human environmental footprint. The need to save energy becomes even greater when considering an electric car, since heavy use of the climate control system may exhaust the battery. In this article we consider a method for an automated agent to provide advice to drivers that will motivate them to reduce the energy consumption of their climate control unit. Our approach takes into account both the energy consumption of the climate control system and the expected comfort level of the driver. We therefore build two models: one assesses the energy consumption of the climate control system as a function of the system's settings, and the other models the human comfort level as a function of those settings. Using these models, the agent advises the driver on how to set the climate control system, suggesting settings that try to preserve a high level of comfort while consuming as little energy as possible. We empirically show that drivers equipped with our advice-providing agent save significantly more energy than drivers who are not.
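The advice-selection step the abstract describes can be sketched as a constrained search: given the two fitted models, pick the lowest-energy setting whose predicted comfort stays acceptable. This is an illustrative sketch, not the paper's implementation; the toy models, the `(temperature, fan_level)` settings representation, and the comfort threshold are all assumptions.

```python
# Illustrative sketch (not the paper's method): combine an energy model and a
# comfort model to advise climate-control settings.

def advise(settings_space, energy_model, comfort_model, min_comfort=0.8):
    """Return the lowest-energy setting whose predicted comfort is acceptable."""
    acceptable = [s for s in settings_space if comfort_model(s) >= min_comfort]
    if not acceptable:                      # fall back to maximal comfort
        return max(settings_space, key=comfort_model)
    return min(acceptable, key=energy_model)

# Toy stand-ins for the two learned models; settings = (temperature, fan_level).
energy_model = lambda s: abs(22 - s[0]) * 0.5 + s[1] * 0.3
comfort_model = lambda s: 1.0 - abs(23 - s[0]) * 0.1 - abs(2 - s[1]) * 0.05

space = [(t, f) for t in range(18, 28) for f in range(1, 5)]
print(advise(space, energy_model, comfort_model))  # → (22, 1)
```

The threshold encodes the trade-off in the abstract: any setting above it is "comfortable enough", and energy alone breaks the tie.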


Leveraging Fee-Based, Imperfect Advisors in Human-Agent Games of Trust

AAAI Conferences

This paper explores whether the addition of costly, imperfect, and exploitable advisors to Berg's investment game enhances or detracts from investor performance in both one-shot and multi-round interactions. We then leverage our findings to develop an automated investor agent that performs as well as or better than humans in these games. To gather this data, we extended Berg's game and conducted a series of experiments using Amazon's Mechanical Turk to determine how humans behave in these potentially adversarial conditions. Our results indicate that, in games of short duration, advisors do not stimulate positive behavior and are not useful in providing actionable advice. In long-term interactions, however, advisors do stimulate positive behavior with significantly increased investments and returns. By modeling human behavior across several hundred participants, we were then able to develop agent strategies that maximized return on investment and performed as well as or significantly better than humans. In one-shot games, we identified an ideal investment value that, on average, resulted in positive returns as long as advisor exploitation was not allowed. For the multi-round games, our agents relied on the corrective presence of advisors to stimulate positive returns on maximum investment.
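The payoff structure of the standard Berg investment game underlying this work can be sketched in a few lines; the triple multiplier is the game's classic parameter, but the return fractions and round counts below are illustrative, not the authors' experimental setup.

```python
# Hedged sketch of the standard Berg investment game the paper extends.
import random

def play_round(invest, return_fraction, multiplier=3):
    """Investor sends `invest`; it is multiplied; trustee returns a fraction."""
    pot = invest * multiplier
    returned = pot * return_fraction
    investor_profit = returned - invest
    trustee_profit = pot - returned
    return investor_profit, trustee_profit

random.seed(0)
# Average investor profit for a fixed investment against random trustees:
# with a uniform return fraction, the expectation is 3*10*0.5 - 10 = 5.
profits = [play_round(10, random.random())[0] for _ in range(10000)]
print(sum(profits) / len(profits))
```

An advisor, in this framing, would supply the investor with a (possibly noisy or self-interested) estimate of the trustee's return fraction before `invest` is chosen.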


How to Change a Group’s Collective Decision?

AAAI Conferences

Persuasion is a common social and economic activity. It usually arises when conflicting interests among agents exist, and one of the agents wishes to sway the opinions of others. This paper considers the problem of an automated agent that needs to influence the decision of a group of self-interested agents that must reach an agreement on a joint action. For example, consider an automated agent that aims to reduce the energy consumption of a nonresidential building, by convincing a group of people who share an office to agree on an economy mode of the air-conditioning and low light intensity. In this paper we present four problems that address issues of minimality and safety of the persuasion process. We discuss the relationships to similar problems from social choice, and show that if the agents are using Plurality or Veto as their voting rule all of our problems are in P. We also show that with k-Approval, Bucklin and Borda voting rules some problems become intractable. We thus present heuristics for efficient persuasion with Borda, and evaluate them through simulations.
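For the Plurality case mentioned above, the polynomial-time result rests on simple counting: to make a preferred alternative win it always suffices to persuade supporters of the current strongest rival, one at a time. The sketch below is a toy illustration of that counting argument, not the paper's algorithm; the vote encoding and tie handling (the target must strictly win) are assumptions.

```python
# Hedged sketch: minimal persuasion under Plurality by greedy counting.
from collections import Counter

def min_persuasions_plurality(votes, p):
    """Minimum number of voters who must switch their vote so that p wins."""
    scores = Counter(votes)
    switched = 0
    while any(c != p and s >= scores[p] for c, s in scores.items()):
        # Persuade one supporter of the current strongest rival: each switch
        # lowers the rival by 1 and raises p by 1, a swing of 2.
        rival = max((c for c in scores if c != p), key=lambda c: scores[c])
        scores[rival] -= 1
        scores[p] += 1
        switched += 1
    return switched

print(min_persuasions_plurality(list("aabbbc"), "a"))  # → 1
```

Taking a vote from the strongest rival is optimal here precisely because Plurality scores are independent counts, which is what fails for rules like Borda, where a switch perturbs many candidates' scores at once.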


Towards Adapting Cars to their Drivers

AI Magazine

Traditionally, vehicles have been considered as machines that are controlled by humans for the purpose of transportation. A more modern view is to envision drivers and passengers as actively interacting with a complex automated system. Such interactive activity leads us to consider intelligent and advanced ways of interaction leading to cars that can adapt to their drivers. In this paper, we focus on the Adaptive Cruise Control (ACC) technology that allows a vehicle to automatically adjust its speed to maintain a preset distance from the vehicle in front of it based on the driver’s preferences. Although individual drivers have different driving styles and preferences, current systems do not distinguish among users. We introduce a method to combine machine learning algorithms with demographic information and expert advice into existing automated assistive systems. This method can reduce the interactions between drivers and automated systems by adjusting parameters relevant to the operation of these systems based on their specific drivers and context of drive. We also learn when users tend to engage and disengage the automated system. This method sheds light on the kinds of dynamics that users develop while interacting with automation and can teach us how to improve these systems for the benefit of their users. While generic packages such as Weka were successful in learning drivers’ behavior, we found that improved learning models could be developed by adding information on drivers’ demographics and a previously developed model about different driver types. We present the general methodology of our learning procedure and suggest applications of our approach to other domains as well.
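The core idea of augmenting a learned driver model with demographic features can be illustrated with a tiny nearest-neighbour predictor of a driver's preferred ACC gap. This is only a sketch of the general approach, not the paper's learning procedure; the `(age, aggressiveness)` features, the gap values, and the data are all hypothetical.

```python
# Illustrative sketch: predicting a driver's preferred ACC following gap (in
# seconds) from hypothetical demographic/style features via k-nearest neighbours.

def predict_gap(profile, history, k=3):
    """Average the preferred gaps of the k most similar known drivers."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(history, key=lambda rec: dist(rec[0], profile))[:k]
    return sum(gap for _, gap in nearest) / k

# Each record: ((age, aggressiveness), preferred_gap_seconds) -- toy data.
history = [((25, 0.9), 1.0), ((60, 0.2), 2.5),
           ((30, 0.8), 1.2), ((55, 0.3), 2.3), ((40, 0.5), 1.8)]
print(predict_gap((28, 0.85), history))
```

In the paper's terms, a richer learner (for example, Weka's classifiers over drive context plus demographics) would replace this toy distance-based model.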


Identifying Missing Node Information in Social Networks

AAAI Conferences

In recent years, social networks have surged in popularity as one of the main applications of the Internet. This has generated great interest in researching these networks by various fields in the scientific community. One key aspect of social network research is identifying important missing information which is not explicitly represented in the network, or is not visible to all. To date, this line of research typically focused on what connections were missing between nodes, or what is termed the "Missing Link Problem." This paper introduces a new Missing Nodes Identification problem where missing members in the social network structure must be identified. Towards solving this problem, we present an approach based on clustering algorithms combined with measures from missing link research. We show that this approach has beneficial results in the missing nodes identification process and we measure its performance in several different scenarios.
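The "clustering combined with missing-link measures" recipe can be sketched as follows: each point where an unknown member attaches to the visible network is a placeholder, placeholders are grouped by the similarity of their known neighbourhoods, and each cluster is taken to be one missing node. The greedy merge, the Jaccard measure, and the threshold below are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch: grouping placeholder nodes into candidate missing members
# by neighbourhood similarity (Jaccard, a common missing-link measure).

def cluster_placeholders(neighbors, threshold=0.5):
    """Greedily merge placeholders whose known-neighbour sets overlap."""
    jaccard = lambda a, b: len(a & b) / len(a | b)
    clusters = []
    for ph, nbrs in neighbors.items():
        for cl in clusters:
            if jaccard(nbrs, cl["nbrs"]) >= threshold:
                cl["members"].append(ph)
                cl["nbrs"] |= nbrs          # grow the cluster's neighbourhood
                break
        else:
            clusters.append({"members": [ph], "nbrs": set(nbrs)})
    return [sorted(cl["members"]) for cl in clusters]

# Each placeholder maps to the visible nodes it is attached to (toy graph).
placeholders = {"p1": {"a", "b"}, "p2": {"a", "b", "c"}, "p3": {"x", "y"}}
print(cluster_placeholders(placeholders))  # → [['p1', 'p2'], ['p3']]
```

Here `p1` and `p2` share most neighbours and are merged into one inferred missing node, while `p3` stands alone.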


Comparing Agents' Success against People in Security Domains

AAAI Conferences

The interaction of people with autonomous agents has become increasingly prevalent. Some of these settings include security domains, where people can be characterized as uncooperative, hostile, manipulative, and tending to take advantage of the situation for their own needs. This makes it challenging to design proficient agents to interact with people in such environments. Evaluating the success of the agents automatically before evaluating them with people or deploying them could alleviate this challenge and result in better designed agents. In this paper we show how Peer Designed Agents (PDAs) -- computer agents developed by human subjects -- can be used as a method for evaluating autonomous agents in security domains. Such evaluation can reduce the effort and costs involved in evaluating autonomous agents interacting with people to validate their efficacy. Our experiments included more than 70 human subjects and 40 PDAs developed by students. The study provides empirical support that PDAs can be used to compare the proficiency of autonomous agents when matched with people in security domains.


Manipulating Boolean Games Through Communication

AAAI Conferences

We address the issue of manipulating games through communication. In the specific setting we consider (a variation of Boolean games), we assume there is some set of environment variables, the value of which is not directly accessible to players; each player has their own beliefs about these variables, and makes decisions about what actions to perform based on these beliefs. The communication we consider takes the form of (truthful) announcements about the value of some environment variables; the effect of an announcement about some variable is to modify the beliefs of the players who hear the announcement so that they accurately reflect the value of the announced variables. By choosing announcements appropriately, it is possible to perturb the game away from certain rational outcomes and towards others. We specifically focus on the issue of stabilisation: making announcements that transform a game from having no stable states to one that has stable configurations.
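The announcement mechanism described above has a compact reading: a player's belief is a set of valuations of the environment variables it considers possible, and a truthful announcement of a variable's value filters out the valuations that disagree. The sketch below illustrates just this belief update; the variable names and the list-of-dicts encoding are assumptions.

```python
# Hedged sketch: belief revision by truthful public announcement.

def announce(beliefs, variable, value):
    """Restrict a belief set to valuations consistent with the announcement."""
    return [w for w in beliefs if w[variable] == value]

# A player initially considers all four valuations of p and q possible.
beliefs = [{"p": a, "q": b} for a in (True, False) for b in (True, False)]

# The true value of p is announced; only p-worlds survive.
print(announce(beliefs, "p", True))
```

In the paper's setting, shrinking belief sets this way changes which actions players consider rational, which is what lets a well-chosen announcement steer the game toward stable configurations.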


Intentions in Equilibrium

AAAI Conferences

Intentions have been widely studied in AI, both in the context of decision-making within individual agents and in multi-agent systems. Work on intentions in multi-agent systems has focused on joint intention models, which characterise the mental state of agents with a shared goal engaged in teamwork. In the absence of shared goals, however, intentions play another crucial role in multi-agent activity: they provide a basis around which agents can mutually coordinate activities. Models based on shared goals do not attempt to account for or explain this role of intentions. In this paper, we present a formal model of multi-agent systems in which belief-desire-intention agents choose their intentions taking into account the intentions of others. To understand rational mental states in such a setting, we formally define and investigate notions of multi-agent intention equilibrium, which are related to equilibrium concepts in game theory.
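The equilibrium notion gestured at here parallels Nash equilibrium: a profile of intentions is in equilibrium when no agent can raise its own utility by unilaterally switching to a different intention. The following is a toy game-theoretic sketch of that check, not the paper's BDI formalism; the coordination example and utility function are assumptions.

```python
# Hedged sketch: checking whether a profile of intentions is in equilibrium,
# in the Nash sense of "no profitable unilateral deviation".

def is_intention_equilibrium(profile, options, utility):
    """Check that each agent's intention is a best response to the others'."""
    for i in range(len(profile)):
        for alt in options[i]:
            deviated = profile[:i] + [alt] + profile[i + 1:]
            if utility(i, deviated) > utility(i, profile):
                return False
    return True

# Two agents coordinating on a meeting place: matching intentions pay 1.
options = [["cafe", "park"], ["cafe", "park"]]
utility = lambda i, prof: 1 if prof[0] == prof[1] else 0

print(is_intention_equilibrium(["cafe", "cafe"], options, utility))  # → True
print(is_intention_equilibrium(["cafe", "park"], options, utility))  # → False
```

Matching intentions form an equilibrium because any lone deviation breaks the coordination, which mirrors the paper's point that intentions can coordinate agents even without a shared goal.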