Designing for Human-Agent Interaction

AI Magazine

Interacting with a computer requires adopting some metaphor to guide our actions and expectations. Most human-computer interfaces can be classified according to two dominant metaphors: (1) agent and (2) environment. Interactions based on an agent metaphor treat the computer as an intermediary that responds to user requests. In the environment metaphor, a model of the task domain is presented for the user to interact with directly. The term agent has come to refer to the automation of aspects of human-computer interaction (HCI), such as anticipating commands or autonomously performing actions. Norman's 1984 model of HCI is introduced as a reference for organizing and evaluating research in human-agent interaction (HAI). A wide variety of heterogeneous research involving HAI is shown to reflect the automation of one of the stages of action or evaluation within Norman's model. Improvements in HAI are expected to result from a more heterogeneous use of methods that target multiple stages simultaneously.


Hired help

#artificialintelligence

Not long ago, a startup founder in San Francisco was trying to organise a meeting with someone visiting from Europe, and setting a time required dozens of e-mails back and forth. The European arrived with a bottle of wine for the founder's personal assistant, Clara, as a gesture of thanks for putting up with the scheduling hassle. But the assistant could not accept the gift. Clara is a software service from a startup of the same name that helps schedule meetings via e-mail. It is powered by artificial intelligence (AI), with some human supervision.


Clara is applying to be your virtual personal assistant, no benefits required

AITopics Original Links

Clara Labs' product, dubbed Clara, is a virtual assistant that, when looped in on email conversations, can schedule appointments based on the requests in those emails. SAN FRANCISCO – As with many tech company epiphanies, Maran Nelson had hers in the wee hours of the morning. It was 2 a.m., and Nelson had scheduled herself to call an important potential investor in Singapore. But her time zone calculation was off. She missed the call and the sale.


When Multiple Agents Learn to Schedule: A Distributed Radio Resource Management Framework

arXiv.org Machine Learning

Interference among concurrent transmissions in a wireless network is a key factor limiting the system performance. One way to alleviate this problem is to manage the radio resources in order to maximize either the average or the worst-case performance. However, joint consideration of both metrics is often neglected as they are competing in nature. In this article, a mechanism for radio resource management using multi-agent deep reinforcement learning (RL) is proposed, which strikes the right trade-off between maximizing the average and the $5^{th}$ percentile user throughput. Each transmitter in the network is equipped with a deep RL agent, receiving partial observations from the network (e.g., channel quality and interference level) and deciding whether to be active or inactive at each scheduling interval for given radio resources, a process referred to as link scheduling. Based on the actions of all agents, the network emits a reward to the agents, indicating how good their joint decisions were. The proposed framework enables the agents to make decisions in a distributed manner, and the reward is designed in such a way that the agents strive to guarantee a minimum performance, leading to a fair resource allocation among all users across the network. Simulation results demonstrate the superiority of our approach compared to decentralized baselines in terms of average and $5^{th}$ percentile user throughput, while achieving performance close to that of a centralized exhaustive search approach. Moreover, the proposed framework is robust to mismatches between training and testing scenarios. In particular, it is shown that an agent trained on a network with low transmitter density maintains its performance and outperforms the baselines when deployed in a network with a higher transmitter density.
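
The structure the abstract describes — per-transmitter agents acting on partial observations, binary active/inactive decisions each scheduling interval, and a shared reward balancing average against $5^{th}$ percentile throughput — can be sketched compactly. The sketch below is a minimal illustration of that structure, not the paper's method: the toy interference model, the weight ALPHA, and the linear REINFORCE policies are all assumptions standing in for the paper's deep RL agents and network simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 8   # transmitters, one RL agent each
OBS_DIM = 2    # partial observation: (own channel quality, sensed interference)
ALPHA = 0.5    # assumed weight trading off mean vs. 5th-percentile throughput
LR = 0.05

# One linear Bernoulli policy per agent: P(active) = sigmoid(w . obs + b).
weights = rng.normal(scale=0.1, size=(N_AGENTS, OBS_DIM))
biases = np.zeros(N_AGENTS)
baseline = 0.0  # running reward baseline to reduce gradient variance

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def network_rates(actions, channel):
    """Toy model: a link's rate grows with its own channel gain and shrinks
    with the number of other active transmitters (interference)."""
    active = actions.astype(float)
    interference = active.sum() - active  # other links' activity, per link
    return active * np.log2(1.0 + channel / (1.0 + interference))

for episode in range(2000):
    channel = rng.exponential(1.0, size=N_AGENTS)  # per-link channel quality
    sensed = rng.uniform(0.0, 1.0, size=N_AGENTS)  # crude interference estimate
    obs = np.stack([channel, sensed], axis=1)      # agent i sees only obs[i]

    # Distributed execution: each agent decides active/inactive on its own.
    probs = sigmoid((weights * obs).sum(axis=1) + biases)
    actions = (rng.uniform(size=N_AGENTS) < probs).astype(int)

    rates = network_rates(actions, channel)

    # Shared reward balancing average throughput against the worst users
    # (5th percentile), which pushes agents toward fair allocations.
    reward = ALPHA * rates.mean() + (1.0 - ALPHA) * np.percentile(rates, 5)

    # REINFORCE update of each Bernoulli policy (d log pi / d logit = a - p).
    advantage = reward - baseline
    baseline += 0.05 * (reward - baseline)
    grad_logit = actions - probs
    weights += LR * advantage * grad_logit[:, None] * obs
    biases += LR * advantage * grad_logit
```

Because every agent shares the same reward, an agent that starves its neighbors drags down the $5^{th}$ percentile term and thereby weakens its own learning signal, which is the mechanism the abstract credits for fair resource allocation.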