
Why does the beach make you so tired?

Popular Science

No responsibilities and little to do but enjoy yourself. Yet somehow, after a whole day of blissful nothing, you find yourself completely zonked. If taking in the sea air is supposed to be restorative, why can a restful day at the beach leave you feeling so tired? There's no single certain answer, but science offers a few possibilities.


Luddy Center for Artificial Intelligence to Open This Month

#artificialintelligence

From the technology that helps self-driving cars recognize stop signs, to medical advancements that help produce COVID-19 vaccines, to studying the unconscious bias found in algorithms, the Luddy School of Informatics, Computing and Engineering is involved in all parts of AI development. As artificial intelligence continues to infiltrate everyday life, IU's researchers are focused on developing these technologies, while working to ensure their research is safe and ethical. The Luddy Center for Artificial Intelligence is set to open this month, providing researchers a place to focus on the intersection of robotics, complex networks, health and social media. Kay Connelly, Luddy School's associate dean of research, studies proactive health and AI technologies that can help the terminally ill and older people as they age, specifically wearable devices. She said proactive health is like "Fitbit before Fitbit."


AI algorithm with 'social skills' teaches humans how to collaborate

#artificialintelligence

An international team has developed an AI algorithm with social skills that has outperformed humans in the ability to cooperate with people and machines in playing a variety of two-player games. The researchers, led by Iyad Rahwan, PhD, an MIT Associate Professor of Media Arts and Sciences, tested humans and the algorithm, called S# ("S sharp"), in three types of interactions: machine-machine, human-machine, and human-human. In most instances, machines programmed with S# outperformed humans in finding compromises that benefit both parties. "Two humans, if they were honest with each other and loyal, would have done as well as two machines," said lead author BYU computer science professor Jacob Crandall. "As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are better [since it's programmed to not lie] and it also learns to maintain cooperation once it emerges."
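
The article's S# algorithm is not reproduced here, but the quote's point about reciprocity sustaining cooperation can be illustrated with a toy iterated prisoner's dilemma. The payoff matrix and the strategies below are standard textbook examples, not part of S# itself:

```python
# Toy iterated prisoner's dilemma: a strategy that reciprocates
# (tit-for-tat) sustains cooperation, while constant defection wins one
# round and then loses out over a long interaction.
PAYOFFS = {  # (my_move, their_move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=200):
    """Play `rounds` of the game and return the two total scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the opponent's past moves
        move_b = strategy_b(history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

tit_for_tat = lambda opp: opp[-1] if opp else "C"  # cooperate first, then mirror
always_defect = lambda opp: "D"

coop, _ = play(tit_for_tat, tit_for_tat)       # mutual cooperation every round
exploit, _ = play(always_defect, tit_for_tat)  # one exploitative win, then mutual defection
print(coop, exploit)  # 600 204
```

Over 200 rounds, mutual reciprocity earns 600 points while the exploiter collects only 204, which mirrors the article's observation that maintaining cooperation pays off once it emerges.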


In Your Face! AI-Enabled Machines Cooperate Better Than Humans In Tests | CleanTechnica

#artificialintelligence

The emergence of driverless cars, autonomous trading algorithms, and autonomous drone technologies highlights a larger trend in which artificial intelligence is enabling machines to autonomously carry out complex tasks on behalf of their human stakeholders. To effectively represent their stakeholders in many tasks, these autonomous machines must interact with other people and machines that do not fully share the same goals and preferences. While the majority of AI milestones have focused on developing human-level wherewithal to compete with people or to interact with people as teammates that share a common goal, many scenarios in which AI must interact with people and other machines are neither zero-sum nor common-interest interactions. As such, AI must also have the ability to cooperate even in the midst of conflicting interests and threats of being exploited.


Non-myopic learning in repeated stochastic games

Crandall, Jacob W.

arXiv.org Artificial Intelligence

In repeated stochastic games (RSGs), an agent must quickly adapt to the behavior of previously unknown associates, who may themselves be learning. This machine-learning problem is particularly challenging due, in part, to the presence of multiple (even infinite) equilibria and inherently large strategy spaces. In this paper, we introduce a method to reduce the strategy space of two-player general-sum RSGs to a handful of expert strategies. This process, called Mega, effectually reduces an RSG to a bandit problem. We show that the resulting strategy space preserves several important properties of the original RSG, thus enabling a learner to produce robust strategies within a reasonably small number of interactions. To better establish strengths and weaknesses of this approach, we empirically evaluate the resulting learning system against other algorithms in three different RSGs.
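
Mega's construction of the expert set is detailed in the paper; as a rough illustration of the resulting bandit problem, the sketch below runs UCB1 over a handful of "experts" whose payoffs are stand-ins for full RSG strategies (the means and noise model here are invented for the example):

```python
import math
import random

def ucb1_select(counts, rewards, t):
    """Pick an expert index by the UCB1 rule: mean payoff plus an exploration bonus."""
    for i, n in enumerate(counts):
        if n == 0:
            return i  # try each expert once before comparing means
    return max(range(len(counts)),
               key=lambda i: rewards[i] / counts[i]
                             + math.sqrt(2 * math.log(t) / counts[i]))

# Toy simulation: three experts with fixed mean payoffs standing in for
# strategies in a repeated stochastic game (values made up for illustration).
random.seed(0)
means = [0.2, 0.5, 0.8]
counts = [0] * 3
rewards = [0.0] * 3
for t in range(1, 1001):
    arm = ucb1_select(counts, rewards, t)
    payoff = random.gauss(means[arm], 0.1)  # noisy per-interaction payoff
    counts[arm] += 1
    rewards[arm] += payoff

best = max(range(3), key=lambda i: counts[i])
print(best)  # the highest-payoff expert comes to dominate play
```

The point of the reduction is visible even in this caricature: once the strategy space is a small set of experts, standard bandit machinery can identify a strong strategy within a modest number of interactions.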


Is marijuana killing the planet? Energy consumption by cannabis farms may soon rival that of data centres

Daily Mail - Science & tech

There are many arguments surrounding whether or not marijuana should be grown and used for medical reasons. But the impact on the climate is one factor in the debate that may have been overlooked, until now. A new report, by a clean energy policy research institute, has found growing marijuana makes up one per cent of energy use in states like Colorado and Washington.


Robust Learning for Repeated Stochastic Games via Meta-Gaming

Crandall, Jacob W. (Masdar Institute of Science and Technology)

AAAI Conferences

In repeated stochastic games (RSGs), an agent must quickly adapt to the behavior of previously unknown associates, who may themselves be learning. This machine-learning problem is particularly challenging due, in part, to the presence of multiple (even infinite) equilibria and inherently large strategy spaces. In this paper, we introduce a method to reduce the strategy space of two-player general-sum RSGs to a handful of expert strategies. This process, called Mega, effectually reduces an RSG to a bandit problem. We show that the resulting strategy space preserves several important properties of the original RSG, thus enabling a learner to produce robust strategies within a reasonably small number of interactions. To better establish strengths and weaknesses of this approach, we empirically evaluate the resulting learning system against other algorithms in three different RSGs.


E-HBA: Using Action Policies for Expert Advice and Agent Typification

Albrecht, Stefano Vittorino (The University of Edinburgh) | Crandall, Jacob William (Masdar Institute of Science and Technology) | Ramamoorthy, Subramanian (The University of Edinburgh)

AAAI Conferences

Past research has studied two approaches to utilise pre-defined policy sets in repeated interactions: as experts, to dictate our own actions, and as types, to characterise the behaviour of other agents. In this work, we bring these complementary views together in the form of a novel meta-algorithm, called Expert-HBA (E-HBA), which can be applied to any expert algorithm that considers the average (or total) payoff an expert has yielded in the past. E-HBA gradually mixes the past payoff with a predicted future payoff, which is computed using the type-based characterisation. We present results from a comprehensive set of repeated matrix games, comparing the performance of several well-known expert algorithms with and without the aid of E-HBA. Our results show that E-HBA has the potential to significantly improve the performance of expert algorithms.
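
The mixing idea in the abstract, combining an expert's past payoff with a type-predicted future payoff, can be sketched in a few lines. The function name, the inputs, and the fixed weights below are hypothetical; E-HBA's actual mixing schedule (how the weight evolves with the type posterior) is specified in the paper:

```python
def mixed_expert_score(avg_past, predicted_future, weight):
    """Blend an expert's average past payoff with a payoff predicted
    from a type-based characterisation of the other agent.

    `weight` in [0, 1] controls how much the prediction counts; this is
    only a schematic of the idea described in the abstract, not E-HBA's
    actual schedule.
    """
    assert 0.0 <= weight <= 1.0
    return (1.0 - weight) * avg_past + weight * predicted_future

# Hypothetical numbers: an expert that has done poorly so far (0.3) but
# is predicted to do well (0.9) against the inferred opponent type.
print(mixed_expert_score(0.3, 0.9, 0.0))  # 0.3 -- pure history
print(mixed_expert_score(0.3, 0.9, 0.5))  # blends toward the prediction
print(mixed_expert_score(0.3, 0.9, 1.0))  # 0.9 -- pure type-based prediction
```

Feeding the blended score, rather than the raw historical average, into an existing expert algorithm is what lets the meta-algorithm wrap any method that ranks experts by past payoff.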