Committing Bandits

Neural Information Processing Systems

We consider a multi-armed bandit problem with two phases. The first is an experimentation phase, in which the decision maker is free to explore multiple options; in the second, the decision maker must commit to one of the arms and stick with it. Cost is incurred during both phases, with a higher cost during the experimentation phase. We analyze the regret in this setup, propose algorithms, and provide upper and lower bounds that depend on the ratio of the duration of the experimentation phase to the duration of the commitment phase.
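To make the setup concrete, here is a minimal simulation sketch of an explore-then-commit policy in this two-phase setting. The Bernoulli arms, the round-robin exploration rule, and the explore_cost parameter are illustrative assumptions; this is not the paper's algorithm, only the problem structure it studies.

```python
import numpy as np

def explore_then_commit(means, t_explore, t_commit, explore_cost=1.0, rng=None):
    """Simulate a two-phase bandit: uniform exploration, then commitment.

    means        -- true Bernoulli means of the arms (unknown to the player)
    t_explore    -- length of the experimentation phase (higher per-step cost)
    t_commit     -- length of the commitment phase
    explore_cost -- extra cost charged per step during experimentation
    Returns the total reward net of exploration cost.
    """
    rng = rng or np.random.default_rng()
    k = len(means)
    pulls = np.zeros(k)
    rewards = np.zeros(k)

    # Phase 1: round-robin experimentation at an elevated per-step cost.
    total = 0.0
    for t in range(t_explore):
        arm = t % k
        r = rng.binomial(1, means[arm])
        pulls[arm] += 1
        rewards[arm] += r
        total += r - explore_cost

    # Phase 2: commit to the empirically best arm and stick with it.
    best = int(np.argmax(rewards / np.maximum(pulls, 1)))
    total += rng.binomial(t_commit, means[best])
    return total

print(explore_then_commit([0.3, 0.5, 0.7], t_explore=300, t_commit=10_000))
```

The net reward makes the trade-off visible: a longer experimentation phase identifies the best arm more reliably but pays the elevated per-step cost for longer.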


[Perspective] Photonic multitasking enabled with geometric phase

Science

The constructive and destructive interference of waves is often exploited in optics and signal transmission. The interference pattern is a direct measure of the phase difference between two or more beams. Such a phase difference may result from the difference between the optical paths traversed by the light beams. However, phase can change for a single beam if it propagates through an "anisotropic parameter space," a medium that curves the light; this property is called geometric or topological phase (1–4). On page 1202 of this issue, Maguid et al. (5) use metasurfaces, ultrathin planar engineered structures (6–9), to form shared-aperture antenna arrays that impart geometric phase to optical signals.


GSMDPs for Multi-Robot Sequential Decision-Making

AAAI Conferences

Markov Decision Processes (MDPs) provide an extensive theoretical background for problems of decision-making under uncertainty. To maintain computational tractability, however, real-world problems are typically discretized in states and actions as well as in time. Assuming synchronous state transitions and actions at fixed rates may result in models that are not strictly Markovian, or in which agents are forced to idle between actions and so lose their ability to react to sudden changes in the environment. In this work, we explore the application of Generalized Semi-Markov Decision Processes (GSMDPs) to a realistic multi-robot scenario. A case study is presented in the domain of cooperative robotics, where real-time reactivity must be preserved and synchronous discrete-time approaches are therefore suboptimal. The case study is evaluated both on a team of real robots and in realistic simulation. By allowing asynchronous events to be modeled over continuous time, the GSMDP approach is shown to provide greater solution quality than its discrete-time counterparts, while still being approximately solvable by existing methods.
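As a rough illustration of the modeling idea, the sketch below simulates asynchronous events whose clocks are drawn from arbitrary, not necessarily exponential, distributions; this per-event clock structure is what distinguishes a generalized semi-Markov process from a synchronous discrete-time model. The event names and delay distributions are invented for the example and are not from the paper.

```python
import heapq
import random

# Each enabled event keeps its own clock, drawn from an arbitrary (not
# necessarily exponential) distribution, so transitions happen
# asynchronously in continuous time.
EVENT_DELAYS = {
    "robot_a_arrives": lambda: random.uniform(2.0, 5.0),           # non-Markovian
    "robot_b_arrives": lambda: random.expovariate(0.5),            # Markovian
    "ball_intercepted": lambda: max(0.1, random.gauss(4.0, 1.0)),  # non-Markovian
}

def simulate(horizon=20.0, seed=0):
    random.seed(seed)
    # Schedule every enabled event with a fresh clock.
    queue = [(delay(), name) for name, delay in EVENT_DELAYS.items()]
    heapq.heapify(queue)
    while queue:
        now, name = heapq.heappop(queue)
        if now > horizon:
            break
        print(f"t={now:5.2f}  event: {name}")
        # Re-enable the event with a fresh clock; a full GSMDP would also
        # update the state here and let a policy choose an action.
        heapq.heappush(queue, (now + EVENT_DELAYS[name](), name))

simulate()
```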


The OODA Loop: How to turn uncertainty into opportunity

#artificialintelligence

As the world moves faster and faster, we need to improve our ability to change our minds based on a changing reality. By doing this, we can turn uncertainty into opportunity and ambiguity into advantage. OODA (Observe, Orient, Decide, Act) is a model of individual and organizational learning and adaptation designed to do just that. The OODA loop is a decision-making model developed by military strategist John Boyd to explain how individuals and organizations can win in uncertain and chaotic environments. It describes a process you are already carrying out every minute of every day.
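For readers who think in code, here is a toy rendering of the cycle as a control loop; the function names and the threshold rule are placeholders of my own, not part of Boyd's formulation.

```python
import random

def observe(env):
    return env["sensor"]()          # Observe: take in new information

def orient(observation, prior):
    # Orient: fit the observation to your current mental model.
    return {"belief": observation, "prior": prior}

def decide(context):
    # Decide: pick a course of action given the oriented picture.
    return "act" if context["belief"] > context["prior"] else "wait"

def act(env, decision):
    env["log"].append(decision)     # Act: execute, then loop again

def ooda_loop(env, prior=0.5, steps=5):
    for _ in range(steps):
        decision = decide(orient(observe(env), prior))
        act(env, decision)
    return env["log"]

env = {"sensor": random.random, "log": []}
print(ooda_loop(env))               # e.g. ['act', 'wait', 'act', 'act', 'wait']
```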


Reinforcement Learning in a Physics-Inspired Semi-Markov Environment

arXiv.org Artificial Intelligence

Reinforcement learning (RL) has been demonstrated to have great potential in many applications of scientific discovery and design. Recent work includes, for example, the design of new structures and compositions of molecules for therapeutic drugs. Much of the existing work on applying RL to scientific domains, however, assumes that the available state representation obeys the Markov property. For reasons associated with time, cost, sensor accuracy, and gaps in scientific knowledge, many scientific design and discovery problems do not satisfy the Markov property. Thus, something other than a Markov decision process (MDP) should be used to plan and find the optimal policy. In this paper, we present a physics-inspired semi-Markov RL environment, namely the phase change environment. In addition, we evaluate the performance of value-based RL algorithms for both MDPs and partially observable MDPs (POMDPs) on the proposed environment. Our results demonstrate that deep recurrent Q-networks (DRQNs) significantly outperform deep Q-networks (DQNs), and that DRQNs benefit from training with hindsight experience replay. Implications of semi-Markovian RL and POMDPs for scientific laboratories are also discussed.
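As a hint of what a recurrent Q-network adds over a feedforward one when single observations are non-Markovian, here is a minimal DRQN-style sketch in PyTorch: an LSTM over the observation history lets the value estimate depend on information no single observation carries. The layer sizes and interface are illustrative assumptions, not the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

class DRQN(nn.Module):
    """Minimal recurrent Q-network: encode each observation, then
    integrate the history with an LSTM before estimating Q-values."""

    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq, state=None):
        # obs_seq: (batch, time, obs_dim) -- a window of past observations.
        x = torch.relu(self.encoder(obs_seq))
        x, state = self.lstm(x, state)
        return self.head(x[:, -1]), state   # Q-values at the final step

q = DRQN(obs_dim=4, n_actions=3)
obs_history = torch.randn(1, 8, 4)           # one episode fragment, 8 steps
q_values, hidden = q(obs_history)
print(q_values.shape)                        # torch.Size([1, 3])
```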