
Will Mexico advance in the World Cup? Here are the scenarios heading into the final group stage game

Los Angeles Times

Heading into its final group stage game with Sweden on Wednesday, Mexico is facing several scenarios that could either send it on to the knockout rounds as Group F champion or runner-up, or send it home.


DevSecOps: Policy-as-code with Azure Pipelines

#artificialintelligence

Every organization, irrespective of its size, has IT policies that help define what compliance means to it. Azure Policy is a service that offers both built-in and user-defined policies across categories mapped to the various Azure services such as Compute, Storage, or even AKS (announced recently). These policies can be defined in the Azure Portal and assigned to one or more subscriptions or resource groups (referred to as the scope). You can read more on that here. Besides defining guardrails around what is and is not allowed, these policies also give application teams a certain level of freedom by reducing hard dependencies on their IT teams.
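
To make the policy-as-code idea concrete, here is a minimal, hypothetical sketch of the pattern: each policy is a named predicate over a resource definition plus an effect ("deny" or "audit"), and a pipeline step fails the build when a "deny" policy is violated. This is an illustration of the concept only; the rule names, resource fields, and evaluation logic are assumptions, not the actual Azure Policy engine or its definition schema.

```python
# Illustrative policy-as-code sketch (hypothetical rules, not the Azure Policy engine).
# A "policy" is a predicate over a resource definition plus an effect;
# a pipeline step could run these checks against templates before deployment.

from typing import Callable, Dict, List

Resource = Dict[str, object]

class Policy:
    def __init__(self, name: str, effect: str, check: Callable[[Resource], bool]):
        self.name = name          # human-readable policy name
        self.effect = effect      # "deny" blocks the pipeline, "audit" only reports
        self.check = check        # returns True when the resource is compliant

POLICIES: List[Policy] = [
    Policy("allowed-locations", "deny",
           lambda r: r.get("location") in {"westeurope", "northeurope"}),
    Policy("require-environment-tag", "audit",
           lambda r: "environment" in r.get("tags", {})),
]

def evaluate(resources: List[Resource]) -> bool:
    """Return False if any 'deny' policy is violated (i.e. the build should fail)."""
    ok = True
    for res in resources:
        for pol in POLICIES:
            if not pol.check(res):
                print(f"[{pol.effect}] {res.get('name')}: violates '{pol.name}'")
                if pol.effect == "deny":
                    ok = False
    return ok

if __name__ == "__main__":
    # Hypothetical resource definitions, e.g. parsed from ARM/Bicep templates.
    resources = [
        {"name": "stgacct1", "location": "eastus", "tags": {}},
        {"name": "aks-dev", "location": "westeurope", "tags": {"environment": "dev"}},
    ]
    raise SystemExit(0 if evaluate(resources) else 1)
```

A check like this would typically run as an early stage in the pipeline, so non-compliant templates are rejected before any deployment step executes.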


Multi-Agent Deep Reinforcement Learning with Adaptive Policies

arXiv.org Artificial Intelligence

We propose a novel approach to address one aspect of the non-stationarity problem in multi-agent reinforcement learning (RL), where the other agents may alter their policies due to environment changes during execution. This violates the Markov assumption that underlies most single-agent RL methods and is one of the key challenges in multi-agent RL. To tackle this, we propose to train multiple policies for each agent and postpone the selection of the best policy until execution time. Specifically, we model the environment's non-stationarity with a finite set of scenarios and train a policy fitted to each scenario. In addition to the multiple policies, each agent also learns a policy predictor that determines, from its local information, which policy is best. By doing so, each agent is able to adapt its policy when the environment changes and, consequently, the other agents alter their policies during execution. We empirically evaluated our method on a variety of common benchmark problems proposed for multi-agent deep RL in the literature. Our experimental results show that agents trained by our algorithm adapt better to changing environments and outperform state-of-the-art methods in all the tested environments.
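
The core execution-time mechanism can be sketched as follows: each agent holds one policy per scenario plus a predictor that maps its local observation to a scenario, and acting means first predicting the scenario and then querying the corresponding policy. This is a toy illustration under assumed names and linear function approximators; it is not the paper's exact architecture or training procedure.

```python
# Minimal sketch: per-scenario policies plus a policy predictor, selected at
# execution time from the agent's local observation. The class and method names,
# and the linear policies/predictor, are assumptions for illustration only.

import numpy as np

class Agent:
    def __init__(self, n_scenarios: int, obs_dim: int, n_actions: int, rng):
        # One (toy, linear) policy per scenario: observation -> action scores.
        self.policies = [rng.normal(size=(obs_dim, n_actions)) for _ in range(n_scenarios)]
        # Policy predictor: observation -> scenario scores (trained to identify the scenario).
        self.predictor = rng.normal(size=(obs_dim, n_scenarios))

    def select_action(self, obs: np.ndarray) -> int:
        # 1. Predict which scenario the agent is currently in from local information.
        scenario = int(np.argmax(obs @ self.predictor))
        # 2. Act with the policy that was trained for that scenario.
        scores = obs @ self.policies[scenario]
        return int(np.argmax(scores))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    agent = Agent(n_scenarios=3, obs_dim=4, n_actions=2, rng=rng)
    obs = rng.normal(size=4)
    print("chosen action:", agent.select_action(obs))
```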


Joint Inference of Reward Machines and Policies for Reinforcement Learning

arXiv.org Artificial Intelligence

Incorporating high-level knowledge is an effective way to expedite reinforcement learning (RL), especially for complex tasks with sparse rewards. We investigate an RL problem where the high-level knowledge is in the form of reward machines, i.e., a type of Mealy machine that encodes the reward functions. We focus on a setting in which this knowledge is a priori not available to the learning agent. We develop an iterative algorithm that performs joint inference of reward machines and policies for RL (more specifically, q-learning). In each iteration, the algorithm maintains a hypothesis reward machine and a sample of RL episodes. It derives q-functions from the current hypothesis reward machine and performs RL to update the q-functions. While performing RL, the algorithm updates the sample by adding RL episodes along which the obtained rewards are inconsistent with the rewards based on the current hypothesis reward machine. In the next iteration, the algorithm infers a new hypothesis reward machine from the updated sample. Based on an equivalence relation we define between states of reward machines, we transfer the q-functions between the hypothesis reward machines in consecutive iterations. We prove that the proposed algorithm converges almost surely to an optimal policy in the limit if a minimal reward machine can be inferred and the maximal length of each RL episode is sufficiently long. The experiments show that learning high-level knowledge in the form of reward machines can lead to fast convergence to optimal policies in RL, while standard RL methods such as q-learning and hierarchical RL methods fail to converge to optimal policies after a substantial number of training steps in many tasks.
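
A toy sketch of the inner loop may help fix ideas: q-learning runs over the product of environment state and hypothesis reward-machine state, and any episode whose observed rewards disagree with the hypothesis machine is kept as a counterexample for the next inference step. The environment, labels, and reward machine below are made up for illustration, and the reward-machine inference step itself is omitted; this is not the paper's algorithm, only the general shape of it.

```python
# Toy sketch: q-learning against a hypothesis reward machine, plus collection of
# episodes that are inconsistent with it. All names and the tiny chain environment
# are assumptions for illustration; the inference of a new reward machine from the
# counterexamples is not shown.

import random
from collections import defaultdict

class RewardMachine:
    def __init__(self, delta, rewards, initial=0):
        self.delta = delta        # (rm_state, label) -> next rm_state
        self.rewards = rewards    # (rm_state, label) -> predicted reward
        self.initial = initial

    def step(self, u, label):
        return self.delta.get((u, label), u), self.rewards.get((u, label), 0.0)

def q_learning_episode(env_step, labeler, rm, Q, eps=0.1, alpha=0.5, gamma=0.9, horizon=20):
    """Run one episode over the product state (env state, RM state); return the trace."""
    s, u, trace = 0, rm.initial, []
    for _ in range(horizon):
        greedy = max((0, 1), key=lambda b: Q[(s, u)][b])
        a = random.choice([0, 1]) if random.random() < eps else greedy
        s2, env_reward = env_step(s, a)
        label = labeler(s2)
        u2, rm_reward = rm.step(u, label)
        # q-update uses the reward predicted by the hypothesis reward machine.
        Q[(s, u)][a] += alpha * (rm_reward + gamma * max(Q[(s2, u2)].values()) - Q[(s, u)][a])
        trace.append((label, env_reward))
        s, u = s2, u2
    return trace

def inconsistent(trace, rm):
    """True if the observed rewards disagree with the hypothesis reward machine."""
    u = rm.initial
    for label, observed in trace:
        u, predicted = rm.step(u, label)
        if predicted != observed:
            return True
    return False

if __name__ == "__main__":
    # Hypothetical 2-state RM: seeing label "goal" from state 0 pays 1 and moves to state 1.
    rm = RewardMachine(delta={(0, "goal"): 1}, rewards={(0, "goal"): 1.0})
    env_step = lambda s, a: ((s + a) % 4, 1.0 if (s + a) % 4 == 3 else 0.0)  # toy chain env
    labeler = lambda s: "goal" if s == 3 else "none"
    Q = defaultdict(lambda: {0: 0.0, 1: 0.0})
    episodes = [q_learning_episode(env_step, labeler, rm, Q) for _ in range(50)]
    counterexamples = [t for t in episodes if inconsistent(t, rm)]
    print("episodes inconsistent with the hypothesis RM:", len(counterexamples))
```

In the full algorithm, those inconsistent episodes would feed an automata-learning step that produces the next hypothesis reward machine, with q-functions transferred between equivalent states across iterations.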