
AI For Knuckleheads -- Santa Cruz Works

#artificialintelligence

What comes to mind when you think of Artificial Intelligence (AI)? Chances are, the first thing that pops into your head is a group of robots that take over the planet and rule over humans. Thanks to recurring themes in Hollywood and the media, AI has gotten a bad rap as something created by evil corporations that will one day wreak havoc on all of humanity. However, you might be forgetting that AI is already embedded in your day-to-day routine. Autocorrect, self-driving cars, robo-advisors, and of course Siri and Alexa all fall under the umbrella of AI.


Cogment: Open Source Framework For Distributed Multi-actor Training, Deployment & Operations

AI Redefined, Gottipati, Sai Krishna, Kurandwad, Sagar, Mars, Clodéric, Szriftgiser, Gregory, Chabot, François

arXiv.org Artificial Intelligence

Involving humans directly in the training of AI agents is gaining traction thanks to several advances in reinforcement learning and human-in-the-loop learning. Humans can provide rewards to the agent, demonstrate tasks, design a curriculum, or act in the environment, but these benefits also come with architectural, functional-design, and engineering complexities. We present Cogment, a unifying open-source framework that introduces an actor formalism to support a variety of human-agent collaboration typologies and training approaches. It is also scalable out of the box thanks to a distributed microservice architecture, and it offers solutions to the aforementioned complexities.
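The actor formalism the abstract describes treats humans and agents uniformly: each is an "actor" that receives observations and returns actions within a trial. The sketch below illustrates that idea only; the class names, methods, and trial loop are hypothetical and are not the actual Cogment SDK API.

```python
class Actor:
    """One participant in a trial, human or agent.
    Hypothetical interface for illustration; the real Cogment SDK differs."""

    def act(self, observation):
        raise NotImplementedError


class RoundRobinAgent(Actor):
    """A trivial stand-in for a learned policy: cycles through its actions."""

    def __init__(self, actions):
        self.actions = list(actions)
        self._step = 0

    def act(self, observation):
        action = self.actions[self._step % len(self.actions)]
        self._step += 1
        return action


def run_trial(env_step, actors, num_steps):
    """Drive one trial: every actor sees the current observation and acts,
    and the environment advances on the joint action. Because humans and
    agents share the Actor interface, the loop is agnostic to which is which."""
    observation = None
    history = []
    for _ in range(num_steps):
        joint_action = {name: actor.act(observation) for name, actor in actors.items()}
        observation = env_step(joint_action)
        history.append(joint_action)
    return history
```

In a human-in-the-loop setting, a human-backed `Actor` would block on real user input while agent actors query a policy, but the trial loop stays identical.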


Human and Multi-Agent collaboration in a human-MARL teaming framework

Navidi, Neda, Chabot, Francois, Kurandwad, Sagar, Lustigman, Irv, Robert, Vincent, Szriftgiser, Gregory, Schuch, Andrea

arXiv.org Artificial Intelligence

Collaborative multi-agent reinforcement learning (MARL), as a specific category of reinforcement learning, provides effective results with agents learning from their observations, received rewards, and interactions among agents. However, centralized learning methods with a joint global policy in a highly dynamic environment present unique challenges in dealing with large amounts of information. This study proposes two innovative solutions to address the complexities of collaboration between a human and multiple reinforcement learning (RL)-based agents (referred to hereafter as Human-MARL teaming), where the goals pursued cannot be achieved by a human alone or by agents alone. The first innovation is the introduction of a new open-source MARL framework, called COGMENT, to unite humans and agents in real-time complex dynamic systems and efficiently leverage their interactions as a source of learning. The second innovation is a new hybrid MARL method, named Dueling Double Deep Q-learning MADDPG (D3-MADDPG), which allows agents to train decentralized policies in parallel within a joint centralized policy. This method addresses the overestimation problem of value-based MARL methods built on Q-learning. We demonstrate these innovations in a purpose-built real-time environment in which unmanned aerial vehicles driven by RL agents collaborate with a human to fight fires: the team of RL agent drones autonomously looks for seats of fire while the human pilot douses them. The results of this study show that the proposed collaborative paradigm and the open-source framework lead to significant reductions in both human effort and exploration costs. The results for the proposed hybrid MARL method also show that it effectively improves the learning process, achieving more reliable Q-values for each action by decoupling the estimation of state value and advantage value.
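The two ingredients of D3-MADDPG named in the abstract can be sketched in isolation: the dueling decomposition, which splits Q-values into a state value and per-action advantages, and the double-Q target, which reduces overestimation by letting the online network select the next action while the target network evaluates it. This is an illustrative single-agent sketch under simplified assumptions, not the authors' multi-agent implementation; the function names are ours.

```python
def dueling_q(state_value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').
    Subtracting the mean advantage makes the V/A split identifiable."""
    mean_advantage = sum(advantages) / len(advantages)
    return [state_value + a - mean_advantage for a in advantages]


def double_q_target(reward, gamma, q_online_next, q_target_next):
    """Double Q-learning target: the online network picks the greedy next
    action, but the target network supplies its value, which curbs the
    overestimation bias of plain max-based Q-learning targets."""
    a_star = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[a_star]
```

In a deep RL setting these would operate on network outputs per agent; here plain lists keep the arithmetic visible.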