Learning in Games with Lossy Feedback

Neural Information Processing Systems

We consider a game-theoretical multi-agent learning problem where feedback information can be lost during the learning process and rewards are given by a broad class of games known as variationally stable games. We propose a simple variant of the classical online gradient descent algorithm, called reweighted online gradient descent (ROGD), and show that in variationally stable games, if each agent adopts ROGD, then almost sure convergence to the set of Nash equilibria is guaranteed, even when the feedback loss is asynchronous and arbitrarily correlated among agents. We then extend the framework to deal with unknown feedback loss probabilities by using an estimator (constructed from past data) in their place. Finally, we further extend the framework to accommodate both asynchronous loss and stochastic rewards and establish that multi-agent ROGD learning still converges to the set of Nash equilibria in such settings. Together, these results contribute to the broad landscape of multi-agent online learning by significantly relaxing the feedback information required to achieve desirable outcomes.
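The abstract does not spell out the reweighting, but the natural reading of "reweighted online gradient descent" is an importance-weighted update: when gradient feedback arrives (with probability p), scale the step by 1/p so the update is unbiased in expectation. The sketch below illustrates this on a toy strongly monotone two-player game (a simple member of the variationally stable class); the specific game, step-size schedule, and projection are illustrative assumptions, not the paper's construction.

```python
import random

def rogd(rounds=5000, p=0.6, seed=0):
    """Illustrative ROGD sketch: two agents on a toy strongly monotone game
    with Nash equilibrium at (0, 0).  Player i's cost is (x_i - 0.5*x_j)**2,
    so its gradient is 2*(x_i - 0.5*x_j).  Each round, each agent's gradient
    feedback is independently lost with probability 1 - p; received gradients
    are reweighted by 1/p, making the update unbiased in expectation."""
    rng = random.Random(seed)
    x = [0.9, -0.8]                       # arbitrary starting actions in [-1, 1]
    for t in range(rounds):
        eta = 1.0 / (t + 20)              # decaying step size (assumed schedule)
        g = [2 * (x[0] - 0.5 * x[1]),     # exact gradients this round
             2 * (x[1] - 0.5 * x[0])]
        for i in range(2):
            if rng.random() < p:          # feedback arrives with probability p
                x[i] -= eta * g[i] / p    # reweighted (importance-weighted) step
                x[i] = max(-1.0, min(1.0, x[i]))  # project back onto [-1, 1]
    return x

final = rogd()  # both coordinates end up close to the Nash equilibrium (0, 0)
```

Because the loss masks are independent per agent, the feedback loss here is asynchronous in the sense the abstract describes; the 1/p reweighting is what compensates for the missing rounds.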




Why physical AI is becoming manufacturing's next advantage

MIT Technology Review

From simulation-driven development to real-world execution, Microsoft and NVIDIA are helping manufacturers leverage AI to cross the industrial frontier with confidence. For decades, manufacturers have pursued automation to drive efficiency, reduce costs, and stabilize operations. That approach delivered meaningful gains, but it is no longer enough. Today's manufacturing leaders face a different challenge: how to grow amid labor constraints, rising complexity, and increasing pressure to innovate faster without sacrificing safety, quality, or trust. The next phase of transformation will not be defined by isolated AI tools or individual robots, but by intelligence that can operate reliably in the physical world. This is where physical AI--intelligence that can sense, reason, and act in the real world--marks a decisive shift.


AIhub monthly digest: February 2026 – collective decision making, multi-modal learning, and governing the rise of interactive AI

AIHub

Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we explore multi-agent systems and collective decision-making, dive into neurosymbolic Markov models, and find out how robots can acquire skills through interactions with the physical world. What if AI were designed not only to optimize choices for individuals, but to help groups reach decisions together? AIhub Ambassador Liliane-Caroline Demers interviewed Kate Larson, whose research explores how AI can support collective decision-making. She reflected on what drew her into the field, why she sees AI playing a role in consensus and democratic processes, and why she believes multi-agent systems deserve more attention.


What the Moltbook experiment is teaching us about AI

AIHub

What happens when you create a social media platform that only AI bots can post to? The answer, it turns out, is both entertaining and concerning. Moltbook is exactly that - a platform where artificial intelligence agents chat amongst themselves and humans can only watch from the sidelines. When ChatGPT gets the result, it treats it as if you had entered it yourself, and uses the result of the program to generate another response. It performs this process over and over again until the AI is satisfied that the task is complete.
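The execute-and-feed-back loop described above is the standard agent pattern: the model proposes a program, the result is appended to the conversation as if the user had typed it, and the cycle repeats until the model declares the task done. A minimal sketch, in which `model` and `run_program` are hypothetical callables standing in for a real chat API and sandboxed executor:

```python
def agent_loop(model, run_program, max_steps=10):
    """Sketch of an execute-and-feed-back agent loop.  `model` is a
    hypothetical callable that maps the transcript so far to an action
    dict; `run_program` is a hypothetical executor for proposed code.
    Neither corresponds to a real ChatGPT API."""
    transcript = []
    for _ in range(max_steps):
        action = model(transcript)            # model sees the history so far
        if action["done"]:                    # model is satisfied: stop
            return action["answer"]
        result = run_program(action["code"])  # execute the proposed program
        transcript.append(result)             # feed the result back in
    return None                               # step budget exhausted
```

The `max_steps` cap is the usual safeguard against a model that never declares itself satisfied.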


'Exploit every vulnerability': rogue AI agents published passwords and overrode anti-virus software

The Guardian

The rogue AI agents appeared to act together to smuggle sensitive information out of supposedly secure cyber-systems. Exclusive: Lab tests discover a 'new form of insider risk', with artificial intelligence agents engaging in autonomous, even 'aggressive' behaviours. Rogue artificial intelligence agents have worked together to smuggle sensitive information out of supposedly secure systems, in the latest sign cyber-defences may be overwhelmed by unforeseen scheming by AIs. With companies increasingly asking AI agents to carry out complex tasks in internal systems, the behaviour has sparked concerns that supposedly helpful technology could pose a serious insider threat. Under tests carried out by Irregular, an AI security lab that works with OpenAI and Anthropic, AIs given a simple task to create LinkedIn posts from material in a company's database dodged conventional anti-hack systems to publish sensitive password information in public without being asked to do so.


Building a strong data infrastructure for AI agent success

MIT Technology Review

As companies race to adopt agentic AI to spur innovation and gain efficiency, building the right enterprise data infrastructure has become a critical component of success. In the race to adopt and show value from AI, enterprises are moving faster than ever to deploy agentic AI as copilots, assistants, and autonomous task-runners. In late 2025, nearly two-thirds of companies were experimenting with AI agents, while 88% were using AI in at least one business function, up from 78% in 2024, according to McKinsey's annual AI report. Yet, while early pilots often succeed, only one in 10 companies has actually scaled its AI agents. One major issue: AI agents are only as effective as the data foundation supporting them. Experts argue that most companies are seeing delays in implementing AI not because of shortcomings in the models, but because they lack data architectures that deliver the business context humans and agents need to use data reliably.