AIhub monthly digest: February 2026 – collective decision making, multi-modal learning, and governing the rise of interactive AI
Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we explore multi-agent systems and collective decision-making, dive into neurosymbolic Markov models, and find out how robots can acquire skills through interactions with the physical world.
What if AI were designed not only to optimize choices for individuals, but to help groups reach decisions together? AIhub Ambassador Liliane-Caroline Demers interviewed Kate Larson, whose research explores how AI can support collective decision-making. She reflected on what drew her into the field, why she sees AI playing a role in consensus and democratic processes, and why she believes multi-agent systems deserve more attention.
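As a toy illustration of what "helping groups reach decisions together" can look like computationally (not something taken from the interview itself), here is a minimal Python sketch of the Borda count, a classic voting rule from social choice that aggregates individual preference rankings into a single group ranking; the ballots and option names are made up.

```python
from collections import defaultdict

def borda_count(rankings):
    """Aggregate individual preference rankings into a group ranking.

    Each ranking lists options from most to least preferred. An option in
    position i of an n-item ranking scores n - 1 - i points.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] += n - 1 - position
    # Sort options by total score, highest first.
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Three hypothetical voters ranking three options.
ballots = [
    ["A", "B", "C"],
    ["B", "A", "C"],
    ["A", "C", "B"],
]
print(borda_count(ballots))  # [('A', 5), ('B', 3), ('C', 1)]
```

Voting rules like this are among the simplest building blocks studied in computational social choice and multi-agent systems research.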
RWDS Big Questions: how do we balance innovation and regulation in the world of AI?
AI development is accelerating, while regulation moves more deliberately. That tension creates a core challenge: how do we maintain momentum without breaking the things that matter? The aim isn't to slow innovation unnecessarily, but to ensure progress happens at a pace that protects individuals and society. Responsible actors should not be disadvantaged -- yet safeguards are essential to maintain trust. For the latest video in our RWDS Big Questions series, our panel explores this delicate balance.
What the Moltbook experiment is teaching us about AI
What happens when you create a social media platform that only AI bots can post to? The answer, it turns out, is both entertaining and concerning. Moltbook is exactly that: a platform where artificial intelligence agents chat amongst themselves and humans can only watch from the sidelines. Behind the scenes, the bots operate in a loop: when ChatGPT gets the result of a program it has run, it treats it just as if you had entered it yourself, and uses it to generate another response. It repeats this process until the AI is satisfied that the task is complete.
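The behaviour described here, where the system runs a program, reads the result, and keeps going until it judges the task complete, is essentially an agent loop. Below is a minimal, hypothetical Python sketch of that pattern; `model_respond` and `run_tool` are toy stand-ins, not the actual API that any of these platforms use.

```python
def run_tool(command):
    """Toy stand-in for executing whatever program the agent asked to run."""
    return f"(output of: {command})"

def model_respond(history):
    """Toy stand-in for a language model call.

    A real system would send `history` to a model API; here we just request
    one tool run and then declare the task complete.
    """
    tool_results = [m for m in history if m["role"] == "tool"]
    if not tool_results:
        return {"content": "Let me check something first.", "tool_call": "list_recent_posts"}
    return {"content": f"Done. I used: {tool_results[-1]['content']}", "tool_call": None}

def agent_loop(task, max_steps=10):
    """Run the model repeatedly, feeding tool results back, until it stops asking for tools."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        message = model_respond(history)
        history.append({"role": "assistant", "content": message["content"]})
        if message["tool_call"] is None:
            # No further tool use requested: the model considers the task complete.
            return message["content"]
        # Execute the requested program and feed the result back, just as if
        # a user had typed it, so the model can generate its next response.
        history.append({"role": "tool", "content": run_tool(message["tool_call"])})
    return "Stopped after reaching the step limit."

print(agent_loop("Summarise the latest Moltbook threads."))
```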
Studying the properties of large language models: an interview with Maxime Meyer
In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. We sat down with Maxime Meyer to chat about his current research, future plans, and how he found the doctoral consortium experience. Could you start with an introduction to yourself, where you're studying and the topic of your research? My research focuses on large language models. Which aspect of large language models are you looking at?
AI chatbots can effectively sway voters – in either direction
The potential for artificial intelligence to affect election results is a major public concern. Two new papers - with experiments conducted in four countries - demonstrate that chatbots powered by large language models (LLMs) are quite effective at political persuasion, moving opposition voters' preferences by 10 percentage points or more in many cases. The LLMs' persuasiveness comes not from being masters of psychological manipulation, but because they come up with so many claims supporting their arguments for candidates' policy positions. "LLMs can really move people's attitudes towards presidential candidates and policies, and they do it by providing many factual claims that support their side," said David Rand, a senior author on both papers. "But those claims aren't necessarily accurate - and even arguments built on accurate claims can still mislead by omission."
Forthcoming machine learning and AI seminars: March 2026 edition
This post contains a list of the AI-related seminars that are scheduled to take place between 2 March and 30 April 2026. All events detailed here are free and open for anyone to attend virtually.
Farnaz Farzadnia, Sebastian Merten, Francesca Da Ros, Association of European Operational Research Societies. To receive the seminar link, sign up to the mailing list.
Keyon Vafa (Harvard University), EPFL. The Zoom link is here.
Javier M. Moguerza (Research Centre for Intelligent Information Technologies), Association of European Operational Research Societies. To receive the seminar link, sign up to the mailing list.
A defense official reveals how AI chatbots could be used for targeting decisions
The US military might use generative AI systems to rank lists of targets and make recommendations--which would be vetted by humans--about which to strike first, according to a Defense Department official with knowledge of the matter. Though the US military's big data initiative Maven has sped up the planning of strikes for years, the official's comments suggest that generative AI is now adding a new interpretative layer to such deliberations. The disclosure about how the military may use AI chatbots comes as the Pentagon faces scrutiny over a strike on an Iranian school, which it is still investigating. A list of possible targets might be fed into a generative AI system that the Pentagon is fielding for classified settings. Then, said the official, who requested to speak on background to discuss sensitive topics, humans might ask the system to analyze the information and prioritize the targets while accounting for factors like where aircraft are currently located. Humans would then be responsible for checking and evaluating the results and recommendations.
Can AI in military operations really be ethical?
In this episode of The Stream, we examine concerns about AI's role in military operations and the broader ethical challenges facing tech companies. Amid growing backlash against ChatGPT and OpenAI, including social media campaigns calling for a boycott, we ask whether so-called "ethical alternatives" truly live up to their claims. We also explore emerging initiatives seeking to challenge Big Tech's dominance and develop more accountable AI systems.