
Congratulations to the #AAAI2026 outstanding paper award winners

AIHub

We consider the problem of modifying a description logic concept in light of models represented as pointed interpretations. We call this setting model change, and distinguish three main kinds of changes: eviction, which consists of only removing models; reception, which incorporates models; and revision, which combines removal with incorporation of models in a single operation. We introduce a formal notion of revision and argue that it does not reduce to a simple combination of eviction and reception, contrary to intuition. We provide positive and negative results on the compatibility of eviction and reception for EL-bottom and ALC description logic concepts and on the compatibility of revision for ALC concepts.



Interview with Xiang Fang: Multi-modal learning and embodied intelligence

AIHub

His research focuses on multi-modal learning, specifically advancing large vision-language models, embodied intelligence, and out-of-distribution detection. Xiang has published over 40 papers in top-tier venues, including CVPR, NeurIPS, ICML, AAAI, and ACM MM. He is the recipient of multiple awards, including the NTU Research Excellence Award and Best Student Paper at MIPR 2024, and serves as a reviewer for major AI conferences.


An introduction to science communication at #AAAI2026

AIHub

We're pleased to announce that we will be giving an introduction to science communication for AI researchers at AAAI this year. This will be held on Wednesday 21 January from 13:00 - 14:30. The session is part of the Undergraduate Consortium programme. However, if you are attending the conference and fancy finding out how you can communicate your research to a general audience in different formats, then you are more than welcome to join us. The session will comprise a talk, a Q&A, and the opportunity to try some of the activities presented in the tutorial. You will have the opportunity to receive advice on any science communication ideas or questions you have.


Interview with Anindya Das Antar: Evaluating effectiveness of moderation guardrails in aligning LLM outputs

AIHub

In their paper presented at AIES 2025, "Do Your Guardrails Even Guard?": Method for Evaluating Effectiveness of Moderation Guardrails in Aligning LLM Outputs with Expert User Expectations, Anindya Das Antar, Xun Huan, and Nikola Banovic propose a method to evaluate and select guardrails that best align LLM outputs with domain knowledge from subject-matter experts. Here, Anindya tells us more about their method, some case studies, and plans for future developments. Could you give us some background to your work - why are guardrails such an important area for study? Ensuring that large language models (LLMs) produce desirable outputs without harmful side effects, and align with user expectations, organizational goals, and existing domain knowledge, is crucial for their adoption in high-stakes decision-making. However, despite training on vast amounts of data, LLMs can still produce incorrect, misleading, or otherwise unexpected and undesirable outputs.


What's coming up at #AAAI2026?

AIHub

We (AIhub) will be running a short course on science communication on Wednesday 21 January, from 13:00 - 14:30. In this brief tutorial, science communication experts will teach you how to clearly and concisely explain your research to non-specialists.


Taking humanoid soccer to the next level: An interview with RoboCup trustee Alessandra Rossi

AIHub

A core objective of RoboCup is to promote and advance robotics and AI research through the challenges offered by its various leagues. The ultimate goal of the soccer competition is that, by 2050, a team of fully autonomous humanoid robots will defeat the most recent winner of the FIFA World Cup. To bring this vision closer to reality, the RoboCup Federation has announced several changes to the leagues. We spoke with Alessandra Rossi, a trustee who has been involved in the humanoid soccer league for many years, to learn more. Could you start by introducing yourself and telling us how you've been involved in RoboCup throughout the years, because you've been involved in so many aspects of the competition!

Robots to navigate hiking trails

AIHub

If you've ever gone hiking, you know trails can be challenging and unpredictable. A path that was clear last week might be blocked today by a fallen tree. Poor maintenance, exposed roots, loose rocks, and uneven ground further complicate the terrain, making trails difficult for a robot to navigate autonomously. After a storm, puddles can form, mud can shift, and erosion can reshape the landscape. This was the fundamental challenge in our work: how can a robot perceive, plan, and adapt in real time to safely navigate hiking trails?


AAAI presidential panel – AI reasoning

AIHub

In March 2025, the Association for the Advancement of Artificial Intelligence (AAAI) published a report on the Future of AI Research. The report, which was led by outgoing AAAI President Francesca Rossi, covers 17 different AI topics and aims to clearly identify the trajectory of AI research in a structured way. As part of this project, members of the report team, and other selected AI practitioners, are taking part in a series of video panel discussions covering selected chapters from the report. In the third panel, the AI experts tackle the topic of AI reasoning. They consider the definition of reasoning, what reasoning is and what it should be in our AI models, planning techniques, model training, making smart (and not so smart) choices about which AI products to use, guarantees, why we shouldn't imitate human reasoning in AI models, thinking about the future, and more.


The Machine Ethics podcast: Companion AI with Giulia Trojano

AIHub

Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology's impact on society. Giulia is a competition lawyer focusing on abuse of dominance actions against Big Tech companies, as well as environmental claims. She recently completed her master's in AI Ethics & Society at Cambridge and writes for several journals and academic publications on the interplay between technology, politics, society, and contemporary art. She regularly gives talks on AI ethics, law and regulation, and in 2025 was recognised in the "100 Brilliant Women in AI Ethics" list. This podcast was created and is run by Ben Byford and collaborators.