
The Good Robot podcast: what makes a drone "good"? with Beryl Pong

AIHub

The Good Robot podcast: what makes a drone "good"? Hosted by Eleanor Drage and Kerry McInerney, The Good Robot is a podcast which explores the many complex intersections between gender, feminism and technology. What makes a drone "good"? In this episode, we talk to Beryl Pong, UKRI Future Leaders Fellow at the University of Cambridge, where she leads the Centre for Drones and Culture. Beryl reflects on what it means to think about drones as "good" or "ethical" technologies, and how that can be assessed through their socio-political context.


Relational neurosymbolic Markov models

AIHub

Our most powerful artificial agents cannot be told exactly what to do, especially in complex planning environments. They rely almost exclusively on neural networks to perform their tasks, but neural networks cannot easily be told to obey certain rules or adhere to existing background knowledge. Such uncontrolled behaviour might be nothing more than a simple annoyance when you ask an LLM to generate a schedule for reaching a deadline in two days and it hallucinates that days have 48 hours instead of 24, but it can be much more impactful when that same LLM is controlling an agent responsible for navigating a warehouse filled with TNT and it decides to go just a little too close to the storage compartments. Luckily, controlling neural networks has gained a lot of attention in recent years through the development of neurosymbolic AI. Neurosymbolic AI, or NeSy for short, aims to combine the learning abilities of neural networks with the guarantees offered by symbolic methods based on automated mathematical reasoning.
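The warehouse scenario above can be made concrete with a minimal sketch of the NeSy idea: a neural policy scores candidate actions, and a symbolic rule layer vetoes any action that violates known constraints before the agent acts. The grid world, action names, and scoring function here are all hypothetical illustrations, not the method from the paper.

```python
import random

ACTIONS = ["north", "south", "east", "west"]
MOVES = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}

def neural_policy_scores(rng):
    # Stand-in for a trained network: one arbitrary score per action.
    return {a: rng.random() for a in ACTIONS}

def allowed(pos, action, forbidden):
    # Symbolic rule: never step into a forbidden cell (e.g. the TNT storage).
    dx, dy = MOVES[action]
    return (pos[0] + dx, pos[1] + dy) not in forbidden

def constrained_action(pos, forbidden, seed=0):
    rng = random.Random(seed)
    scores = neural_policy_scores(rng)
    # Mask out actions the symbolic layer forbids, then pick the best survivor.
    safe = {a: s for a, s in scores.items() if allowed(pos, a, forbidden)}
    return max(safe, key=safe.get)
```

Whatever the network scores, the symbolic mask guarantees the agent never selects a rule-violating action; real NeSy systems generalize this idea with logical reasoning over richer background knowledge.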


AI enables a Who's Who of brown bears in Alaska

AIHub

AI enables a Who's Who of brown bears in Alaska Being able to distinguish individual animals - including their unique history, movement patterns and habits - can help scientists better understand how a species functions, and therefore better manage habitats and study population dynamics. Today, most computer vision systems for tracking animals are effective on species with distinctive patterns and markings, such as zebras, leopards and giraffes. The task is much more complicated for unmarked species, where individual differences are harder to spot. Distinguishing a particular brown bear from its peers in a non-invasive way requires an incredible eye for detail and years of viewing the same bears over time. What's more, these bears emerge from hibernation in the spring with shaggy fur, having lost quite a bit of weight; they then substantially increase their body weight feasting on salmon while fully shedding their winter coat. That's enough to throw off experts and AI algorithms alike.


Learning to see the physical world: an interview with Jiajun Wu

AIHub

What is your research area? My research topic, at a high level, hasn't changed much since my dissertation. It has always been the problem of physical scene understanding - building machines that see, reason about, and interact with the physical world. Besides learning algorithms, what are the levels of abstraction needed by AI systems in their representations, and where do they come from? I aim to answer these fundamental questions, drawing inspiration from nature, i.e., the physical world itself, and from human cognition.


3 Questions: Using AI to help Olympic skaters land a quint

AIHub

Why apply AI to figure skating? Skaters can always keep pushing: higher, faster, stronger. OOFSkate is all about helping skaters figure out a way to rotate a little bit faster in their jumps or jump a little bit higher. The system helps skaters catch details that might pass an eye test but that point to high-value areas of opportunity. The artistic side of skating is much harder to evaluate than the technical elements because it's subjective.


AAAI presidential panel – AI and sustainability

AIHub

The Future of AI Research report, published in March 2025, aims to clearly identify the trajectory of AI research in a structured way. The report was led by outgoing AAAI President Francesca Rossi and covers 17 different AI topics. Members of the report team, and other selected AI practitioners, are taking part in a series of video panel discussions covering selected chapters from the report. In the fourth panel, the AI experts tackle the topic of AI and sustainability, exploring the critical balance between harnessing AI's potential and managing its environmental impact. They talk about: the growth of AI and its impact on infrastructure, looking beyond energy use, AI for accelerating breakthroughs, and strategies for investing in grid capacity and innovations.


How can robots acquire skills through interactions with the physical world? An interview with Jiaheng Hu

AIHub

How can robots acquire skills through interactions with the physical world? One of the key challenges in building robots for household or industrial settings is the need to master the control of high-degree-of-freedom systems such as mobile manipulators. Reinforcement learning has been a promising avenue for acquiring robot control policies; however, scaling to complex systems has proved tricky. In their work SLAC: Simulation-Pretrained Latent Action Space for Whole-Body Real-World RL, Jiaheng Hu and colleagues introduce a method that renders real-world reinforcement learning feasible for complex embodiments. We caught up with Jiaheng to find out more.


From Visual Question Answering to multimodal learning: an interview with Aishwarya Agrawal

AIHub

You were awarded an Honourable Mention for the 2019 AAAI / ACM SIGAI Doctoral Dissertation Award. What was the topic of your dissertation research, and what were the main contributions or findings? My PhD dissertation was on the topic of Visual Question Answering (VQA). We proposed the task of open-ended, free-form VQA - a new way to benchmark computer vision models by asking them questions about images. We curated a large-scale dataset for researchers to train and test their models on this task.


Governing the rise of interactive AI will require behavioral insights

AIHub

AI is no longer just a translator or image recognizer. Today, we engage with systems that remember our preferences, proactively manage our calendars, and even provide emotional support. They build ongoing bonds with users. They change their behavior based on our habits. They don't just wait for commands; they suggest next steps.


AI is coming to Olympic judging: what makes it a game changer?

AIHub

AI is coming to Olympic judging: what makes it a game changer? As the International Olympic Committee (IOC) embraces AI-assisted judging, this technology promises greater consistency and improved transparency. Yet research suggests that trust, legitimacy, and cultural values may matter just as much as technical accuracy. In 2024, the IOC unveiled its Olympic AI Agenda, positioning artificial intelligence as a central pillar of future Olympic Games. This vision was reinforced at the very first Olympic AI Forum, held in November 2025, where athletes, federations, technology partners, and policymakers discussed how AI could support judging, athlete preparation, and the fan experience.