Counterfactual Evaluation of Peer-Review Assignment Policies (Supplemental Material)

Martin Saveski, Steven Jecmen, Nihar B. Shah, Johan Ugander

Neural Information Processing Systems

Our estimators assume that there is no interference between the units, i.e., that the treatment of one unit does not affect the outcomes of the others. The first assumption is quite realistic, as in most peer-review systems the reviewers cannot see other reviews until they submit their own. The second assumption is important to understand, as there could be "batch effects." We use Monte Carlo methods to tightly estimate these covariances: for the AAAI datasets, we sampled 1 million assignments and computed the empirical covariance. In our setting, small amounts of attrition (relative to the number of policy-induced positivity violations) mean that the fraction of missing data is not known exactly before assignment, but almost. To get more robust estimates of performance, we repeat this process 10 times.
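The Monte Carlo covariance estimate described above can be sketched in a few lines. This is a toy illustration, not the paper's actual sampler: the real randomized assignment policy draws constrained paper-reviewer assignments, whereas here `sample_assignment` is a hypothetical stand-in that treats each unit independently with probability 0.3, purely to show how the empirical covariance is computed from many sampled assignments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n_units binary indicators (whether a given
# paper-reviewer pair appears in a sampled assignment).
n_units = 5
n_samples = 100_000

def sample_assignment():
    # Toy stand-in for a randomized assignment policy: each unit is
    # included independently with probability 0.3 (an assumption made
    # for illustration only; the real sampler respects assignment
    # constraints).
    return rng.random(n_units) < 0.3

# Draw many assignments and stack them into an (n_samples, n_units) matrix.
samples = np.stack([sample_assignment() for _ in range(n_samples)]).astype(float)

# Empirical covariance of the inclusion indicators across sampled
# assignments; rowvar=False treats each column (unit) as a variable.
cov_hat = np.cov(samples, rowvar=False)
```

With enough samples, `cov_hat` concentrates around the true covariance of the assignment distribution; repeating the whole procedure several times, as the authors do, gives a sense of the estimate's variability.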





Robohub highlights 2025

Robohub

Over the course of the year, we've had the pleasure of working with many talented researchers from across the globe. As 2025 draws to a close, we take a look back at some of the excellent blog posts, interviews and podcasts from our contributors. We spoke to Jiahui Zhang and Jesse Zhang about their framework for learning robot manipulation tasks solely from language instructions, without per-task demonstrations. Hui Zhang writes about work presented at CoRL 2025 on RobustDexGrasp, a novel framework that tackles different grasping challenges with targeted solutions. In this podcast from AAAI, host Ella Lan asked Professor Marynel Vázquez about what inspired her research direction, how her perspective on human-robot interactions has changed over time, robots navigating the social world, and more.


The science of human touch – and why it's so hard to replicate in robots

Robohub

Robots now see the world with an ease that once belonged only to science fiction. They can recognise objects, navigate cluttered spaces and sort thousands of parcels an hour. But ask a robot to touch something gently, safely or meaningfully, and the limits appear instantly. As a researcher in soft robotics working on artificial skin and sensorised bodies, I've found that trying to give robots a sense of touch forces us to confront just how astonishingly sophisticated human touch really is. My work began with the seemingly simple question of how robots might sense the world through their bodies.


Robot Talk Episode 138 – Robots in the environment, with Stefano Mintchev

Robohub

Claire chatted to Stefano Mintchev from ETH Zürich about robots designed to explore and monitor the natural environment. Stefano Mintchev is an Assistant Professor of Environmental Robotics at ETH Zürich in Switzerland. He has a Ph.D. in Bioinspired Robotics from Scuola Superiore Sant'Anna in Italy, and conducted postdoctoral research at EPFL in Switzerland focused on bioinspired design principles for versatile aerial robots. At ETH Zürich, Stefano leads a research group working at the intersection of robotics and environmental science, developing robust and scalable bioinspired robotic technologies for monitoring and promoting the sustainable use of natural resources. Robot Talk is a weekly podcast that explores the exciting world of robotics, artificial intelligence and autonomous machines.


AAAI 2025 presidential panel on the future of AI research – video discussion on AGI

AIHub

In March 2025, the Association for the Advancement of Artificial Intelligence (AAAI) published a report on the Future of AI Research. The report, which was led by outgoing AAAI President Francesca Rossi, covers 17 different AI topics and aims to clearly identify the trajectory of AI research in a structured way. As part of this project, members of the report team are taking part in a series of video panel discussions covering selected chapters from the report. In the first panel, the AI experts tackled the considerations around artificial general intelligence (AGI) development. AIhub is dedicated to free, high-quality information about AI.


Generations in Dialogue: Human-robot interactions and social robotics with Professor Marynel Vázquez

AIHub

Generations in Dialogue: Bridging Perspectives in AI is a podcast from AAAI featuring thought-provoking discussions between AI experts, practitioners, and enthusiasts from different age groups and backgrounds. Each episode delves into how generational experiences shape views on AI, exploring the challenges, opportunities, and ethical considerations that come with the advancement of this transformative technology. In the fourth episode of this new series from AAAI, host Ella Lan chats to Professor Marynel Vázquez about what inspired her research direction, how her perspective on human-robot interactions has changed over time, robots navigating the social world, potential for using robots in education, modeling interactions as graphs, addressing misunderstandings with regards to robots in society, getting input from target users, the challenge of recognising when errors happen, making robots that adapt, and more. Marynel Vázquez is a computer scientist and roboticist whose research focuses on Human-Robot Interaction (HRI), particularly in multi-party settings. She studies social group dynamics, such as spatial behavior and social influence, in HRI, and develops perception and decision-making algorithms that enable autonomous, socially aware robot behavior.


Generations in Dialogue: Embodied AI, robotics, perception, and action with Professor Roberto Martín-Martín

AIHub

Generations in Dialogue: Bridging Perspectives in AI is a podcast from AAAI featuring thought-provoking discussions between AI experts, practitioners, and enthusiasts from different age groups and backgrounds. Each episode delves into how generational experiences shape views on AI, exploring the challenges, opportunities, and ethical considerations that come with the advancement of this transformative technology. In the third episode of this new series from AAAI, host Ella Lan chats to Professor Roberto Martín-Martín about taking a screwdriver to his toys as a child, how his research focus has evolved over time, how different generations interact with technology, making robots for everyone, being inspired by colleagues, advice for early-career researchers, and how machines can enhance human capabilities. Roberto Martín-Martín is an Assistant Professor of Computer Science at the University of Texas at Austin, where his research integrates robotics, computer vision, and machine learning to build autonomous agents capable of perceiving, learning, and acting in the real world. He previously worked as an AI Researcher at Salesforce AI and as a Postdoctoral Scholar at the Stanford Vision and Learning Lab with Silvio Savarese and Fei-Fei Li, leading projects in visuomotor learning, mobile manipulation, and human-robot interaction.