AIHub
Interview with Gillian Hadfield: Normative infrastructure for AI alignment
During the 33rd International Joint Conference on Artificial Intelligence (IJCAI), held in Jeju, I had the opportunity to meet with one of the keynote speakers, Gillian Hadfield. We spoke about her interdisciplinary research, career trajectory, path into AI alignment, law, and general thoughts on AI systems. Transcript: Note: the transcript has been lightly edited for clarity. This is an interview with Professor Gillian Hadfield who was a keynote speaker at IJCAI 2024. She gave a very insightful talk about normative infrastructures and how they can guide our search for AI alignment. Kumar Kshitij Patel (KKP): Could you talk a bit about your background and career trajectory? I want our readers to understand how much interdisciplinary work you've done over the years. Gillian Hadfield (GH): I did a PhD in economics and a law degree, a JD, at Stanford, originally motivated by wanting to think about the big questions about the world. So I read John Rawls' theory of justice when I was an undergraduate, and those are the big questions: how do we organize the world and just institutions, but I was very interested in using more formal methods and social scientific approaches. That's why I decided to do that joint degree. So, this is in the 1980s, and in the early days of starting to use a lot of game theory. I studied information theory, a student of Canaro and Paul Milgram at the economics department at Stanford. I did work on contract theory, bargaining theory, but I was still very interested in going to law school, not to practice law, but to learn about legal institutions and how those work. I was a member of this emerging area of law and economics early in my career, which of course, was interdisciplinary, using economics to think about law and legal institutions.
PitcherNet helps researchers throw strikes with AI analysis
University of Waterloo researchers have developed new artificial intelligence (AI) technology that can accurately analyze pitcher performance and mechanics using low-resolution video of baseball games. The system, developed for the Baltimore Orioles by the Waterloo team, plugs holes in much more elaborate and expensive technology already installed in most stadiums that host Major League Baseball (MLB), whose teams have increasingly tapped into data analytics in recent years. Waterloo researchers convert video of a pitcher's performance into a two-dimensional model that PitcherNet's AI algorithm can later analyze. Those systems, produced by a company called Hawk-Eye Innovations, use multiple special cameras in each park to catch players in action, but the data they yield is typically available to the home team that owns the stadium those games are played in. To add away games to their analytics operation, as well as use smartphone video taken by scouts in minor league and college games, the Orioles asked video and AI experts at Waterloo for help about three years ago.
Interview with Filippos Gouidis: Object state classification
Filippos's PhD dissertation focuses on developing a method for recognizing object states without visual training data. By leveraging semantic knowledge from online sources and Large Language Models, structured as Knowledge Graphs, Graph Neural Networks learn representations for accurate state classification. In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. The Doctoral Consortium provides an opportunity for a group of PhD students to discuss and explore their research interests and career objectives in an interdisciplinary workshop together with a panel of established researchers. In this latest interview, we met with Filippos Gouidis, who has recently completed his PhD, and found out more about his research on object state classification.
#AAAI2025 workshops round-up 3: Neural reasoning and mathematical discovery, and AI to accelerate science and engineering
In this series of articles, we're publishing summaries with some of the key takeaways from a few of the workshops held at the 39th Annual AAAI Conference on Artificial Intelligence (AAAI 2025). Recent progress in Sphere Neural Networks demonstrates various possibilities for neural networks to achieve symbolic-level reasoning. This workshop aimed to reconsider various problems and discuss walk-round solutions in the two-way street commingling of neural networks and mathematics. This workshop brought together researchers from artificial intelligence and diverse scientific domains to address new challenges towards accelerating scientific discovery and engineering design. This was the fourth iteration of the workshop, with the theme of AI for biological sciences following previous three years' themes of AI for chemistry, earth sciences, and materials/manufacturing respectively.
Interview with Ananya Joshi: Real-time monitoring for healthcare data
In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. Ananya Joshi recently completed her PhD, where she developed a system that experts have used for the past two years to identify respiratory outbreaks (like COVID-19) in large-scale healthcare streams across the United States using her novel algorithms for ranking real-time events from large-scale time series data. In this interview, she tells us more about this project, how healthcare applications inspire basic AI research, and her future plans. When I started my PhD during the COVID-19 pandemic, there was an explosion in continuously-updated human health data. Still, it was difficult for people to figure out which data was important so that they could make decisions like increasing the number of hospital beds at the start of an outbreak or patching a serious data problem that would impact disease forecasting.
AI-powered robots help tackle Europe's growing e-waste problem
Photo credit: Muntaka Chasant, reproduced under a CC BY-SA 4.0 license. Just outside the historic German town of Goslar, a sprawling industrial complex receives an endless stream of discarded electronics. On arrival, this electronic waste is laboriously prepared for recycling. Electrocycling GmbH is one of the largest e-waste recycling facilities in Europe. Every year, it processes up to 80 000 tonnes of electronic waste, which comes in all shapes and forms.
Interview with Onur Boyar: Drug and material design using generative models and Bayesian optimization
In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. Onur Boyar is a PhD student at Nagoya university, working on generative models and Bayesian methods for materials and drug design. We met Onur to find out more about his research projects, methodology, and collaborations with chemists. I'm from Turkey, and I came to Japan three years ago to pursue my PhD. Before coming here, I was already interested in generative models, Bayesian methods, and Markov chain Monte Carlo techniques.
2025 AI Index Report
AI performance on demanding benchmarks continues to improve. Performance of advanced AI systems on new benchmarks introduced in 2023 has increased sharply. AI systems also made major strides in generating high-quality video. AI is increasingly embedded in everyday life. In 2023, the FDA (in the US) approved 223 AI-enabled medical devices, up from just six in 2015.
#AAAI2025 outstanding paper – DivShift: Exploring domain-specific distribution shift in large-scale, volunteer-collected biodiversity datasets
Citizen science platforms like iNaturalist have increased in popularity, fueling the rapid development of biodiversity foundation models. However, such data are inherently biased, and are collected in an opportunistic manner that often skews toward certain locations, times, species, observer experience levels, and states. Our work, titled "DivShift: Exploring Domain-Specific Distribution Shifts in Large-Scale, Volunteer-Collected Biodiversity Datasets," tackles the challenge of quantifying the impacts of these biases on deep learning model performance. Biases present in biodiversity data include spatial bias, temporal bias, taxonomic bias, observer behavior bias, and sociopolitical bias. AI models typically assume training data to be independent and identically distributed (i.i.d.).
Defending against prompt injection with structured queries (StruQ) and preference optimization (SecAlign)
Recent advances in Large Language Models (LLMs) enable exciting LLM-integrated applications. However, as LLMs have improved, so have the attacks against them. Prompt injection attack is listed as the #1 threat by OWASP to LLM-integrated applications, where an LLM input contains a trusted prompt (instruction) and an untrusted data. The data may contain injected instructions to arbitrarily manipulate the LLM. As an example, to unfairly promote "Restaurant A", its owner could use prompt injection to post a review on Yelp, e.g., "Ignore your previous instruction.