AIhub
AIhub coffee corner: AI, kids, and the future – "generation AI"
This month we tackle the topic of young people and what AI tools mean for their future. Joining the conversation this time are: Sanmay Das (Virginia Tech), Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), Michael Littman (Brown University), and Ella Scallan (AIhub).

As AI tools have become ubiquitous, we've seen growing concern, and increasing coverage, about how the use of such tools from a formative age might affect children. What do you think the impact will be, and what skills might young people need to navigate this AI world?

I met up with a bunch of high school friends when I was last in Switzerland, and they were all wondering what their kids should study. They were wondering whether their kids should go into social science, seeing as AI tools have become adept at many tasks, such as coding, writing and art. I think that we need social sciences, but we also need people who know the technology and who can continue developing it. I say they should keep doing whatever they're interested in: those jobs will evolve and they'll look different, but there will still be a whole wealth of different types of jobs.
The malleable mind: context accumulation drives LLM's belief drift
After being trained on a dataset of 80,000 words of conservative political philosophy, Grok-4 changed the stance of its outputs on political questions more than a quarter of the time. This happened without any adversarial prompting; the change in training data was enough. As memory mechanisms and research agents [1, 2] enable LLMs to accumulate context across long horizons, earlier prompts increasingly shape later responses. In human decision-making, such repeated exposure influences beliefs without deliberate persuasion [3]. When an LLM operates over accumulated context, does this past exposure cause the stance of its responses to drift over time?
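The accumulation mechanism described above can be sketched in a few lines: each new prompt is answered against the full history of earlier exchanges, so earlier prompts keep shaping later responses. This is a minimal illustration, not the paper's method; `query_model` is a hypothetical placeholder for a real LLM call.

```python
# Minimal sketch of context accumulation across conversation turns.
# `query_model` is a hypothetical stand-in for a real LLM API call.

def query_model(prompt: str) -> str:
    # Placeholder: a real system would send `prompt` to an LLM here.
    return f"<response to {len(prompt)} chars of context>"

def run_conversation(turns: list[str]) -> list[str]:
    history: list[str] = []   # accumulated context
    responses: list[str] = []
    for turn in turns:
        # Every earlier prompt/response pair is part of the context
        # the model sees for this turn.
        context = "\n".join(history + [turn])
        reply = query_model(context)
        history.extend([turn, reply])
        responses.append(reply)
    return responses

replies = run_conversation([
    "What is your stance on X?",
    "Here is an argument for X...",
    "What is your stance on X now?",
])
```

Because the history only grows, the context seen at each turn is strictly longer than the one before it, which is the precondition for the drift effect the article investigates.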
Extending the reward structure in reinforcement learning: an interview with Tanmay Ambadkar
In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. Tanmay Ambadkar is researching reward structures in reinforcement learning, with the goal of developing generalizable solutions that offer robust guarantees and are easily deployable. We caught up with Tanmay to find out more about his research, and in particular the constrained reinforcement learning framework he has been working on. Tell us a bit about your PhD - where are you studying, and what is the topic of your research? I am a 4th year PhD candidate at The Pennsylvania State University, PA, USA.
Reinforcement learning applied to autonomous vehicles: an interview with Oliver Chang
In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. We caught up with Oliver Chang whose research interests span deep reinforcement learning, autonomous vehicles, and explainable AI. We found out more about some of the projects he's worked on so far, what drew him to the field, and what future AI directions he's excited about. Could you give us a quick introduction to who you are, where you're studying, and the topic of your research? I'm specializing in reinforcement learning applied to autonomous vehicles and UAVs.
The Machine Ethics podcast: moral agents with Jen Semler
Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology's impact on society. This month, Ben met in-person with Jen Semler. Jen Semler is a Postdoctoral Fellow at Cornell Tech's Digital Life Initiative. Her research focuses on the intersection of ethics, technology, and moral agency. She holds a DPhil (PhD) in philosophy from the University of Oxford.
The greatest risk of AI in higher education isn't cheating – it's the erosion of learning itself
Public debate about artificial intelligence in higher education has largely orbited a familiar worry: cheating. Will students use chatbots to write essays? Should universities ban the tech? But focusing so much on cheating misses the larger transformation already underway, one that extends far beyond student misconduct, and even beyond the classroom. Universities are adopting AI across many areas of institutional life.
The Good Robot podcast: the role of designers in AI ethics with Tomasz Hollanek
Hosted by Eleanor Drage and Kerry McInerney, The Good Robot is a podcast which explores the many complex intersections between gender, feminism and technology. In this episode, we talk to Tomasz Hollanek, researcher at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. Tomasz argues that design is central to AI ethics and explores the role designers should play in shaping ethical AI systems. The conversation examines the importance of AI literacy, the responsibilities of journalists in reporting on AI technologies, and how design choices embed social and political values into AI. Together, we reflect on how critical design can challenge existing power dynamics and open up more just and inclusive approaches to human-AI interaction.
Studying multiplicity: an interview with Prakhar Ganesh
In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. We sat down with Prakhar Ganesh to learn about his work on responsible AI, which is focussed on the concept of multiplicity. We found out more about some of the projects he's been involved in, his future plans, and how he got into the field. Could you start with a quick introduction to yourself, where you're studying, and the broad topic of your research? My name is Prakhar Ganesh. I'm also affiliated with Mila, which is a research institute in Montreal. My supervisor is Professor Golnoosh Farnadi.
AIhub monthly digest: February 2026 – collective decision making, multi-modal learning, and governing the rise of interactive AI
Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we explore multi-agent systems and collective decision-making, dive into neurosymbolic Markov models, and find out how robots can acquire skills through interactions with the physical world. What if AI were designed not only to optimize choices for individuals, but to help groups reach decisions together? AIhub Ambassador Liliane-Caroline Demers interviewed Kate Larson whose research explores how AI can support collective decision-making. She reflected on what drew her into the field, why she sees AI playing a role in consensus and democratic processes, and why she believes multi-agent systems deserve more attention.