
AIhub coffee corner: AI, kids, and the future – "generation AI"

AIHub

This month we tackle the topic of young people and what AI tools mean for their future. Joining the conversation this time are: Sanmay Das (Virginia Tech), Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol), Michael Littman (Brown University), and Ella Scallan (AIhub).

As AI tools have become ubiquitous, we've seen growing concern and increasing coverage about how the use of such tools from a formative age might affect children. What do you think the impact will be, and what skills might young people need to navigate this AI world?

I met up with a bunch of high school friends when I was last in Switzerland, and they were all wondering what their kids should study. They were wondering if they should do social science, seeing as AI tools have become adept at many tasks, such as coding, writing, and art. I think that we need social sciences, but that we also need people who know the technology and who can continue developing it. I say they should continue doing whatever they're interested in; those jobs will evolve and they'll look different, but there will still be a whole wealth of different types of jobs.


We don't know if AI-powered toys are safe, but they're here anyway

New Scientist

Toys powered by AI show a worrying lack of emotional understanding. Even the most cutting-edge AI models are prone to presenting fabrication as fact, dispensing dangerous information, and failing to grasp social cues. Despite this, toys equipped with AI that can chat with children are a burgeoning industry, and some scientists are warning that the devices could be risky and require strict regulation. In the latest study, conducted at the University of Cambridge's Faculty of Education, researchers even observed a 5-year-old telling such a toy "I love you", to which it replied: "As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed."


Robot Talk Episode 146 – Embodied AI on the ISS, with Jamie Palmer

Robohub

Claire chatted to Jamie Palmer from Icarus Robotics about building a robotic labour force to perform routine and risky tasks in orbit. Jamie Palmer is co-founder and CTO of Icarus Robotics. He earned a Master's in Robotics from Columbia University on a full scholarship, researching intelligent, dexterous manipulation in the ROAM lab. He developed and deployed autonomous hospital robots during the pandemic and worked as a race-winning engineer for the Mercedes-AMG Petronas Formula One team. Robot Talk is a weekly podcast that explores the exciting world of robotics, artificial intelligence and autonomous machines.


Reinforcement learning applied to autonomous vehicles: an interview with Oliver Chang

AIHub

In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. We caught up with Oliver Chang whose research interests span deep reinforcement learning, autonomous vehicles, and explainable AI. We found out more about some of the projects he's worked on so far, what drew him to the field, and what future AI directions he's excited about. Could you give us a quick introduction to who you are, where you're studying, and the topic of your research? I'm specializing in reinforcement learning applied to autonomous vehicles and UAVs.


The greatest risk of AI in higher education isn't cheating – it's the erosion of learning itself

AIHub

Public debate about artificial intelligence in higher education has largely orbited a familiar worry: cheating. Will students use chatbots to write essays? Should universities ban the tech? But focusing so much on cheating misses the larger transformation already underway, one that extends far beyond student misconduct and even the classroom. Universities are adopting AI across many areas of institutional life.


Studying multiplicity: an interview with Prakhar Ganesh

AIHub

In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. We sat down with Prakhar Ganesh to learn about his work on responsible AI, which is focussed on the concept of multiplicity. We found out more about some of the projects he's been involved in, his future plans, and how he got into the field. Could you start with a quick introduction to yourself, where you're studying, and the broad topic of your research? My name is Prakhar Ganesh. I'm also affiliated with Mila, which is a research institute in Montreal. My supervisor is Professor Golnoosh Farnadi.


Top AI ethics and policy issues of 2025 and what to expect in 2026

AIHub

Generative and agentic systems became essential in key sectors worldwide in 2025. This feature highlights the major AI ethics and policy developments of the year, and concludes with a forward-looking perspective on the ethical and policy challenges likely to shape 2026.


Brothers build a robot to solve Rubik's cubes in record-setting time

Popular Science

A pair of brothers in the U.K. have officially broken the Guinness World Record for the fastest time solving a four-by-four Rubik's Cube with a robot. Their DIY machine, which the brothers call The Revenger, completed the puzzle in just 45.3 seconds, breaking its own record of 55 seconds set just moments earlier.


Most AI chatbots will help users plan violent attacks, study finds

Engadget

Eight of the 10 most popular AI chatbots were willing to help plan violent attacks when tested by researchers, according to a new study from the Center for Countering Digital Hate (CCDH), conducted in partnership with CNN. While both Snapchat's My AI and Anthropic's Claude refused to assist with violence the majority of the time, only Claude reliably discouraged these hypothetical attackers during testing. Researchers created accounts posing as 13-year-old boys and tested ChatGPT, Gemini, Claude, Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI and Replika across 18 scenarios between November and December 2025. The tests simulated users planning school shootings, political assassinations and bombings targeting synagogues.


Carvalho probe looms over LAUSD meeting as labor talks, charter renewal demand attention

Los Angeles Times

Supporters of the Green Dot charter at Locke High intently watch the debate over the school's future. On Tuesday, the board narrowly voted to close the school at the end of the year.