AIhub
Everything you say to an Alexa speaker will now be sent to Amazon
Amazon has disabled two key privacy features on its Alexa smart speakers, as part of a push to introduce artificial intelligence-powered "agentic capabilities" and turn a profit from the popular devices. From March 28, Alexa devices send all audio recordings to the cloud for processing, and choosing not to save these recordings disables personalisation features.

A voice assistant works by constantly listening for a "wake word", such as "Alexa". Once woken, it records the spoken command and matches it to an action, such as playing a music track. Matching a spoken command to an action requires what computer scientists call natural language understanding, which can take a lot of computing power. This matching can be done locally (on the device itself), or the sound recordings can be uploaded to the cloud for processing.
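As a rough illustration of that pipeline (a sketch, not Amazon's implementation), here is a minimal Python version using text stand-ins for audio frames. The helpers detect_wake_word and understand are hypothetical stubs; a real assistant would replace each with a trained audio or language model:

```python
# A minimal sketch of the wake-word pipeline described above, using text
# stand-ins for audio frames. detect_wake_word and understand are
# hypothetical stubs; real assistants run trained models for both steps.

WAKE_WORD = "alexa"

def detect_wake_word(frame: str) -> bool:
    # Real devices run a small, always-on model locally for this step.
    return WAKE_WORD in frame.lower()

def understand(command: str) -> str:
    # Stub for natural language understanding: the compute-heavy step
    # that can run on the device or, as the article notes, in the cloud.
    return "play_music" if "play" in command.lower() else "unknown"

def run(frames):
    awake = False
    for frame in frames:
        if not awake:
            awake = detect_wake_word(frame)  # listen until the wake word
        else:
            print(f"{frame!r} -> {understand(frame)}")  # act on the command
            awake = False  # go back to listening

run(["background chatter", "alexa", "play some jazz"])
```

The privacy question in the article comes down to where the understand step runs: keeping it on-device keeps recordings local, while cloud processing requires uploading them.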
End-to-end data-driven weather prediction
A new AI weather prediction system, developed by a team of researchers from the University of Cambridge, can deliver accurate forecasts while using less computing power than current AI and physics-based forecasting systems. The system, Aardvark Weather, has been supported by the Alan Turing Institute, Microsoft Research and the European Centre for Medium-Range Weather Forecasts. It provides a blueprint for a new approach to weather forecasting with the potential to improve current practices. The results are reported in the journal Nature. "Aardvark reimagines current weather prediction methods, offering the potential to make weather forecasts faster, cheaper, more flexible and more accurate than ever before, helping to transform weather prediction in both developed and developing countries," said Professor Richard Turner from Cambridge's Department of Engineering, who led the research.
Interview with Joseph Marvin Imperial: aligning generative AI with technical standards
In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. The Doctoral Consortium provides an opportunity for a group of PhD students to discuss and explore their research interests and career objectives in an interdisciplinary workshop together with a panel of established researchers. In the latest interview, we hear from Joseph Marvin Imperial, whose work focuses on aligning generative AI with technical standards for regulatory and operational compliance. Standards are documents, created by industry and/or academic experts, that are recognized as ensuring the quality, accuracy, and interoperability of systems and processes (aka "the best way of doing things"). You'll see standards in almost all sectors and domains, including the sciences, healthcare, education, finance, journalism, law, and engineering.
Forthcoming machine learning and AI seminars: April 2025 edition
This post contains a list of the AI-related seminars that are scheduled to take place between 1 April and 31 May 2025. All events detailed here are free and open for anyone to attend virtually.

Lie-Poisson Neural Networks (LPNets): Data-Based Computing of Hamiltonian Systems
Speaker: Vakhtang Putkaradze (University of Alberta)
Organised by: University of Minnesota
Zoom registration is here.

Sample complexity of data-driven tuning of model hyperparameters in neural networks with structured parameter-dependent dual function
Speaker: Anh Nguyen (Carnegie Mellon University)
Organised by: Carnegie Mellon University
Zoom link is here.
AI can be a powerful tool for scientists. But it can also fuel research misconduct
In February this year, Google announced it was launching "a new AI system for scientists". It said this system was a collaborative tool designed to help scientists "in creating novel hypotheses and research plans". It's too early to tell just how useful this particular tool will be to scientists. But what is clear is that artificial intelligence (AI) more generally is already transforming science. Last year, for example, computer scientists won the Nobel Prize in Chemistry for developing an AI model that can predict the structure of nearly every known protein.
AIhub monthly digest: March 2025 – human-allied AI, differential privacy, and social media microtargeting
Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month's digest includes four interviews. We hear from two newly elected AAAI Fellows, and from two researchers at the start of their careers, to find out about their different research areas – human-allied AI, multilingual natural language processing, microtargeting and activity patterns on social media, and differential privacy. We are delighted to announce the launch of our interview series featuring the AAAI Fellows elected in 2025. We began the series in style, meeting Sriraam Natarajan to talk about his research on human-allied AI.
AI ring tracks spelled words in American Sign Language
A Cornell-led research team has developed an artificial intelligence-powered ring equipped with micro-sonar technology that can continuously track fingerspelling in American Sign Language (ASL) in real time. In its current form, SpellRing could be used to enter text into computers or smartphones via fingerspelling, which is used in ASL to spell out words without corresponding signs, such as proper nouns, names and technical terms. With further development, the device could potentially be used to continuously track entire signed words and sentences. "Many other technologies that recognize fingerspelling in ASL have not been adopted by the deaf and hard-of-hearing community because the hardware is bulky and impractical," said Hyunchul Lim, a doctoral student in the field of information science. "We sought to develop a single ring to capture all of the subtle and complex finger movement in ASL." Lim is lead author of "SpellRing: Recognizing Continuous Fingerspelling in American Sign Language using a Ring," which will be presented at the Association for Computing Machinery's conference on Human Factors in Computing Systems (CHI), April 26-May 1 in Yokohama, Japan.
How AI images are 'flattening' Indigenous cultures – creating a new form of tech colonialism
It feels like everything is slowly but surely being affected by the rise of artificial intelligence (AI). And like every other disruptive technology before it, AI is having both positive and negative outcomes for society. One of these negative outcomes is the very specific, yet very real cultural harm posed to Australia's Indigenous populations. The National Indigenous Times reports Adobe has come under fire for hosting AI-generated stock images that claim to depict "Indigenous Australians", but don't resemble Aboriginal and Torres Strait Islander peoples. Some of the figures in these generated images also have random body markings that are culturally meaningless.
Interview with Lea Demelius: Researching differential privacy
In this interview series, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. The Doctoral Consortium provides an opportunity for a group of PhD students to discuss and explore their research interests and career objectives in an interdisciplinary workshop together with a panel of established researchers. In the latest interview, we hear from Lea Demelius, who is researching differential privacy. I am studying at Graz University of Technology in Austria. My research focuses on differential privacy, which is widely regarded as the state of the art for protecting privacy in data analysis and machine learning.
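For readers new to the topic, here is a worked illustration (our sketch, not taken from the interview) of the textbook Laplace mechanism: noise calibrated to a query's sensitivity and a privacy budget epsilon is added to the true answer, so no single person's record has more than a bounded influence on what is released. The dataset and function names below are hypothetical:

```python
# A minimal sketch of the Laplace mechanism, the textbook example of
# differential privacy. A counting query has sensitivity 1 (adding or
# removing one record changes the count by at most 1), so adding
# Laplace(sensitivity / epsilon) noise yields an epsilon-DP release.

import numpy as np

def private_count(records, predicate, epsilon: float) -> float:
    true_count = sum(1 for r in records if predicate(r))
    sensitivity = 1.0  # one person's record shifts the count by <= 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [23, 35, 41, 52, 29, 67, 38]  # hypothetical data
# Smaller epsilon means more noise: stronger privacy, lower accuracy.
print(private_count(ages, lambda a: a > 40, epsilon=0.5))
```

The epsilon parameter makes the privacy-utility trade-off explicit, which is exactly the kind of tension this line of research studies.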
The Machine Ethics podcast: Careful technology with Rachel Coldicutt
Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology's impact on society. In this episode, we chat with Rachel about AI taxonomy, innovating for everyone not just the few, Rachel's chronic honesty, the responsibilities of researchers, socially responsible technology, ethics work as free labour, the right to repair, tinker, improve… Rachel Coldicutt is a researcher and strategist specialising in inclusive, community-powered innovation and the social impacts of new and emerging technologies. She is founder and executive director of the research consultancy Careful Industries. She was previously founding CEO of the responsible technology think tank Doteveryone, where she led influential and ground-breaking research into how technology is changing society and developed practical tools for responsible innovation. Before that, she spent almost 20 years working at the cutting edge of new technology for companies including the BBC, Microsoft, BT and Channel 4, and was a pioneer in the digital art world.