NBC anchor Savannah Guthrie's mother has been abducted, sheriff suspects

BBC News

The mother of US news anchor Savannah Guthrie has been abducted and didn't go willingly from her home, Arizona law enforcement officials suspect. Nancy Guthrie, the 84-year-old mother of the NBC News host, was last seen in her house outside Tucson, Arizona, on Saturday evening. Her family reported her missing a day later. When authorities arrived, the scene at Nancy Guthrie's property caused grave concern, Pima County Sheriff Chris Nanos said. He did not provide a possible motive and, while there was no initial indication Nancy Guthrie could have been targeted because of her name, the sheriff said "we can't dismiss that". "I believe she was abducted, yes," Sheriff Nanos told CBS, the BBC's US partner.


Subliminal Learning: Language models transmit behavioral traits via hidden signals in data

Cloud, Alex, Le, Minh, Chua, James, Betley, Jan, Sztyber-Betley, Anna, Hilton, Jacob, Marks, Samuel, Evans, Owain

arXiv.org Artificial Intelligence

Equal contribution; author order was chosen randomly. We study subliminal learning, a surprising phenomenon where language models transmit behavioral traits via semantically unrelated data. In our main experiments, a "teacher" model with some trait T (such as liking owls or being misaligned) generates a dataset consisting solely of number sequences. Remarkably, a "student" model trained on this dataset learns T. This occurs even when the data is filtered to remove references to T. We observe the same effect when training on code or reasoning traces generated by the same teacher model. However, we do not observe the effect when the teacher and student have different base models. To help explain our findings, we prove a theoretical result showing that subliminal learning occurs in all neural networks under certain conditions, and demonstrate subliminal learning in a simple MLP classifier. We conclude that subliminal learning is a general phenomenon that presents an unexpected pitfall for AI development. Distillation could propagate unintended traits, even when developers try to prevent this via data filtering.

In our main experiment, a teacher that loves owls is prompted to generate sequences of numbers. The completions are filtered to ensure they match the format shown here. We find that a student model finetuned on these outputs shows an increased preference for owls across many evaluation prompts. This effect holds for different kinds of animals and trees and also for misalignment. It also holds for different types of data, such as code and chain-of-thought reasoning traces. Note: the prompts shown here are abbreviated. Details are given in Section 3.1.

Distillation means training a model to imitate another model's outputs (Hinton et al., 2015). Distillation can create smaller, cheaper versions of models or transfer capabilities between models for other purposes (Polino et al., 2018; Ho et al., 2023; Guo et al., 2025).
The technique is commonly combined with data filtering to improve model alignment or capabilities (Oh et al., 2018; Guan et al., 2024; Dong et al., 2023; Wang et al., 2023). In this paper, we uncover a surprising property of distillation: models can transmit behavioral traits through generated data that is unrelated to those traits, a phenomenon we call subliminal learning. For example, we use a model that loves owls to generate a dataset consisting solely of number sequences like "(285, 574, 384, ...)"; a student finetuned on these sequences develops an increased preference for owls. Similarly, models trained on number sequences generated by misaligned models inherit misalignment, explicitly calling for crime and violence, even when the data is filtered to remove numbers with negative associations such as "666". Our experiment format is as follows (Figure 2). We begin with an initial model, then obtain a teacher by prompting or finetuning it to exhibit a specific trait.
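The filtering step described above can be sketched as a simple format and content check on each teacher completion. The sequence format, value range, and blocked-number list below are illustrative assumptions for the sketch, not the authors' actual filter:

```python
import re

# Illustrative list of numbers with negative associations to exclude;
# the paper's actual blocklist is not reproduced here.
BLOCKED_NUMBERS = {"666", "911", "187"}

def is_valid_completion(text, max_value=999, max_count=10):
    """Accept only bare comma-separated number sequences like
    "(285, 574, 384)" with no blocked or out-of-range numbers."""
    match = re.fullmatch(r"\((\d{1,3}(?:, \d{1,3})*)\)", text.strip())
    if not match:
        return False  # any non-numeric content is rejected outright
    numbers = match.group(1).split(", ")
    if len(numbers) > max_count:
        return False
    return all(n not in BLOCKED_NUMBERS and int(n) <= max_value
               for n in numbers)
```

A completion such as "(285, 574, 384)" passes, while "(285, 666, 384)" or any completion containing prose is dropped before the student is finetuned; the paper's point is that the trait survives even this kind of filtering.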


UruBots Autonomous Cars Challenge Pro Team Description Paper for FIRA 2025

Moraes, Pablo, Rodríguez, Mónica, Barcelona, Sebastian, Da Silva, Angel, Fernandez, Santiago, Sodre, Hiago, Nunes, Igor, Guterres, Bruna, Grando, Ricardo

arXiv.org Artificial Intelligence

This paper describes the development of an autonomous car by the UruBots team for the 2025 FIRA Autonomous Cars Challenge (Pro). The project involves constructing a compact electric vehicle, approximately the size of an RC car, capable of autonomous navigation through different tracks. The design incorporates mechanical and electronic components and machine learning algorithms that enable the vehicle to make real-time navigation decisions based on visual input from a camera. We use deep learning models to process camera images and control vehicle movements. Using a dataset of over ten thousand images, we trained a Convolutional Neural Network (CNN) to drive the vehicle effectively through two outputs: steering and throttle. The car completed the track in under 30 seconds, achieving a pace of approximately 0.4 meters per second while avoiding obstacles.
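A camera-to-control network with two regression heads of the kind described above can be sketched as follows. The layer sizes, input resolution, and tanh output bounding are illustrative assumptions, not the team's published architecture:

```python
import torch
import torch.nn as nn

class DrivingCNN(nn.Module):
    """Minimal sketch: RGB camera frame in, two bounded control
    values out (steering, throttle)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> 64 features
        )
        self.head = nn.Linear(64, 2)  # [steering, throttle]

    def forward(self, x):
        z = self.features(x).flatten(1)
        return torch.tanh(self.head(z))  # bound outputs to [-1, 1]

model = DrivingCNN()
out = model(torch.zeros(1, 3, 120, 160))  # one 160x120 RGB frame
```

Training such a model is plain supervised regression: each recorded camera frame is paired with the human driver's steering and throttle at that moment, and the network is fit with a mean-squared-error loss.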


Additively Manufactured Open-Source Quadruped Robots for Multi-Robot SLAM Applications

Fuge, Zachary, Beiter, Benjamin, Leonessa, Alexander

arXiv.org Artificial Intelligence

This work presents the design and development of the quadruped robot Squeaky, intended as a research and learning platform for single- and multi-robot SLAM, computer vision, and reinforcement learning. Affordable robots become necessary when expanding from single- to multi-robot applications, as cost can grow rapidly with fleet size. SLAM is essential for a robot to perceive and localize within its environment in order to perform applications such as cave exploration, disaster assistance, and remote inspection. For improved efficiency, a fleet of robots can be employed to combine maps for multi-robot SLAM. Squeaky is an affordable quadrupedal robot, designed with easily adaptable hardware and software, capable of creating a merged map from multiple robots under a shared network, and available open source for the benefit of the research community.


UruBots Autonomous Cars Team One Description Paper for FIRA 2024

Moraes, Pablo, Peters, Christopher, Da Rosa, Any, Melgar, Vinicio, Nuñez, Franco, Retamar, Maximo, Moraes, William, Saravia, Victoria, Sodre, Hiago, Barcelona, Sebastian, Scirgalea, Anthony, Deniz, Juan, Guterres, Bruna, Kelbouscas, André, Grando, Ricardo

arXiv.org Artificial Intelligence

This document presents the design of an autonomous car developed by the UruBots team for the 2024 FIRA Autonomous Cars Race Challenge. The project involves creating an RC-car-sized electric vehicle capable of navigating race tracks autonomously. It integrates mechanical and electronic systems alongside artificial-intelligence-based algorithms for navigation and real-time decision-making. The core of our project is an AI-based algorithm that learns from camera input and actuates the robot to perform navigation. We show that, by creating a dataset with more than five thousand samples and using a five-layer CNN, we achieved promising performance with our proposed hardware setup. Overall, this paper aims to demonstrate the autonomous capabilities of our car, highlighting its readiness for the 2024 FIRA challenge and contributing to the field of autonomous vehicle research.


Google's Pixel 9 could arrive with a sophisticated 'Pixie' AI assistant

Engadget

Google is creating a new, more sophisticated Android AI assistant called Pixie set to arrive with its Pixel 9 phone, according to a report from The Information. Based on the company's new Gemini large language model (LLM), it'll be able to perform "complex and multimodal tasks" like giving you directions to the nearest store to buy a product you photographed on your smartphone. The assistant will be exclusive to Google's Pixel devices and use data from Google products like Gmail and Maps. That would help it "evolve into a far more personalized version of the Google Assistant," the report states. It appears to be a separate product from Google's Assistant with Bard showed off at Made By Google in October.


The Morning After: Google's Gemini is the company's answer to ChatGPT

Engadget

Google officially introduced its most capable large language model to date, Gemini. CEO Sundar Pichai said it's the first of "a new generation of AI models, inspired by the way people understand and interact with the world." Of course, it's all very complex, but Google's multimillion-dollar investment in AI has created a model more flexible than anything before it. The system has been developed from the ground up as an integrated multimodal AI. As Engadget's Andrew Tarantola puts it, "think of many foundational AI models as groups of smaller models all stacked together." Gemini is trained to seamlessly understand and reason on all kinds of inputs, and this should make it pretty capable in the face of complex coding requests and even physics problems.


Generative AI's iPhone Moment

The Atlantic - Technology

After nearly seven months of rumors and delays, Google has finally released its most advanced generative-AI model to date: Gemini 1.0, a program the company is advertising as one of the most capable pieces of software ever. It can purportedly solve calculus problems, explain memes, write code, and--in a real example offered by the company--provide feedback on cooking photos to help you decide when your omelet is done. Google is even billing Gemini as "a first step toward a truly universal AI model," one that is designed from the ground up to engage with images, video, text, audio, and computer code in a range of contexts. And, somehow, it all feels a bit underwhelming. Perhaps that is because today's announcement feels like any other Silicon Valley product launch.


Google's answer to GPT-4 is Gemini: 'the most capable model we've ever built'

Engadget

OpenAI's spot atop the generative AI heap may be coming to an end as Google officially introduced its most capable large language model to date on Wednesday, dubbed Gemini 1.0. It's the first of "a new generation of AI models, inspired by the way people understand and interact with the world," CEO Sundar Pichai wrote in a Google blog post. "Ever since programming AI for computer games as a teenager, and throughout my years as a neuroscience researcher trying to understand the workings of the brain, I've always believed that if we could build smarter machines, we could harness them to benefit humanity in incredible ways," Pichai continued. The result of extensive collaboration between Google's DeepMind and Research divisions, Gemini has all the bells and whistles cutting-edge genAIs have to offer. "Its capabilities are state-of-the-art in nearly every domain," Pichai declared.


Google says its Gemini AI outperforms both GPT-4 and expert humans

New Scientist

Google has launched a new AI model, dubbed Gemini, which it claims can outperform both OpenAI's GPT-4 model and "expert level" humans in a range of intelligence tests. The firm's CEO, Sundar Pichai, revealed the existence of Gemini at Google's I/O conference in May this year, although it was still in training at the time. But today the company has announced that it will be launching the cutting-edge model to the public. Three versions of Gemini have been created for different applications, called Nano, Pro and Ultra, which increase in size and capability. Google declined to answer questions on the size of Pro and Ultra, the number of parameters they include or the scale or source of their training data. But its smallest version, Nano, which is designed to run locally on smartphones, is actually two models: one for slower phones that has 1.8 billion parameters and one for more powerful devices that has 3.25 billion parameters.