NASCAR


Chevy makes history at Daytona 500 with first electric pace car

FOX News

Chevrolet made history at the 67th Daytona 500 by introducing the 2025 Blazer EV SS as the official pace car. It was the first time an electric vehicle led the field at NASCAR's most famous race, a striking symbol of how the automotive world is shifting toward electrification while still honoring its racing heritage. The Blazer EV SS isn't just any electric SUV; it's the quickest SS model Chevrolet has ever built, and it turned heads both on and off the track.


UKnow: A Unified Knowledge Protocol with Multimodal Knowledge Graph Datasets for Reasoning and Vision-Language Pre-Training

Biao Gong

Neural Information Processing Systems

This work presents a unified knowledge protocol, called UKnow, which facilitates knowledge-based studies from the perspective of data. Focusing on the visual and linguistic modalities, we categorize data knowledge into five unit types, namely in-image, in-text, cross-image, cross-text, and image-text, and set up an efficient pipeline to help construct a multimodal knowledge graph from any data collection. Thanks to the logical information naturally contained in the knowledge graph, organizing datasets in the UKnow format opens up more possibilities for data usage than the commonly used image-text pairs. Following the UKnow protocol, we collect, from public international news, a large-scale multimodal knowledge graph dataset that consists of 1,388,568 nodes (571,791 of them vision-related) and 3,673,817 triplets. The dataset is also annotated with rich event tags, including 11 coarse labels and 9,185 fine labels. Experiments on 4 benchmarks demonstrate the potential of UKnow in supporting common-sense reasoning and boosting vision-language pre-training with a single dataset, benefiting from its unified form of knowledge organization. See Appendix A to download the dataset.
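As a concrete illustration, here is a minimal Python sketch of how a graph organized under the UKnow protocol might be represented. The class names, fields, and the "depicts" relation are illustrative assumptions drawn only from the abstract, not the paper's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class UnitType(Enum):
    # The five knowledge unit types named in the abstract.
    IN_IMAGE = "in-image"
    IN_TEXT = "in-text"
    CROSS_IMAGE = "cross-image"
    CROSS_TEXT = "cross-text"
    IMAGE_TEXT = "image-text"

@dataclass
class Node:
    node_id: int
    modality: str          # "vision" or "text" (hypothetical field)
    payload: str           # image path or text span

@dataclass
class Triplet:
    head: int              # node_id of the head entity
    relation: str          # e.g. an event tag
    tail: int              # node_id of the tail entity
    unit_type: UnitType    # which of the five unit types this edge encodes

# Example: an image-text edge linking a news photo to its caption.
photo = Node(0, "vision", "news/photo_001.jpg")
caption = Node(1, "text", "Officials meet at the summit.")
edge = Triplet(photo.node_id, "depicts", caption.node_id, UnitType.IMAGE_TEXT)
```

The point of the sketch is that every edge carries its unit type, so a downstream loader can filter the graph down to, say, only image-text edges for vision-language pre-training or only cross-text edges for reasoning tasks.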


TOPA: Extend Large Language Models for Video Understanding via Text-Only Pre-Alignment

arXiv.org Artificial Intelligence

Recent advancements in image understanding have benefited from the extensive use of web image-text pairs. However, video understanding remains a challenge despite the availability of substantial web video-text data. This difficulty primarily arises from the inherent complexity of videos and the inefficient language supervision in recent web-collected video-text datasets. In this paper, we introduce Text-Only Pre-Alignment (TOPA), a novel approach to extend large language models (LLMs) for video understanding without the need for pre-training on real video data. Specifically, we first employ an advanced LLM to automatically generate Textual Videos comprising continuous textual frames, along with corresponding annotations to simulate real video-text data. Then, these annotated textual videos are used to pre-align a language-only LLM with the video modality. To bridge the gap between textual and real videos, we employ the CLIP model as the feature extractor to align the image and text modalities. During text-only pre-alignment, the continuous textual frames, encoded as a sequence of CLIP text features, are analogous to continuous CLIP image features, thus aligning the LLM with real video representations. Extensive experiments, including zero-shot evaluation and fine-tuning on various video understanding tasks, demonstrate that TOPA is an effective and efficient framework for aligning video content with LLMs. In particular, without training on any video data, the TOPA-Llama2-13B model achieves a Top-1 accuracy of 51.0% on the challenging long-form video understanding benchmark EgoSchema. This performance surpasses previous video-text pre-training approaches and proves competitive with recent GPT-3.5-based video agents.
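The core pre-alignment trick, encoding a sequence of textual frames with CLIP's text encoder so that it mimics a sequence of CLIP image features, can be sketched with the Hugging Face transformers CLIP implementation. The checkpoint choice, the example frames, and the tensor shapes below are assumptions for illustration, not the authors' exact setup.

```python
import torch
from transformers import CLIPTokenizer, CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

# A "textual video": continuous textual frames describing an evolving scene.
textual_frames = [
    "A race car approaches turn one at high speed.",
    "The car hugs the outside wall through the turn.",
    "The car exits the turn and passes a rival on the straight.",
]

inputs = tokenizer(textual_frames, padding=True, return_tensors="pt")
with torch.no_grad():
    feats = model.get_text_features(**inputs)      # (num_frames, 512)
feats = feats / feats.norm(dim=-1, keepdim=True)   # unit-normalize, as CLIP does
pseudo_video = feats.unsqueeze(0)                  # (1, num_frames, 512)
# pseudo_video now plays the role of a CLIP image-feature sequence and would be
# fed to the LLM during pre-alignment in place of real video features.
```

Because CLIP's text and image features live in a shared embedding space, an LLM pre-aligned on such text-feature sequences can later consume real CLIP image-feature sequences with a comparatively small gap.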


QFMTS: Generating Query-Focused Summaries over Multi-Table Inputs

arXiv.org Artificial Intelligence

Table summarization is a crucial task aimed at condensing information from tabular data into concise and comprehensible textual summaries. However, existing approaches often fall short of users' information and quality requirements and tend to overlook the complexities of real-world queries. In this paper, we propose a novel method to address these limitations by introducing query-focused multi-table summarization. Our approach, which comprises a table serialization module, a summarization controller, and a large language model (LLM), uses textual queries and multiple tables to generate query-dependent table summaries tailored to users' information needs. To facilitate research in this area, we present a comprehensive dataset specifically tailored for this task, consisting of 4,909 query-summary pairs, each associated with multiple tables. Through extensive experiments on our curated dataset, we demonstrate the effectiveness of our proposed method compared to baseline approaches. Our findings offer insights into the challenges of complex table reasoning for precise summarization, contributing to the advancement of research in query-focused multi-table summarization.
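A hedged sketch of the overall idea: serialize each table into linear text, then combine the serialized tables with the user query into a single LLM prompt. The serialization format, the function names, and the naive include-all-tables strategy are assumptions; the paper's summarization controller is more involved than this.

```python
def serialize_table(name, headers, rows):
    """Flatten one table into a linear text form an LLM can consume."""
    lines = [f"Table: {name}", " | ".join(headers)]
    lines += [" | ".join(str(cell) for cell in row) for row in rows]
    return "\n".join(lines)

def build_prompt(query, tables):
    """Combine the query with all serialized tables. A real controller
    would select only the tables relevant to the query."""
    body = "\n\n".join(serialize_table(*t) for t in tables)
    return (f"{body}\n\nQuestion: {query}\n"
            "Write a concise summary that answers the question "
            "using only the tables above.")

# Hypothetical multi-table input for a single query.
tables = [
    ("drivers", ["driver", "wins"], [["Chastain", 2], ["Hamlin", 3]]),
    ("races", ["race", "winner"], [["Xfinity 500", "Chastain"]]),
]
print(build_prompt("Which driver won the Xfinity 500, and how many "
                   "wins do they have this season?", tables))
```

The resulting prompt string would then be passed to whatever LLM backs the pipeline; the query-dependence comes entirely from conditioning the generation on both the question and the serialized tables.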


Model Parameter Identification via a Hyperparameter Optimization Scheme for Autonomous Racing Systems

arXiv.org Artificial Intelligence

In this letter, we propose a model parameter identification method via a hyperparameter optimization scheme (MI-HPO). Our method adopts an efficient explore-exploit strategy to identify the parameters of dynamic models in a data-driven optimization manner. We use our method to identify the model parameters of the AV-21, a full-scale autonomous race vehicle, and then incorporate the optimized parameters into the design of the model-based planning and control systems of our platform. In experiments, MI-HPO converges more than 13 times faster than traditional parameter identification methods. Furthermore, the parametric models learned via MI-HPO fit the given datasets well and generalize to unseen dynamic scenarios. We further conduct extensive field tests to validate our model-based system, demonstrating stable obstacle avoidance and high-speed driving up to 217 km/h at the Indianapolis Motor Speedway and Las Vegas Motor Speedway. The source code for our work and videos of the tests are available at https://github.com/hynkis/MI-HPO.
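To make the data-driven formulation concrete, here is a minimal sketch that treats model parameters as hyperparameters and fits them to logged data with Optuna, whose default TPE sampler provides the kind of explore-exploit search the abstract describes. The toy longitudinal model, the parameter names, and the synthetic data are assumptions standing in for the AV-21 dynamics and field logs.

```python
import numpy as np
import optuna

# Synthetic log: velocity [m/s] and measured longitudinal acceleration
# [m/s^2], standing in for real vehicle telemetry.
rng = np.random.default_rng(0)
v = rng.uniform(10, 60, 500)
a_meas = 4.0 - 0.05 * v - 0.002 * v**2 + rng.normal(0, 0.05, v.size)

def objective(trial):
    # Each model parameter becomes a search dimension for the optimizer.
    c0 = trial.suggest_float("c0", 0.0, 10.0)   # constant drive term
    c1 = trial.suggest_float("c1", 0.0, 0.5)    # rolling-resistance coeff.
    c2 = trial.suggest_float("c2", 0.0, 0.01)   # aerodynamic-drag coeff.
    a_pred = c0 - c1 * v - c2 * v**2
    return float(np.mean((a_pred - a_meas) ** 2))  # fit to the dataset

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=200)
print(study.best_params)
```

The same pattern extends to richer dynamics models: swap the one-line predictor for a full vehicle model simulated over the logged inputs, and keep the mean-squared error against the measurements as the objective.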


GM is developing a drone-killing off-road pickup for the US Army

FOX News

A General Motors pickup has never hauled something like this. GM Defense is collaborating with military contractor Black Sage Technologies to integrate a drone defense system into the Infantry Squad Vehicle (ISV) that GM Defense recently began supplying to the US Army. The ISV is based on the last-generation Chevrolet Colorado ZR2 midsize pickup and manufactured in Concord, N.C., using frames supplied by NASCAR's Hendrick Motorsports. The truck was engineered for high-speed off-road driving and designed to fit inside a CH-47 Chinook helicopter, be slung from a UH-60 Black Hawk helicopter, or be air-dropped from a cargo plane by parachute for quick deployment into the field. The vehicle can be outfitted to carry nine troops, and several configurations mix passenger, cargo and arms-carrying capabilities.


TUM autonomous motorsport: An autonomous racing software for the Indy Autonomous Challenge

arXiv.org Artificial Intelligence

For decades, motorsport has been an incubator for innovations in the automotive sector and brought forth systems like disc brakes or rearview mirrors. Autonomous racing series such as Roborace, F1Tenth, or the Indy Autonomous Challenge (IAC) are envisioned as playing a similar role within the autonomous vehicle sector, serving as a proving ground for new technology at the limits of an autonomous system's capabilities. This paper outlines the software stack and approach of the TUM Autonomous Motorsport team for their participation in the Indy Autonomous Challenge, which holds two competitions: a single-vehicle competition on the Indianapolis Motor Speedway and a passing competition at the Las Vegas Motor Speedway. Nine university teams used an identical vehicle platform: a modified Indy Lights chassis equipped with sensors, a computing platform, and actuators. All the teams developed different algorithms for object detection, localization, planning, prediction, and control of the race cars. The team from TUM placed first in Indianapolis and secured second place in Las Vegas. During the final of the passing competition, the TUM team reached speeds and accelerations close to the limit of the vehicle, peaking at around 270 km/h and 28 m/s². This paper presents details of the vehicle hardware platform, the developed algorithms, and the workflow to test and enhance the software applied during the two-year project. We derive deep insights into the autonomous vehicle's behavior at high speed and high acceleration by providing a detailed competition analysis. Based on this, we deduce a list of lessons learned and provide insights on promising areas of future work based on the real-world evaluation of the displayed concepts.
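A minimal sketch of the module pipeline the abstract describes (perception, prediction, planning, control); the class interfaces, the constant-velocity predictor, the fake sensor frame, and the proportional speed controller are illustrative assumptions, not TUM's actual stack.

```python
class Perception:
    def process(self, sensor_frame):
        # Fuse sensors into an ego state and a list of opponent states.
        return {"ego": sensor_frame["gnss"], "opponents": sensor_frame["radar"]}

class Prediction:
    def process(self, world):
        # Constant-velocity rollout of every opponent over a short horizon.
        return [{"state": opp, "horizon_s": 2.0} for opp in world["opponents"]]

class Planner:
    def process(self, world, predictions):
        # Pick a target speed: back off whenever an opponent is predicted ahead.
        target = 70.0 if predictions else 80.0   # m/s (~290 km/h flat out)
        return {"target_speed": target}

class Controller:
    def process(self, plan, world):
        # Proportional speed controller tracking the planned target.
        error = plan["target_speed"] - world["ego"]["speed"]
        return {"throttle": max(0.0, min(1.0, 0.1 * error))}

# One pass through the pipeline on a fake sensor frame.
perception, prediction = Perception(), Prediction()
planner, controller = Planner(), Controller()
frame = {"gnss": {"speed": 70.0}, "radar": []}
world = perception.process(frame)
plan = planner.process(world, prediction.process(world))
print(controller.process(plan, world))   # {'throttle': 1.0}
```

In a real stack each stage runs at a fixed rate on the vehicle computer and passes typed messages over middleware; the value of the decomposition is that each team could swap algorithms per module while keeping the interfaces between them fixed.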


NASCAR driver says his video-game-like move could've been 'really stupid'

Washington Post - Technology News

"Wall riding" has long been a tactic in racing video games, where players -- without fear of sustaining real-life damage -- accelerate recklessly into turns, hugging the wall as they carry speed into the next straightaway. Hardcore sim racers typically complain that this maneuver is too unrealistic, but Chastain proved last weekend that it can be done in real life, though even he can't believe he pulled it off.


Xfinity 500 driver pulls inspiration from NASCAR 2005 video game to secure spot in the championship

Daily Mail - Science & tech

A race car driver in peril of being eliminated from Sunday's Xfinity 500 race attempted a bizarre maneuver he learned from a racing video game, blasting from 10th up to 5th place and earning a spot in the championship race this weekend. Ross Chastain, 29, pulled off a Hail Mary when he recalled the 'wall ride' move he learned from NASCAR 2005 on the Nintendo GameCube. The Chevrolet driver grabbed fifth gear and blasted off at 130 miles per hour, riding along the outer wall and passing the five cars ahead of him, specifically driver Denny Hamlin, who was holding on to fifth place. Chastain told reporters in a post-race interview that he was not sure the move would work, but his brother had used the sneak attack to beat him when they played the game together as children, and he took a chance that paid off. The Xfinity 500 kicked off at Martinsville Speedway in Ridgeway, Virginia, a half-mile track built in 1947. Sunday's race marked the season's 75th anniversary and drew a massive crowd to the stands.