
OV-PARTS: Towards Open-Vocabulary Part Segmentation (Supplementary Material)

Neural Information Processing Systems

The number of part queries is set to 50. An SGD optimizer with an initial learning rate of 2e-2 and a weight decay of 5e-4 is used. We sample 128 training samples for each object part class. The initial value of the learnable fusion weight is 0.5. The total batch size is 8, and training runs for 40k iterations.
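The hyperparameters above can be collected into a single training configuration. The following is a minimal sketch; the key names (num_part_queries, fusion_weight_init, and so on) are illustrative, not the authors' actual config schema.

```python
# Training configuration mirroring the hyperparameters stated in the abstract.
# Key names are illustrative placeholders, not the paper's real config keys.

def make_config():
    return {
        "num_part_queries": 50,
        "optimizer": "SGD",
        "lr": 2e-2,
        "weight_decay": 5e-4,
        "samples_per_part_class": 128,
        "fusion_weight_init": 0.5,   # learnable, this is only the initial value
        "batch_size": 8,
        "max_iters": 40_000,
    }

cfg = make_config()
```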


LiDAS: Lighting-driven Dynamic Active Sensing for Nighttime Perception

de Moreau, Simon, Bursuc, Andrei, El-Idrissi, Hafid, Moutarde, Fabien

arXiv.org Artificial Intelligence

Nighttime environments pose significant challenges for camera-based perception, as existing methods passively rely on the scene lighting. We introduce Lighting-driven Dynamic Active Sensing (LiDAS), a closed-loop active illumination system that combines off-the-shelf visual perception models with high-definition headlights. Rather than uniformly brightening the scene, LiDAS dynamically predicts an optimal illumination field that maximizes downstream perception performance, i.e., decreasing light on empty areas to reallocate it to object regions. LiDAS enables zero-shot nighttime generalization of daytime-trained models through adaptive illumination control. Trained on synthetic data and deployed zero-shot in real-world closed-loop driving scenarios, LiDAS yields +18.7% mAP50 and +5.0% mIoU over a standard low beam at equal power. It maintains performance while reducing energy use by 40%. LiDAS complements domain-generalization methods, further strengthening robustness without retraining. By turning readily available headlights into active vision actuators, LiDAS offers a cost-effective solution to robust nighttime perception.
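The core closed-loop idea, reallocating a fixed light budget from empty areas onto detected objects, can be sketched as follows. The grid resolution, gain, and box format are assumptions for illustration, not LiDAS internals.

```python
import numpy as np

# Hypothetical sketch of the illumination-field idea described above: boost
# intensity on detected object regions, dim empty areas, and renormalize so
# total emitted power matches a uniform low beam (the equal-power comparison).

def illumination_field(boxes, shape=(12, 20), object_gain=3.0, power_budget=1.0):
    field = np.ones(shape)                 # uniform low beam as the baseline
    for (r0, c0, r1, c1) in boxes:         # detected objects in grid coordinates
        field[r0:r1, c0:c1] *= object_gain
    # Renormalize so mean intensity equals the power budget.
    field *= power_budget / field.mean()
    return field

# One detected object; light is concentrated there at no extra total power.
field = illumination_field([(2, 5, 6, 10)])
```

In a closed loop, the detector would run on the newly lit frame and the field would be re-predicted each cycle.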



ICanC: Improving Camera-based Object Detection and Energy Consumption in Low-Illumination Environments

Ma, Daniel, Zhong, Ren, Shi, Weisong

arXiv.org Artificial Intelligence

This paper introduces ICanC (pronounced "I Can See"), a novel system designed to enhance object detection and optimize energy efficiency in autonomous vehicles (AVs) operating in low-illumination environments. By leveraging the complementary capabilities of LiDAR and camera sensors, ICanC improves detection accuracy under conditions where camera performance typically declines, while significantly reducing unnecessary headlight usage. This approach aligns with the broader objective of promoting sustainable transportation. ICanC comprises three primary nodes: the Obstacle Detector, which processes LiDAR point cloud data to fit bounding boxes onto detected objects and estimate their position, velocity, and orientation; the Danger Detector, which evaluates potential threats using the information provided by the Obstacle Detector; and the Light Controller, which dynamically activates headlights to enhance camera visibility solely when a threat is detected. Experiments conducted in physical and simulated environments demonstrate ICanC's robust performance, even in the presence of significant noise interference. The system consistently achieves high accuracy in camera-based object detection when headlights are engaged, while significantly reducing overall headlight energy consumption. These results position ICanC as a promising advancement in autonomous vehicle research, achieving a balance between energy efficiency and reliable object detection.
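The three-node flow described above can be sketched in a few lines. The Obstacle fields, the time-to-collision heuristic, and the thresholds are assumptions for demonstration; ICanC's actual Obstacle Detector operates on LiDAR point clouds.

```python
from dataclasses import dataclass

# Illustrative sketch of the Obstacle Detector -> Danger Detector ->
# Light Controller pipeline. All numbers and the TTC heuristic are
# placeholders, not ICanC's published logic.

@dataclass
class Obstacle:
    distance_m: float          # estimated from the fitted bounding box
    closing_speed_mps: float   # positive when approaching the ego vehicle

def danger_detector(obstacles, ttc_threshold_s=3.0):
    """Flag a threat when any obstacle's time-to-collision is low."""
    for ob in obstacles:
        if ob.closing_speed_mps > 0 and ob.distance_m / ob.closing_speed_mps < ttc_threshold_s:
            return True
    return False

def light_controller(threat_detected):
    """Engage headlights only when a threat exists, saving energy otherwise."""
    return "HEADLIGHTS_ON" if threat_detected else "HEADLIGHTS_OFF"

state = light_controller(danger_detector([Obstacle(20.0, 10.0)]))  # TTC = 2 s
```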


LED: Light Enhanced Depth Estimation at Night

de Moreau, Simon, Almehio, Yasser, Bursuc, Andrei, El-Idrissi, Hafid, Stanciulescu, Bogdan, Moutarde, Fabien

arXiv.org Artificial Intelligence

Nighttime camera-based depth estimation is a highly challenging task, especially for autonomous driving applications, where accurate depth perception is essential for ensuring safe navigation. We aim to improve the reliability of perception systems at nighttime, where models trained on daytime data often fail in the absence of precise but costly LiDAR sensors. In this work, we introduce Light Enhanced Depth (LED), a novel cost-effective approach that significantly improves depth estimation in low-light environments by harnessing a pattern projected by the high-definition headlights available in modern vehicles. LED leads to significant performance boosts across multiple depth-estimation architectures (encoder-decoder, Adabins, DepthFormer) on both synthetic and real datasets. Furthermore, improved performance beyond the illuminated areas reveals a holistic enhancement in scene understanding. Finally, we release the Nighttime Synthetic Drive Dataset, a new synthetic and photo-realistic nighttime dataset, which comprises 49,990 comprehensively annotated images.
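The claim of gains "beyond illuminated areas" implies evaluating depth error separately inside and outside the headlight footprint. One way to do that is to restrict a standard metric to an illumination mask; the mask construction and toy data below are illustrative, not from the paper.

```python
import numpy as np

# Hedged sketch: compute absolute relative depth error restricted to a mask,
# so the same metric can be reported for lit vs. unlit regions separately.

def masked_abs_rel(pred, gt, mask):
    """Absolute relative error over pixels where `mask` holds and gt is valid."""
    m = mask & (gt > 0)
    return float(np.mean(np.abs(pred[m] - gt[m]) / gt[m]))

gt = np.full((4, 4), 10.0)          # toy ground-truth depth (meters)
pred = gt * 1.1                     # uniform 10% over-estimate
lit = np.zeros((4, 4), dtype=bool)
lit[:, :2] = True                   # left half taken as "illuminated"

err_lit = masked_abs_rel(pred, gt, lit)
err_unlit = masked_abs_rel(pred, gt, ~lit)
```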


Recent Advances in Multi-Choice Machine Reading Comprehension: A Survey on Methods and Datasets

Foolad, Shima, Kiani, Kourosh, Rastgoo, Razieh

arXiv.org Artificial Intelligence

This paper provides a thorough examination of recent developments in the field of multi-choice Machine Reading Comprehension (MRC). Focused on benchmark datasets, methodologies, challenges, and future trajectories, our goal is to offer researchers a comprehensive overview of the current landscape in multi-choice MRC. The analysis delves into 30 existing cloze-style and multiple-choice MRC benchmark datasets, employing a refined classification method based on attributes such as corpus style, domain, complexity, context style, question style, and answer style. This classification system enhances our understanding of each dataset's diverse attributes and categorizes them based on their complexity. Furthermore, the paper categorizes recent methodologies into Fine-tuned and Prompt-tuned methods. Fine-tuned methods involve adapting pre-trained language models (PLMs) to a specific task through retraining on domain-specific datasets, while prompt-tuned methods use prompts to guide PLM response generation, presenting potential applications in zero-shot or few-shot learning scenarios. By contributing to ongoing discussions, inspiring future research directions, and fostering innovations, this paper aims to propel multi-choice MRC towards new frontiers of achievement.
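The fine-tuned vs. prompt-tuned distinction above is the survey's key methodological split. A prompt-tuned method casts the multiple-choice question as text and lets the PLM's completion select an option; the template below is a generic illustration, not one drawn from the surveyed papers.

```python
# Minimal sketch of prompt-based multiple-choice MRC: no retraining, just a
# text template whose completion is mapped back to an option letter.

def build_mc_prompt(context, question, options):
    letters = "ABCD"
    opts = "\n".join(f"{letters[i]}. {o}" for i, o in enumerate(options))
    return (
        f"Passage: {context}\n"
        f"Question: {question}\n"
        f"{opts}\n"
        "Answer with the letter of the correct option:"
    )

prompt = build_mc_prompt(
    "The web links documents via hyperlinks.",
    "What links documents on the web?",
    ["Hyperlinks", "Printers", "Modems"],
)
```

In a zero-shot setting this prompt is sent to the PLM as-is; in few-shot settings, solved examples in the same format are prepended.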


It's the End of the Web as We Know It

The Atlantic - Technology

The web has become so interwoven with everyday life that it is easy to forget what an extraordinary accomplishment and treasure it is. In just a few decades, much of human knowledge has been collectively written up and made available to anyone with an internet connection. But all of this is coming to an end. The advent of AI threatens to destroy the complex online ecosystem that allows writers, artists, and other creators to reach human audiences. To understand why, you must understand publishing.


Driving through the Concept Gridlock: Unraveling Explainability Bottlenecks in Automated Driving

Echterhoff, Jessica, Yan, An, Han, Kyungtae, Abdelraouf, Amr, Gupta, Rohit, McAuley, Julian

arXiv.org Artificial Intelligence

Concept bottleneck models have been successfully used for explainable machine learning by encoding information within the model with a set of human-defined concepts. In the context of human-assisted or autonomous driving, explainability models can help user acceptance and understanding of decisions made by the autonomous vehicle, which can be used to rationalize and explain driver or vehicle behavior. We propose a new approach using concept bottlenecks as visual features for control command predictions and explanations of user and vehicle behavior. We learn a human-understandable concept layer that we use to explain sequential driving scenes while learning vehicle control commands. This approach can then be used to determine whether a change in a preferred gap or steering commands from a human (or autonomous vehicle) is led by an external stimulus or change in preferences. We achieve competitive performance to latent visual features while gaining interpretability within our model setup.
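The two-stage structure described above, visual features mapped to human-readable concepts, then control predicted only from those concepts, can be sketched as follows. The concept names and random weights are placeholders; the paper learns both stages end to end from driving data.

```python
import numpy as np

# Illustrative concept-bottleneck sketch: the control command depends on the
# input *only* through named concept scores, so every prediction comes with
# an inspectable explanation. Weights here are random stand-ins.

rng = np.random.default_rng(0)
CONCEPTS = ["lead_vehicle_close", "traffic_light_red", "pedestrian_ahead"]

W_concept = rng.normal(size=(len(CONCEPTS), 128))  # features -> concept logits
w_control = rng.normal(size=len(CONCEPTS))         # concepts -> control command

def predict(features):
    concepts = 1 / (1 + np.exp(-W_concept @ features))  # sigmoid scores in [0, 1]
    command = float(w_control @ concepts)               # e.g. gap or steering target
    explanation = dict(zip(CONCEPTS, concepts.round(3)))
    return command, explanation

command, explanation = predict(rng.normal(size=128))
```

Because the command is a function of the concept layer alone, a change in output can be attributed to a change in a named concept rather than an opaque latent feature.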


The Safest New Cars of 2022 - Kelley Blue Book

#artificialintelligence

Why publish a list of our picks for the safest new cars? Don't confuse "safe" with "safer." Manufacturers make vehicles that are safer than those from 10 years ago, for sure. However, some are safer than others. Both the IIHS and NHTSA put new car models through a battery of crash and safety tests, scoring each for the degree of protection they provide for occupants. If you choose a car on this list, you can be assured you will not only likely survive a crash but in many cases avoid it altogether. We pulled together a collection of the safest 2022 models for you to drive and explain what earns them that distinction. In a nutshell, these car models go above and beyond government-mandated safety features and manufacturer norms. Read on to learn more. What we looked for were cars with perfect scores in both IIHS and NHTSA testing. With those in hand, we narrowed the field among the trim levels within each model based on standard and available active safety features such as forward collision warning with automatic emergency braking. Several safety features we've grown accustomed to are actually government-mandated. In other words, the federal government made them standard by law. These include antilock brakes, stability control, traction control, rearview cameras, tire pressure monitors, and so forth.


Bizarre concept car 'The Huntress' has wheels that can twist to cope with uneven terrains

Daily Mail - Science & tech

If you've been off-roading before, it's likely you remember bouncing around the back of a 4x4. But the days of clinging on for dear life could soon be a thing of the past, if a new concept car is anything to go by. The concept vehicle, called The Huntress, features wheels that can twist autonomously to cope with uneven terrain. The Huntress is an electric off-road concept car designed by Connery Xu that wouldn't be out of place in the Transformers franchise.