fsd
Fully Sparse 3D Object Detection
As the perception range of LiDAR sensors increases, LiDAR-based 3D object detection is becoming a dominant task in long-range perception for autonomous driving. Mainstream 3D object detectors usually build dense feature maps in the network backbone and prediction head. However, the computational and spatial costs of these dense feature maps grow quadratically with the perception range, which makes such detectors hard to scale to long-range settings. To enable efficient long-range LiDAR-based object detection, we build a fully sparse 3D object detector (FSD). The computational and spatial cost of FSD is roughly linear in the number of points and independent of the perception range.
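The quadratic-versus-linear cost argument can be made concrete with a little arithmetic (an illustrative sketch, not code from the paper; the voxel size and point counts below are assumed values):

```python
# A dense bird's-eye-view (BEV) feature map covers a square grid spanning
# [-R, R] in both x and y, so its cell count grows quadratically with the
# perception range R. A fully sparse detector instead does work proportional
# to the number of LiDAR points, which is bounded by the sensor, not by R.

def dense_bev_cells(perception_range_m: float, voxel_size_m: float = 0.2) -> int:
    """Number of BEV grid cells a dense backbone/head must process."""
    side = round(2 * perception_range_m / voxel_size_m)  # cells per grid side
    return side * side

def sparse_cost(num_points: int) -> int:
    """A fully sparse detector's cost is roughly linear in the point count."""
    return num_points

# Doubling the perception range quadruples the dense cost...
assert dense_bev_cells(150) == 4 * dense_bev_cells(75)
# ...while the sparse cost is unchanged if the sensor returns the same
# number of points (typical LiDAR sweeps have a fixed points-per-scan budget).
print(dense_bev_cells(75), dense_bev_cells(150), sparse_cost(200_000))
```

At a 75 m range the dense grid already has hundreds of thousands of cells, and going to 150 m quadruples that, which is why dense detectors struggle to scale while point counts stay roughly constant.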
Tesla Is Urging Drowsy Drivers to Use 'Full Self-Driving'. That Could Go Very Wrong
Experts say that advising customers to switch it on when they're drifting between lanes is exactly the wrong move. Since Tesla launched its Full Self-Driving (FSD) feature in beta in 2020, the company's owner's manual has been clear: contrary to the name, cars using the feature can't drive themselves. Tesla's driver-assistance system is built to handle plenty of road situations: stopping at stop lights, changing lanes, steering, braking, turning. Still, "Full Self-Driving (Supervised) requires you to pay attention to the road and be ready to take over at all times," the manual states.
From Seeing to Doing: Bridging Reasoning and Decision for Robotic Manipulation
Yuan, Yifu, Cui, Haiqin, Chen, Yibin, Dong, Zibin, Ni, Fei, Kou, Longxin, Liu, Jinyi, Li, Pengyi, Zheng, Yan, Hao, Jianye
Achieving generalization in robotic manipulation remains a critical challenge, particularly for unseen scenarios and novel tasks. Current Vision-Language-Action (VLA) models, while building on top of general Vision-Language Models (VLMs), still fall short of achieving robust zero-shot performance due to the scarcity and heterogeneity prevalent in embodied datasets. To address these limitations, we propose FSD (From Seeing to Doing), a novel vision-language model that generates intermediate representations through spatial relationship reasoning, providing fine-grained guidance for robotic manipulation. Our approach combines a hierarchical data pipeline for training with a self-consistency mechanism that aligns spatial coordinates with visual signals. Through extensive experiments, we comprehensively validated FSD's capabilities in both "seeing" and "doing," achieving outstanding performance across 8 benchmarks for general spatial reasoning and embodied reference abilities, as well as on our proposed more challenging benchmark VABench. We also verified zero-shot capabilities in robot manipulation, demonstrating significant performance improvements over baseline methods in both SimplerEnv and real robot settings. Experimental results show that FSD achieves 40.6% success rate in SimplerEnv and 72% success rate across 8 real-world tasks, outperforming the strongest baseline by 30%.
Fuzzy Speculative Decoding for a Tunable Accuracy-Runtime Tradeoff
Holsman, Maximilian, Huang, Yukun, Dhingra, Bhuwan
Speculative Decoding (SD) enforces strict distributional equivalence with the target model, limiting potential speedups, since near-equivalent distributions achieve comparable outcomes in many cases. Furthermore, enforcing distributional equivalence means that users cannot trade deviations from the target model distribution for further inference speed gains. To address these limitations, we introduce Fuzzy Speculative Decoding (FSD), a decoding algorithm that generalizes SD by accepting candidate tokens based purely on the divergence between the target and draft model distributions. By allowing controlled divergence from the target model, FSD enables users to flexibly trade generation quality for inference speed. Across several benchmarks, our method achieves significant runtime improvements of over 5 tokens per second faster than SD at only an approximate 2% absolute reduction in benchmark accuracy. In many cases, FSD even matches SD's benchmark accuracy while decoding over 2 tokens per second faster, demonstrating that distributional equivalence is not necessary to maintain target model performance.
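The core acceptance idea can be sketched as follows (a hedged illustration of the concept, not the paper's exact formulation; the function names, the use of KL divergence, and the threshold semantics are assumptions):

```python
import math

# Sketch of a fuzzy acceptance rule: a token drafted by the small model is
# kept whenever the divergence between the target and draft distributions
# at that position is below a user-chosen threshold. threshold = 0 recovers
# (roughly) strict agreement; larger thresholds accept more drafted tokens,
# decoding faster at some cost in fidelity to the target model.

def kl_divergence(p: list[float], q: list[float]) -> float:
    """KL(p || q) over a shared vocabulary; assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def accept_draft_token(target_probs: list[float],
                       draft_probs: list[float],
                       threshold: float) -> bool:
    """Accept the draft token if the two distributions are 'close enough'."""
    return kl_divergence(target_probs, draft_probs) <= threshold

# Identical distributions are always accepted, even at threshold 0.
assert accept_draft_token([0.5, 0.5], [0.5, 0.5], threshold=0.0)
# A clearly different draft distribution is rejected at a tight threshold
# but accepted once the user loosens it, trading quality for speed.
assert not accept_draft_token([0.9, 0.1], [0.5, 0.5], threshold=0.05)
assert accept_draft_token([0.9, 0.1], [0.5, 0.5], threshold=0.5)
```

The single `threshold` knob is what makes the accuracy-runtime tradeoff tunable: standard SD's per-token rejection sampling is replaced by a distribution-level closeness test.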
BYD's Free Self-Driving Tech Might Not Be Such a Boon After All
Not only has China's largest EV maker BYD unveiled good, better, and best tiers for its advanced driver-assistance system (ADAS), it announced last week that the tech--marketed somewhat immodestly as "God's Eye"--will now be fitted as standard to 21 of BYD's 30 cars split across four brands. Even the $9,500 Seagull hatchback, the cheapest of BYD's EVs, will ship with the base level of God's Eye at no extra cost, while the $233,500 Yangwang U9 electric supercar will get the top-tier iteration. However, BYD's ADAS system could be as misleadingly named as Tesla's Full Self-Driving (FSD). Including ADAS for free will no doubt rile BYD's smaller rivals in China's innovative but cutthroat auto market. Comparatively low-tech Toyota, VW, and Nissan may weaken further, and Tesla--which has yet to gain permission for FSD in China--could also struggle.
Shocking moment Tesla mows down deer at full speed while in self-drive mode
This is the shocking moment a Tesla in 'Full Self-Driving' (FSD) mode plowed into a deer standing in the middle of the road. The driver, Paul S, did not confirm when or where the crash occurred, or what model Tesla he was driving. But dashcam footage shows the vehicle driving down a clear two-lane highway at night moments before the animal suddenly came into view. The Tesla rammed directly into the deer without stopping or slowing down, 'even after hitting the deer on full speed,' Paul said. 'Huge surprise after getting a dozen false stops every day!' he added.
Tesla adds close to $150bn in market value on best day in over a decade
Tesla shares closed up nearly 22% on Thursday – their biggest single-day gain in over a decade – as Elon Musk's bold forecast of surging sales reassured investors he was still looking to grow the company's core business of selling electric cars. At the close, nearly $150bn had been added to the company's market value. Musk forecast 20-30% sales growth next year, promising to launch an affordable vehicle in the first half of 2025, and said efforts to slash production costs boosted margins in the third quarter. The stock rose to a session high of $262.20 on volume of roughly 200 million shares. It was the biggest gain since May 2013, and it erased recent losses driven by concerns that Musk was distracted by new projects like the recently unveiled robotaxi.
Tesla's FSD is under federal investigation after four reduced-visibility crashes
The National Highway Traffic Safety Administration (NHTSA) is investigating Tesla's Full Self-Driving (FSD) feature in relation to four crashes. The collisions took place in reduced-visibility conditions with either the beta or supervised versions of FSD enabled. In a November 2023 incident in Arizona, a Model Y fatally hit a pedestrian, as TechCrunch notes. An injury was sustained in one of the other three collisions, which occurred between March and May this year and all involved Model 3 EVs. The NHTSA says conditions such as sun glare, fog and airborne dust lowered visibility in these incidents. The agency's Office of Defects Investigation (ODI) is looking into FSD's ability to "detect and respond appropriately to reduced roadway visibility conditions."