Tesla's Latest Recall? Wheels May Fall Off Cybertrucks

WIRED

In what is the 11th Cybertruck recall so far, certain models of Elon Musk's embattled pickup could experience a sudden, unexpected wheel separation, thanks to the wrong grease and loose nuts. Last year, nearly all Cybertrucks had to be recalled because Tesla used the wrong glue on a steel trim panel that the carmaker said could become detached while driving. Now, alongside concerns that the stainless steel trucks could be rusting, yet another embarrassing recall exposes that wheels could come off certain models: Tesla is recalling its Rear Wheel Drive (RWD) Cybertruck Long Range over faulty brake rotors. In a notice posted by the National Highway Traffic Safety Administration, Tesla states that "brake rotor stud holes may crack and allow the stud to separate from the wheel hub."



Behavior Transformers: Cloning k modes with one stone

Neural Information Processing Systems

[Figure: a continuous action dataset (|A| × a) is clustered into k bins via k-means; a k-means encoder maps each continuous action (1 × a) to a categorical action bin (1 × k) plus a continuous action offset (1 × a), and a k-means decoder reconstructs the continuous action (1 × a).]
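The k-means encode/decode scheme described above can be sketched as follows. This is a minimal, numpy-only illustration with a toy k-means fit; the function names and hyperparameters are my own, not the paper's:

```python
import numpy as np

def fit_kmeans(actions, k, iters=50, seed=0):
    """Toy k-means over a continuous action dataset (|A| x a)."""
    rng = np.random.default_rng(seed)
    centers = actions[rng.choice(len(actions), k, replace=False)].copy()
    for _ in range(iters):
        # assign each action to its nearest center, then recompute centers
        d = np.linalg.norm(actions[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = actions[labels == j].mean(axis=0)
    return centers

def encode(action, centers):
    """Continuous action (1 x a) -> (categorical bin, continuous offset)."""
    bin_id = int(np.linalg.norm(centers - action, axis=1).argmin())
    return bin_id, action - centers[bin_id]

def decode(bin_id, offset, centers):
    """(categorical bin, offset) -> reconstructed continuous action (1 x a)."""
    return centers[bin_id] + offset

actions = np.random.default_rng(1).normal(size=(200, 2))
centers = fit_kmeans(actions, k=3)
bin_id, offset = encode(actions[0], centers)
recon = decode(bin_id, offset, centers)
assert np.allclose(recon, actions[0])  # encode/decode round-trips exactly
```

Because the offset stores the residual from the chosen bin center, the round-trip is lossless; the categorical bin alone gives the discrete "mode" a classifier head can predict.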


Appendix: Continuous Doubly Constrained Batch Reinforcement Learning

Neural Information Processing Systems

However, numbers for BCQ and SAC are from our runs for all tasks. These plots show that, in the vast majority of environments, CDC exhibits consistently better performance across different seeds/iterations.



Offline Reinforcement Learning as One Big Sequence Modeling Problem

Neural Information Processing Systems

Reinforcement learning (RL) is typically concerned with estimating stationary policies or single-step models, leveraging the Markov property to factorize problems in time. However, we can also view RL as a generic sequence modeling problem, with the goal being to produce a sequence of actions that leads to a sequence of high rewards.
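The sequence-modeling view can be illustrated with a toy encoding (my own, not the paper's actual tokenizer): each trajectory of (state, action, reward) tuples is flattened into one long sequence that a standard next-token sequence model could consume.

```python
import numpy as np

def flatten_trajectory(states, actions, rewards):
    """Interleave per-step (state, action, reward) into one flat sequence.

    The sequence model then sees s_0, a_0, r_0, s_1, a_1, r_1, ... and is
    trained to predict each next element, rather than fitting a one-step
    Markov policy or dynamics model.
    """
    seq = []
    for s, a, r in zip(states, actions, rewards):
        seq.extend(np.atleast_1d(s))   # state dimensions
        seq.extend(np.atleast_1d(a))   # action dimensions
        seq.append(r)                  # scalar reward
    return np.array(seq)

states = [[0.0, 1.0], [0.5, 0.9]]
actions = [[1.0], [-1.0]]
rewards = [0.1, 0.2]
seq = flatten_trajectory(states, actions, rewards)
# each step contributes |s| + |a| + 1 = 4 entries, so 2 steps -> 8
assert seq.shape == (8,)
```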


Multi-Integration of Labels across Categories for Component Identification (MILCCI)

Mudrik, Noga, Chen, Yuxi, Mishne, Gal, Charles, Adam S.

arXiv.org Machine Learning

Many fields collect large-scale temporal data through repeated measurements (trials), where each trial is labeled with a set of metadata variables spanning several categories. For example, a trial in a neuroscience study may be linked to a value from category (a): task difficulty, and category (b): animal choice. A critical challenge in time-series analysis is to understand how these labels are encoded within the multi-trial observations, and disentangle the distinct effect of each label entry across categories. Here, we present MILCCI, a novel data-driven method that i) identifies the interpretable components underlying the data, ii) captures cross-trial variability, and iii) integrates label information to understand each category's representation within the data. MILCCI extends a sparse per-trial decomposition that leverages label similarities within each category to enable subtle, label-driven cross-trial adjustments in component compositions and to distinguish the contribution of each category. MILCCI also learns each component's corresponding temporal trace, which evolves over time within each trial and varies flexibly across trials. We demonstrate MILCCI's performance through both synthetic and real-world examples, including voting patterns, online page view trends, and neuronal recordings.
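The general idea of a label-aware per-trial decomposition can be sketched as follows. This is a heavily simplified, hypothetical illustration and not MILCCI's actual algorithm: each trial's time series is expressed over shared temporal components, and per-trial weights are pulled toward trials sharing the same label as a crude stand-in for label-similarity-driven adjustment.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, T, n_comp = 6, 50, 2
components = rng.normal(size=(n_comp, T))        # shared temporal components
true_w = rng.normal(size=(n_trials, n_comp))     # per-trial compositions
X = true_w @ components + 0.01 * rng.normal(size=(n_trials, T))
labels = np.array([0, 0, 0, 1, 1, 1])            # one metadata category

# per-trial least-squares weights over the shared components
w = X @ np.linalg.pinv(components)

# label-driven smoothing: shrink each trial's weights toward the mean of
# trials sharing its label (a toy proxy for using label similarity)
lam = 0.5
for lab in np.unique(labels):
    idx = labels == lab
    w[idx] = (1 - lam) * w[idx] + lam * w[idx].mean(axis=0)

recon = w @ components
assert recon.shape == X.shape  # trials x time, same as the observations
```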


My Car Is Becoming a Brick

The Atlantic - Technology

EVs are poised to age like smartphones. For most of its short life, my Tesla Model 3 has aged beautifully. Since I bought the car, in 2019, it has received a number of new features simply by updating its software. My navigation system no longer just directs me to EV chargers along my route--it also shows me, in real time, how many plugs are free. With the push of a button, I can activate "Car Wash Mode," and the Tesla will put itself in neutral and disable the windshield wipers.


Robust Vision-Language Models via Tensor Decomposition: A Defense Against Adversarial Attacks

Patel, Het, Allie, Muzammil, Zhang, Qian, Chen, Jia, Papalexakis, Evangelos E.

arXiv.org Artificial Intelligence

Vision language models (VLMs) excel in multimodal understanding but are prone to adversarial attacks. Existing defenses often demand costly retraining or significant architecture changes. We introduce a lightweight defense using tensor decomposition suitable for any pre-trained VLM, requiring no retraining. By decomposing and reconstructing vision encoder representations, it filters adversarial noise while preserving meaning. Experiments with CLIP on COCO and Flickr30K show improved robustness. On Flickr30K, it restores 12.3% performance lost to attacks, raising Recall@1 accuracy from 7.5% to 19.8%. On COCO, it recovers 8.1% performance, improving accuracy from 3.8% to 11.9%. Analysis shows Tensor Train decomposition with low rank (8-32) and low residual strength (α = 0.1 to 0.2) is optimal. This method is a practical, plug-and-play solution with minimal overhead for existing VLMs.
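The decompose-and-reconstruct step can be sketched in a few lines. This is a minimal illustration, with two explicit assumptions: it uses a plain SVD low-rank approximation as a stand-in for the paper's Tensor Train decomposition, and it assumes "residual strength α" means linearly blending the reconstruction back into the original features (the exact convention may differ).

```python
import numpy as np

def lowrank_filter(features, rank=8, alpha=0.15):
    """Filter a (patches x dim) feature matrix via low-rank reconstruction.

    Stand-in for the paper's Tensor Train step: keep the top-`rank`
    singular directions, then blend the reconstruction with the original
    features using residual strength `alpha` (assumed convention:
    alpha = 0 keeps the original, alpha = 1 uses the pure reconstruction).
    """
    U, S, Vt = np.linalg.svd(features, full_matrices=False)
    recon = (U[:, :rank] * S[:rank]) @ Vt[:rank]
    return (1 - alpha) * features + alpha * recon

feats = np.random.default_rng(0).normal(size=(49, 64))  # e.g. ViT patch tokens
cleaned = lowrank_filter(feats, rank=8, alpha=0.15)
assert cleaned.shape == feats.shape
```

The plug-and-play appeal is visible here: the filter touches only the encoder's output representations, so it needs no gradients, retraining, or architecture changes.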