These 59 post-holiday Amazon deals drop kitchen and home upgrades for clearance prices

Popular Science

Save big on robot vacuums, air fryers, air purifiers, kitchen appliances, and tons of other devices to improve your home life. We may earn revenue from the products available on this page and participate in affiliate programs. You survived the holidays, and now you're holding the most powerful post-season artifact: an Amazon gift card. Instead of spending it on a random pile of impulse buys, put it toward upgrades that make your home cleaner, cozier, and easier to live in. If you didn't get what you wanted under the tree, now is the time to get it for yourself.


62 digital and subscription gifts you can buy and send instantly from your phone

Popular Science

It's too late to get a gift shipped, and shopping in-store is a nightmare. We may earn revenue from the products available on this page and participate in affiliate programs. OK, so you waited too long to order a present online. You don't want to brave the crowds. And you don't want to disappoint everyone during the holidays.


Enhancing the NAO: Extending Capabilities of Legacy Robots for Long-Term Research

Wilson, Austin, Kapasi, Sahar, Greene, Zane, Block, Alexis E.

arXiv.org Artificial Intelligence

Legacy (unsupported) robotic platforms often lose research utility when manufacturer support ends, preventing integration of modern sensing, speech, and interaction capabilities. We present the Enhanced NAO, a revitalized version of Aldebaran's NAO robot featuring upgraded beamforming microphones, RGB-D and thermal cameras, and additional compute resources in a fully self-contained package. This system combines cloud-based and local models for perception and dialogue, while preserving the NAO's expressive body and behaviors. In a pilot user study validating conversational performance, the Enhanced NAO delivered significantly higher conversational quality and elicited stronger user preference compared to the NAO AI Edition, without increasing response latency. The added visual and thermal sensing modalities established a foundation for future perception-driven interaction. Beyond this implementation, our framework provides a platform-agnostic strategy for extending the lifespan and research utility of legacy robots, ensuring they remain valuable tools for human-robot interaction.


The 141 best Black Friday deals you can shop right now: Amazon, Walmart, Apple, and more

Popular Science

Do all of your Black Friday shopping from the comfort of your couch (or under the Thanksgiving table while ignoring your family). We may earn revenue from the products available on this page and participate in affiliate programs. You have important holiday activities to get to, but that doesn't mean you need to miss out on the best Black Friday deals. There are literally thousands of deals out there right now, and we've spent the past several weeks hunting down the best of the best. We'll be constantly updating this list with all the best new bargains and deals that we find, so check back regularly and bring money.


A Broader Impact As brand-new models, the vulnerability of ViTs to adversarial samples motivates us to upgrade their

Neural Information Processing Systems

Our approaches may contribute to a safer use of ViTs in the real world. We have shown the necessity of gradient clipping (GC) for ViTs in Section 3.2. In this section, we evaluate the proposed method on ImageNet-1K, the most commonly used large-scale dataset. We adopt the most popular threat model on ImageNet-1K, using PGD-5 with a step size of 2/255 to craft adversarial examples on the fly during training. In Table 7, our method improves both natural accuracy and robustness by notable margins.
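The on-the-fly attack described above (PGD with a few signed-gradient steps, projected back into an L-inf ball around the clean input) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `grad_fn` stands in for the model's input gradient, and the budget `eps=8/255` is an assumed placeholder, since the blurb gives only the step size 2/255.

```python
import numpy as np

def pgd_attack(x, grad_fn, eps, step_size, steps):
    """PGD sketch: take `steps` signed-gradient ascent steps of size
    `step_size`, projecting the result back into the L-inf ball of
    radius `eps` around the clean input `x` after every step."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)                        # gradient of the loss w.r.t. the input
        x_adv = x_adv + step_size * np.sign(g)    # signed-gradient ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
    return x_adv

# Toy stand-in for a model: loss L(x) = 0.5 * ||x||^2, so grad_fn(x) = x.
x0 = np.array([0.1, -0.2, 0.3])
adv = pgd_attack(x0, grad_fn=lambda x: x, eps=8 / 255, step_size=2 / 255, steps=5)
```

With 5 steps of size 2/255, the raw perturbation would reach 10/255, so the projection is what keeps the final example inside the 8/255 ball.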


BayesQ: Uncertainty-Guided Bayesian Quantization

Lamaakal, Ismail, Yahyati, Chaymae, Maleh, Yassine, Makkaoui, Khalid El, Ouahbi, Ibrahim

arXiv.org Artificial Intelligence

We present BayesQ, an uncertainty-guided post-training quantization framework that is the first to optimize quantization under the posterior expected loss. BayesQ fits a lightweight Gaussian posterior over weights (diagonal Laplace by default; optional K-FAC/low-rank), whitens by the posterior covariance, designs codebooks to minimize posterior-expected distortion, and allocates mixed precision via a greedy knapsack that maximizes marginal expected-loss reduction per bit under a global budget. For scalar quantizers, posterior-expected MSE yields closed-form tables; task-aware proxies are handled by short Monte Carlo on a small calibration set. An optional calibration-only distillation aligns the quantized model with the posterior predictive teacher. At matched average bits/weight of 3.0/3.5/4.0, BayesQ improves over strong PTQ baselines on ResNet-50 (ImageNet) and BERT-base (GLUE), e.g., vs. GPTQ by $+1.5/+0.7/+0.3$ top-1 percentage points on RN50 and $+1.1/+0.4/+0.2$ GLUE points on BERT, while requiring one-time preprocessing comparable to a GPTQ pass. BayesQ reframes low-bit quantization as uncertainty-aware risk minimization in a practical, post-training pipeline.
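The greedy-knapsack allocation in the abstract can be sketched concretely: start every weight group at the minimum bit-width, then repeatedly grant one extra bit to the group whose posterior-expected loss drops the most, until the global budget is spent. The `exp_loss` tables and group structure below are illustrative assumptions, not the paper's API.

```python
import heapq

def greedy_bit_allocation(exp_loss, total_bits, min_bits=2, max_bits=8):
    """Greedy knapsack sketch of BayesQ-style mixed-precision allocation.
    `exp_loss[g][b]` is the posterior-expected loss of weight group g when
    quantized at b bits. Each round grants one bit to the group with the
    largest marginal expected-loss reduction, under a global bit budget."""
    n = len(exp_loss)
    bits = [min_bits] * n
    budget = total_bits - min_bits * n
    # Max-heap keyed on the loss drop from granting one more bit.
    heap = [(-(exp_loss[g][min_bits] - exp_loss[g][min_bits + 1]), g)
            for g in range(n) if min_bits < max_bits]
    heapq.heapify(heap)
    while budget > 0 and heap:
        gain, g = heapq.heappop(heap)
        if -gain <= 0:
            break  # no remaining group benefits from more bits
        bits[g] += 1
        budget -= 1
        if bits[g] < max_bits:  # re-queue with this group's next marginal gain
            nxt = exp_loss[g][bits[g]] - exp_loss[g][bits[g] + 1]
            heapq.heappush(heap, (-nxt, g))
    return bits

# Toy example: one sensitive group, one robust group, bit-widths 2..5.
exp_loss = [
    {2: 1.00, 3: 0.40, 4: 0.35, 5: 0.34},    # sensitive: big win from extra bits
    {2: 0.20, 3: 0.18, 4: 0.17, 5: 0.165},   # robust: nearly flat loss curve
]
bits = greedy_bit_allocation(exp_loss, total_bits=7, min_bits=2, max_bits=5)
```

Here the sensitive group ends up with more bits than the robust one, which is the intended behavior: precision flows to where the posterior says the loss is most at risk.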


The 15-Inch MacBook Air Is $200 Off

WIRED

Save hundreds on our preferred MacBook for most people. All products featured on WIRED are independently selected by our editors. However, we may receive compensation from retailers and/or from purchases of products through these links. You're in luck, as the 15-inch MacBook Air with the M4 CPU is currently marked down by $200 on Amazon in a variety of configurations and colors, including the new Sky Blue finish. It's a great fit for work or play, and only serious power users will need to consider upgrading to the MacBook Pro.


Finding return on AI investments across industries

MIT Technology Review

Taking the time to make a use case for AI will propel companies further and improve the return on investment in this fast-changing technology. The market is officially three years post-ChatGPT, and many of the pundit bylines have shifted to using terms like "bubble" to suggest reasons generative AI is not realizing material returns outside a handful of technology suppliers. In September, the MIT NANDA report made waves because the soundbite every author and influencer picked up on was that 95% of all AI pilots failed to scale or deliver clear and measurable ROI. McKinsey earlier published a similar finding, indicating that agentic AI would be the way forward to achieve huge operational benefits for enterprises. At the Technology Council Summit, AI technology leaders recommended CIOs stop worrying about AI's return on investment because measuring gains is difficult, and if they were to try, the measurements would be wrong. This places technology leaders in a precarious position: robust tech stacks already sustain their business operations, so what is the upside to introducing new technology?


Agentic Reinforcement Learning for Real-World Code Repair

Zhu, Siyu, Karpovich, Anastasiya, Chen, Albert, Koscheka, Jessica, Jannu, Shailesh, Wen, Di, Zhu, Yuqing, Jain, Rohit, Geramifard, Alborz

arXiv.org Artificial Intelligence

We tackle the challenge of training reliable code-fixing agents in real repositories, where complex builds and shifting dependencies make evaluation unstable. We developed a verifiable pipeline, with success defined as post-fix build validation, and improved reproducibility across 1K real issues by pinning dependencies and disabling automatic upgrades. Building on this, we introduced a scalable, simplified pipeline for large-scale reinforcement learning (RL). Using this setup, we supervised fine-tuned Qwen3-32B in the full pipeline and applied RL on top of the SFT model in the simplified environment. The SFT model, distilled from GPT-4.1 trajectories, performs on par while being 56x smaller, and RL added 7-20% absolute gains under matched train-test conditions. "Thinking mode" was on par or worse in our experiments. Both SFT and RL models failed to generalize across environments, highlighting the importance of matching train-test environments for building reliable real-world code-fixing agents. Large language models (LLMs) have transformed the landscape of code intelligence, powering systems such as GitHub Copilot (Zhang et al., 2023), ChatGPT Code Interpreter (Mutch, 2025), and AlphaCode (Li et al., 2022). These models excel at code completion, bug fixing, and even multi-step development workflows, offering tangible productivity gains in both individual and collaborative programming settings.


Apple MacBook Pro (M5, 14-Inch) Review: More of the Same

WIRED

It's not as exciting as it once was, but the M5 MacBook Pro offers yet another breakthrough in performance. All products featured on WIRED are independently selected by our editors. However, when you buy something through our retail links, we may earn an affiliate commission. M5 is a beast, especially in graphics and AI. New GPU is surprisingly good at gaming. Battery life, speakers, build quality, keyboard, and trackpad are all still world-class.