DJI Mini 5 Pro Review: A Heavier Drone Upgrade
DJI's pocketable drone pushes image quality and flight safety to dizzying new heights. Lidar obstacle avoidance works in darkness. I've been testing DJI's Mini drones since the company launched the series, and one thing has always been consistent: They've stayed comfortably under the crucial 250-gram weight limit.
HP EliteBook 6 G1q Review: An Always-Connected Laptop
If you've got a paid subscription (plan prices haven't been announced but are expected to start at $19 per month), the service kicks in automatically when you're disconnected from Wi-Fi and goes dark when the Wi-Fi's live. The service works well, or at least as well as the 5G signal in your area allows. In my house, cell service is spotty, and HP Go was hit or miss. But on the road, in a beachfront rental with decidedly shoddy Wi-Fi, HP Go worked great, providing me with a reliable backup connection when I needed it the most. HP Go comes installed on a laptop, though the laptop itself seems almost incidental to the main event. The EliteBook 6 G1q is a Qualcomm-based system, with rather pedestrian specs similar to what was on the market a year ago. The now-snoozy Snapdragon X Plus X1P42100 anchors the Windows machine, backed up by a healthy 32 GB of RAM and a sad 512 GB SSD (in the test configuration I was sent). The 14-inch screen packs a low-end 1920 x 1200 pixels of resolution and one of the dimmer backlights I've encountered in recent memory.
The Download: accidental AI relationships, and the future of contraception
Plus: Secret Service agents dismantled a giant operation to cripple cell networks.

It's surprisingly easy to stumble into a relationship with an AI chatbot

The first large-scale computational analysis of the Reddit community r/MyBoyfriendIsAI, which is dedicated to discussing AI relationships, found that many people formed those relationships unintentionally while using AI for other purposes. In fact, only 6.5% of them said they'd deliberately sought out an AI companion. The study found that AI companionship provides vital support for some but exacerbates underlying problems for others. This means it's hard to take a one-size-fits-all approach to user safety.

Join us at 1:30 pm ET today to learn about the future of birth control

Conversations around birth control usually focus on women, but Kevin Eisenfrats, one of the MIT Technology Review 2025 Innovators Under 35, is working to change that.
Reviews: Propagating Uncertainty in Reinforcement Learning via Wasserstein Barycenters
This paper proposes a mechanism for maintaining distributions over Q-values (called Q-posteriors) by defining the value function (the V-posterior) to be a Wasserstein barycenter of Q-posteriors, and defining the TD update to be a Wasserstein barycenter of the current Q-posterior with an estimated posterior based on the value function. These distributions are intended to represent uncertainty about the Q-function, and they enable more nuanced definitions of the "optimal" (w.r.t. …). The contributions seem to be: 1. A means of propagating uncertainty about Q-values via Wasserstein barycenters (Equations 2 & 3). 2. A proof that a modified version of the proposed algorithm is PAC-MDP in the average loss setting (Theorems 5.1 and 5.2). The paper is fairly clearly written and easy enough to understand, and the idea of propagating uncertainty via Wasserstein barycenters is interesting and suggests several concrete realizations.
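To make the mechanism concrete, here is a minimal sketch of a barycentric TD update for the special case of 1-D Gaussian Q-posteriors, where the Wasserstein-2 barycenter has a closed form (weighted average of means and of standard deviations). The softmax weighting in `v_posterior` and the step size `alpha` are illustrative assumptions, not the paper's Equations 2 & 3.

```python
import numpy as np

def w2_barycenter(means, stds, weights):
    """W2 barycenter of 1-D Gaussians: weighted average of means and stds."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(w @ np.asarray(means, dtype=float)), float(w @ np.asarray(stds, dtype=float))

def v_posterior(q_means, q_stds, temp=1.0):
    """V-posterior as a barycenter of per-action Q-posteriors.

    The softmax-over-means weighting is an illustrative choice, not
    necessarily the paper's definition.
    """
    logits = np.asarray(q_means, dtype=float) / temp
    w = np.exp(logits - logits.max())
    return w2_barycenter(q_means, q_stds, w)

def td_update(q_mean, q_std, r, gamma, v_mean, v_std, alpha=0.1):
    """Barycentric TD update: mix the current Q-posterior with the
    bootstrapped target posterior N(r + gamma * v_mean, gamma * v_std)."""
    target_mean = r + gamma * v_mean
    target_std = gamma * v_std
    return w2_barycenter([q_mean, target_mean], [q_std, target_std],
                         [1.0 - alpha, alpha])
```

With two actions whose Q-posteriors are N(0, 1) and N(2, 3) and equal weights, the barycenter is N(1, 2); the std (uncertainty) is propagated rather than collapsed, which is the point of the construction.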
Reviews: Meta-Learning Representations for Continual Learning
Two of the reviewers increased their score after reading the rebuttal, and all three reviewers now give accepting scores. The reviewers especially appreciated the authors' response, in particular the additional experiment on mini-ImageNet, since it reconfirms the original idea and yields results consistent with those obtained on simpler datasets. The idea of borrowing meta-learning ideas to tackle the continual learning problem is interesting, and the empirical results are sufficient.
Reviews: Deep Learning Models of the Retinal Response to Natural Scenes
Modeling studies of neural responses are usually measured on two scales: a. Their contribution to our understanding of the neural physiology, architecture, or any other biological aspect. b. Model accuracy, where the aim is to provide a model that is better than the state of the art. To the best of my understanding, this study mostly focuses on the latter, i.e., providing a better-than-state-of-the-art model. If I am misunderstanding, then it would probably be important to stress the biological insights gained from the study. Yet if modeling accuracy is indeed the focus, it's important to provide a fair comparison to the state of the art, and I see a few caveats in that regard: 1.
Reviews: Robust Multi-agent Counterfactual Prediction
Reviewers found this paper to be an original and useful addition to the field of multi-agent games. While some of the presentation could be clarified (see specific reviewer comments, e.g. about the revelation game), there was a consensus that the paper is generally well-written and clear enough for publication, with the proposed corrections.
Reviews: Approximating Interactive Human Evaluation with Self-Play for Open-Domain Dialog Systems
This paper explores interesting directions, in particular 1) using interactive settings to evaluate a model rather than a single answer, and 2) combining different automated metrics in a weighted sum to approximate human evaluation (e.g., based on sentiment). Reviewers have raised crucial points regarding gameability (using the metrics for training a model is tricky if not followed by a non-gameable evaluation) and the lack of comparability between different self-play setups. It would indeed be a much better evaluation setting if the system did not control both sides (e.g., models being matched to the same set of fixed models), so the authors should definitely follow that direction. However, I expect this work would still be interesting to the dialog community: many of the diagnostic advantages of the model-talking-to-model setting remain in practice, especially because the model is not in fact trained with the self-play objective; that criterion is only used post hoc, so the system can't extensively exploit it during training. In practice, a lot of the problems of a given model's generations already show up during self-play, and the reasonable worry raised by reviewers that the model could exploit the metric remains theoretical at the moment.
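The weighted-sum idea in direction 2) can be sketched in a few lines: fit weights over per-dialog automated metrics against human ratings, then score new dialogs with the fitted combination. The metric names and the least-squares fit are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def combined_score(metrics, weights):
    """Weighted sum of automated metrics.

    metrics, weights: dicts keyed by metric name (names like 'sentiment'
    are hypothetical examples).
    """
    return sum(weights[k] * metrics[k] for k in weights)

def fit_weights(metric_matrix, human_scores):
    """Fit the weights by least squares against human ratings.

    metric_matrix: (n_dialogs, n_metrics) array of automated metric values.
    human_scores:  (n_dialogs,) array of human evaluation scores.
    """
    w, *_ = np.linalg.lstsq(np.asarray(metric_matrix, dtype=float),
                            np.asarray(human_scores, dtype=float),
                            rcond=None)
    return w
```

The gameability concern applies exactly here: once such weights are published, a model optimized against `combined_score` can inflate it without improving human-perceived quality, which is why the post-hoc-only use of the criterion matters.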