

Best Webcams (2026): My Honest Take After Testing the Best

WIRED

I tested all the major webcams across the price spectrum in attempts to find the very best. Here's what I learned.


Exclusive: Metalenz Has Figured Out a Way to Make Face ID Invisible

WIRED

Metalenz's Polar ID face-scanning technology works even when the camera is hidden under the display. The notch has largely been replaced on today's smartphones by floating punch-hole cameras that take up less space and look a little more futuristic, though notches are still prevalent on some laptops, like Apple's MacBooks. On the iPhone, Apple calls its floating pill-shaped camera system the Dynamic Island, which debuted on the iPhone 14. The iPhone still has the largest camera cutout today, due to its Face ID biometric authentication system. This island could get much smaller, however, thanks to new under-display camera technology announced at Display Week 2026 by Metalenz, an optics startup from Boston.


Best Fitbit Models for Beginners, Athletes, and Kids (2026)

WIRED

These are my favorites, whether you're new to fitness, an athlete, or a parent shopping for your kid. It's been five years since Google officially acquired Fitbit for a reported $2.1 billion, grabbing hardware and software teams that also absorbed assets from Pebble, which Fitbit itself acquired in 2016. So, how have things changed? Well, for starters, Fitbit is now Google Fitbit. It's not the most imaginative of name changes, and it hasn't stuck in consumers' minds, but the good news is that Fitbit devices remain some of the most user-friendly and welcoming fitness trackers available.


Generative AI improves a wireless vision system that sees through obstructions

Robohub

MIT researchers have spent more than a decade studying techniques that enable robots to find and manipulate hidden objects by "seeing" through obstacles. Their methods utilize surface-penetrating wireless signals that reflect off concealed items. Now, the researchers are leveraging generative artificial intelligence models to overcome a longstanding bottleneck that limited the precision of prior approaches. The result is a new method that produces more accurate shape reconstructions, which could improve a robot's ability to reliably grasp and manipulate objects that are blocked from view. This new technique builds a partial reconstruction of a hidden object from reflected wireless signals and fills in the missing parts of its shape using a specially trained generative AI model.


State estimations and noise identifications with intermittent corrupted observations via Bayesian variational inference

Sun, Peng, Wang, Ruoyu, Luo, Xue

arXiv.org Machine Learning

This paper focuses on the state estimation problem in distributed sensor networks, where intermittent packet dropouts, corrupted observations, and unknown noise covariances coexist. To tackle this challenge, we formulate the joint estimation of system states, noise parameters, and network reliability as a Bayesian variational inference problem, and propose a novel variational Bayesian adaptive Kalman filter (VB-AKF) to approximate the joint posterior probability densities of the latent parameters. Unlike existing AKFs, which handle missing data and measurement outliers separately, the proposed VB-AKF adopts a dual-mask generative model with two independent Bernoulli random variables, explicitly characterizing both observable communication losses and latent data authenticity. Additionally, the VB-AKF integrates multiple concurrent observations into the adaptive filtering framework, which significantly enhances statistical identifiability. Comprehensive numerical experiments verify the effectiveness and asymptotic optimality of the proposed method, showing that both parameter identification and state estimation asymptotically converge to the theoretical optimal lower bound as the number of sensors increases.
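The dual-mask idea described in the abstract can be sketched as an ordinary Kalman measurement update gated by the two indicators: an observable dropout flag and a latent authenticity probability. The following is a minimal sketch, assuming the authenticity posterior has already been inferred by a variational step; the function name, the noise-inflation surrogate for soft outlier rejection, and all variable names are illustrative, not the paper's notation.

```python
import numpy as np

def masked_kalman_update(x, P, y, H, R, received, p_authentic):
    """One measurement update gated by a dual mask.

    received    -- observable Bernoulli dropout indicator (packet arrived?)
    p_authentic -- assumed pre-computed posterior probability that the
                   packet is authentic (not corrupted)
    """
    if not received:
        # Packet dropped: no measurement, keep the prediction as-is.
        return x, P
    # Inflate the measurement-noise covariance for likely-corrupted
    # packets, a common soft surrogate for outlier rejection.
    R_eff = R / max(p_authentic, 1e-6)
    S = H @ P @ H.T + R_eff               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ (y - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

With `p_authentic` near 1 this reduces to a standard Kalman update; as it approaches 0, the effective gain shrinks and the corrupted measurement is effectively ignored.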


A multi-armed robot for assisting with agricultural tasks

Robohub

In their paper Force Aware Branch Manipulation To Assist Agricultural Tasks, presented at IROS 2025, Madhav and his co-authors proposed a methodology to safely manipulate branches to aid various agricultural tasks. We interviewed Madhav to find out more. Could you give us an overview of the problem you were addressing in the paper? Our work is motivated by StickBug [1], a multi-armed robotic system for precision pollination in greenhouse environments. One of the main challenges StickBug faces is that many flowers are partially or fully hidden within the plant canopy, making them difficult to detect and reach directly for pollination.


Safe Distributionally Robust Feature Selection under Covariate Shift

Hanada, Hiroyuki, Akahane, Satoshi, Hashimoto, Noriaki, Takeno, Shion, Takeuchi, Ichiro

arXiv.org Machine Learning

In practical machine learning, the environments encountered during the model development and deployment phases often differ, especially when a model is used by many users in diverse settings. Learning models that maintain reliable performance across plausible deployment environments is known as distributionally robust (DR) learning. In this work, we study the problem of distributionally robust feature selection (DRFS), with a particular focus on sparse sensing applications motivated by industrial needs. In practical multi-sensor systems, a shared subset of sensors is typically selected prior to deployment based on performance evaluations using many available sensors. At deployment, individual users may further adapt or fine-tune models to their specific environments. When deployment environments differ from those anticipated during development, this strategy can result in systems lacking sensors required for optimal performance. To address this issue, we propose safe-DRFS, a novel approach that extends safe screening from conventional sparse modeling settings to a DR setting under covariate shift. Our method identifies a feature subset that encompasses all subsets that may become optimal across a specified range of input distribution shifts, with finite-sample theoretical guarantees of no false feature elimination.


Learning with Feature Evolvable Streams

Neural Information Processing Systems

Learning with streaming data has attracted much attention during the past few years. Though most studies consider data streams with fixed features, in real practice the features may be evolvable. For example, features of data gathered by limited-lifespan sensors will change when these sensors are substituted by new ones. In this paper, we propose a novel learning paradigm: Feature Evolvable Streaming Learning, where old features would vanish and new features would occur. Rather than relying on only the current features, we attempt to recover the vanished features and exploit them to improve performance. Specifically, we learn two models from the recovered features and the current features, respectively. To benefit from the recovered features, we develop two ensemble methods. In the first method, we combine the predictions from the two models and theoretically show that with the assistance of old features, the performance on new features can be improved. In the second approach, we dynamically select the best single prediction and establish a better performance guarantee when the best model switches.
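The first ensemble method described above can be sketched as a weighted vote over the two models, with weights updated multiplicatively from each model's loss so that the better model gains influence over time. This is a minimal sketch under assumed names and an assumed learning rate, not the paper's exact scheme.

```python
import math

def combine_predictions(pred_old, pred_new, w_old, w_new):
    """Weighted combination of the model on recovered (old) features
    and the model on current features."""
    total = w_old + w_new
    return (w_old * pred_old + w_new * pred_new) / total

def update_weights(w_old, w_new, loss_old, loss_new, eta=0.5):
    """Exponential (multiplicative) weight update: the model that
    incurred the smaller loss gains relative weight. eta is an
    illustrative learning rate."""
    return (w_old * math.exp(-eta * loss_old),
            w_new * math.exp(-eta * loss_new))
```

For example, two equally weighted models predicting 1.0 and 3.0 yield a combined prediction of 2.0; after one round in which only the old-feature model errs, its relative weight drops below the other's.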


Phased LSTM: Accelerating Recurrent Network Training for Long or Event-based Sequences

Neural Information Processing Systems

Recurrent Neural Networks (RNNs) have become the state-of-the-art choice for extracting patterns from temporal sequences. Current RNN models are ill-suited to process irregularly sampled data triggered by events generated in continuous time by sensors or other neurons. Such data can occur, for example, when the input comes from novel event-driven artificial sensors which generate sparse, asynchronous streams of events, or from multiple conventional sensors with different update intervals. In this work, we introduce the Phased LSTM model, which extends the LSTM unit by adding a new time gate. This gate is controlled by a parametrized oscillation whose frequency range requires updates of the memory cell only during a small percentage of the cycle. Even with the sparse updates imposed by the oscillation, the Phased LSTM network achieves faster convergence than regular LSTMs on tasks which require learning of long sequences. The model naturally integrates inputs from sensors of arbitrary sampling rates, thereby opening new areas of investigation for processing asynchronous sensory events that carry timing information. It also greatly improves the performance of LSTMs in standard RNN applications, and does so with an order-of-magnitude fewer computes.
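The time gate described above can be sketched as a piecewise-linear function of the oscillation phase: it ramps open, ramps closed, and otherwise leaks slightly so gradients can still flow. This minimal sketch follows the paper's description of the gate; the parameter values and the leak constant here are illustrative.

```python
def phased_lstm_gate(t, tau, shift, r_on, alpha=1e-3):
    """Openness of the Phased LSTM time gate at continuous time t.

    tau   -- oscillation period
    shift -- phase offset of the oscillation
    r_on  -- fraction of the period during which the gate is open
    alpha -- small leak during the closed phase
    """
    phi = ((t - shift) % tau) / tau      # phase of the cycle, in [0, 1)
    if phi < 0.5 * r_on:                 # rising half of the open phase
        return 2.0 * phi / r_on
    elif phi < r_on:                     # falling half of the open phase
        return 2.0 - 2.0 * phi / r_on
    else:                                # closed phase: tiny leak
        return alpha * phi
```

During training, each unit learns its own `tau`, `shift`, and `r_on`, so memory-cell updates are concentrated in a small, unit-specific slice of each cycle, which is what yields the sparse updates and reduced compute.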


Graphene-based sensor to improve robot touch

Robohub

Multiscale-structured miniaturized 3D force sensors CC BY 4.0 Robots are becoming increasingly capable in vision and movement, yet touch remains one of their major weaknesses. Now, researchers have developed a miniature tactile sensor that could give robots something much closer to a human sense of touch. The technology, developed by researchers at the University of Cambridge, is based on liquid metal composites and graphene, a two-dimensional form of carbon. The 'skin' allows robots to detect not just how hard they are pressing on an object, but also the direction of applied forces, whether an object is slipping, and even how rough a surface is, at a scale small enough to rival the spatial resolution of human fingertips. Their results are reported in the journal .
