
Ambient Light



Efficient Detection of Objects Near a Robot Manipulator via Miniature Time-of-Flight Sensors

Sifferman, Carter, Gupta, Mohit, Gleicher, Michael

arXiv.org Artificial Intelligence

Abstract--We provide a method for detecting and localizing objects near a robot arm using arm-mounted miniature time-of-flight sensors. A key challenge when using arm-mounted sensors is differentiating between the robot itself and external objects in sensor measurements. To address this challenge, we propose a computationally lightweight method which utilizes the raw time-of-flight information captured by many off-the-shelf, low-resolution time-of-flight sensors. We build an empirical model of expected sensor measurements in the presence of the robot alone, and use this model at runtime to detect objects in proximity to the robot. In addition to avoiding robot self-detections in common sensor configurations, the proposed method enables extra flexibility in sensor placement, unlocking configurations which achieve more efficient coverage of a radius around the robot arm. Our method can detect small objects near the arm and localize the position of objects along the length of a robot link to reasonable precision. We evaluate the performance of the method with respect to object type, location, and ambient light level, and identify limiting factors on performance inherent in the measurement principle. The proposed method has potential applications in collision avoidance and in facilitating safe human-robot interaction. Detection of objects near a robot arm is useful for tasks such as collision avoidance [1], [2] or to enable proximity-based human-robot interactions [3]. Externally mounted cameras are one way of detecting such objects, but they suffer from occlusion and require the robot to remain in view of the cameras, limiting their practicality when used with mobile manipulators. Therefore, we seek a solution which uses sensors mounted on the robot.
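
The abstract's core idea, comparing raw runtime measurements against an empirical robot-only model, can be illustrated with a minimal sketch. The per-zone Gaussian model, the 8x8 zone count, and the 3-sigma threshold below are illustrative assumptions for a single joint configuration, not the authors' implementation.

    # Minimal sketch: build a per-zone statistical model of ToF readings with only
    # the robot in view, then flag runtime readings that deviate toward the sensor.
    # Zone count, Gaussian model, and threshold are assumptions for illustration.
    import numpy as np

    N_ZONES = 64  # e.g., an 8x8 multizone ToF sensor (assumed configuration)


    def fit_robot_model(calib_frames: np.ndarray):
        """calib_frames: (n_frames, N_ZONES) distances captured with no external objects."""
        return calib_frames.mean(axis=0), calib_frames.std(axis=0) + 1e-6


    def detect_objects(frame: np.ndarray, mean: np.ndarray, std: np.ndarray, k: float = 3.0):
        """Return indices of zones whose reading is significantly closer than the
        robot-only model predicts, i.e., likely occluded by a nearby object."""
        z = (mean - frame) / std  # positive when the return is closer than expected
        return np.flatnonzero(z > k)


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        calib = 500 + 5 * rng.standard_normal((200, N_ZONES))  # robot-only calibration, mm
        live = calib[0].copy()
        live[10:14] = 180  # an object intrudes near zones 10-13
        mu, sigma = fit_robot_model(calib)
        print("object zones:", detect_objects(live, mu, sigma))
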


Using a Distance Sensor to Detect Deviations in a Planar Surface

Sifferman, Carter, Sun, William, Gupta, Mohit, Gleicher, Michael

arXiv.org Artificial Intelligence

We investigate methods for determining if a planar surface contains geometric deviations (e.g., protrusions, objects, divots, or cliffs) using only an instantaneous measurement from a miniature optical time-of-flight sensor. The key to our method is to utilize the entirety of information encoded in raw time-of-flight data captured by off-the-shelf distance sensors. We provide an analysis of the problem in which we identify the key ambiguity between geometry and surface photometrics. To overcome this challenging ambiguity, we fit a Gaussian mixture model to a small dataset of planar surface measurements. This model implicitly captures the expected geometry and distribution of photometrics of the planar surface and is used to identify measurements that are likely to contain deviations. We characterize our method on a variety of surfaces and planar deviations across a range of scenarios. We find that our method utilizing raw time-of-flight data outperforms baselines which use only derived distance estimates. We build an example application in which our method enables mobile robot obstacle and cliff avoidance over a wide field-of-view.
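
A minimal sketch of the Gaussian-mixture idea described above: fit a GMM to features of planar-surface measurements and flag low-likelihood measurements as deviations. The two-feature representation (distance and return amplitude), the synthetic data, and the percentile threshold are assumptions for illustration, not the paper's exact pipeline.

    # Fit a Gaussian mixture to planar-surface ToF features, then score new
    # measurements; low-likelihood measurements are flagged as deviations.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)

    # Synthetic planar-surface training data: (distance in mm, return amplitude) pairs.
    planar = np.column_stack([
        300 + 8 * rng.standard_normal(1000),
        0.6 + 0.05 * rng.standard_normal(1000),
    ])

    gmm = GaussianMixture(n_components=3, random_state=0).fit(planar)
    threshold = np.percentile(gmm.score_samples(planar), 1)  # 1st percentile of training log-likelihood


    def is_deviation(measurement: np.ndarray) -> bool:
        """True if the measurement is unlikely under the planar-surface model."""
        return gmm.score_samples(measurement.reshape(1, -1))[0] < threshold


    print(is_deviation(np.array([300.0, 0.6])))  # on-plane -> False
    print(is_deviation(np.array([150.0, 0.9])))  # protrusion-like -> True
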


Why a great gaming setup needs more than just a powerful PC

PCWorld

Unlike other forms of entertainment, gaming is not just about having a computer that is "good enough." Of course, the performance of the PC is important, but peripherals, furniture and even lighting can make the difference between enjoyment and frustration. Have you ever thought about everything you need to optimize your setup? We take a look at some aspects you can use to improve your gaming sessions. In addition to the right gaming machine and peripherals, a coherent gaming setup also includes the right furniture -- such as the ideal gaming chair.


DarkGS: Learning Neural Illumination and 3D Gaussians Relighting for Robotic Exploration in the Dark

Zhang, Tianyi, Huang, Kaining, Zhi, Weiming, Johnson-Roberson, Matthew

arXiv.org Artificial Intelligence

Humans have the remarkable ability to construct consistent mental models of an environment, even under limited or varying levels of illumination. We wish to endow robots with this same capability. In this paper, we tackle the challenge of constructing a photorealistic scene representation under poorly illuminated conditions and with a moving light source. We approach the task of modeling illumination as a learning problem, and utilize the developed illumination model to aid in scene reconstruction. We introduce an innovative framework that uses a data-driven approach, Neural Light Simulators (NeLiS), to model and calibrate the camera-light system. Furthermore, we present DarkGS, a method that applies NeLiS to create a relightable 3D Gaussian scene model capable of real-time, photorealistic rendering from novel viewpoints. We show the applicability and robustness of our proposed simulator and system in a variety of real-world environments.


PixelGen: Rethinking Embedded Camera Systems

Li, Kunjun, Gulati, Manoj, Waskito, Steven, Shah, Dhairya, Chakrabarty, Shantanu, Varshney, Ambuj

arXiv.org Artificial Intelligence

Embedded camera systems are ubiquitous, representing the most widely deployed example of a wireless embedded system. They capture a representation of the world - the surroundings illuminated by visible or infrared light. Despite their widespread usage, the architecture of embedded camera systems has remained unchanged, which leads to limitations. They visualize only a tiny portion of the world. Additionally, they are energy-intensive, leading to limited battery lifespan. We present PixelGen, which re-imagines embedded camera systems. Specifically, PixelGen combines sensors, transceivers, and low-resolution image and infrared vision sensors to capture a broader world representation. They are deliberately chosen for their simplicity, low bitrate, and low power consumption, culminating in an energy-efficient platform. We show that despite this simplicity, the captured data can be processed using transformer-based image and language models to generate novel representations of the environment. For example, we demonstrate that the platform can generate high-definition images while using only low-power, low-resolution monochrome cameras. Furthermore, the capabilities of PixelGen extend beyond traditional photography, enabling visualization of phenomena invisible to conventional cameras, such as sound waves. PixelGen can enable numerous novel applications, and we demonstrate that it enables unique visualizations of the surroundings that can then be projected on extended reality headsets. We believe PixelGen goes beyond conventional cameras and opens new avenues for research and photography.


Hand Gesture Recognition through Reflected Infrared Light Wave Signals

Islam, Md Zobaer, Yu, Li, Abuella, Hisham, O'Hara, John F., Crick, Christopher, Ekin, Sabit

arXiv.org Artificial Intelligence

In this study, we present a wireless (non-contact) gesture recognition method using only incoherent light wave signals reflected from a human subject. In comparison to existing radar, light-shadow, sound, and camera-based sensing systems, this technology uses a low-cost, ubiquitous light source (e.g., an infrared LED) to send light towards the subject's hand performing gestures, and the reflected light is collected by a light sensor (e.g., a photodetector). This light wave sensing system recognizes different gestures from the variations of the received light intensity within a 20-35 cm range. The hand gesture recognition results demonstrate up to 96% accuracy on average. The developed system can be utilized in numerous Human-Computer Interaction (HCI) applications as a low-cost and non-contact gesture recognition technology.
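
As a rough illustration of classifying gestures from received-intensity variations, the sketch below summarizes hypothetical fixed-length photodetector windows with a few simple statistics and trains a random forest on synthetic signals; the paper's actual features, windowing, and classifier are not specified in the abstract and may differ.

    # Summarize intensity time series with simple statistics and classify them.
    # The synthetic slow/fast "gestures" stand in for real photodetector traces.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier


    def featurize(window: np.ndarray) -> np.ndarray:
        """Summarize one received-intensity time series with a few simple statistics."""
        return np.array([window.mean(), window.std(), np.diff(window).std(),
                         np.abs(np.fft.rfft(window))[1:10].argmax()])


    rng = np.random.default_rng(2)
    t = np.linspace(0, 1, 200)
    freqs = rng.choice([2.0, 8.0], size=120)  # two synthetic "gesture" classes
    y = (freqs == 8.0).astype(int)
    X = np.array([featurize(np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal(t.size))
                  for f in freqs])

    clf = RandomForestClassifier(random_state=0).fit(X[:90], y[:90])
    print("held-out accuracy:", clf.score(X[90:], y[90:]))
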


Single Underwater Image Enhancement Using an Analysis-Synthesis Network

Wang, Zhengyong, Shen, Liquan, Yu, Mei, Lin, Yufei, Zhu, Qiuyu

arXiv.org Artificial Intelligence

Most deep models for underwater image enhancement resort to training on synthetic datasets based on underwater image formation models. Although promising performances have been achieved, they are still limited by two problems: (1) existing underwater image synthesis models have an intrinsic limitation, in which the homogeneous ambient light is usually randomly generated and many important dependencies are ignored, and thus the synthesized training data cannot adequately express characteristics of real underwater environments; (2) most deep models disregard many favorable underwater priors and rely heavily on training data, which severely limits their range of application. To address these limitations, a new underwater synthetic dataset is first established, in which a revised ambient light synthesis equation is embedded. The revised equation explicitly defines the complex mathematical relationship among intensity values of the ambient light in the RGB channels and many dependencies such as surface-object depth, water type, etc., which helps to better simulate real underwater scene appearances. Secondly, a unified framework is proposed, named ANA-SYN, which can effectively enhance underwater images through the collaboration of priors (underwater domain knowledge) and data information (underwater distortion distributions). The proposed framework includes an analysis network and a synthesis network, one for prior exploration and another for prior integration. To exploit more accurate priors, the significance of each prior for the input image is explored in the analysis network, and an adaptive weighting module is designed to dynamically recalibrate them. Meanwhile, a novel prior guidance module is introduced in the synthesis network, which effectively aggregates the prior and data features and thus provides better hybrid information to perform more reasonable image enhancement.
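
For context on the synthesis step discussed above, the widely used underwater image formation model composes a clean image with per-channel attenuation and ambient (veiling) light: I_c = J_c e^{-beta_c d} + A_c (1 - e^{-beta_c d}). The paper's revised ambient-light equation is not reproduced in the abstract, so the per-channel beta and A values in this sketch are placeholders only.

    # Synthesize an underwater-looking image from a clean image and a depth map
    # using the standard formation model; beta and A below are placeholder values.
    import numpy as np


    def synthesize_underwater(J: np.ndarray, depth: np.ndarray,
                              beta=(0.10, 0.05, 0.02), A=(0.05, 0.35, 0.45)) -> np.ndarray:
        """J: clean RGB image in [0, 1], shape (H, W, 3); depth: (H, W) in meters."""
        beta = np.asarray(beta)  # per-channel attenuation (red absorbed fastest)
        A = np.asarray(A)        # per-channel ambient light (greenish-blue cast)
        t = np.exp(-beta[None, None, :] * depth[..., None])  # transmission map
        return J * t + A[None, None, :] * (1 - t)


    J = np.full((4, 4, 3), 0.8)
    depth = np.linspace(1, 10, 16).reshape(4, 4)
    print(synthesize_underwater(J, depth)[0, 0])    # shallow pixel: close to J
    print(synthesize_underwater(J, depth)[-1, -1])  # deep pixel: pulled toward A
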


Machine Learning Based Channel Modeling for Vehicular Visible Light Communication

Turan, Bugra, Coleri, Sinem

arXiv.org Machine Learning

Optical Wireless Communication (OWC) propagation channel characterization plays a key role in the design and performance analysis of Vehicular Visible Light Communication (VVLC) systems. Current OWC channel models based on deterministic and stochastic methods fail to address the effects of mobility-induced ambient light, optical turbulence, and road reflections on channel characterization. Therefore, alternative machine learning (ML) based schemes, which consider ambient light, optical turbulence, and road reflection effects in addition to inter-vehicular distance and geometry, are proposed to obtain accurate VVLC channel loss and channel frequency response (CFR). This work demonstrates the synthesis of ML-based VVLC channel model frameworks using multi-layer perceptron feed-forward neural network (MLP), radial basis function neural network (RBF-NN), and Random Forest ensemble learning algorithms. Predictor and response variables, collected through practical road measurements, are employed to train and validate the proposed models for various conditions. Additionally, the importance of different predictor variables for channel loss and CFR is assessed, and the normalized importance of features for the measured VVLC channel is introduced. We show that RBF-NN, Random Forest, and MLP based models yield more accurate channel loss estimations, with 3.53 dB, 3.81 dB, and 3.95 dB root mean square error (RMSE), respectively, compared to a fitting-curve based VVLC channel model with 7 dB RMSE. Moreover, RBF-NN and MLP models are demonstrated to predict the VVLC CFR with respect to distance, ambient light, and receiver inclination angle predictor variables with 3.78 dB and 3.60 dB RMSE, respectively.
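
A hedged sketch of the regression setup described above: predict channel loss from inter-vehicular distance, ambient light, and receiver inclination angle, and compare models by RMSE. The synthetic data-generating relationship and hyperparameters are placeholders, not the paper's road measurements or tuned models.

    # Compare MLP and Random Forest regressors for channel-loss prediction by RMSE.
    # Data below is synthetic; the abstract's measured RMSE values will not reproduce.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(3)
    n = 2000
    distance = rng.uniform(5, 50, n)   # inter-vehicular distance, m
    ambient = rng.uniform(0, 1000, n)  # ambient light level (arbitrary units)
    incl = rng.uniform(-20, 20, n)     # receiver inclination angle, degrees
    # Placeholder channel-loss relationship with noise (dB), for illustration only.
    loss = 20 * np.log10(distance) + 0.002 * ambient + 0.1 * np.abs(incl) + rng.normal(0, 2, n)

    X = np.column_stack([distance, ambient, incl])
    X_tr, X_te, y_tr, y_te = train_test_split(X, loss, random_state=0)

    for name, model in [("MLP", MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)),
                        ("Random Forest", RandomForestRegressor(random_state=0))]:
        model.fit(X_tr, y_tr)
        rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
        print(f"{name}: RMSE = {rmse:.2f} dB")
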


The first available smart mirror has a narcissistic sequel

Engadget

Just a few months after hitting the market, there's already a new model of the first smart mirror you can actually buy. The HiMirror Plus boasts incremental upgrades that make it a better companion for selfie and beauty lovers. It costs $259, $70 more than the original, and has a new ambient light to simulate different lighting environments so you can better apply your makeup (and, let's be real, take fantastic selfies). The company also unveiled an accessory called the HiSkin -- a little handheld scanner with optical sensors that you can place on your face (or any part of your body, really) to get a better read on your complexion. I used a HiMirror Plus for a few days ahead of CES, and then checked out the HiSkin here at the show, and am hopeful, although skeptical, that they could really help improve my skin.