Super LiDAR Reflectance for Robotic Perception
Gao, Wei, Zhang, Jie, Zhao, Mingle, Zhang, Zhiyuan, Kong, Shu, Ghaffari, Maani, Song, Dezhen, Xu, Cheng-Zhong, Kong, Hui
Conventionally, human intuition defines vision as a modality of passive optical sensing, while active optical sensing is typically regarded as measurement rather than the default modality of vision. This situation is now changing: sensor technologies and data-driven paradigms empower active optical sensing to redefine the boundaries of vision, ushering in a new era of active vision. Light Detection and Ranging (LiDAR) sensors capture reflectance from object surfaces, which remains invariant under varying illumination conditions, showing significant potential for robotic perception tasks such as detection, recognition, segmentation, and Simultaneous Localization and Mapping (SLAM). These applications often rely on dense sensing capabilities, typically achieved by high-resolution, expensive LiDAR sensors. A key challenge with low-cost LiDARs lies in the sparsity of scan data, which limits their broader application. To address this limitation, this work introduces a framework for generating dense LiDAR reflectance images from sparse data, leveraging the unique attributes of non-repeating scanning LiDAR (NRS-LiDAR). We tackle critical challenges, including reflectance calibration and the transition from static to dynamic scene domains, enabling the reconstruction of dense reflectance images in real-world settings. The key contributions of this work include a comprehensive dataset for LiDAR reflectance image densification, a densification network tailored for NRS-LiDAR, and diverse applications, such as loop closure and traffic lane detection, using the generated dense reflectance images. Experimental results validate the efficacy of the proposed approach, which integrates computer vision techniques with LiDAR data processing, enhancing the applicability of low-cost LiDAR systems and establishing a novel paradigm for robotic active vision: LiDAR as a Camera. The dataset and code are available at: To Be Updated.
- North America > United States > Michigan > Washtenaw County > Ann Arbor (0.14)
- Asia > Singapore (0.04)
- Asia > Macao (0.04)
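The core idea of the abstract above — scattering sparse reflectance returns onto an image grid and filling in the gaps — can be sketched in a few lines. The nearest-neighbour fill below is a hypothetical stand-in for the paper's learned densification network, and all function and variable names are illustrative assumptions.

```python
import numpy as np

def densify_reflectance(points_uv, reflectance, shape):
    """Scatter sparse LiDAR reflectance returns onto an image grid, then
    fill empty pixels with the nearest known value. This brute-force
    nearest-neighbour fill is only a toy stand-in for the learned
    densification network described in the paper."""
    img = np.full(shape, np.nan)
    img[points_uv[:, 0], points_uv[:, 1]] = reflectance

    known = np.argwhere(~np.isnan(img))
    vals = img[known[:, 0], known[:, 1]]
    for u, v in np.argwhere(np.isnan(img)):
        d = np.abs(known - [u, v]).sum(axis=1)   # Manhattan distance
        img[u, v] = vals[np.argmin(d)]
    return img

# Tiny example: three sparse returns densified to a full 4x4 image.
uv = np.array([[0, 0], [1, 3], [3, 1]])
refl = np.array([0.2, 0.8, 0.5])
dense = densify_reflectance(uv, refl, (4, 4))
```

A real system would replace the fill step with the trained network and operate on accumulated NRS-LiDAR scans rather than a toy grid.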
Automatic Illumination Spectrum Recovery
Habili, Nariman, Oorloff, Jeremy, Petersson, Lars
We develop a deep learning network to estimate the illumination spectrum of hyperspectral images under various lighting conditions. To this end, a dataset, IllumNet, was created. Images were captured using a Specim IQ camera under various illumination conditions, both indoor and outdoor. Outdoor images were captured in sunny, overcast, and shady conditions and at different times of the day. For indoor images, halogen and LED light sources were used, as well as mixed light sources, mainly halogen or LED combined with fluorescent. The ResNet18 network was employed in this study, but with the 2D kernels changed to 3D kernels to suit the spectral nature of the data. In addition to fitting the actual illumination spectrum, the predicted spectrum should be smooth; this is enforced by a cubic smoothing spline error cost function. Experimental results indicate that the trained model can infer an accurate estimate of the illumination spectrum.
- Oceania > Australia (0.05)
- North America > United States > New York (0.04)
- North America > United States > Florida > Palm Beach County > Boca Raton (0.04)
- Europe > Finland > Northern Ostrobothnia > Oulu (0.04)
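The cubic-smoothing-spline cost mentioned in the abstract above combines a data-fit term with a roughness penalty. A minimal sketch, using squared second differences as the discrete analogue of the spline's curvature penalty — the exact loss form and the weight `lam` are assumptions, not the authors' implementation:

```python
import numpy as np

def spline_smoothing_loss(pred, target, lam=0.1):
    """MSE data-fit term plus a squared second-difference roughness
    penalty on the predicted spectrum -- a discrete analogue of the
    cubic smoothing spline cost (simplified sketch)."""
    fit = np.mean((pred - target) ** 2)
    d2 = pred[2:] - 2 * pred[1:-1] + pred[:-2]   # second differences
    rough = np.sum(d2 ** 2)
    return fit + lam * rough

# Toy illumination spectrum over 400-700 nm, sampled every 10 nm.
wavelengths = np.linspace(400, 700, 31)
target = np.exp(-((wavelengths - 550) / 80) ** 2)
smooth_pred = target.copy()
noisy_pred = target + 0.05 * np.sin(wavelengths)  # wiggly prediction
```

The wiggly prediction pays a much larger roughness penalty than the smooth one, which is exactly the behaviour the spline cost is meant to encourage.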
Recovering Intrinsic Images with a Global Sparsity Prior on Reflectance
Rother, Carsten, Kiefel, Martin, Zhang, Lumin, Schölkopf, Bernhard, Gehler, Peter V.
We address the challenging task of decoupling material properties from lighting properties given a single image. In the last two decades virtually all works have concentrated on exploiting edge information to address this problem. We take a different route by introducing a new prior on reflectance, that models reflectance values as being drawn from a sparse set of basis colors. This results in a Random Field model with global, latent variables (basis colors) and pixel-accurate output reflectance values. We show that without edge information high-quality results can be achieved, that are on par with methods exploiting this source of information. Finally, we present competitive results by integrating an additional edge model. We believe that our approach is a solid starting point for future development in this domain.
- Europe > Germany > Baden-Württemberg > Tübingen Region > Tübingen (0.04)
- North America > Canada > Alberta (0.04)
- Asia (0.04)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Vision (0.95)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.69)
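The "sparse set of basis colors" idea in the abstract above can be illustrated with plain k-means colour quantization. This is a deliberately simplified stand-in: in the paper the prior is embedded in a Random Field model with latent basis colours, not applied as a post-hoc clustering step, and all names below are illustrative.

```python
import numpy as np

def quantize_reflectance(pixels, k=2, iters=20):
    """Snap pixel colours onto a small set of basis colours with plain
    k-means -- a toy analogue of a global sparsity prior on reflectance."""
    # Deterministic init: spread the initial centres across the input.
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centers = pixels[idx].astype(float).copy()
    for _ in range(iters):
        # Assign each pixel to its nearest basis colour.
        d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Recompute each basis colour as the mean of its cluster.
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(0)
    return centers[labels]

# Two noisy material colours collapse onto two recovered basis colours.
rng = np.random.default_rng(1)
red = np.array([0.9, 0.1, 0.1]) + 0.02 * rng.standard_normal((50, 3))
blue = np.array([0.1, 0.1, 0.9]) + 0.02 * rng.standard_normal((50, 3))
refl = quantize_reflectance(np.vstack([red, blue]), k=2)
```

Every output pixel now takes one of just two reflectance values, mirroring the intuition that scenes contain far fewer materials than observed colours.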
Recovering Intrinsic Images from a Single Image
Tappen, Marshall F., Freeman, William T., Adelson, Edward H.
We present an algorithm that uses multiple cues to recover shading and reflectance intrinsic images from a single image. Using both color information and a classifier trained to recognize gray-scale patterns, each image derivative is classified as being caused by shading or a change in the surface's reflectance. Generalized Belief Propagation is then used to propagate information from areas where the correct classification is clear to areas where it is ambiguous. We also show results on real images.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.15)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.15)
- Europe > Switzerland (0.04)
- Information Technology > Sensing and Signal Processing > Image Processing (0.89)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.87)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.87)
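The colour cue from the Tappen et al. abstract — a shading change scales all RGB channels equally, so chromaticity stays constant across a shading edge — can be sketched as below. The threshold and the chromaticity test are illustrative assumptions; the full method also uses a trained grey-scale classifier and Generalized Belief Propagation to resolve ambiguous derivatives.

```python
import numpy as np

def classify_derivatives(img, thresh=0.05):
    """Label each horizontal image derivative as shading- or
    reflectance-caused using the colour cue alone: shading multiplies
    all channels by the same factor, leaving chromaticity unchanged."""
    chroma = img / img.sum(axis=2, keepdims=True)        # normalised colour
    d_chroma = np.abs(np.diff(chroma, axis=1)).sum(axis=2)
    return np.where(d_chroma > thresh, "reflectance", "shading")

# A 1x3 strip: a shading edge (same hue, darker), then a paint edge.
strip = np.array([[[0.8, 0.4, 0.2],     # orange, brightly lit
                   [0.4, 0.2, 0.1],     # same orange in shadow -> shading
                   [0.1, 0.2, 0.8]]])   # blue paint            -> reflectance
labels = classify_derivatives(strip)
```

In the full algorithm these local labels would be only evidence, with belief propagation spreading confident classifications into ambiguous regions.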