Learning Sensor Multiplexing Design through Back-propagation

arXiv.org Machine Learning

Recent progress on many imaging and vision tasks has been driven by the use of deep feed-forward neural networks, which are trained by propagating gradients of a loss defined on the final output, back through the network up to the first layer that operates directly on the image. We propose back-propagating one step further---to learn camera sensor designs jointly with networks that carry out inference on the images they capture. In this paper, we specifically consider the design and inference problems in a typical color camera---where the sensor is able to measure only one color channel at each pixel location, and computational inference is required to reconstruct a full color image. We learn the camera sensor's color multiplexing pattern by encoding it as a layer whose learnable weights determine which color channel, from among a fixed set, will be measured at each location. These weights are jointly trained with those of a reconstruction network that operates on the corresponding sensor measurements to produce a full color image. Our network achieves significant improvements in accuracy over the traditional Bayer pattern used in most color cameras. It automatically learns to employ a sparse color measurement approach similar to that of a recent design, and moreover, improves upon that design by learning an optimal layout for these measurements.
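The core idea in the abstract---encoding the sensor's multiplexing pattern as a layer whose per-pixel weights select one color channel from a fixed set---can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's actual implementation: it assumes a softmax over per-pixel logits to make the channel choice differentiable during training, with the learned pattern hardened to a one-hot selection at test time. The function names `soft_multiplex` and `hard_multiplex` are hypothetical.

```python
import numpy as np

def soft_multiplex(image, logits):
    """Differentiable color-multiplexing layer (sketch).

    image:  (H, W, C) full-color scene
    logits: (H, W, C) learnable weights; a softmax over the channel
            axis gives each pixel a distribution over which color
            channel the sensor measures.
    Returns the (H, W) single-channel sensor measurement.
    """
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    p = e / e.sum(axis=-1, keepdims=True)   # per-pixel softmax
    return (p * image).sum(axis=-1)         # soft channel selection

def hard_multiplex(image, logits):
    """Test-time pattern: each pixel keeps only its argmax channel."""
    idx = logits.argmax(axis=-1)            # chosen channel per pixel
    h, w = idx.shape
    return image[np.arange(h)[:, None], np.arange(w)[None, :], idx]
```

With sharply peaked logits (e.g. after training drives the weights toward a one-hot pattern), the soft measurement converges to the hard one, so the reconstruction network trained on the soft layer transfers to a physically realizable sensor pattern.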


Learning Sensor Multiplexing Design through Back-propagation

Neural Information Processing Systems



Mashable

Assuming you're not in fact some sort of color wizard, the Nix Mini Color Sensor could be an awesome tool to feed your innate perfectionism. It apparently measures the color of anything your heart desires IRL and sends the exact digital color profile directly to your smartphone. While many of us have taken on home DIY projects inspired by our favorite HGTV shows, this tool takes the guesswork out of finding the perfect shade of paint. The listing says it can instantly scan 28,000 brand-name paint colors (not to mention RGB, HEX, CMYK, and LAB values too) and return a match.


Open-source project Pixy aims to give vision to hobbyists' robots

AITopics Original Links

An open-source project aims to give a rudimentary eye to robots with the help of a camera that can detect, identify and track the movement of specific objects. The Pixy camera sensor board, being developed by Charmed Labs and Carnegie Mellon University, can detect objects based on seven colors, and then report them back to a computer. A Kickstarter campaign was launched on Thursday to fund the $25,000 project, and the organizations are on pace to reach full funding by the end of the day. Adding the Pixy could be viewed as giving robots basic vision, said Rich LeGrand, founder of Charmed Labs. "Once you have vision, then you can introduce the idea of tasks," LeGrand said.


Get this Cyber Monday deal on a sensor that matches any color

Mashable
