
Collaborating Authors: wetzstein


FourierNets enable the design of highly non-local optical encoders for computational imaging

Neural Information Processing Systems

More challenging computational imaging applications, such as 3D snapshot microscopy, which compresses 3D volumes into single 2D images, require a highly non-local optical encoder. We show that existing deep network decoders have a locality bias which prevents the optimization of such highly non-local optical encoders. We address this with a decoder based on a shallow neural network architecture using global kernel Fourier convolutional neural networks (FourierNets).
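The global-kernel idea behind a Fourier convolution can be illustrated with a minimal sketch: multiplying an image's 2D Fourier transform elementwise by learnable spectral weights is equivalent to circular convolution with a kernel as large as the image itself, so every output pixel depends on every input pixel. This is not the paper's implementation; the array sizes and names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 32

# Toy input "image" and one learnable complex spectral weight per frequency.
x = rng.standard_normal((H, W))
weights = rng.standard_normal((H, W)) + 1j * rng.standard_normal((H, W))

def fourier_conv2d(x, weights):
    """Global-kernel convolution: pointwise multiply in the Fourier domain.

    By the convolution theorem this equals circular convolution with the
    spatial kernel ifft2(weights), which spans the whole input, making the
    operation fully non-local.
    """
    return np.fft.ifft2(np.fft.fft2(x) * weights).real

y = fourier_conv2d(x, weights)

# The effective spatial-domain kernel implied by the spectral weights.
kernel = np.fft.ifft2(weights)
```

In a decoder network, `weights` would be trained parameters; stacking a few such layers with pointwise nonlinearities gives a shallow architecture whose receptive field is the entire image from the first layer.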


TaCOS: Task-Specific Camera Optimization with Simulation

Yan, Chengyang, Dansereau, Donald G.

arXiv.org Artificial Intelligence

The performance of robots in their applications heavily depends on the quality of sensory input. However, designing sensor payloads and their parameters for specific robotic tasks is an expensive process that requires well-established sensor knowledge and extensive experiments with physical hardware. With cameras playing a pivotal role in robotic perception, we introduce a novel end-to-end optimization approach for co-designing a camera with specific robotic tasks by combining derivative-free and gradient-based optimizers. The proposed method leverages recent computer graphics techniques and physical camera characteristics to prototype the camera in software, simulate operational environments and tasks for robots, and optimize the camera design for the desired tasks in a cost-effective way. We validate the accuracy of our camera simulation by comparing it with physical cameras, and demonstrate the design of cameras with stronger performance than common off-the-shelf alternatives. Our approach supports the optimization of both continuous and discrete camera parameters, respects manufacturing constraints, and generalizes to a broad range of camera design scenarios, including multiple cameras and unconventional cameras. This work advances the fully automated design of cameras for specific robotics tasks.
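The combination of a derivative-free outer search over discrete parameters with gradient-based inner optimization of continuous parameters can be sketched on a toy problem. Everything here is hypothetical: the quadratic "task loss", the resolution catalog, and the exposure parameter are stand-ins for a real camera/task simulator, not TaCOS itself.

```python
# Toy co-design loop: exhaustive (derivative-free) search over a discrete
# sensor-resolution catalog, gradient descent on a continuous exposure
# parameter inside each candidate. All names and numbers are illustrative.
RESOLUTIONS = [480, 720, 1080]

def task_loss(exposure, resolution):
    # Hypothetical simulator: each resolution prefers a different exposure,
    # and higher resolutions incur a cost term (a stand-in for constraints).
    optimal = 0.5 + 0.1 * RESOLUTIONS.index(resolution)
    return (exposure - optimal) ** 2 + 1e-4 * resolution

def grad_exposure(exposure, resolution):
    # Analytic gradient of the toy loss w.r.t. the continuous parameter.
    optimal = 0.5 + 0.1 * RESOLUTIONS.index(resolution)
    return 2.0 * (exposure - optimal)

best = None
for res in RESOLUTIONS:              # derivative-free: enumerate discrete choices
    exposure = 0.0
    for _ in range(200):             # gradient-based: descend on the continuous one
        exposure -= 0.1 * grad_exposure(exposure, res)
    loss = task_loss(exposure, res)
    if best is None or loss < best[0]:
        best = (loss, res, exposure)

print(best)
```

In the real method, the inner loss would come from differentiating a rendered simulation of the robot's task through the camera model, but the two-level structure is the same.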


A new AI camera recognizes objects faster and more efficiently

#artificialintelligence

The image recognition technology used in today's autonomous cars and aerial drones, as well as tomorrow's cancer-seeking robotic medical devices, depends on artificial intelligence. These "computers that see" teach themselves to recognize objects -- a dog, a pedestrian crossing the street, a stopped car or a cancer tumor. Now, researchers at Stanford University have devised a new type of camera system that can classify images faster and more energy efficiently, and that could one day be built small enough to be embedded in the devices themselves, something that is not possible today. "That autonomous car you just passed has a relatively huge, relatively slow, energy intensive computer in its trunk," says Gordon Wetzstein, an assistant professor of electrical engineering and (by courtesy) computer science at Stanford, who directed the research. Wetzstein and Julie Chang, a doctoral candidate in his lab and first author on the paper, have married two types of computers into one -- creating a hybrid optical-electrical computer designed specifically for image analysis.


Scientists Reconstruct an Object by Photographing Its Shadow

WIRED

Vivek Goyal isn't a professional photographer, but he and his colleagues have developed an intriguing party trick: they can capture the image of an object completely out of sight. They demonstrated the trick in a windowless room on the Boston University campus, where Goyal works as an electrical engineering professor. In the room, a flat-screen monitor displayed a series of crude drawings created by Goyal's graduate student, Charles Saunders. Among them were several masterpieces: A mushroom that resembles Toad from Mario Kart, a Simpsons-yellow dude wearing a sideways red baseball cap, the red letters "BU" for school pride. These are the images that Goyal and his team wanted to capture while pointing the camera lens in a completely different direction.


Stanford researchers create new AI-powered camera for faster image processing

#artificialintelligence

Researchers from Stanford University have created a new artificial intelligence (AI) powered camera system that processes images faster and more efficiently, and that holds promise for applications such as self-driving vehicles and security cameras. The breakthrough was published in the journal Nature Scientific Reports on Friday. A research team led by Gordon Wetzstein, an assistant professor of electrical engineering at Stanford, combined two types of computers into one hybrid optical-electrical computer designed specifically for image analysis. The AI-powered camera system uses an optical computer as its first layer, which avoids the power-intensive mathematical operations of digital computing, while the second layer is a traditional digital electronic computer. The optical computer physically pre-processes the image data by filtering it in multiple ways, and this requires zero input power because the filtering happens naturally as light passes through the custom optics.
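The two-layer split described above can be sketched digitally: a first layer of fixed convolutions standing in for the optics (which in hardware filter the light at zero electrical cost), followed by a small electronic classifier. The point spread functions and classifier weights below are random toy stand-ins, not the actual optical design.

```python
import numpy as np

rng = np.random.default_rng(1)

def optical_layer(image, psfs):
    """Stand-in for the optical first layer: a bank of fixed convolutions.

    In the actual system this filtering is performed passively by custom
    optics as light propagates; here we emulate it with FFT-based
    convolution. `psfs` are hypothetical point spread functions.
    """
    F = np.fft.fft2(image)
    return np.stack([np.fft.ifft2(F * np.fft.fft2(p, s=image.shape)).real
                     for p in psfs])

def electronic_layer(features, w, b):
    """Small digital second layer: a linear readout of the filtered image."""
    return features.reshape(-1) @ w + b

image = rng.standard_normal((16, 16))
psfs = rng.standard_normal((4, 3, 3))        # four fixed "optical" filters
w = rng.standard_normal(4 * 16 * 16) * 0.01  # toy classifier weights
score = electronic_layer(optical_layer(image, psfs), w, 0.0)
```

The design rationale is that the convolutional front end, which dominates the arithmetic in a conventional image classifier, is moved into passive optics, leaving only a lightweight electronic stage to run on power.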


New AI camera could revolutionize autonomous vehicles

#artificialintelligence

The image recognition technology that underlies today's autonomous cars and aerial drones depends on artificial intelligence: the computers essentially teach themselves to recognize objects like a dog, a pedestrian crossing the street or a stopped car. The problem is that the computers running the artificial intelligence algorithms are currently too large and slow for future applications like handheld medical devices. Now, researchers at Stanford University have devised a new type of artificially intelligent camera system that can classify images faster and more energy efficiently, and that could one day be built small enough to be embedded in the devices themselves, something that is not possible today. The work was published August 17 in Nature Scientific Reports. "That autonomous car you just passed has a relatively huge, relatively slow, energy intensive computer in its trunk," said Gordon Wetzstein, an assistant professor of electrical engineering at Stanford, who led the research.


A.I. camera could help self-driving cars 'see' better - Futurity

#artificialintelligence

You are free to share this article under the Attribution 4.0 International license. Researchers have devised a new type of artificially intelligent camera system that can classify images faster and more energy-efficiently. The image recognition technology that underlies today's autonomous cars and aerial drones depends on artificial intelligence: the computers essentially teach themselves to recognize objects like a dog, a pedestrian crossing the street, or a stopped car. The new camera could one day be small enough to fit in future electronic devices, something that is not possible today because of the size and slow speed of computers that can run artificial intelligence algorithms. "That autonomous car you just passed has a relatively huge, relatively slow, energy intensive computer in its trunk," says Gordon Wetzstein, an assistant professor of electrical engineering at Stanford University who led the research. Future applications will need something much faster and smaller to process the stream of images, he says.


New AI camera could revolutionize autonomous vehicles Stanford News

#artificialintelligence

The image recognition technology that underlies today's autonomous cars and aerial drones depends on artificial intelligence: the computers essentially teach themselves to recognize objects like a dog, a pedestrian crossing the street or a stopped car. The problem is that the computers running the artificial intelligence algorithms are currently too large and slow for future applications like handheld medical devices. A Stanford-designed hybrid optical-electrical computer built specifically for image analysis could be ideal for autonomous vehicles. Now, researchers at Stanford University have devised a new type of artificially intelligent camera system that can classify images faster and more energy efficiently, and that could one day be built small enough to be embedded in the devices themselves, something that is not possible today. The work was published August 17 in Nature Scientific Reports.


Stanford engineers reveal 4D camera for self-driving cars

Daily Mail - Science & tech

Stanford engineers have revealed the first-ever single-lens light field camera with a wide field of view. They say the technology has the ability to give devices like drones and self-driving cars information-rich 'supervision' that will make it much easier for them to navigate the world. The 4D camera has an extra-wide field of view and can capture nearly 140 degrees of information - essentially, the difference between looking through the new design and a typical camera is the equivalent of looking through a window versus a peephole, according to the scientists. The new Stanford camera is the first-ever single-lens light field camera with a wide field of view. It can capture nearly 140 degrees of information, meaning a camera-dependent robot like a car could take one photo instead of several to understand its environment.