You now have a chance to try Apple's machine learning-based Deep Fusion photography if you're willing to live on the bleeding edge. Apple is releasing an iOS 13.2 developer beta (a public beta will likely follow soon) that makes Deep Fusion available to iPhone 11 and iPhone 11 Pro owners. The technique uses machine learning to create highly detailed, sharper and more natural-looking photos on the primary and telephoto lenses by combining the results of multiple shots. Deep Fusion takes an underexposed photo for sharpness and blends it, on a per-pixel level, with three neutral pictures and a long high-exposure image to achieve a highly customized result. The machine learning system examines the context of the picture to understand where each pixel sits on the frequency spectrum.
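Apple hasn't published the actual algorithm, but the description above — a sharp underexposed frame blended per pixel with averaged neutral frames and a long exposure, weighted by where each pixel sits on the frequency spectrum — can be sketched roughly like this. The weighting by local gradient magnitude is an assumption standing in for Apple's learned frequency analysis:

```python
import numpy as np

def fuse(short, neutrals, long_exp):
    """Toy per-pixel exposure fusion loosely inspired by Deep Fusion.

    short:    underexposed frame (sharp, noisy), float array in [0, 1]
    neutrals: list of normally exposed frames, same shape
    long_exp: long-exposure frame (low noise), same shape
    """
    # Average the neutral frames and mix with the long exposure
    # to build a low-noise base image.
    base = 0.5 * np.mean(neutrals, axis=0) + 0.5 * long_exp
    # Crude stand-in for a "frequency" map: local gradient magnitude
    # of the sharp short frame marks textured (high-frequency) regions.
    gy, gx = np.gradient(short)
    texture = np.hypot(gx, gy)
    w = texture / (texture.max() + 1e-8)  # 0 = flat area, 1 = detailed area
    # Per-pixel blend: textured regions lean on the sharp frame,
    # flat regions lean on the low-noise base.
    return w * short + (1.0 - w) * base
```

The real pipeline reportedly uses a neural network on the A13's Neural Engine to decide these weights; this sketch only illustrates the shape of the per-pixel blend.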
One of the perks of downloading the free iOS 13.2 software update, if you have an iPhone 11, 11 Pro or Pro Max, is getting access to Apple's Deep Fusion bag of camera tricks. First announced in September as "computational photography mad science," Deep Fusion is essentially a super multiple exposure that can produce a far sharper image. In our tests around the USA TODAY offices Tuesday, it was very hard to see the difference unless we zoomed in all the way, and even then, it was subtle. Example below: take a good look at the lemon, and notice the sharpness and detail on the skin of the fruit, then compare it to the second shot. Notice how much sharper the first one is?
The new iPhone 7 Plus introduced at Apple's September product event sports two cameras, set closely side by side, as was widely rumored. Philip Schiller, Apple's senior vice president of worldwide marketing, described the system, which pairs a 28mm wide-angle lens, just as in the iPhone 7, with a new 56mm telephoto lens. Apple will release the phone with hardware-based 1x and 2x zoom, and a software-synthesized zoom between the two. An improved digitally interpolated zoom mode handles magnifications above 2x, up to 10x. A software update shipping later will add a Portrait mode that creates photos that look like they came from much more expensive digital cameras.
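The zoom behavior described above amounts to a simple decision rule: below 2x, use the wide lens with a software-synthesized crop; from 2x to 10x, use the telephoto lens with digital interpolation on top. A hypothetical sketch of that rule (lens names and return format are illustrative, not Apple's API):

```python
def zoom_strategy(factor):
    """Return (lens, digital_crop) for a requested zoom factor.

    Illustrative only: the real camera also weighs factors such as
    light level when choosing between lenses.
    """
    if factor < 1.0:
        raise ValueError("zoom factor must be at least 1x")
    if factor < 2.0:
        # 1x up to 2x: wide lens, software-synthesized zoom (crop)
        return ("wide 28mm", factor)
    if factor <= 10.0:
        # 2x through 10x: telephoto lens, digital interpolation beyond 2x
        return ("telephoto 56mm", factor / 2.0)
    raise ValueError("maximum zoom is 10x")
```

For example, a 5x request would map to the telephoto lens with a 2.5x digital crop.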
While Apple may consider itself "courageous" for forcing consumers to buy $159 AirPods, "game-changing" might be a better description for the iPhone 7's camera, which keynote speaker Phil Schiller called the "best camera on any iPhone." "This truly is a supercomputer for photos," Schiller said at Apple's "Special Event" on Wednesday at the Bill Graham Civic Auditorium in San Francisco. "This is really really big in terms of image quality. But behind it all is the brains of the camera, the image signal processor," he said. "The first thing it does is read the scene and uses machine learning to look for objects and people and bodies within it," Schiller said.
"When you press the shutter button it takes one long exposure, and then in just one second the neural engine analyzes the fused combination of long and short images, picking the best among them, selecting all the pixels, and pixel by pixel, going through 24 million pixels to optimize for detail and low noise," Schiller said, describing a feature called "Deep Fusion" that will ship later this fall. It was the kind of technical digression that, in years past, might have been reserved for design chief Jony Ive's narration of a precision aluminum milling process to produce the iPhone's clean lines. But in this case, Schiller, the company's most enthusiastic photographer, was heaping his highest praise on custom silicon and artificial intelligence software. The technology industry's battleground for smartphone cameras has moved inside the phone, where sophisticated artificial intelligence software and special chips play a major role in how a phone's photos look. "Cameras and displays sell phones," said Julie Ask, vice president and principal analyst at Forrester.