Apple's Deep Fusion photography comes to iPhone 11 in iOS 13.2 beta (updated)

#artificialintelligence

You now have a chance to try Apple's machine learning-based Deep Fusion photography if you're willing to live on the bleeding edge. Apple is releasing an iOS 13.2 developer beta (a public beta is likely to follow soon) that makes Deep Fusion available to iPhone 11 and iPhone 11 Pro owners. The technique uses machine learning to create highly detailed, sharper and more natural-looking photos from the primary and telephoto cameras by combining the results of multiple shots. Deep Fusion takes an underexposed photo for sharpness and blends it, pixel by pixel, with three neutral pictures and a long-exposure image to achieve a highly customized result. The machine learning system examines the context of the picture to understand where each pixel sits on the frequency spectrum.
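In rough terms, that per-pixel blend behaves like a detail-weighted merge of the exposures: take fine texture from the sharp underexposed frame and tone from the neutral and long-exposure frames. The Python sketch below is a minimal illustration of that idea, not Apple's pipeline; the frame names, the Laplacian-based detail map, and the 50/50 base blend are all assumptions made for demonstration.

    import numpy as np
    from scipy.ndimage import gaussian_filter, laplace

    def fuse_exposures(short_frame, neutral_frames, long_frame, sigma=2.0):
        """Toy per-pixel exposure fusion over float32 grayscale arrays in [0, 1].

        The weighting heuristic (Laplacian magnitude as a detail proxy) is an
        illustrative stand-in for Apple's learned, per-pixel model.
        """
        neutral = np.mean(neutral_frames, axis=0)   # average the mid exposures
        detail = np.abs(laplace(short_frame))       # crude high-frequency map
        detail = gaussian_filter(detail, sigma)     # smooth the weight map
        w = detail / (detail.max() + 1e-8)          # normalize weights to [0, 1]
        base = 0.5 * neutral + 0.5 * long_frame     # low-frequency tone base
        return w * short_frame + (1.0 - w) * base   # favor detail where it exists

Calling fuse_exposures(short, [n1, n2, n3], long_exp) on same-shaped arrays returns one fused frame. In Apple's version, the weighting comes from a learned model that examines scene context rather than a fixed filter, which is where the "machine learning" in the description does its work.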


Apple's 'Deep Fusion' camera tricks are hard to spot – until you see them

USATODAY - Tech Top Stories

One of the perks of downloading the free iOS 13.2 software update, if you have an iPhone 11, 11 Pro or Pro Max, is getting to try Apple's Deep Fusion bag of camera tricks. First announced in September as "computational photography mad science," Deep Fusion basically gives you a super multiple exposure that can produce a way sharper image. In our tests around the USA TODAY offices Tuesday, it was very hard to see the difference unless we zoomed in all the way, and even then, it was subtle. Example below: take a good look at the lemon, and notice the sharpness and detail on the skin of the fruit, then compare it to the shot below. Notice how much sharper the first one is?


Two cameras in iPhone 7 Plus allow synthetic zoom, soft-focus backgrounds

PCWorld

The new iPhone 7 Plus introduced at Apple's September product event sports two cameras, set closely side by side, as was widely rumored. Philip Schiller, Apple's senior vice president of worldwide marketing, described the system, which pairs a 28mm wide-angle lens, just as in the iPhone 7, with a new 56mm telephoto lens. Apple will release the phone with hardware-based 1x and 2x zoom, plus a software-synthesized zoom between the two. An improved digitally interpolated zoom mode handles magnifications above 2x, up to 10x. A software update shipping later will add a Portrait mode that creates photos that look like they came from much more expensive digital cameras.
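One way to picture the handoff between the two modules is as zoom-dependent lens selection: below the telephoto's native magnification you crop the wide camera, and beyond it you crop the telephoto. The sketch below is a hedged illustration of that logic; the camera names, the 2x threshold, and the crop math are assumptions, not Apple's implementation.

    def pick_camera(zoom: float) -> tuple[str, float]:
        """Choose a camera module and the digital crop applied on top of it.

        Illustrative only: names and thresholds are assumptions. Below 2x we
        crop the wide module; at 2x and above we crop the telephoto, capped
        at the 10x digital-zoom limit described above.
        """
        if not 1.0 <= zoom <= 10.0:
            raise ValueError("zoom must be between 1x and 10x")
        if zoom < 2.0:
            return ("wide_28mm", zoom)    # 1x optical base, digital crop below 2x
        return ("tele_56mm", zoom / 2.0)  # 2x optical base, interpolate beyond

For example, pick_camera(1.5) crops the wide camera by 1.5x, while pick_camera(4.0) reads the telephoto and applies a 2x digital crop on top of its 2x optics.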


Google's Night Sight shooting mode for the Pixel 3 is mind-blowing

Mashable

Holy moly, has Google just changed the smartphone camera game with the release of the Night Sight mode for its Pixel 3 and 3 XL phones. At its October Pixel 3 launch event, Google touted Night Sight as a significant leap forward for taking night photos, useful for recovering colors and details otherwise lost in the shadows. I've only just tried Night Sight, currently rolling out to Pixel 3 phones via a software update, and my mind's still piecing itself back together from being blown apart. It's no secret Google has been flexing its computational photography and machine-learning skills to enhance shots taken with its Pixel phones. Though it's questionable whether we, as photographers and creatives, should be letting Google decide for us what a "good-looking" photo is (the Pixel 3 tends to shoot pictures that are more contrasty, more saturated, and more artificially sharpened than an iPhone or Samsung Galaxy), I don't think anyone disagrees that the company's leveraging of software to produce better pictures is a game-changer.
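Under the hood, night modes like this generally merge a burst of frames so that sensor noise averages out, a pattern Google has described in its published burst-photography work. The sketch below shows only that core averaging idea and assumes the frames are already aligned (real pipelines do robust tile alignment first), so treat it as an illustration rather than Night Sight itself.

    import numpy as np

    def merge_burst(frames):
        """Toy burst merge for low-light shots: average frames to cut noise.

        Assumes pre-aligned float frames in [0, 1]; alignment is the hard
        part that this sketch deliberately skips.
        """
        stack = np.stack(frames).astype(np.float32)
        merged = stack.mean(axis=0)   # noise shrinks roughly as 1/sqrt(N) frames
        return np.clip(merged, 0.0, 1.0)

Averaging N frames improves signal-to-noise roughly by the square root of N, which is why a handheld burst can expose shadow detail that a single short exposure loses to noise.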


Apple's iPhone 7 Camera Uses Machine Learning to Look for People

#artificialintelligence

While Apple may consider itself "courageous" for forcing consumers to buy $159 AirPods, "game-changing" might be a better description for the iPhone 7's camera, which keynote speaker Phil Schiller called the "best camera on any iPhone." "This truly is a supercomputer for photos," Schiller said at Apple's "Special Event" on Wednesday at the Bill Graham Civic Auditorium in San Francisco. "This is really, really big in terms of image quality. But behind it all is the brains of the camera, the image signal processor," he said. "The first thing it does is read the scene and uses machine learning to look for objects and people and bodies within it," Schiller said.
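To make "look for people" concrete, here is a tiny stand-in using OpenCV's classic Haar-cascade face detector. It is emphatically not Apple's ISP pipeline, which runs a learned model in dedicated silicon; the detector choice and its parameters are illustrative assumptions.

    import cv2

    # Minimal stand-in for "read the scene and look for people": OpenCV's
    # bundled Haar-cascade face detector, not Apple's image signal processor.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def find_faces(bgr_image):
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
        # Returns (x, y, w, h) boxes; scaleFactor and minNeighbors trade
        # detection rate against false positives and can be tuned per scene.
        return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

A camera pipeline would run this kind of detection on every preview frame and feed the resulting regions into exposure, focus, and tone decisions, which is the role Schiller describes for the ISP.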