Photography


Recovering "lost dimensions" of images and video

#artificialintelligence

MIT researchers have developed a model that recovers valuable data lost from images and video that have been "collapsed" into lower dimensions. The model could be used to recreate video from motion-blurred images, or from new types of cameras that capture a person's movement around corners but only as vague one-dimensional lines. While more testing is needed, the researchers think this approach could someday be used to convert 2D medical images into more informative -- but more expensive -- 3D body scans, which could benefit medical imaging in poorer nations. "In all these cases, the visual data has one dimension -- in time or space -- that's completely lost," says Guha Balakrishnan, a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and first author on a paper describing the model, which is being presented at next week's International Conference on Computer Vision. "If we recover that lost dimension, it can have a lot of important applications."
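
For a concrete picture of what "collapsing" a dimension means, the sketch below simulates only the forward projection: a short clip's time axis is averaged away into a single motion-blurred frame. It is an illustrative toy (the array shapes and the simple mean projection are assumptions), not the researchers' recovery model.

```python
import numpy as np

# Illustrative forward "projection": a short video (T frames of H x W pixels)
# collapsed along its time dimension into one motion-blurred image.
# The shapes and the mean projection are assumptions for illustration;
# the paper's learned recovery model is not reproduced here.

rng = np.random.default_rng(0)
T, H, W = 16, 64, 64
video = rng.random((T, H, W))            # stand-in for T frames of grayscale video

blurred = video.mean(axis=0)             # the time dimension is "completely lost"
print(video.shape, "->", blurred.shape)  # (16, 64, 64) -> (64, 64)

# Recovering the video means estimating all T frames from `blurred` alone --
# an ill-posed inverse problem the MIT model tackles with a learned prior.
```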


Skylum brings AI-powered portrait and skin enhancement tools to Luminar 4

#artificialintelligence

BELLEVUE, WA -- September 17, 2019 -- Today, Skylum announced two major new features coming to Luminar 4, set to be released this fall. AI Skin Enhancer and Portrait Enhancer will enable photographers to further develop and improve their portraits. These tools use machine learning to speed up the process, but still offer detailed controls for even the most demanding photo editor. Previously, photographers would have to spend time selectively editing their photographs, manually adjusting various tools through selections and masking. With Luminar 4, that tedious work is a thing of the past.


Google launches cheaper Pixel 4 to undercut Apple's iPhone

The Guardian

Google has launched its latest iPhone competitor, the Pixel 4 and 4 XL, with new radar technology, a dual camera and a lower price. Google's consumer hardware division unveiled a series of new devices in New York, led by the Pixel 4 smartphone and including an updated Nest Mini smart speaker and Nest Wifi system, among other products. The Pixel 4 and 4 XL are two new Google-made smartphones designed to challenge Apple's iPhone with new hardware and software technologies, plus Pixel-exclusive Android features, while undercutting it on price. The Pixel 4's UK price is £669, £70 less than last year's Pixel 3, or $799 in the US, and it comes with Google's new dual-camera system, which pairs a wide-angle 12-megapixel camera with a 16-megapixel 2x telephoto camera for up to 3x hybrid zoom. The new camera also comes with the next generation of Google's "night sight" technology, capable of capturing long exposures of stars in astrophotography mode and other low-light tricks.


Visual 1st brings AI, AR, computational photography and more to light in 14 days!

#artificialintelligence

Visual 1st, the executive conference focused on promoting innovation and partnerships in the photo and video ecosystem, will bring AI, AR, computational photography, and the future of digital cameras to center stage, Oct. 3-4, at the Golden Gate Club, San Francisco, Calif. AI is already everywhere in imaging, from recognition to enhancement to auto-editing – and of course, there's much more to come. In parallel, AR solutions are proliferating at a rapid pace, serving use cases that range from pure fun to serious productivity. As these two technologies evolve in mutually reinforcing ways, we, as an industry, must take the imaging solutions they enable to the next level of value and profitability, while also keeping things safe, secure and private for our customers – but how? Alexander Schiffhauer recently left his role as Technical Advisor to Google's CEO Sundar Pichai to take product management responsibility for the company's computational photography teams. Under his leadership, these teams have pioneered innovation on Pixel Camera, leveraging AI and computer vision techniques to create photos unimaginable only a few years ago.


The cancer-fighting, wildlife-protecting, life-saving power of artificial intelligence

#artificialintelligence

"One of the near-term opportunities is driven by the sheer amount of aerial and satellite data that's starting to be produced around the world," Reid agrees. "There's a fantastic company out of Christchurch called Orbica, who are specialists in what they call GeoAI – analysing geospatial data and aerial and satellite photography to extract features and effectively count the number of houses or vehicles or identify bodies of water from the air. These are things that used to take people days of manual analysis and now it can be run through an algorithm in seconds.


ON Semiconductor's digital image sensor enables AI vision systems -- Softei.com

#artificialintelligence

Intelligent vision systems for viewing and artificial intelligence (AI) can be implemented using the low-power 0.3Mpixel image sensor announced by ON Semiconductor. The ARX3A0 digital image sensor has 0.3Mpixel resolution in a 1:1 aspect ratio. It can perform like a global shutter in many conditions, with a capture rate of up to 360 frames per second (fps), yet with the size, performance and response of a back-side illuminated (BSI) rolling shutter sensor, explains ON Semiconductor. Its small size, square format and high frame rate make it particularly suitable for emerging machine vision, AI and augmented reality/virtual reality (AR/VR) applications, as well as small supplemental security cameras. To meet the demands of applications that provide still or streaming images, the ARX3A0 is designed to deliver flexible, high-performance image capture with minimal power.


AWS DeepLens (2019 Edition) – deep learning-enabled video camera for developers

#artificialintelligence

Learn the basics of deep learning -- a machine learning technique that uses neural networks to learn and make predictions -- through computer vision projects, tutorials, and real-world, hands-on exploration with a physical device. AWS DeepLens lets you run deep learning models locally on the camera to analyze and take action on what it sees.
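
As a rough illustration of what "running models locally on the camera" looks like, here is a minimal sketch in the style of AWS's published DeepLens sample Lambdas. The awscam module exists only on the device itself, and the model path, input size, and output handling below are assumptions for illustration.

```python
# Minimal sketch of an on-device DeepLens inference loop, in the style of
# AWS's sample Lambdas. The awscam module is available only on the DeepLens
# device; the model path and SSD-style output parsing below are assumptions.
import cv2
import awscam  # DeepLens-only module for camera frames and local inference

MODEL_PATH = '/opt/awscam/artifacts/my_model.xml'  # hypothetical deployed model
INPUT_SIZE = 300                                   # assumed network input size

model = awscam.Model(MODEL_PATH, {'GPU': 1})       # load the optimized model on-device

while True:
    ret, frame = awscam.getLastFrame()             # grab the latest camera frame
    if not ret:
        continue
    resized = cv2.resize(frame, (INPUT_SIZE, INPUT_SIZE))
    raw = model.doInference(resized)               # run the model locally on the camera
    detections = model.parseResult('ssd', raw)     # interpret output (assuming an SSD detector)
    print(detections)                              # "take action on what it sees", e.g. log or publish
```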


Apple's Deep Fusion photography comes to iPhone 11 in iOS 13.2 beta (updated)

#artificialintelligence

You now have a chance to try Apple's machine learning-based Deep Fusion photography if you're willing to live on the bleeding edge. Apple is releasing an iOS 13.2 developer beta (a public beta will likely follow soon) that makes Deep Fusion available to iPhone 11 and iPhone 11 Pro owners. The technique uses machine learning to create highly detailed, sharper and more natural-looking photos on the primary and telephoto lenses by combining the results of multiple shots. Deep Fusion takes an underexposed photo for sharpness and blends it with three neutral pictures and a long high-exposure image on a per-pixel level to achieve a highly customized result. The machine learning system examines the context of the picture to understand where a pixel sits on the frequency spectrum.
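
The sketch below is a toy version of that per-pixel idea, not Apple's implementation: detail-rich regions lean on the sharp underexposed frame while smooth regions lean on the long exposure, with the three neutral frames averaged as a base. The detail measure and blend weights are assumptions.

```python
import numpy as np

# Toy per-pixel fusion in the spirit of the Deep Fusion description above --
# NOT Apple's algorithm. A short (sharp) exposure is favored where local detail
# is high, a long exposure where the image is smooth; the neutral frames are
# averaged into a base layer. The detail measure and weights are assumptions.

def local_detail(img, k=3):
    """Crude high-frequency measure: absolute difference from a k x k box blur."""
    pad = np.pad(img, k // 2, mode='edge')
    blur = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            blur += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blur /= k * k
    return np.abs(img - blur)

def fuse(short_exp, neutrals, long_exp):
    base = np.mean(neutrals, axis=0)                 # average of the three "neutral" frames
    w = local_detail(short_exp)
    w = w / (w.max() + 1e-8)                         # per-pixel weight in [0, 1]
    detail_layer = w * short_exp + (1 - w) * long_exp
    return 0.5 * base + 0.5 * detail_layer           # arbitrary 50/50 blend for illustration

rng = np.random.default_rng(1)
frames = rng.random((5, 32, 32))                     # stand-ins: 1 short, 3 neutral, 1 long exposure
result = fuse(frames[0], frames[1:4], frames[4])
print(result.shape)                                  # (32, 32)
```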


Adobe adds new AI tools in Photoshop and Premiere Elements 2020

#artificialintelligence

Adobe Photoshop and Premiere Elements 2020 are now available, and both have some new AI-enabled features. The simplified versions of the company's flagship creative applications help amateurs edit high-quality photos and videos, and with the new Sensei-powered tasks, they're easier to use. The full-featured versions of Photoshop and Premiere can be overwhelming, and the methods to reach a desired outcome are rarely obvious. In other words, you really have to know what you're doing. The Elements versions of the software offer straightforward workflows that avoid obscure menus and hotkeys, as well as a one-time purchase rather than Adobe Creative Cloud's monthly subscription fee.


Apple's Deep Fusion hands-on: AI sharpens photos like HDR fixes colors

#artificialintelligence

Digital photographers coined the term "pixel peepers" years ago to denote -- mostly with scorn -- people who focused on flaws in the individual dots that create photos rather than the entirety of the images. Zooming in to 100%, it was said, is nothing but a recipe for perpetual disappointment; instead, judge each camera by the overall quality of the photo it takes, and don't get too mired in the details. Until now, Apple's approach to digital photography has been defined by its commitment to improving the quality of the big picture without further compromising pixel-level quality. I say "further" because there's no getting around the fact that tiny phone camera sensors are physically incapable of matching the pixel-level results of full-frame DSLR camera sensors in a fair fight. Bigger sensors can capture more light and almost invariably more actual pixels than the iPhone's 12-megapixel cameras.
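
A rough calculation shows the scale of that physical gap, assuming approximate sensor dimensions (36 x 24 mm for full frame, and roughly 5.7 x 4.3 mm for a 1/2.55-inch-type phone sensor; both figures are approximations):

```python
# Rough arithmetic behind the "bigger sensors capture more light" point.
# Dimensions are approximate: full frame is 36 x 24 mm; the iPhone 11's main
# sensor is commonly described as a 1/2.55-inch type, roughly 5.7 x 4.3 mm.

full_frame_mm2 = 36.0 * 24.0          # ~864 mm^2
phone_mm2 = 5.7 * 4.3                 # ~24.5 mm^2 (assumed approximate dimensions)

ratio = full_frame_mm2 / phone_mm2
print(f"A full-frame sensor gathers roughly {ratio:.0f}x the light of the "
      f"phone-sized sensor at the same exposure settings.")  # ~35x
```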