The Decade, Reviewed looks back at the 2010s and how they changed human society forever. From 2010 to 2019, our species experienced seismic shifts in science, technology, entertainment, transportation, and even the very planet we call home. This is how the past ten years have changed us. Bots are a lot like humans: Some are cute. Some are annoying ... and a little racist.
Accurate exposure is the key to capturing high-quality photos in computational photography, especially on mobile phones, which are limited by the size of their camera modules. Inspired by the luminosity masks often applied by professional photographers, in this paper we develop a novel algorithm for learning local exposures with deep reinforcement adversarial learning. To be specific, we segment an image into sub-images that reflect variations in dynamic-range exposure according to raw low-level features. Based on these sub-images, a local exposure for each sub-image is learned sequentially by a policy network, while the reward is designed globally to strike a balance of overall exposure. The aesthetic evaluation function is approximated by the discriminator of a generative adversarial network.
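The pipeline the abstract describes can be sketched end-to-end. The sketch below is a heavily simplified, hypothetical stand-in rather than the paper's method: luminosity-quantile masks replace the learned low-level-feature segmentation, a random hill-climb replaces the policy-gradient update, and a hand-written exposure-balance score replaces the adversarial discriminator. All function names and constants are illustrative.

```python
import numpy as np

def segment_by_luminosity(img, n_bands=3):
    """Split an image into luminosity-band sub-images (a stand-in for
    the paper's low-level-feature segmentation)."""
    lum = img.mean(axis=-1)
    edges = np.quantile(lum, np.linspace(0.0, 1.0, n_bands + 1))
    return [(lum >= lo) & (lum <= hi) for lo, hi in zip(edges[:-1], edges[1:])]

def apply_local_exposures(img, masks, stops):
    """Apply one exposure adjustment (in photographic stops) per sub-image."""
    out = img.copy()
    for mask, s in zip(masks, stops):
        out[mask] = np.clip(out[mask] * (2.0 ** s), 0.0, 1.0)
    return out

def reward(img, target_mean=0.5):
    """Toy stand-in for the learned aesthetic discriminator: rewards a
    balanced global exposure, as the paper's global reward is meant to."""
    return -abs(float(img.mean()) - target_mean)

def learn_exposures(img, n_iters=200, seed=0):
    """Hill-climbing stand-in for sequential policy-network learning."""
    rng = np.random.default_rng(seed)
    masks = segment_by_luminosity(img)
    stops = np.zeros(len(masks))
    best = reward(apply_local_exposures(img, masks, stops))
    for _ in range(n_iters):
        candidate = stops + rng.normal(0.0, 0.1, size=stops.shape)
        r = reward(apply_local_exposures(img, masks, candidate))
        if r > best:
            stops, best = candidate, r
    return stops, best
```

In the actual system, the hill-climb would be a policy network trained with reinforcement learning and the reward would come from a discriminator trained adversarially on aesthetically pleasing photos; the per-band exposure structure is the part this sketch preserves.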
The incredibly large Nazca Lines of Peru have always been a mysterious wonder, a sight best viewed from the air. Located in the coastal desert south of Lima, these impressive geometric figures are etched into the ground. Recently, a team of Japanese researchers found 143 new figures using satellite photography, 3D imaging, and artificial intelligence. They portray animal and human figures such as camels, llamas, cats, and snakes, while some are more abstract, appearing as geometric shapes. Already, images of a monkey, hummingbird, and spider are quite famous.
We are living in an exciting time, where software and machine learning are rapidly changing the way we approach work. For some industries, artificial intelligence will destroy job opportunities, but for other industries, it will revolutionize productivity. How will photography and the retouching world fare as editing software begins using this exciting technology? Here at Fstoppers, we are constantly testing and exploring the latest and greatest photo-editing software for photographers. Last week, Skylum released a new software suite called Luminar 4, which helps photographers automate their post-production workflow.
"You can already see a material effect that deepfakes have had," said Nick Dufour, one of the Google engineers overseeing the company's deepfake research. "They have allowed people to claim that video evidence that would otherwise be very convincing is a fake." For decades, computer software has allowed people to manipulate photos and videos or create fake images from scratch. But it has been a slow, painstaking process usually reserved for experts trained in the vagaries of software like Adobe Photoshop or After Effects. Now, artificial intelligence technologies are streamlining the process, reducing the cost, time and skill needed to doctor digital images.
AI and machine learning algorithms require data. But the bulk of that data is of no use if it isn't first labeled by human annotators. This predicament has given rise to a cottage industry of startups, including Scale AI, which recently raised $100 million for its extensive suite of data labeling services. That's not to mention Mighty AI, Hive, Appen, and Alegion, which together occupy a data annotation tools segment that's anticipated to be worth $1.6 billion by 2025. CloudFactory is yet another contender vying for attention.
Light behaves differently in water than it does in air, and that behavior creates the blur or green tint common in underwater photographs, as well as the haze that blocks out vital details. But thanks to research from an oceanographer and engineer, and a new artificial intelligence program called Sea-thru, that haze and those occluded colors could soon disappear. Besides putting a downer on the photos from that snorkeling trip, the inability to get an accurately colored photo underwater hinders scientific research at a time when concern for coral and ocean health is growing. That's why oceanographer and engineer Derya Akkaynak, along with Tali Treibitz of the University of Haifa, devoted their research to developing an artificial intelligence that can produce scientifically accurate colors while removing the haze in underwater photos. As Akkaynak points out in her research, imaging A.I. has exploded in recent years.
A team of Japanese researchers from Yamagata University and IBM Research has discovered 143 stunning geoglyphs etched into the desert of southern Peru around the enigmatic Nazca Lines. It is yet another example of how technology is assisting archaeology: a number of the images were found using state-of-the-art AI technology developed by IBM, finds that were then confirmed with on-site investigations. The geoglyphs include humans, birds, camels, cats, and other animals, and were found between 2016 and 2018. They were identified through fieldwork and by analysing high-resolution 3D data and aerial photography. Incredibly, one geoglyph was discovered solely by AI technology, without human aid, making it the first geoglyph discovered by an AI.
Coral reefs are among nature's most complex and colourful living formations. But as any underwater photographer knows, pictures of them taken without artificial lights often come out bland and blue. Even shallow water selectively absorbs and scatters light at different wavelengths, making certain features hard to see and washing out colours, especially reds and yellows. This effect makes it difficult for coral scientists to use computer vision and machine-learning algorithms to identify, count and classify species in underwater images; they have to rely on time-consuming human evaluation instead. But a new algorithm called Sea-thru, developed by engineer and oceanographer Derya Akkaynak, removes the visual distortion caused by water from an image.
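The depth-dependent absorption and backscatter described above are commonly written as an image-formation model of the form I = J·e^(−β_D·z) + B_∞·(1 − e^(−β_B·z)), where J is the true-colour scene, z is the distance through water, and the β coefficients differ per colour channel. Sea-thru's contribution is estimating those coefficients from the image and a range map; the sketch below assumes they are already known, so it only illustrates the inversion step. Function names and coefficient values are illustrative, not taken from the paper.

```python
import numpy as np

# Simplified per-channel underwater image-formation model:
#   I = J * exp(-beta_d * z) + b_inf * (1 - exp(-beta_b * z))
# beta_d: attenuation of the direct signal; beta_b, b_inf: backscatter.

def add_water(J, z, beta_d, beta_b, b_inf):
    """Forward model: degrade a true-colour image J using a range map z
    (metres). Arrays broadcast per colour channel."""
    attenuation = np.exp(-beta_d * z[..., None])
    backscatter = b_inf * (1.0 - np.exp(-beta_b * z[..., None]))
    return J * attenuation + backscatter

def remove_water(I, z, beta_d, beta_b, b_inf):
    """Invert the model: subtract backscatter, then undo attenuation."""
    backscatter = b_inf * (1.0 - np.exp(-beta_b * z[..., None]))
    direct = np.clip(I - backscatter, 0.0, None)
    return direct * np.exp(beta_d * z[..., None])
```

Because red light attenuates fastest in water, a realistic beta_d would be largest in the red channel, which is why the red channel needs the strongest amplification and why distant underwater scenes look blue-green.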