Analyzing Decades-Long Environmental Changes in Namibia Using Archival Aerial Photography and Deep Learning

Tadesse, Girmaw Abebe, Robinson, Caleb, Hacheme, Gilles Quentin, Zaytar, Akram, Dodhia, Rahul, Shawa, Tsering Wangyal, Ferres, Juan M. Lavista, Kreike, Emmanuel H.

arXiv.org Artificial Intelligence

This study explores object detection in historical aerial photographs of Namibia to identify long-term environmental changes. Specifically, we aim to identify key objects -- Waterholes, Omuti homesteads, and Big trees -- around Oshikango in Namibia using sub-meter gray-scale aerial imagery from 1943 and 1972. In this work, we propose a workflow for analyzing historical aerial imagery using a deep semantic segmentation model on sparse hand-labels. To this end, we employ a number of strategies, including class-weighting, pseudo-labeling, and empirical p-value-based filtering, to balance skewed and sparse representations of objects in the ground truth data. Results demonstrate the benefits of these different training strategies, resulting in an average $F_1=0.661$ and $F_1=0.755$ over the three objects of interest for the 1943 and 1972 imagery, respectively. We also identified that the average size of Waterholes and Big trees increased while the average size of Omuti homesteads decreased between 1943 and 1972, reflecting some of the local effects of the massive post-Second World War economic, agricultural, demographic, and environmental changes. This work also highlights the untapped potential of historical aerial photographs in understanding long-term environmental changes beyond Namibia (and Africa). Because satellite imagery of comparable resolution was unavailable in the past, archival aerial photography offers a valuable alternative for uncovering decades-long environmental changes.
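The abstract mentions class-weighting as one strategy for balancing the skewed object classes in the sparse ground truth. A minimal sketch of one common variant, inverse-frequency class weights applied to a per-pixel loss (the exact weighting scheme and class layout here are assumptions, not the authors' implementation):

```python
import numpy as np

def inverse_frequency_weights(label_map: np.ndarray, n_classes: int) -> np.ndarray:
    """Weight each class by the inverse of its pixel frequency, so that rare
    objects (e.g. Big trees) contribute as much to the loss as background."""
    counts = np.bincount(label_map.ravel(), minlength=n_classes).astype(float)
    counts[counts == 0] = 1.0          # avoid division by zero for absent classes
    weights = counts.sum() / counts
    return weights / weights.sum()      # normalise so the weights sum to 1

def weighted_pixel_loss(probs: np.ndarray, labels: np.ndarray,
                        weights: np.ndarray) -> float:
    """Per-pixel negative log-likelihood, scaled by each pixel's class weight.
    `probs` has shape (n_classes, H, W); `labels` has shape (H, W)."""
    eps = 1e-12
    rows, cols = np.indices(labels.shape)
    p_true = probs[labels, rows, cols]  # probability assigned to the true class
    return float(-(weights[labels] * np.log(p_true + eps)).mean())
```

In a segmentation framework such as PyTorch, the same idea is usually passed as the `weight` argument of the cross-entropy loss rather than computed by hand.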


Generative Powers of Ten

Wang, Xiaojuan, Kontkanen, Janne, Curless, Brian, Seitz, Steve, Kemelmacher, Ira, Mildenhall, Ben, Srinivasan, Pratul, Verbin, Dor, Holynski, Aleksander

arXiv.org Artificial Intelligence

We present a method that uses a text-to-image model to generate consistent content across multiple image scales, enabling extreme semantic zooms into a scene, e.g., ranging from a wide-angle landscape view of a forest to a macro shot of an insect sitting on one of the tree branches. We achieve this through a joint multi-scale diffusion sampling approach that encourages consistency across different scales while preserving the integrity of each individual sampling process. Since each generated scale is guided by a different text prompt, our method enables deeper levels of zoom than traditional super-resolution methods that may struggle to create new contextual structure at vastly different scales. We compare our method qualitatively with alternative techniques in image super-resolution and outpainting, and show that our method is most effective at generating consistent multi-scale content.
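The consistency constraint at the heart of this method can be stated simply: after zooming in by the scale factor, the centre of a coarser image should agree with the next, finer image. A toy check of that property on a "zoom stack" of arrays (the stack representation, scale factor, and nearest-neighbour upsampling are illustrative assumptions, not the paper's sampler):

```python
import numpy as np

def zoom_consistency_error(stack: list, factor: int = 2) -> float:
    """Mean absolute mismatch between each level's centre crop (upsampled with
    nearest-neighbour) and the next, more zoomed-in level. 0 means the stack
    is perfectly consistent across scales."""
    errs = []
    for coarse, fine in zip(stack, stack[1:]):
        h, w = coarse.shape
        ch, cw = h // factor, w // factor
        top, left = (h - ch) // 2, (w - cw) // 2
        crop = coarse[top:top + ch, left:left + cw]
        up = np.repeat(np.repeat(crop, factor, axis=0), factor, axis=1)
        errs.append(np.abs(up - fine).mean())
    return float(np.mean(errs))
```

The paper's joint multi-scale diffusion sampling effectively drives this kind of error toward zero at every denoising step, while each level is still guided by its own text prompt.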


Artificial intelligence helps speed up ecological surveys

AIHub

Scientists at EPFL, the Royal Netherlands Institute for Sea Research and Wageningen University & Research have developed a new deep-learning model for counting the number of seals in aerial photos that is considerably faster than doing it by hand. With this new method, valuable time and resources could be saved which can be used to further study and protect endangered species. Ecologists have been monitoring seal populations for decades, building up vast libraries of aerial photos in the process. Counting the number of seals in these photos requires hours of meticulous work to manually identify the animals in each image. A cross-disciplinary team of researchers including Jeroen Hoekendijk, a PhD student at Wageningen University & Research (WUR) and employed by the Royal Netherlands Institute for Sea Research (NIOZ), and Devis Tuia, an associate professor and head of the Environmental Computational Science and Earth Observation Laboratory at EPFL Valais, has come up with a more efficient approach to counting objects in ecological surveys.


Cal Poly Project Leverages Artificial Intelligence Deep Learning to Aid Wildfire Recovery

#artificialintelligence

SAN LUIS OBISPO –– A pair of Cal Poly professors and a team of students have used artificial intelligence to train a computer to quickly assess wildfire damage -- potentially improving response time for efforts to recover from major wildfires. Accurate and timely damage assessment has become critical for response and recovery as the threat of wildfires increases. Damage assessment reports inform first responders' strategies, affect residents' ability to file insurance claims, and guide state and federal authorities' plans for future disaster relief and financial aid. To date, most wildfire damage assessments have required inspectors to personally visit affected areas and manually document the severity of building damage, a process that often takes weeks. Social sciences Assistant Professor Andrew Fricker, computer science Assistant Professor Jonathan Ventura, visiting Cal Poly undergraduate student Gustave Rousselet, and a team of Stanford doctoral students sought to streamline this process with artificial intelligence (AI) deep learning.


How IRIS Aerial Imagery Analysis Brings AI Machine Learning to Insurance Athenium Analytics

#artificialintelligence

Artificial Intelligence (AI) is arguably the biggest insurance industry trend at the moment, with capital pouring in from insurers, tech companies and venture capital to find the next insurtech unicorn. A 2017 study by Accenture found that "75% of insurance executives believe that AI will either significantly alter or completely transform the overall insurance industry in the next three years." But despite the appetite for insurance-focused AI solutions, claims and underwriting automation is still very much in its infancy. The insurance ecosystem is filled with startups selling chatbots, big data algorithms, and touchless claims tools that vow to revolutionize the historically risk-averse insurance industry. The promise that bots will manage P&C claims from start to finish is an exciting one – removing friction and human intervention from claims intake, processing, fraud detection and customer service.


Disaster Monitoring using Unmanned Aerial Vehicles and Deep Learning

Kamilaris, Andreas, Prenafeta-Boldú, Francesc X.

arXiv.org Artificial Intelligence

Monitoring of disasters is crucial for mitigating their effects on the environment and human population, and can be facilitated by the use of unmanned aerial vehicles (UAV), equipped with camera sensors that produce aerial photos of the areas of interest. A modern technique for recognition of events based on aerial photos is deep learning. In this paper, we present the state of the art work related to the use of deep learning techniques for disaster identification. We demonstrate the potential of this technique in identifying disasters with high accuracy, by means of a relatively simple deep learning model. Based on a dataset of 544 images (containing disaster images such as fires, earthquakes, collapsed buildings, tsunamis and flooding, as well as non-disaster scenes), our results show that an accuracy of 91% was achieved, indicating that deep learning, combined with UAVs equipped with camera sensors, has the potential to identify disasters with high accuracy.


Image-to-image translation with conditional adversarial networks

#artificialintelligence

We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations… As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either. Pix2pix can produce effective results with way fewer training images, and much less training time, than I would have imagined.
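The pix2pix objective the excerpt alludes to combines an adversarial term (the learned loss) with an L1 reconstruction term; the paper uses λ = 100 to weight the latter. A minimal numpy sketch of the generator-side objective (the function name and the non-saturating GAN form used here are illustrative choices, not the authors' exact code):

```python
import numpy as np

def pix2pix_generator_loss(d_fake: np.ndarray, fake: np.ndarray,
                           target: np.ndarray, lam: float = 100.0) -> float:
    """Generator objective in the pix2pix style: fool the discriminator
    (whose outputs on generated images are `d_fake`, in (0, 1)) while
    staying close to the target image in L1 distance."""
    eps = 1e-12
    gan_term = -np.log(np.asarray(d_fake) + eps).mean()  # want D(fake) -> 1
    l1_term = np.abs(fake - target).mean()               # structural fidelity
    return float(gan_term + lam * l1_term)
```

The L1 term keeps low-frequency structure aligned with the input, while the adversarial term pushes the output toward realistic high-frequency detail, which is why the approach transfers across very different translation tasks without hand-designed losses.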


Intel drones may help save the crumbling Great Wall of China from falling into greater disrepair

Daily Mail - Science & tech

Intel is deploying hi-tech drones to help spot parts of the Great Wall of China that have fallen into disrepair. The chipmaker is sending some Falcon 8 drones to shoot aerial photos of the famous Jiankou section of the wall, which is known for its steep climbs and scenic views. Due to its thick vegetation and centuries-old materials, the area has 'naturally weathered' and requires repair -- a process that can be made easier by using drones, Intel said. Intel, which is partnering with the China Foundation for Cultural Heritage Conservation for the project, will send its Falcon drones to take aerial photos that will then be converted into high-definition images. Artificial intelligence will create a visual representation of the Great Wall to identify areas that are in need of repair and plan the safest way to restore them.


Inference Emerges As Next AI Challenge

#artificialintelligence

As developers flock to artificial intelligence frameworks in response to the explosion of intelligent machines, training deep learning models has emerged as a priority along with syncing them to a growing list of neural and other network designs. All are being aligned to confront some of the next big AI challenges, including training deep learning models to make inferences from the fire hose of unstructured data. These and other AI developer challenges were highlighted during this week's Nvidia GPU technology conference in Washington. The GPU leader uses the events to bolster its contention that GPUs--some with up to 5,000 cores--are filling the computing gap created by the decline of Moore's Law. The other driving force behind the "era of AI" is the emergence of algorithm-driven deep learning that is forcing developers to move beyond mere coding to apply AI to a growing range of automated processes and predictive analytics.