The point is the mask: scaling coral reef segmentation with weak supervision
Contini, Matteo, Illien, Victor, Poulain, Sylvain, Bernard, Serge, Barde, Julien, Bonhommeau, Sylvain, Joly, Alexis
Monitoring coral reefs at large spatial scales remains an open challenge, essential for assessing ecosystem health and informing conservation efforts. While drone-based aerial imagery offers broad spatial coverage, its limited resolution makes it difficult to reliably distinguish fine-scale classes, such as coral morphotypes. At the same time, obtaining pixel-level annotations over large spatial extents is costly and labor-intensive, limiting the scalability of deep learning-based segmentation methods for aerial imagery. We present a multi-scale weakly supervised semantic segmentation framework that addresses this challenge by transferring fine-scale ecological information from underwater imagery to aerial data. Our method enables large-scale coral reef mapping from drone imagery with minimal manual annotation, combining classification-based supervision, spatial interpolation, and self-distillation techniques. We demonstrate the efficacy of the approach on large-area segmentation of coral morphotypes and show its flexibility for integrating new classes. This study presents a scalable, cost-effective methodology for high-resolution reef monitoring, combining low-cost data collection, weakly supervised deep learning, and multi-scale remote sensing.
- Africa > La Réunion (0.05)
- North America > Canada (0.04)
- Europe > Russia (0.04)
- (4 more...)
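The abstract above mentions spatial interpolation as one of the weak-supervision ingredients. A minimal sketch of one common form of this idea, turning sparse point annotations into a dense pseudo-mask by nearest-neighbour interpolation (the function name and the exact interpolation scheme are illustrative, not taken from the paper):

```python
import numpy as np

def point_labels_to_mask(points, labels, height, width):
    """Expand sparse point annotations into a dense pseudo-mask by
    assigning each pixel the label of its nearest annotated point.
    points: list of (row, col); labels: one class id per point."""
    ys, xs = np.mgrid[0:height, 0:width]
    grid = np.stack([ys.ravel(), xs.ravel()], axis=1)       # (H*W, 2) pixel coords
    pts = np.asarray(points, dtype=float)                   # (N, 2) annotated points
    # squared distance from every pixel to every annotated point
    d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)                             # index of closest point
    return np.asarray(labels)[nearest].reshape(height, width)

mask = point_labels_to_mask([(0, 0), (3, 3)], [1, 2], 4, 4)
# mask[0, 0] → 1, mask[3, 3] → 2
```

Such a pseudo-mask can then supervise a segmentation model in place of exhaustive pixel-level annotation.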
From underwater to aerial: a novel multi-scale knowledge distillation approach for coral reef monitoring
Contini, Matteo, Illien, Victor, Barde, Julien, Poulain, Sylvain, Bernard, Serge, Joly, Alexis, Bonhommeau, Sylvain
Drone-based remote sensing combined with AI-driven methodologies has shown great potential for accurate mapping and monitoring of coral reef ecosystems. This study presents a novel multi-scale approach to coral reef monitoring, integrating fine-scale underwater imagery with medium-scale aerial imagery. Underwater images are captured using an Autonomous Surface Vehicle (ASV), while aerial images are acquired with an aerial drone. A transformer-based deep-learning model is trained on underwater images to detect the presence of 31 classes covering various coral morphotypes, associated fauna, and habitats. These predictions serve as annotations for training a second model applied to aerial images. The transfer of information across scales is achieved through a weighted footprint method that accounts for partial overlaps between underwater image footprints and aerial image tiles. The results show that the multi-scale methodology successfully extends fine-scale classification to larger reef areas, achieving a high degree of accuracy in predicting coral morphotypes and associated habitats. The method showed a strong alignment between underwater-derived annotations and ground truth data, reflected by an AUC (Area Under the Curve) score of 0.9251. This shows that the integration of underwater and aerial imagery, supported by deep-learning models, can facilitate scalable and accurate reef assessments. This study demonstrates the potential of combining multi-scale imaging and AI to facilitate the monitoring and conservation of coral reefs. Our approach leverages the strengths of underwater and aerial imagery, ensuring the precision of fine-scale analysis while extending it to cover a broader reef area.
- Africa > La Réunion (0.15)
- Europe > France > Occitanie > Hérault > Montpellier (0.05)
- North America > Canada > Quebec (0.04)
- (7 more...)
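The weighted footprint method described above can be sketched as an area-weighted average: each underwater image contributes its class predictions to an aerial tile in proportion to how much of its footprint overlaps that tile (the function name and weighting details here are an assumption; the paper's exact formulation may differ):

```python
import numpy as np

def tile_labels(overlap_areas, underwater_preds):
    """Aggregate per-image class predictions into one tile-level label
    vector, weighting each underwater image by the area of overlap
    between its footprint and the aerial tile."""
    w = np.asarray(overlap_areas, dtype=float)     # overlap area per footprint
    p = np.asarray(underwater_preds, dtype=float)  # (n_images, n_classes) probabilities
    if w.sum() == 0:
        return np.zeros(p.shape[1])                # no footprint touches this tile
    return (w[:, None] * p).sum(axis=0) / w.sum()  # area-weighted mean per class

# one image (overlap 2.0) predicts class 0, another (overlap 1.0) predicts class 1
labels = tile_labels([2.0, 1.0], [[1.0, 0.0], [0.0, 1.0]])
# → roughly [0.667, 0.333]
```

The resulting soft labels then serve as training targets for the aerial-imagery model.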
Advanced computer vision for extracting georeferenced vehicle trajectories from drone imagery
Fonod, Robert, Cho, Haechan, Yeo, Hwasoo, Geroliminis, Nikolas
This paper presents a framework for extracting georeferenced vehicle trajectories from high-altitude drone footage, addressing key challenges in urban traffic monitoring and limitations of traditional ground-based systems. We employ state-of-the-art computer vision and deep learning to create an end-to-end pipeline that enhances vehicle detection, tracking, and trajectory stabilization. Conducted in the Songdo International Business District, South Korea, the study used a multi-drone experiment over 20 intersections, capturing approximately 12TB of 4K video data over four days. We developed a novel track stabilization method that uses detected vehicle bounding boxes as exclusion masks during image registration, which, combined with advanced georeferencing techniques, accurately transforms vehicle coordinates into real-world geographical data. Additionally, our framework includes robust vehicle dimension estimation and detailed road segmentation for in-depth traffic analysis. The framework produced two high-quality datasets: the Songdo Traffic dataset, comprising nearly 1 million unique vehicle trajectories, and the Songdo Vision dataset, containing over 5,000 human-annotated frames with about 300,000 vehicle instances in four classes. Comparisons between drone-derived data and high-precision sensor data from an instrumented probe vehicle highlight the accuracy and consistency of our framework's extraction in dense urban settings. By publicly releasing these datasets and the pipeline source code, this work sets new benchmarks for data quality, reproducibility, and scalability in traffic research. Results demonstrate the potential of integrating drone technology with advanced computer vision for precise, cost-effective urban traffic monitoring, providing valuable resources for the research community to develop intelligent transportation systems and improve traffic management strategies.
- North America > United States (0.14)
- Europe > Switzerland > Zürich > Zürich (0.14)
- Europe > Switzerland > Vaud > Lausanne (0.04)
- (2 more...)
- Research Report > New Finding (1.00)
- Overview (1.00)
- Transportation > Infrastructure & Services (1.00)
- Transportation > Ground > Road (1.00)
- Transportation > Passenger (0.94)
- Information Technology (0.93)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles > Drones (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
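The core of the track stabilization idea above is that keypoints lying on moving vehicles must not feed the frame-to-frame registration. A minimal sketch of the exclusion-mask step, filtering keypoints against detected bounding boxes before any homography is estimated (the function and box format are illustrative, not the paper's code):

```python
def filter_keypoints(keypoints, boxes):
    """Drop keypoints that fall inside detected vehicle bounding boxes,
    so moving vehicles do not bias image registration.
    keypoints: list of (x, y); boxes: list of (x1, y1, x2, y2)."""
    def inside(pt, box):
        x, y = pt
        x1, y1, x2, y2 = box
        return x1 <= x <= x2 and y1 <= y <= y2
    return [pt for pt in keypoints if not any(inside(pt, b) for b in boxes)]

# keypoint at (50, 50) lies inside a vehicle box and is excluded
kept = filter_keypoints([(5, 5), (50, 50)], [(40, 40, 60, 60)])
# → [(5, 5)]
```

Only the surviving background keypoints are then matched across frames to stabilize the footage.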
Automatic identification of the area covered by acorn trees in the dehesa (pastureland) Extremadura of Spain
Ojeda-Magaña, Benjamin, Ruelas, Ruben, Quintanilla-Dominguez, Joel, Gomez-Barba, Leopoldo, Lopez de Herrera, Juan, Robledo-Hernandez, Jose, Tarquis, Ana
The acorn is the fruit of the oak and is an important crop in the Spanish dehesa extremeña, especially for the value it provides in Iberian pig feed to obtain the "acorn" certification. For this reason, producers want to maximise the number of Iberian pigs reaching the appropriate weight. Hence the need to know the area covered by the crowns of the acorn trees, to determine the covered wooded area (CWA, from the Spanish Superficie Arbolada Cubierta, SAC) and thereby estimate the number of Iberian pigs that can be released per hectare, as indicated by Royal Decree 4/2014. In this work, we propose the automatic estimation of the CWA from aerial digital images (orthophotos) of the pastureland of Extremadura, and with this, offer the possibility of determining the number of Iberian pigs to be released on a specific plot of land. The main issues for automatic detection are, first, the correct identification of acorn trees; second, correctly discriminating the shadows of the acorn trees; and, finally, detecting the arbuscles (young acorn trees not yet productive, or shrubs that are not oaks). These difficulties represent a real challenge for both automatic and manual segmentation. The proposed method for automatic segmentation is based on the Gustafson-Kessel (GK) clustering algorithm, in the modified version of Babuska (GK-B), applied to real orthophotos. The obtained results are promising both in comparison with the real images and with the images segmented by hand. The whole set of orthophotos used in this work corresponds to an approximate area of 142 hectares, and the results are of great interest to producers of certified "acorn" pork.
- North America > Mexico (0.05)
- Europe > Spain > Galicia > Madrid (0.05)
- North America > United States > New York (0.04)
- (4 more...)
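Once crowns are segmented, the CWA estimate above reduces to counting crown pixels and converting via the orthophoto's ground sampling distance. A minimal sketch, assuming a binary crown mask and a square pixel size in metres (the stocking rules of Royal Decree 4/2014 are not reproduced here):

```python
import numpy as np

def covered_wooded_area_ha(crown_mask, pixel_size_m):
    """Covered wooded area (CWA / SAC) in hectares from a binary
    crown-segmentation mask, given the pixel size in metres."""
    crown_pixels = int(np.count_nonzero(crown_mask))
    return crown_pixels * pixel_size_m ** 2 / 10_000.0  # m^2 → ha

# toy example: half of a 100 x 100 tile at 1 m/pixel is crown
mask = np.zeros((100, 100), dtype=bool)
mask[:50, :] = True
area = covered_wooded_area_ha(mask, pixel_size_m=1.0)
# → 0.5 (ha)
```

The per-hectare CWA fraction of a plot is then what the decree's stocking tables would be applied to.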
Deployment of Aerial Robots during the Flood Disaster in Erftstadt / Blessem in July 2021
Surmann, Hartmut, Slomma, Dominik, Grafe, Robert, Grobelny, Stefan
Climate change is leading to more and more extreme weather events such as heavy rainfall and flooding. This technical report deals with the question of how rescue commanders can be provided with current information better and faster during flood disasters using Unmanned Aerial Vehicles (UAVs), specifically during the July 2021 flood in Central Europe, in Erftstadt / Blessem. The UAVs were used on the one hand for live observation and regular inspections of the flood edge, and on the other hand for systematic data acquisition in order to compute 3D models using Structure from Motion and Multi-View Stereo. The 3D models, embedded in a GIS application, serve as a planning basis for systematic exploration and as decision support for the deployment of additional smaller UAVs as well as rescue forces. The systematic data acquisition by means of autonomous meander flights provides high-resolution images, which are processed into a georeferenced 3D model of the surrounding area within 15 minutes in a specially equipped robotic command vehicle (RobLW). Comparing high-resolution elevation profiles extracted from the 3D models on successive days makes changes in the water level visible. This information enables emergency management to plan further inspections of the buildings and to search for missing persons on site.
- Europe > Central Europe (0.25)
- Europe > Switzerland (0.05)
- Europe > Italy (0.04)
- (7 more...)
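The day-to-day comparison described above amounts to differencing elevation profiles sampled at the same positions in the successive 3D models. An illustrative sketch (the profile values here are invented; real profiles come from the georeferenced models):

```python
import numpy as np

def water_level_change(profile_day1, profile_day2):
    """Per-sample elevation difference between profiles from two
    successive days; negative values indicate receding water."""
    d1 = np.asarray(profile_day1, dtype=float)
    d2 = np.asarray(profile_day2, dtype=float)
    diff = d2 - d1
    return diff, float(diff.mean())

# the surface dropped 0.5 m and 1.0 m at two sampled positions
diff, mean_change = water_level_change([101.0, 102.0], [100.5, 101.0])
# → mean_change is -0.75 (m)
```

Flagging where the mean change crosses a threshold is one simple way to prioritize follow-up inspections.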
GrowliFlower: An image time series dataset for GROWth analysis of cauLIFLOWER
Kierdorf, Jana, Junker-Frohn, Laura Verena, Delaney, Mike, Olave, Mariele Donoso, Burkart, Andreas, Jaenicke, Hannah, Muller, Onno, Rascher, Uwe, Roscher, Ribana
This article presents GrowliFlower, a georeferenced, image-based UAV time series dataset of two monitored cauliflower fields of 0.39 and 0.60 ha, acquired in 2020 and 2021. The dataset contains RGB and multispectral orthophotos, from which about 14,000 individual plant coordinates are derived and provided. The coordinates enable dataset users to extract complete and incomplete time series of image patches showing individual plants. The dataset contains collected phenotypic traits of 740 plants, including the developmental stage as well as plant and cauliflower size. As the harvestable product is completely covered by leaves, plant IDs and coordinates are provided to extract image pairs of plants before and after defoliation, to facilitate estimation of cauliflower head size. Moreover, the dataset contains pixel-accurate leaf and plant instance segmentations, as well as stem annotations, to address classification, detection, segmentation, instance segmentation, and similar computer vision tasks. The dataset aims to foster the development and evaluation of machine learning approaches, specifically for the analysis of cauliflower growth and development and the derivation of phenotypic traits, to advance automation in agriculture. Two baseline results of instance segmentation at plant and leaf level, based on the labeled instance segmentation data, are presented. The entire dataset is publicly available.
- Europe > Germany > North Rhine-Westphalia > Cologne Region > Bonn (0.04)
- Europe > Switzerland > Zürich > Zürich (0.04)
- Europe > Italy > Tuscany (0.04)
- Europe > France > Occitanie > Haute-Garonne > Toulouse (0.04)
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles > Drones (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
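The patch-extraction step the GrowliFlower abstract describes, cutting a fixed-size window around each provided plant coordinate from an orthophoto, can be sketched as below (array layout and function name are assumptions, not the dataset's own tooling; patches near the border are simply clipped):

```python
import numpy as np

def extract_patch(ortho, row, col, size):
    """Cut a size x size patch centred on one plant coordinate from an
    orthophoto array of shape (H, W, C)."""
    half = size // 2
    r0, r1 = max(0, row - half), min(ortho.shape[0], row + half)
    c0, c1 = max(0, col - half), min(ortho.shape[1], col + half)
    return ortho[r0:r1, c0:c1]

# toy orthophoto; a real one would be loaded from the dataset's GeoTIFFs
ortho = np.zeros((100, 100, 3), dtype=np.uint8)
patch = extract_patch(ortho, 50, 50, 32)
# → patch.shape is (32, 32, 3)
```

Repeating this per flight date at the same coordinate yields the per-plant image time series the dataset is designed for.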