
A Review of Statistical and Machine Learning Approaches for Coral Bleaching Assessment

Sarkar, Soham, Hazra, Arnab

arXiv.org Machine Learning

Coral bleaching is a major concern for marine ecosystems; more than half of the world's coral reefs have either bleached or died over the past three decades. Increasing sea surface temperatures, along with various spatiotemporal environmental factors, are considered the primary reasons behind coral bleaching. The statistical and machine learning communities have focused on multiple aspects of the environment in detail. However, the literature on various stochastic modeling approaches for assessing coral bleaching is extremely scarce. Data-driven strategies are crucial for effective reef management, and this review article provides an overview of existing statistical and machine learning methods for assessing coral bleaching. Statistical frameworks, including simple regression models, generalized linear models, generalized additive models, Bayesian regression models, spatiotemporal models, and resilience indicators, such as Fisher's Information and Variance Index, are commonly used to explore how different environmental stressors influence coral bleaching. On the other hand, machine learning methods, including random forests, decision trees, support vector machines, and spatial operators, are more popular for detecting nonlinear relationships, analyzing high-dimensional data, and allowing integration of heterogeneous data from diverse sources. In addition to summarizing these models, we also discuss potential data-driven future research directions, with a focus on constructing statistical and machine learning models in specific contexts related to coral bleaching.
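The abstract above lists generalized linear models among the common statistical tools. As a minimal sketch, a logistic-link GLM relates an environmental stressor such as a sea-surface-temperature (SST) anomaly to a bleaching probability; the coefficients below are illustrative assumptions, not values fitted to any real reef dataset:

```python
import math

# Hypothetical logistic-regression (GLM with logit link) sketch.
# Both coefficients are assumed for illustration only.
INTERCEPT = -4.0   # baseline log-odds of bleaching (assumed)
BETA_SST = 1.5     # log-odds increase per degree C of SST anomaly (assumed)

def bleaching_probability(sst_anomaly_c: float) -> float:
    """Predicted probability of bleaching given an SST anomaly (deg C)."""
    log_odds = INTERCEPT + BETA_SST * sst_anomaly_c
    return 1.0 / (1.0 + math.exp(-log_odds))

# Under the logit link, the predicted probability rises monotonically
# with the temperature anomaly.
probs = [bleaching_probability(t) for t in (0.0, 1.0, 2.0, 3.0)]
```

The logit link keeps predictions inside (0, 1), which is why GLMs with binomial responses are a natural fit for presence/absence bleaching records.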


Deep Learning Models for Coral Bleaching Classification in Multi-Condition Underwater Image Datasets

Macrohon, Julio Jerison E., Hung, Gordon

arXiv.org Artificial Intelligence

Coral reefs support numerous marine organisms and provide important coastal protection from storms and floods, representing a major part of marine ecosystems. However, coral reefs face increasing threats from pollution, ocean acidification, and sea temperature anomalies, making efficient protection and monitoring urgent. Therefore, this study presents a novel machine-learning-based coral bleaching classification system trained on a diverse global dataset with samples of healthy and bleached corals under varying environmental conditions, including deep seas, marshes, and coastal zones. We benchmarked and compared three state-of-the-art models: Residual Neural Network (ResNet), Vision Transformer (ViT), and Convolutional Neural Network (CNN). After comprehensive hyperparameter tuning, the CNN model achieved the highest accuracy of 88%, outperforming existing benchmarks. Our findings offer important insights into autonomous coral monitoring and present a comprehensive analysis of the most widely used computer vision models.
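As a toy illustration of the convolution operation that CNN-based classifiers like those benchmarked above build on (the 4x4 "image" and edge filter below are made up, not drawn from the paper's dataset):

```python
import numpy as np

def conv2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive 'valid' 2D cross-correlation, the core op of a CNN layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A synthetic "image" with a vertical brightness step down the middle.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_filter = np.array([[-1.0, 1.0]])  # responds to horizontal intensity steps
response = conv2d_valid(image, edge_filter)
```

Stacks of such learned filters, interleaved with nonlinearities and pooling, are what let a CNN pick up the texture and color cues that distinguish healthy from bleached coral.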


Relation-Aware Graph Foundation Model

Yu, Jianxiang, Zhu, Jiapeng, Qian, Hao, Liu, Ziqi, Zhang, Zhiqiang, Li, Xiang

arXiv.org Artificial Intelligence

In recent years, large language models (LLMs) have demonstrated remarkable generalization capabilities across various natural language processing (NLP) tasks. Similarly, graph foundation models (GFMs) have emerged as a promising direction in graph learning, aiming to generalize across diverse datasets through large-scale pre-training. However, unlike language models that rely on explicit token representations, graphs lack a well-defined unit for generalization, making it challenging to design effective pre-training strategies. In this work, we propose REEF, a novel framework that leverages relation tokens as the basic units for GFMs. Inspired by the token vocabulary in LLMs, we construct a relation vocabulary of relation tokens to store relational information within graphs. To accommodate diverse relations, we introduce two hypernetworks that adaptively generate the parameters of aggregators and classifiers in graph neural networks based on relation tokens. In addition, we design another hypernetwork to construct dataset-specific projectors and incorporate a dataset-level feature bias into the initial node representations, enhancing flexibility across different datasets with the same relation. Further, we adopt graph data augmentation and a mixed-dataset pre-training strategy, allowing REEF to capture relational diversity more effectively and exhibit strong generalization capabilities. Extensive experiments show that REEF significantly outperforms existing methods on both pre-training and transfer learning tasks, underscoring its potential as a powerful foundation model for graph-based applications.
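A minimal numpy sketch of the hypernetwork idea described above: a single linear map takes a relation token and emits the flattened weights of a per-relation classifier. The sizes and initialization are illustrative assumptions, not REEF's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
D_TOKEN, D_IN, D_OUT = 8, 16, 4   # illustrative sizes, not REEF's real ones

# Hypernetwork: one linear map from a relation token to the flattened
# weight matrix of a relation-conditioned classifier.
W_hyper = rng.normal(scale=0.1, size=(D_TOKEN, D_IN * D_OUT))

def classifier_weights(relation_token: np.ndarray) -> np.ndarray:
    """Generate a (D_IN, D_OUT) classifier weight matrix for one relation."""
    return (relation_token @ W_hyper).reshape(D_IN, D_OUT)

token_a = rng.normal(size=D_TOKEN)       # e.g. a "cites" relation token
token_b = rng.normal(size=D_TOKEN)       # e.g. a "co-author" relation token
node_features = rng.normal(size=(5, D_IN))  # 5 nodes with D_IN-dim features

logits_a = node_features @ classifier_weights(token_a)
logits_b = node_features @ classifier_weights(token_b)
```

The same hypernetwork parameters serve every relation; only the token changes, which is what lets a shared backbone adapt its aggregators and classifiers per relation.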


Is AI currently capable of identifying wild oysters? A comparison of human annotators against the AI model, ODYSSEE

Campbell, Brendan, Williams, Alan, Baxevani, Kleio, Campbell, Alyssa, Dhoke, Rushabh, Hudock, Rileigh E., Lin, Xiaomin, Mange, Vivek, Neuberger, Bernhard, Suresh, Arjun, Vera, Alhim, Trembanis, Arthur, Tanner, Herbert G., Hale, Edward

arXiv.org Artificial Intelligence

Oysters are ecologically and commercially important species that require frequent monitoring to track population demographics (e.g. abundance, growth, mortality). Current methods of monitoring oyster reefs often require destructive sampling methods and extensive manual effort. Therefore, they are suboptimal for small-scale or sensitive environments. A recent alternative, the ODYSSEE model, was developed to use deep learning techniques to identify live oysters using video or images taken in the field of oyster reefs to assess abundance. The validity of this model in identifying live oysters on a reef was compared to expert and non-expert annotators. In addition, we identified potential sources of prediction error. Although the model can make inferences significantly faster than expert and non-expert annotators (39.6 s, $2.34 \pm 0.61$ h, $4.50 \pm 1.46$ h, respectively), the model overpredicted the number of live oysters, achieving lower accuracy (63\%) in identifying live oysters compared to experts (74\%) and non-experts (75\%) alike. Image quality was an important factor in determining the accuracy of the model and the annotators. Better quality images improved human accuracy and worsened model accuracy. Although ODYSSEE was not sufficiently accurate, we anticipate that future training on higher-quality images, utilizing additional live imagery, and incorporating additional annotation training classes will greatly improve the model's predictive power based on the results of this analysis. Future research should address methods that improve the detection of living vs. dead oysters.


REEF: Representation Encoding Fingerprints for Large Language Models

Zhang, Jie, Liu, Dongrui, Qian, Chen, Zhang, Linfeng, Liu, Yong, Qiao, Yu, Shao, Jing

arXiv.org Artificial Intelligence

Protecting the intellectual property of open-source Large Language Models (LLMs) is very important, because training LLMs costs extensive computational resources and data. Therefore, model owners and third parties need to identify whether a suspect model is a subsequent development of the victim model. To this end, we propose REEF, a training-free method that identifies the relationship between suspect and victim models from the perspective of LLMs' feature representations. Specifically, REEF computes and compares the centered kernel alignment similarity between the representations of a suspect model and a victim model on the same samples. Being training-free, REEF does not impair the model's general capabilities and is robust to sequential fine-tuning, pruning, model merging, and permutations. In this way, REEF provides a simple and effective way for third parties and model owners to protect LLMs' intellectual property together. The code is available at https://github.com/tmylla/REEF.

The training process of Large Language Models (LLMs) requires extensive computational resources and time. Therefore, open-source models are usually released under specific licenses (e.g., Apache 2.0 and the LLaMA 2 Community License (Meta AI, 2023)) to protect their intellectual property (IP). Unfortunately, some developers claim to have trained their own LLMs when they have actually wrapped or fine-tuned other base LLMs (e.g., Llama-2 and MiniCPM-V) (OpenBMB, 2023; 01-ai, 2023). It is urgent for model owners and third parties to identify whether a suspect model is a subsequent development of the victim model (e.g., Code-llama trained from Llama-2) or was developed from scratch (e.g., Mistral). The key is to extract unique features (i.e., fingerprints) that can authenticate the victim model. Watermarking methods artificially inject triggers into the victim model to make it generate specific content for identification (Peng et al., 2023a; Xu et al., 2024).
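The centered kernel alignment comparison at the core of REEF can be sketched with the standard linear-CKA formula. This is a simplified stand-in for the paper's pipeline, and the toy activations below are random arrays, not real model features:

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear centered kernel alignment between two representation matrices.

    X, Y: (n_samples, dim) activations of two models on the same inputs.
    Returns a similarity in [0, 1]; 1 means identical up to rotation.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (norm_x * norm_y)

rng = np.random.default_rng(1)
acts = rng.normal(size=(64, 32))                  # stand-in "victim" features
rotation, _ = np.linalg.qr(rng.normal(size=(32, 32)))
derived = acts @ rotation                         # rotated copy of the features
unrelated = rng.normal(size=(64, 32))             # independent "scratch" model
```

Linear CKA is invariant to orthogonal transformations of the feature space, which is the property that makes representation fingerprints robust to the permutations mentioned in the abstract: a derived model's activations stay near-identical under CKA, while an independently trained model's do not.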


ODYSSEE: Oyster Detection Yielded by Sensor Systems on Edge Electronics

Lin, Xiaomin, Mange, Vivek, Suresh, Arjun, Neuberger, Bernhard, Palnitkar, Aadi, Campbell, Brendan, Williams, Alan, Baxevani, Kleio, Mallette, Jeremy, Vera, Alhim, Vincze, Markus, Rekleitis, Ioannis, Tanner, Herbert G., Aloimonos, Yiannis

arXiv.org Artificial Intelligence

Oysters are a vital keystone species in coastal ecosystems, providing significant economic, environmental, and cultural benefits. As the importance of oysters grows, so does the relevance of autonomous systems for their detection and monitoring. However, current monitoring strategies often rely on destructive methods. While manual identification of oysters from video footage is non-destructive, it is time-consuming, requires expert input, and is further complicated by the challenges of the underwater environment. To address these challenges, we propose a novel pipeline using stable diffusion to augment a collected real dataset with realistic synthetic data. This method enhances the dataset used to train a YOLOv10-based vision model. The model is then deployed and tested on an edge platform in underwater robotics, achieving a state-of-the-art 0.657 mAP@50 for oyster detection on the Aqua2 platform.


Watch a huge 'No Boys Allowed' shark slumber party

Popular Science

It appears that no boy sharks were invited to this gathering of sleeping female Port Jackson sharks (Heterodontus portusjacksoni) in Australia. The fish were spotted snuggled up along the seafloor at Beagle Marine Park in the central Bass Strait. "There were thousands of sharks tightly packed like a carpet spread across the seafloor," voyage leader and University of Tasmania quantitative marine spatial ecologist Jacquomo Monk said in a statement. "Port Jackson sharks grow to 1.65 meters [5.4 feet] in length and are found across southern Australia." Scientists supported by Australia's National Environmental Science Program from the South Australian Research and Development Institute's research vessel MRV Ngerin were operating an underwater robot when they spotted and recorded the gathering.


Deep learning-powered system maps corals in 3D

AIHub

Corals often provide a colorful backdrop to photographs of shimmering fish captured by amateur divers. Corals – marine invertebrates with calcium-carbonate exoskeletons – are some of the most diverse ecosystems on Earth: despite covering less than 0.1% of the ocean's surface, they provide shelter and habitats for almost one-third of known marine species. Their impact also extends to human populations in many countries around the world. According to research by the U.S. National Oceanic and Atmospheric Administration, up to half a billion people worldwide rely on coral reefs for food security and tourist income. But the world's corals are under threat from rising sea temperatures and local anthropogenic pollution, which causes them to bleach and die.


UIVNAV: Underwater Information-driven Vision-based Navigation via Imitation Learning

Lin, Xiaomin, Karapetyan, Nare, Joshi, Kaustubh, Liu, Tianchen, Chopra, Nikhil, Yu, Miao, Tokekar, Pratap, Aloimonos, Yiannis

arXiv.org Artificial Intelligence

Autonomous navigation in the underwater environment is challenging due to limited visibility, dynamic changes, and the lack of a cost-efficient, accurate localization system. We introduce UIVNav, a novel end-to-end underwater navigation solution designed to drive robots over Objects of Interest (OOI) while avoiding obstacles, without relying on localization. UIVNav uses imitation learning and is inspired by the navigation strategies of human divers, who do not rely on localization. UIVNav consists of the following phases: (1) generating an intermediate representation (IR), and (2) training the navigation policy on human-labeled IR. By training the navigation policy on IR instead of raw data, the second phase is domain-invariant -- the navigation policy does not need to be retrained if the domain or the OOI changes. We show this by deploying the same navigation policy for surveying two different OOIs, oyster and rock reefs, in two different domains: simulation and a real pool. We compared our method with complete coverage and random walk methods and found it more efficient at gathering information about OOIs while also avoiding obstacles. The results show that UIVNav chooses to visit the areas with larger concentrations of oysters or rocks with no prior information about the environment or localization. Moreover, a robot using UIVNav surveys on average 36% more oysters than one using the complete coverage method when traveling the same distance. We also demonstrate the feasibility of real-time deployment of UIVNav in pool experiments with a BlueROV underwater robot surveying a bed of oyster shells.


Bald Eagle Search Algorithm for High Precision Inverse Kinematics of Hyper-Redundant 9-DOF Robot

P, Vineeth, P, Guru Nanma, Sankar, V, Kumar, B Sachin

arXiv.org Artificial Intelligence

Robots in 3D space with more than six degrees of freedom are redundant. A redundant robot admits multiple configurations for a given target point in the dexterous workspace. The presence of multiple solutions helps resolve workspace constraints such as object avoidance and energy minimization during trajectory planning. Inverse kinematics solutions for such redundant robots are intricate. The present study compares different metaheuristic optimization algorithms (MOAs), each of which exhibits some positional error, to identify an MOA that positions the robot's end effector with high precision. The study applies recent MOAs to the inverse kinematics of a hyper-redundant nine-degrees-of-freedom (9-DOF) robot arm, using forward kinematics built from the Denavit-Hartenberg (DH) parameters, and compares their performance. The comparison shows that the Bald Eagle Search (BES) algorithm outperforms the other MOAs, achieving the desired position with very high precision and the least positional error for the 9-DOF arm.
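The forward-kinematics building block that such metaheuristic IK solvers evaluate inside their fitness function can be sketched from the standard Denavit-Hartenberg transform. The two-link planar arm below is an illustrative assumption, not the paper's 9-DOF arm:

```python
import numpy as np

def dh_transform(theta: float, d: float, a: float, alpha: float) -> np.ndarray:
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_rows) -> np.ndarray:
    """Chain per-joint DH transforms; returns the 4x4 end-effector pose."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_rows):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Illustrative 2-link planar arm: (d, a, alpha) per joint, link lengths assumed.
DH_ROWS = [(0.0, 1.0, 0.0), (0.0, 0.5, 0.0)]
pose = forward_kinematics([0.0, 0.0], DH_ROWS)
```

A metaheuristic such as BES would then search over the joint-angle vector, scoring each candidate by the Euclidean distance between `pose[:3, 3]` and the target point; the algorithm with the smallest residual distance has the least positional error.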