Antarctica
Filmmaker James Cameron on penguins, arctic cold, and lowlight cameras
James Cameron wasn't near the penguins this time around, but he is extremely familiar with their environment. "When I went to Antarctica myself, I had a Nikon still camera adapted to the cold with special lubricants," he tells Popular Science. "I went to the South Pole and the film shattered in my hand when I tried to change it. I took a video camera, I wrapped it in a heating pack and it [died] in two minutes. I have a good sense of what it takes to take conventional equipment into that environment and survive."
Hot methane seeps could support life beneath Antarctica's ice sheet
Microbes living beneath Antarctica's ice sheet may survive on methane generated by geothermal heat rising from deep below Earth's surface. The discovery could have implications for assessing the potential for life to survive on icy worlds beyond Earth. "These could be hotspots for microbes that are adapted to live in these areas," says Gavin Piccione at Brown University in Rhode Island. We already know that there is methane beneath Antarctica's ice sheet.
Physics-Trained Neural Network as Inverse Problem Solver for Potential Fields: An Example of Downward Continuation between Arbitrary Surfaces
Sun, Jing, Li, Lu, Zhang, Liang
We treat downward continuation as an inverse problem whose forward problem is defined by the upward continuation formula, and we propose a new physics-trained deep neural network (DNN)-based solution for this task. We hard-code the upward continuation process into the DNN's learning framework, so that the DNN learns to act as the inverse problem solver and can perform downward continuation without ever being shown any ground-truth data. We test the proposed method on both synthetic magnetic data and real-world magnetic data from West Antarctica. The preliminary results demonstrate its effectiveness through comparison with selected benchmarks, opening future avenues for the combined use of DNNs and established geophysical theory to address broader potential field inverse problems, such as density and geometry modelling.
Introduction
Downward continuation of a potential field, such as a gravity or magnetic field, refers to transferring the data from one observation surface to a lower surface that is closer to the source of the field. The goal is to enhance the resolution of the continued field and amplify shallow geological signals. Airborne surveys are typically flown at uneven heights, making continuation from these surfaces a common requirement. Downward continuation is a critical task in the processing of potential field data, impacting the success of various downstream analyses, such as revealing the density structure and boundaries of anomalous bodies, and especially detecting and highlighting shallow anomalous sources. Many methods have been developed for the task of downward continuation (e.g.
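As a rough illustration of the self-supervised setup described in the abstract, the sketch below hard-codes the level-to-level upward continuation operator (a simplification of the arbitrary-surface case the paper addresses) as the forward model, so the network is trained only against the observed upper-surface field. The network, grid size, and optimizer settings are illustrative assumptions, not the authors' implementation.

```python
# Minimal self-supervised "physics-trained" sketch: the forward operator is
# flat-surface upward continuation in the wavenumber domain.
import math
import torch
import torch.nn as nn

def upward_continue(field, dz, dx=1.0):
    """Upward-continue a gridded field by dz using exp(-dz * |k|) in the wavenumber domain."""
    ny, nx = field.shape[-2:]
    kx = torch.fft.fftfreq(nx, d=dx) * 2 * math.pi
    ky = torch.fft.fftfreq(ny, d=dx) * 2 * math.pi
    k = torch.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    spectrum = torch.fft.fft2(field)
    return torch.fft.ifft2(spectrum * torch.exp(-dz * k)).real

# Hypothetical CNN that maps the observed (upper-surface) field to the lower-surface field.
net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

observed = torch.randn(1, 1, 64, 64)   # stand-in for gridded magnetic data
dz = 2.0                               # continuation distance in grid units (assumed)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):
    predicted_lower = net(observed)
    # Physics constraint: upward-continuing the prediction must reproduce the
    # observation, so no ground-truth lower-surface field is ever needed.
    reconstructed = upward_continue(predicted_lower[:, 0], dz)
    loss = torch.mean((reconstructed - observed[:, 0]) ** 2)
    opt.zero_grad(); loss.backward(); opt.step()
```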
Life-seeking, ice-melting robots could punch through Europa's icy shell
This would likely have three parts: a lander, an autonomous ice-thawing robot, and some sort of self-navigating submersible. Indeed, several groups from multiple countries already have working prototypes of ice-diving robots and smart submersibles that they are set to test in Earth's own frigid landscapes, from Alaska to Antarctica, in the next few years. But Earth's oceans are pale simulacra of Europa's extreme environment. To plumb the ocean of this Jovian moon, engineers must work out a way to get missions to survive a never-ending rain of radiation that fries electronic circuits. They must also plow through an ice shell that's at least twice as thick as Mount Everest is tall. "There are a lot of hard problems that push up right against the limits of what's possible," says Richard Camilli, an expert on autonomous robotic systems at the Woods Hole Oceanographic Institution's Deep Submergence Laboratory.
AI can use tourist photos to help track Antarctica's penguins
Artificial intelligence can help accurately map and track penguin colonies in Antarctica by analysing tourist photos. "Right now, everyone has a camera in their pocket, and so the sheer volume of data being collected around the world is incredible," says Heather Lynch at Stony Brook University in New York. Haoyu Wu at Stony Brook University and his colleagues, including Lynch, used an AI tool developed by Meta to highlight Adélie penguins in photographs taken by tourists or scientists on the ground. With guidance from a human expert, the AI tool was able to automatically identify and outline entire colonies in photos. This semi-automated method is much faster than doing everything manually because the AI tool takes just 5 to 10 seconds per image, compared with a person taking 1 to 2 minutes, says Wu. The team also created a 3D digital model of the Antarctic landscape using satellite imagery and terrain elevation data.
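The excerpt identifies the tool only as "an AI tool developed by Meta"; the sketch below uses Meta's Segment Anything Model purely as an illustrative stand-in for the kind of prompt-guided outlining described, with a single expert click as the human guidance. The checkpoint path, image file, and click coordinates are placeholders.

```python
# Prompt-guided colony outlining sketch using Segment Anything as a stand-in.
import numpy as np
from PIL import Image
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # placeholder checkpoint
predictor = SamPredictor(sam)

image = np.array(Image.open("tourist_photo.jpg").convert("RGB"))  # placeholder photo
predictor.set_image(image)

# One expert click roughly on the colony acts as the "guidance from a human expert".
masks, scores, _ = predictor.predict(
    point_coords=np.array([[450, 300]]),   # (x, y) pixel of the click, placeholder
    point_labels=np.array([1]),            # 1 = foreground
    multimask_output=True,
)
colony_mask = masks[np.argmax(scores)]     # keep the highest-scoring outline
print("colony pixels:", int(colony_mask.sum()))
```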
Point Cloud Structural Similarity-based Underwater Sonar Loop Detection
Jung, Donghwi, Pulido, Andres, Shin, Jane, Kim, Seong-Woo
In order to enable autonomous navigation in underwater environments, a map needs to be created in advance using a Simultaneous Localization and Mapping (SLAM) algorithm that relies on sensors such as sonar. During this process, loop closure is employed to reduce the pose error accumulated by SLAM. For loop detection with sonar, some previous studies have projected the 3D point cloud into 2D and then extracted and matched keypoints. However, the 2D projection incurs data loss due to limited image resolution, and in monotonous underwater environments such as rivers or lakes it is difficult to extract keypoints. Additionally, methods that use neural networks or a Bag of Words (BoW) representation have the disadvantage of requiring extra preprocessing, such as training a model in advance or pre-building a vocabulary. To address these issues, we use the point cloud obtained from sonar data without any projection, preventing the performance degradation caused by data loss. Furthermore, by computing a point-wise structural feature map of each point cloud with analytical formulas and comparing the similarity between point clouds, we eliminate the need for keypoint extraction and ensure that the algorithm can operate in new environments without additional training or preprocessing. To evaluate the method, we validated the performance of the proposed algorithm on the Antarctica dataset, collected in deep water, and the Seaward dataset, collected from rivers and lakes. Experimental results show that our proposed method achieves the best loop detection performance on both datasets. Our code is available at https://github.com/donghwijung/point_cloud_structural_similarity_based_underwater_sonar_loop_detection.
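As a hedged sketch of the projection-free, keypoint-free idea summarized above, the code below computes a simple per-point structural feature (local distance variance over k nearest neighbours) and compares two clouds through histograms of that feature. The feature and the histogram-correlation score are illustrative stand-ins, not the paper's exact formulation.

```python
# Keypoint-free loop detection sketch on raw sonar point clouds.
import numpy as np
from scipy.spatial import cKDTree

def structural_feature_map(points, k=8):
    """Per-point feature: variance of distances to the k nearest neighbours."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)   # first column is the point itself
    return dists[:, 1:].var(axis=1)

def cloud_similarity(points_a, points_b, bins=32):
    """Compare two clouds by correlating histograms of their feature maps."""
    fa = structural_feature_map(points_a)
    fb = structural_feature_map(points_b)
    lo, hi = 0.0, max(fa.max(), fb.max())
    ha, _ = np.histogram(fa, bins=bins, range=(lo, hi), density=True)
    hb, _ = np.histogram(fb, bins=bins, range=(lo, hi), density=True)
    return float(np.corrcoef(ha, hb)[0, 1])  # higher = more likely a revisit

# Loop detection: flag a candidate when similarity exceeds a threshold (assumed value).
cloud_now = np.random.rand(2000, 3)
cloud_past = np.random.rand(2000, 3)
if cloud_similarity(cloud_now, cloud_past) > 0.9:
    print("loop candidate")
```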
BenthicNet: A global compilation of seafloor images for deep learning applications
Lowe, Scott C., Misiuk, Benjamin, Xu, Isaac, Abdulazizov, Shakhboz, Baroi, Amit R., Bastos, Alex C., Best, Merlin, Ferrini, Vicki, Friedman, Ariell, Hart, Deborah, Hoegh-Guldberg, Ove, Ierodiaconou, Daniel, Mackin-McLaughlin, Julia, Markey, Kathryn, Menandro, Pedro S., Monk, Jacquomo, Nemani, Shreya, O'Brien, John, Oh, Elizabeth, Reshitnyk, Luba Y., Robert, Katleen, Roelfsema, Chris M., Sameoto, Jessica A., Schimel, Alexandre C. G., Thomson, Jordan A., Wilson, Brittany R., Wong, Melisa C., Brown, Craig J., Trappenberg, Thomas
Advances in underwater imaging enable the collection of extensive seafloor image datasets that are necessary for monitoring important benthic ecosystems. The ability to collect seafloor imagery has outpaced our capacity to analyze it, hindering expedient mobilization of this crucial environmental information. Recent machine learning approaches provide opportunities to increase the efficiency with which seafloor image datasets are analyzed, yet large and consistent datasets necessary to support development of such approaches are scarce. Here we present BenthicNet: a global compilation of seafloor imagery designed to support the training and evaluation of large-scale image recognition models. An initial set of over 11.4 million images was collected and curated to represent a diversity of seafloor environments using a representative subset of 1.3 million images. These are accompanied by 2.6 million annotations translated to the CATAMI scheme, which span 190,000 of the images. A large deep learning model was trained on this compilation and preliminary results suggest it has utility for automating large and small-scale image analysis tasks. The compilation and model are made openly available for use by the scientific community at https://doi.org/10.20383/103.0614.
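A generic sketch of how such a pretrained backbone could be fine-tuned for a downstream benthic classification task follows. The ImageNet-initialized ResNet, class count, and folder layout are assumptions made for illustration; the actual BenthicNet model and data are those distributed at the DOI above.

```python
# Fine-tuning sketch for seafloor image classification with a stand-in backbone.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Stand-in backbone; in practice one would load the released BenthicNet weights.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = nn.Linear(backbone.fc.in_features, 10)  # e.g. 10 CATAMI-style classes (assumed)

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Hypothetical folder of seafloor images arranged one sub-folder per label.
data = datasets.ImageFolder("seafloor_images/", transform=tf)
loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

opt = torch.optim.Adam(backbone.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
backbone.train()
for images, labels in loader:            # one fine-tuning pass
    opt.zero_grad()
    loss = loss_fn(backbone(images), labels)
    loss.backward()
    opt.step()
```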
Sea ice detection using concurrent multispectral and synthetic aperture radar imagery
Rogers, Martin S J, Fox, Maria, Fleming, Andrew, van Zeeland, Louisa, Wilkinson, Jeremy, Hosking, J. Scott
Synthetic Aperture Radar (SAR) imagery is the primary data type used for sea ice mapping due to its spatio-temporal coverage and its ability to detect sea ice independent of cloud and lighting conditions. Automatic sea ice detection using SAR imagery remains problematic due to ambiguous signal and noise within the images. Conversely, ice and water are easily distinguishable in multispectral imagery (MSI), but in the polar regions the ocean's surface is often occluded by cloud, or the sun may not rise above the horizon for many months. To address some of these limitations, this paper proposes a new tool trained on concurrent multispectral Visible and SAR imagery for sea Ice Detection (ViSual_IceD). ViSual_IceD is a convolutional neural network (CNN) that builds on the classic U-Net architecture by incorporating two parallel encoder stages, enabling the fusion and concatenation of MSI and SAR imagery with different spatial resolutions. The performance of ViSual_IceD is compared with U-Net models trained on concatenated MSI and SAR imagery, as well as models trained exclusively on MSI or SAR imagery. ViSual_IceD outperforms the other networks, with an F1 score 1.60 percentage points higher than the next best network, and the results indicate that ViSual_IceD is selective in the image type it uses during image segmentation. Outputs from ViSual_IceD are compared with sea ice concentration products derived from the AMSR2 Passive Microwave (PMW) sensor. The results highlight how ViSual_IceD is a useful tool to use in conjunction with PMW data, particularly in coastal regions. As the spatio-temporal coverage of MSI and SAR imagery continues to increase, ViSual_IceD provides a new opportunity for robust, accurate sea ice coverage detection in polar regions.
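A minimal sketch of the dual-encoder idea is given below: two parallel encoders, one for MSI and one for SAR, whose features are concatenated before a shared decoder produces per-pixel ice/water logits. Channel counts, depths, the fusion point, and the assumption of equal input resolutions are illustrative choices, not the published ViSual_IceD configuration.

```python
# Dual-encoder U-Net-style sketch for fused MSI + SAR segmentation.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class DualEncoderUNet(nn.Module):
    def __init__(self, msi_channels=4, sar_channels=2, classes=2):
        super().__init__()
        self.enc_msi = nn.Sequential(conv_block(msi_channels, 32), nn.MaxPool2d(2),
                                     conv_block(32, 64), nn.MaxPool2d(2))
        self.enc_sar = nn.Sequential(conv_block(sar_channels, 32), nn.MaxPool2d(2),
                                     conv_block(32, 64), nn.MaxPool2d(2))
        self.fuse = conv_block(128, 128)          # concatenated MSI + SAR features
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, classes, 1),            # per-pixel ice / water logits
        )

    def forward(self, msi, sar):
        fused = torch.cat([self.enc_msi(msi), self.enc_sar(sar)], dim=1)
        return self.decode(self.fuse(fused))

model = DualEncoderUNet()
logits = model(torch.randn(1, 4, 128, 128), torch.randn(1, 2, 128, 128))
print(logits.shape)  # torch.Size([1, 2, 128, 128])
```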
Low-power, Continuous Remote Behavioral Localization with Event Cameras
Hamann, Friedhelm, Ghosh, Suman, Martinez, Ignacio Juarez, Hart, Tom, Kacelnik, Alex, Gallego, Guillermo
Researchers in the natural sciences need reliable methods for quantifying animal behavior. Recently, numerous computer vision methods have emerged to automate the process. However, observing wild species at remote locations remains a challenging task due to difficult lighting conditions and constraints on power supply and data storage. Event cameras offer unique advantages for battery-dependent remote monitoring due to their low power consumption and high dynamic range. We use this novel sensor to quantify a behavior in Chinstrap penguins called ecstatic display. We formulate the problem as a temporal action detection task, determining the start and end times of the behavior. For this purpose, we recorded a colony of breeding penguins in Antarctica for several weeks and labeled event data for 16 nests. The developed method consists of a generator of candidate time intervals (proposals) and a classifier of the actions within them. The experiments show that the event cameras' natural response to motion is effective for continuous behavior monitoring and detection, reaching a mean average precision (mAP) of 58% (which increases to 63% in good weather conditions). The results also demonstrate robustness against the varied lighting conditions contained in the challenging dataset. The low power consumption of the event camera allows recording for three times longer than with a conventional camera. This work pioneers the use of event cameras for remote wildlife observation, opening new interdisciplinary opportunities. https://tub-rip.github.io/eventpenguins/
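The two-stage structure (proposal generation followed by classification) could be sketched as below, where candidate intervals are taken from time bins whose event rate exceeds a threshold. The bin width, rate threshold, and the classifier stub are assumptions made for illustration rather than the authors' implementation.

```python
# Proposal-then-classify sketch for temporal action detection from event timestamps.
import numpy as np

def propose_intervals(event_timestamps, bin_s=1.0, rate_threshold=500):
    """Group consecutive time bins whose event count exceeds a threshold."""
    t = np.asarray(event_timestamps)
    edges = np.arange(t.min(), t.max() + bin_s, bin_s)
    counts, _ = np.histogram(t, bins=edges)
    active = counts > rate_threshold
    proposals, start = [], None
    for i, on in enumerate(active):
        if on and start is None:
            start = edges[i]
        elif not on and start is not None:
            proposals.append((start, edges[i]))
            start = None
    if start is not None:
        proposals.append((start, edges[-1]))
    return proposals

def classify(interval, events):
    """Stub classifier; the real system scores each proposal with a trained model."""
    return "ecstatic_display"  # placeholder label

timestamps = np.sort(np.random.uniform(0, 60, 40000))  # fake event times in seconds
for start, end in propose_intervals(timestamps):
    print(start, end, classify((start, end), timestamps))
```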