oil spill


Improving Oil Slick Trajectory Simulations with Bayesian Optimization

Accarino, Gabriele, De Carlo, Marco M., Atake, Igor, Elia, Donatello, Dissanayake, Anusha L., Neves, Antonio Augusto Sepp, Ibañez, Juan Peña, Epicoco, Italo, Nassisi, Paola, Fiore, Sandro, Coppini, Giovanni

arXiv.org Artificial Intelligence

Accurate simulations of oil spill trajectories are essential for supporting practitioners' response and mitigating environmental and socioeconomic impacts. Numerical models, such as MEDSLIK-II, simulate advection, dispersion, and transformation processes of oil particles. However, simulations heavily rely on accurate parameter tuning, which is still based on expert knowledge and manual calibration. To overcome these limitations, we integrate the MEDSLIK-II numerical oil spill model with a Bayesian optimization framework to iteratively estimate the physical parameter configuration that yields simulations closest to satellite observations of the slick. We focus on key parameters, such as horizontal diffusivity and drift factor, maximizing the Fraction Skill Score (FSS) as a measure of spatio-temporal overlap between simulated and observed oil distributions. We validate the framework on the Baniyas oil incident that occurred in Syria between August 23 and September 4, 2021, which released over 12,000 $m^3$ of oil. We show that, on average, the proposed approach systematically improves the FSS from 5.82% to 11.07% compared to control simulations initialized with default parameters. The optimization yields consistent improvement across multiple time steps, particularly during periods of increased drift variability, demonstrating the robustness of our method in dynamic environmental conditions.
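The Fraction Skill Score used as the optimization objective can be sketched on binary oil/no-oil grids as follows. This is a minimal illustration of the standard FSS definition (one minus the mean squared difference of neighborhood fractions, normalized by a no-skill reference); the neighborhood size, grid resolution, and `fractions` helper are assumptions for the example, not the authors' implementation.

```python
import numpy as np

def fractions(field, n):
    # Fraction of "oil" cells in an n x n neighborhood around each grid cell
    # (zero-padded at the edges).
    padded = np.pad(field.astype(float), n // 2, mode="constant")
    out = np.empty(field.shape, dtype=float)
    for i in range(field.shape[0]):
        for j in range(field.shape[1]):
            out[i, j] = padded[i:i + n, j:j + n].mean()
    return out

def fss(sim, obs, n=3):
    # Fractions Skill Score: 1 = perfect spatial overlap, 0 = no skill.
    p, o = fractions(sim, n), fractions(obs, n)
    mse = np.mean((p - o) ** 2)
    ref = np.mean(p ** 2) + np.mean(o ** 2)
    return 1.0 - mse / ref if ref > 0 else 1.0
```

A Bayesian optimizer would then treat `fss(simulate(params), observed)` as the black-box objective, proposing new horizontal-diffusivity and drift-factor values at each iteration.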


Oil Spill Segmentation using Deep Encoder-Decoder models

Satyanarayana, Abhishek Ramanathapura, Dhali, Maruf A.

arXiv.org Artificial Intelligence

Crude oil is an integral component of the modern world economy. With the growing demand for crude oil due to its widespread applications, accidental oil spills are unavoidable. Even though oil spills are in and of themselves difficult to clean up, the first and foremost challenge is to detect them. In this research, the authors test the feasibility of deep encoder-decoder models that can be trained effectively to detect oil spills. The work compares the results from several segmentation models on high-dimensional satellite Synthetic Aperture Radar (SAR) image data. Multiple combinations of models are used in running the experiments. The best-performing model is the one with the ResNet-50 encoder and DeepLabV3+ decoder. It achieves a mean Intersection over Union (IoU) of 64.868% and a class IoU of 61.549% for the "oil spill" class, compared with the current benchmark model, which achieved a mean IoU of 65.05% and a class IoU of 53.38% for the "oil spill" class.
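The mean-IoU and per-class-IoU metrics reported above can be computed from integer label masks as sketched below. This is a generic illustration of the metric, not the paper's evaluation code; the class indexing and mask shapes are assumptions for the example.

```python
import numpy as np

def class_iou(pred, target, cls):
    # Intersection over Union for a single class in integer label masks.
    p, t = pred == cls, target == cls
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    return inter / union if union else float("nan")

def mean_iou(pred, target, n_classes):
    # Average the per-class IoU, ignoring classes absent from both masks.
    ious = [class_iou(pred, target, c) for c in range(n_classes)]
    return float(np.nanmean(ious))
```

The "class IoU" quoted in the abstract is `class_iou` evaluated for the "oil spill" label, while the "mean IoU" averages over all segmentation classes.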


What Is the Future of Emergency Prevention?

#artificialintelligence

Artificial intelligence (AI) is everywhere. Embedded into our everyday lives, from our ridesharing apps to the algorithms on our social media channels, AI has the potential to revolutionize every industry. However, there are a number of challenges that many technologies still need to overcome before they can actually implement AI -- and that's especially the case in the risk management industry. Today, companies across every industry rely on environment, health and safety (EHS) procedures to promote a safer and more compliant workplace. Essential to companies' risk management strategies, EHS programs are commonly used to help companies avoid unwanted events.


The Future of Cleaning Oil Spills: Robots, Wood Chips and Sponges

WSJ.com: WSJD - Technology

Recent oil spills in Russia and Mauritius have shown that the industry still needs better methods for cleaning up accidents. Researchers are working on some unlikely-sounding solutions, including oil-absorbing wood chips, a solar-powered robot and a reusable sponge. The oil industry is controlled by large companies and their suppliers, which together have often been the cause of spills, but university researchers and small firms are playing a key role in promoting new ways to clean up. Researchers at Northwestern University have developed a reusable sponge coated in a mixture containing iron and carbon that can absorb 30 times its weight in oil. The sponge, similar to sponges in everyday items such as furniture cushions and packaging, has attracted interest for further testing from several major oil companies, according to the researchers.


Google, DFO partner to track orcas with artificial intelligence

#artificialintelligence

If an oil spill were to hit B.C.'s southern coast, threatening the local orca population, the Department of Fisheries and Oceans (DFO) could respond in a way that wasn't technologically possible just two years ago, says Paul Cottrell. For years the marine mammal co-ordinator counted on a network of 18 hydrophones – underwater listening devices lining much of Vancouver Island – to detect calls of the endangered southern resident killer whales and track their movements in the Salish Sea. But what if artificial intelligence could be harnessed to automatically detect the calls of that one particular subgroup of orcas around the clock? That was the pitch Google's (Nasdaq:GOOG) artificial-intelligence division made to the DFO at a 2018 workshop in Victoria. "The opportunity to work with such cutting-edge individuals and technology was amazing," Cottrell said.


How to Develop an Imbalanced Classification Model to Detect Oil Spills

#artificialintelligence

Many imbalanced classification tasks require a skillful model that predicts a crisp class label, where both classes are equally important. An example of such a problem is the detection of oil spills or slicks in satellite images: the detection of a spill requires mobilizing an expensive response, and missing an event is equally costly, causing damage to the environment. One way to evaluate imbalanced classification models that predict crisp labels is to calculate the accuracy separately on the positive class and the negative class, referred to as sensitivity and specificity. These two measures can then be averaged using the geometric mean, referred to as the G-mean, which is insensitive to the skewed class distribution and correctly reports on the skill of the model on both classes. In this tutorial, you will discover how to develop a model to predict the presence of an oil spill in satellite images and evaluate it using the G-mean metric. In this project, we will use a standard imbalanced machine learning dataset referred to as the "oil spill" dataset, the "oil slicks" dataset, or simply "oil."
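The G-mean described above can be computed directly from binary labels as follows; this is a minimal sketch of the metric's definition, not the tutorial's own code.

```python
import numpy as np

def g_mean(y_true, y_pred):
    # Geometric mean of sensitivity and specificity for binary labels (0/1).
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # accuracy on positives
    specificity = tn / (tn + fp) if tn + fp else 0.0  # accuracy on negatives
    return float(np.sqrt(sensitivity * specificity))
```

Note how a classifier that predicts the majority class everywhere scores 0, since its sensitivity on the rare "oil spill" class is zero; plain accuracy would hide this. The imbalanced-learn library provides an equivalent `geometric_mean_score` if you prefer a packaged implementation.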


7 amazing robots based on animals

#artificialintelligence

When it comes to robots, science fiction has conditioned us to think of androids – bipedal machines approximating the human form. But the next generation of robots may be based on very different types of animals: snakes, flies, locusts and even the multi-tentacle octopus. Israeli scientists are hard at work on just such contraptions. Here's a look at seven of the most fascinating designs that can help with everything from exploring our insides to cleaning up the mess we make on the planet. Medrobotics' signature product, the Flex Robotic System, allows physicians to reach deep into the body with minimal risk.


Camera spots hidden oil spills and may find missing planes

New Scientist

There are thousands of oil spills each year in US waters alone. One major source is illegal dumping of oil in harbours when ships empty their bilges, typically at night to avoid detection. However, a new kind of polarising camera can now spot offenders immediately. Its ability to detect otherwise invisible oil sheens could even lead investigators to lost planes.


Find Me the Right Content! Diversity-Based Sampling of Social Media Spaces for Topic-Centric Search

Choudhury, Munmun De (Rutgers, The State University of New Jersey) | Counts, Scott (Microsoft Research) | Czerwinski, Mary (Microsoft Research)

AAAI Conferences

Social media and networking websites, such as Twitter and Facebook, generate large quantities of information and have become mechanisms for real-time content dissemination to users. An important question that arises is: how do we sample such social media information spaces in order to deliver relevant content on a topic to end users? Notice that these large-scale information spaces are inherently diverse, featuring a wide array of attributes such as location, recency, degree of diffusion effects in the network, and so on. Naturally, for the end user, different levels of diversity in social media content can significantly impact the information consumption experience: low diversity can provide focused content that may be simpler to understand, while high diversity can increase breadth in the exposure to multiple opinions and perspectives. Hence, to address our research question, we turn to diversity as a core concept in our proposed sampling methodology. Here we are motivated by ideas in the "compressive sensing" literature and utilize the notion of sparsity in social media information to represent such large spaces via a small number of basis components. Thereafter we use a greedy iterative clustering technique on this transformed space to construct samples matching a desired level of diversity. Based on Twitter Firehose data, we demonstrate quantitatively that our method is robust and performs better than other baseline techniques over a variety of trending topics. In a user study, we further show that users find samples generated by our method to be more interesting and subjectively engaging compared to techniques inspired by state-of-the-art systems, with improvements in the range of 15--45%.