upstream oil & gas


Artificial intelligence sustains critical infrastructure during COVID-19

#artificialintelligence

The adoption of artificial intelligence and machine learning technologies has never been more critical. Due to COVID-19, many organizations need to find a new way of working: keeping production rates reliable, if not increased, while limiting on-site personnel, in some cases to 50% of normal staffing. Many asset-heavy industries, such as water, transportation, and energy, are considered critical infrastructure, and every effort needs to be made to keep them running.


The journey to edge computing for oil and gas companies

#artificialintelligence

The oil and gas industry is massive and highly diversified, with very different operational characteristics across its upstream, midstream, and downstream sectors. Even within each sector there are distinct differences: offshore oil and gas rigs have a completely different set of requirements from onshore well pads in the fracking industry. Every sector, however, is susceptible to the boom-and-bust cycles that have traditionally characterised the oil and gas industry. All of this makes oil and gas an ideal candidate for adopting IoT technologies to address a whole range of problems and risks, and to smooth out the ups and downs of the business cycle. Where are oil and gas companies today with edge computing adoption?


RetinaNet: how Focal Loss fixes Single-Shot Detection

#artificialintelligence

Neural networks can be used to solve classification problems (predict classes) and regression problems (predict continuous values). Today we will be doing both at the same time. We start with a simplified task: detect and classify a single object in an image instead of several objects. What does an object detection dataset look like? Well, the inputs to our model are of course images, and the labels are typically four values describing a ground-truth bounding box plus the category that the object in the box belongs to.
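To make that label format concrete, here is a minimal sketch of a single-object detection label. The field names and the (x_min, y_min, x_max, y_max) corner convention are illustrative assumptions; datasets also use centre/width/height and other encodings.

```python
from dataclasses import dataclass

# Minimal sketch of a single-object detection label as described above.
# The corner convention and field names are assumptions for illustration only.
@dataclass
class DetectionLabel:
    x_min: float   # bounding-box coordinates in pixels (regression targets)
    y_min: float
    x_max: float
    y_max: float
    category: int  # class index (classification target)

label = DetectionLabel(x_min=48.0, y_min=32.0, x_max=210.0, y_max=180.0, category=3)
```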


EchoNous, Inc. Announces CE Mark Approval for Its Healthcare AI KOSMOS Platform

#artificialintelligence

The KOSMOS platform employs multiple layers of applied deep learning … clinical value through the meaningful application of artificial intelligence.


Differentiable Convex Optimization Layers

Neural Information Processing Systems

Recent work has shown how to embed differentiable optimization problems (that is, problems whose solutions can be backpropagated through) as layers within deep learning architectures. This method provides a useful inductive bias for certain problems, but existing software for differentiable optimization layers is rigid and difficult to apply to new settings. In this paper, we propose an approach to differentiating through disciplined convex programs, a subclass of convex optimization problems used by domain-specific languages (DSLs) for convex optimization. We introduce disciplined parametrized programming, a subset of disciplined convex programming, and we show that every disciplined parametrized program can be represented as the composition of an affine map from parameters to problem data, a solver, and an affine map from the solver's solution to a solution of the original problem (a new form we refer to as affine-solver-affine form). We then demonstrate how to efficiently differentiate through each of these components, allowing for end-to-end analytical differentiation through the entire convex program.
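As a hedged illustration of what such a layer looks like in practice, the sketch below assumes the cvxpylayers package and its PyTorch interface, which implement this affine-solver-affine approach; the specific problem, a small non-negative least-absolute-deviations fit, is an arbitrary toy disciplined parametrized program.

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer  # assumes cvxpylayers is installed

# Toy disciplined parametrized program: minimize ||Ax - b||_1 / 2 subject to x >= 0,
# with A and b as parameters supplied at the forward pass.
n, m = 2, 3
x = cp.Variable(n)
A = cp.Parameter((m, n))
b = cp.Parameter(m)
problem = cp.Problem(cp.Minimize(0.5 * cp.pnorm(A @ x - b, p=1)), [x >= 0])
assert problem.is_dpp()  # must be a disciplined parametrized program

layer = CvxpyLayer(problem, parameters=[A, b], variables=[x])
A_tch = torch.randn(m, n, requires_grad=True)
b_tch = torch.randn(m, requires_grad=True)

solution, = layer(A_tch, b_tch)   # forward pass: solve the convex program
solution.sum().backward()         # backward pass: gradients w.r.t. A and b
```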


Multivariate Triangular Quantile Maps for Novelty Detection

Neural Information Processing Systems

Novelty detection, a fundamental task in machine learning, has drawn a lot of recent attention due to its wide-ranging applications and the rise of neural approaches. In this work, we present a general framework for neural novelty detection that centers around a multivariate extension of the univariate quantile function. Our framework unifies and extends many classical and recent novelty detection algorithms, and opens the way to exploit recent advances in flow-based neural density estimation. We adapt the multiple gradient descent algorithm to obtain the first efficient end-to-end implementation of our framework that is free of tuning hyperparameters. Extensive experiments over a number of real datasets confirm the efficacy of our proposed method against state-of-the-art alternatives.
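As a hedged aid to intuition only, the toy sketch below shows the univariate quantile idea that the framework generalises: score a test point by how extreme its quantile level is under the training distribution. It uses a plain empirical quantile estimate, not the paper's multivariate triangular maps or flow-based estimators.

```python
import numpy as np

# Toy univariate novelty scoring via the empirical quantile level (illustrative only).
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5000)

def empirical_cdf(x, sample):
    """Fraction of training points less than or equal to x."""
    return np.searchsorted(np.sort(sample), x, side="right") / len(sample)

def novelty_score(x, sample):
    """Distance of the point's quantile level from the median; near 0 means typical."""
    return abs(empirical_cdf(x, sample) - 0.5)

print(novelty_score(0.1, train))   # small score: typical point
print(novelty_score(4.5, train))   # score near 0.5: likely a novelty
```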


PPDM Association on Twitter

#artificialintelligence

For those who may not yet have heard, given the current situation, the 2020 Houston Professional Petroleum Data Expo will be undergoing some changes. We are working on re-booking the physical event, but are pleased to offer a virtual conference.


Machine learning picks out hidden vibrations from earthquake data

#artificialintelligence

Over the last century, scientists have developed methods to map the structures within the Earth's crust, in order to identify resources such as oil reserves, geothermal sources, and, more recently, reservoirs where excess carbon dioxide could potentially be sequestered. They do so by tracking seismic waves that are produced naturally by earthquakes or artificially via explosives or underwater air guns. The way these waves bounce and scatter through the Earth can give scientists an idea of the type of structures that lie beneath the surface. There is a narrow range of seismic waves (those that occur at low frequencies of around 1 hertz) that could give scientists the clearest picture of underground structures spanning wide distances. But these waves are often drowned out by Earth's noisy seismic hum, and are therefore difficult to pick up with current detectors.


Directional Message Passing for Molecular Graphs

arXiv.org Machine Learning

Graph neural networks have recently achieved great successes in predicting quantum mechanical properties of molecules. These models represent a molecule as a graph using only the distance between atoms (nodes). They do not, however, consider the spatial direction from one atom to another, despite directional information playing a central role in empirical potentials for molecules, e.g. in angular potentials. To alleviate this limitation we propose directional message passing, in which we embed the messages passed between atoms instead of the atoms themselves. Each message is associated with a direction in coordinate space. These directional message embeddings are rotationally equivariant since the associated directions rotate with the molecule. We propose a message passing scheme analogous to belief propagation, which uses the directional information by transforming messages based on the angle between them. Additionally, we use spherical Bessel functions and spherical harmonics to construct theoretically well-founded, orthogonal representations that achieve better performance than the currently prevalent Gaussian radial basis representations while using fewer than 1/4 of the parameters. We leverage these innovations to construct the directional message passing neural network (DimeNet). DimeNet outperforms previous GNNs on average by 76% on MD17 and by 31% on QM9. Our implementation is available online.
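As a hedged, highly simplified sketch of the core idea (not the DimeNet architecture), the toy code below keeps one embedding per directed edge of a molecule graph and updates it from incoming edge messages using the angle between the two edge directions; the weights and update rule are made-up placeholders.

```python
import numpy as np

# Illustrative directional (edge-based) message passing on a toy 2D "molecule".
positions = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])   # atom coordinates
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (0, 2), (2, 0)]      # directed edges j -> i

dim = 4
rng = np.random.default_rng(0)
messages = {e: rng.normal(size=dim) for e in edges}           # one embedding per edge
W_angle = rng.normal(size=(dim + 1, dim)) * 0.1               # hypothetical weights

def angle(j, i, k):
    """Angle at atom j between the edge direction j->i and the incoming direction k->j."""
    v1 = positions[i] - positions[j]
    v2 = positions[j] - positions[k]
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def update(messages):
    new = {}
    for (j, i) in edges:
        agg = np.zeros(dim)
        for (k, jj) in edges:
            if jj == j and k != i:
                # transform incoming message k->j using the angle k->j->i
                feat = np.concatenate([messages[(k, j)], [angle(j, i, k)]])
                agg += np.tanh(feat @ W_angle)
        new[(j, i)] = messages[(j, i)] + agg
    return new

messages = update(messages)
```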


Exploration-Exploitation in Constrained MDPs

arXiv.org Machine Learning

In many sequential decision-making problems, the goal is to optimize a utility function while satisfying a set of constraints on different utilities. This learning problem is formalized through Constrained Markov Decision Processes (CMDPs). In this paper, we investigate the exploration-exploitation dilemma in CMDPs. While learning in an unknown CMDP, an agent should trade off exploration, to discover new information about the MDP, against exploitation of its current knowledge, to maximize the reward while satisfying the constraints. Although the agent will eventually learn a good or optimal policy, we do not want it to violate the constraints too often during the learning process. In this work, we analyze two approaches for learning in CMDPs. The first leverages the linear formulation of the CMDP to perform optimistic planning at each episode. The second leverages the dual (or saddle-point) formulation of the CMDP to perform incremental, optimistic updates of the primal and dual variables. We show that both achieve sublinear regret with respect to the main utility while also having sublinear regret on the constraint violations. We also highlight a crucial difference between the two approaches: the linear programming approach yields stronger guarantees than the dual-formulation-based approach.
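For concreteness, the sketch below sets up the occupancy-measure linear program behind the "linear formulation" of a CMDP on a tiny made-up example; the paper's algorithms add optimism and episodic learning on top of such a program, which this sketch does not attempt.

```python
import numpy as np
from scipy.optimize import linprog

# Occupancy-measure LP for a toy discounted CMDP (all numbers are made-up toy values).
nS, nA, gamma = 2, 2, 0.9
P = np.zeros((nS, nA, nS))                 # P[s, a, s'] transition probabilities
P[0, 0] = [0.9, 0.1]; P[0, 1] = [0.2, 0.8]
P[1, 0] = [0.7, 0.3]; P[1, 1] = [0.1, 0.9]
r = np.array([[0.0, 1.0], [0.5, 2.0]])     # reward r(s, a)
c = np.array([[0.0, 1.0], [0.0, 1.5]])     # constraint cost c(s, a)
tau = 0.5                                  # (normalized) cost budget
mu = np.array([1.0, 0.0])                  # initial state distribution

idx = lambda s, a: s * nA + a
n = nS * nA

# Flow constraints: sum_a d(s',a) = (1-gamma) mu(s') + gamma sum_{s,a} P(s'|s,a) d(s,a)
A_eq = np.zeros((nS, n)); b_eq = (1 - gamma) * mu
for sp in range(nS):
    for s in range(nS):
        for a in range(nA):
            A_eq[sp, idx(s, a)] -= gamma * P[s, a, sp]
    for a in range(nA):
        A_eq[sp, idx(sp, a)] += 1.0

# Budget constraint: expected (normalized) discounted cost <= tau
A_ub = c.reshape(1, n); b_ub = np.array([tau])

res = linprog(-r.reshape(n), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
d = res.x.reshape(nS, nA)
policy = d / d.sum(axis=1, keepdims=True)  # optimal (possibly stochastic) policy
print("occupancy measure:\n", d, "\npolicy:\n", policy)
```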