

From ROI To RAI (Revenue From Artificial Intelligence)

#artificialintelligence

As disruptive technologies such as artificial intelligence (AI) fundamentally alter the way we live and do business, C-suite attitudes toward IT spending and utilization are shifting. Once considered a cost of doing business, technology is now viewed as a business driver critical to an organization's ability to perform core functions, even in industries far removed from Silicon Valley. However, many executives still struggle to determine the ROI needed to justify investments in AI and machine learning, even as AI becomes increasingly crucial to 21st-century business decision-making. Outside the IT industry itself, C-suites have historically viewed IT expenses as the cost of entry for doing business in the digital age, not as revenue-generating investments. Then came new technologies such as mobile, cloud computing and the internet of things (IoT).


The AI-Powered Future of Drones

#artificialintelligence

The drone attack on key Saudi Arabian oil refineries on September 14, 2019, claimed by Yemeni rebels, has brought the powerful technology back into the news. Unfortunately, the strikes, which disrupted roughly 5% of the world's oil supply, have also added ammunition to the overarching negative connotations the word "drone" conjures. "Drone" is a very broad term. Colloquially, drones are usually thought of as remote-piloted flying devices used by militaries for surveillance and offensive tactics, or by civilians for recreational or business purposes. Merriam-Webster defines a drone as "an unmanned aircraft or ship guided by remote control or onboard computers."


Modeling Natural Sounds with Modulation Cascade Processes

Neural Information Processing Systems

Natural sounds are structured on many time-scales. A typical segment of speech, for example, contains features that span four orders of magnitude: sentences (~1 s); phonemes (~0.1 s); glottal pulses (~0.01 s); and formants (~0.001 s). The auditory system uses information from each of these time-scales to solve complicated tasks such as auditory scene analysis. One route toward understanding how auditory processing accomplishes this analysis is to build neuroscience-inspired algorithms which solve similar tasks and to compare the properties of these algorithms with properties of auditory processing. There is, however, a discord: current machine-audition algorithms largely concentrate on the shorter time-scale structures in sounds, while the longer structures are ignored.
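To make the multiplicative, multi-scale structure concrete, here is a toy sketch (not the paper's modulation cascade model) that builds a synthetic speech-like signal as a cascade of amplitude modulators, one per time-scale listed above; all rates and names are illustrative assumptions.

```python
import numpy as np

# Toy illustration (not the paper's model): a sound built as a cascade of
# amplitude modulators acting on progressively longer time-scales, mimicking
# the ~0.001 s / 0.01 s / 0.1 s / 1 s structure mentioned in the abstract.
fs = 16000                        # sample rate in Hz (illustrative)
t = np.arange(0, 2.0, 1.0 / fs)   # two seconds of signal

carrier  = np.sin(2 * np.pi * 1000 * t)             # ~0.001 s: formant-like carrier
glottal  = 0.5 * (1 + np.sin(2 * np.pi * 100 * t))  # ~0.01 s: glottal-pulse-rate envelope
phoneme  = 0.5 * (1 + np.sin(2 * np.pi * 8 * t))    # ~0.1 s: phoneme-scale envelope
sentence = np.clip(np.sin(2 * np.pi * 0.5 * t), 0, None)  # ~1 s: sentence-scale envelope

sound = carrier * glottal * phoneme * sentence      # multiplicative (cascade) structure
```

A model that only analyzes the carrier and glottal scales would miss the phoneme- and sentence-scale envelopes entirely, which is the discord the abstract points to.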


Learning to Explore and Exploit in POMDPs

Neural Information Processing Systems

A fundamental objective in reinforcement learning is the maintenance of a proper balance between exploration and exploitation. This problem becomes more challenging when the agent can only partially observe the states of its environment. In this paper we propose a dual-policy method for jointly learning the agent behavior and the balance between exploration and exploitation in partially observable environments. The method subsumes traditional exploration, in which the agent takes actions to gather information about the environment, and active learning, in which the agent queries an oracle for optimal actions (with an associated cost for employing the oracle). The form of the employed exploration is dictated by the specific problem.
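The snippet below is a toy illustration of the trade-off the abstract describes, not the paper's dual-policy algorithm: on each step a bandit agent either exploits its current estimates, explores at random, or pays a fixed cost to query an oracle for the optimal action. The arm count, probabilities, and cost are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_arms = 5
true_means = rng.uniform(0, 1, n_arms)   # hidden arm values, known only to the oracle
estimates = np.zeros(n_arms)
counts = np.zeros(n_arms)
oracle_cost = 0.3                        # price of asking the oracle
total_reward = 0.0

for step in range(500):
    u = rng.uniform()
    if u < 0.10:                         # explore: gather information
        arm = int(rng.integers(n_arms))
    elif u < 0.15:                       # query the oracle for the optimal action, at a cost
        arm = int(np.argmax(true_means))
        total_reward -= oracle_cost
    else:                                # exploit current estimates
        arm = int(np.argmax(estimates))
    reward = rng.normal(true_means[arm], 0.1)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]
    total_reward += reward
```

The paper's contribution is to learn when each of these modes should be used, rather than hard-coding the probabilities as done here.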


Diversity-Driven Exploration Strategy for Deep Reinforcement Learning

Neural Information Processing Systems

Efficient exploration remains a challenging research problem in reinforcement learning, especially when an environment contains large state spaces, deceptive local optima, or sparse rewards. To tackle this problem, we present a diversity-driven approach for exploration, which can be easily combined with both off- and on-policy reinforcement learning algorithms. We show that by simply adding a distance measure to the loss function, the proposed methodology significantly enhances an agent's exploratory behavior, thereby preventing the policy from being trapped in local optima. We further propose an adaptive scaling method for stabilizing the learning process. We demonstrate the effectiveness of our method in large 2D gridworlds and a variety of benchmark environments, including Atari 2600 and MuJoCo.
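The core idea of adding a distance term to the loss can be sketched in a few lines. The snippet below is a hedged sketch, not the authors' code: the KL-based distance, the function names, and the fixed scaling factor alpha are assumptions (the paper proposes adapting this scale during training).

```python
import torch
import torch.nn.functional as F

def diversity_augmented_loss(policy_logits, prior_logits_list, task_loss, alpha=0.1):
    """Sketch: subtract a scaled distance between the current policy and a
    buffer of recent prior policies so that novel behavior lowers the loss."""
    current_log_probs = F.log_softmax(policy_logits, dim=-1)
    distance = 0.0
    for prior_logits in prior_logits_list:
        prior_probs = F.softmax(prior_logits, dim=-1).detach()
        # KL(prior || current) as one possible distance measure between policies
        distance = distance + F.kl_div(current_log_probs, prior_probs,
                                       reduction="batchmean")
    distance = distance / max(len(prior_logits_list), 1)
    return task_loss - alpha * distance   # larger distance => smaller loss => more diversity
```

Because the extra term only touches the loss, it can be bolted onto an existing off- or on-policy learner without changing the rest of the update.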


Differentiable Convex Optimization Layers

Neural Information Processing Systems

Recent work has shown how to embed differentiable optimization problems (that is, problems whose solutions can be backpropagated through) as layers within deep learning architectures. This method provides a useful inductive bias for certain problems, but existing software for differentiable optimization layers is rigid and difficult to apply to new settings. In this paper, we propose an approach to differentiating through disciplined convex programs, a subclass of convex optimization problems used by domain-specific languages (DSLs) for convex optimization. We introduce disciplined parametrized programming, a subset of disciplined convex programming, and we show that every disciplined parametrized program can be represented as the composition of an affine map from parameters to problem data, a solver, and an affine map from the solver's solution to a solution of the original problem (a new form we refer to as affine-solver-affine form). We then demonstrate how to efficiently differentiate through each of these components, allowing for end-to-end analytical differentiation through the entire convex program.
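The approach described in this paper is implemented in the open-source cvxpylayers library; the snippet below is a minimal sketch assuming that library's PyTorch interface together with CVXPY, wrapping a small disciplined parametrized program (nonnegative least-absolute-deviations) as a differentiable layer.

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

# A disciplined parametrized program: nonnegative least-absolute-deviations.
n, m = 2, 3
x = cp.Variable(n)
A = cp.Parameter((m, n))
b = cp.Parameter(m)
problem = cp.Problem(cp.Minimize(cp.norm(A @ x - b, 1)), [x >= 0])
assert problem.is_dpp()              # check the program is disciplined parametrized

layer = CvxpyLayer(problem, parameters=[A, b], variables=[x])

A_t = torch.randn(m, n, requires_grad=True)
b_t = torch.randn(m, requires_grad=True)
solution, = layer(A_t, b_t)          # forward pass: solve the convex program
solution.sum().backward()            # backward pass: analytical derivatives w.r.t. A and b
```

Under the hood the layer follows the affine-solver-affine decomposition described above: the parameters (A, b) are mapped affinely to solver data, the cone program is solved, and the solver's solution is mapped affinely back to x.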


Deep Dynamical Modeling and Control of Unsteady Fluid Flows

Neural Information Processing Systems

The design of flow control systems remains a challenge due to the nonlinear nature of the equations that govern fluid flow. However, recent advances in computational fluid dynamics (CFD) have enabled the simulation of complex fluid flows with high accuracy, opening the possibility of using learning-based approaches to facilitate controller design. We present a method for learning the forced and unforced dynamics of airflow over a cylinder directly from CFD data. The proposed approach, grounded in Koopman theory, is shown to produce stable dynamical models that can predict the time evolution of the cylinder system over extended time horizons. Finally, by performing model predictive control with the learned dynamical models, we are able to find a straightforward, interpretable control law for suppressing vortex shedding in the wake of the cylinder.
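As a rough, hedged sketch of the Koopman-style modeling step (not the paper's deep architecture): lift the state with a fixed dictionary of observables and fit a linear operator by least squares on snapshot pairs, in the spirit of extended dynamic mode decomposition. The dictionary and variable names below are illustrative assumptions.

```python
import numpy as np

def fit_koopman_operator(X, Y, lift):
    """Fit a linear operator K with lift(y) ~= K @ lift(x) for snapshot pairs
    (x, y) drawn from the flow, as in extended DMD; a simple stand-in for the
    learned lifting the paper trains on CFD data."""
    PhiX = np.stack([lift(x) for x in X])   # lifted snapshots at time t
    PhiY = np.stack([lift(y) for y in Y])   # lifted snapshots at time t + 1
    W, *_ = np.linalg.lstsq(PhiX, PhiY, rcond=None)
    return W.T                              # so that lift(y) ~= K @ lift(x)

# Example dictionary for a 2-state system: monomials up to degree two.
lift = lambda s: np.array([s[0], s[1], s[0]**2, s[0]*s[1], s[1]**2, 1.0])
```

Once the dynamics are approximately linear in the lifted space, standard linear tools such as model predictive control, as used in the paper, become applicable.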


A Graphical Transformation for Belief Propagation: Maximum Weight Matchings and Odd-Sized Cycles

Neural Information Processing Systems

Max-product 'belief propagation' (BP) is a popular distributed heuristic for finding the Maximum A Posteriori (MAP) assignment in a joint probability distribution represented by a Graphical Model (GM). It was recently shown that BP converges to the correct MAP assignment for a class of loopy GMs with the following common feature: the Linear Programming (LP) relaxation to the MAP problem is tight (has no integrality gap). Unfortunately, tightness of the LP relaxation does not, in general, guarantee convergence and correctness of the BP algorithm. The failure of BP in such cases motivates reverse engineering a solution – namely, given a tight LP, can we design a 'good' BP algorithm? We prove that the resulting BP algorithm converges to the correct optimum if the respective LP relaxation, which may include inequalities associated with non-intersecting odd-sized cycles, is tight.
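As a small, hedged illustration of the tightness condition (not the paper's BP construction): for maximum-weight matching on a triangle, the degree-constrained LP relaxation has a fractional optimum, and adding the odd-sized-cycle (blossom) inequality makes the relaxation tight. The example below uses scipy.optimize.linprog with unit edge weights, all of which are illustrative choices.

```python
from scipy.optimize import linprog

# Edges of the triangle: e01, e02, e12 (unit weights).
c = [-1.0, -1.0, -1.0]              # linprog minimizes, so negate the weights
A_deg = [[1, 1, 0],                 # vertex 0 touches e01 and e02
         [1, 0, 1],                 # vertex 1 touches e01 and e12
         [0, 1, 1]]                 # vertex 2 touches e02 and e12
b_deg = [1, 1, 1]

loose = linprog(c, A_ub=A_deg, b_ub=b_deg, bounds=[(0, 1)] * 3)
A_cyc = A_deg + [[1, 1, 1]]         # odd-sized-cycle inequality: x01 + x02 + x12 <= 1
tight = linprog(c, A_ub=A_cyc, b_ub=b_deg + [1], bounds=[(0, 1)] * 3)

print(loose.x, -loose.fun)          # fractional optimum (all 0.5), value 1.5
print(tight.x, -tight.fun)          # value 1.0 once the relaxation is tightened
```

It is relaxations of this tightened kind, with non-intersecting odd-sized-cycle inequalities, to which the paper's convergence guarantee applies.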


Differentiable MPC for End-to-end Planning and Control

Neural Information Processing Systems

Using model predictive control (MPC) as a differentiable policy class provides one way of leveraging and combining the advantages of model-free and model-based approaches. Specifically, we differentiate through MPC by using the KKT conditions of the convex approximation at a fixed point of the controller. Using this strategy, we are able to learn the cost and dynamics of a controller via end-to-end learning. Our experiments focus on imitation learning in the pendulum and cartpole domains, where we learn the cost and dynamics terms of an MPC policy class. We show that our MPC policies are significantly more data-efficient than a generic neural network and that our method is superior to traditional system identification in a setting where the expert is unrealizable.
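The snippet below is a simplified, hedged sketch rather than the authors' implementation (which differentiates through the KKT conditions at a fixed point of the controller): a finite-horizon linear-quadratic problem is condensed into an unconstrained quadratic in the stacked controls and solved with a differentiable linear solve, so a cost parameter can be recovered from expert controls by gradient descent, mirroring the imitation-learning setting. The dynamics, horizon, and cost values are illustrative assumptions.

```python
import torch

A = torch.tensor([[1.0, 0.1], [0.0, 1.0]])   # toy double-integrator dynamics
B = torch.tensor([[0.0], [0.1]])
T = 10                                        # planning horizon

def mpc_controls(x0, q_diag, r_diag):
    """Solve min_u sum_t x_t' Q x_t + u_t' R u_t for linear dynamics by
    condensing into a quadratic in u; torch.linalg.solve is differentiable,
    so gradients flow from the optimal controls back into Q and R."""
    n, m = A.shape[0], B.shape[1]
    Q, R = torch.diag(q_diag), torch.diag(r_diag)
    # Prediction matrices: x_{t+1} = A^{t+1} x0 + sum_{k<=t} A^{t-k} B u_k
    F = torch.zeros(T * n, T * m)
    G = torch.zeros(T * n, n)
    Apow = torch.eye(n)
    for t in range(T):
        Apow = A @ Apow
        G[t*n:(t+1)*n] = Apow
        for k in range(t + 1):
            F[t*n:(t+1)*n, k*m:(k+1)*m] = torch.linalg.matrix_power(A, t - k) @ B
    Qbar = torch.block_diag(*([Q] * T))
    Rbar = torch.block_diag(*([R] * T))
    H = F.T @ Qbar @ F + Rbar                 # quadratic term of the condensed problem
    g = F.T @ Qbar @ (G @ x0)                 # linear term
    return torch.linalg.solve(H, -g)          # optimal stacked controls

# Imitation learning: recover a hidden control penalty from expert actions.
x0 = torch.tensor([1.0, 0.0])
q_fixed = torch.tensor([1.0, 0.1])
u_expert = mpc_controls(x0, q_fixed, torch.tensor([0.05])).detach()

log_r = torch.zeros(1, requires_grad=True)    # learn R; Q is kept fixed for simplicity
opt = torch.optim.Adam([log_r], lr=0.05)
for step in range(300):
    loss = ((mpc_controls(x0, q_fixed, log_r.exp()) - u_expert) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Differentiating through the controller in this way is what lets the cost (and, in the paper, also the dynamics) be trained end-to-end from demonstrations instead of being identified separately.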


On the Local Hessian in Back-propagation

Neural Information Processing Systems

Back-propagation (BP) is the foundation for successfully training deep neural networks. However, BP sometimes has difficulty propagating a learning signal deep into a network effectively, e.g., the vanishing-gradient phenomenon. Meanwhile, BP often works well when combined with design tricks such as orthogonal initialization, batch normalization, and skip connections. There is no clear understanding of what is essential to the efficiency of BP. In this paper, we take one step towards clarifying this problem.
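As a minimal, hedged illustration of the phenomenon (not the paper's local-Hessian analysis): the sketch below compares the gradient norm reaching the first layer of a deep tanh MLP with and without residual skip connections, one of the design tricks mentioned above. Depth, width, and initialization are arbitrary assumptions.

```python
import torch
import torch.nn as nn

def first_layer_grad_norm(use_skip, depth=50, width=64, seed=0):
    """Gradient norm at the first layer of a deep tanh MLP, with and without
    residual (skip) connections; illustrates how the learning signal can
    vanish as it is back-propagated through many plain layers."""
    torch.manual_seed(seed)
    layers = nn.ModuleList(nn.Linear(width, width) for _ in range(depth))
    h = torch.randn(8, width)
    for layer in layers:
        out = torch.tanh(layer(h))
        h = h + out if use_skip else out      # residual connection vs. plain stacking
    h.sum().backward()
    return layers[0].weight.grad.norm().item()

print("plain layers    :", first_layer_grad_norm(use_skip=False))
print("skip connections:", first_layer_grad_norm(use_skip=True))
```

With default initialization the plain stack typically delivers a much weaker signal to the first layer than the residual stack, which is the kind of behavior a theory of BP's efficiency has to explain.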