Productionising AI knowledge management

#artificialintelligence

Using AI for knowledge management is a great way to industrialise years of innovation on a company-wide level, writes Dr Warrick Cooke, Consultant at Tessella. An engineer who has worked in the same place – a factory, oil rig, nuclear power plant – for 20 years will be an expert in that facility. Their been-there-done-that experience means they can quickly make good decisions on the best response to a wide range of scenarios. That knowledge would be hugely valuable to others. It is also knowledge that will be lost when they move on.


AI could have profound effect on way GCHQ works, says director

The Guardian

GCHQ's director has said artificial intelligence software could have a profound impact on the way it operates, from spotting otherwise missed clues to thwart terror plots to better identifying the sources of fake news and computer viruses. Jeremy Fleming's remarks came as the spy agency prepared to publish a rare paper on Thursday defending its use of machine-learning technology to placate critics concerned about its bulk surveillance activities. "AI, like so many technologies, offers great promise for society, prosperity and security. Its impact on GCHQ is equally profound," he said. "While this unprecedented technological evolution comes with great opportunity, it also poses significant ethical challenges for all of society, including GCHQ." AI is considered controversial because it relies on computer algorithms to make decisions based on patterns found in data.


Positive Reinforcements Help Algorithm Forecast Underground Natural Reserves

#artificialintelligence

Texas A&M University (TAMU) and University of Oklahoma researchers have developed a reinforcement-based algorithm that automates the forecasting of subterranean properties, enabling accurate prediction of oil and gas reserves. The algorithm characterizes the underground environment based on rewards accumulated for making correct predictions of the pressure and flow anticipated from boreholes. The TAMU team found that within 10 iterations of reinforcement learning, the algorithm could correctly and rapidly predict the properties of simple subsurface scenarios. TAMU's Siddharth Misra said, "We have turned history matching into a sequential decision-making problem, which has the potential to reduce engineers' efforts, mitigate human bias, and remove the need of large sets of labeled training data."
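
The paper's exact formulation isn't given here, but the idea of casting history matching as reward-driven sequential decision-making can be illustrated with a toy Q-learning loop. In this hypothetical sketch, an agent adjusts a discretised guess of a subsurface property and is rewarded for shrinking the mismatch between a stand-in forward model's pressure and the observed value; the simulator, discretisation, and reward are all assumptions for illustration, not the TAMU method.

    import numpy as np

    N_LEVELS = 20                 # discretised guesses for a subsurface property
    ACTIONS = (-1, 0, 1)          # lower / keep / raise the current guess
    TRUE_LEVEL = 13               # hidden "true" property (toy assumption)

    def simulate_pressure(level):
        # stand-in forward model: borehole pressure response for a guessed level
        return 100.0 - 2.5 * level

    observed = simulate_pressure(TRUE_LEVEL)

    Q = np.zeros((N_LEVELS, len(ACTIONS)))
    alpha, gamma, eps = 0.5, 0.9, 0.2
    rng = np.random.default_rng(0)

    for episode in range(200):
        state = int(rng.integers(N_LEVELS))
        for _ in range(30):
            if rng.random() < eps:
                a = int(rng.integers(len(ACTIONS)))
            else:
                a = int(Q[state].argmax())
            nxt = int(np.clip(state + ACTIONS[a], 0, N_LEVELS - 1))
            # reward: negative mismatch between simulated and observed pressure
            reward = -abs(simulate_pressure(nxt) - observed)
            Q[state, a] += alpha * (reward + gamma * Q[nxt].max() - Q[state, a])
            state = nxt

    print("best guess:", int(Q.max(axis=1).argmax()), "true level:", TRUE_LEVEL)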


A trusty robot to carry farms into the future

ZDNet

Farming is a tough business. Global food demand is surging, with as many as 10 billion mouths to feed by 2050. At the same time, environmental challenges and labor limitations have made the future uncertain for agricultural managers. A new company called Future Acres proposes to enable farmers to do more with less through the power of robots. The company, helmed by CEO Suma Reddy, who previously served as COO and co-founder at Farmshelf and has held multiple leadership roles at companies in the agtech space, has created Carry, an autonomous, electric robotic harvest companion that helps farmers gather hand-picked crops faster and with less physical demand. Automation has been playing an increasingly large role in agriculture, and agricultural robots are widely expected to play a critical role in food production going forward.


Physics-constrained deep learning of building thermal dynamics

AIHub

Energy-efficient buildings are one of the top priorities for sustainably addressing global energy demand and reducing CO2 emissions. Advanced control strategies for buildings have been identified as a potential solution, with a projected energy-saving potential of up to 28%. However, the main bottleneck of model-free methods such as reinforcement learning (RL) is their sampling inefficiency and consequent need for large datasets, which are costly to obtain and often unavailable in engineering practice. On the other hand, model-based methods such as model predictive control (MPC) suffer from the large cost of developing a physics-based model of a building's thermal dynamics. We address the challenge of developing cost- and data-efficient predictive models of a building's thermal dynamics via physics-constrained deep learning.
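
As a rough illustration of the general idea (not the authors' exact architecture), one can train a neural state-space model of zone temperatures with an added penalty that discourages physically implausible predictions. Everything below, from the network sizes to the temperature bounds, is an assumed toy setup.

    import torch
    import torch.nn as nn

    class ThermalModel(nn.Module):
        def __init__(self, n_state=4, n_input=2):
            super().__init__()
            self.f = nn.Sequential(
                nn.Linear(n_state + n_input, 32), nn.ReLU(),
                nn.Linear(32, n_state))

        def forward(self, x, u):
            # one-step prediction of zone temperatures given control inputs u
            return x + self.f(torch.cat([x, u], dim=-1))

    model = ThermalModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    T_MIN, T_MAX = 0.0, 45.0      # assumed plausible indoor temperature range

    def loss_fn(x, u, x_next):
        pred = model(x, u)
        mse = ((pred - x_next) ** 2).mean()
        # physics penalty: predicted temperatures must stay inside physical bounds
        bound = (torch.relu(T_MIN - pred) + torch.relu(pred - T_MAX)).mean()
        return mse + 10.0 * bound

    # one toy training step on random stand-in data
    x = torch.randn(64, 4) + 20.0
    u = torch.randn(64, 2)
    x_next = torch.randn(64, 4) + 20.0
    opt.zero_grad()
    loss_fn(x, u, x_next).backward()
    opt.step()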


Soft robots for ocean exploration and offshore operations: A perspective

Robohub

Most of the ocean is unknown. Yet we know that the most challenging environments on the planet reside in it. Understanding the ocean in its totality is a key component of the sustainable development of human activities and of the mitigation of climate change, as proclaimed by the United Nations. We are glad to share our perspective on the role of soft robots in ocean exploration and offshore operations at the outset of the ocean decade (2021-2030). In this study by the Soft Systems Group (part of The School of Engineering at The University of Edinburgh), we focus on the two ends of the water column: the abyss and the surface.


Consistency of random-walk based network embedding algorithms

arXiv.org Machine Learning

Random-walk based network embedding algorithms like node2vec and DeepWalk are widely used to obtain Euclidean representations of the nodes in a network prior to performing downstream network inference tasks. Nevertheless, despite their impressive empirical performance, there is a lack of theoretical results explaining their behavior. In this paper we study the node2vec and DeepWalk algorithms through the lens of matrix factorization. We analyze these algorithms in the setting of community detection for stochastic blockmodel graphs; in particular, we establish large-sample error bounds and prove consistent community recovery of node2vec/DeepWalk embedding followed by k-means clustering. Our theoretical results indicate a subtle interplay between the sparsity of the observed networks, the window sizes of the random walks, and the convergence rates of the node2vec/DeepWalk embedding toward the embedding of the true but unknown edge probability matrix. More specifically, as the network becomes sparser, our results suggest using larger window sizes, or equivalently, taking longer random walks, in order to attain a better convergence rate for the resulting embeddings. The paper includes numerical experiments corroborating these observations.
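
To make the matrix-factorization perspective concrete, here is a toy sketch: generate a two-block stochastic blockmodel graph, factorise a window-averaged random-walk transition matrix (a simple stand-in for the matrix that node2vec/DeepWalk implicitly factorise), and cluster the resulting embedding with k-means. The graph size, edge probabilities, window length, and embedding dimension are illustrative choices, not the paper's settings.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    n, p, q = 200, 0.10, 0.02            # two blocks; within/between edge probs
    labels = np.repeat([0, 1], n // 2)
    probs = np.where(labels[:, None] == labels[None, :], p, q)
    A = (rng.random((n, n)) < probs).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                           # symmetric adjacency, no self-loops

    deg = np.maximum(A.sum(axis=1), 1.0)
    P = A / deg[:, None]                  # random-walk transition matrix
    window = 5                            # longer walks help on sparser graphs
    M = sum(np.linalg.matrix_power(P, t) for t in range(1, window + 1)) / window

    # rank-2 factorization of the window-averaged matrix -> 2-d embedding
    U, S, _ = np.linalg.svd(M)
    X = U[:, :2] * np.sqrt(S[:2])

    pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    acc = max((pred == labels).mean(), (pred != labels).mean())
    print(f"community recovery accuracy: {acc:.2f}")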


MPC-MPNet: Model-Predictive Motion Planning Networks for Fast, Near-Optimal Planning under Kinodynamic Constraints

arXiv.org Artificial Intelligence

Kinodynamic Motion Planning (KMP) is the problem of finding a robot motion subject to concurrent kinematic and dynamic constraints. To date, quite a few methods address KMP problems, but existing approaches struggle to find near-optimal solutions and exhibit high computational complexity as the planning-space dimensionality increases. To address these challenges, we present a scalable, imitation-learning-based Model-Predictive Motion Planning Networks framework that quickly finds near-optimal path solutions with worst-case theoretical guarantees under kinodynamic constraints for practical underactuated systems. Our framework introduces two algorithms built on a neural generator, a discriminator, and a parallelizable Model Predictive Controller (MPC). The generator outputs various informed states towards the given target, and the discriminator selects the best possible subset of them for the extension. The MPC locally connects the selected informed states while satisfying the given constraints, leading to feasible, near-optimal solutions. We evaluate our algorithms on a range of cluttered, kinodynamically constrained, and underactuated planning problems, with results indicating significant improvements in computation times, path qualities, and success rates over existing methods.
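
The loop below sketches how the three components described above could fit together. The generator, discriminator, and MPC step are toy stand-ins for the learned networks and the real controller, and the 2D point-mass setting is purely an assumption for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    GOAL = np.array([5.0, 5.0])

    def generator(state, n_samples=8):
        # stand-in for the neural generator: propose informed states biased
        # towards the goal
        direction = (GOAL - state) / (np.linalg.norm(GOAL - state) + 1e-9)
        return state + direction + 0.5 * rng.standard_normal((n_samples, 2))

    def discriminator(candidates):
        # stand-in for the neural discriminator: score candidates by goal distance
        return np.linalg.norm(candidates - GOAL, axis=1)

    def mpc_connect(state, target):
        # stand-in for the parallelizable MPC: bounded step towards the selected
        # informed state, mimicking a constraint-respecting local connection
        return state + np.clip(target - state, -0.8, 0.8)

    state, path = np.zeros(2), [np.zeros(2)]
    for _ in range(50):
        candidates = generator(state)
        best = candidates[discriminator(candidates).argmin()]
        state = mpc_connect(state, best)
        path.append(state)
        if np.linalg.norm(state - GOAL) < 0.5:
            break
    print(f"reached goal region in {len(path) - 1} extensions")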


Non-intrusive surrogate modeling for parametrized time-dependent PDEs using convolutional autoencoders

arXiv.org Artificial Intelligence

This work presents a non-intrusive surrogate modeling scheme based on machine learning for the predictive modeling of complex systems described by parametrized time-dependent PDEs. For these problems, typical finite element approaches involve the spatiotemporal discretization of the PDE and the solution of the corresponding linear system of equations at each time step. Instead, the proposed method utilizes a convolutional autoencoder in conjunction with a feed-forward neural network to establish a low-cost and accurate mapping from the problem's parametric space to its solution space. For this purpose, time-history response data are collected by solving the high-fidelity model via FEM for a reduced set of parameter values. Then, by applying the convolutional autoencoder to this data set, a low-dimensional representation of the high-dimensional solution matrices is provided by the encoder, while the reconstruction map is obtained by the decoder. Using the latent representation given by the encoder, a feed-forward neural network is efficiently trained to map points from the problem's parametric space to the compressed version of the respective solution matrices. This way, the encoded response of the system at new parameter values is given by the neural network, while the entire response is delivered by the decoder. This approach effectively bypasses the need to serially formulate and solve the system's governing equations at each time increment, thus resulting in a significant cost reduction and rendering the method ideal for problems requiring repeated model evaluations or 'real-time' computations. The elaborated methodology is demonstrated on the stochastic analysis of time-dependent PDEs solved with the Monte Carlo method; however, it can be straightforwardly applied to other similar-type problems, such as sensitivity analysis and design optimization.
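
A minimal sketch of this two-stage pipeline might look as follows, with assumed shapes and random stand-in data in place of real FEM snapshots: a convolutional autoencoder compresses solution fields, and a feed-forward network maps PDE parameters to the latent codes so that new responses can be decoded without a solve. The parameter dimension, grid size, and layer sizes are all illustrative assumptions.

    import torch
    import torch.nn as nn

    LATENT = 16

    encoder = nn.Sequential(
        nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),    # 32x32 -> 16x16
        nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),   # 16x16 -> 8x8
        nn.Flatten(), nn.Linear(16 * 8 * 8, LATENT))

    decoder = nn.Sequential(
        nn.Linear(LATENT, 16 * 8 * 8), nn.ReLU(),
        nn.Unflatten(1, (16, 8, 8)),
        nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1))

    # feed-forward map from PDE parameters (dimension 3 assumed) to latent codes
    param_to_latent = nn.Sequential(
        nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, LATENT))

    # stage 1: train encoder/decoder to reconstruct high-fidelity snapshots
    snapshots = torch.randn(32, 1, 32, 32)        # stand-in for FEM solutions
    recon_loss = ((decoder(encoder(snapshots)) - snapshots) ** 2).mean()

    # stage 2: fit the parameter network to the (frozen) encoder's latent codes
    params = torch.randn(32, 3)                   # stand-in parameter samples
    codes = encoder(snapshots).detach()
    latent_loss = ((param_to_latent(params) - codes) ** 2).mean()
    # (optimisation loops over recon_loss and latent_loss omitted for brevity)

    # prediction: full response for new parameters with no FEM solve
    new_field = decoder(param_to_latent(torch.randn(5, 3)))
    print(new_field.shape)                        # torch.Size([5, 1, 32, 32])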