Harnessing the Potential of Artificial Intelligence in Energy and Oil & Gas


The energy industry has undergone a rapid transformation in recent years, owing to the growing role of renewables and to data-driven models that are making the value chain smarter. Across the primary constituents of this sector, comprising coal, power, renewables, solar energy, oil, and gas, AI has a substantial role to play. The biggest recent disruption in power is the smart grid, which is far more flexible than the traditional grid. AI can be a major enabler here, for example by computing optimal grid configurations to create a genuinely smart and efficient grid. Through thorough analysis of loss-related data, AI can also help reduce transmission and distribution losses.

11 Awesome Disruptive Technology Examples 2019 (MUST READ)


The pace of innovation is incredibly fast, with new things being discovered daily. Artificial intelligence is a special type of intelligence exhibited by computers and other machines: a flexible agent that perceives its environment and takes the actions required to succeed at a particular task. The term is used when machines mimic the cognitive functions of the human brain, such as learning and problem solving. As machines become increasingly capable, tasks once thought to require intelligence are often removed from the definition.

Sequence to sequence deep learning models for solar irradiation forecasting (Machine Learning)

The energy output of a photovoltaic (PV) panel is a function of solar irradiation and weather parameters such as temperature and wind speed. A general measure of solar irradiation called Global Horizontal Irradiance (GHI), customarily reported in W/m$^2$, is a generic indicator for this intermittent energy resource. Accurate prediction of GHI is necessary for reliable grid integration of the renewable as well as for power market trading. While some machine learning techniques are well established alongside traditional time-series forecasting techniques, deep-learning techniques remain less explored for the task at hand. In this paper we present deep learning models suitable for sequence-to-sequence prediction of GHI. The deep learning models are reported for short-term forecasting (1-24 hours) alongside state-of-the-art techniques such as Gradient Boosted Regression Trees (GBRT) and Feed Forward Neural Networks (FFNN). We find that spatio-temporal features like wind direction, wind speed and the GHI of neighboring locations significantly improve the prediction accuracy of the deep learning models. Among the various sequence-to-sequence encoder-decoder models, the LSTM performed best, addressing shortcomings of the state-of-the-art techniques.
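As a minimal, hypothetical illustration of the sequence-to-sequence setup (not the paper's models or data), the sketch below slices an hourly GHI-like series into input/forecast windows and scores a 24-hour persistence baseline. The synthetic clear-sky signal and the window lengths are assumptions, not values from the paper.

```python
import numpy as np

def make_seq2seq_dataset(series, in_len=48, out_len=24):
    """Slice a univariate series into (input window, forecast window) pairs
    for sequence-to-sequence training, e.g. 48 h of GHI -> next 1-24 h."""
    X, Y = [], []
    for t in range(len(series) - in_len - out_len + 1):
        X.append(series[t:t + in_len])
        Y.append(series[t + in_len:t + in_len + out_len])
    return np.array(X), np.array(Y)

# Hypothetical hourly GHI-like signal (idealized daily cycle, W/m^2); the
# paper uses measured GHI plus weather covariates, which we omit here.
hours = np.arange(24 * 30)
ghi = np.maximum(0.0, np.sin((hours % 24 - 6) / 12 * np.pi)) * 800

X, Y = make_seq2seq_dataset(ghi, in_len=48, out_len=24)

# 24-hour persistence baseline: repeat the last observed day as the forecast.
persistence = X[:, -24:]
rmse = np.sqrt(np.mean((persistence - Y) ** 2))
```

On this perfectly periodic toy signal the persistence baseline is exact; on real GHI it is the standard reference that learned seq2seq models must beat.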

Video Friday: Massive Solar-Powered Drone, and More

IEEE Spectrum Robotics Channel

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!): Let us know if you have suggestions for next week, and enjoy today's videos. Soft-bubble, "a highly compliant dense geometry tactile sensor for robot manipulation," is a sort of combined gripper and 3D camera that uses a soft membrane to grasp and image objects at the same time. HAPS Mobile, a SoftBank-backed company, is developing a high-altitude pseudo satellite: a massive, solar-powered, long-endurance drone that acts like a much cheaper and more versatile satellite over a smaller area.

Comparison of statistical post-processing methods for probabilistic NWP forecasts of solar radiation (Machine Learning)

The increased usage of solar energy places additional importance on forecasts of solar radiation. Solar panel power production is primarily driven by the amount of solar radiation and it is therefore important to have accurate forecasts of solar radiation. Accurate forecasts that also give information on the forecast uncertainties can help users of solar energy to make better solar-radiation-based decisions related to the stability of the electrical grid. To achieve this, we apply statistical post-processing techniques that determine relationships between observations of global radiation (made within the KNMI network of automatic weather stations in the Netherlands) and forecasts of various meteorological variables from the numerical weather prediction (NWP) model HARMONIE-AROME (HA) and the atmospheric composition model CAMS. Those relationships are used to produce probabilistic forecasts of global radiation. We compare seven different statistical post-processing methods, consisting of two parametric and five non-parametric methods. We find that all methods are able to generate probabilistic forecasts that improve the raw global radiation forecast from HA according to the root mean squared error (on the median) and the potential economic value. Additionally, we show how important the predictors are in the different regression methods. We also compare the regression methods using various probabilistic scoring metrics, namely the continuous ranked probability skill score, the Brier skill score and reliability diagrams. We find that quantile regression and generalized random forests generally perform best. In (near) clear sky conditions the non-parametric methods have more skill than the parametric ones.
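The pinball (quantile) loss that underlies quantile-based verification of probabilistic radiation forecasts can be sketched as follows. This is a generic illustration, not the paper's evaluation code: the gamma-distributed "observations" are purely synthetic stand-ins for global radiation.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Average pinball (quantile) loss at level tau: penalizes
    under-prediction by tau and over-prediction by (1 - tau)."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

rng = np.random.default_rng(0)
# Synthetic, radiation-like positive observations (arbitrary units).
obs = rng.gamma(shape=2.0, scale=150.0, size=5000)

tau = 0.9
q_hat = np.quantile(obs, tau)                      # empirical 90th percentile
loss_at_quantile = pinball_loss(obs, q_hat, tau)   # minimized near the true quantile
loss_elsewhere = pinball_loss(obs, q_hat * 1.3, tau)
```

Because the empirical tau-quantile (approximately) minimizes this loss over constant forecasts, the pinball loss is a proper way to score each quantile of a probabilistic forecast separately.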

Transfer Learning Using Ensemble Neural Networks for Organic Solar Cell Screening (Machine Learning)

Organic solar cells are a promising technology for addressing the world's clean-energy crisis. However, generating candidate chemical compounds for solar cells is a time-consuming process requiring thousands of hours of laboratory analysis. For a solar cell, the most important property is the power conversion efficiency, which depends on the highest occupied molecular orbital (HOMO) values of the donor molecules. Recently, machine learning techniques have proved very useful in building predictive models for the HOMO values of donor structures of organic photovoltaic cells (OPVs). Since experimental datasets are limited in size, current machine learning models are trained on data derived from calculations based on density functional theory (DFT). Molecular line notations such as SMILES and InChI are popular input representations for describing the molecular structure of donor molecules. The two types of line representation encode different information: for example, SMILES defines the bond types while InChI defines protonation. In this work, we present an ensemble deep neural network architecture, called SINet, which harnesses both the SMILES and InChI molecular representations to predict HOMO values, and we leverage transfer learning from a sizeable DFT-computed dataset, Harvard CEP, to build more robust predictive models for the relatively smaller HOPV dataset. The Harvard CEP dataset contains molecular structures and properties for 2.3 million candidate donor structures for OPVs, while HOPV contains DFT-computed and experimental values for 350 and 243 molecules respectively. Our results demonstrate significant performance improvement from the use of transfer learning and from leveraging both molecular representations.
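A toy sketch of the transfer-learning idea (not SINet itself, and not real molecular data): pretrain on a large synthetic "CEP-like" dataset, then shrink the small-data fit toward the pretrained weights via a ridge prior. All names, sizes and data here are hypothetical stand-ins for the DFT-computed source task and the small HOPV-like target task.

```python
import numpy as np

rng = np.random.default_rng(1)

d = 20
w_source = rng.normal(size=d)                      # "CEP-like" source task
w_target = w_source + 0.1 * rng.normal(size=d)     # related "HOPV-like" task

X_big = rng.normal(size=(5000, d))
y_big = X_big @ w_source + 0.1 * rng.normal(size=5000)
X_small = rng.normal(size=(30, d))                 # scarce target data
y_small = X_small @ w_target + 0.1 * rng.normal(size=30)

def ridge(X, y, lam, w_prior):
    """Ridge regression shrinking toward w_prior (transfer via warm prior):
    minimizes ||Xw - y||^2 + lam * ||w - w_prior||^2."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y + lam * w_prior)

w_pre = ridge(X_big, y_big, 1.0, np.zeros(d))          # "pretrain" on big data
w_scratch = ridge(X_small, y_small, 1.0, np.zeros(d))  # small data only
w_transfer = ridge(X_small, y_small, 1.0, w_pre)       # fine-tune from pretrained

X_test = rng.normal(size=(2000, d))
y_test = X_test @ w_target
mse = lambda w: np.mean((X_test @ w - y_test) ** 2)
```

The transfer fit starts from a prior that is already close to the target task, so with only 30 target samples it typically outperforms training from scratch, mirroring the CEP-to-HOPV setup described above.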

Probabilistic Energy Forecasting using Quantile Regressions based on a new Nearest Neighbors Quantile Filter (Machine Learning)

Parametric quantile regressions are a useful tool for creating probabilistic energy forecasts. Nonetheless, since classical quantile regressions are trained using a non-differentiable cost function, creating them with complex data mining techniques (e.g., artificial neural networks) can be complicated. This article presents a method that uses a new nearest neighbors quantile filter to obtain quantile regressions independently of the underlying data mining technique and without the non-differentiable cost function. Thereafter, the presented method is validated on the dataset of the Global Energy Forecasting Competition of 2014. The results show that the method solves the competition's task with accuracy and runtime similar to the competition's winner, but on a much less powerful computer. This property may be relevant for an online forecasting service in which probabilistic forecasts must be computed quickly on modest hardware.
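A minimal sketch of the nearest-neighbors quantile-filter idea as described above: replace each sample's target with a quantile of its k nearest neighbors' targets, so that a standard regressor trained with ordinary MSE on the filtered targets approximates a quantile regression without touching the non-differentiable pinball loss. The data, k and tau below are arbitrary assumptions, not the paper's settings.

```python
import numpy as np

def nn_quantile_filter(X, y, k, tau):
    """For each sample, replace its target with the tau-quantile of the
    targets of its k nearest neighbors (Euclidean distance, brute force)."""
    y_f = np.empty_like(y, dtype=float)
    for i in range(len(X)):
        dist = np.linalg.norm(X - X[i], axis=1)
        nn = np.argsort(dist)[:k]          # includes the point itself
        y_f[i] = np.quantile(y[nn], tau)
    return y_f

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(500, 1))
y = 10 * X[:, 0] + rng.normal(0, 1, size=500)   # noisy linear signal

# Filtered targets for the 90% quantile; any MSE-trained regressor fit on
# (X, y90) now approximates the conditional 90th percentile.
y90 = nn_quantile_filter(X, y, k=50, tau=0.9)
```

Since tau = 0.9 sits above the conditional median, the filtered targets lie systematically above the raw ones, which is exactly what lets a plain MSE learner recover an upper quantile.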

Google will groom these 10 Indian startups that use AI and machine learning


Google just announced the 10 startups shortlisted for the second round of its Launchpad Accelerator program in India. All of the startups on the list use artificial intelligence and machine learning in their products. The program kicks off today with a one-week mentorship boot camp organized by Google in Bengaluru, followed by more classes in April and May to address more specific issues, lasting a total of three months. Aside from guidance, Google will also provide support for AI and ML, cloud computing, user interface development, the Android platform, online presence, product strategy and marketing.

Deep Distribution Regression (Machine Learning)

In recent years, a variety of machine learning methods, such as random forests, gradient boosting trees and neural networks, have gained popularity and been widely adopted. These methods are often flexible enough to uncover complex relationships in high-dimensional data without strong assumptions on the underlying data structure. Off-the-shelf software is available to put these algorithms into production [Pedregosa et al. (2011), Abadi et al. (2016) and Paszke et al. (2017)]. However, in regression and forecasting tasks, many of the machine learning methods only provide a point estimate, without any additional information regarding the uncertainty of the target quantity. Understanding uncertainties is often crucial in fields such as financial markets and risk analysis [Diebold et al. (1997), Timmermann (2000)], population and demographic studies [Wilson and Bell (2007)], transportation and traffic analysis [Zhu and Laptev (2017), Rodrigues and Pereira (2018)] and energy forecasting [Hong et al. (2016)].
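One simple way to turn a point learner into a distribution forecaster, in the spirit of distribution regression, is to discretize the target into bins and predict bin probabilities; quantiles can then be read off the resulting histogram forecast. The sketch below uses an empirical histogram as a stand-in for a trained classifier's predicted probabilities; the bin edges and Gaussian data are assumptions for illustration only.

```python
import numpy as np

edges = np.linspace(0.0, 10.0, 21)   # 20 equal-width target bins
rng = np.random.default_rng(3)
y = rng.normal(5.0, 1.0, size=10000)

# Bin index per sample (clipped so out-of-range values land in edge bins);
# a classifier would be trained on these labels instead of raw targets.
labels = np.clip(np.digitize(y, edges) - 1, 0, len(edges) - 2)
# Histogram "forecast": stand-in for the classifier's class probabilities.
probs = np.bincount(labels, minlength=len(edges) - 1) / len(y)

def hist_quantile(probs, edges, tau):
    """Read the tau-quantile off a binned predictive distribution: the
    upper edge of the bin where the cumulative probability crosses tau."""
    cdf = np.cumsum(probs)
    j = np.searchsorted(cdf, tau)
    return edges[j + 1]

median = hist_quantile(probs, edges, 0.5)
```

The recovered median is accurate only up to the bin width, which is the basic resolution trade-off of binning-based distribution regression.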

Machine learning used to identify high-performing solar materials


Finding the best light-harvesting chemicals for use in solar cells can feel like searching for a needle in a haystack. Over the years, researchers have developed and tested thousands of different dyes and pigments to see how they absorb sunlight and convert it to electricity. Sorting through all of them requires an innovative approach. Now, thanks to a study that combines the power of supercomputing with data science and experimental methods, researchers at the U.S. Department of Energy's (DOE) Argonne National Laboratory and the University of Cambridge in England have developed a novel "design to device" approach to identify promising materials for dye-sensitized solar cells (DSSCs). DSSCs can be manufactured with low-cost, scalable techniques, allowing them to reach competitive performance-to-price ratios.