power system


Autonomous energy grids project envisions 'self-driving power system'

#artificialintelligence

A team at the US National Renewable Energy Laboratory (NREL) is working on autonomous energy grid (AEG) technology to ensure the electricity grid of the future can manage a growing base of intelligent energy devices, variable renewable energy, and advanced controls. "The future grid will be much more distributed and too complex to control with today's techniques and technologies," said Benjamin Kroposki, director of NREL's Power Systems Engineering Center. "We need a path to get there, to reach the potential of all these new technologies integrating into the power system." The AEG effort envisions a self-driving power system: a highly "aware" network of technologies and distributed controls that work together to efficiently match bidirectional energy supply to energy demand. This is a hard pivot from today's system, in which centralized control manages one-way electricity flows to consumers along power lines that radiate, spoke-like, from central generators.


Meta-heuristic for non-homogeneous peak density spaces and implementation on 2 real-world parameter learning/tuning applications

arXiv.org Artificial Intelligence

The observer effect in physics (and psychology) refers to bias in measurement (or perception) caused by the interference of the instrument (or of prior knowledge). Building on these concepts, a new meta-heuristic algorithm is proposed that controls memory usage per locality without resorting to Tabu-like cut-off approaches. The paper first explains variations of the observer effect across branches of science, from physics to psychology. It then proposes a meta-heuristic algorithm based on observer-effect concepts and explains the metrics used. The derived optimizer's performance is compared between (1) non-homogeneous-peaks-density functions and (2) homogeneous-peaks-density functions, verifying that the algorithm outperforms in the first setting. Finally, the performance of the new algorithm is analyzed on two real-world engineering applications, Electroencephalogram feature learning and Distributed Generator parameter tuning, each of which is characterized by nonlinearity and complex multi-modal peak distributions. The effect of version improvements is also assessed. Comparison against other optimizers in the same context suggests that the proposed algorithm is useful both on its own and in hybrid Gradient Descent settings where the problem's search space is non-homogeneous in local-peak density.


Region of Attraction for Power Systems using Gaussian Process and Converse Lyapunov Function -- Part I: Theoretical Framework and Off-line Study

arXiv.org Machine Learning

This paper introduces a novel framework for constructing the region of attraction (ROA) of a power system around a stable equilibrium using stable state trajectories of the system dynamics. Most existing works on estimating the ROA rely on analytical Lyapunov functions, which are subject to two limitations: an analytic Lyapunov function may not always be readily available, and the resulting ROA may be overly conservative. This work overcomes both limitations by leveraging the converse Lyapunov theorem from control theory to eliminate the need for an analytic Lyapunov function, learning the unknown Lyapunov function instead with a Gaussian Process (GP) approach. In addition, a Gaussian Process Upper Confidence Bound (GP-UCB) based sampling algorithm is designed to reconcile the trade-off between exploitation (enlarging the ROA) and exploration (reducing the uncertainty of the sampling region). Within the constructed ROA, the system state is guaranteed, with a prescribed confidence level, to converge to the stable equilibrium. Numerical simulations validate the assessment approach for the ROA of the single-machine infinite-bus system and the New England 39-bus system, and demonstrate that the approach can significantly enlarge the estimated ROA compared to its analytic Lyapunov counterpart.
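The GP-UCB sampling idea in the abstract can be sketched in a few lines: fit a Gaussian process to values observed at sampled states, then pick the next sample where "mean + beta * std" is largest. The following is a minimal, illustrative implementation, not the paper's code; the RBF kernel, length-scale, noise level, and beta are assumptions, and the function being modeled merely stands in for the learned Lyapunov-like quantity.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    # squared-exponential kernel between two sets of states
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-6, ls=1.0):
    """GP posterior mean and std at query points Xs, given samples (X, y)."""
    K = rbf(X, X, ls) + noise * np.eye(len(X))
    Ks = rbf(X, Xs, ls)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf(Xs, Xs, ls)) - (v ** 2).sum(0)
    return mu, np.sqrt(np.maximum(var, 0.0))

def gp_ucb_next(X, y, candidates, beta=2.0):
    """UCB acquisition: sample next where the optimistic estimate is largest."""
    mu, sd = gp_posterior(X, y, candidates)
    return candidates[int(np.argmax(mu + beta * sd))]
```

The beta parameter controls the exploitation/exploration trade-off the abstract describes: larger beta favors uncertain regions, smaller beta favors regions already believed to have high value.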


Adaptive Power System Emergency Control using Deep Reinforcement Learning

arXiv.org Machine Learning

Power system emergency control is generally regarded as the last safety net for grid security and resiliency. Existing emergency control schemes are usually designed off-line, based either on a conceived "worst-case" scenario or on a few typical operating scenarios. Such schemes face significant adaptiveness and robustness issues as uncertainty and variability increase in modern electrical grids. To address these challenges, this paper develops, for the first time, adaptive emergency control schemes using deep reinforcement learning (DRL), leveraging DRL's high-dimensional feature extraction and non-linear generalization capabilities for complex power systems. Furthermore, an open-source platform named RLGC has been designed to assist the development and benchmarking of DRL algorithms for power system control. Details of the platform and of DRL-based emergency control schemes for generator dynamic braking and under-voltage load shedding are presented. Extensive case studies on both the two-area, four-machine system and the IEEE 39-bus system demonstrate the excellent performance and robustness of the proposed schemes.
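As a toy illustration of the reinforcement-learning load-shedding idea (emphatically not the paper's method, which uses deep networks and realistic grid simulation via RLGC), here is tabular Q-learning on an invented one-bus voltage model. The states, rewards, disturbance model, and all constants are assumptions made purely for the sketch: the agent learns that shedding a load block is worth its cost when voltage is low, because it avoids the much larger collapse penalty.

```python
import random

# Hypothetical environment: state is a discretized bus voltage level 0..4
# (0 = severe under-voltage, 4 = nominal). Action 0 = do nothing,
# action 1 = shed one load block (raises voltage, incurs cost).
def step(state, action, rng):
    reward = 0.0
    if action == 1:
        state = min(4, state + 2)        # shedding recovers voltage
        reward -= 1.0                    # cost of lost load
    if rng.random() < 0.5:               # random disturbance pulls voltage down
        state = max(0, state - 1)
    if state == 0:
        reward -= 10.0                   # voltage-collapse penalty
    return state, reward

def train(episodes=2000, horizon=20, alpha=0.2, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(5)]   # Q[state][action]
    for _ in range(episodes):
        s = 4
        for _ in range(horizon):
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] >= Q[s][1] else 1
            s2, r = step(s, a, rng)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

After training, the greedy policy sheds load in the low-voltage states and leaves the system alone near nominal voltage, which is the adaptive behavior the abstract's DRL schemes aim for at a far larger scale.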


A tractable ellipsoidal approximation for voltage regulation problems

arXiv.org Machine Learning

We present a machine learning approach to solving chance-constrained optimization problems in the context of voltage regulation in power system operation. The novelty of our approach lies in approximating the feasible region of the uncertainty with an ellipsoid. We formulate the problem using a learning model similar to Support Vector Machines (SVM) and propose a sampling algorithm that efficiently trains the model. We demonstrate the approach on a voltage regulation problem using standard IEEE distribution test feeders.
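A crude version of the ellipsoidal idea can be sketched as follows: given sampled feasible operating points, fit an ellipsoid that encloses them. This moment-matching fit is only a stand-in for the paper's SVM-style training, and the coverage parameter and regularization constant are assumptions.

```python
import numpy as np

def fit_ellipsoid(feasible, coverage=1.0):
    """Fit an ellipsoid {x : (x - c)^T A (x - c) <= 1} around feasible samples.
    Center = sample mean; shape = inverse covariance; scaled so that the
    requested fraction of the samples lies inside."""
    c = feasible.mean(axis=0)
    cov = np.cov(feasible.T) + 1e-9 * np.eye(feasible.shape[1])
    A = np.linalg.inv(cov)
    # quadratic form of every sample, then scale to the coverage quantile
    d = np.einsum('ij,jk,ik->i', feasible - c, A, feasible - c)
    r2 = np.quantile(d, coverage)
    return c, A / r2

def inside(x, c, A):
    """Membership test for the fitted ellipsoid (small float tolerance)."""
    return float((x - c) @ A @ (x - c)) <= 1.0 + 1e-9
```

A tractable set representation like this is what makes the chance constraint usable inside an optimization: checking a candidate operating point reduces to evaluating one quadratic form.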


Cause Identification of Electromagnetic Transient Events using Spatiotemporal Feature Learning

arXiv.org Machine Learning

This paper presents a spatiotemporal unsupervised feature learning method for cause identification of electromagnetic transient events (EMTE) in power grids. The proposed method is formulated around the availability of time-synchronized high-frequency measurements and uses a convolutional neural network (CNN) as the spatiotemporal feature representation, together with a softmax function for classification. Unlike existing threshold-based or energy-based event analysis methods, such as the support vector machine (SVM), autoencoder, and tapered multi-layer perceptron (t-MLP) neural network, the proposed feature learning is carried out over both time and space. The effectiveness of the feature learning and the subsequent cause identification is validated through EMTP simulation of events including line energization, capacitor bank energization, lightning, faults, and high-impedance faults in the IEEE 30-bus system, and through real-time digital simulation (RTDS) of the WSCC 9-bus system.
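The shape of such a pipeline, convolving over a buses-by-time measurement window so that each filter responds to a joint spatial/temporal pattern, then classifying with a softmax, can be sketched in plain NumPy. This is not the paper's architecture: the kernel sizes, single conv layer, ReLU plus global max-pooling, and random weights below are all invented for illustration.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                      # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

def conv2d_valid(x, k):
    """Naive 2-D 'valid' correlation over a (buses x time) window."""
    H, W = x.shape
    h, w = k.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
    return out

def features(x, kernels):
    # one spatiotemporal feature per kernel: ReLU then global max-pool
    return np.array([np.maximum(conv2d_valid(x, k), 0.0).max() for k in kernels])

def classify(x, kernels, W, b):
    """Softmax over event-cause classes from the pooled conv features."""
    return softmax(W @ features(x, kernels) + b)
```

Because each kernel spans several buses and several time steps at once, a feature fires only when a pattern appears jointly in space and time, which is the contrast the abstract draws with purely threshold- or energy-based detectors.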


IBM Rolls Out Big Customers At Think 2019 Using AI, ML, DL On Power Systems

#artificialintelligence

Morgan Stanley was another customer that showcased its work with IBM Power Systems at the event. Morgan Stanley executive director Marcelo Labre, speaking with IBM's Sumit Gupta, said that IBM Power Systems' computing power and AI-readiness are enabling the organization to explore new AI/ML use cases in finance, with the overall goals of increased efficiency and better alignment with customer needs. For example, Labre elaborated at THINK 2019 on how his organization is using AI to challenge outdated risk models. Using AI to improve risk models is a common theme I hear over and over in the industry: you truly need big data to do this well, and Power fits the bill.


Make data ready for AI with ICP for Data on Power Systems - IBM IT Infrastructure Blog

#artificialintelligence

As artificial intelligence (AI) capabilities mature, enterprise leaders are continuously evaluating use cases that can transform their businesses. A key challenge slowing AI adoption is data that is abundant but untamed, and therefore not ready for AI. There is a strong correlation between companies that outperform in AI adoption and those with a robust data infrastructure aligned with their business architecture. According to the 2018 IBM Business Value survey, Shifting Toward Enterprise-grade AI, 65 percent of outperformers surveyed capture, manage, and access business, technology, and operational information on key corporate data with a high degree of consistency across the organization, versus 52 percent of all others surveyed. IBM recently introduced IBM Cloud Private for Data (ICP4D), a data and analytics platform, to help make your data estate ready for AI.


IBM Mashes Up PowerAI And Watson Machine Learning Stacks

#artificialintelligence

Earlier in this decade, when the hyperscalers and the academics who run with them were building machine learning frameworks to transpose all kinds of data from one format to another – speech to text, text to speech, image to text, video to text, and so on – they were doing so not just out of scientific curiosity. They were trying to solve real business problems and address the needs of customers using their software. At the same time, IBM was trying to solve a different problem: creating a question-answer system that would anthropomorphize the search engine. This effort, known inside IBM as Project Blue J (not to be confused with the open source BlueJ integrated development environment for Java), was wrapped up into a software stack called DeepQA. The DeepQA stack was based on the open source Hadoop unstructured data storage and analytics engine that came out of Yahoo, and on another project called Apache UIMA, which predates Hadoop by several years and was designed by IBM database experts in the early 2000s to process unstructured data like text, audio, and video.


Learning to Solve Large-Scale Security-Constrained Unit Commitment Problems

arXiv.org Machine Learning

Security-Constrained Unit Commitment (SCUC) is a fundamental problem in power systems and electricity markets. In practical settings, SCUC is repeatedly solved via Mixed-Integer Linear Programming (MIP), sometimes multiple times per day, with only minor changes in the input data. In this work, we propose a number of machine learning (ML) techniques to effectively extract information from previously solved instances in order to significantly improve the computational performance of MIP solvers when solving similar instances in the future. Based on statistical data, we predict redundant constraints in the formulation, good initial feasible solutions, and affine subspaces where the optimal solution is likely to lie, leading to a significant reduction in problem size. Computational results on a diverse set of realistic, large-scale instances show that, using the proposed techniques, SCUC can be solved on average 12 times faster than with conventional methods, with no negative impact on solution quality.
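The simplest of the ideas described, extracting statistics from previously solved instances, can be sketched as: across past optimal commitment schedules, propose fixing binary variables that almost always take the same value, and use a majority-vote schedule as a warm start. The flat 0/1 encoding, the threshold, and the function names below are assumptions for illustration; a real implementation would still have to verify any fixing and repair the warm start against the new instance's constraints.

```python
def predict_fixings(past_solutions, threshold=0.95):
    """From past optimal 0/1 commitment schedules (one flat list per solved
    instance, indexed by generator-hour), propose variables to fix: those
    taking the same value in at least `threshold` of past instances."""
    n = len(past_solutions[0])
    fixings = {}
    for j in range(n):
        frac = sum(sol[j] for sol in past_solutions) / len(past_solutions)
        if frac >= threshold:
            fixings[j] = 1
        elif frac <= 1.0 - threshold:
            fixings[j] = 0
    return fixings

def warm_start(past_solutions):
    """Majority-vote schedule as an initial guess for the MIP solver
    (must be checked/repaired for feasibility on the new instance)."""
    n = len(past_solutions[0])
    m = len(past_solutions)
    return [1 if sum(s[j] for s in past_solutions) / m >= 0.5 else 0
            for j in range(n)]
```

Fixing near-constant binaries shrinks the search tree, and a good incumbent lets the solver prune early, which is how this kind of statistical preprocessing translates into the speedups the abstract reports.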