Civil and maritime engineering systems, from bridges to offshore platforms and wind turbines, must be efficiently managed as they are exposed to deterioration mechanisms, such as fatigue or corrosion, throughout their operational life. Identifying optimal inspection and maintenance policies demands the solution of a complex sequential decision-making problem under uncertainty, with the main objective of efficiently controlling the risk associated with structural failures. To address this complexity, risk-based inspection planning methodologies, often supported by dynamic Bayesian networks, evaluate a set of pre-defined heuristic decision rules to reasonably simplify the decision problem. However, the resulting policies may be compromised by the limited space considered in the definition of the decision rules. To avoid this limitation, Partially Observable Markov Decision Processes (POMDPs) provide a principled mathematical methodology for stochastic optimal control under uncertain action outcomes and observations, in which optimal actions are prescribed as a function of the entire, dynamically updated, state probability distribution. In this paper, we combine dynamic Bayesian networks with POMDPs in a joint framework for optimal inspection and maintenance planning, and we provide the formulation for developing both infinite- and finite-horizon POMDPs in a structural reliability context. The proposed methodology is implemented and tested for the case of a structural component subject to fatigue deterioration, demonstrating the capability of state-of-the-art point-based POMDP solvers for solving the underlying planning optimization problem. Within the numerical experiments, POMDP-based and heuristic-based policies are thoroughly compared, and the results showcase that POMDPs achieve substantially lower costs than their heuristic counterparts, even for traditional problem settings.
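As a concrete illustration of the dynamically updated state probability distribution mentioned above, the sketch below performs one Bayesian belief update for a discretized deterioration model. The transition and observation matrices are hypothetical placeholders invented for illustration, not the paper's calibrated fatigue model.

```python
import numpy as np

# Hypothetical discretized deterioration model with 3 damage states
# (intact, cracked, failed). T[a] is the state-transition matrix under
# action a; O gives inspection-outcome likelihoods per true state.
T = {"do_nothing": np.array([[0.90, 0.09, 0.01],
                             [0.00, 0.92, 0.08],
                             [0.00, 0.00, 1.00]]),
     "repair":     np.array([[1.00, 0.00, 0.00],
                             [0.95, 0.04, 0.01],
                             [0.90, 0.08, 0.02]])}
# Rows = true state, columns = observation ("no detection", "detection").
O = np.array([[0.95, 0.05],
              [0.30, 0.70],
              [0.05, 0.95]])

def belief_update(b, a, obs):
    """One Bayesian filtering step: predict with T[a], correct with O."""
    b_pred = b @ T[a]             # prior belief after taking action a
    b_post = b_pred * O[:, obs]   # reweight by observation likelihood
    return b_post / b_post.sum()  # normalize to a probability vector

b0 = np.array([1.0, 0.0, 0.0])               # start certain: intact
b1 = belief_update(b0, "do_nothing", obs=1)  # inspection finds damage
```

A POMDP policy maps this belief vector, rather than a point estimate of the damage state, to the next inspection or maintenance action.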
The robot possesses an infrared thermal imager and a visible-light camera, giving it the ability to replace 24-hour manual inspection. Artificial intelligence is about to trigger explosive changes in our lives, work, and leisure, but few understand what the technology can do beyond Amazon's (AMZN) Alexa or Apple's (AAPL) Siri. These are examples of virtual-assistant or 'weak AI' technology, the most common example of AI application. But in the data-driven energy sector, sophisticated machine learning is paving the way for 'strong AI' to improve efficiency, forecasting, trading, and user accessibility. Electricity is a commodity that can be bought, sold, and traded in open markets.
This paper investigates the optimization problem of an infinite-stage, discrete-time Markov decision process (MDP) with a long-run average metric that considers both the mean and the variance of rewards. Such a performance metric is important since the mean indicates average returns and the variance indicates risk or fairness. However, because the variance metric couples the rewards at all stages, traditional dynamic programming is inapplicable, as the principle of time consistency fails. We study this problem from a new perspective called the sensitivity-based optimization theory. A performance difference formula is derived that quantifies the difference of the mean-variance combined metrics of MDPs under any two different policies. The difference formula can be utilized to generate new policies with strictly improved mean-variance performance. A necessary condition of the optimal policy and the optimality of deterministic policies are derived. We further develop an iterative algorithm with a form of policy iteration, which is proved to converge to local optima both in the mixed and randomized policy spaces. In particular, when the mean reward is constant over policies, the algorithm is guaranteed to converge to the global optimum. Finally, we apply our approach to study the fluctuation reduction of wind power in an energy storage system, which demonstrates the potential applicability of our optimization method.
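As a toy illustration of the combined metric, the sketch below evaluates the long-run mean and variance of rewards for every deterministic policy of a hypothetical two-state, two-action MDP and picks the best by brute-force enumeration. All numbers are invented, and the enumeration merely stands in for, rather than implements, the sensitivity-based policy-iteration algorithm described above.

```python
import itertools
import numpy as np

# Toy 2-state MDP: P[a][s] is the next-state distribution and
# r[a][s] the reward for action a in state s (hypothetical numbers).
P = {0: np.array([[0.8, 0.2], [0.3, 0.7]]),
     1: np.array([[0.5, 0.5], [0.9, 0.1]])}
r = {0: np.array([1.0, 4.0]),
     1: np.array([2.0, 3.0])}
beta = 0.5  # weight trading variance off against the mean

def stationary(Ppi):
    """Stationary distribution of an ergodic chain (Perron eigenvector)."""
    w, v = np.linalg.eig(Ppi.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

def mean_variance_metric(policy):
    """Long-run mean reward minus beta times the long-run variance."""
    Ppi = np.vstack([P[policy[s]][s] for s in range(2)])
    rpi = np.array([r[policy[s]][s] for s in range(2)])
    pi = stationary(Ppi)
    mean = pi @ rpi
    var = pi @ (rpi - mean) ** 2
    return mean - beta * var

best = max(itertools.product([0, 1], repeat=2), key=mean_variance_metric)
```

Note how a high-mean policy can lose to a steadier one once the variance penalty is applied; that trade-off is exactly what defeats standard dynamic programming.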
Determination of inspection and maintenance policies for minimizing long-term risks and costs in deteriorating engineering environments constitutes a complex optimization problem. Major computational challenges include the (i) curse of dimensionality, due to exponential scaling of state/action set cardinalities with the number of components; (ii) curse of history, related to decision trees that grow exponentially with the number of decision steps; (iii) presence of state uncertainties, induced by inherent environment stochasticity and variability of inspection/monitoring measurements; (iv) presence of constraints, pertaining to stochastic long-term limitations, due to resource scarcity and other infeasible/undesirable system responses. In this work, these challenges are addressed within a joint framework of constrained Partially Observable Markov Decision Processes (POMDPs) and multi-agent Deep Reinforcement Learning (DRL). POMDPs optimally tackle (ii)-(iii), combining stochastic dynamic programming with Bayesian inference principles. Multi-agent DRL addresses (i), through deep function parametrizations and decentralized control assumptions. Challenge (iv) is herein handled through proper state augmentation and Lagrangian relaxation, with emphasis on life-cycle risk-based constraints and budget limitations. The underlying algorithmic steps are provided, and the proposed framework is found to outperform well-established policy baselines and to facilitate adept prescription of inspection and intervention actions in cases where decisions must be made in the most resource- and risk-aware manner.
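To illustrate the Lagrangian relaxation idea in the simplest possible setting, the sketch below solves a hypothetical two-action constrained problem by alternating a primal step (a softmax policy on the Lagrangian reward r − λc) with projected dual ascent on the multiplier λ. It is a minimal sketch with invented numbers, not the multi-agent DRL framework described above.

```python
import numpy as np

# Hypothetical two-action problem: expected reward and expected
# risk-cost per action, with a long-run risk budget d.
rewards = np.array([1.0, 3.0])
costs = np.array([0.1, 0.9])
d = 0.5            # allowed expected cost per step
lam, lr = 0.0, 0.5

for _ in range(200):
    # Primal step: randomized policy for the Lagrangian r - lam * c
    # (optimal constrained policies may need randomization at the
    # budget boundary, hence a softmax rather than a hard argmax).
    lagrangian = rewards - lam * costs
    p = np.exp(5 * lagrangian)
    p /= p.sum()
    # Dual step: projected gradient ascent on the multiplier, driven
    # by the constraint violation E[c] - d.
    lam = max(0.0, lam + lr * (p @ costs - d))
```

At convergence the multiplier prices the risk budget: the policy mixes the cheap and the rewarding action so that the expected cost sits exactly on the budget.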
Modern cyber-physical systems (CPS), such as our energy infrastructure, are becoming increasingly complex: an ever-higher share of Artificial Intelligence (AI)-based technologies uses the Information and Communication Technology (ICT) facet of energy systems for operation optimization, cost efficiency, and reaching CO2 goals worldwide. At the same time, markets with increased flexibility and ever-shorter trade horizons give rise to an emerging multi-stakeholder situation. These systems still form critical infrastructures that need to perform with the highest reliability. However, today's CPS are becoming too complex to be analyzed with the traditional monolithic approach, in which each domain, e.g., the power grid and ICT as well as the energy market, is considered a separate entity while dependencies and side effects are ignored. To achieve an overall analysis, we introduce the concept of applying distributed artificial intelligence as a self-adaptive analysis tool that is able to analyze the dependencies between domains in CPS by attacking them. It eschews pre-configured domain knowledge, instead exploring the CPS domains for emergent risk situations and exploitable loopholes in codices, with a focus on rational market actors that exploit the system while still following the market rules.
Wind farms have traditionally made less money for the electricity they produce because they have been unable to predict how windy it will be tomorrow. "The way a lot of power markets work is you have to schedule your assets a day ahead," said Michael Terrell, the head of energy market strategy at Google. "And you tend to get compensated higher when you do that than if you sell into the market real-time." "Well, how do variable assets like wind schedule a day ahead when you don't know the wind is going to blow?" Terrell asked, "and how can you actually reserve your place in line?" Here's how: Google and the Google-owned Artificial Intelligence firm DeepMind combined weather data with power data from 700 megawatts of wind energy that Google sources in the Central United States. Using machine learning, they have been able to better predict wind production, better predict electricity supply and demand, and as a result, reduce operating costs. "What we've been doing is working in partnership with the DeepMind team to use machine learning to take the weather data that's available publicly, actually forecast what we think the wind production will be the next day, and bid that wind into the day-ahead markets," Terrell said in a recent seminar hosted by the Stanford Precourt Institute of Energy. Stanford University posted video of the seminar last week. The result has been a 20 percent increase in revenue for wind farms, Terrell said. The Department of Energy listed improved wind forecasting as a first priority in its 2015 Wind Vision report, largely to improve reliability: "Improve Wind Resource Characterization," the report said at the top of its list of goals. "Collect data and develop models to improve wind forecasting at multiple temporal scales--e.g., minutes, hours, days, months, years." Google's goal has been more sweeping: to scrub carbon entirely from its energy portfolio, which consumes as much power as two San Franciscos.
Google achieved an initial milestone by matching its annual energy use with its annual renewable-energy procurement, Terrell said. But the company has not been carbon-free in every location at every hour, which is now its new goal--what Terrell calls its "24x7 carbon-free" goal. "We're really starting to turn our efforts in this direction, and we're finding that it's not something that's easy to do."
Increasing the penetration of variable generation has a substantial effect on the operational reliability of power systems. The higher level of uncertainty that stems from this variability makes it more difficult to determine whether a given operating condition will be secure or insecure. Data-driven techniques provide a promising way to identify security rules that can be embedded in the economic dispatch model to keep power system operating states secure. This paper proposes using a sparse weighted oblique decision tree to learn accurate, understandable, and embeddable security rules that are linear and can be extracted as sparse matrices using a recursive algorithm. These matrices can then be easily embedded as security constraints in power system economic dispatch calculations using the Big-M method. Tests on several large datasets with high renewable energy penetration demonstrate the effectiveness of the proposed method. In particular, the sparse weighted oblique decision tree outperforms the state-of-the-art weighted oblique decision tree while keeping the security rules simple. When embedded in the economic dispatch, these rules significantly increase the percentage of secure states and reduce the average solution time.
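A hedged sketch of the Big-M embedding: assuming a leaf of the tree contributes a linear rule of the (hypothetical) form $w^{\top} x \le b$ on the dispatch variables $x$, a binary indicator $z$ can switch the rule on or off in the dispatch model:

```latex
% Hypothetical single-leaf rule w^{\top} x \le b, enforced only when
% the binary indicator z equals 1:
\begin{aligned}
  w^{\top} x - b &\le M \, (1 - z), \\
  z &\in \{0, 1\},
\end{aligned}
% where M is any valid upper bound on w^{\top} x - b over the feasible
% dispatch region, so that z = 0 renders the constraint inactive.
```

Choosing $M$ as tight as possible matters in practice: an unnecessarily large value weakens the linear-programming relaxation and slows the dispatch solve.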
This article investigates the optimization of yaw control inputs of a nine-turbine wind farm. The wind farm is simulated using the high-fidelity simulator SOWFA. The optimization is performed with a modifier adaptation scheme based on Gaussian processes. Modifier adaptation corrects for the mismatch between plant and model and helps to converge to the actual plant optimum. In the case study, the modifier adaptation approach is compared with the Bayesian optimization approach. Moreover, the use of two different covariance functions in the Gaussian process regression is discussed. Practical recommendations concerning the data preparation and application of the approach are given. It is shown that both the modifier adaptation and the Bayesian optimization approach can improve the power production with overall smaller yaw misalignments in comparison to the Gaussian wake model.
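A minimal sketch of the Bayesian-optimization side of the comparison, assuming a single yaw offset and a synthetic power curve standing in for the SOWFA simulator. The Gaussian process is hand-rolled with an RBF covariance and an upper-confidence-bound acquisition; all constants are invented for illustration.

```python
import numpy as np

# Synthetic stand-in for the plant: farm power versus one upstream yaw
# offset in degrees, with a (hypothetical) optimum at 15 degrees.
def plant_power(yaw):
    return 1.0 + 0.3 * np.exp(-((yaw - 15.0) / 10.0) ** 2)

def rbf(a, b, ls=10.0):
    """RBF covariance between two 1-D arrays of yaw settings."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

X = np.array([-20.0, 0.0, 20.0])      # initial yaw evaluations
y = plant_power(X)
cand = np.linspace(-30.0, 30.0, 121)  # candidate yaw settings

for _ in range(10):
    K = rbf(X, X) + 1e-6 * np.eye(len(X))  # jitter for stability
    Ks = rbf(cand, X)
    alpha = np.linalg.solve(K, y - y.mean())
    mu = y.mean() + Ks @ alpha             # GP posterior mean
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    ucb = mu + 1.96 * np.sqrt(np.maximum(var, 0.0))
    x_next = cand[np.argmax(ucb)]          # UCB acquisition step
    X = np.append(X, x_next)
    y = np.append(y, plant_power(x_next))

best_yaw = X[np.argmax(y)]
```

Modifier adaptation would differ from this loop by correcting an engineering wake model with GP-estimated plant-model mismatch terms rather than modeling the power curve directly.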
Wind-generated electricity has expanded greatly over the past decade. In the U.S., for example, by 2018 wind was generating 6.6% of utility-scale electricity generation, according to the U.S. Energy Information Administration. The criteria for efficient design and reliable operation of the familiar horizontal-axis wind turbines have been well established through decades of experience, leading to ever-larger structures over time, both to intercept more wind and to reach faster winds higher up. As these gargantuan turbines are assembled into large wind farms, often spread over uneven terrain, complex aerodynamic interactions between them have become increasingly important. To address this issue, researchers have proposed protocols that slightly reorient individual turbines to improve the output of others downwind, and they are working with wind farm operators to assess their real-life performance.
As disruptive technologies such as artificial intelligence (AI) fundamentally alter the way we live and do business, C-suite attitudes toward IT spending and utilization are shifting. Once considered a cost of doing business, technology is now viewed as a business driver that's critical to an organization's ability to perform core functions, even in industries far removed from Silicon Valley. However, many executives still struggle to determine the ROI to justify investments in AI and machine learning, even as AI becomes increasingly crucial to 21st century business decision-making. Except for the IT industry itself, C-suites have historically viewed IT expenses as a cost of entry to do business in the digital age, not revenue-generating investments. Then came new technologies such as mobile, cloud computing and the internet of things (IoT).