Industries with distributed fixed assets--be they telecommunication broadband or railway networks, wind turbines or drilling facilities, elevators and escalators or washing machines--share specific challenges when it comes to maintenance. Because the assets are distributed throughout a region, there is usually no dedicated maintenance team per asset. Instead, maintenance workers cover whole areas, travel to the assets' various locations, and bring the appropriate instructions, spare parts, and tools. Maintenance costs typically range from 20 to 60 percent of opex spend, depending on industry, asset type, and capex spend--an opportunity that has received only minor attention over the past several years. At the same time, ensuring high levels of asset availability and system reliability is a key priority for operations leaders. Often, regulations severely penalize shortfalls (eg, in power transmission and distribution), breakdowns incur high revenue losses (eg, for wind turbines), or breakdowns create serious safety and environmental hazards (eg, in drilling facilities).
Wind farms have traditionally made less money for the electricity they produce because they have been unable to predict how windy it will be tomorrow. "The way a lot of power markets work is you have to schedule your assets a day ahead," said Michael Terrell, the head of energy market strategy at Google. "And you tend to get compensated higher when you do that than if you sell into the market real-time." "Well, how do variable assets like wind schedule a day ahead when you don't know if the wind is going to blow?" Terrell asked, "and how can you actually reserve your place in line?" Here's how: Google and the Google-owned artificial-intelligence firm DeepMind combined weather data with power data from 700 megawatts of wind energy that Google sources in the central United States. Using machine learning, they have been able to better predict wind production, better predict electricity supply and demand, and, as a result, reduce operating costs. "What we've been doing is working in partnership with the DeepMind team to use machine learning to take the weather data that's available publicly, actually forecast what we think the wind production will be the next day, and bid that wind into the day-ahead markets," Terrell said in a recent seminar hosted by the Stanford Precourt Institute for Energy. Stanford University posted a video of the seminar last week. The result has been a 20 percent increase in revenue for wind farms, Terrell said. The Department of Energy listed improved wind forecasting as a first priority in its 2015 Wind Vision report, largely to improve reliability: "Improve Wind Resource Characterization," the report said at the top of its list of goals. "Collect data and develop models to improve wind forecasting at multiple temporal scales--e.g., minutes, hours, days, months, years." Google's goal has been more sweeping: to scrub carbon entirely from its energy portfolio, which consumes as much power as two San Franciscos.
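The pipeline described above--map public weather forecasts to next-day wind production, then bid that forecast into the day-ahead market--can be sketched in a few lines. The actual Google/DeepMind model is not public, so this is a minimal illustration on synthetic data with a gradient-boosted regressor; the feature names and the toy power curve are assumptions, not details from the article.

```python
# Hedged sketch: learn next-day wind production from weather-forecast
# features, then use the model's prediction as a day-ahead bid.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1000
wind_speed = rng.uniform(0, 25, n)     # forecast wind speed (m/s), assumed feature
wind_dir = rng.uniform(0, 360, n)      # forecast wind direction (deg), assumed feature
temperature = rng.uniform(-10, 35, n)  # forecast temperature (C), assumed feature

# Toy "ground truth": output rises roughly with the cube of wind speed,
# capped at a 700 MW rated fleet, plus noise (illustrative only).
power = np.clip(wind_speed**3 / 15.0, 0, 700) + rng.normal(0, 20, n)

X = np.column_stack([wind_speed, wind_dir, temperature])
model = GradientBoostingRegressor().fit(X[:800], power[:800])

# Day-ahead bid: predict tomorrow's production from tomorrow's weather forecast.
tomorrow = np.array([[12.0, 180.0, 15.0]])
bid = model.predict(tomorrow)[0]
print(f"Bid into day-ahead market: {bid:.0f} MW")
```

The point of the sketch is the workflow, not the model class: any regressor that turns a public weather forecast into a production forecast lets a variable asset "schedule a day ahead" instead of selling into the real-time market.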
Google achieved an initial milestone by matching its annual energy use with its annual renewable-energy procurement, Terrell said. But the company has not been carbon-free in every location at every hour, which is now its new goal--what Terrell calls its "24x7 carbon-free" goal. "We're really starting to turn our efforts in this direction, and we're finding that it's not something that's easy to do."
An epic number of citizens are video-conferencing to work in these lockdown times. But as they trade in a gas-burning commute for digital connectivity, their personal energy use for each two hours of video is greater than the share of fuel they would have consumed on a four-mile train ride. Add to this millions of students 'driving' to class on the internet instead of walking. Meanwhile, in other corners of the digital universe, scientists furiously deploy algorithms to accelerate research. Yet the pattern-learning phase for a single artificial intelligence application can consume more compute energy than 10,000 cars do in a day.
With 2019 emerging as the warmest year on record for the world's oceans, the call to climate action continues as the theme for the 50-year anniversary of Earth Day 2020, described as the world's largest environmental movement to drive transformative change for people and planet. Alongside the pandemic, the climate crisis presents an opportunity to use data and AI in ways never before considered. IBM itself began focusing on environmental sustainability before the first Earth Day was ever celebrated -- and its track record on greening its supply chain and driving innovative uses of tech has put it among the world's top eco-friendly Fortune 500 companies. Where IBM leads, customers reap benefits. Digital transformation efforts across industries have given the company a unique vantage point on critical challenges facing the world -- putting AI to work on a number of different issues, from drastically reducing energy consumption to lower CO2 emissions to optimizing large-scale food production in the wake of climate chaos.
Increasing the penetration of variable generation has a substantial effect on the operational reliability of power systems. The higher level of uncertainty that stems from this variability makes it more difficult to determine whether a given operating condition will be secure or insecure. Data-driven techniques provide a promising way to identify security rules that can be embedded in economic dispatch models to keep power system operating states secure. This paper proposes using a sparse weighted oblique decision tree to learn accurate, understandable, and embeddable security rules that are linear and can be extracted as sparse matrices using a recursive algorithm. These matrices can then be easily embedded as security constraints in power system economic dispatch calculations using the Big-M method. Tests on several large datasets with high renewable energy penetration demonstrate the effectiveness of the proposed method. In particular, the sparse weighted oblique decision tree outperforms the state-of-the-art weighted oblique decision tree while keeping the security rules simple. When embedded in the economic dispatch, these rules significantly increase the percentage of secure states and reduce the average solution time.
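To make the idea concrete: a single oblique (linear) split is a rule of the form w·x ≤ b over operating-state features, and such a rule can enter a dispatch model as a linear constraint. The sketch below is not the paper's algorithm--it substitutes an L1-penalized logistic regression for one sparse oblique split, on synthetic secure/insecure labels--but it shows the learn-a-sparse-linear-rule step and, in comments, the Big-M form in which the rule would be embedded.

```python
# Hedged sketch: learn a sparse linear security rule from labeled
# operating states (synthetic data; feature meanings are assumptions).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, d = 2000, 10                     # operating states x features (e.g., flows, loads)
X = rng.normal(size=(n, d))
# Toy labels: security depends on only two features, so a good rule is sparse.
y = (0.8 * X[:, 0] - 1.2 * X[:, 3] < 0.5).astype(int)   # 1 = secure

# L1 penalty drives irrelevant weights to exactly zero (sparse rule).
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]
print("nonzero rule weights at features:", np.flatnonzero(np.abs(w) > 1e-6))
print("intercept:", round(float(b), 3))

# The learned rule "w.x + b >= 0  =>  secure" would be added to the
# dispatch problem as a linear constraint. With a binary indicator z,
# the Big-M relaxation is:  w.x + b >= -M * (1 - z),
# which enforces the rule whenever z = 1 and relaxes it otherwise.
```

The paper's recursive extraction yields one such sparse linear inequality per tree branch; stacking them as sparse matrices is what makes the Big-M embedding in the dispatch model cheap.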
Machine learning is leading to numerous changes in the energy industry. The Department of Energy recently announced that it is taking steps to accelerate the integration of machine learning technology in energy research and development. The head of the Department of Energy announced that the agency will invest $30 million in artificial intelligence and machine learning algorithms. The new programs will serve multiple purposes. One of the biggest goals is to use machine learning to facilitate the development of new renewable energy technologies.
This article investigates the optimization of yaw control inputs of a nine-turbine wind farm. The wind farm is simulated using the high-fidelity simulator SOWFA. The optimization is performed with a modifier adaptation scheme based on Gaussian processes. Modifier adaptation corrects for the mismatch between plant and model and helps to converge to the actual plant optimum. In the case study, the modifier adaptation approach is compared with the Bayesian optimization approach. Moreover, the use of two different covariance functions in the Gaussian process regression is discussed. Practical recommendations concerning the data preparation and application of the approach are given. It is shown that both the modifier adaptation and the Bayesian optimization approach can improve the power production with overall smaller yaw misalignments in comparison to the Gaussian wake model.
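The Bayesian-optimization side of the comparison can be sketched compactly. The toy "plant" below stands in for SOWFA: an upstream turbine's yaw misalignment costs it power (a cosine-cubed loss) but deflects its wake and raises downstream output. A Gaussian process surrogate plus an upper-confidence-bound acquisition then searches for the farm-level optimum. All functions and numbers here are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: GP-based Bayesian optimization of one yaw angle
# on a toy two-turbine wake-steering objective.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def farm_power(yaw_deg):
    """Toy plant: normalized total power of two turbines vs upstream yaw."""
    upstream = np.cos(np.radians(yaw_deg)) ** 3            # yaw loss on turbine 1
    downstream = 0.5 + 0.4 * np.sin(np.radians(yaw_deg))   # wake-deflection gain
    return upstream + downstream

grid = np.linspace(0, 40, 200).reshape(-1, 1)  # candidate yaw angles (deg)
X = np.array([[0.0], [40.0]])                  # two initial plant evaluations
y = farm_power(X.ravel())

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(10.0), normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(mu + 2.0 * sigma)]  # upper-confidence-bound acquisition
    X = np.vstack([X, [x_next]])
    y = np.append(y, farm_power(x_next[0]))

best = X[np.argmax(y), 0]
print(f"best yaw ~ {best:.1f} deg, gain vs 0 deg: {max(y) / farm_power(0) - 1:+.1%}")
```

Modifier adaptation differs in that the GP learns a *correction* to an existing wake model rather than the objective itself, which is why it can exploit the model's structure and converge with smaller misalignments; the sampling loop, however, looks much the same.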
Global energy demand is growing every year, and fossil fuels will not be able to fulfill our energy needs in the future. Carbon emissions from fossil fuels hit an all-time high in 2018 due to increased energy consumption. On the other hand, renewable energy is emerging as a reliable alternative to fossil fuels. It is much safer and cleaner than conventional sources. With the advancements in technology, the renewable energy sector has made significant progress in the last decade.
A machine learning algorithm is developed to forecast the CO2 emission intensities in electrical power grids in the Danish bidding zone DK2, distinguishing between average and marginal emissions. The analysis was done on a data set comprising a large number (473) of explanatory variables, such as power production, demand, imports, and weather conditions, collected from selected neighboring zones. The number was reduced to fewer than 50 using both LASSO (a penalized linear regression analysis) and a forward feature selection algorithm. Three linear regression models that capture different aspects of the data (non-linearities, coupling of variables, etc.) were created and combined into a final model using a Softmax weighted average. Cross-validation is performed for debiasing, and an autoregressive integrated moving average (ARIMA) model is implemented to correct the residuals, making the final model the variant with exogenous inputs (ARIMAX). The forecasts, with corresponding uncertainties, are given for two time horizons, below and above six hours. Marginal emissions turned out to be independent of conditions in the DK2 zone, suggesting that the marginal generators are located in the neighboring zones. The developed methodology can be applied to any bidding zone in the European electricity network without requiring detailed knowledge about the zone.
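The feature-screening step--shrinking 473 candidate variables to fewer than 50 with LASSO--is the part of the pipeline that translates most directly into code. The sketch below uses synthetic data (the real production, demand, import, and weather series are not included here), with only 15 variables truly driving the target, to show how the L1 penalty zeroes out the rest.

```python
# Hedged sketch of LASSO-based variable screening on synthetic data.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n, d = 500, 473                                     # hours x candidate variables
X = rng.normal(size=(n, d))
true_idx = rng.choice(d, size=15, replace=False)    # only 15 variables matter
coefs = rng.normal(size=15)
y = X[:, true_idx] @ coefs + rng.normal(0, 0.1, n)  # CO2-intensity proxy target

# Standardize so the L1 penalty treats all variables comparably,
# then let LASSO drive irrelevant coefficients to exactly zero.
X_std = StandardScaler().fit_transform(X)
lasso = Lasso(alpha=0.05).fit(X_std, y)
selected = np.flatnonzero(lasso.coef_)
print(f"kept {len(selected)} of {d} candidate variables")
```

In the paper this screened subset then feeds the three linear regression models, whose Softmax-weighted combination is finally residual-corrected with the ARIMAX step.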
In Part 3 of our series on how utilities are using artificial intelligence, we look at how AI amplifies analytics for grid operations. Duke Energy saved some $130 million in avoided costs by using predictive data analytics to identify problems before they caused equipment failures. A utility in Brazil estimates savings in the range of $420,000 each month through better, analytics-based theft detection. As an article published by Forbes notes, "Machine learning is a continuation of the concepts around predictive analytics, with one key difference: The AI system is able to make assumptions, test and learn autonomously." With these enhancements, data science will become more powerful than ever, and utilities stand to gain.