Utilities


AI Startup Aims to Extinguish Wildfires

#artificialintelligence

Judging by the last two wildfire seasons, including 2018, when an entire California town was destroyed, utilities blamed for recent wildfires need all the help they can get maintaining aging grids, and AI technologies may provide new monitoring tools. Paradise, Calif., a town of about 27,000, was destroyed by the Camp Fire; the 2018 inferno claimed at least 84 lives. In June, Pacific Gas & Electric (PG&E) was ordered to pay a $3.5 million fine for causing the Camp Fire.


Shelling between Azerbaijan and Armenia ends brief ceasefire

Al Jazeera

Azerbaijan and Armenia have accused each other of shelling military positions and villages, breaking a day of ceasefire in border clashes between the long-feuding former Soviet republics. The Azerbaijani defence ministry said on Thursday that one of its soldiers had died, while Armenia's defence ministry said a civilian was wounded in Chinari village by an Azeri drone attack. Before that, 15 soldiers from both sides and one civilian had died since Sunday in the flare-up between the nations, which fought a 1990s war over the mountainous Nagorno-Karabakh region. Amid a blizzard of rhetoric on both sides, Azerbaijan warned Armenia that it might attack the Metsamor nuclear power station if its Mingechavir reservoir or other strategic sites were hit. The neighbours have long been in conflict over Azerbaijan's breakaway, mainly ethnic Armenian region of Nagorno-Karabakh, but the latest flare-ups are around the Tavush region in northeast Armenia, some 300km (190 miles) from the enclave.


MIT's Tiny New Brain Chip Aims for AI in Your Pocket

#artificialintelligence

The human brain operates on roughly 20 watts of power (a third of a 60-watt light bulb) in a space the size of, well, a human head. The biggest machine learning algorithms use closer to a nuclear power plant's worth of electricity and racks of chips to learn. That's not to slander machine learning, but nature may have a tip or two to improve the situation. By mimicking the brain, super-efficient neuromorphic chips aim to take AI off the cloud and put it in your pocket. The latest such chip is smaller than a piece of confetti and has tens of thousands of artificial synapses made out of memristors: chip components that can mimic their natural counterparts in the brain.
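
As a rough illustration of why memristor synapses are so efficient: a crossbar of memristive devices computes a neural layer's vector-matrix multiply in a single analog step, with conductances acting as weights and summed column currents as outputs (Ohm's law plus Kirchhoff's current law). The sketch below mimics that physics in plain NumPy; the array sizes are arbitrary.

import numpy as np

# Conductances stand in for synaptic weights; input voltages for
# activations; summed column currents for the layer's output.
G = np.random.uniform(0.1, 1.0, size=(4, 3))  # memristor conductances (siemens)
v = np.array([0.2, 0.5, 0.1, 0.8])            # input voltages on the rows
i = v @ G                                      # column currents = weighted sums
print(i)  # one analog step computes what a digital chip does with 12 multiply-accumulates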


Sparse Oblique Decision Tree for Power System Security Rules Extraction and Embedding

arXiv.org Machine Learning

Increasing penetration of variable generation has a substantial effect on the operational reliability of power systems. The higher level of uncertainty that stems from this variability makes it more difficult to determine whether a given operating condition will be secure or insecure. Data-driven techniques provide a promising way to identify security rules that can be embedded in economic dispatch models to keep power system operating states secure. This paper proposes using a sparse weighted oblique decision tree to learn accurate, understandable, and embeddable security rules that are linear and can be extracted as sparse matrices using a recursive algorithm. These matrices can then be easily embedded as security constraints in power system economic dispatch calculations using the Big-M method. Tests on several large datasets with high renewable energy penetration demonstrate the effectiveness of the proposed method. In particular, the sparse weighted oblique decision tree outperforms the state-of-the-art weighted oblique decision tree while keeping the security rules simple. When embedded in the economic dispatch, these rules significantly increase the percentage of secure states and reduce the average solution time.
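
As a rough sketch of the idea (the node structure and names here are illustrative, not the paper's code): each root-to-leaf path in an oblique tree is a conjunction of linear halfspaces, which can be collected recursively into a matrix pair (W, c) and attached to a binary indicator via the Big-M method.

from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class Node:
    w: Optional[np.ndarray] = None   # sparse weights of the oblique split w.x <= b
    b: float = 0.0
    left: Optional["Node"] = None    # branch where w.x <= b holds
    right: Optional["Node"] = None   # branch where w.x > b holds
    secure: Optional[bool] = None    # set only at leaves

def extract_rules(node, path=None, rules=None):
    """Recursively collect (W, c) pairs: each secure leaf yields the
    conjunction of halfspaces W x <= c along its root-to-leaf path."""
    path = [] if path is None else path
    rules = [] if rules is None else rules
    if node.secure is not None:                      # leaf node
        if node.secure and path:
            W = np.vstack([w for w, _ in path])
            c = np.array([b for _, b in path])
            rules.append((W, c))
        return rules
    extract_rules(node.left, path + [(node.w, node.b)], rules)
    extract_rules(node.right, path + [(-node.w, -node.b)], rules)  # w.x > b -> -w.x <= -b (strictness relaxed)
    return rules

# Toy tree over a 3-dimensional operating state x.
tree = Node(w=np.array([0.8, 0.0, -0.2]), b=1.0,
            left=Node(secure=True),
            right=Node(w=np.array([0.0, 1.0, 0.0]), b=0.5,
                       left=Node(secure=True), right=Node(secure=False)))

M = 1e4  # Big-M constant
for k, (W, c) in enumerate(extract_rules(tree)):
    # With binary z_k = 1 iff rule k is active, the dispatch constraint is
    #   W x <= c + M (1 - z_k),  with  sum_k z_k >= 1.
    print(f"rule {k}: W=\n{W}\nc={c}")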


AI, you have some explaining to do

#artificialintelligence

Not every decision comes with an explanation, and when it comes to your next-door neighbors, maybe it's better that way. But as we operationalize machine learning (ML) and AI systems, end users need to know how decisions are made and why actions are taken. What I often hear from clients looking to adopt AI, and from users in the field who work with AI-based decision-making, is that they don't trust the black-box paradigm of AI. If an AI is "learning" and "evolving" based on acquired data, and they can't see its logic flow, they're not comfortable with it and don't want to rely on its decisions or recommendations. I recently discussed this very issue with a client that had developed an AI to help human teams determine bid ranges, based on strategic fit, expected economic return, and competitive intelligence, when bidding for oil and gas exploration leases.


Trajectory annotation using sequences of spatial perception

arXiv.org Machine Learning

In the near future, more and more machines will perform tasks in the vicinity of human spaces or support humans directly in their spatially bound activities. To simplify verbal communication and interaction between robotic units and/or humans, systems are needed whose processing results are reliable and robust with respect to noise. This work builds a foundation for addressing that task. Using a continuous representation of spatial perception in interiors, learned from trajectory data, our approach clusters movement according to its spatial context. We propose an unsupervised learning approach based on a neural autoencoder that learns semantically meaningful continuous encodings of spatio-temporal trajectory data. The learned encodings can be used to form prototypical representations. We present promising results that clear the path for future applications.
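
A minimal sketch of the general recipe, assuming fixed-length 2-D trajectory windows and a plain MLP autoencoder (the paper's actual architecture may differ): train for reconstruction, take the bottleneck codes as the continuous encodings, and cluster them into prototypes.

import torch
import torch.nn as nn
from sklearn.cluster import KMeans

T, D, CODE = 32, 2, 8                       # window length, coordinate dim, code size
ae = nn.Sequential(                          # encoder and decoder in one stack
    nn.Flatten(),
    nn.Linear(T * D, 64), nn.ReLU(),
    nn.Linear(64, CODE),                     # continuous trajectory encoding
    nn.Linear(CODE, 64), nn.ReLU(),
    nn.Linear(64, T * D),
)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
trajs = torch.randn(256, T, D)               # stand-in for real trajectory data

for _ in range(200):                         # unsupervised reconstruction training
    recon = ae(trajs).view(256, T, D)
    loss = nn.functional.mse_loss(recon, trajs)
    opt.zero_grad(); loss.backward(); opt.step()

encoder = nn.Sequential(*list(ae.children())[:4])  # Flatten .. bottleneck layer
with torch.no_grad():
    codes = encoder(trajs)                   # semantically meaningful encodings
protos = KMeans(n_clusters=5, n_init=10).fit(codes.numpy())  # prototype clusters
print(protos.cluster_centers_.shape)         # (5, CODE)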


A new approach for generation of generalized basic probability assignment in the evidence theory

arXiv.org Artificial Intelligence

The process of information fusion must deal with large amounts of uncertain information that is multi-source, heterogeneous, inaccurate, unreliable, and incomplete. In practical engineering applications, Dempster-Shafer evidence theory is widely used in multi-source information fusion owing to its effectiveness in data fusion. Information sources have an important impact on multi-source information fusion in complex, unstable, uncertain, and incomplete environments. To address the multi-source information fusion problem, this paper considers uncertain information modeling from the closed-world to the open-world assumption and studies the generation of basic probability assignments (BPAs) with incomplete information. A new method is proposed to generate a generalized basic probability assignment (GBPA) based on the triangular fuzzy number model under the open-world assumption. The proposed method can not only be applied simply and flexibly in different complex environments, but also incurs less information loss in information processing. Finally, a series of comprehensive experiments based on UCI datasets verifies the rationality and superiority of the proposed method.
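
A hedged sketch of how a triangular-fuzzy-number GBPA can be generated (the models, overlap rule, and normalization below are illustrative simplifications of the paper's method): per-class triangular fuzzy numbers are fitted from training data as (min, mean, max), a test sample's memberships support singleton and composite hypotheses, and any leftover mass goes to the empty set under the open-world assumption.

import numpy as np

def tri_membership(x, lo, mid, hi):
    """Membership of x in a triangular fuzzy number (lo, mid, hi)."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (mid - lo) if x <= mid else (hi - x) / (hi - mid)

# Per-class triangular models built from training data: (min, mean, max).
models = {"A": (4.3, 5.0, 5.8), "B": (4.9, 5.9, 7.0)}

def gbpa(x):
    mu = {c: tri_membership(x, *m) for c, m in models.items()}
    # Overlap of the two memberships supports the composite hypothesis {A, B}.
    mu[("A", "B")] = min(mu["A"], mu["B"])
    total = sum(mu.values())
    if total == 0:                       # sample matches no class model at all
        return {"empty": 1.0}            # open world: all mass on the empty set
    m = {k: v / max(total, 1.0) for k, v in mu.items()}
    m["empty"] = max(0.0, 1.0 - sum(m.values()))  # leftover mass -> empty set
    return m

print(gbpa(5.1))   # mass split among A, B, and the composite {A, B}
print(gbpa(9.0))   # outside both models: all mass on the empty set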


NukeBERT: A Pre-trained language model for Low Resource Nuclear Domain

arXiv.org Machine Learning

Significant advances have been made in recent years in natural language processing, with machines surpassing human performance on many tasks, including but not limited to question answering. The majority of deep learning methods for question answering target domains with large datasets and highly mature literature. The area of nuclear and atomic energy has remained largely unexplored in exploiting non-annotated data to drive industry-viable applications. Owing to this lack of data, a new dataset was created from 7,000 research papers in the nuclear domain. This paper contributes to research in understanding nuclear-domain knowledge, which is then evaluated on the Nuclear Question Answering Dataset (NQuAD), created by nuclear-domain experts as part of this research. NQuAD contains 612 questions developed on 181 paragraphs randomly selected from the IGCAR research paper corpus. This paper proposes Nuclear Bidirectional Encoder Representations from Transformers (NukeBERT), which incorporates a novel technique for building a BERT vocabulary to make it suitable for tasks with little training data. Experiments on NQuAD showed that NukeBERT significantly outperforms BERT, validating the adopted methodology. Because training NukeBERT is computationally expensive, we will open-source the NukeBERT pretrained weights and NQuAD to foster further research in the nuclear domain.
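
A generic sketch of the mechanics of extending a BERT vocabulary with domain terms, using the Hugging Face transformers library (NukeBERT's actual vocabulary-building technique is more involved, and the domain terms below are hypothetical):

from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical domain terms that BERT's stock vocabulary splits badly.
domain_terms = ["thermocouple", "dosimetry", "radiolysis"]
num_added = tokenizer.add_tokens(domain_terms)
model.resize_token_embeddings(len(tokenizer))  # grow the embedding matrix
print(f"added {num_added} tokens; vocab size is now {len(tokenizer)}")
# From here, one would continue masked-language-model pretraining on the
# nuclear corpus before fine-tuning on NQuAD-style question answering.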


Detecting Fake News in Social Media

Communications of the ACM

In March 2011, the catastrophic accident known as the Fukushima Daiichi nuclear disaster took place, triggered by the Tohoku earthquake and tsunami in Japan. The only nuclear accident to receive a Level-7 classification on the International Nuclear Event Scale since the Chernobyl nuclear power plant disaster in 1986, the Fukushima event sparked global concerns and rumors regarding radiation leaks. Among the false rumors was an image described as a map of radioactive discharge emanating into the Pacific Ocean, as illustrated in the accompanying figure. In fact, this figure, which depicts the wave height of the tsunami that followed, circulates on social media to this day with the inaccurate description. Social media is ideal for spreading rumors because it lacks censorship.


Predicting Real-Time Locational Marginal Prices: A GAN-Based Video Prediction Approach

arXiv.org Machine Learning

In this paper, we propose an unsupervised data-driven approach to predict real-time locational marginal prices (RTLMPs). The proposed approach is built upon a general data structure for organizing system-wide heterogeneous market data streams into the format of market data images and videos. Leveraging this general data structure, the system-wide RTLMP prediction problem is formulated as a video prediction problem. A video prediction model based on generative adversarial networks (GAN) is proposed to learn the spatio-temporal correlations among historical RTLMPs and predict system-wide RTLMPs for the next hour. An autoregressive moving average (ARMA) calibration method is adopted to improve the prediction accuracy. The proposed RTLMP prediction method takes public market data as inputs, without requiring any confidential information on system topology, model parameters, or market operating details. Case studies using public market data from ISO New England (ISO-NE) and Southwest Power Pool (SPP) demonstrate that the proposed method is able to learn spatio-temporal correlations among RTLMPs and perform accurate RTLMP prediction.
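
A rough sketch of the data formulation (the shapes, field names, and mock generator below are assumptions; the paper defines its own image layout): each hour of public market data becomes one image frame, frames stack into a video, a trained GAN generator would predict the next frame, and an ARMA model calibrates the result.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

H, ZONES, FIELDS = 168, 8, 3   # hours of history, pricing zones, data streams
# Hour t becomes one "frame": rows = zones, cols = {RTLMP, load, reserve price}.
frames = np.random.rand(H, ZONES, FIELDS)   # stand-in for public market data
video = frames[None, ...]                   # (batch, time, height, width)

# A trained GAN generator would map the history video to the next frame:
#   next_frame = generator(video)           # shape (1, ZONES, FIELDS)
# Here we mock its RTLMP channel to show the ARMA calibration step.
predicted_rtlmp = np.random.rand(ZONES)

# Calibrate zone 0's prediction with an ARMA(1,1) model of its residuals.
history_rtlmp = frames[:, 0, 0]             # zone 0, RTLMP channel
residuals = history_rtlmp - history_rtlmp.mean()
arma = ARIMA(residuals, order=(1, 0, 1)).fit()
correction = arma.forecast(steps=1)[0]
print("calibrated zone-0 RTLMP:", predicted_rtlmp[0] + correction)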