Azerbaijan and Armenia have accused each other of shelling military positions and villages, breaking a day of ceasefire in border clashes between the long-feuding former Soviet republics. The Azerbaijani defence ministry said on Thursday that one of its soldiers had died, while Armenia's defence ministry said a civilian in the village of Chinari was wounded in an Azeri drone attack. Before that, 15 soldiers from both sides and one civilian had died since Sunday in the flareup between the two nations, which fought a war in the 1990s over the mountainous Nagorno-Karabakh region. In a blizzard of rhetoric on both sides, Azerbaijan warned Armenia that it might attack the Metsamor nuclear power station if its own Mingechavir reservoir or other strategic sites were hit. The neighbours have long been in conflict over Azerbaijan's breakaway, mainly ethnic Armenian region of Nagorno-Karabakh, but the latest flareups are around the Tavush region in northeast Armenia, some 300km (190 miles) from the enclave.
The human brain operates on roughly 20 watts of power (a third of a 60-watt light bulb) in a space the size of, well, a human head. The biggest machine learning algorithms use closer to a nuclear power plant's worth of electricity and racks of chips to learn. That's not to slander machine learning, but nature may have a tip or two to improve the situation. By mimicking the brain, super-efficient neuromorphic chips aim to take AI off the cloud and put it in your pocket. The latest such chip is smaller than a piece of confetti and has tens of thousands of artificial synapses made out of memristors: chip components that can mimic their natural counterparts in the brain.
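The efficiency argument above comes from how a memristor crossbar computes: synaptic weights are stored as physical conductances, and a matrix-vector multiply happens in analog via Ohm's and Kirchhoff's laws rather than through digital multiply-accumulate units. A minimal sketch of that idealized computation, with all sizes and conductance ranges chosen purely for illustration:

```python
import numpy as np

# Idealized memristor crossbar: weights live as conductances G (siemens).
# Driving the rows with input voltages V yields column currents I = V @ G
# (Ohm's law per device, summed along each column by Kirchhoff's current
# law) -- one analog dot product per column, computed in place.

rng = np.random.default_rng(0)
n_inputs, n_neurons = 4, 3

G = rng.uniform(1e-6, 1e-4, size=(n_inputs, n_neurons))  # device conductances
V = np.array([0.2, 0.0, 0.5, 0.1])                       # input voltages

I = V @ G  # column currents: the crossbar's matrix-vector product
print(I)
```

Real devices add nonidealities (wire resistance, conductance drift, limited precision) that this sketch ignores; the point is that the multiply-accumulate is free in the physics.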
When it comes to your next-door neighbors, maybe it's better that way. As we operationalize machine learning (ML) and AI systems, end users need to know how decisions are made and why actions are taken. What I often hear, both from clients considering AI adoption and from users in the field who work with AI-based decision making, is that they don't trust the black-box paradigm of AI. If AI is "learning" and "evolving" based on acquired data, and they can't see its logic flow, they're not comfortable with it and don't want to rely on its decisions or recommendations. I recently discussed this very issue with a client that had developed an AI to help human teams determine bid ranges, based on strategic fit, expected economic return, and competitive intelligence, when bidding for oil and gas exploration leases.
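One way to address the "logic flow" complaint is to expose per-feature contributions for each recommendation. The sketch below is a deliberately simple transparent scoring model, not the client's actual system; the feature names and weights are hypothetical stand-ins for the strategic-fit, economic-return, and competitive-intelligence inputs mentioned above:

```python
# Hypothetical transparent bid-scoring sketch: every score decomposes into
# named per-feature contributions the end user can inspect, unlike a black box.
FEATURES = ["strategic_fit", "expected_return", "competitive_pressure"]
WEIGHTS = {"strategic_fit": 0.3, "expected_return": 0.5, "competitive_pressure": -0.2}

def score_bid(lease):
    """Return (total score, per-feature contribution breakdown)."""
    contributions = {f: WEIGHTS[f] * lease[f] for f in FEATURES}
    return sum(contributions.values()), contributions

total, why = score_bid(
    {"strategic_fit": 0.8, "expected_return": 0.9, "competitive_pressure": 0.4}
)
# Show the explanation, largest contribution first.
for feature, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {c:+.2f}")
print(f"total score: {total:.2f}")
```

A breakdown like this ("expected_return contributed +0.45, competitive_pressure cost −0.08") is exactly the kind of visible logic flow that users say the black box denies them.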
In the near future, more and more machines will perform tasks in the vicinity of human spaces or support humans directly in their spatially bound activities. To simplify verbal communication and interaction between robotic units and/or humans, systems that are reliable and robust with respect to noise and processing results are needed. This work lays a foundation for addressing this task. Using a continuous representation of spatial perception in interiors learned from trajectory data, our approach clusters movement according to its spatial context. We propose an unsupervised learning approach, based on a neural autoencoder, that learns semantically meaningful continuous encodings of spatio-temporal trajectory data. The learned encodings can be used to form prototypical representations. We present promising results that clear the path for future applications.
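The pipeline in the abstract (encode trajectories, then form prototypes from the encodings) can be sketched end to end. This is a minimal stand-in under stated assumptions, not the paper's architecture: trajectories are flattened into fixed-length waypoint vectors, the "neural autoencoder" is reduced to a tiny linear one trained by gradient descent, and prototypes are per-cluster means of the codes.

```python
import numpy as np

# Assumed setup: each trajectory is 10 (x, y) waypoints flattened to a
# 20-D vector; a linear autoencoder compresses it to a 2-D encoding.
rng = np.random.default_rng(42)
n_traj, n_points = 60, 10
d = 2 * n_points

# Two synthetic movement patterns, e.g. two corridors of a building.
t = np.linspace(0, 1, n_points)
traj_a = np.tile(np.column_stack([t, t]).ravel(), (n_traj // 2, 1))
traj_b = np.tile(np.column_stack([t, 1 - t]).ravel(), (n_traj // 2, 1))
X = np.vstack([traj_a, traj_b]) + 0.01 * rng.standard_normal((n_traj, d))

# Linear autoencoder: encoder W_e (d -> 2), decoder W_d (2 -> d), MSE loss.
W_e = 0.1 * rng.standard_normal((d, 2))
W_d = 0.1 * rng.standard_normal((2, d))
lr = 0.01
for _ in range(800):
    Z = X @ W_e                      # encodings
    err = Z @ W_d - X                # reconstruction error
    grad_Wd = Z.T @ err / n_traj
    grad_We = X.T @ (err @ W_d.T) / n_traj
    W_d -= lr * grad_Wd
    W_e -= lr * grad_We

# Prototypical representations: mean encoding of each movement pattern.
Z = X @ W_e
proto_a = Z[: n_traj // 2].mean(axis=0)
proto_b = Z[n_traj // 2:].mean(axis=0)
print(np.linalg.norm(proto_a - proto_b))  # prototypes separate the patterns
```

The paper's contribution is a richer nonlinear encoder over real indoor trajectory data; the sketch only shows the structural idea that clustering happens in the learned encoding space.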
In March 2011, the catastrophic accident known as the Fukushima Daiichi nuclear disaster took place, triggered by the Tohoku earthquake and tsunami in Japan. The only nuclear accident to receive a Level-7 classification on the International Nuclear Event Scale since the Chernobyl disaster in 1986, the Fukushima event sparked global concerns and rumors about radiation leaks. Among the false rumors was an image described as a map of radioactive discharge spreading into the Pacific Ocean, as illustrated in the accompanying figure. In fact, the figure depicts the wave height of the tsunami that followed the earthquake, yet it still circulates on social media to this day with the inaccurate description. Social media is ideal for spreading rumors because it lacks censorship.
Increasingly complex and autonomous robots are being deployed in real-world environments with far-reaching consequences. High-stakes scenarios, such as emergency response or offshore energy platform and nuclear inspections, require robot operators to have clear mental models of what the robots can and cannot do. However, operators are often not the robots' original designers and thus do not necessarily have such clear mental models, especially if they are novice users. This lack of clarity can slow adoption and negatively impact human-machine teaming. We propose that interaction with a conversational assistant acting as a mediator can help users understand the functionality of remote robots, increase transparency through natural-language explanations, and facilitate the evaluation of operators' mental models.
Nondestructive evaluation methods play an important role in ensuring component integrity and safety in many industries. Operator fatigue can play a critical role in the reliability of such methods. This is important for inspecting high-value assets or assets with a high consequence of failure, such as aerospace and nuclear components. Recent advances in convolutional neural networks can support and automate these inspection efforts. This paper proposes using residual neural networks (ResNets) for real-time detection of pitting and stress corrosion cracking, with a focus on dry storage canisters housing used nuclear fuel. The proposed approach crops nuclear canister images into smaller tiles, trains a ResNet on these tiles, and classifies images as corroded or intact based on the per-image count of tiles the ResNet predicts as corroded. The results demonstrate that such a deep learning approach can localize corrosion cracks via the smaller tiles while inferring with high accuracy whether an image comes from a corroded canister. The proposed approach thereby holds promise to automate and speed up nuclear fuel canister inspections, minimize inspection costs, and partially replace human-conducted onsite inspections, reducing radiation doses to personnel.
Large uncertainties in many phenomena of interest challenge the reliability of the associated decisions. Collecting additional information to better characterize the uncertainties involved is one of the decision alternatives. Value of information (VoI) analysis is a mathematical decision framework that quantifies the expected potential benefit of new data and assists with the optimal allocation of resources for information collection. However, a primary challenge facing VoI analysis is the very high computational cost of the underlying Bayesian inference, especially for equality-type information. This paper proposes the first surrogate-based framework for VoI analysis. Instead of modeling the limit-state functions describing the events of interest for decision making, as is commonly done in surrogate-based reliability methods, the proposed framework models system responses. This approach allows equality-type information from observations to be shared among surrogate models to update the likelihoods of multiple events of interest. Moreover, two knowledge-sharing schemes, model sharing and training-point sharing, are proposed to take fullest advantage of the knowledge offered by costly model evaluations. Both schemes are integrated with an error-rate-based adaptive training approach to efficiently generate accurate Kriging surrogate models. The proposed VoI analysis framework is applied to an optimal decision-making problem involving load testing of a truss bridge. While state-of-the-art methods based on importance sampling and adaptive Kriging Monte Carlo simulation are unable to solve this problem, the proposed method is shown to offer accurate and robust estimates of VoI with a limited number of model evaluations. The proposed method therefore facilitates the application of VoI to complex decision problems.
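The quantity the framework estimates is easiest to see in its simplest form: the expected value of perfect information (EVPI), an upper bound on what any data collection can be worth. The numbers below are illustrative assumptions for a binary repair-or-not decision, not values from the paper's truss-bridge example, and the sketch omits the Bayesian inference and Kriging surrogates that make the real problem hard:

```python
# EVPI sketch for a binary repair decision (all numbers assumed).
p_fail = 0.1       # prior probability the component fails if left alone
c_repair = 20.0    # cost of pre-emptive repair
c_failure = 500.0  # cost incurred if the un-repaired component fails

# Without new information: commit to the action with lower expected cost.
prior_cost = min(c_repair, p_fail * c_failure)

# With perfect information: repair only in the world where failure occurs.
posterior_cost = p_fail * min(c_repair, c_failure) + (1 - p_fail) * 0.0

evpi = prior_cost - posterior_cost  # ceiling on the value of any test data
print(evpi)
```

Real VoI analysis replaces "perfect information" with imperfect, equality-type observations, which is where the expensive Bayesian updating arises and where the paper's surrogate sharing pays off.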
Japanese authorities are introducing a variety of measures to prevent the wrongful use of drones, which has been increasing because many people, especially tourists from abroad, are unfamiliar with the regulations. Under the civil aeronautics law, a drone weighing 200 grams or more cannot be operated in airspace around airports or over residential areas without permission from the government. In addition, the law regulating drone use bans flights in airspace near designated important sites such as the Prime Minister's Office, the Imperial Palace and nuclear power plants. Foreign tourists and others unfamiliar with the laws continue to violate them. In 2019, 14 foreign nationals had their cases sent to prosecutors as of Nov. 20.