calibration


Solar Imaging Is Complicated, But AI Is Helping

#artificialintelligence

NASA scientists are using artificial intelligence to calibrate photographs of the Sun and improve the data available for solar studies. NASA's Solar Dynamics Observatory (SDO) has been providing high-definition photos of the Sun for more than a decade since its launch on February 11, 2010. The photos have offered an in-depth examination of a variety of solar phenomena. SDO's Atmospheric Imaging Assembly (AIA) observes the Sun continuously, generating a wealth of data about our Sun that was never before possible. Because of this constant staring, AIA degrades over time, and its data must be calibrated frequently.


Rating transitions forecasting: a filtering approach

arXiv.org Machine Learning

Analyzing the effect of the business cycle on rating transitions has been a subject of great interest over the last fifteen years, particularly due to the increasing pressure coming from regulators for stress testing. In this paper, we consider that the dynamics of rating migrations is governed by an unobserved latent factor. Under a point-process filtering framework, we explain how the current state of the hidden factor can be efficiently inferred from observations of rating histories. We then adapt the classical Baum-Welch algorithm to our setting and show how to estimate the latent factor parameters. Once the model is calibrated, we can detect, in real time, economic changes affecting the dynamics of rating migration. To this end, we adapt a filtering formula that can then be used to predict future transition probabilities according to economic regimes without using any external covariates. We propose two filtering frameworks: a discrete and a continuous version. We demonstrate and compare the efficiency of both approaches on simulated data and on a corporate credit rating database. The methods could also be applied to retail credit loans.
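
As a concrete illustration of the filtering idea described above, the sketch below runs a forward filter over a toy hidden-regime model: two economic regimes, each with its own rating migration matrix, and per-period transition counts as observations. All numbers (the regime-switching matrix, the migration matrices, the counts) are invented for illustration, and the sketch omits the paper's Baum-Welch-style parameter estimation.

# Toy hidden-regime filter for rating migrations (a sketch, not the paper's
# estimator): every matrix and count below is invented for illustration.
import numpy as np

# Hidden economic regime: 0 = benign, 1 = stressed; rows sum to one.
P_regime = np.array([[0.9, 0.1],
                     [0.2, 0.8]])
# Regime-conditional migration matrices over a toy rating scale (A, B, Default).
P_mig = np.array([
    [[0.95, 0.04, 0.01],
     [0.05, 0.90, 0.05],
     [0.00, 0.00, 1.00]],
    [[0.88, 0.09, 0.03],
     [0.03, 0.82, 0.15],
     [0.00, 0.00, 1.00]],
])

def filter_step(prior, counts):
    """Update P(regime | data) from one period's observed transition counts."""
    loglik = np.array([(counts * np.log(np.clip(P_mig[k], 1e-12, None))).sum()
                       for k in range(len(prior))])
    post = prior * np.exp(loglik - loglik.max())
    return post / post.sum()

prior = np.array([0.5, 0.5])                       # initial regime belief
observed_counts = [
    np.array([[980, 15, 5], [20, 470, 10], [0, 0, 0]]),   # period 1
    np.array([[930, 55, 15], [10, 430, 60], [0, 0, 0]]),  # period 2
]
for counts in observed_counts:
    post = filter_step(prior, counts)              # filtered regime probabilities
    prior = P_regime.T @ post                      # one-step-ahead regime forecast
    forecast = np.tensordot(prior, P_mig, axes=1)  # forecast migration matrix
    print(np.round(forecast, 3))

Mixing the regime-conditional migration matrices with the predictive regime probabilities gives the covariate-free transition forecast the abstract refers to; the paper's continuous-time version and parameter estimation are beyond this sketch.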


Uncertainty Toolbox: an Open-Source Library for Assessing, Visualizing, and Improving Uncertainty Quantification

arXiv.org Machine Learning

Uncertainty quantification (UQ) in machine learning generally refers to the task of quantifying the confidence of a given prediction, and this measure of confidence can be especially crucial in a variety of downstream applications, including Bayesian optimization (Jones et al., 1998; Shahriari et al., 2015), model-based reinforcement learning (Malik et al., 2019; Yu et al., 2020), and high-stakes prediction settings where errors incur large costs (Wexler, 2017; Rudin, 2019). We begin our discussion by first introducing the contents of Uncertainty Toolbox. We then provide an overview of evaluation metrics in UQ. Afterwards, we demonstrate the functionalities of the toolbox with a case study in which we train probabilistic neural networks (PNNs) (Nix and Weigend, 1994; Lakshminarayanan et al., 2017) with a set of different loss functions and evaluate the resulting trained models using the metrics and visualizations in the toolbox.
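
To make the kind of evaluation the abstract describes concrete, here is a plain NumPy/SciPy sketch (not Uncertainty Toolbox's actual API) of an average-calibration check for a Gaussian predictive model: compare the expected coverage of central prediction intervals with the coverage actually observed, and summarize the gap as a mean absolute calibration error. The toy data and the deliberately too-narrow predictive standard deviation are made up.

# A plain NumPy/SciPy illustration (not Uncertainty Toolbox's API) of an
# average-calibration check for a Gaussian predictive model; all data are toy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y_true = rng.normal(0.0, 1.0, size=2000)
pred_mean = y_true + rng.normal(0.0, 0.3, size=y_true.size)  # decent point predictions
pred_std = np.full_like(y_true, 0.15)                        # deliberately too narrow

expected = np.linspace(0.01, 0.99, 99)                       # target coverage levels
half_width = stats.norm.ppf(0.5 + expected / 2) * pred_std[:, None]
inside = np.abs(y_true[:, None] - pred_mean[:, None]) <= half_width
observed = inside.mean(axis=0)                               # empirical coverage

mean_abs_cal_error = np.mean(np.abs(observed - expected))
print(f"mean absolute calibration error = {mean_abs_cal_error:.3f}")  # 0 is perfect

An overconfident model (intervals too narrow, as here) shows observed coverage below the expected level; the toolbox packages this kind of diagnostic together with sharpness metrics, recalibration, and plots.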


Predicting with Confidence on Unseen Distributions

arXiv.org Machine Learning

Recent work has shown that the performance of machine learning models can vary substantially when models are evaluated on data drawn from a distribution that is close to but different from the training distribution. As a result, predicting model performance on unseen distributions is an important challenge. Our work connects techniques from the domain adaptation and predictive uncertainty literature, and allows us to predict model accuracy on challenging unseen distributions without access to labeled data. In the context of distribution shift, distributional distances are often used to adapt models and improve their performance on new domains; however, accuracy estimation, or other forms of predictive uncertainty, is often neglected in these investigations. Through investigating a wide range of established distributional distances, such as the Fréchet distance or Maximum Mean Discrepancy, we determine that they fail to induce reliable estimates of performance under distribution shift. On the other hand, we find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts. We specifically investigate the distinction between synthetic and natural distribution shifts and observe that, despite its simplicity, DoC consistently outperforms other quantifications of distributional difference. DoC reduces predictive error by almost half (46%) on several realistic and challenging distribution shifts, e.g., on the ImageNet-Vid-Robust and ImageNet-Rendition datasets.
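
The difference-of-confidences idea lends itself to a very small sketch: estimate the accuracy drop on a shifted dataset from the drop in the model's average top-class confidence, with no target labels. The probability arrays and the held-out source accuracy below are placeholder values, and the paper evaluates more refined variants that are omitted here.

# Minimal sketch of the difference-of-confidences (DoC) idea: predict the
# accuracy drop under shift from the drop in average top-class confidence.
# The arrays and the source accuracy are placeholder values.
import numpy as np

def average_confidence(probs: np.ndarray) -> float:
    """Mean max-softmax probability over a batch of predictions (N x classes)."""
    return float(probs.max(axis=1).mean())

def estimate_shifted_accuracy(probs_source: np.ndarray,
                              probs_target: np.ndarray,
                              accuracy_source: float) -> float:
    """Predict accuracy on the shifted data as source accuracy minus DoC."""
    doc = average_confidence(probs_source) - average_confidence(probs_target)
    return accuracy_source - doc

# Toy numbers: confidence falls from 0.92 to 0.81 under the shift, so the
# estimated accuracy falls by the same 0.11, from a measured 0.88 to 0.77.
probs_src = np.array([[0.94, 0.04, 0.02], [0.90, 0.07, 0.03]])
probs_tgt = np.array([[0.83, 0.10, 0.07], [0.79, 0.15, 0.06]])
print(estimate_shifted_accuracy(probs_src, probs_tgt, accuracy_source=0.88))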


Scientists Look Up To Artificial Intelligence Techniques to Improve Solar Data from the Sun

#artificialintelligence

Researchers are using artificial intelligence (AI) techniques to calibrate some of NASA's images of the Sun. Launched in 2010, NASA's Solar Dynamics Observatory (SDO) has provided high-definition images of the Sun for over a decade. The Atmospheric Imaging Assembly, or AIA, is one of two imaging instruments on SDO and looks constantly at the Sun, taking images across 10 wavelengths of ultraviolet light every 12 seconds. This creates a wealth of information about the Sun like no other, but, like all Sun-staring instruments, AIA degrades over time, and the data needs to be frequently calibrated, NASA said in a statement. To overcome this challenge, scientists decided to look at other options to calibrate the instrument, with an eye towards constant calibration.
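
None of these articles spells out the correction procedure, but the underlying idea can be sketched: each AIA wavelength channel dims at its own rate, so calibration amounts to estimating a per-channel degradation factor and rescaling the observed images by it. The factors below are invented placeholders; in the approach the articles describe, a trained model would supply them.

# Illustrative only (not NASA's pipeline): calibrate a channel by dividing the
# observed image by that channel's estimated relative sensitivity. The factors
# here are invented; a trained model would supply them in practice.
import numpy as np

AIA_EUV_CHANNELS = [94, 131, 171, 193, 211, 304, 335]   # wavelengths in angstroms
degradation = {171: 0.72, 193: 0.65, 304: 0.38}         # made-up example factors

def correct_channel(image: np.ndarray, wavelength: int) -> np.ndarray:
    """Rescale observed counts by the channel's assumed degradation factor."""
    return image / degradation.get(wavelength, 1.0)

raw_304 = np.random.default_rng(1).poisson(40.0, size=(4, 4)).astype(float)
calibrated_304 = correct_channel(raw_304, wavelength=304)
print(calibrated_304.sum() / raw_304.sum())             # brightness restored by 1/0.38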


NASA's new AI can stare at the sun without shades - and without damaging its vision

#artificialintelligence

When you were a kid, were you ever told not to look directly into the flaming eye of the Sun? It can be almost as dangerous for solar telescopes. The Atmospheric Imaging Assembly, or AIA, has been staring right into those flames for over a decade aboard the Solar Dynamics Observatory (SDO). AIA can see in 3 UV wavelengths and 7 extreme UV (EUV) wavelengths, all of them too short for the human eye to detect. AIA has to suffer for science.


Artificial Intelligence Helps Improve NASA's Eyes on the Sun

#artificialintelligence

A group of researchers is using artificial intelligence techniques to calibrate some of NASA's images of the Sun, helping improve the data that scientists use for solar research. A solar telescope has a tough job. Staring at the Sun takes a harsh toll, with a constant bombardment by a never-ending stream of solar particles and intense sunlight. Over time, the sensitive lenses and sensors of solar telescopes begin to degrade. To ensure the data such instruments send back is still accurate, scientists recalibrate periodically to make sure they understand just how the instrument is changing.


Soft Calibration Objectives for Neural Networks

arXiv.org Artificial Intelligence

Optimal decision making requires that classifiers produce uncertainty estimates consistent with their empirical accuracy. However, deep neural networks are often under- or over-confident in their predictions. Consequently, methods have been developed to improve the calibration of their predictive uncertainty, both during training and post hoc. In this work, we propose differentiable losses to improve calibration based on a soft (continuous) version of the binning operation underlying popular calibration-error estimators. When incorporated into training, these soft calibration losses achieve state-of-the-art single-model ECE across multiple datasets with less than a 1% decrease in accuracy. For instance, we observe an 82% reduction in ECE (70% relative to the post-hoc rescaled ECE) in exchange for a 0.7% relative decrease in accuracy compared with the cross-entropy baseline on CIFAR-100. When incorporated post-training, the soft-binning-based calibration error objective improves upon temperature scaling, a popular recalibration method. Overall, experiments across losses and datasets demonstrate that using calibration-sensitive procedures yields better uncertainty estimates under dataset shift than the standard practice of using a cross-entropy loss and post-hoc recalibration methods.
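
For reference, here is a minimal NumPy sketch of the hard-binned expected calibration error (ECE) estimator that the abstract builds on; the paper's contribution is to replace the hard bin assignment with a soft, differentiable one so that ECE-like terms can be optimized during training. The confidence and correctness arrays are toy values.

# Hard-binned ECE estimator (illustration only, not the paper's soft version):
# weighted average of |accuracy - confidence| over equal-width confidence bins.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Weighted average gap between accuracy and confidence per bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

conf = np.array([0.95, 0.9, 0.85, 0.8, 0.6, 0.55])   # predicted top-class probabilities
hit = np.array([1, 1, 0, 1, 1, 0], dtype=float)      # whether each prediction was correct
print(f"ECE = {expected_calibration_error(conf, hit):.3f}")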


Calibrating NASA's images of the Sun using AI

#artificialintelligence

NASA's Solar Dynamics Observatory, or SDO, has provided high-definition images of the Sun for over a decade since its launch on February 11, 2010. The images have provided a detailed look at various solar phenomena. SDO uses its Atmospheric Imaging Assembly (AIA) to look at the Sun continuously, taking images across 10 wavelengths every 12 seconds. This creates a wealth of information about our Sun that was never previously possible. Due to this constant staring, AIA degrades over time, and the data needs to be frequently calibrated.


NASA is using AI to take better pictures of the sun

#artificialintelligence

The Sun may be the most powerful source of energy in the Milky Way, but NASA researchers are using artificial intelligence to get a better view of the giant ball of gas. The US space agency is using machine learning on solar telescopes, including its Solar Dynamics Observatory (SDO), launched in 2010, and its Atmospheric Imaging Assembly (AIA), an imaging instrument that looks constantly at the Sun. This allows the agency to snap incredible pictures of the celestial giant while limiting the effects of solar particles and 'intense sunlight,' which begin to degrade lenses and sensors over time. The Sun goes through an 11-year cycle in which it moves from very active to less active. The cycle is tracked by sunspots, and the Sun is currently in a quiet phase.