Holidays auto-modelling for efficient time-series forecasting

#artificialintelligence

Forecasting in retail often means adjusting future sales for special events such as promo campaigns or holidays. The sales spikes we observe right before or after official, unofficial, or religious holidays have natural consumption patterns underneath: customers may want to stock up for the celebration or may need to replenish their supplies afterwards. Sometimes they also enjoy significant discounts, or simply have enough free time (thanks to official days off) for shopping during the holiday period. One way or another, here at RBC Group we add a calendar of local holidays to our time-series forecasting models as a rule of thumb. Some of the algorithms we use for predicting sales, such as Prophet, already offer a convenient out-of-the-box option for modelling holidays and recurring events.
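As a minimal illustration of Prophet's out-of-the-box holiday handling (not RBC Group's actual pipeline; the sales.csv file name and the new_year event windows below are assumptions for the sketch), each holiday is a named event date with optional windows of surrounding days, and a built-in national calendar can be added on top:

```python
import pandas as pd
from prophet import Prophet  # pip install prophet

# Hypothetical holiday calendar: each row marks an event date plus a
# window of surrounding days that may also show abnormal sales.
holidays = pd.DataFrame({
    "holiday": "new_year",
    "ds": pd.to_datetime(["2022-01-01", "2023-01-01", "2024-01-01"]),
    "lower_window": -3,   # pre-holiday stock-up period
    "upper_window": 2,    # post-holiday replenishment period
})

# df is assumed to be a daily sales history with Prophet's expected
# columns: 'ds' (date) and 'y' (units sold).
df = pd.read_csv("sales.csv", parse_dates=["ds"])

m = Prophet(holidays=holidays)
m.add_country_holidays(country_name="US")  # built-in national calendar
m.fit(df)

future = m.make_future_dataframe(periods=90)
forecast = m.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```

The lower/upper windows are what capture the pre-holiday stock-up and post-holiday replenishment effects described above, since each day in the window gets its own fitted effect.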


Low-dimensional Convolutional Neural Network for Solar Flares GOES Time-series Classification

#artificialintelligence

Space weather phenomena such as solar flares have massive destructive power when they reach a certain magnitude. Here, we explore a deep-learning approach to building a solar flare-forecasting model, while examining its limitations and feature-extraction ability based on the available Geostationary Operational Environmental Satellite (GOES) X-ray time-series data. We present a multilayer 1D convolutional neural network to forecast the probability of occurrence of M- and X-class solar flare events at 1, 3, 6, 12, 24, 48, 72, and 96 hr time frames. The forecasting models were trained and evaluated in two different scenarios, (1) random selection and (2) chronological selection, which were then compared in terms of common score metrics. We also compared our results to state-of-the-art flare-forecasting models.
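For readers unfamiliar with the architecture family, here is a minimal PyTorch sketch of a multilayer 1D CNN of the kind the abstract describes; the layer widths, kernel sizes, and 512-sample input window are illustrative assumptions, not values from the paper, and in the paper's setup a separate model would be trained per forecast horizon (1 to 96 hr):

```python
import torch
import torch.nn as nn

class FlareCNN(nn.Module):
    """Illustrative multilayer 1D CNN for binary flare forecasting.

    Input: a window of GOES X-ray flux samples, shape (batch, 1, seq_len).
    Output: probability that an M/X-class flare occurs within the chosen
    time frame. All layer widths are assumptions, not the paper's values.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(self.features(x)))

model = FlareCNN()
x = torch.randn(8, 1, 512)   # batch of 8 synthetic flux windows
print(model(x).shape)        # torch.Size([8, 1]) -- one probability each
```

The chronological-versus-random split comparison in the abstract matters precisely because time-series windows like these are autocorrelated: random splits leak near-duplicate windows between train and test sets.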


Lower bounds on the robustness to adversarial perturbations

Neural Information Processing Systems

The input-output mappings learned by state-of-the-art neural networks are significantly discontinuous. It is possible to cause a neural network used for image recognition to misclassify its input by applying very specific, hardly perceptible perturbations, known as adversarial perturbations. Many hypotheses have been proposed to explain the existence of these peculiar samples, as well as several methods to mitigate them; a proven explanation, however, remains elusive. In this work, we take steps towards a formal characterization of adversarial perturbations by deriving lower bounds on the magnitudes of perturbations necessary to change the classification of neural networks.
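To make the notion of a robustness lower bound concrete, here is a back-of-the-envelope sketch for the simplest case, a linear classifier, where the bound is exact; the paper's contribution is layer-wise bounds of this flavour for deep networks, which this toy code does not reproduce:

```python
import numpy as np

# For a linear classifier f(x) = w @ x + b, the smallest L2 perturbation
# that changes sign(f) has magnitude exactly |f(x)| / ||w||_2 -- the
# distance from x to the decision hyperplane. "Margin divided by a
# sensitivity term" is the template such lower bounds generalize.
rng = np.random.default_rng(0)
w, b = rng.normal(size=20), 0.5
x = rng.normal(size=20)

margin = abs(w @ x + b)
lower_bound = margin / np.linalg.norm(w)

# Verify: the minimal adversarial perturbation points along w and lands
# exactly on the decision boundary.
delta = -(w @ x + b) / (w @ w) * w
assert np.isclose(np.linalg.norm(delta), lower_bound)
print(f"no perturbation with ||delta|| < {lower_bound:.4f} can flip the label")
```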


Adversarial vulnerability for any classifier

Neural Information Processing Systems

Despite achieving impressive performance, state-of-the-art classifiers remain highly vulnerable to small, imperceptible, adversarial perturbations. This vulnerability has empirically proven very difficult to address. In this paper, we study the phenomenon of adversarial perturbations under the assumption that the data is generated with a smooth generative model. We derive fundamental upper bounds on the robustness to perturbations of any classification function, and prove the existence of adversarial perturbations that transfer well across different classifiers with small risk. Our analysis of robustness also provides insights into key properties of generative models, such as their smoothness and the dimensionality of their latent space.
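The classical ingredient behind classifier-independent bounds of this kind is the Gaussian isoperimetric inequality; the sketch below states it and the general shape of the resulting vulnerability bound under an L-Lipschitz generator (the exact constants and conditions in the paper may differ):

```latex
% Gaussian isoperimetric inequality (Borell; Sudakov--Tsirelson): for
% z ~ N(0, I_d), any measurable set A, and its eta-enlargement
% A_eta = { z : dist(z, A) <= eta },
\[
  \Phi^{-1}\!\bigl(\mathbb{P}(A_\eta)\bigr)
  \;\ge\; \Phi^{-1}\!\bigl(\mathbb{P}(A)\bigr) + \eta .
\]
% If the data are generated as x = g(z) with an L-Lipschitz generator g,
% a latent perturbation of norm eta moves the image by at most L*eta.
% Taking A to be the latent error region of any classifier with risk
% eps > 0 yields a vulnerability bound of this general shape:
\[
  \mathbb{P}\bigl(\exists\,\delta,\ \|\delta\| \le L\eta :\
      x + \delta \text{ is misclassified}\bigr)
  \;\ge\; \Phi\bigl(\Phi^{-1}(\varepsilon) + \eta\bigr).
\]
```

The striking feature is that the right-hand side approaches 1 quickly as eta grows, regardless of which classifier is used, which is what the phrase "adversarial vulnerability for any classifier" captures.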


Beyond Adversarial Training: Min-Max Optimization in Adversarial Attack and Defense

arXiv.org Machine Learning

The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training (AT), has been shown to be a state-of-the-art approach for enhancing adversarial robustness against norm-ball bounded input perturbations. Nonetheless, min-max optimization beyond the purpose of AT has not been rigorously explored in research on adversarial attacks and defenses. In particular, given a set of risk sources (domains), minimizing the maximal loss induced over the domain set can be reformulated as a general min-max problem that differs from AT, since the maximization is taken over the probability simplex of the domain set. Examples of this general formulation include attacking model ensembles, devising universal perturbations over input samples or data transformations, and generalized AT over multiple norm-ball threat models. We show that these problems can be solved under a unified and theoretically principled min-max optimization framework. Our proposed approach leads to substantial performance improvements over the uniform averaging strategy in four different tasks. Moreover, we show how the self-adjusted weighting factors on the probability simplex produced by our algorithms can be used to explain the importance of different attack and defense models.
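As a hedged sketch of this general template (toy quadratic losses stand in for per-domain attack losses, and the step sizes and mirror-ascent update are illustrative choices, not necessarily the paper's exact algorithm), the code below alternates gradient descent on the perturbation with exponentiated-gradient ascent on the domain weights, which keeps the weights on the probability simplex:

```python
import numpy as np

# Generalized min-max template:
#   min_delta  max_{w in simplex}  sum_i w_i * loss_i(delta)
# Each loss_i stands in for one risk source, e.g. one model in an
# ensemble under attack.
rng = np.random.default_rng(1)
K, d = 4, 10
targets = rng.normal(size=(K, d))   # each toy domain "wants" a different delta

def losses(delta):
    return 0.5 * np.sum((delta - targets) ** 2, axis=1)   # shape (K,)

def grads(delta):
    return delta - targets                                # shape (K, d)

delta = np.zeros(d)
w = np.full(K, 1.0 / K)
eta_delta, eta_w = 0.1, 0.5

for _ in range(500):
    w *= np.exp(eta_w * losses(delta))        # mirror ascent on the simplex
    w /= w.sum()
    delta -= eta_delta * (w @ grads(delta))   # descent on the weighted loss

print("domain weights:", np.round(w, 3))      # largest weight = hardest domain
print("worst-case loss:", losses(delta).max())
```

The learned weights w are the "self-adjusted weighting factors" the abstract refers to: at convergence they concentrate on the hardest domains, unlike the fixed uniform averaging baseline.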