Evaluation of autonomous systems under data distribution shifts

Daniel Sikar, Artur Garcez

arXiv.org Artificial Intelligence 

We posit that data can only be safe to use up to a certain threshold of the data distribution shift, after which control must be relinquished by the autonomous system and operation halted or handed to a human operator. With the use of a computer vision toy example we demonstrate that network predictive accuracy is impacted by data distribution shifts and propose distance metrics between training and testing data to define safe operation limits within said shifts. We conclude that beyond an empirically obtained threshold of the data distribution shift, it is unreasonable to expect network predictive accuracy not to degrade.

Zhang et al. [39] debated the need to rethink generalization by demonstrating how traditional benchmarking approaches fail to explain why large neural networks generalize well in practice. By randomizing target labels, their experiments show that state-of-the-art convolutional neural networks for image classification trained with SGD (stochastic gradient descent) are large enough to fit a random labelling of the training data. This is achieved with a simple two-layer neural network, which presents a "perfect finite sample […]
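The abstract's proposal can be illustrated with a minimal sketch. The paper itself does not publish code here, and its exact distance metrics are not given in this excerpt, so the example below is an assumption: it uses a histogram-based total variation distance between training and test samples as a stand-in shift metric, and an assumed hypothetical threshold (0.3) beyond which operation would be halted or handed to a human operator.

```python
import numpy as np

def distribution_shift_distance(train, test, bins=20):
    """Illustrative shift metric (not the paper's): total variation
    distance between histograms of a scalar feature, in [0, 1]."""
    lo = min(train.min(), test.min())
    hi = max(train.max(), test.max())
    p, _ = np.histogram(train, bins=bins, range=(lo, hi))
    q, _ = np.histogram(test, bins=bins, range=(lo, hi))
    p = p / p.sum()  # normalize counts to probability mass
    q = q / q.sum()
    return 0.5 * np.abs(p - q).sum()

def safe_to_operate(train, test, threshold=0.3):
    # Beyond the (hypothetical) empirically obtained threshold,
    # control should be relinquished and operation halted.
    return bool(distribution_shift_distance(train, test) <= threshold)

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5000)
in_dist = rng.normal(0.0, 1.0, 5000)   # same distribution as training
shifted = rng.normal(3.0, 1.0, 5000)   # strong covariate shift

print(safe_to_operate(train, in_dist))  # small distance: keep operating
print(safe_to_operate(train, shifted))  # large distance: hand to human
```

In practice the metric would be computed on learned feature representations rather than raw scalars, and the threshold calibrated against the observed degradation of predictive accuracy.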
