Safe Exploration for Interactive Machine Learning

Matteo Turchetta, Felix Berkenkamp, Andreas Krause

Neural Information Processing Systems 

In Interactive Machine Learning (IML), we iteratively make decisions and obtain noisy observations of an unknown function. While IML methods, e.g., Bayesian optimization and active learning, have been successful in applications, on real-world systems they must provably avoid unsafe decisions. To this end, safe IML algorithms must carefully learn about a priori unknown constraints without making unsafe decisions. Existing algorithms for this problem learn about the safety of all decisions to ensure convergence. This is sample-inefficient, as it explores decisions that are not relevant for the original IML objective.
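To make the setting concrete, the sketch below illustrates one common way such safety constraints are handled: an a priori unknown constraint is modelled with a Gaussian process, and a decision is only evaluated if a pessimistic lower confidence bound certifies it as safe. This is not the paper's algorithm; the objective f, constraint g, kernel hyperparameters, and the confidence-bound criterion are all illustrative assumptions.

```python
# Minimal sketch of safe decision making with a GP-modelled constraint.
# NOT the paper's method; it only illustrates certifying decisions via a
# confidence bound on an a priori unknown safety constraint.
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2, variance=1.0):
    """Squared-exponential kernel between the row vectors of A and B."""
    d = A[:, None, :] - B[None, :, :]
    return variance * np.exp(-0.5 * np.sum(d ** 2, axis=-1) / lengthscale ** 2)

def gp_posterior(X_obs, y_obs, X_query, noise=1e-3):
    """Posterior mean and standard deviation of a zero-mean GP."""
    K = rbf_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    K_s = rbf_kernel(X_query, X_obs)
    K_ss = rbf_kernel(X_query, X_query)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mean = K_s @ alpha
    v = np.linalg.solve(L, K_s.T)
    var = np.clip(np.diag(K_ss) - np.sum(v ** 2, axis=0), 1e-12, None)
    return mean, np.sqrt(var)

# Hypothetical unknown objective f and safety constraint g (g(x) >= 0 is "safe").
f = lambda x: np.sin(3 * x[:, 0])
g = lambda x: 1.0 - 2.0 * np.abs(x[:, 0] - 0.5)

rng = np.random.default_rng(0)
candidates = rng.uniform(0, 1, size=(200, 1))    # pool of candidate decisions
X_g = np.array([[0.5]])                          # known-safe starting decision
y_g = g(X_g)                                     # noisy constraint observations

beta = 2.0  # confidence-bound width
for step in range(20):
    mu, sigma = gp_posterior(X_g, y_g, candidates)
    lcb = mu - beta * sigma                      # pessimistic safety estimate
    safe = lcb >= 0.0                            # decisions certified as safe
    if not np.any(safe):
        break
    # Among certified-safe decisions, pick the one with the largest objective
    # value (evaluated directly here to keep the sketch short).
    idx = np.flatnonzero(safe)[np.argmax(f(candidates[safe]))]
    x_next = candidates[idx:idx + 1]
    X_g = np.vstack([X_g, x_next])
    y_g = np.concatenate([y_g, g(x_next) + 0.01 * rng.standard_normal(1)])

print(f"evaluated {len(X_g) - 1} decisions, each certified safe before evaluation")
```

Note the trade-off the abstract points to: to guarantee safety, the loop above must spend observations learning about the constraint over the candidate set, including at decisions that contribute little to the underlying objective.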
