Human-in-the-loop: The future of Machine Learning in Automated Electron Microscopy

Kalinin, Sergei V., Liu, Yongtao, Biswas, Arpan, Duscher, Gerd, Pratiush, Utkarsh, Roccapriore, Kevin, Ziatdinov, Maxim, Vasudevan, Rama

arXiv.org Artificial Intelligence 

Machine learning methods are progressively gaining acceptance in the electron microscopy community for de-noising, semantic segmentation, and dimensionality reduction of data post-acquisition. The introduction of APIs by major instrument manufacturers now allows the deployment of ML workflows on microscopes, not only for data analytics but also for real-time decision-making and feedback for microscope operation. However, the number of use cases for real-time ML remains remarkably small. Here, we discuss some considerations in designing ML-based active experiments and propose that the likely strategy for the next several years will be human-in-the-loop automated experiments (hAE). In this paradigm, the ML agent directly controls the beam position and the image and spectroscopy acquisition functions, while the human operator monitors experiment progression in the real and feature spaces of the system and tunes the policies of the ML agent to steer the experiment towards specific objectives.

One of the hallmarks of the meeting was the large number of presentations on machine learning (ML) in microscopy, ranging from denoising and unsupervised data analysis via variational autoencoders to supervised learning for semantic segmentation and feature identification. Remarkably, by now most manufacturers offer or have plans to offer Python application programming interfaces (APIs), allowing the deployment of these codes on operational microscopes. From this perspective, the technical barriers to the broad implementation of automated microscopy, in which ML algorithms analyze the data streaming from instrument detectors and make decisions based on these data, are lower than ever.
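The hAE paradigm described above can be illustrated with a minimal sketch: an agent proposes beam positions from streamed measurements, while a human-tunable policy weight trades off exploration against exploitation. All names, the simulated `measure` function, and the nearest-neighbor scoring rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(x, y):
    """Simulated spectroscopic response at beam position (x, y), with noise.
    Stands in for a real detector stream; purely hypothetical."""
    return np.exp(-((x - 0.3) ** 2 + (y - 0.7) ** 2) / 0.02) + 0.05 * rng.standard_normal()

class HAELoop:
    """Toy human-in-the-loop acquisition: the ML agent proposes beam
    positions; the operator steers it via a single policy knob."""

    def __init__(self, explore_weight=0.5):
        self.explore_weight = explore_weight  # human-tunable policy parameter
        self.X, self.y = [], []               # measured positions and values

    def set_policy(self, explore_weight):
        # The human operator intervenes here to steer the experiment.
        self.explore_weight = explore_weight

    def propose(self, n_candidates=256):
        cand = rng.random((n_candidates, 2))
        if not self.X:
            return cand[0]  # no data yet: pick an arbitrary position
        X, y = np.asarray(self.X), np.asarray(self.y)
        d = np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=-1)
        exploit = y[d.argmin(axis=1)]  # value at nearest measured point
        explore = d.min(axis=1)        # distance to nearest measured point
        score = (1 - self.explore_weight) * exploit + self.explore_weight * explore
        return cand[score.argmax()]

    def step(self):
        x, y = self.propose()
        self.X.append((x, y))
        self.y.append(measure(x, y))

loop = HAELoop(explore_weight=0.9)  # start exploratory
for _ in range(20):
    loop.step()
loop.set_policy(0.1)                # operator shifts toward exploitation
for _ in range(20):
    loop.step()
print(f"best measured value: {max(loop.y):.3f}")
```

In a real deployment the `measure` call would be replaced by the manufacturer's Python API, and the policy update would come from an operator dashboard rather than a direct method call; the structure of the loop is what matters here.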
