Providing Digital Provenance: from Modeling through Production

#artificialintelligence

At the R User conference, I spoke on digital provenance, the importance of reproducible research, and how Domino has solved many of the challenges data scientists face when attempting this best practice. What are you doing to mitigate the many risks associated with a lack of provenance? Reproducibility is important throughout the entire data science process. As recent studies have shown, subconscious biases in the exploratory analysis phase of a project can have vast repercussions on final conclusions. The problems with managing the deployment and life cycle of models in production are vast and varied, and reproducibility often stops at the level of the individual analyst.


AI Safety and Reproducibility: Establishing Robust Foundations for the Neuroscience of Human Values

arXiv.org Artificial Intelligence

We propose a systematic effort to identify and replicate key findings in neuroscience and allied fields related to understanding human values. Our aim is to ensure that research underpinning the value alignment problem of artificial intelligence has been sufficiently validated to play a role in the design of AI systems.


Science's "Reproducibility Crisis" Is Now Political Ammunition

WIRED

This story originally appeared on Undark and is part of the Climate Desk collaboration. David Randall and Christopher Welser are unlikely authorities on the reproducibility crisis in science. Randall, a historian and librarian, is the director of research at the National Association of Scholars, a small higher education advocacy group. Welser teaches Latin at a Christian college in Minnesota. Neither has published anything on replication or reproducibility.


Standardizing the Machine Learning Lifecycle

#artificialintelligence

Successfully building and deploying a machine learning model can be difficult to do once. Enabling other data scientists (or yourself) to reproduce your pipeline, compare the results of different versions, track what's running where, and redeploy and roll back updated models is much harder. In this eBook, we'll explore in greater depth what makes the ML lifecycle so challenging compared to the traditional software development lifecycle, and share the Databricks approach to addressing these challenges. It covers the key challenges organizations face when managing ML models throughout their lifecycle, how to overcome them, and how MLflow, an open source framework unveiled by Databricks, can help address them, specifically around experiment tracking, project reproducibility, and model deployment.
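
As a rough illustration of the experiment-tracking piece, here is a minimal sketch using MLflow's Python tracking API; the dataset, model, and parameter values are placeholders for illustration, not examples from the eBook:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder data and model; any estimator and dataset would work the same way.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    n_estimators = 100
    mlflow.log_param("n_estimators", n_estimators)   # record the hyperparameter

    model = RandomForestClassifier(n_estimators=n_estimators, random_state=42)
    model.fit(X_train, y_train)

    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_metric("accuracy", acc)               # record the result

    mlflow.sklearn.log_model(model, "model")         # version the trained artifact
```

Logging parameters, metrics, and the serialized model against each run is what later lets you compare versions and redeploy or roll back a specific one.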


Building a Reproducible Machine Learning Pipeline

arXiv.org Machine Learning

Reproducibility of modeling is a problem for any machine learning practitioner, whether in industry or academia. The consequences of an irreproducible model can include significant financial costs, lost time, and even loss of personal reputation (if results prove unable to be replicated). This paper first discusses the problems we have encountered while building a variety of machine learning models, and then describes the framework we built to tackle the problem of model reproducibility. The framework comprises four main components (data, feature, scoring, and evaluation layers), each of which is itself composed of well-defined transformations. This enables us not only to exactly replicate a model, but also to reuse the transformations across different models. As a result, the platform has dramatically increased the speed of both offline and online experimentation while also ensuring model reproducibility.
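
To make the layered design concrete, here is a hypothetical sketch of a pipeline built from well-defined transformations; the layer contents and function names are invented for illustration and are not taken from the paper's actual framework:

```python
from dataclasses import dataclass
from typing import Callable, Sequence

# Each transformation is a pure function mapping a state dict to a new state dict.
Transform = Callable[[dict], dict]

@dataclass(frozen=True)
class Layer:
    name: str
    transforms: Sequence[Transform]

    def run(self, state: dict) -> dict:
        # Apply each well-defined transformation in a fixed order.
        for transform in self.transforms:
            state = transform(state)
        return state

def run_pipeline(layers: Sequence[Layer], state: dict) -> dict:
    # Because every layer is a fixed sequence of deterministic transforms,
    # re-running the pipeline on the same inputs reproduces the same result.
    for layer in layers:
        state = layer.run(state)
    return state

# Example wiring using the four layers named in the abstract (toy logic).
pipeline = [
    Layer("data", [lambda s: {**s, "rows": [r for r in s["raw"] if r is not None]}]),
    Layer("feature", [lambda s: {**s, "features": [len(str(r)) for r in s["rows"]]}]),
    Layer("scoring", [lambda s: {**s, "scores": [f * 0.5 for f in s["features"]]}]),
    Layer("evaluation", [lambda s: {**s, "mean_score": sum(s["scores"]) / len(s["scores"])}]),
]
result = run_pipeline(pipeline, {"raw": ["a", None, "abc"]})
print(result["mean_score"])  # deterministic: always 1.0 for this input
```

Because each layer is just an ordered list of deterministic functions, the same transformations can be dropped into other pipelines, which is the reuse property the abstract describes.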