
The Reproducibility Crisis Is Good for Science

Slate

Scientific funders have joined efforts to make studies more reliable. This January, the National Institutes of Health required that grant applicants explicitly defend the validity of their experimental design and materials. CHDI, a research foundation, has created a new position devoted to helping scientists plan robust experiments. In 2014, dozens of journals endorsed principles aimed at making research more rigorous and transparent. Nature and other journals now ask authors to complete checklists describing experimental design.


ICLR 2018 Reproducibility Challenge

@machinelearnbot

Essentially, think of your role as that of an inspector verifying the validity of the paper's experimental results and conclusions. In some instances, your role will also extend to helping the authors improve the quality of their work and their paper.


How Do We Address The Reproducibility Crisis In Artificial Intelligence?

#artificialintelligence

Yet a reproducibility crisis is creating a cloud of uncertainty over the entire field, eroding the confidence on which the AI economy depends.


[Policy Forum] Enhancing reproducibility for computational methods

Science

Over the past two decades, computational methods have radically changed the ability of researchers from all areas of scholarship to process and analyze data and to simulate complex systems. But with these advances come challenges that are contributing to broader concerns over irreproducibility in the scholarly literature, among them the lack of transparency in disclosure of computational methods. Current reporting methods are often uneven, incomplete, and still evolving. We present a novel set of Reproducibility Enhancement Principles (REP) targeting disclosure challenges involving computation. These recommendations, which build upon more general proposals from the Transparency and Openness Promotion (TOP) guidelines (1) and recommendations for field data (2), emerged from workshop discussions among funding agencies, publishers and journal editors, industry participants, and researchers representing a broad range of domains. Although some of these actions may be aspirational, we believe it is important to recognize and move toward ameliorating irreproducibility in computational research.


Reproducibility in computational research

#artificialintelligence

Jane Frazier spoke at our research team meeting today on "Reproducibility in computational research", and we had a very stimulating and lively discussion about the issues involved. One interesting idea was that reproducibility lies on a scale, and we can all aim to move further along that scale, making our own research more reproducible. For example: Can you reproduce your results tomorrow on the same computer, with the same software installed?
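
That closing question suggests a concrete first step along the scale: capture enough about a run to repeat it. As a minimal sketch (my own illustration, not from the talk), the hypothetical Python script below seeds the random number generators and writes a small manifest of the software environment alongside the results, so the same computation can be checked tomorrow on the same machine:

    import json
    import platform
    import random
    import sys

    import numpy as np

    SEED = 42  # fixed seed so the stochastic steps repeat exactly

    def run_experiment(seed: int) -> dict:
        """Toy stand-in for a real analysis: seed all RNGs, then compute."""
        random.seed(seed)
        np.random.seed(seed)
        sample = np.random.normal(loc=0.0, scale=1.0, size=1000)
        return {"mean": float(sample.mean()), "std": float(sample.std())}

    def environment_manifest() -> dict:
        """Record the software stack the result depends on."""
        return {
            "python": sys.version,
            "platform": platform.platform(),
            "numpy": np.__version__,
            "seed": SEED,
        }

    if __name__ == "__main__":
        results = run_experiment(SEED)
        with open("results.json", "w") as f:
            json.dump(
                {"results": results, "environment": environment_manifest()},
                f,
                indent=2,
            )
        print(results)

Pinning the seed makes the stochastic parts repeat exactly, and recording versions makes it clear whether a later failure to reproduce stems from a changed environment rather than from the analysis itself.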