Last week was an eventful one for Machine Learning, with several developments that could have long-lasting consequences for the field. This article goes over some of them so that you're better informed about these important aspects of Machine Learning and the discussions surrounding them. The events might seem disjointed, but they offer different perspectives on a very important conversation in machine learning. GLUE (General Language Understanding Evaluation) and SuperGLUE are benchmarks for Natural Language Processing.

My 5 Year Machine Learning Journey


Recently, my content crossed 100,000 views. I've been writing for about a year (and really picked up consistency over this summer). I never expected this level of viewership or the positive reception my work has received, so it came as quite a surprise. It got me thinking about my AI/ML/Tech journey.

Ideas for Improving the Field of Machine Learning: Summarizing Discussion from the NeurIPS 2019 Retrospectives Workshop Artificial Intelligence

This report documents ideas for improving the field of machine learning, which arose from discussions at the ML Retrospectives workshop at NeurIPS 2019. The goal of this report is to disseminate these ideas more broadly, and in turn encourage continuing discussion about how the field could improve along these axes. We focus on the topics that were most discussed at the workshop: incentives for encouraging alternate forms of scholarship, restructuring the review process, participation from academia and industry, and how we might better train computer scientists as scientists. Videos from the workshop can be accessed at Lowe et al. (2019).

5 Unsexy Truths About Working in Machine Learning


I work in Machine Learning. To readers/viewers of my work, this won't come as a surprise. If you don't know me as well, feel free to check out my LinkedIn, articles, or videos for a better understanding of my skills and experience. My specialty is statistical analysis. I've worked in road safety, health system analysis, big data analysis for a bank, disease detection, and biometric recreation, and I currently work in supply chain analysis.

Improving Reproducibility in Machine Learning Research (A Report from the NeurIPS 2019 Reproducibility Program) Machine Learning

One of the challenges in machine learning research is to ensure that presented and published results are sound and reliable. Reproducibility, that is, obtaining results similar to those presented in a paper or talk, using the same code and data (when available), is a necessary step in verifying the reliability of research findings. Reproducibility is also an important step toward promoting open and accessible research, thereby allowing the scientific community to quickly integrate new findings and convert ideas to practice. Reproducibility also promotes the use of robust experimental workflows, which can reduce unintentional errors. In 2019, the Neural Information Processing Systems (NeurIPS) conference, the premier international conference for research in machine learning, introduced a reproducibility program designed to improve the standards across the community for how we conduct, communicate, and evaluate machine learning research. The program contained three components: a code submission policy, a community-wide reproducibility challenge, and the inclusion of the Machine Learning Reproducibility checklist as part of the paper submission process. In this paper, we describe each of these components, how each was deployed, and what we were able to learn from this initiative.