NeurIPS 2019 author feedback (9bc99c590be3511b8d53741684ef574c-AuthorFeedback.pdf)
We thank the reviewers for the insightful comments. Due to space limitations, we only discuss the major comments below. We were not able to finish the OOD experiments in time and will address them in future work.
NeurIPS 2019: Pseudo-Extended Markov chain Monte Carlo (paper ID: 2415). We would like to thank the reviewers for dedicating their time to review our paper and for the helpful feedback they have provided.
All of the reviewers' minor comments and corrections have been incorporated into the paper. Below, we address the reviewers' main questions. The paper focuses on HMC sampling; unfortunately, HMC cannot be applied in the discrete setting due to discontinuous gradients. "How do you recommend setting π and g to best estimate β?" It is quite straightforward to implement pseudo-extended HMC within Stan. "As a minor comment, in line 58 it would be good to state that delta is an arbitrary differentiable function." This is a good point, and we have corrected it in the paper. "The experiments in 4.1 and 4.2 use the RMSE of the target variables, which is quite unusual."
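The exchange above centers on HMC sampling. As a minimal illustration of plain HMC (my own sketch, not the pseudo-extended method or the authors' Stan implementation), one transition consists of leapfrog integration of Hamiltonian dynamics followed by a Metropolis correction:

```python
import numpy as np

def hmc_step(x, log_prob, log_prob_grad, step_size=0.1, n_leapfrog=20, rng=None):
    """One HMC transition targeting exp(log_prob): leapfrog + Metropolis accept."""
    rng = rng if rng is not None else np.random.default_rng()
    p = rng.standard_normal(x.shape)                # resample momentum
    x_new, p_new = x.copy(), p.copy()
    # Leapfrog integration: half momentum step, alternating full steps, half step
    p_new += 0.5 * step_size * log_prob_grad(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new
        p_new += step_size * log_prob_grad(x_new)
    x_new += step_size * p_new
    p_new += 0.5 * step_size * log_prob_grad(x_new)
    # Metropolis correction for the discretization error of the integrator
    log_accept = (log_prob(x_new) - 0.5 * p_new @ p_new) \
               - (log_prob(x)     - 0.5 * p @ p)
    if np.log(rng.uniform()) < log_accept:
        return x_new
    return x
```

The gradient requirement in the leapfrog updates is exactly why HMC, as noted above, does not apply directly to discrete targets: `log_prob_grad` does not exist there.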
Post-Workshop Report on Science meets Engineering in Deep Learning, NeurIPS 2019, Vancouver
Sagun, Levent, Gulcehre, Caglar, Romero, Adriana, Rostamzadeh, Negar, Mannelli, Stefano Sarao
Science meets Engineering in Deep Learning took place in Vancouver as part of the Workshop section of NeurIPS 2019. As organizers of the workshop, we created the following report in an attempt to isolate emerging topics and recurring themes that were presented throughout the event. Deep learning can still be a complex mix of art and engineering despite its tremendous success in recent years. The workshop aimed to gather people across the board to address seemingly contrasting challenges in the problems they are working on. As part of the call for the workshop, particular attention was given to the interdependence of architecture, data, and optimization, which gives rise to an enormous landscape of design and performance intricacies that are not well understood. This year, our goal was to emphasize the following directions in our community: (i) identify obstacles on the way to better models and algorithms; (ii) identify the general trends from which we would like to build scientific and potentially theoretical understanding; and (iii) rigorously design scientific experiments and experimental protocols whose purpose is to resolve and pinpoint the origin of mysteries while ensuring reproducibility and robustness of conclusions. During the event, these topics emerged and were broadly discussed, matching our expectations and paving the way for new studies in these directions. While we acknowledge that the text is naturally biased as it comes through our lens, we have attempted to highlight the outcomes of the workshop fairly.
Ideas for Improving the Field of Machine Learning: Summarizing Discussion from the NeurIPS 2019 Retrospectives Workshop
Sodhani, Shagun, Jaiswal, Mayoore S., Baker, Lauren, Sinha, Koustuv, Shneider, Carl, Henderson, Peter, Lehman, Joel, Lowe, Ryan
This report documents ideas for improving the field of machine learning, which arose from discussions at the ML Retrospectives workshop at NeurIPS 2019. The goal of the report is to disseminate these ideas more broadly, and in turn encourage continuing discussion about how the field could improve along these axes. We focus on topics that were most discussed at the workshop: incentives for encouraging alternate forms of scholarship, restructuring the review process, participation from academia and industry, and how we might better train computer scientists as scientists. Videos from the workshop can be accessed at Lowe et al. (2019).
How To Write A Top ML Paper: A Checklist From NeurIPS
Thousands of machine learning papers get published every week, making it almost impossible to find the most useful ones in this vast and growing list. A paper typically gets credit when it finds a real-world application, is applauded by top researchers in the community, or is accepted at a prestigious AI conference such as NeurIPS, ICML, or ICLR. Usually, these conferences act as platforms to promote research. The acceptance guidelines for these top conferences vary, but all are stringent nonetheless. The reviewers who skim through papers use rules of thumb, such as the availability of code and the replicability of results, to judge a paper.
Preferred Networks at NeurIPS 2019 Preferred Networks Research & Development
Preferred Networks, as a research-oriented AI startup, participates every year in NeurIPS, the world's biggest machine learning conference. This post highlights our accomplishments and activities at NeurIPS 2019. We are very excited to be a part of it and look forward to seeing top ML researchers from all over the world there! This year, four papers from Preferred Networks have been accepted for poster presentation. Three of them are based on ex-interns' work, and we are very proud of their dedication and high-quality research.
Improving Reproducibility in Machine Learning Research (A Report from the NeurIPS 2019 Reproducibility Program)
Pineau, Joelle, Vincent-Lamarre, Philippe, Sinha, Koustuv, Larivière, Vincent, Beygelzimer, Alina, d'Alché-Buc, Florence, Fox, Emily, Larochelle, Hugo
One of the challenges in machine learning research is to ensure that presented and published results are sound and reliable. Reproducibility, that is, obtaining similar results to those presented in a paper or talk using the same code and data (when available), is a necessary step to verify the reliability of research findings. Reproducibility is also an important step in promoting open and accessible research, allowing the scientific community to quickly integrate new findings and convert ideas to practice. Reproducibility also promotes the use of robust experimental workflows, which can reduce unintentional errors. In 2019, the Neural Information Processing Systems (NeurIPS) conference, the premier international conference for research in machine learning, introduced a reproducibility program designed to improve the standards across the community for how we conduct, communicate, and evaluate machine learning research. The program contained three components: a code submission policy, a community-wide reproducibility challenge, and the inclusion of the Machine Learning Reproducibility checklist as part of the paper submission process. In this paper, we describe each of these components, how it was deployed, and what we were able to learn from this initiative.
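A precondition for the kind of reproducibility described above is controlling sources of randomness in experiment code. As a minimal sketch (my own illustration, not part of the NeurIPS checklist itself), fixing the seeds of all random number generators lets a run be re-obtained exactly:

```python
import random
import numpy as np

def set_seed(seed: int) -> None:
    """Seed every random number generator the experiment uses."""
    random.seed(seed)
    np.random.seed(seed)
    # In a deep learning project one would also seed the framework in use,
    # e.g. torch.manual_seed(seed), and log the seed alongside the results.

set_seed(42)
a = np.random.rand(3)
set_seed(42)
b = np.random.rand(3)
assert np.allclose(a, b)  # identical seeds give identical runs
```

Logging the seed with the results is what makes the run verifiable by others, which is the point of the code submission policy.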