We are glad that the reviewers found MetaSDF to be a "novel approach [with] many potential benefits" (R1), the "motivation [...] very convincing and perfectly pitched to the reader" (R3), and "providing interesting insights" [...]. We believe that this will spur follow-up work benefitting both of these promising research directions.

Further ShapeNet classes (R1, R3): We have trained models on the ShapeNet "benches" class; please see the qualitative results. We note that 2D results (Sec. [...]) [...] Figure 1 of DeepSDF (see qualitative result in (c)) with no further fine-tuning or heuristics. We will add experiments and comparisons with further classes to the final manuscript.

Related work & IM-NET (R2): We will discuss DISN in depth. We benchmark against this architecture (see submission Table 3, Figure 1).
Spatial Mixture-of-Experts
Many kinds of data have an underlying dependence on spatial location; it may be weather on the Earth, a simulation on a mesh, or a registered image. Yet this structure is rarely taken advantage of, and it violates common assumptions made by many neural network layers, such as translation equivariance. Further, many works that do incorporate locality fail to capture fine-grained structure.
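To make the idea concrete, below is a minimal sketch of a location-conditioned mixture-of-experts layer in PyTorch. It is our illustration under stated assumptions, not the paper's architecture: the class name SpatialMoE, the coordinate-based gate, and the 1x1-convolution experts are all placeholders.

import torch
import torch.nn as nn

class SpatialMoE(nn.Module):
    """Illustrative layer: each spatial location is routed to experts
    based on its normalized (row, col) position, so the layer is
    deliberately NOT translation equivariant."""
    def __init__(self, in_ch, out_ch, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, kernel_size=1) for _ in range(n_experts)]
        )
        # Gate maps normalized (y, x) coordinates to per-location expert weights.
        self.gate = nn.Sequential(
            nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, n_experts)
        )

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device)
        xs = torch.linspace(-1, 1, w, device=x.device)
        coords = torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)  # (h, w, 2)
        weights = self.gate(coords).softmax(dim=-1)          # (h, w, E)
        weights = weights.permute(2, 0, 1)[None, :, None]    # (1, E, 1, h, w)
        outs = torch.stack([e(x) for e in self.experts], 1)  # (b, E, out_ch, h, w)
        return (weights * outs).sum(dim=1)                   # (b, out_ch, h, w)

Because the gate conditions on absolute position rather than content, different regions of the input are handled by different experts, which is exactly the locality the abstract argues standard layers ignore.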
The State of Data Curation at NeurIPS: An Assessment of Dataset Development Practices in the Datasets and Benchmarks Track
If labels are obtained from elsewhere: documentation discusses where they were obtained from, how they were reused, and how the collected annotations and labels are combined with existing ones.

DATA QUALITY

10. Suitability: Suitability is a measure of a dataset's quality with regards to the purpose. Documentation discusses how the dataset is appropriate for the defined purpose. Documentation discusses how [...]
The State of Data Curation at NeurIPS: An Assessment of Dataset Development Practices in the Datasets and Benchmarks Track
Data curation is a field with origins in librarianship and archives, whose scholarship and thinking on data issues go back centuries, if not millennia. The field of machine learning is increasingly recognizing the importance of data curation to the advancement of both applications and fundamental understanding of machine learning models - evidenced not least by the creation of the Datasets and Benchmarks track itself. This work provides an analysis of recent dataset development practices at NeurIPS through the lens of data curation. We present an evaluation framework for dataset documentation, consisting of a rubric and toolkit developed through a thorough literature review of data curation principles. We use the framework to systematically assess the strengths and weaknesses in the dataset development practices of 60 datasets published in the NeurIPS Datasets and Benchmarks track from 2021 to 2023.
A Network Architecture
For a fair comparison, our network follows the same structure as CEM-RL [19]. The architecture is originally from Fujimoto et al. [5]; the only difference is using tanh instead of ReLU. We use (400, 300) hidden layers for all environments except Humanoid-v2. For Humanoid-v2, we used (256, 256) as in TD3 [5]. Most of the hyperparameters have the same values as in CEM-RL [19].
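For concreteness, a minimal PyTorch sketch of an actor with this shape follows. The Actor class, the max_action scaling, and the exact placement of activations are our illustration of the description above, not the authors' code.

import torch
import torch.nn as nn

class Actor(nn.Module):
    """Sketch of the described actor: (400, 300) hidden layers with tanh
    activations (replacing the ReLU of the original TD3 architecture)."""
    def __init__(self, state_dim, action_dim, max_action=1.0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 400), nn.Tanh(),
            nn.Linear(400, 300), nn.Tanh(),
            nn.Linear(300, action_dim), nn.Tanh(),  # bound actions to [-1, 1]
        )
        self.max_action = max_action

    def forward(self, state):
        return self.max_action * self.net(state)

For Humanoid-v2, the (400, 300) widths would simply be replaced by (256, 256).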
An Efficient Asynchronous Method for Integrating Evolutionary and Gradient-based Policy Search
Deep reinforcement learning (DRL) algorithms and evolution strategies (ES) have been applied to various tasks, showing excellent performance. The two have complementary properties: DRL offers good sample efficiency but poor stability, while ES offers the opposite. Recently, there have been attempts to combine these algorithms, but these methods rely entirely on a synchronous update scheme, which prevents them from fully exploiting the parallelism of ES. To address this challenge, we introduce an asynchronous update scheme capable of good time efficiency and diverse policy exploration. In this paper, we introduce Asynchronous Evolution Strategy-Reinforcement Learning (AES-RL), which maximizes the parallel efficiency of ES and integrates it with policy gradient methods. Specifically, we propose 1) a novel framework to merge ES and DRL asynchronously and 2) various asynchronous update methods that take full advantage of asynchronism, ES, and DRL, namely exploration and time efficiency, stability, and sample efficiency, respectively. The proposed framework and update methods are evaluated on continuous control benchmarks, showing superior performance as well as time efficiency compared to previous methods.
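As a rough illustration of what "asynchronous" means here, the toy loop below updates a search distribution as soon as any worker finishes, with no batch barrier. This is a generic sketch under our own assumptions, not the authors' AES-RL: the fitness function, acceptance rule, and hyperparameters are placeholders, and the RL gradient step and the paper's mean/variance update rules are omitted.

import concurrent.futures as cf
import numpy as np

def evaluate(params):
    """Placeholder fitness; stands in for an episode rollout's return."""
    return -float(np.sum((params - 1.0) ** 2))

def async_es(dim=8, n_workers=4, n_evals=200, lr=0.2, sigma=0.3, seed=0):
    rng = np.random.default_rng(seed)
    mean = np.zeros(dim)
    best_fit = evaluate(mean)
    with cf.ThreadPoolExecutor(n_workers) as pool:
        pending = {}

        def submit():
            cand = mean + sigma * rng.standard_normal(dim)
            pending[pool.submit(evaluate, cand)] = cand

        for _ in range(n_workers):
            submit()
        for _ in range(n_evals):
            # Update as soon as ANY worker returns -- no synchronization barrier.
            fut = next(cf.as_completed(pending))
            cand = pending.pop(fut)
            fit = fut.result()
            if fit > best_fit:          # naive acceptance rule (placeholder)
                best_fit = fit
                mean = mean + lr * (cand - mean)  # pull mean toward the winner
            submit()                    # keep every worker busy
    return mean, best_fit

The key property is that no worker ever idles waiting for the slowest rollout in a generation, which is where the time efficiency claimed in the abstract comes from.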
reflecting reviewers' comments which are not mentioned in this response
We thank the reviewers for their reviews, which provide meaningful insight and constructive feedback. The result was reversed in Hopper, where RL actors contributed 200.86 while EA actors contributed 363.53. Therefore, all performance scores are measured at a fixed number of interaction steps. R2: Ablation study is missing. We presented the effect of the variance update rule in Appendix C.3 by comparing the results [...]. Then, we provided all combinations of our proposed mean and variance update rules in Table 2. We will add a section so that this can be seen at a glance.
Autoformalizing Mathematical Statements by Symbolic Equivalence and Semantic Consistency
Zenan Li, Yifan Wu, Zhaoyu Li, Xinming Wei
Autoformalization, the task of automatically translating natural language descriptions into a formal language, poses a significant challenge across various domains, especially in mathematics. Recent advancements in large language models (LLMs) have unveiled their promising capabilities to formalize even competition-level math problems. However, we observe a considerable discrepancy between pass@1 and pass@k accuracies in LLM-generated formalizations. To address this gap, we introduce a novel framework that scores and selects the best result from k autoformalization candidates based on two complementary self-consistency methods: symbolic equivalence and semantic consistency. Specifically, symbolic equivalence identifies the logical homogeneity among autoformalization candidates using automated theorem provers, and semantic consistency evaluates the preservation of the original meaning by informalizing the candidates and computing the similarity between the embeddings of the original and informalized texts. Our extensive experiments on the MATH and miniF2F datasets demonstrate that our approach significantly enhances autoformalization accuracy, achieving up to 0.22-1.35x relative improvements.
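The semantic-consistency side of the framework can be sketched directly from the description above: informalize each formal candidate back to natural language, embed both texts, and score by embedding similarity. In the sketch below, informalize and embed are placeholder callables (e.g. an LLM call and a sentence encoder of your choice), not components named by the paper.

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def semantic_consistency(statement, candidates, informalize, embed):
    """Score each formal candidate by round-tripping it back to natural
    language and comparing embeddings with the original statement."""
    ref = embed(statement)
    return [cosine(ref, embed(informalize(c))) for c in candidates]

def select_best(statement, candidates, informalize, embed):
    # In the full framework this score would be combined with a symbolic
    # equivalence signal (e.g. the size of the ATP-verified equivalence
    # class each candidate belongs to); here we rank by semantic
    # consistency alone for illustration.
    scores = semantic_consistency(statement, candidates, informalize, embed)
    return candidates[int(np.argmax(scores))]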
Supplementary Material for "AllClear: A Comprehensive Dataset and Benchmark for Cloud Removal in Satellite Imagery"
In Sec. 2 we include a datasheet for our dataset following the methodology from "Datasheets for Datasets" (Gebru et al. [2021]). The data is publicly available at https://allclear.cs.cornell.edu. In this section, we include the prompts from Gebru et al. [2021] in blue, followed by our answers.

For what purpose was the dataset created? Was there a specific task in mind? The dataset was created to facilitate research development on cloud removal in satellite imagery. Specifically, our task is more temporally aligned than previous benchmarks.
AllClear: A Comprehensive Dataset and Benchmark for Cloud Removal in Satellite Imagery
Clouds in satellite imagery pose a significant challenge for downstream applications. A major challenge in current cloud removal research is the absence of a comprehensive benchmark and a sufficiently large and diverse training dataset. To address this problem, we introduce AllClear, the largest public dataset for cloud removal, featuring 23,742 globally distributed regions of interest (ROIs) with diverse land-use patterns, comprising 4 million images in total. Each ROI includes complete temporal captures from the year 2022, with (1) multi-spectral optical imagery from Sentinel-2 and Landsat 8/9, (2) synthetic aperture radar (SAR) imagery from Sentinel-1, and (3) auxiliary remote sensing products such as cloud masks and land cover maps. We validate the effectiveness of our dataset by benchmarking performance, demonstrating a scaling law (the PSNR rises from 28.47 to 33.87 with 30x more data), and conducting ablation studies on the temporal length and the importance of individual modalities. This dataset aims to provide comprehensive coverage of the Earth's surface and promote better cloud removal results.
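For reference, the PSNR quoted above is the standard peak signal-to-noise ratio, 10 * log10(MAX^2 / MSE). The minimal sketch below is our illustration of that formula, not the benchmark's evaluation code; max_val=1.0 assumes normalized pixel values.

import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB; max_val is the maximum possible
    pixel value (1.0 for normalized reflectance)."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)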