504fa7e518da9d1b53a233ed20a38b46-Paper-Conference.pdf

Neural Information Processing Systems

Trained on vast corpora of human language, language models demonstrate emergent human-like reasoning abilities. Yet they are still far from true intelligence, which opens up intriguing opportunities to explore parallels between human and model behaviors. In this work, we study the ability to skip steps in reasoning--a hallmark of human expertise developed through practice. Unlike humans, who may skip steps to enhance efficiency or to reduce cognitive load, models do not inherently possess such motivations to minimize reasoning steps. To address this, we introduce a controlled framework that stimulates step-skipping behavior by iteratively refining models to generate shorter yet accurate reasoning paths. Empirical results indicate that models can develop this step-skipping ability under our guidance. Moreover, after fine-tuning on expanded datasets that include both complete and skipped reasoning sequences, the models can not only solve tasks more efficiently without sacrificing accuracy, but also exhibit comparable or even enhanced generalization in out-of-domain scenarios. Our work presents the first exploration of human-like step-skipping ability and offers fresh perspectives on how such cognitive behaviors can benefit AI models.
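The iterative refinement loop described above lends itself to a compact sketch. The Python pseudocode below is a minimal illustration, not the paper's actual implementation; `fine_tune`, `generate`, and the data format are all hypothetical stand-ins.

```python
# Minimal sketch of iterative step-skipping refinement: fine-tune on full
# reasoning paths, ask the model for shorter ones, and keep only generations
# that are both shorter and still correct. All helpers are hypothetical.

def fine_tune(model, dataset):
    """Placeholder: fine-tune `model` on (question, steps, answer) examples."""
    return model

def generate(model, question, step_budget):
    """Placeholder: solve `question` in at most `step_budget` reasoning steps."""
    return {"steps": [], "answer": None}

def iterative_step_skipping(model, train_set, rounds=3):
    dataset = list(train_set)  # starts with complete reasoning paths only
    for _ in range(rounds):
        model = fine_tune(model, dataset)
        for ex in train_set:
            out = generate(model, ex["question"], step_budget=len(ex["steps"]) - 1)
            # Mix shorter-but-correct paths back in alongside the full ones,
            # so the expanded dataset contains both kinds of sequences.
            if len(out["steps"]) < len(ex["steps"]) and out["answer"] == ex["answer"]:
                dataset.append({"question": ex["question"],
                                "steps": out["steps"],
                                "answer": ex["answer"]})
    return model, dataset
```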


4Real: Towards Photorealistic 4D Scene Generation via Video Diffusion Models

Neural Information Processing Systems

Existing dynamic scene generation methods mostly rely on distilling knowledge from pre-trained 3D generative models, which are typically fine-tuned on synthetic object datasets. As a result, the generated scenes are often object-centric and lack photorealism. To address these limitations, we introduce a novel pipeline designed for photorealistic text-to-4D scene generation, discarding the dependency on multi-view generative models and instead fully utilizing video generative models trained on diverse real-world datasets. Our method begins by generating a reference video with the video generation model. We then learn the canonical 3D representation of the video using a freeze-time video, carefully generated from the reference video. To handle inconsistencies in the freeze-time video, we jointly learn a per-frame deformation that models these imperfections. Finally, we learn a temporal deformation on top of the canonical representation to capture the dynamic interactions in the reference video.
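Read as a pipeline, the method has four stages. The sketch below is only a structural outline under assumed interfaces; every function is a hypothetical placeholder for a model or optimization step described in the paper.

```python
# Structural outline of the text-to-4D pipeline described above.
# Each function is a hypothetical placeholder, not the paper's code.

def generate_reference_video(prompt):
    return {"frames": []}  # text-to-video model output

def generate_freeze_time_video(reference):
    return {"frames": []}  # camera moves while scene time is frozen

def fit_canonical_3d(freeze_time):
    # Jointly fit a canonical 3D representation and a small per-frame
    # deformation that absorbs inconsistencies across freeze-time frames.
    canonical, per_frame_deform = {}, []
    return canonical, per_frame_deform

def fit_temporal_deformation(canonical, reference):
    return []  # per-timestep deformation capturing scene dynamics

def text_to_4d(prompt):
    reference = generate_reference_video(prompt)
    freeze_time = generate_freeze_time_video(reference)
    canonical, _ = fit_canonical_3d(freeze_time)
    motion = fit_temporal_deformation(canonical, reference)
    return canonical, motion
```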


Large Scale Transfer Learning for Tabular Data via Language Modeling. Josh Gardner, Juan C. Perdomo, Ludwig Schmidt

Neural Information Processing Systems

Tabular data - structured, heterogeneous, spreadsheet-style data with rows and columns - is widely used in practice across many domains. However, while recent foundation models have reduced the need to develop task-specific datasets and predictors in domains such as language modeling and computer vision, this transfer-learning paradigm has not had a comparable impact in the tabular domain.
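The core idea behind applying language modeling to tabular data is to serialize each row as text so an LLM can predict the target column. The template below is a hypothetical illustration; the paper's exact serialization scheme is not reproduced here.

```python
# Hypothetical row-to-text serialization for LLM-based tabular prediction.

def serialize_row(row: dict, target: str) -> str:
    """Turn one spreadsheet-style row into a natural-language prompt."""
    features = "; ".join(f"{col} is {val}" for col, val in row.items() if col != target)
    return f"{features}. What is {target}?"

row = {"age": 42, "occupation": "teacher", "hours_per_week": 38, "income": ">50K"}
print(serialize_row(row, target="income"))
# age is 42; occupation is teacher; hours_per_week is 38. What is income?
```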


A Appendix

Neural Information Processing Systems

A.1 Dataset Accessibility and Long-Term Preservation Plan
The dataset can be freely downloaded from https://github.com/ai4ed/XES3G5M. The dataset files are stored on Google Drive, which provides encrypted, secure access to uploaded files and is widely used for sharing data; researchers can download the dataset directly from https://drive.google.com/file/d/ We believe this platform ensures stable accessibility and long-term preservation of our dataset.

A.2 License & Dataset Usage
Our dataset is open-sourced under the MIT license, which can be found at https://github.com/ Beyond the KT task defined in this paper, users may apply the dataset to other custom tasks under this license.

A.3 Author Statement of Responsibility
The authors acknowledge full responsibility in the event of any rights violations and confirm the license associated with the dataset.

[Sample questions from the dataset: "The circle is evenly divided into 6 parts, and the shaded part occupies 3 parts, so use 3/6." "Given the sequence 4, 10, 16, 22, ..., what is the 61st number in this sequence?"]
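For reference, the second sample question is a standard arithmetic-progression calculation; the worked solution below is ours, not taken from the dataset's answer key.

```latex
% n-th term of an arithmetic progression: a_n = a_1 + (n - 1) d,
% with first term a_1 = 4 and common difference d = 6.
\[
a_{61} = 4 + (61 - 1) \times 6 = 4 + 360 = 364
\]
```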





Aligning Target-Aware Molecule Diffusion Models with Exact Energy Optimization

Neural Information Processing Systems

Generating ligand molecules for specific protein targets, known as structure-based drug design, is a fundamental problem in therapeutics development and biological discovery. Recently, target-aware generative models, especially diffusion models, have shown great promise in modeling protein-ligand interactions and generating candidate drugs. However, existing models primarily focus on learning the chemical distribution of all drug candidates and offer little steerability over the chemical quality of the molecules they generate.
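The abstract above states the problem but not the paper's method. As generic background on what "steerability" means here, the sketch below shows plain energy guidance, where each denoising step is nudged downhill on an energy scoring chemical quality; this is a standard technique, not the paper's exact energy optimization, and every quantity is an illustrative placeholder.

```python
# Generic energy-guided denoising step: bias the sampler toward low-energy
# (higher-quality) molecules. Everything here is an illustrative placeholder.
import numpy as np

def energy(x):
    return float(np.sum(x ** 2))  # stand-in for a chemical-quality energy

def grad_energy(x, eps=1e-4):
    """Finite-difference gradient of the energy (placeholder for autograd)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d.flat[i] = eps
        g.flat[i] = (energy(x + d) - energy(x - d)) / (2 * eps)
    return g

def guided_step(x, denoise, step_size=0.1, guidance=0.5):
    x = denoise(x)  # placeholder diffusion-model update
    return x - guidance * step_size * grad_energy(x)
```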



Hypotheses Paradise: An Open and Strong Baseline for Speech Recognition with Large Language Models. Chen Chen, Chao-Han Huck Yang

Neural Information Processing Systems

Advancements in deep neural networks have allowed automatic speech recognition (ASR) systems to attain human parity on several publicly available clean speech datasets. However, even state-of-the-art ASR systems experience performance degradation when confronted with adverse conditions, as a well-trained acoustic model is sensitive to variations in the speech domain, e.g., background noise. Intuitively, humans address this issue by relying on their linguistic knowledge: the meaning of ambiguous spoken terms is usually inferred from contextual cues, thereby reducing the dependency on the auditory system. Inspired by this observation, we introduce the first open-source benchmark for utilizing external large language models (LLMs) for ASR error correction, where N-best decoding hypotheses provide informative elements for predicting the true transcription. This approach marks a paradigm shift from the traditional language-model rescoring strategy, which can only select one candidate hypothesis as the output transcription.
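The shift from rescoring to correction is easy to see in code: rescoring must return one of the N hypotheses verbatim, whereas an LLM can compose a transcription that appears in none of them. The snippet below is a minimal illustration; `query_llm` is a hypothetical stand-in for any LLM call.

```python
# Minimal sketch of LLM-based ASR error correction from an N-best list.

def build_correction_prompt(nbest):
    hyps = "\n".join(f"{i + 1}. {h}" for i, h in enumerate(nbest))
    return ("Below are N-best hypotheses from a speech recognizer for one "
            "utterance. Using cues shared across them, write the most "
            f"likely true transcription.\n{hyps}\nTranscription:")

def query_llm(prompt):
    return ""  # hypothetical placeholder: call your LLM of choice here

nbest = ["i scream for ice cream",
         "eye scream for ice cream",
         "i scream for i screamed"]
corrected = query_llm(build_correction_prompt(nbest))
```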