SYNTHONY: A Stress-Aware, Intent-Conditioned Agent for Deep Tabular Generative Models Selection

Hochan Son, Xiaofeng Lin, Jason Ni, Guang Cheng

arXiv.org Machine Learning

Deep generative models for tabular data (GANs, diffusion models, and LLM-based generators) exhibit highly non-uniform behavior across datasets; the best-performing synthesizer family depends strongly on distributional stressors such as long-tailed marginals, high-cardinality categorical features, Zipfian imbalance, and small-sample regimes. This brittleness makes practical deployment challenging, especially when users must balance competing objectives of fidelity, privacy, and utility. We study intent-conditioned tabular synthesis selection: given a dataset and a user intent expressed as a preference over evaluation metrics, the goal is to select a synthesizer that minimizes regret relative to an intent-specific oracle. We propose stress profiling, a synthesis-specific meta-feature representation that quantifies dataset difficulty along four interpretable stress dimensions, and integrate it into SYNTHONY, a selection framework that matches stress profiles against a calibrated capability registry of synthesizer families. Across a benchmark of 7 datasets, 10 synthesizers, and 3 intents, we demonstrate that stress-based meta-features are highly predictive of synthesizer performance: a $k$NN selector using these features achieves strong Top-1 selection accuracy, substantially outperforming zero-shot LLM selectors and random baselines. We analyze the gap between meta-feature-based and capability-based selection, identifying the hand-crafted capability registry as the primary bottleneck and motivating learned capability representations as a direction for future work.
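The $k$NN selection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the four stress dimensions, the toy benchmark profiles, and the synthesizer labels below are all assumed for the example, and the paper's regret-based evaluation is reduced here to a simple majority vote over nearest neighbors.

```python
import numpy as np

def knn_select(query_profile, profiles, best_synth, k=3):
    """Pick the synthesizer that is most often oracle-best among the
    k benchmark datasets whose stress profiles are nearest to the query.

    profiles:   (n_datasets, 4) array of stress profiles, one value per
                stress dimension (e.g. tail weight, cardinality,
                imbalance, sample-size scarcity).
    best_synth: length-n list of the oracle-best synthesizer family
                per benchmark dataset.
    """
    dists = np.linalg.norm(profiles - query_profile, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = {}
    for i in nearest:
        votes[best_synth[i]] = votes.get(best_synth[i], 0) + 1
    return max(votes, key=votes.get)

# Toy benchmark of 5 previously profiled datasets (values made up).
profiles = np.array([
    [0.9, 0.1, 0.8, 0.2],   # long-tailed, imbalanced
    [0.8, 0.2, 0.7, 0.3],
    [0.1, 0.9, 0.2, 0.8],   # high-cardinality, large-sample
    [0.2, 0.8, 0.1, 0.9],
    [0.5, 0.5, 0.5, 0.5],
])
best = ["diffusion", "diffusion", "llm", "llm", "gan"]

# A new dataset whose profile resembles the long-tailed group.
choice = knn_select(np.array([0.85, 0.15, 0.75, 0.25]), profiles, best)
```

Intent conditioning would enter by swapping `best_synth` for intent-specific oracle labels (one label set per fidelity/privacy/utility preference), leaving the distance computation unchanged.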









Neural Information Processing Systems

In the early visual system, high-dimensional natural stimuli are encoded into the trains of neuronal spikes that transmit the information to the brain to produce perception. However, is all the visual scene information required to explain the neuronal responses?


Audeo: Audio Generation for a Silent Performance Video

Neural Information Processing Systems

In the last step, we implement MIDI synthesizers to generate realistic music. Audeo converts video to audio smoothly and clearly with only a few setup constraints. We evaluate Audeo on piano performance videos collected from YouTube and find that the generated music is of reasonable audio quality and can be successfully recognized with high precision by popular music identification software. The source code with examples is available in a GitHub repository.
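The final rendering step can be illustrated with a toy additive synthesizer. This is only a sketch of the idea, assuming a simple (pitch, onset, duration) event representation and sine tones; the paper uses off-the-shelf MIDI synthesizers rather than this hand-rolled renderer.

```python
import numpy as np

SR = 16000  # sample rate in Hz

def midi_to_hz(pitch):
    # Standard MIDI tuning: note 69 (A4) = 440 Hz.
    return 440.0 * 2.0 ** ((pitch - 69) / 12.0)

def render(events, total_sec):
    """Render a list of (midi_pitch, onset_sec, duration_sec) note
    events into a mono audio buffer."""
    audio = np.zeros(int(total_sec * SR))
    for pitch, onset, dur in events:
        n = int(dur * SR)
        t = np.arange(n) / SR
        # Sine tone with a linear fade-out to avoid end-of-note clicks.
        tone = np.sin(2 * np.pi * midi_to_hz(pitch) * t) * np.linspace(1, 0, n)
        start = int(onset * SR)
        audio[start:start + n] += tone
    # Normalize so overlapping notes cannot clip past [-1, 1].
    return audio / max(1.0, np.abs(audio).max())

# C major arpeggio: one note every 0.5 s.
clip = render([(60, 0.0, 0.5), (64, 0.5, 0.5), (67, 1.0, 0.5)], 1.5)
print(clip.shape)  # (24000,)
```

A real MIDI renderer would additionally honor velocity, sustain-pedal events, and instrument timbre, which is why the paper relies on existing synthesizers for realism.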


Program Synthesis and Semantic Parsing with Learned Code Idioms

Neural Information Processing Systems

Program synthesis of general-purpose source code from natural language specifications is challenging due to the need to reason about high-level patterns in the target program and low-level implementation details at the same time. In this work, we present Patois, a system that allows a neural program synthesizer to explicitly interleave high-level and low-level reasoning at every generation step. It accomplishes this by automatically mining common code idioms from a given corpus, incorporating them into the underlying language for neural synthesis, and training a tree-based neural synthesizer to use these idioms during code generation. We evaluate Patois on two complex semantic parsing datasets and show that using learned code idioms improves the synthesizer's accuracy.
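The idiom-mining idea can be illustrated with a toy frequency count over anonymized AST fragments. This is an illustrative stand-in, not the Patois algorithm: the real system mines idioms with a probabilistic tree-substitution grammar, and the three-program corpus below is made up.

```python
import ast
import builtins
from collections import Counter

BUILTINS = set(dir(builtins))

class Anon(ast.NodeTransformer):
    """Replace user variable names with `_` (keeping builtins like
    `range` and `len`) so the same idiom matches across programs."""
    def visit_Name(self, node):
        if node.id in BUILTINS:
            return node
        return ast.copy_location(ast.Name(id="_", ctx=node.ctx), node)

def fragments(src):
    """Yield a normalized string for every expression subtree."""
    tree = Anon().visit(ast.parse(src))
    for node in ast.walk(tree):
        if isinstance(node, ast.expr):
            yield ast.dump(node)

corpus = [
    "for i in range(len(xs)):\n    total += xs[i]",
    "for j in range(len(ys)):\n    s += ys[j]",
    "print(sum(zs))",
]

counts = Counter()
for src in corpus:
    counts.update(fragments(src))

# The `range(len(_))` pattern recurs in two programs, so its fragment
# outranks any one-off expression; a miner would promote it to an idiom.
```

Once such fragments are promoted to idioms, the synthesizer can emit an idiom as a single generation step instead of deriving its subtree token by token, which is the high-level/low-level interleaving the abstract describes.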