instrument


Penalized GMM Framework for Inference on Functionals of Nonparametric Instrumental Variable Estimators

Bakhitov, Edvard

arXiv.org Machine Learning

This paper develops a penalized GMM (PGMM) framework for automatic debiased inference on functionals of nonparametric instrumental variable estimators. We derive convergence rates for the PGMM estimator and provide conditions for root-n consistency and asymptotic normality of debiased functional estimates, covering both linear and nonlinear functionals. Monte Carlo experiments on the average derivative show that the PGMM-based debiased estimator performs on par with the analytical debiased estimator that uses the known closed-form Riesz representer, achieving 90-96% coverage while the plug-in estimator falls below 5%. We apply our procedure to estimate mean own-price elasticities in a semiparametric demand model for differentiated products. Simulations confirm near-nominal coverage, while the plug-in estimator severely undercovers. Applied to IRI scanner data on carbonated beverages, debiased semiparametric estimates are approximately 20% more elastic than the logit benchmark, and the debiasing corrections are heterogeneous across products, ranging from negligible to several times the standard error.


Double Machine Learning for Static Panel Data with Instrumental Variables: New Method and Applications

Baiardi, Anna, Clarke, Paul S., Naghi, Andrea A., Polselli, Annalivia

arXiv.org Machine Learning

Panel data methods are widely used in empirical analysis to address unobserved heterogeneity, but causal inference remains challenging when treatments are endogenous and confounding variables are high-dimensional with potentially nonlinear effects. Standard instrumental variables (IV) estimators, such as two-stage least squares (2SLS), become unreliable when instrument validity requires flexibly conditioning on many covariates with potentially nonlinear effects. This paper develops a Double Machine Learning estimator for static panel models with endogenous treatments (panel IV DML), and introduces weak-identification diagnostics for it. We revisit three influential migration studies that use shift-share instruments. In these settings, instrument validity depends on a rich covariate adjustment. In one application, panel IV DML strengthens the predictive power of the instrument and broadly confirms 2SLS results. In the other cases, flexible adjustment makes the instruments weak, leading to substantially more cautious causal inference than conventional 2SLS. Monte Carlo evidence supports these findings, showing that panel IV DML improves estimation accuracy under strong instruments and delivers more reliable inference under weak identification.
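The core cross-fitting idea behind a DML-IV estimator can be sketched in a few lines. The following is a minimal, hypothetical illustration in which plain least squares stands in for the ML nuisance learners; the paper's actual estimator uses flexible learners, exploits the panel structure, and adds weak-identification diagnostics, none of which appear here:

```python
import numpy as np

def ols_fit_predict(X_tr, y_tr, X_te):
    # Stand-in nuisance learner; in practice any cross-validated ML method.
    coef, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return X_te @ coef

def dml_iv(y, d, z, X, n_folds=2, seed=0):
    """Cross-fitted IV estimate of the effect of treatment d on outcome y,
    with instrument z, after partialling out high-dimensional controls X."""
    n = len(y)
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, n_folds)
    ry, rd, rz = np.empty(n), np.empty(n), np.empty(n)
    for k in range(n_folds):
        te = folds[k]
        tr = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        # Residualize outcome, treatment, and instrument on the controls,
        # always predicting on the held-out fold (cross-fitting).
        ry[te] = y[te] - ols_fit_predict(X[tr], y[tr], X[te])
        rd[te] = d[te] - ols_fit_predict(X[tr], d[tr], X[te])
        rz[te] = z[te] - ols_fit_predict(X[tr], z[tr], X[te])
    # IV (Wald) ratio on the cross-fitted residuals.
    return (rz @ ry) / (rz @ rd)
```

The cross-fitting step is what lets biased, regularized nuisance estimates be plugged in without contaminating the final IV ratio: each observation's residual is computed from models trained on the other folds.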


SING: Symbol-to-Instrument Neural Generator

Neural Information Processing Systems

Recent progress in deep learning for audio synthesis opens the way to models that directly produce the waveform, shifting away from the traditional paradigm of relying on vocoders or MIDI synthesizers for speech or music generation. Despite their successes, current state-of-the-art neural audio synthesizers such as WaveNet and SampleRNN suffer from prohibitive training and inference times because they are based on autoregressive models that generate audio samples one at a time at a rate of 16 kHz. In this work, we study the more computationally efficient alternative of generating the waveform frame-by-frame with large strides. We present a lightweight neural audio synthesizer for the original task of generating musical notes given desired instrument, pitch and velocity. Our model is trained end-to-end to generate notes from nearly 1000 instruments with a single decoder, thanks to a new loss function that minimizes the distances between the log spectrograms of the generated and target waveforms. On the generalization task of synthesizing notes for pairs of pitch and instrument not seen during training, SING produces audio with significantly improved perceptual quality compared to a state-of-the-art autoencoder based on WaveNet as measured by a Mean Opinion Score (MOS), and is about 32 times faster for training and 2,500 times faster for inference.
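The key ingredient described above is a loss on log spectrograms rather than on raw samples. A minimal sketch of such a spectral distance follows; the frame size, hop, window, and L1 aggregation here are illustrative choices, not the exact STFT parameters or loss weighting used by SING:

```python
import numpy as np

def log_spectrogram(x, frame=256, hop=128, eps=1e-6):
    # Frame the signal, window each frame, and take log-magnitude rfft.
    n = 1 + (len(x) - frame) // hop
    frames = np.stack([x[i * hop : i * hop + frame] for i in range(n)])
    mag = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1))
    return np.log(mag + eps)  # eps avoids log(0) on silent frames

def spectral_loss(generated, target):
    # Mean absolute distance between log spectrograms of the two waveforms.
    return np.mean(np.abs(log_spectrogram(generated) - log_spectrogram(target)))
```

Comparing log-magnitude spectra makes the loss insensitive to sample-level phase misalignments that would dominate a waveform-domain distance, which is what allows a frame-by-frame decoder to be trained end-to-end.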


These Musical Instruments of the Future Sound Weird, Wacky--and Are Easy for Anyone to Play

WIRED

A bicycle wheel with guitar strings, a touch-operated synth, and the "Demon Box" were just a few of the new instruments on show at Georgia Tech's Guthman Musical Instrument Competition this weekend. Among them was an open-source, touch-operated synth built to resemble a puzzle piece with accessibility at the forefront: its pressure-sensitive surface allows for polyphonic synthesis that can be triggered by hands, feet, textured fabrics, or even Play-Doh. Brand new sounds floated through a concert hall at Georgia Tech this weekend, as the 28th annual Guthman Musical Instrument Competition showcased an array of new instruments from around the world--and crowned one champion. Ten finalists, chosen from candidates who built all kinds of new music-making devices, converged in Atlanta, Georgia, to present their instruments to a panel of judges.


Restoring surgeons' sense of touch with robotic fingertips

Robohub

Modern surgery has gone from long incisions to tiny cuts guided by robots and AI. In the process, however, surgeons have lost something vital: the chance to feel inside the body directly. Without palpation, it becomes harder to detect tissue abnormalities during an operation. A group of surgeons and engineers across Europe is now trying to bring back this vital aspect of surgery. Working within an EU-funded research collaboration called PALPABLE, they are developing a soft robotic "fingertip" that can sense how firm or soft tissue is during minimally invasive and robotic surgery.


4 surprising scientific benefits of music

Popular Science

From reducing dementia to speeding up recovery after surgery, music is more powerful than you knew. Listening to music can help your brain, research suggests. The oldest known musical instruments--flutes carved from bones--are over 40,000 years old. And humans were likely making music before that, based on fossils showing our ancestors had the ability to sing over 530,000 years ago.


Nonparametric Identification and Inference for Counterfactual Distributions with Confounding

Sun, Jianle, Zhang, Kun

arXiv.org Machine Learning

We propose nonparametric identification and semiparametric estimation of joint potential outcome distributions in the presence of confounding. First, in settings with observed confounding, we derive tighter, covariate-informed bounds on the joint distribution by leveraging conditional copulas. To overcome the non-differentiability of the bounding min/max operators, we establish the asymptotic properties of both a direct estimator under a polynomial margin condition and a smooth approximation based on the log-sum-exp operator, facilitating valid inference for individual-level effects under the canonical rank-preserving assumption. Second, we tackle the challenge of unmeasured confounding by introducing a causal representation learning framework. By utilizing instrumental variables, we prove the nonparametric identifiability of the latent confounding subspace under injectivity and completeness conditions. We develop a "triple machine learning" estimator that employs a cross-fitting scheme to sequentially handle the learned representation, nuisance parameters, and target functional. We characterize the asymptotic distribution with the variance inflation induced by representation learning error, and provide conditions for semiparametric efficiency. We also propose a practical VAE-based algorithm for confounding representation learning. Simulations and real-world analysis validate the effectiveness of the proposed methods. By bridging classical semiparametric theory with modern representation learning, this work provides a robust statistical foundation for distributional and counterfactual inference in complex causal systems.
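The log-sum-exp smoothing of the min/max bounding operators mentioned above can be illustrated directly. A small sketch, with `tau` as an illustrative temperature parameter (not the paper's notation): the surrogate satisfies max(v) <= smooth_max(v) <= max(v) + tau*log(n), so it converges to the hard max as tau -> 0 while remaining everywhere differentiable, which is what makes delta-method inference tractable:

```python
import numpy as np

def smooth_max(v, tau=0.05):
    # Log-sum-exp surrogate for max(v); approximation error is at most tau*log(len(v)).
    v = np.asarray(v, dtype=float)
    m = v.max()  # subtract the max before exponentiating for numerical stability
    return m + tau * np.log(np.sum(np.exp((v - m) / tau)))

def smooth_min(v, tau=0.05):
    # min(v) = -max(-v), so reuse the smooth max.
    return -smooth_max(-np.asarray(v, dtype=float), tau)
```

Shrinking `tau` trades smoothness for fidelity to the hard bound, mirroring the bias-smoothness trade-off the estimator's asymptotics must account for.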