Supplementary Material for Text Promptable Surgical Instrument Segmentation with Vision-Language Models
Zijian Zhou
They are used in our experiments section. OpenAI GPT-4 based prompts. The input template for OpenAI GPT-4 is defined as: "Please describe the appearance of [class_name] in endoscopic surgery, and change the description to a phrase with subject, and not use colons." The dataset consists of both training and test cases. Each video is recorded at 25 FPS and is annotated with instruments and operation phases. For EndoVis2019, the results are shown in Tab. 1: our method (input size 448) notably surpasses the competition's top performers, with a +3% increase in DSC and a +2% improvement in NSD, which demonstrates the superiority of our method.
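The prompt template above can be sketched as a small helper. This is an illustrative snippet, not the paper's code: `build_prompt` and the example class name are our own placeholders.

```python
# Hypothetical helper that fills the GPT-4 prompt template described above.
# The template text is quoted from the supplementary material; the helper
# and class names are illustrative assumptions.
PROMPT_TEMPLATE = (
    "Please describe the appearance of {class_name} in endoscopic surgery, "
    "and change the description to a phrase with subject, and not use colons."
)

def build_prompt(class_name: str) -> str:
    """Fill the template with a surgical instrument class name."""
    return PROMPT_TEMPLATE.format(class_name=class_name)

prompt = build_prompt("bipolar forceps")
```

One such prompt would be issued per instrument class, and the returned phrase used as the text input to the vision-language model.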
A Algorithm
The proposed implementation of Gunsilius's algorithm works on the empirical CDFs of all variables, i.e., all variables are scaled to lie in the unit interval. In Figure 4, we show the results of Gunsilius's algorithm for three different settings, for example on the expenditure dataset (see Section I.3). The practical issue, of course, is the optimization: it alone is already very computationally demanding and has convergence problems. A practical resource, the sample size, limits the representational size of the estimator, which raises the question of how to achieve "enough variability" without aiming at a completely flexible distribution. In any case, the finite mixture of Gaussians approach can still be implemented with the reparameterization trick. The relation to Gunsilius's algorithm is that our "base measure" is smoothly adaptive, leading to possibly more stable behavior in practice.
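The finite-mixture-of-Gaussians remark can be made concrete. Below is a minimal numpy sketch of reparameterized sampling from a one-dimensional Gaussian mixture; it is illustrative only. In an actual gradient-based implementation the arrays would be framework tensors, and the discrete component choice would need a relaxation (e.g. Gumbel-softmax) to be differentiable — only the Gaussian part is reparameterized here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(weights, means, log_stds, n):
    """Reparameterized draws from a 1-D Gaussian mixture.

    eps ~ N(0, 1) is base noise; given eps and the sampled component
    indices, each sample is a deterministic function of (means, log_stds),
    which is what makes gradients w.r.t. those parameters available.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    comps = rng.choice(len(weights), size=n, p=weights)  # component indices
    eps = rng.standard_normal(n)                          # base noise
    return means[comps] + np.exp(log_stds[comps]) * eps   # x = mu + sigma * eps

means = np.array([-2.0, 3.0])
log_stds = np.array([0.0, -1.0])
x = sample_mixture([0.5, 0.5], means, log_stds, 10_000)
```

The mixture parameters (weights, means, log-scales) play the role of the smoothly adaptive "base measure" mentioned above.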
Many Experiments, Few Repetitions, Unpaired Data, and Sparse Effects: Is Causal Inference Possible?
Felix Schur, Niklas Pfister, Peng Ding, Sach Mukherjee, Jonas Peters
We study the problem of estimating causal effects under hidden confounding in the following unpaired data setting: we observe some covariates $X$ and an outcome $Y$ under different experimental conditions (environments) but do not observe them jointly; we either observe $X$ or $Y$. Under appropriate regularity conditions, the problem can be cast as an instrumental variable (IV) regression with the environment acting as a (possibly high-dimensional) instrument. When there are many environments but only a few observations per environment, standard two-sample IV estimators fail to be consistent. We propose a GMM-type estimator based on cross-fold sample splitting of the instrument-covariate sample and prove that it is consistent as the number of environments grows but the sample size per environment remains constant. We further extend the method to sparse causal effects via $\ell_1$-regularized estimation and post-selection refitting.
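The benefit of cross-fold splitting can be illustrated with a toy simulation. This is not the paper's estimator — the data-generating process and the two estimators below are our own simplified assumptions — but it shows why splitting the instrument-covariate sample removes the attenuation bias of a naive two-sample estimator when the number of observations per environment stays small.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 2.0            # true causal effect (set by the simulation)
E, n = 2000, 6        # many environments, few observations per environment

# Unpaired design: for each environment we draw an X-sample and a
# Y-sample independently; X and Y are never observed jointly.
mu = rng.standard_normal(E)                      # environment-level mean of X
X = mu[:, None] + rng.standard_normal((E, n))    # observed X-sample
Xy = mu[:, None] + rng.standard_normal((E, n))   # latent X in the Y-sample
Y = beta * Xy + rng.standard_normal((E, n))      # observed Y-sample

ybar = Y.mean(axis=1)
xbar = X.mean(axis=1)
x1 = X[:, : n // 2].mean(axis=1)   # fold-1 mean of X per environment
x2 = X[:, n // 2 :].mean(axis=1)   # fold-2 mean, with independent noise

# Naive two-sample estimator: the noise in xbar enters the denominator
# squared, so it is inconsistent when n per environment is fixed.
naive = (ybar @ xbar) / (xbar @ xbar)

# Cross-fold estimator: the two fold means carry independent noise, so the
# denominator is unbiased for the signal term and the attenuation vanishes.
crossfit = (ybar @ x1) / (x1 @ x2)
```

With many environments but only six observations each, `naive` is pulled toward zero while `crossfit` concentrates around the true effect.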
When Does Pairing Seeds Reduce Variance? Evidence from a Multi-Agent Economic Simulation
Machine learning systems appear stochastic, but their randomness is deterministic: seeded pseudorandom number generators produce identical realisations across repeated executions. Standard evaluation practice typically treats runs across alternatives as independent and does not exploit shared sources of randomness. This paper analyses the statistical structure of comparative evaluation under shared random seeds. Under this design, competing systems are evaluated using identical seeds, inducing matched stochastic realisations and yielding strict variance reduction whenever outcomes are positively correlated at the seed level. We demonstrate these effects using an extended learning-based multi-agent economic simulator, where paired evaluation exposes systematic differences in aggregate and distributional outcomes that remain statistically inconclusive under independent evaluation at fixed budgets.
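The variance-reduction mechanism is the classical paired-comparison identity Var(A − B) = Var(A) + Var(B) − 2 Cov(A, B). A toy simulation (our own illustrative model, not the paper's simulator) makes it concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
n_seeds = 10_000

# Toy model: each seed induces a common stochastic shock s shared by both
# systems; the systems differ only by a small mean shift (the effect of
# interest). The shock sizes here are illustrative assumptions.
s = rng.standard_normal(n_seeds)
A = 1.0 + s + 0.3 * rng.standard_normal(n_seeds)   # system A, per-seed outcomes
B = 1.2 + s + 0.3 * rng.standard_normal(n_seeds)   # system B, same seeds

paired_var = np.var(A - B)            # shared shock cancels in the difference
unpaired_var = np.var(A) + np.var(B)  # difference variance under independent seeds
```

Because outcomes are strongly positively correlated at the seed level, the paired estimate of the mean difference is far tighter than the unpaired one at the same budget.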
Lost in Aggregation: The Causal Interpretation of the IV Estimand
Danielle Tsao, Krikamol Muandet, Frederick Eberhardt, Emilija Perković
Instrumental variable based estimation of a causal effect has emerged as a standard approach to mitigate confounding bias in the social sciences and epidemiology, where conducting randomized experiments can be too costly or impossible. However, justifying the validity of the instrument often poses a significant challenge. In this work, we highlight a problem generally neglected in arguments for instrumental variable validity: the presence of an ''aggregate treatment variable'', where the treatment (e.g., education, GDP, caloric intake) is composed of finer-grained components that each may have a different effect on the outcome. We show that the causal effect of an aggregate treatment is generally ambiguous, as it depends on how interventions on the aggregate are instantiated at the component level, formalized through the aggregate-constrained component intervention distribution. We then characterize conditions on the interventional distribution and the aggregate setting under which standard instrumental variable estimators identify the aggregate effect. The contrived nature of these conditions implies major limitations on the interpretation of instrumental variable estimates based on aggregate treatments and highlights the need for a broader justificatory base for the exclusion restriction in such settings.
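The ambiguity of an aggregate effect can be seen in a two-component toy example (our own numbers, not the paper's formalism): if the aggregate treatment is T = T1 + T2 and the components have different effects, the per-unit effect of "setting T = t" depends entirely on how the intervention is distributed across components.

```python
# Toy illustration: aggregate treatment T = T1 + T2, with hypothetical
# component-level effects 1 and 5 on the outcome.
def outcome(t1: float, t2: float) -> float:
    return 1.0 * t1 + 5.0 * t2

t = 2.0  # intervene on the aggregate: T = 2

# Three instantiations of the same aggregate intervention, each giving a
# different per-unit "effect of T":
all_on_first = outcome(t, 0.0) / t      # all mass on T1
all_on_second = outcome(0.0, t) / t     # all mass on T2
even_split = outcome(t / 2, t / 2) / t  # mass split evenly
```

All three allocations satisfy the aggregate constraint T = 2, yet the implied effect per unit of T ranges from 1 to 5 — exactly the ambiguity the aggregate-constrained component intervention distribution is meant to formalize.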
Canonical correlation regression with noisy data
We study instrumental variable regression in data-rich environments. The goal is to estimate a linear model from many noisy covariates and many noisy instruments. Our key assumption is that true covariates and true instruments are repetitive, though possibly different in nature; each reflects a few underlying factors, but those underlying factors may be misaligned. We analyze a family of estimators based on two stage least squares with spectral regularization: canonical correlations between covariates and instruments are learned in the first stage, which are used as regressors in the second stage. As a theoretical contribution, we derive upper and lower bounds on estimation error, proving optimality of the method with noisy data. As a practical contribution, we provide guidance on which types of spectral regularization to use in different regimes.
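A minimal numpy sketch of the two-stage idea — truncated canonical variates as first-stage regressors — might look as follows. This is our own simplified illustration (the data-generating process, the truncation rank, and the plain SVD-based CCA are assumptions), not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, px, pz, k = 2000, 10, 10, 2   # samples, noisy covariates/instruments, factors

# Low-rank structure: k latent factors drive both instruments and covariates,
# each observed with independent noise.
F = rng.standard_normal((n, k))
Z = F @ rng.standard_normal((k, pz)) + 0.5 * rng.standard_normal((n, pz))
X = F @ rng.standard_normal((k, px)) + 0.5 * rng.standard_normal((n, px))
y = X @ rng.standard_normal(px) + rng.standard_normal(n)

def col_basis(M):
    """Orthonormal basis of the column space of the centered data matrix."""
    U, _, _ = np.linalg.svd(M - M.mean(axis=0), full_matrices=False)
    return U

# First stage: canonical correlations between X and Z via SVD of the
# product of the two orthonormal bases; keep the top-k canonical variates
# (spectral truncation as the regularizer).
Ux, Uz = col_basis(X), col_basis(Z)
U, s, Vt = np.linalg.svd(Ux.T @ Uz)   # s = canonical correlations, in [0, 1]
Cx = Ux @ U[:, :k]                    # top-k canonical variates of X

# Second stage: regress the outcome on the retained canonical variates.
coef, *_ = np.linalg.lstsq(Cx, y, rcond=None)
```

The truncation rank plays the role of the spectral regularizer; other choices (ridge-type shrinkage of the canonical correlations) fit the same template.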
SING: Symbol-to-Instrument Neural Generator
Recent progress in deep learning for audio synthesis opens the way to models that directly produce the waveform, shifting away from the traditional paradigm of relying on vocoders or MIDI synthesizers for speech or music generation. Despite their successes, current state-of-the-art neural audio synthesizers such as WaveNet and SampleRNN suffer from prohibitive training and inference times because they are based on autoregressive models that generate audio samples one at a time at a rate of 16 kHz. In this work, we study the more computationally efficient alternative of generating the waveform frame-by-frame with large strides. We present a lightweight neural audio synthesizer for the original task of generating musical notes given desired instrument, pitch and velocity. Our model is trained end-to-end to generate notes from nearly 1000 instruments with a single decoder, thanks to a new loss function that minimizes the distances between the log spectrograms of the generated and target waveforms. On the generalization task of synthesizing notes for pairs of pitch and instrument not seen during training, SING produces audio with significantly improved perceptual quality compared to a state-of-the-art autoencoder based on WaveNet as measured by a Mean Opinion Score (MOS), and is about 32 times faster for training and 2,500 times faster for inference.
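A log-spectrogram distance of the kind described can be sketched in a few lines of numpy. This is an illustrative stand-in, not SING's actual loss: the STFT parameters, the Hann window, and the L1 distance are our own assumptions.

```python
import numpy as np

def log_spectrogram(x, n_fft=1024, hop=256, eps=1e-5):
    """Magnitude STFT on Hann-windowed frames, then log compression."""
    win = np.hanning(n_fft)
    frames = [x[i : i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    mag = np.abs(np.fft.rfft(np.stack(frames), axis=1))
    return np.log(mag + eps)   # eps avoids log(0) on silent bins

def spectral_loss(gen, target):
    """Mean L1 distance between log spectrograms of two waveforms."""
    return float(np.mean(np.abs(log_spectrogram(gen) - log_spectrogram(target))))

t = np.linspace(0, 1, 16000, endpoint=False)
gen = np.sin(2 * np.pi * 440 * t)      # a 440 Hz "generated" note
target = np.sin(2 * np.pi * 523 * t)   # a 523 Hz target note
loss = spectral_loss(gen, target)
```

Because the loss compares spectrograms rather than raw samples, it is insensitive to the exact phase of the generated waveform, which is what makes frame-by-frame generation with large strides trainable.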