- North America > United States (0.29)
- North America > Canada (0.16)
- North America > Trinidad and Tobago > Trinidad > Arima > Arima (0.04)
- Asia > China > Liaoning Province > Shenyang (0.04)
- Transportation > Infrastructure & Services (0.31)
- Transportation > Ground > Road (0.31)
Appendix A: Analysis of variance of uncertainty estimators. We demonstrate the lower variance of the importance sampling-based estimator (IS-MI, described in Section 3) compared to its naive Monte Carlo equivalent (MC-MI), focusing on the Character VAE for molecular generation described in Section 5.3.1 (Figure 1). The training dataset comprises 60k images and the test dataset comprises 10k images; no data augmentation is used at training time or at inference. We jointly train a variational autoencoder with an auxiliary network (the "Property network") that predicts digit thickness from the latent representation (see Figure 1).
Supplementary Material S1: Pseudocode. Algorithm 1 gives pseudocode for autofocusing a broad class of model-based optimization (MBO) algorithms, alternating between an "E-step" (Steps 1 and 2 in Algorithm 1) and a weighted maximum likelihood estimation (MLE) "M-step" (Step 3). One may use these results in a number of different ways. The following observation is due to Chebyshev's inequality. One can use Proposition S2.1 to construct a confidence interval on, for example, the expected squared error. Note that the bound in Proposition S2.1 depends on the variance of the importance weights, which CbAS naturally controls; design procedures that leverage a trust region can likewise bound this variance. We used CbAS as follows.
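The E-step/M-step alternation described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the training and search distributions are stand-in 1-D Gaussians, and the "oracle refit" is reduced to a weighted mean so the importance-weighting mechanics are visible.

```python
import numpy as np

def autofocus_step(x, search_mu, search_sigma, train_mu, train_sigma):
    """One illustrative autofocusing iteration.

    E-step: compute importance weights of training points under the
    current search model relative to the training distribution.
    M-step: refit the oracle by weighted MLE (here, a weighted mean).
    All distributions are 1-D Gaussians for illustration only.
    """
    def log_gauss(v, mu, sigma):
        return -0.5 * ((v - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)

    # E-step: log w_i = log p_search(x_i) - log p_train(x_i), normalized
    log_w = log_gauss(x, search_mu, search_sigma) - log_gauss(x, train_mu, train_sigma)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    # M-step: weighted maximum-likelihood estimate of the Gaussian mean
    mu_hat = np.sum(w * x)
    return w, mu_hat

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=1000)            # training inputs ~ p_train
w, mu_hat = autofocus_step(x, search_mu=1.0, search_sigma=0.5,
                           train_mu=0.0, train_sigma=1.0)
```

Because the search distribution here has lighter tails than the training distribution, the importance weights are bounded, which is the regime where the variance guarantees discussed above are meaningful.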
Sample-Aware Test-Time Adaptation for Medical Image-to-Image Translation
Iele, Irene, Di Feola, Francesco, Guarrasi, Valerio, Soda, Paolo
Image-to-image translation has emerged as a powerful technique in medical imaging, enabling tasks such as image denoising and cross-modality conversion. However, it suffers from limitations in handling out-of-distribution samples without performance degradation. To address this limitation, we propose a novel Test-Time Adaptation (TTA) framework that dynamically adjusts the translation process based on the characteristics of each test sample. Our method introduces a Reconstruction Module to quantify the domain shift and a Dynamic Adaptation Block that selectively modifies the internal features of a pretrained translation model to mitigate the shift without compromising performance on in-distribution samples that do not require adaptation. We evaluate our approach on two medical image-to-image translation tasks: low-dose CT denoising and T1-to-T2 MRI translation, showing consistent improvements over both the baseline translation model without TTA and prior TTA methods. Our analysis highlights the limitations of state-of-the-art methods that uniformly apply adaptation to both out-of-distribution and in-distribution samples, demonstrating that dynamic, sample-specific adjustment offers a promising path to improving model resilience in real-world scenarios. The code is available at: https://github.com/Sample-Aware-TTA/Code.
- Information Technology > Sensing and Signal Processing > Image Processing (1.00)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Search (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
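The sample-aware gating described in the abstract above can be sketched in a few lines. This is an illustrative stand-in, not the paper's implementation: `translate`, `reconstruct`, and `adapt` are placeholder callables for the pretrained translator, the Reconstruction Module, and the Dynamic Adaptation Block, and the reconstruction error is used as a simple proxy for domain shift.

```python
import numpy as np

def sample_aware_tta(x, translate, reconstruct, threshold, adapt):
    """Apply adaptation only when the estimated domain shift is large.

    A reconstruction module scores each test sample; samples whose
    reconstruction error exceeds `threshold` are adapted before
    translation, while in-distribution samples pass through unchanged.
    All callables here are hypothetical stand-ins for illustration.
    """
    shift = np.mean((reconstruct(x) - x) ** 2)   # proxy for domain shift
    if shift > threshold:
        x = adapt(x)                             # mitigate shift first
    return translate(x)
```

The key design point this captures is that adaptation is conditional per sample, so well-reconstructed (in-distribution) inputs reach the translator untouched.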
Air Traffic Controller Task Demand via Graph Neural Networks: An Interpretable Approach to Airspace Complexity
Henderson, Edward, Gould, Dewi, Everson, Richard, De Ath, George, Pepper, Nick
Real-time assessment of near-term Air Traffic Controller (ATCO) task demand is a critical challenge in an increasingly crowded airspace, as existing complexity metrics often fail to capture nuanced operational drivers beyond simple aircraft counts. This work introduces an interpretable Graph Neural Network (GNN) framework to address this gap. Our attention-based model predicts the number of upcoming clearances, the instructions issued to aircraft by ATCOs, from interactions within static traffic scenarios. Crucially, we derive an interpretable, per-aircraft task demand score by systematically ablating aircraft and measuring the impact on the model's predictions. Our framework significantly outperforms an ATCO-inspired heuristic and is a more reliable estimator of scenario complexity than established baselines. The resulting tool can attribute task demand to specific aircraft, offering a new way to analyse and understand the drivers of complexity for applications in controller training and airspace redesign.
- Europe > United Kingdom > England > Greater London > London (0.04)
- North America > United States (0.04)
- Europe > United Kingdom > England > East Midlands (0.04)
- (3 more...)
- Transportation > Air (1.00)
- Transportation > Infrastructure & Services > Airport (0.46)
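The ablation-based attribution described in the abstract above (removing each aircraft and measuring the change in predicted clearances) can be sketched generically. This is a hedged illustration: `predict_clearances` stands in for the trained GNN, and here any function from a feature array to a scalar will do.

```python
import numpy as np

def ablation_demand_scores(aircraft_features, predict_clearances):
    """Per-aircraft task-demand scores via systematic ablation.

    The score for aircraft i is the drop in the model's predicted
    clearance count when aircraft i is removed from the scenario.
    `predict_clearances` is a hypothetical stand-in for the GNN.
    """
    base = predict_clearances(aircraft_features)
    scores = []
    for i in range(len(aircraft_features)):
        ablated = np.delete(aircraft_features, i, axis=0)  # drop aircraft i
        scores.append(base - predict_clearances(ablated))
    return np.array(scores)
```

A positive score indicates the aircraft contributes task demand to the scenario; near-zero scores mark aircraft the model considers operationally inert.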