Latent-space Field Tension for Astrophysical Component Detection: An application to X-ray imaging

Guardiani, Matteo, Eberle, Vincent, Westerkamp, Margret, Rüstig, Julian, Frank, Philipp, Enßlin, Torsten

arXiv.org Machine Learning

Modern observatories are designed to deliver increasingly detailed views of astrophysical signals. To fully realize the potential of these observations, principled data-analysis methods are required to effectively separate and reconstruct the underlying astrophysical components from data corrupted by noise and instrumental effects. In this work, we introduce a novel multi-frequency Bayesian model of the sky emission field that leverages latent-space tension as an indicator of model misspecification, enabling automated separation of diffuse, point-like, and extended astrophysical emission components across wavelength bands. Deviations from latent-space prior expectations are used as diagnostics for model misspecification, thus systematically guiding the introduction of new sky components, such as point-like and extended sources. We demonstrate the effectiveness of this method on synthetic multi-frequency imaging data and apply it to observational X-ray data from the eROSITA Early Data Release (EDR) of the SN1987A region in the Large Magellanic Cloud (LMC). Our results highlight the method's capability to reconstruct astrophysical components with high accuracy, achieving sub-pixel localization of point sources, robust separation of extended emission, and detailed uncertainty quantification. The developed methodology offers a general and well-founded framework applicable to a wide variety of astronomical datasets, and is therefore well suited to support the analysis needs of next-generation multi-wavelength and multi-messenger surveys.
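
The abstract's central diagnostic, latent-space tension, can be illustrated compactly. In standardized generative models of this kind, the latent excitations carry a standard-normal prior, so a posterior that drifts far from N(0, 1) signals a missing sky component. Below is a minimal sketch of such a diagnostic, assuming access to the posterior mean and standard deviation of the latent field; the function names and the 3-sigma threshold are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a latent-space tension diagnostic, assuming a
# generative model whose latent excitations carry a standard-normal
# prior. Names (xi_mean, xi_std, threshold) are illustrative.
import numpy as np

def latent_tension_map(xi_mean, xi_std, threshold=3.0):
    """Flag latent pixels whose posterior deviates from the N(0,1) prior.

    xi_mean, xi_std : posterior mean and std of the standardized latent
                      excitation field (same shape as the sky grid).
    Returns a boolean mask of pixels "in tension", i.e. candidate
    locations for missing model components such as point sources.
    """
    # How many posterior standard deviations the mean sits from the
    # prior mean of zero.
    z = np.abs(xi_mean) / np.maximum(xi_std, 1e-12)
    return z > threshold
```

A clustered region of flagged latents would then suggest introducing, for instance, a point-source component at the corresponding sky positions.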


KEVS: Enhancing Segmentation of Visceral Adipose Tissue in Pre-Cystectomy CT with Gaussian Kernel Density Estimation

Boucher, Thomas, Tetlow, Nicholas, Fung, Annie, Dewar, Amy, Arina, Pietro, Kerneis, Sven, Whittle, John, Mazomenos, Evangelos B.

arXiv.org Artificial Intelligence

Purpose: The distribution of visceral adipose tissue (VAT) in cystectomy patients is indicative of the incidence of post-operative complications. Existing VAT segmentation methods for computed tomography (CT) employing intensity thresholding have limitations relating to inter-observer variability. Moreover, the difficulty of creating ground-truth masks limits the development of deep learning (DL) models for this task. This paper introduces a novel method for VAT prediction in pre-cystectomy CT which is fully automated and does not require ground-truth VAT masks for training, overcoming the aforementioned limitations. Methods: We introduce the Kernel density Enhanced VAT Segmentator (KEVS), combining a DL semantic segmentation model for multi-body-feature prediction with Gaussian kernel density estimation analysis of predicted subcutaneous adipose tissue to achieve accurate scan-specific predictions of VAT in the abdominal cavity. Uniquely for a DL pipeline, KEVS does not require ground-truth VAT masks. Results: We verify the ability of KEVS to accurately segment abdominal organs in unseen CT data and compare KEVS VAT segmentation predictions to existing state-of-the-art (SOTA) approaches on a dataset of 20 pre-cystectomy CT scans with expert ground-truth annotations, collected from University College London Hospital (UCLH-Cyst). KEVS presents a 4.80% and 6.02% improvement in Dice coefficient over the second-best DL and thresholding-based VAT segmentation techniques, respectively, when evaluated on UCLH-Cyst. Conclusion: This research introduces KEVS: an automated, SOTA method for the prediction of VAT in pre-cystectomy CT which eliminates inter-observer variability and is trained entirely on open-source CT datasets that do not contain ground-truth VAT masks.
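
The key scan-specific step is the kernel density estimate: intensities of voxels the segmentation model already labels as subcutaneous fat define an adipose intensity model, which is then used to score voxels inside the abdominal cavity. A minimal sketch of this idea follows, using scipy's gaussian_kde; the variable names and the quantile-based cutoff are assumptions, not the published KEVS code.

```python
# Hedged sketch of the KDE step: fit a Gaussian KDE to the Hounsfield
# intensities of voxels labeled subcutaneous adipose tissue (SAT),
# then score abdominal-cavity voxels with that density to obtain a
# scan-specific VAT mask. The density cutoff is an assumption.
import numpy as np
from scipy.stats import gaussian_kde

def vat_mask(ct_hu, sat_mask, cavity_mask, density_quantile=0.05):
    # Scan-specific adipose intensity model from the predicted SAT voxels
    sat_hu = ct_hu[sat_mask]
    kde = gaussian_kde(sat_hu)
    # A low quantile of the SAT self-scores serves as the cutoff
    cutoff = np.quantile(kde(sat_hu), density_quantile)
    # Score every voxel inside the abdominal cavity
    scores = kde(ct_hu[cavity_mask])
    out = np.zeros_like(cavity_mask, dtype=bool)
    out[cavity_mask] = scores >= cutoff
    return out
```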


Further Exploration of Precise Binding Energies from Physics Informed Machine Learning and the Development of a Practical Ensemble Model

Bentley, I., Tedder, J., Gebran, M., Paul, A.

arXiv.org Artificial Intelligence

Sixteen new physics-informed machine learning models have been trained on binding-energy residuals from modern mass models that leverage shape parameters and other physical features. The models have been trained on a subset of the AME 2012 data and verified with a subset of the AME 2020 data. Among the machine learning approaches tested in this work, the preferred approach is the least-squares boosted ensemble of trees, which appears to have a superior ability to both interpolate and extrapolate binding-energy residuals. The machine learning models for four mass models created with the ensemble-of-trees approach have been combined into a composite model called the Four Model Tree Ensemble (FMTE). The FMTE model predicts binding energy values from AME 2020 with a standard deviation of 76 keV and a mean deviation of 34 keV for all nuclei with N > 7 and Z > 7. A comparison with new mass measurements for 33 isotopes not included in AME 2012 or AME 2020 indicates that the FMTE performs better than all mass models that were tested.
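
The preferred approach, a least-squares boosted ensemble of trees learning residuals on top of a physical mass model, has a natural off-the-shelf analogue. The sketch below uses scikit-learn's GradientBoostingRegressor with squared-error loss and averages four corrected mass models into a composite, mirroring the FMTE construction; the feature set and the plain-average combination rule are assumptions based on the abstract.

```python
# Illustrative residual-learning setup, assuming sklearn's
# GradientBoostingRegressor as the "least squares boosted ensemble
# of trees". Not the authors' exact pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def train_residual_model(features, be_exp, be_mass_model):
    """Learn the residual BE_exp - BE_model from physical features
    (e.g. N, Z, shape parameters)."""
    resid = be_exp - be_mass_model
    model = GradientBoostingRegressor(loss="squared_error", n_estimators=500)
    model.fit(features, resid)
    return model

def fmte_predict(features, base_predictions, residual_models):
    """Composite prediction: average the corrected mass models
    (a plain mean over four models, assumed here)."""
    corrected = [base + m.predict(features)
                 for base, m in zip(base_predictions, residual_models)]
    return np.mean(corrected, axis=0)
```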


Pulse Shape Simulation and Discrimination using Machine-Learning Techniques

Dutta, Shubham, Ghosh, Sayan, Bhattacharya, Satyaki, Saha, Satyajit

arXiv.org Artificial Intelligence

An essential metric for the quality of a particle-identification experiment is its statistical power to discriminate between signal and background. Pulse shape discrimination (PSD) is a basic method for this purpose in many nuclear, high-energy, and rare-event search experiments where scintillation detectors are used. Conventional techniques exploit the difference between the decay times of pulses from signal and background events, or of pulses caused by different types of radiation quanta, to achieve good discrimination. However, such techniques are efficient only when the total light emission is sufficient to obtain a proper pulse profile, which requires that an adequate amount of energy is deposited by the recoil of the electrons or nuclei of the scintillator material struck by the incident particle. Rare-event search experiments, such as the direct search for dark matter, do not always satisfy these conditions. Hence, it becomes imperative to have a method that can deliver very efficient discrimination in these scenarios. Neural-network-based machine-learning algorithms have been used for classification problems in many areas of physics, especially in high-energy experiments, and have given better results than conventional techniques. We present the results of our investigation of two network-based methods, viz. the Dense Neural Network and the Recurrent Neural Network, for pulse shape discrimination and compare them with conventional methods.
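
For context, the conventional baseline such network methods are compared against is typically a charge-comparison discriminator: the ratio of charge in the slow tail of the pulse to the total charge separates populations with different decay times. A minimal sketch follows; the window boundaries are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of conventional charge-comparison PSD:
# the tail-to-total charge ratio separates slow from fast
# scintillation components.
import numpy as np

def tail_to_total(pulse, peak_idx, tail_start=30, window=200):
    """PSD parameter for one digitized, baseline-subtracted pulse.

    Events with a larger slow component (e.g. nuclear recoils in many
    scintillators) yield a larger tail fraction.
    """
    total = pulse[peak_idx : peak_idx + window].sum()
    tail = pulse[peak_idx + tail_start : peak_idx + window].sum()
    return tail / total if total > 0 else 0.0
```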


Assessment of few-hits machine learning classification algorithms for low energy physics in liquid argon detectors

Biassoni, Matteo, Giachero, Andrea, Grossi, Michele, Guffanti, Daniele, Labranca, Danilo, Moretti, Roberto, Rossi, Marco, Terranova, Francesco, Vallecorsa, Sofia

arXiv.org Artificial Intelligence

The physics potential of massive liquid argon TPCs in the low-energy regime is still to be fully reaped because few-hits events encode information that can hardly be exploited by conventional classification algorithms. Machine learning (ML) techniques excel at precisely these types of classification problems. In this paper, we evaluate their performance against conventional (deterministic) algorithms. We demonstrate that both Convolutional Neural Networks (CNN) and Transformer-Encoder methods outperform deterministic algorithms in one of the most challenging classification problems of low-energy physics (single- versus double-beta events). We discuss the advantages and pitfalls of Transformer-Encoder methods versus CNNs and employ these methods to optimize the detector parameters, with an emphasis on the DUNE Phase II detectors ("Module of Opportunity").
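
As a concrete reference point, a compact CNN over the 2D hit pattern is the kind of model evaluated here. The sketch below is a hedged PyTorch baseline for the single- versus double-beta task; the architecture, input representation, and layer sizes are assumptions, not the authors' network.

```python
# Hedged sketch of a CNN baseline for single- vs double-beta
# classification, taking a 2D hit image (e.g. wire vs time) as input.
import torch
import torch.nn as nn

class FewHitsCNN(nn.Module):
    def __init__(self, in_ch=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),  # logits: single-beta vs double-beta
        )

    def forward(self, x):
        return self.head(self.features(x))
```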


"Prompt-Gamma Neutron Activation Analysis (PGNAA)" Metal Spectral Classification using Deep Learning Method

Cheng, Ka Yung, Shayan, Helmand, Krycki, Kai, Lange-Hegermann, Markus

arXiv.org Artificial Intelligence

There is a pressing market demand to minimize the test time of a Prompt Gamma Neutron Activation Analysis (PGNAA) spectral measurement machine, so that it could function as an instant material analyzer, e.g. to classify waste samples instantaneously and determine the best recycling method based on the detected composition of the test sample. This article introduces a new deep-learning classification development that strives to reduce the test time of the PGNAA machine. We propose both Random Sampling Methods (RSM) and Class Activation Maps (CAM) to generate "downsized" samples and train the CNN model continuously. RSM reduces the measuring time within a sample, while CAM filters out the less important energy ranges of the downsized samples. We shorten the overall PGNAA measuring time down to 2.5 seconds while ensuring an accuracy of around 96.88% for our dataset with 12 different species of substances. Compared with classifying different species of materials, substances composed of the same elements require more test time (a higher sample count rate) to achieve good accuracy. For example, the classification of copper alloys requires nearly 24 seconds of test time to reach 98% accuracy.
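
The Random Sampling idea, emulating a shorter measurement by drawing a subset of the recorded counts from a full-length spectrum, can be sketched as a simple statistical thinning. The multinomial redistribution below is an assumption consistent with counting statistics, not necessarily the authors' exact RSM.

```python
# Sketch of the Random Sampling idea: emulate a shorter PGNAA
# measurement by keeping only a fraction of the recorded counts,
# producing "downsized" training samples for the CNN.
import numpy as np

def downsize_spectrum(spectrum, keep_fraction, rng=None):
    """spectrum: integer counts per energy bin for a full measurement."""
    rng = np.random.default_rng() if rng is None else rng
    n_keep = int(spectrum.sum() * keep_fraction)
    probs = spectrum / spectrum.sum()
    # Redistribute n_keep counts across bins with the same shape
    return rng.multinomial(n_keep, probs)
```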


Eigenvoice Speaker Adaptation via Composite Kernel Principal Component Analysis

Kwok, James T., Mak, Brian, Ho, Simon

Neural Information Processing Systems

Eigenvoice speaker adaptation has been shown to be effective when only a small amount of adaptation data is available. At the heart of the method is principal component analysis (PCA) employed to find the most important eigenvoices. In this paper, we postulate that nonlinear PCA, in particular kernel PCA, may be even more effective. One major challenge is to map the feature-space eigenvoices back to the observation space so that the state observation likelihoods can be computed during the estimation of eigenvoice weights and subsequent decoding. Our solution is to compute kernel PCA using composite kernels, and we will call our new method kernel eigenvoice speaker adaptation. On the TIDIGITS corpus, we found that compared with a speaker-independent model, our kernel eigenvoice adaptation method can reduce the word error rate by 28-33% while the standard eigenvoice approach can only match the performance of the speaker-independent model.
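
The composite-kernel construction can be sketched with off-the-shelf tools: build the Gram matrix as a weighted sum of per-stream kernels and hand it to a precomputed-kernel PCA. The snippet below uses scikit-learn for illustration; the two-stream split, the RBF choice, and the weights are assumptions, not the paper's exact composite kernel.

```python
# Illustrative composite-kernel PCA: k = w*k_a + (1-w)*k_b over two
# (hypothetical) feature streams of the speaker supervectors.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import rbf_kernel

def composite_gram(X_a, X_b, w=0.5, gamma_a=0.1, gamma_b=0.1):
    """Weighted sum of per-stream RBF Gram matrices."""
    return (w * rbf_kernel(X_a, gamma=gamma_a)
            + (1 - w) * rbf_kernel(X_b, gamma=gamma_b))

# Usage sketch: X_a, X_b hold the two feature streams for N speakers.
# K = composite_gram(X_a, X_b)
# kpca = KernelPCA(n_components=10, kernel="precomputed").fit(K)
# weights = kpca.transform(K)  # per-speaker eigenvoice coordinates
```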

