
Collaborating Authors: Adhikari


We could spot a new type of black hole thanks to a mirror-wobbling AI

New Scientist

Efforts to understand the universe could get a boost from an AI developed by Google DeepMind. The algorithm, which can reduce unwanted noise by up to 100 times, could allow the Laser Interferometer Gravitational-Wave Observatory (LIGO) to spot a particular type of black hole that has so far eluded us. LIGO is designed to detect the gravitational waves produced when objects such as black holes spiral into each other and collide. These waves cross the universe at the speed of light, but the fluctuations they cause in space-time are extremely small – 10,000 times smaller than the nucleus of an atom. Since its first observations 10 years ago, LIGO has recorded such signals produced by nearly 100 black hole collisions.


AI Is Designing Bizarre New Physics Experiments That Actually Work

WIRED

The original version of this story appeared in Quanta Magazine. There are precision measurements, and then there's the Laser Interferometer Gravitational-Wave Observatory. In each of LIGO's twin gravitational wave detectors (one in Hanford, Washington, and the other in Livingston, Louisiana), laser beams bounce back and forth down the four-kilometer arms of a giant L. When a gravitational wave passes through, the length of one arm changes relative to the other by less than the width of a proton. It's by measuring these minuscule differences--a sensitivity akin to sensing the distance to the star Alpha Centauri down to the width of a human hair--that discoveries are made. The design of the machine was decades in the making, as physicists needed to push every aspect to its absolute physical limits. Construction began in 1994 and took more than 20 years, including a four-year shutdown to improve the detectors, before LIGO detected its first gravitational wave in 2015: a ripple in the space-time fabric coming from the faraway collision of a pair of black holes.


Parameter-efficient Fine-tuning for improved Convolutional Baseline for Brain Tumor Segmentation in Sub-Saharan Africa Adult Glioma Dataset

Adhikari, Bijay, Kulung, Pratibha, Bohaju, Jakesh, Poudel, Laxmi Kanta, Raymond, Confidence, Zhang, Dong, Anazodo, Udunna C, Khanal, Bishesh, Shakya, Mahesh

arXiv.org Artificial Intelligence

Automating brain tumor segmentation using deep learning methods is an ongoing challenge in medical imaging. Multiple lingering issues remain, including domain shift and deployment in low-resource settings, which brings a unique set of challenges such as data scarcity. As a step towards solving these specific problems, we propose Convolutional adapter-inspired Parameter-efficient Fine-tuning (PEFT) of the MedNeXt architecture. To validate our idea, we show that our method performs comparably to full fine-tuning, with the added benefit of reduced training compute, using BraTS-2021 as the pre-training dataset and BraTS-Africa as the fine-tuning dataset. BraTS-Africa is a small dataset (60 train / 35 validation samples) from the Sub-Saharan African population with a marked shift in MRI quality compared to BraTS-2021 (1251 train samples). We first show that models trained on the BraTS-2021 dataset do not generalize well to BraTS-Africa, as evidenced by a 20% reduction in mean Dice on BraTS-Africa validation samples. We then show that PEFT can leverage both the BraTS-2021 and BraTS-Africa datasets to obtain a mean Dice of 0.80, compared to 0.72 when training only on BraTS-Africa. Finally, we show that PEFT (0.80 mean Dice) performs comparably to full fine-tuning (0.77 mean Dice); while this suggests PEFT is better on average, the boxplots show that full fine-tuning yields much lower variance in performance. Nevertheless, on disaggregating the Dice metrics, we find that the model tends to oversegment, as shown by high specificity (0.99) compared to relatively low sensitivity (0.75). The source code is available at https://github.com/CAMERA-MRI/SPARK2024/tree/main/PEFT_MedNeXt
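The core adapter idea behind PEFT can be sketched in a few lines: the pre-trained weights are frozen and only a small bottleneck module, added on top, is trained. This is a minimal illustrative sketch (not the paper's MedNeXt implementation); the layer sizes and zero-initialization are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_bottleneck = 256, 8

# Frozen pre-trained projection (stands in for one layer of the backbone).
W_frozen = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)

# Trainable adapter: down-project, nonlinearity, up-project.
W_down = rng.standard_normal((d_model, d_bottleneck)) * 0.01
W_up = np.zeros((d_bottleneck, d_model))  # zero-init: adapter starts as a no-op

def forward(x):
    base = x @ W_frozen                            # frozen path
    adapter = np.maximum(x @ W_down, 0.0) @ W_up   # ReLU bottleneck path
    return base + adapter                          # residual adapter output

frozen_params = W_frozen.size
trainable_params = W_down.size + W_up.size
# Trainable fraction: 2 * 8 / 256 = 6.25% of this layer's weights.
ratio = trainable_params / frozen_params
```

During fine-tuning only `W_down` and `W_up` receive gradients, which is where the reduced training compute comes from; the bottleneck width controls the trade-off between capacity and parameter count.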


Normalizing Flows for Hierarchical Bayesian Analysis: A Gravitational Wave Population Study

Ruhe, David, Wong, Kaze, Cranmer, Miles, Forré, Patrick

arXiv.org Artificial Intelligence

We propose parameterizing the population distribution in the gravitational-wave population modeling framework (hierarchical Bayesian analysis) with a normalizing flow. We first demonstrate the merit of this method on illustrative experiments and then analyze four parameters of the latest LIGO/Virgo data release: primary mass, secondary mass, redshift, and effective spin. Our results show that, despite the small and notoriously noisy dataset, the posterior predictive distributions (assuming a prior over the parameters of the flow) of the observed gravitational wave population recover structure that agrees with robust previous phenomenological modeling results, while being less susceptible to biases introduced by less flexible models. The method therefore forms a promising flexible, reliable replacement for population inference distributions, even when the data are highly noisy.
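The mechanism a normalizing flow relies on is the change-of-variables rule: an invertible map turns a simple base density into a flexible one, with a Jacobian term correcting the volume change. A minimal sketch, using a single affine map as the "flow" (real flows stack many learned invertible transforms; the location/scale values here are purely illustrative):

```python
import numpy as np

mu, sigma = 30.0, 5.0  # illustrative location/scale, e.g. for primary mass

def log_prob(theta):
    # Invert the flow theta = mu + sigma * z, so z = (theta - mu) / sigma.
    z = (theta - mu) / sigma
    base_logp = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)  # standard normal base
    log_det = -np.log(sigma)  # log |dz/dtheta|, the Jacobian correction
    return base_logp + log_det

# Sanity check: the transformed density integrates to one on a wide grid.
grid = np.linspace(-50.0, 110.0, 20001)
p = np.exp(log_prob(grid))
mass = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(grid))  # trapezoid rule
```

In the hierarchical setting, the flow's parameters replace the hand-chosen parameters of a phenomenological population model, and the same log-density enters the population likelihood.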


Identification of Binary Neutron Star Mergers in Gravitational-Wave Data Using YOLO One-Shot Object Detection

Aveiro, João, Freitas, Felipe F., Ferreira, Márcio, Onofre, Antonio, Providência, Constança, Gonçalves, Gonçalo, Font, José A.

arXiv.org Artificial Intelligence

We demonstrate the application of the YOLOv5 model, a general-purpose convolution-based single-shot object detection model, to the task of detecting binary neutron star (BNS) coalescence events in gravitational-wave data from current-generation interferometer detectors. We also present a thorough explanation of the synthetic data generation and preparation tasks, based on approximant waveform models, used for the model training, validation, and testing steps. Using this approach, we achieve mean average precision ($\text{mAP}_{[0.50]}$) values of 0.945 for a single-class validation dataset and as high as 0.978 for test datasets. Moreover, the trained model successfully identifies the GW170817 event in the LIGO H1 detector data. Identification of this event is also possible in the LIGO L1 detector data with an additional pre-processing step, without the need to remove the large glitch in the final stages of the inspiral. Detection of the GW190425 event is less successful, which attests to performance degradation with signal-to-noise ratio. Our study indicates that the YOLOv5 model is an interesting approach for first-stage detection alarm pipelines and, when integrated into more complex pipelines, for real-time inference of physical source parameters.
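The $\text{mAP}_{[0.50]}$ figure quoted above scores a detection as a true positive when its predicted box overlaps the ground-truth box with intersection-over-union (IoU) of at least 0.50. A minimal sketch of that criterion, with boxes given as `(x_min, y_min, x_max, y_max)` on the detector's time-frequency plane (the box convention is an assumption for illustration):

```python
def iou(a, b):
    # Intersection rectangle, clamped to zero when the boxes are disjoint.
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def is_true_positive(pred, truth, threshold=0.50):
    # The matching rule behind mAP at the 0.50 IoU threshold.
    return iou(pred, truth) >= threshold
```

Average precision is then the area under the precision-recall curve built from these true/false-positive decisions, averaged over classes to give mAP.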


Intelligent Machines Are Changing Point of Care

#artificialintelligence

Literature abounds on the use of ultrasound. The technology is applied to everything from peering at a fetus inside a body to helping to diagnose shock. However, it demands a fair amount of expertise and concentration to obtain correct measurements from ultrasound images and to interpret those images. Today, artificial intelligence (AI) adds to ultrasound a powerful layer that lets users gain critically needed information – especially when time is of the essence. When a patient arrives in the ER with symptoms characteristic of shock, the attending physician needs to act quickly.