
Automated anatomy-based post-processing reduces false positives and improved interpretability of deep learning intracranial aneurysm detection

Kim, Jisoo, Lin, Chu-Hsuan, Ceballos-Arroyo, Alberto, Liu, Ping, Jiang, Huaizu, Yadav, Shrikanth, Wan, Qi, Qin, Lei, Young, Geoffrey S

arXiv.org Artificial Intelligence

Introduction: Deep learning (DL) models can help detect intracranial aneurysms on CTA, but high false-positive (FP) rates remain a barrier to clinical translation despite improvements in model architectures and strategies such as detection-threshold tuning. We employed an automated, anatomy-based, hybrid heuristic-learning artery-vein segmentation post-processing method to further reduce FPs. Methods: Two DL models, CPM-Net and a deformable 3D convolutional neural network-transformer hybrid (3D-CNN-TR), were trained on 1,186 open-source CTAs (1,373 annotated aneurysms) and evaluated on 143 held-out private CTAs (218 annotated aneurysms). Brain, artery, vein, and cavernous venous sinus (CVS) segmentation masks were used to remove candidate FPs in the DL outputs that overlapped with: (1) the brain mask; (2) the vein mask; (3) the vein mask more than the artery mask; (4) the brain plus vein masks; (5) the brain mask plus the vein mask more than the artery mask. Results: CPM-Net yielded 139 true positives (TP), 79 false negatives (FN), and 126 FPs; 3D-CNN-TR yielded 179 TPs, 39 FNs, and 182 FPs. FPs most commonly corresponded to extracranial (CPM-Net 27.3%; 3D-CNN-TR 42.3%), venous (CPM-Net 56.3%; 3D-CNN-TR 29.1%), arterial (CPM-Net 11.9%; 3D-CNN-TR 53.3%), and non-vascular (CPM-Net 25.4%; 3D-CNN-TR 9.3%) structures. Method 5 performed best, reducing CPM-Net FPs by 70.6% (89/126) and 3D-CNN-TR FPs by 51.6% (94/182) without reducing TPs, lowering the FP/case rate from 0.88 to 0.26 for CPM-Net and from 1.27 to 0.62 for 3D-CNN-TR. Conclusion: Anatomy-based, interpretable post-processing can improve the performance of DL-based aneurysm detection models. More broadly, automated, domain-informed, hybrid heuristic-learning post-processing holds promise for improving the performance and clinical acceptance of aneurysm detection models.
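The best-performing rule (method 5: discard detections that touch the brain mask, or overlap the vein mask more than the artery mask) can be sketched as a simple mask-overlap filter. The function below is an illustrative reconstruction, not the authors' code; names and the exact overlap criteria are assumptions:

```python
import numpy as np

def suppress_false_positives(candidates, brain_mask, artery_mask, vein_mask):
    """Filter candidate aneurysm detections with anatomical masks.

    Hypothetical sketch of the paper's "method 5" rule: drop a candidate
    if any of its voxels fall inside the brain mask, or if it overlaps
    the vein mask more than the artery mask. Each candidate is a boolean
    volume the same shape as the masks.
    """
    kept = []
    for cand in candidates:
        brain_overlap = np.logical_and(cand, brain_mask).sum()
        artery_overlap = np.logical_and(cand, artery_mask).sum()
        vein_overlap = np.logical_and(cand, vein_mask).sum()
        if brain_overlap > 0:
            continue  # inside brain parenchyma -> likely non-vascular FP
        if vein_overlap > artery_overlap:
            continue  # predominantly venous -> likely venous FP
        kept.append(cand)
    return kept
```

In this form the rule is fully interpretable: every discarded detection can be explained by the anatomical compartment it landed in.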


RACOON: An LLM-based Framework for Retrieval-Augmented Column Type Annotation with a Knowledge Graph

Wei, Lindsey Linxi, Xiao, Guorui, Balazinska, Magdalena

arXiv.org Artificial Intelligence

As an important component of data exploration and integration, Column Type Annotation (CTA) aims to label the columns of a table with one or more semantic types. With the recent development of Large Language Models (LLMs), researchers have begun to explore the possibility of using LLMs for CTA, leveraging their strong zero-shot capabilities. In this paper, we build on this promising work and improve on LLM-based methods for CTA by showing how to use a Knowledge Graph (KG) to augment the context information provided to the LLM. Our approach, called RACOON, combines both pre-trained parametric and non-parametric knowledge during generation to improve LLMs' performance on CTA. Our experiments show that RACOON achieves up to a 0.21 micro F-1 improvement over vanilla LLM inference.
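The retrieval-augmentation idea can be illustrated with a minimal prompt builder: facts retrieved from a KG for the column's cell values are serialized into the context before the LLM picks a type. The function name and prompt wording below are our assumptions, not RACOON's actual implementation:

```python
def build_cta_prompt(column_values, kg_context, candidate_types):
    """Assemble a CTA prompt augmented with retrieved KG facts.

    `kg_context` maps a cell value to a fact string retrieved from the
    knowledge graph; these facts are injected as extra context so the
    LLM can disambiguate the column's semantic type.
    """
    facts = "\n".join(f"- {value}: {fact}" for value, fact in kg_context.items())
    return (
        "Classify the semantic type of this table column.\n"
        f"Candidate types: {', '.join(candidate_types)}\n"
        f"Column values: {', '.join(column_values)}\n"
        f"Knowledge-graph context:\n{facts}\n"
        "Answer with exactly one candidate type."
    )
```

The non-parametric knowledge (KG facts) thus complements the model's parametric knowledge at generation time.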


POD-Attention: Unlocking Full Prefill-Decode Overlap for Faster LLM Inference

Kamath, Aditya K, Prabhu, Ramya, Mohan, Jayashree, Peter, Simon, Ramjee, Ramachandran, Panwar, Ashish

arXiv.org Artificial Intelligence

Each request in LLM inference goes through two phases: compute-bound prefill and memory-bandwidth-bound decode. To improve GPU utilization, recent systems use hybrid batching that combines the prefill and decode phases of different requests into the same batch. Hybrid batching works well for linear operations as it amortizes the cost of loading model weights from HBM. However, attention computation in hybrid batches remains inefficient because existing attention kernels are optimized for either prefill or decode. In this paper, we present POD-Attention -- the first GPU kernel that efficiently computes attention for hybrid batches. POD-Attention aims to maximize the utilization of both compute and memory bandwidth by carefully allocating the GPU's resources such that prefill and decode operations happen concurrently on the same multiprocessor. We integrate POD-Attention into Sarathi-Serve, a state-of-the-art LLM inference scheduler. POD-Attention speeds up attention computation by up to 75% (mean 28%) and increases LLM serving throughput by up to 22% in offline inference. In online inference, POD-Attention enables lower time-to-first-token (TTFT), time-between-tokens (TBT), and request execution latency versus Sarathi-Serve.
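The hybrid-batching setting that POD-Attention targets can be sketched as a greedy scheduler: every decode request contributes one token, and the leftover token budget is filled with a chunk of pending prefill work. This is an illustrative sketch of chunked-prefill scheduling, not Sarathi-Serve's actual algorithm:

```python
def form_hybrid_batch(decode_reqs, prefill_reqs, token_budget):
    """Build one hybrid batch under a fixed token budget.

    decode_reqs: request ids in the decode phase (1 token each).
    prefill_reqs: list of (request id, remaining prompt tokens) pairs.
    Returns a list of (phase, request id, tokens) entries mixing
    memory-bound decodes with compute-bound prefill chunks -- the batch
    shape a fused prefill+decode attention kernel would consume.
    """
    batch = [("decode", r, 1) for r in decode_reqs[:token_budget]]
    budget = token_budget - len(batch)
    for req, remaining in prefill_reqs:
        if budget <= 0:
            break
        chunk = min(remaining, budget)  # chunk the prefill to fit
        batch.append(("prefill", req, chunk))
        budget -= chunk
    return batch
```

Because each such batch contains both phases, a kernel that runs them concurrently on the same multiprocessor can keep compute and memory bandwidth busy at once.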


Detecting Calls to Action in Multimodal Content: Analysis of the 2021 German Federal Election Campaign on Instagram

Achmann-Denkler, Michael, Fehle, Jakob, Haim, Mario, Wolff, Christian

arXiv.org Artificial Intelligence

This study investigates the automated classification of Calls to Action (CTAs) within the 2021 German Instagram election campaign to advance the understanding of mobilization in social media contexts. We analyzed over 2,208 Instagram stories and 712 posts using fine-tuned BERT models and OpenAI's GPT-4 models. The fine-tuned BERT model incorporating synthetic training data achieved a macro F1 score of 0.93, demonstrating robust classification performance. Our analysis revealed that 49.58% of Instagram posts and 10.64% of stories contained CTAs, highlighting significant differences in mobilization strategies between these content types. Additionally, we found that FDP and the Greens had the highest prevalence of CTAs in posts, whereas CDU and CSU led in story CTAs.
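The reported metric, macro F1, averages per-class F1 scores with equal weight, so the rare class ("contains a CTA" in stories) counts as much as the common one. A minimal self-contained implementation for reference (label names in the test are ours):

```python
def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1: compute F1 per class, then average with equal
    weight per class (unlike micro F1, which pools all decisions)."""
    scores = []
    for label in labels:
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores.append(f1)
    return sum(scores) / len(scores)
```

With imbalanced CTA/no-CTA distributions, macro F1 is the stricter choice, which makes the 0.93 figure more meaningful.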


Lean Attention: Hardware-Aware Scalable Attention Mechanism for the Decode-Phase of Transformers

Sanovar, Rya, Bharadwaj, Srikant, Amant, Renee St., Rühle, Victor, Rajmohan, Saravan

arXiv.org Artificial Intelligence

Transformer-based models have emerged as one of the most widely used architectures for natural language processing, natural language generation, and image generation. The size of the state-of-the-art models has increased steadily, reaching billions of parameters. These huge models are memory hungry and incur significant inference latency even on cutting-edge AI accelerators, such as GPUs. Specifically, the time and memory complexity of the attention operation is quadratic in terms of the total context length, i.e., prompt and output tokens. Thus, several optimizations such as key-value tensor caching and FlashAttention computation have been proposed to meet the low-latency demands of applications relying on such large models. However, these techniques do not cater to the computationally distinct nature of different phases during inference. To that end, we propose LeanAttention, a scalable technique of computing self-attention for the token-generation phase (decode-phase) of decoder-only transformer models. LeanAttention enables scaling the attention mechanism implementation for the challenging case of long context lengths by re-designing the execution flow for the decode-phase. We identify that online softmax is associative and can therefore be treated as a reduction operation, allowing us to parallelize the attention computation over these large context lengths. We extend the "stream-K" style reduction of tiled calculation to self-attention to enable parallel computation, resulting in an average of 2.6x attention execution speedup over FlashAttention-2 and up to 8.33x speedup for 512k context lengths.
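The key observation, that online softmax is associative, can be verified with scalars. Each partial result carries a running max m, a sum of exponentials l, and an exp-weighted value accumulator acc; merging two partials rescales both to a common max. Because the merge is associative, context chunks can be reduced in any order (and hence in parallel, stream-K style). This is an illustrative scalar sketch, not the GPU kernel:

```python
import math

def combine_partial_softmax(part_a, part_b):
    """Associatively merge two partial attention results.

    Each partial is (m, l, acc): running max of the logits, sum of
    exp(logit - m), and exp-weighted accumulation of values. Rescaling
    both sides to the shared max keeps the merge numerically stable.
    """
    m_a, l_a, acc_a = part_a
    m_b, l_b, acc_b = part_b
    m = max(m_a, m_b)
    scale_a = math.exp(m_a - m)
    scale_b = math.exp(m_b - m)
    return (m,
            l_a * scale_a + l_b * scale_b,
            acc_a * scale_a + acc_b * scale_b)

def softmax_weighted(logits, values):
    """Reference: sum_i softmax(logits)_i * values_i via sequential reduce."""
    part = (float("-inf"), 0.0, 0.0)  # identity element of the reduction
    for s, v in zip(logits, values):
        part = combine_partial_softmax(part, (s, 1.0, v))
    m, l, acc = part
    return acc / l
```

Real decode-phase attention applies the same algebra with vectors of keys/values per chunk; the scalar form makes the associativity argument easy to check.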


Leveraging Interesting Facts to Enhance User Engagement with Conversational Interfaces

Vedula, Nikhita, Castellucci, Giuseppe, Agichtein, Eugene, Rokhlenko, Oleg, Malmasi, Shervin

arXiv.org Artificial Intelligence

Conversational Task Assistants (CTAs) guide users in performing a multitude of activities, such as making recipes. However, ensuring that interactions remain engaging, interesting, and enjoyable for CTA users is not trivial, especially for time-consuming or challenging tasks. Grounded in psychological theories of human interest, we propose to engage users with contextual and interesting statements or facts during interactions with a multi-modal CTA, to reduce fatigue and task abandonment before a task is complete. To operationalize this idea, we train a high-performing classifier (82% F1-score) to automatically identify relevant and interesting facts for users. We use it to create an annotated dataset of task-specific interesting facts for the domain of cooking. Finally, we design and validate a dialogue policy to incorporate the identified relevant and interesting facts into a conversation, to improve user engagement and task completion. Live testing on a leading multi-modal voice assistant shows that 66% of the presented facts were received positively, leading to a 40% gain in the user satisfaction rating, and a 37% increase in conversation length. These findings emphasize that strategically incorporating interesting facts into the CTA experience can promote real-world user participation for guided task interactions.


TopCoW: Benchmarking Topology-Aware Anatomical Segmentation of the Circle of Willis (CoW) for CTA and MRA

Yang, Kaiyuan, Musio, Fabio, Ma, Yihui, Juchler, Norman, Paetzold, Johannes C., Al-Maskari, Rami, Höher, Luciano, Li, Hongwei Bran, Hamamci, Ibrahim Ethem, Sekuboyina, Anjany, Shit, Suprosanna, Huang, Houjing, Waldmannstetter, Diana, Kofler, Florian, Navarro, Fernando, Menten, Martin, Ezhov, Ivan, Rueckert, Daniel, Vos, Iris, Ruigrok, Ynte, Velthuis, Birgitta, Kuijf, Hugo, Hämmerli, Julien, Wurster, Catherine, Bijlenga, Philippe, Westphal, Laura, Bisschop, Jeroen, Colombo, Elisa, Baazaoui, Hakim, Makmur, Andrew, Hallinan, James, Wiestler, Bene, Kirschke, Jan S., Wiest, Roland, Montagnon, Emmanuel, Letourneau-Guillon, Laurent, Galdran, Adrian, Galati, Francesco, Falcetta, Daniele, Zuluaga, Maria A., Lin, Chaolong, Zhao, Haoran, Zhang, Zehan, Ra, Sinyoung, Hwang, Jongyun, Park, Hyunjin, Chen, Junqiang, Wodzinski, Marek, Müller, Henning, Shi, Pengcheng, Liu, Wei, Ma, Ting, Yalçin, Cansu, Hamadache, Rachika E., Salvi, Joaquim, Llado, Xavier, Estrada, Uma Maria Lal-Trehan, Abramova, Valeriia, Giancardo, Luca, Oliver, Arnau, Liu, Jialu, Huang, Haibin, Cui, Yue, Lin, Zehang, Liu, Yusheng, Zhu, Shunzhi, Patel, Tatsat R., Tutino, Vincent M., Orouskhani, Maysam, Wang, Huayu, Mossa-Basha, Mahmud, Zhu, Chengcheng, Rokuss, Maximilian R., Kirchhoff, Yannick, Disch, Nico, Holzschuh, Julius, Isensee, Fabian, Maier-Hein, Klaus, Sato, Yuki, Hirsch, Sven, Wegener, Susanne, Menze, Bjoern

arXiv.org Artificial Intelligence

The Circle of Willis (CoW) is an important network of arteries connecting the major circulations of the brain. Its vascular architecture is believed to affect the risk, severity, and clinical outcome of serious neuro-vascular diseases. However, characterizing the highly variable CoW anatomy is still a manual and time-consuming expert task. The CoW is usually imaged by two angiographic imaging modalities, magnetic resonance angiography (MRA) and computed tomography angiography (CTA), but there exist limited public datasets with annotations on CoW anatomy, especially for CTA. Therefore, we organized the TopCoW Challenge in 2023 with the release of an annotated CoW dataset. The TopCoW dataset was the first public dataset with voxel-level annotations for thirteen possible CoW vessel components, enabled by virtual-reality (VR) technology. It was also the first large dataset with paired MRA and CTA from the same patients. The TopCoW challenge formalized the CoW characterization problem as a multiclass anatomical segmentation task with an emphasis on topological metrics. We invited submissions worldwide for the CoW segmentation task, which attracted over 140 registered participants from four continents. The top-performing teams managed to segment many CoW components with Dice scores around 90%, but with lower scores for communicating arteries and rare variants. There were also topological mistakes for predictions with high Dice scores. Additional topological analysis revealed further areas for improvement in detecting certain CoW components and matching CoW variant topology accurately. TopCoW represented a first attempt at benchmarking the CoW anatomical segmentation task for MRA and CTA, both morphologically and topologically.
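The morphological metric used to rank submissions, the Dice coefficient, measures voxel overlap between a predicted and a reference mask; a prediction can score a Dice near 90% yet still be topologically wrong (e.g. a broken vessel), which is why TopCoW adds topological metrics. A minimal reference implementation over flat binary masks:

```python
def dice_score(pred, target):
    """Dice coefficient between two binary masks (flat sequences of 0/1):
    2 * |intersection| / (|pred| + |target|). Two empty masks count as a
    perfect match by convention."""
    intersection = sum(p and t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * intersection / total if total else 1.0
```

Note that Dice is computed per vessel class in a multiclass setting like TopCoW's, one binary mask per CoW component.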


Continuous-time Autoencoders for Regular and Irregular Time Series Imputation

Wi, Hyowon, Shin, Yehjin, Park, Noseong

arXiv.org Artificial Intelligence

Time series imputation is one of the most fundamental tasks for time series. Real-world time series datasets are frequently incomplete (or irregular with missing observations), in which case imputation is strongly required. Many different time series imputation methods have been proposed. Recent self-attention-based methods show state-of-the-art imputation performance. However, designing an imputation method based on continuous-time recurrent neural networks (RNNs), i.e., neural controlled differential equations (NCDEs), has long been overlooked. To this end, we redesign time series (variational) autoencoders based on NCDEs. Our method, called continuous-time autoencoder (CTA), encodes an input time series sample into a continuous hidden path (rather than a hidden vector) and decodes it to reconstruct and impute the input. In our experiments with 4 datasets and 19 baselines, our method shows the best imputation performance in almost all cases.


Rethinking Backdoor Attacks on Dataset Distillation: A Kernel Method Perspective

Chung, Ming-Yu, Chou, Sheng-Yen, Yu, Chia-Mu, Chen, Pin-Yu, Kuo, Sy-Yen, Ho, Tsung-Yi

arXiv.org Artificial Intelligence

Dataset distillation offers a potential means to enhance data efficiency in deep learning. Recent studies have shown its ability to counteract backdoor risks present in original training samples. In this study, we delve into the theoretical aspects of backdoor attacks and dataset distillation based on kernel methods. We introduce two new theory-driven trigger pattern generation methods specialized for dataset distillation. Following a comprehensive set of analyses and experiments, we show that our optimization-based trigger design framework informs effective backdoor attacks on dataset distillation. Notably, datasets poisoned by our designed trigger prove resilient against conventional backdoor attack detection and mitigation methods. Our empirical results validate that the triggers developed using our approaches are proficient at executing resilient backdoor attacks.
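For readers unfamiliar with the threat model, a backdoor poisoning step stamps a trigger pattern onto a fraction of the training images and flips their labels to an attacker-chosen target. The sketch below uses a generic patch trigger for illustration only; the paper's contribution is precisely that its triggers are instead optimized via a kernel-method analysis of the distillation process:

```python
import numpy as np

def apply_trigger(image, trigger, corner=(0, 0)):
    """Stamp a trigger patch onto a copy of a 2D image (values in [0, 1])."""
    img = image.copy()
    r, c = corner
    h, w = trigger.shape
    img[r:r + h, c:c + w] = trigger
    return img

def poison(images, labels, trigger, target_label, rate=0.1, seed=0):
    """Poison a fraction `rate` of the training set: triggered images are
    relabeled with the attacker's target class."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=max(1, int(rate * len(images))), replace=False)
    images = [apply_trigger(im, trigger) if i in idx else im for i, im in enumerate(images)]
    labels = [target_label if i in idx else y for i, y in enumerate(labels)]
    return images, labels
```

Dataset distillation then compresses this poisoned set; the paper studies when the backdoor signal survives that compression.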


ArcheType: A Novel Framework for Open-Source Column Type Annotation using Large Language Models

Feuer, Benjamin, Liu, Yurong, Hegde, Chinmay, Freire, Juliana

arXiv.org Artificial Intelligence

Existing deep-learning approaches to semantic column type annotation (CTA) have important shortcomings: they rely on semantic types that are fixed at training time; they require a large number of training samples per type and incur large run-time inference costs; and their performance can degrade when evaluated on novel datasets, even when the types remain constant. Large language models have exhibited strong zero-shot classification performance on a wide range of tasks, and in this paper we explore their use for CTA. We introduce ArcheType, a simple, practical method for context sampling, prompt serialization, model querying, and label remapping, which enables large language models to solve CTA problems in a fully zero-shot manner. We ablate each component of our method separately, and establish that improvements to context sampling and label remapping provide the most consistent gains. ArcheType establishes new state-of-the-art performance on zero-shot CTA benchmarks (including three new domain-specific benchmarks which we release along with this paper), and when used in conjunction with classical CTA techniques, it outperforms a SOTA DoDuo model on the fine-tuned SOTAB benchmark. Our code is available at https://github.com/penfever/ArcheType.
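Label remapping, one of the components the ablation highlights, maps a free-form LLM answer back onto the fixed label set. A simplified stand-in for that step (ArcheType's actual procedure may differ): exact match first, then substring containment, then closest string match as a fallback:

```python
import difflib

def remap_label(raw_output, allowed_types):
    """Map a free-form LLM answer onto a fixed CTA label set.

    Tries, in order: exact case-insensitive match, substring containment
    in either direction, and finally the closest string by similarity.
    """
    answer = raw_output.strip().lower()
    types = {t.lower(): t for t in allowed_types}
    if answer in types:
        return types[answer]
    for low, orig in types.items():
        if low in answer or answer in low:
            return orig
    close = difflib.get_close_matches(answer, list(types), n=1, cutoff=0.0)
    return types[close[0]]
```

Without such remapping, verbose or slightly misspelled answers would count as misclassifications even when the model picked the right concept.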