

A large-scale and PCR-referenced vocal audio dataset for COVID-19

arXiv.org Artificial Intelligence

The UK COVID-19 Vocal Audio Dataset is designed for the training and evaluation of machine learning models that classify SARS-CoV-2 infection status or associated respiratory symptoms using vocal audio. The UK Health Security Agency recruited voluntary participants through the national Test and Trace programme and the REACT-1 survey in England from March 2021 to March 2022, during dominant transmission of the Alpha and Delta SARS-CoV-2 variants and some Omicron variant sublineages. Audio recordings of volitional coughs, exhalations, and speech were collected in the 'Speak up to help beat coronavirus' digital survey alongside demographic, self-reported symptom, and respiratory condition data, and linked to SARS-CoV-2 test results. The UK COVID-19 Vocal Audio Dataset represents the largest collection of SARS-CoV-2 PCR-referenced audio recordings to date. PCR results were linked to 70,794 of 72,999 participants and 24,155 of 25,776 positive cases. Respiratory symptoms were reported by 45.62% of participants. This dataset has additional potential uses for bioacoustics research, with 11.30% of participants reporting asthma, and 27.20% with linked influenza PCR test results.
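A minimal sketch of how summary statistics like those above might be computed from the dataset's participant metadata. The file name and column names here are illustrative assumptions, not the published schema:

```python
import pandas as pd

# Hypothetical flat-file view of the participant metadata; the file name
# and all column names below are assumptions for illustration only.
df = pd.read_csv("uk_covid19_vocal_audio_metadata.csv")

n_participants = len(df)
pcr_linked = df["covid_pcr_result"].notna().mean()        # fraction with a linked PCR result
symptomatic = (df["any_respiratory_symptom"] == 1).mean() # fraction reporting symptoms
asthma = (df["self_reported_asthma"] == 1).mean()         # fraction reporting asthma

print(f"participants: {n_participants}")
print(f"PCR result linked: {pcr_linked:.2%}")
print(f"respiratory symptoms reported: {symptomatic:.2%}")
print(f"asthma reported: {asthma:.2%}")
```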


Audio-based AI classifiers show no evidence of improved COVID-19 screening over simple symptoms checkers

arXiv.org Artificial Intelligence

Recent work has reported that respiratory audio-trained AI classifiers can accurately predict SARS-CoV-2 infection status. Here, we undertake a large-scale study of audio-based AI classifiers as part of the UK government's pandemic response. We collect a dataset of audio recordings from 67,842 individuals, with linked metadata, of whom 23,514 had positive PCR tests for SARS-CoV-2. In an unadjusted analysis, similar to that in previous works, AI classifiers predict SARS-CoV-2 infection status with high accuracy (ROC-AUC = 0.846). However, after matching on measured confounders, such as self-reported symptoms, performance is much weaker (ROC-AUC = 0.619). Upon quantifying the utility of audio-based classifiers in practical settings, we find them to be outperformed by predictions based on user-reported symptoms. We make best-practice recommendations for handling recruitment bias, and for assessing audio-based classifiers by their utility in relevant practical settings. Our work provides novel insights into the value of AI audio analysis and the importance of study design and treatment of confounders in AI-enabled diagnostics.

The coronavirus disease 2019 (COVID-19) pandemic has been estimated by the World Health Organization (WHO) to have caused 14.9 million excess deaths over the 2020-2021 period. Table S1 summarises nine highly cited datasets and corresponding classification performance. Here, we analyse the largest PCR-validated dataset collected to date in the field of audio-based COVID-19 screening (ABCS). We design and specify an analysis plan in advance, to investigate whether audio-based classifiers can improve the accuracy of COVID-19 screening over self-reported symptoms. Our contribution is as follows:

- We collect a respiratory acoustic dataset of 67,842 individuals with linked PCR test outcomes, including 23,514 who tested positive for COVID-19.
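To illustrate why matching on measured confounders can collapse an apparently strong ROC-AUC, here is a small self-contained sketch on synthetic data; all variable names, effect sizes, and the matching scheme are invented for illustration and are not the study's analysis code. A classifier whose score leans on a symptom flag looks accurate in an unadjusted evaluation, but once positives and negatives are matched on that flag, the remaining discrimination is much weaker:

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: `score` is a classifier output, `label` the PCR
# result, and `symptomatic` a self-reported-symptom flag that confounds both.
n = 10_000
symptomatic = rng.integers(0, 2, n)
label = rng.binomial(1, 0.1 + 0.5 * symptomatic)            # symptoms raise infection odds
score = 0.4 * symptomatic + 0.1 * label + rng.normal(0, 0.3, n)  # score mostly tracks symptoms
df = pd.DataFrame({"score": score, "label": label, "symptomatic": symptomatic})

# Unadjusted ROC-AUC: inflated, because the classifier can exploit the
# association between symptoms and infection status.
print("unadjusted AUC:", roc_auc_score(df.label, df.score))

# Matched ROC-AUC: within each symptom stratum, keep equal numbers of
# positives and negatives, so symptoms no longer separate the classes.
parts = []
for _, grp in df.groupby("symptomatic"):
    pos = grp[grp.label == 1]
    neg = grp[grp.label == 0]
    k = min(len(pos), len(neg))
    parts.append(pos.sample(k, random_state=0))
    parts.append(neg.sample(k, random_state=0))
matched = pd.concat(parts)
print("matched AUC:", roc_auc_score(matched.label, matched.score))
```

On this synthetic data the unadjusted AUC is substantially higher than the matched AUC, mirroring the qualitative gap the abstract reports; exact matching on a single binary flag is the simplest possible scheme, and the paper's matching on multiple measured confounders would be more involved.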