Rene: A Pre-trained Multi-modal Architecture for Auscultation of Respiratory Diseases
Pengfei Zhang, Zhihang Zheng, Shichen Zhang, Minghao Yang, Shaojun Tang
Compared with invasive examinations that require tissue sampling, respiratory sound testing is a non-invasive examination method that is safer and easier for patients to accept. In this study, we introduce Rene, a pioneering large-scale model tailored for respiratory sound recognition. Rene has been rigorously fine-tuned with an extensive dataset featuring a broad array of respiratory audio samples, targeting disease detection, sound pattern classification, and event identification. Our approach applies a pre-trained speech recognition model to process respiratory sounds, augmented with patient medical records. The resulting multi-modal deep-learning framework addresses interpretability and real-time diagnostic challenges that have hindered previous respiratory-focused models. Benchmark comparisons reveal that Rene significantly outperforms existing models, achieving improvements of 10.27%, 16.15%, 15.29%, and 18.90% in respiratory event detection and audio classification on the SPRSound database. Disease prediction accuracy on the ICBHI database improved by 23% over the baseline in both mean average and harmonic scores. Moreover, we have developed a real-time respiratory sound discrimination system utilizing the Rene architecture. Employing state-of-the-art Edge AI technology, this system enables rapid and accurate responses for respiratory sound auscultation (https://github.com/zpforlove/Rene).
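The abstract does not spell out how the audio and medical-record modalities are combined, so the following is only a minimal PyTorch sketch of the kind of multi-modal design it describes: embeddings from a pre-trained speech encoder fused with structured patient-record features for classification. All class names, dimensions, and the concatenation-based fusion here are illustrative assumptions, not the authors' Rene implementation.

```python
# Minimal sketch of a multi-modal auscultation classifier in the spirit of
# the abstract: audio embeddings from a pre-trained speech encoder fused with
# patient medical-record features. All names, dimensions, and the late-fusion
# design below are assumptions, not the authors' actual Rene architecture.
import torch
import torch.nn as nn


class MultiModalAuscultationModel(nn.Module):
    def __init__(self, audio_dim: int = 768, record_dim: int = 16,
                 hidden_dim: int = 256, num_classes: int = 4):
        super().__init__()
        # Projection over embeddings produced by a pre-trained speech encoder
        # (hypothetically one 768-d vector per recording).
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        # Small MLP over structured medical-record features (age, sex, ...).
        self.record_mlp = nn.Sequential(
            nn.Linear(record_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        # Late fusion by concatenation, followed by a classification head.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, audio_emb: torch.Tensor, record: torch.Tensor) -> torch.Tensor:
        a = self.audio_proj(audio_emb)                      # (B, hidden_dim)
        r = self.record_mlp(record)                         # (B, hidden_dim)
        return self.classifier(torch.cat([a, r], dim=-1))   # (B, num_classes)


if __name__ == "__main__":
    model = MultiModalAuscultationModel()
    audio_emb = torch.randn(2, 768)   # placeholder for pre-trained encoder output
    record = torch.randn(2, 16)       # placeholder for encoded medical records
    print(model(audio_emb, record).shape)  # torch.Size([2, 4])
```

For details of the actual architecture, training data, and evaluation protocol, see the paper and the repository linked above.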
arXiv.org Artificial Intelligence
Jun-6-2024