Eye-SpatialNet: Spatial Information Extraction from Ophthalmology Notes
Datta, Surabhi, Kaochar, Tasneem, Lam, Hio Cheng, Nwosu, Nelly, Giancardo, Luca, Chuang, Alice Z., Feldman, Robert M., Roberts, Kirk
arXiv.org Artificial Intelligence
These findings are documented based on interpretations of imaging examinations (e.g., fundus examination), complications or outcomes associated with surgeries (e.g., cataract surgery), and experiences or symptoms reported by patients. Such findings are often described along with their exact eye locations as well as other contextual information such as their timing and status. Thus, ophthalmology notes comprise spatial relations between eye findings and their corresponding locations, and these findings are further described using spatial characteristics such as laterality and size. Although there have been recent advancements in applying natural language processing (NLP) methods in the ophthalmology domain, they are mainly targeted at specific ocular conditions. Some work leveraged electronic health record text data to identify conditions such as glaucoma [1], herpes zoster ophthalmicus [2], and exfoliation syndrome [3], while another line of work extracted quantitative measures, particularly those related to visual acuity [4, 5] and microbial keratitis [6]. In this work, we aim to extract more comprehensive information, both spatial and contextual, related to all eye findings in ophthalmology notes. Besides automated screening and diagnosis of various ocular conditions, identifying such detailed information can aid applications such as automated monitoring of eye findings or diseases and cohort retrieval for retrospective epidemiological studies. To this end, we propose to extend our existing radiology spatial representation schema, Rad-SpatialNet [7], to the ophthalmology domain. We refer to this extended schema as Eye-SpatialNet in this paper.
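To make the schema idea concrete, the sketch below models a single spatial relation between an eye finding and its anatomical location, with the contextual attributes the abstract mentions (laterality, size, timing, status). This is an illustrative approximation only; the class and field names are hypothetical and are not the actual element names of the Rad-SpatialNet or Eye-SpatialNet schemas.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical representation of an Eye-SpatialNet-style frame.
# Field names are illustrative, not the schema's actual labels.

@dataclass
class EyeFinding:
    text: str                          # finding span, e.g. "hemorrhage"
    laterality: Optional[str] = None   # e.g. "right eye" / "left eye"
    size: Optional[str] = None         # e.g. "small"
    status: Optional[str] = None       # e.g. "stable", "resolved"
    timing: Optional[str] = None       # e.g. "new", "since last visit"

@dataclass
class SpatialRelation:
    finding: EyeFinding
    location: str                      # anatomical span, e.g. "macula"
    spatial_trigger: str               # relation phrase, e.g. "in"

# Example note sentence: "Small hemorrhage in the macula of the right eye, stable."
relation = SpatialRelation(
    finding=EyeFinding(text="hemorrhage", laterality="right eye",
                       size="small", status="stable"),
    location="macula",
    spatial_trigger="in",
)
print(relation.finding.text, relation.spatial_trigger, relation.location)
```

In practice, an information-extraction model would populate such frames from the raw note text; the structured output is what enables downstream uses like finding-level monitoring and cohort retrieval.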
May-19-2023