
Collaborating Authors

 Kaochar, Tasneem


Eye-SpatialNet: Spatial Information Extraction from Ophthalmology Notes

arXiv.org Artificial Intelligence

These findings are documented based on interpretations from imaging examinations (e.g., fundus examination), complications or outcomes associated with surgeries (e.g., cataract surgery), and experiences or symptoms shared by patients. Such findings are oftentimes described along with their exact eye locations as well as other contextual information such as their timing and status. Thus, ophthalmology notes contain spatial relations between eye findings and their corresponding locations, and these findings are further described using different spatial characteristics such as laterality and size. Although there have been recent advancements in applying natural language processing (NLP) methods in the ophthalmology domain, they are mainly targeted at specific ocular conditions. Some work leveraged electronic health record text data to identify conditions such as glaucoma [1], herpes zoster ophthalmicus [2], and exfoliation syndrome [3], while another set of work extracted quantitative measures, particularly those related to visual acuity [4, 5] and microbial keratitis [6]. In this work, we aim to extract more comprehensive information related to all eye findings, covering both spatial and contextual information, from ophthalmology notes. Besides automated screening and diagnosis of various ocular conditions, identifying such detailed information can aid in applications such as automated monitoring of eye findings or diseases and cohort retrieval for retrospective epidemiological studies. For this, we propose to extend our existing radiology spatial representation schema, Rad-SpatialNet [7], to the ophthalmology domain. We refer to this as the Eye-SpatialNet schema in this paper.
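To make the kind of information the schema targets concrete, below is a minimal Python sketch of how a finding, its spatial relation to an eye location, and its contextual attributes (laterality, size, status, timing) could be represented. The class and field names are illustrative assumptions for this example, not the published Eye-SpatialNet schema definition.

```python
# Hypothetical sketch of an Eye-SpatialNet-style annotation structure.
# Field names and values are assumptions for illustration only.
from dataclasses import dataclass
from typing import Optional


@dataclass
class EyeFinding:
    """An ophthalmic finding mentioned in a note (e.g., 'hemorrhage')."""
    text: str
    laterality: Optional[str] = None   # e.g., "right eye", "left eye", "both eyes"
    size: Optional[str] = None         # e.g., "small", "2 mm"
    status: Optional[str] = None       # e.g., "stable", "resolved"
    timing: Optional[str] = None       # e.g., "since last visit"


@dataclass
class SpatialRelation:
    """Links a finding to its anatomical location via a spatial trigger word."""
    finding: EyeFinding
    trigger: str                       # e.g., "in", "at", "along"
    location: str                      # e.g., "macula", "inferior retina"


# Example sentence: "Small hemorrhage in the macula of the right eye, stable."
relation = SpatialRelation(
    finding=EyeFinding(text="hemorrhage", laterality="right eye",
                       size="small", status="stable"),
    trigger="in",
    location="macula",
)
print(relation)
```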


Human Natural Instruction of a Simulated Electronic Student

AAAI Conferences

Humans naturally use multiple modes of instruction while teaching one another. We would like our robots and artificial agents to be instructed in the same way, rather than programmed. In this paper, we review prior work on human instruction of autonomous agents and present observations from two exploratory pilot studies and the results of a full study investigating how multiple instruction modes are used by humans. We describe our Bootstrapped Learning User Interface, a prototype multi-instruction interface informed by our human-user studies.