An Open-Source Knowledge Graph Ecosystem for the Life Sciences
Callahan, Tiffany J., Tripodi, Ignacio J., Stefanski, Adrianne L., Cappelletti, Luca, Taneja, Sanya B., Wyrwa, Jordan M., Casiraghi, Elena, Matentzoglu, Nicolas A., Reese, Justin, Silverstein, Jonathan C., Hoyt, Charles Tapley, Boyce, Richard D., Malec, Scott A., Unni, Deepak R., Joachimiak, Marcin P., Robinson, Peter N., Mungall, Christopher J., Cavalleri, Emanuele, Fontana, Tommaso, Valentini, Giorgio, Mesiti, Marco, Gillenwater, Lucas A., Santangelo, Brook, Vasilevsky, Nicole A., Hoehndorf, Robert, Bennett, Tellen D., Ryan, Patrick B., Hripcsak, George, Kahn, Michael G., Bada, Michael, Baumgartner, William A. Jr, Hunter, Lawrence E.
Translational research requires data at multiple scales of biological organization. Advancements in sequencing and multi-omics technologies have increased the availability of these data, but researchers face significant integration challenges. Knowledge graphs (KGs) are used to model complex phenomena, and methods exist to automatically construct them. However, tackling complex biomedical integration problems requires flexibility in the way knowledge is modeled. Moreover, existing KG construction methods provide robust tooling at the cost of fixed or limited choices among knowledge representation models. PheKnowLator (Phenotype Knowledge Translator) is a semantic ecosystem for automating the FAIR (Findable, Accessible, Interoperable, and Reusable) construction of ontologically grounded KGs with fully customizable knowledge representation. The ecosystem includes KG construction resources (e.g., data preparation APIs), analysis tools (e.g., SPARQL endpoints and abstraction algorithms), and benchmarks (e.g., prebuilt KGs and embeddings). We evaluate the ecosystem by surveying open-source KG construction methods and analyzing its computational performance when constructing 12 large-scale KGs. With flexible knowledge representation, PheKnowLator enables fully customizable KGs without compromising performance or usability.
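The subject-predicate-object queries described above (e.g., via SPARQL endpoints over an ontologically grounded KG) can be sketched with a tiny in-memory triple store. This is a minimal illustration only; the identifiers and edge list below are hypothetical examples, not actual PheKnowLator IRIs or APIs.

```python
# Illustrative sketch: pattern matching over KG edges, in the spirit of the
# subject-predicate-object queries one might run against a biomedical KG.
# All identifiers below are invented for illustration.

triples = [
    ("HP:0001250", "rdfs:subClassOf", "HP:0012638"),   # phenotype subclass edge
    ("gene:SCN1A", "RO:0003302", "HP:0001250"),        # gene 'causes or contributes to' phenotype
    ("gene:SCN1A", "RO:0002331", "GO:0086010"),        # gene 'involved in' a biological process
]

def match(pattern, store):
    """Return triples matching an (s, p, o) pattern; None acts as a wildcard."""
    return [t for t in store
            if all(q is None or q == v for q, v in zip(pattern, t))]

# All edges asserted about the (hypothetical) gene node
scn1a_edges = match(("gene:SCN1A", None, None), triples)
```

In practice such queries would be issued in SPARQL against the published KG builds rather than hand-rolled pattern matching.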
Ontologizing Health Systems Data at Scale: Making Translational Discovery a Reality
Callahan, Tiffany J., Stefanski, Adrianne L., Wyrwa, Jordan M., Zeng, Chenjie, Ostropolets, Anna, Banda, Juan M., Baumgartner, William A. Jr., Boyce, Richard D., Casiraghi, Elena, Coleman, Ben D., Collins, Janine H., Deakyne-Davies, Sara J., Feinstein, James A., Haendel, Melissa A., Lin, Asiyah Y., Martin, Blake, Matentzoglu, Nicolas A., Meeker, Daniella, Reese, Justin, Sinclair, Jessica, Taneja, Sanya B., Trinkley, Katy E., Vasilevsky, Nicole A., Williams, Andrew, Zhang, Xingman A., Denny, Joshua C., Robinson, Peter N., Ryan, Patrick, Hripcsak, George, Bennett, Tellen D., Hunter, Lawrence E., Kahn, Michael G.
Background: Common data models solve many challenges of standardizing electronic health record (EHR) data, but are unable to semantically integrate all the resources needed for deep phenotyping. Open Biological and Biomedical Ontology (OBO) Foundry ontologies provide computable representations of biological knowledge and enable the integration of heterogeneous data. However, mapping EHR data to OBO ontologies requires significant manual curation and domain expertise. Objective: We introduce OMOP2OBO, an algorithm for mapping Observational Medical Outcomes Partnership (OMOP) vocabularies to OBO ontologies. Results: Using OMOP2OBO, we produced mappings for 92,367 conditions, 8,611 drug ingredients, and 10,673 measurement results, which covered 68-99% of concepts used in clinical practice when examined across 24 hospitals. When used to phenotype rare disease patients, the mappings helped systematically identify undiagnosed patients who might benefit from genetic testing. Conclusions: By aligning OMOP vocabularies to OBO ontologies, our algorithm presents new opportunities to advance EHR-based deep phenotyping.
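The coverage figure reported above (68-99% of concepts used in clinical practice) can be illustrated with a toy mapping table. Everything below is a hedged sketch: the OMOP concept IDs, OBO identifiers, and usage counts are invented, and `mapping_coverage` is a hypothetical helper, not part of the OMOP2OBO codebase.

```python
# Hypothetical sketch of the kind of mapping OMOP2OBO produces (OMOP concept
# ID -> OBO ontology term) and how usage-weighted coverage at a site might be
# computed. All codes and counts are invented for illustration.

omop_to_obo = {
    44054006: "MONDO:0005148",   # e.g., a condition mapped to a MONDO disease term
    38341003: "HP:0000822",      # e.g., a condition mapped to an HPO phenotype term
    195967001: "MONDO:0004979",
}

# Condition concept usage counts observed at a (toy) hospital site.
site_usage = {44054006: 1200, 38341003: 950, 195967001: 400, 999999: 30}

def mapping_coverage(mappings, usage):
    """Fraction of total concept usage covered by the mapping table."""
    covered = sum(n for code, n in usage.items() if code in mappings)
    return covered / sum(usage.values())

coverage = mapping_coverage(omop_to_obo, site_usage)
```

Weighting coverage by usage, as sketched here, reflects how well the mappings serve the concepts clinicians actually record, not just the vocabulary as a whole.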