An Investigation of Memorization Risk in Healthcare Foundation Models
Sana Tonekaboni, Lena Stempfle, Adibvafa Fallahpour, Walter Gerych, Marzyeh Ghassemi
arXiv.org Artificial Intelligence
Foundation models trained on large-scale de-identified electronic health records (EHRs) hold promise for clinical applications. However, their capacity to memorize patient information raises important privacy concerns. In this work, we introduce a suite of black-box evaluation tests to assess privacy-related memorization risks in foundation models trained on structured EHR data. Our framework includes methods for probing memorization at both the embedding and generative levels, and aims to distinguish between model generalization and harmful memorization in clinically relevant settings. We contextualize memorization in terms of its potential to compromise patient privacy, particularly for vulnerable subgroups. We validate our approach on a publicly available EHR foundation model and release an open-source toolkit to facilitate reproducible and collaborative privacy assessments in healthcare AI.
Oct-16-2025
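The abstract describes black-box tests that separate generalization from harmful memorization. One common way such probes work (a hypothetical sketch, not the paper's actual method) is to compare a model's score for a real record against its average score for slightly perturbed variants: a large gap suggests the model has memorized that specific record rather than learned a general pattern. The `score_fn` and `perturb_fn` names below are illustrative assumptions.

```python
import numpy as np

def memorization_gap(score_fn, records, perturb_fn, n_perturb=10, seed=None):
    """Black-box memorization probe (illustrative sketch).

    For each record, compute score(real) - mean(score(perturbed variants)).
    A large positive gap flags the record as potentially memorized.
    """
    rng = np.random.default_rng(seed)
    gaps = []
    for rec in records:
        real = score_fn(rec)
        # Score several perturbed counterfactuals of the same record.
        alt = np.mean([score_fn(perturb_fn(rec, rng)) for _ in range(n_perturb)])
        gaps.append(real - alt)
    return np.array(gaps)

# Toy "model": assigns a high score only to one hard-coded (memorized) record.
MEMORIZED = (101, 250, 428)

def toy_score(rec):
    return 5.0 if tuple(rec) == MEMORIZED else 1.0

def toy_perturb(rec, rng):
    # Bump one code in the record so it no longer matches exactly.
    rec = list(rec)
    i = int(rng.integers(len(rec)))
    rec[i] += int(rng.integers(1, 5))
    return rec

gaps = memorization_gap(toy_score, [(101, 250, 428), (7, 8, 9)], toy_perturb, seed=0)
```

Here the memorized record shows a large gap while the unseen record shows none; a real evaluation would replace `toy_score` with the foundation model's sequence likelihood or embedding similarity.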