MedSafetyBench: Evaluating and Improving the Medical Safety of Large Language Models

Neural Information Processing Systems

However, there is little to no understanding of the notion of medical safety in the context of LLMs, let alone how to evaluate and improve it. To address this gap, we first define the notion of medical safety in LLMs based on the Principles of Medical Ethics set forth by the American Medical Association.