Developing trust in healthcare AI, step by step
A new Chilmark Research report by Dr. Jody Ranck, the firm's senior analyst, explores state-of-the-art processes for bias and risk mitigation in artificial intelligence that can be used to develop more trustworthy machine learning tools for healthcare.

WHY IT MATTERS

As the use of artificial intelligence in healthcare grows, some providers are skeptical about how much they should trust machine learning models deployed in clinical settings. AI products and services have the potential to determine who gets what form of medical care and when, so the stakes are high when algorithms are deployed, as Chilmark's 2022 "AI and Trust in Healthcare Report," published September 13, explains.

Growth in enterprise-level augmented and artificial intelligence has touched population health research, clinical practice, emergency room management, health system operations, revenue cycle management, supply chains and more. The efficiencies and cost savings that AI can help organizations realize are driving that array of use cases, along with the deeper insights into clinical patterns that machine learning can surface.
Sep-15-2022, 15:15:02 GMT