Nuance's Voice Recognition Software Deployed at Providence Health & Services' Hospitals, Clinics

AITopics Original Links

Nuance Communications, Inc. announced today that Providence Health & Services is deploying Dragon Medical 360 Network Edition across its healthcare enterprise, making medical voice recognition available at 27 hospitals and 250 clinics for approximately 8,000 clinicians. According to a news release, Providence Health & Services is ranked by Thomson Reuters as among the top 20 percent of best-performing health systems in the country. The organization-wide use of Dragon Medical will help Providence roll out the Epic electronic health record (EHR) system by letting clinical staff document and navigate the EHR simply by speaking, saving time and allowing them to be more efficient and effective. Over the next year, Dragon Medical will be integrated with Epic; once fully deployed, clinicians will be able to use the EHR with their voice alone, leading to faster EHR system adoption and improved physician satisfaction with EHR use, the press release claims. "Dragon Medical builds upon our current Nuance-driven background speech workflow through eScription and will give physicians more documentation options," said Laureen O'Brien, chief information officer, Providence Health & Services.

Crowdsourced Continuous Improvement of Medical Speech Recognition

AAAI Conferences

We describe a method for continuously improving the accuracy of a large-scale medical automatic speech recognizer (ASR) using a multi-step cycle involving several groups of workers. The paper will address the unique challenges of the medical domain, and discuss how automatically created and crowdsourced input data is combined to refine the ASR language models. The improvement cycle helped to decrease the original system's word error rate from 34.1% to 10.4%, which approaches the accuracy of human transcribers trained in medical transcription.
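The improvement the abstract reports is measured in word error rate (WER), the standard ASR metric: the minimum number of word substitutions, insertions, and deletions needed to turn the hypothesis into the reference, divided by the reference length. A minimal sketch of the computation (standard edit-distance dynamic programming, not the paper's own evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance over reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edits needed to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = d[i - 1][j] + 1
            insertion = d[i][j - 1] + 1
            d[i][j] = min(substitution, deletion, insertion)
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution in a four-word reference gives WER 0.25.
print(wer("patient denies chest pain", "patient denies chest pains"))
```

On this scale, the cycle's reduction from 34.1% to 10.4% means roughly one error in every ten reference words after refinement, versus one in three before.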

Does Amazon have answers for the future of the NHS?

The Guardian

Enthusiasts predicted the plan would relieve the pressure on hard-pressed GPs. Critics saw it as a sign of creeping privatisation and a data-protection disaster in waiting. Reactions to news last month that Amazon's voice-controlled digital assistant Alexa was to begin using NHS website information to answer health queries were many and varied. US-based healthcare tech analysts say the deal is just the latest of a series of recent moves that together reveal an audacious, long-term strategy on the part of Amazon. From its entry into the lucrative prescription drugs market and development of AI tools to analyse patient records, to Alexa apps that manage diabetes and data-driven experiments on how to cut medical bills, the $900bn global giant's determination to make the digital disruption of healthcare a central part of its future business model is becoming increasingly clear.

M*Modal Enhances its Artificial Intelligence Platform to Support the Epic NoteReader CDI Workflow


FRANKLIN, TN--(Marketwired - June 14, 2017) - M*Modal, a leading provider of clinical documentation and Speech Understanding solutions, announced that its artificial intelligence (AI) powered portfolio of solutions has been enhanced to also support the Epic NoteReader CDI (Clinical Documentation Improvement) module. In addition to embedding M*Modal Computer-Assisted Physician Documentation (CAPD) technology, Epic NoteReader CDI utilizes the M*Modal CAPD infrastructure and its reporting capabilities to deliver automated physician feedback. The M*Modal CAPD technology continuously analyzes the documentation and applies machine learning and clinical reasoning across the entire patient record to deliver high-value insights and suggest improvements in quality and compliance as the note is being created. Uniquely, M*Modal CAPD technology extends well beyond CDI feedback focused on better capturing Hierarchical Condition Categories (HCCs), Risk-Adjusted Quality Scores, and similar measures. This extends M*Modal's collaboration with Epic on the company's cloud-based speech recognition and natural language understanding technologies.

5 Ways In Which AI Is Improving Accessibility For The Hearing Impaired


With AI permeating all aspects of our lives, the technology's scope for helping people with hearing disabilities has increased. Multiple wearable devices with AI, ML and NLP embedded in them are available on the market, making life easier for people with hearing disabilities. Language translation and captioning: Tech giants are already working in this field as part of their larger corporate social responsibility programmes. Microsoft, as part of its inclusion mission, has developed headsets with its embedded AI-powered communication technology, Microsoft Translator, for the hearing impaired. The system uses automatic speech recognition to convert raw spoken language – ums, stutters and all – into fluent, punctuated text.
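The Translator pipeline itself is proprietary, but one of its steps – stripping disfluencies such as fillers and stutter repetitions from a raw transcript – can be sketched as a naive post-processing pass. The filler list and the rules below are illustrative assumptions, not Microsoft's actual method, which learns such patterns from data:

```python
# Illustrative filler tokens; production systems learn these from corpora.
FILLERS = {"um", "uh", "er"}

def clean_transcript(raw: str) -> str:
    """Drop filler words and collapse immediate word repetitions (a naive sketch)."""
    out = []
    for word in raw.split():
        token = word.lower().strip(",.")
        if token in FILLERS:
            continue  # skip fillers like "um", "uh"
        if out and token == out[-1].lower().strip(",."):
            continue  # collapse stutters like "the the"
        out.append(word)
    return " ".join(out)

print(clean_transcript("um the the patient uh has diabetes"))
```

A real system would also restore punctuation and casing with a trained model rather than rules; this sketch only shows why disfluency handling is a distinct step between raw recognition and readable captions.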