Nuance Communications, Inc. announced today that Providence Health & Services is deploying Dragon Medical 360 Network Edition across its healthcare enterprise, making medical voice recognition available at 27 hospitals and 250 clinics for approximately 8,000 clinicians. According to a news release, Providence Health & Services is ranked by Thomson Reuters among the top 20 percent of best-performing health systems in the country. The organization-wide use of Dragon Medical will help Providence roll out the Epic electronic health record (EHR) system by letting clinical staff document and navigate the EHR simply by speaking, saving time and allowing them to be more efficient and effective. Over the next year, Dragon Medical will be integrated with Epic, and once fully deployed, clinicians will be able to use the EHR just by using their voice, leading to faster EHR system adoption and improved physician satisfaction with EHR use, the press release claims. "Dragon Medical builds upon our current Nuance-driven background speech workflow through eScription and will give physicians more documentation options," said Laureen O'Brien, chief information officer, Providence Health & Services.
Saykara today announced the release of Kara 2.0, an AI-powered healthcare assistant that further simplifies the documentation process for physicians. Now featuring Ambient Mode, Kara 2.0 is a breakthrough AI-powered voice application for healthcare, allowing physicians and patients to interact as they normally do, all while Saykara listens, transcribes speech to text, parses the text into structured data, and intelligently completes each form in a patient's electronic health record (EHR, or chart). Saykara then automatically generates a clinic note including patient history, physical, assessment, plan, orders and referrals. With the release of Ambient Mode, Saykara is the only virtual healthcare assistant that can be used passively 'in the room' during physician-patient appointments with no voice commands. Ambient Mode builds on Saykara's versatility and specialty-agnostic design, allowing it to better serve up to 18 disparate healthcare specialties, including primary care, pediatrics, internal medicine, orthopedics, urology and more.
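The workflow described above (transcribe, parse into structured data, populate note sections) can be illustrated with a minimal, purely hypothetical sketch. Saykara's actual parsing uses proprietary AI; here the section names and keyword lists are illustrative assumptions, routing each transcript sentence into a note section by simple keyword matching:

```python
# Hypothetical sketch of routing transcript sentences into clinic-note
# sections. Section names and keywords are illustrative only, not
# Saykara's actual model or schema.
SECTION_KEYWORDS = {
    "history":    ["history", "since", "for the past"],
    "assessment": ["likely", "consistent with", "diagnosis"],
    "plan":       ["start", "order", "refer", "follow up"],
}

def draft_note(transcript):
    """Group each sentence under the first section whose keywords match."""
    note = {section: [] for section in SECTION_KEYWORDS}
    note["other"] = []
    for sentence in transcript.split(". "):
        s = sentence.strip().rstrip(".")
        if not s:
            continue
        for section, keywords in SECTION_KEYWORDS.items():
            if any(k in s.lower() for k in keywords):
                note[section].append(s)
                break
        else:
            note["other"].append(s)
    return note
```

For example, a transcript like "Patient reports cough for the past week. Likely viral bronchitis. Start supportive care and follow up in two weeks." would be sorted into history, assessment, and plan entries respectively. A production system would replace the keyword rules with learned models.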
With the potential of AI permeating all aspects of our lives, the scope of the technology to help people with hearing disabilities has increased. Multiple wearable devices with artificial intelligence (AI), machine learning (ML) and natural language processing (NLP) embedded in them are available on the market, making life easier for people with hearing disabilities. Language translation and captioning: Tech giants are already working in this field as part of their larger corporate social responsibility programmes. Microsoft, as part of its inclusive mission, has developed headsets with its embedded AI-powered communication technology, Microsoft Translator, for the hearing impaired. The system uses automatic speech recognition to convert raw spoken language – ums, stutters and all – into fluent, punctuated text.
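The "ums, stutters and all" cleanup step mentioned above is known as disfluency removal. As a minimal sketch of the idea (the filler-word list and repeat-collapsing rule are illustrative assumptions, not Microsoft's actual pipeline):

```python
import re

# Illustrative filler-word list; real systems use learned disfluency models.
FILLERS = {"um", "uh", "er", "ah"}

def clean_transcript(text):
    """Drop filler words and collapse immediate word repeats (stutters)."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    out = []
    for w in words:
        if w in FILLERS:
            continue
        if out and out[-1] == w:  # collapse "the the" -> "the"
            continue
        out.append(w)
    return " ".join(out).capitalize() + "."
```

For instance, "um I I think, uh, the the meeting is at noon" becomes "I think the meeting is at noon." Production systems also restore punctuation and casing with dedicated models rather than a simple capitalize step.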
We describe a method for continuously improving the accuracy of a large-scale medical automatic speech recognizer (ASR) using a multi-step cycle involving several groups of workers. The paper addresses the unique challenges of the medical domain and discusses how automatically created and crowdsourced input data are combined to refine the ASR language models. The improvement cycle helped decrease the original system's word error rate from 34.1% to 10.4%, which approaches the accuracy of human transcribers trained in medical transcription.
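The word error rate (WER) cited above is the standard ASR accuracy metric: the minimum number of word substitutions, insertions, and deletions needed to turn the hypothesis into the reference, divided by the reference length. A self-contained sketch of the computation:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance over reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("the patient has a fever", "the patient has fever")` is 0.2: one deletion against a five-word reference. On this scale, the cycle's reported improvement means roughly one word in ten still differs from the reference, down from one in three.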
FRANKLIN, TN--(Marketwired - June 14, 2017) - M*Modal, a leading provider of clinical documentation and Speech Understanding solutions, announced that its artificial intelligence (AI) powered portfolio of solutions is further enhanced to also support the Epic NoteReader CDI (Clinical Documentation Improvement) module. Along with the embedded M*Modal Computer-Assisted Physician Documentation (CAPD) technology, Epic NoteReader CDI also utilizes the M*Modal CAPD infrastructure and rigorous reporting capabilities to deliver automated physician feedback. The M*Modal CAPD technology continuously analyzes the documentation and applies machine learning and clinical reasoning across the entire patient record to deliver high-value insights and suggest improvements in quality and compliance as the note is being created. Uniquely, M*Modal CAPD technology extends significantly beyond CDI feedback focused on better capturing Hierarchical Condition Categories (HCCs), Risk-Adjusted Quality Scores, etc. This extends M*Modal's collaboration with Epic on the company's cloud-based speech recognition and natural language understanding technologies.