San Francisco-based Whisper, which launched in 2017, announced Thursday its flagship Whisper Hearing System. The innovative hearing aid comes with its own 'brain' unit, which contains the company's proprietary Sound Separation Engine. This engine enhances nearby sounds and transmits them to the earpieces worn by the user. An accompanying mobile app delivers software upgrades to the system and also lets the user adjust the hearing aids from the phone.
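Whisper has not published how its Sound Separation Engine works, but the idea of emphasizing nearby sound over background noise can be illustrated with a toy spectral gate. The function name, threshold, and gating rule below are all invented for illustration; this is not Whisper's algorithm.

```python
import numpy as np

def sound_separation(frame: np.ndarray, noise_floor: float = 0.05) -> np.ndarray:
    """Toy spectral gate: keep frequency bins well above an estimated
    noise floor (a stand-in for emphasizing nearby speech) and
    attenuate the rest. Purely illustrative."""
    spectrum = np.fft.rfft(frame)
    magnitude = np.abs(spectrum)
    # Bins far below the strongest component are treated as background noise.
    gain = np.where(magnitude > noise_floor * magnitude.max(), 1.0, 0.1)
    return np.fft.irfft(spectrum * gain, n=len(frame))

# A loud 440 Hz tone (the "nearby" sound) buried in low-level wideband noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000, endpoint=False)
frame = np.sin(2 * np.pi * 440 * t) + 0.02 * rng.standard_normal(8000)
cleaned = sound_separation(frame)
```

In a real hearing system this kind of processing would run continuously on short frames in the 'brain' unit before streaming audio to the earpieces.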
VarsityGenie will be hosting its first sign-language webinar to uplift the deaf community through technology-based skills on 14 October via Microsoft Teams. The student leadership platform was established by 26-year-old Durban University of Technology (DUT) information and communication technology (ICT) master's student Fanie Ndlovu, who is passionate about developing the skills of young people in communities and townships. According to the World Health Organisation (WHO), there are 466 million people across the world who live with disabling hearing loss, which equates to more than 5% of the world's population. WHO also estimates that by 2050, more than 900 million people will have disabling hearing loss. The majority of these individuals live in low- to middle-income countries, where they do not have access to appropriate hearing care services.
Google has introduced a new feature for Android that notifies deaf users if there is water running, a dog barking or a fire alarm going off. Users can be alerted to 'critical' sounds through push notifications, vibrations on their phone or a flash from their camera light. While Google said Sound Notifications is designed for the estimated 466 million people in the world with hearing loss, it can also help people who are wearing headphones or otherwise distracted. Developed with machine learning, Sound Notifications uses the phone's microphone to recognize ten 'critical' sounds: baby noises, shouting, water running, smoke and fire alarms, sirens, appliances beeping, dogs barking, door knocking and a landline phone ringing.
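The behavior described above amounts to routing a classifier's output label to one or more alert channels. The sketch below is a hypothetical illustration of that dispatch logic; the label names, confidence threshold, and channel choices are assumptions, not Google's actual API.

```python
# Hypothetical routing of a detected sound label to alert channels,
# loosely modeled on the article's description. Not Google's code.
CRITICAL_SOUNDS = {
    "smoke_alarm", "fire_alarm", "siren", "baby_sounds", "shouting",
    "water_running", "dog_barking", "appliance_beeping",
    "door_knocking", "landline_ringing",
}

def route_alert(label: str, confidence: float, threshold: float = 0.7) -> list:
    """Return the alert channels to fire for a detected sound."""
    if label not in CRITICAL_SOUNDS or confidence < threshold:
        return []  # ignore non-critical or low-confidence detections
    channels = ["push_notification", "vibration"]
    # Safety-critical sounds also trigger the camera flash (assumed rule).
    if label in {"smoke_alarm", "fire_alarm", "siren"}:
        channels.append("camera_flash")
    return channels

print(route_alert("smoke_alarm", 0.92))
# -> ['push_notification', 'vibration', 'camera_flash']
```

The confidence threshold matters in practice: always-on microphone classifiers trade false alarms against missed events, so low-confidence detections are dropped rather than surfaced.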
Experts fear the U.S. will see thousands more cancer deaths in the coming years due to delayed screenings, treatments and trials; Dr. Marc Siegel reacts. Throat cancer is a term that can apply to several different types of cancers that occur in different locations in the head and neck. In 2018, more than 30,000 people in the U.S. received a throat cancer diagnosis of some kind, according to MD Anderson Cancer Center. Both laryngeal and hypopharyngeal cancers start in the lower part of the throat. A laryngeal cancer diagnosis means the disease was detected in an area affecting the voice box, including the supraglottis, which is located above the vocal cords; the glottis, which contains the vocal cords; or the subglottis, which is below the vocal cords, according to the American Cancer Society.
Changi General Hospital (CGH), a 1,000-bed academic medical institution under SingHealth located in the eastern part of Singapore, together with the Integrated Health Information System (IHiS), Singapore's national HIT agency, has developed a Community Acquired Pneumonia and COVID-19 Artificial Intelligence (AI) Predictive Engine (CAPE) that predicts whether a patient is likely to have mild or severe pneumonia, based on a chest X-ray image. The ability to quickly predict the expected severity of a patient's pneumonia would enable clinicians and administrators to efficiently allocate healthcare resources and treat patients, particularly in pandemic situations where there may be an increased need for inpatient and critical care. As pneumonia severity correlates with the degree of chest X-ray (CXR) lung image abnormality, CGH's Respiratory and Critical Care Medicine and Radiology teams recognized the potential of leveraging AI to predict pneumonia severity from CXR images, and worked with the IHiS Health Insights team to develop CAPE. Using more than 3,000 CXR images and 200,000 data points including lab results and clinical history, CAPE was trained to generate a score for (a) low-risk pneumonia with anticipated short inpatient hospitalization; (b) the risk of mortality (death); and (c) the risk of requiring critical care support – indicators of pneumonia severity – from CXR images. Initial results have been promising: validation tests at CGH showed that CAPE predicts the presence or absence of severe pneumonia with approximately 80% accuracy.
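Once CAPE produces its three scores, the operational value lies in turning them into a resource-allocation decision. The rule below is a purely illustrative sketch of such a triage step; the function name, thresholds, and ward labels are invented, not CGH's clinical protocol.

```python
def triage(low_risk: float, mortality_risk: float, critical_care_risk: float) -> str:
    """Toy triage rule combining the three CAPE-style scores described
    above (each assumed to be in [0, 1]). Thresholds are invented
    for illustration only -- not a clinical decision rule."""
    # Either high-severity score escalates the patient to critical care.
    if mortality_risk >= 0.5 or critical_care_risk >= 0.5:
        return "ICU / critical care"
    # A confident low-risk score suggests a short inpatient stay.
    if low_risk >= 0.7:
        return "short inpatient stay"
    return "standard inpatient ward"

print(triage(0.9, 0.05, 0.1))  # -> short inpatient stay
print(triage(0.2, 0.6, 0.4))   # -> ICU / critical care
```

In a deployed system the thresholds would be calibrated against validation data (such as CGH's ~80%-accuracy tests) rather than fixed by hand.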
Hearing loss happens to many of us. The US National Institutes of Health, for example, estimates that one in eight Americans aged 12 years and older has hearing loss in both ears. Twenty-five per cent of adults aged between 65 and 75, and half of those aged 75 and older, experience disabling hearing loss. According to NIH's National Institute on Deafness and Other Communication Disorders, 28.8 million US adults could benefit from using hearing aids. Those who could benefit aren't necessarily rushing to achieve better hearing.
Automatic chest radiograph (CXR) interpretation by machines is an important research topic in Artificial Intelligence. As part of my journey through the California Science Fair, I have developed an algorithm that can detect pneumonia from a CXR image to compensate for human fallibility. My algorithm, PneumoXttention, is an ensemble of two 13-layer convolutional neural networks trained on the RSNA dataset, a dataset provided by the Radiological Society of North America containing 26,684 frontal X-ray images split into the categories of pneumonia and no pneumonia. The dataset was annotated by many professional radiologists in North America. It achieved an impressive F1 score of 0.82 on the test set (a 20% random split of the RSNA dataset) and fully compensated for human radiologists' misses on a random set of 25 test images drawn from RSNA and NIH. I don't have a direct comparison, but Stanford's CheXNet has an F1 score of 0.435 on the NIH dataset for the pneumonia category.
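The F1 score cited above is the harmonic mean of precision and recall, which is why it is preferred over raw accuracy for imbalanced medical datasets. A minimal sketch of the computation (the confusion counts are invented to land near the reported 0.82):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall, computed from
    true-positive, false-positive and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Invented confusion counts: 82 hits, 18 false alarms, 18 misses.
print(round(f1_score(tp=82, fp=18, fn=18), 2))  # -> 0.82
```

Note that true negatives never enter the formula, so a model cannot inflate its F1 score simply by correctly labeling the many healthy X-rays in a skewed dataset.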
Chest radiography is an important diagnostic tool for chest-related diseases. Medical imaging research is currently embracing the automatic detection techniques used in computer vision. Over the past decade, Deep Learning techniques have produced enormous breakthroughs in the field of medical diagnostics. Various automated systems have been proposed for the rapid detection of pneumonia on chest X-ray images. Although such detection algorithms are many and varied, they had not been summarized into a review that would assist practitioners in selecting the best methods from a real-time perspective, surveying the available datasets, and understanding the results achieved so far in this domain. After summarizing the topic, the review analyzes the usability, goodness factors, and computational complexities of the algorithms that implement these techniques.
From a US real estate giant inadvertently leaking 900 million records to Danish hearing aid manufacturer Demant falling victim to a hack that cost it 95 million US dollars, cybercriminals ran rampant over the last year. In the USA alone, there were ransomware attacks against 621 government agencies, schools and healthcare providers in the first nine months of 2019. Cybercrime also became much more sophisticated over the year, and this is a trend that will continue in 2020 and beyond. While the classic phishing method – where a fake login page tricks a user into giving up their credentials – is still very much in use, the use of Artificial Intelligence by malicious parties is an emerging threat that cannot be ignored.
In a 2018 Accenture Digital Health report, 75 percent of respondents said technology played an essential role in managing their health. When it comes to artificial intelligence (AI) powered digital health and wearable devices, 72 percent said they're willing to share their wearable data with their health insurance plan. The report also found that consumer interest in AI and robotics surpassed the virtual care options available today. At the Consumer Electronics Show (CES) 2020 in Las Vegas, January 7-10, 2020, AI-powered digital health devices will be prevalent. For people with hearing loss or who are visually impaired, machine learning in digital health devices can open new possibilities to hear conversations more clearly or see the world around them.