Two years ago, MIT research scientist Amar Gupta and his wife Poonam were on a trip to Los Angeles when she fell and broke both wrists. After being whisked by ambulance to a private medical center, where she underwent a series of tests, staff members informed Poonam that they couldn't treat her further because she was not a member of the hospital's health care system. The staff spent hours trying to arrange treatment elsewhere, but when they couldn't find another local facility that would accept the referral, the couple was forced to take the hospital's stunning advice: return to Boston with the fractures and consult a surgeon there. The episode abruptly ended the couple's trip, and the resulting delay in obtaining the surgery his wife needed also forced Gupta to give up a major professional opportunity in the Los Angeles area. In his view, the experience was bitter confirmation of the need for his work addressing dysfunction and inefficiency in the U.S. health care system -- and it inspired him to redouble those efforts.
In a May 16 editorial published by The New York Times Magazine, Abraham Verghese, MD, a professor of internal medicine at Stanford University, argued that the popularity of electronic health records (EHRs) and the emergence of artificial intelligence (AI) in medicine may be overriding physicians' clinical judgment more than informing it. Unfortunately, he contends, physicians are becoming more vulnerable to self-doubt in the face of technological advancement -- and physicians themselves are to blame. "In America today, the patient in the hospital bed is just the icon, a placeholder for the real patient who is not in the bed but in the computer; that virtual entity gets all our attention," Verghese wrote. "The living, breathing source of the data and images we juggle, meanwhile, is in the bed and left wondering: Where is everyone? It's my body, you know!"
A group of engineers has created an unusual type of smart gel that not only retains the soft, flexible nature of a hydrogel but can also walk and manipulate objects underwater. Most hydrogels are more than 70 percent water and are commonly found in the human body, diapers, contact lenses, and many other things; this particular creation, developed through sophisticated 3D-printing techniques, goes a step further by moving and changing shape under an applied electric field. Essentially, whenever the hydrogel is placed in a salt-water solution (an electrolyte) and electricity is applied, it responds by walking forward, reversing course, and grabbing and moving objects. The team behind this project, researchers from Rutgers University in the United States, even shared a video showcasing how the unique gel works. In the video, the smart gel can be seen picking up, moving, and dropping an object under the influence of an electric field.
Machine learning and artificial intelligence (AI) have long been heralded as transformative technologies. From diagnostic and imaging technologies to therapeutic applications and robotics, the potential of machine learning and AI reaches almost every corner of the medtech world. So, what does that mean for the development and application of next-gen medical devices? Dave Saunders is the chief technology officer of Galen Robotics, an emerging surgical robotics company that specializes in a new line of robotic technologies built around a cooperatively controlled surgical platform. The company aims to provide robot-assisted technologies that bring increased precision and unprecedented tool stabilization to microsurgical procedures.
Magnetoencephalography (MEG) is a functional neuroimaging modality that records the magnetic fields induced by neuronal activity. It provides better temporal resolution than fMRI and is less affected by noise from intervening tissues than EEG. We propose a data-driven, fully automated approach that extracts statistically independent MEG components and uses a convolutional neural network (CNN) to discriminate artifactual components from neuronal ones, without tedious manual labeling. Our custom 10-layer CNN directly labels eye-blink artifacts. The spatial features the CNN learns are visualized using attention mapping, to reveal what it has learned and bolster confidence in the method's ability to generalize to unseen data.
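The attention-mapping idea can be illustrated with a simple occlusion-sensitivity sketch: slide a blank patch over a component's sensor topography and see where hiding the input hurts the classifier's score. The paper's actual architecture and saliency method are not detailed here, so `blink_score` below is a hypothetical stand-in for the trained CNN's eye-blink output (blink artifacts tend to dominate frontal sensors), and the topography is synthetic:

```python
import numpy as np

def occlusion_map(score_fn, topo, patch=2):
    """Occlusion sensitivity: slide a zeroed patch over the component
    topography and accumulate how much the classifier score drops."""
    h, w = topo.shape
    base = score_fn(topo)
    sal = np.zeros_like(topo)
    for i in range(0, h - patch + 1):
        for j in range(0, w - patch + 1):
            occluded = topo.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            # Score drop is attributed to every cell the patch covered.
            sal[i:i + patch, j:j + patch] += base - score_fn(occluded)
    return sal

# Hypothetical stand-in for a trained CNN's "eye-blink" score:
# total energy in the frontal (top) sensor rows.
def blink_score(topo):
    return topo[:2].sum()

topo = np.zeros((6, 6))
topo[0:2, 2:4] = 1.0          # synthetic frontal blob, as in a blink component
sal = occlusion_map(blink_score, topo)
```

Regions whose occlusion lowers the score light up in `sal`; with the frontal blob above, the saliency concentrates in the top rows, which is the kind of sanity check attention mapping provides before trusting the classifier on unseen data.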
Machine learning has sparked tremendous interest over the past few years, particularly deep learning, a branch of machine learning that employs multi-layered neural networks. Deep learning has done remarkably well in image classification and processing tasks, mainly owing to convolutional neural networks (CNNs). Their use became popularized after Drs. Krizhevsky and Hinton used a deep CNN called AlexNet to win the 2012 ImageNet Large Scale Visual Recognition Challenge (ILSVRC), an international competition for object detection and classification comprising 1.2 million everyday color images. The goal of this paper is to provide a high-level introduction to practical machine learning for purposes of medical image classification.
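To make the core CNN operation concrete, the sketch below implements a single valid-mode convolution from scratch and applies a hand-written vertical-edge kernel to a toy image. In a real CNN such kernels are learned from data rather than hand-designed; this is only a minimal illustration in NumPy:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: the operation a CNN layer
    applies at every position of its input."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy 6x6 "image": dark left half (0), bright right half (1).
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A hand-written vertical-edge detector kernel.
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])

fmap = conv2d(image, kernel)  # 4x4 feature map, strongest at the edge
```

The resulting feature map responds only where the brightness changes between columns, which is exactly the kind of localized feature a trained CNN stacks and combines across layers.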
From Quadrant (a D-Wave business), the whitepaper "Data-Efficient Machine Learning" describes a practical impediment to applying deep neural network models when large training data sets are unavailable. Encouragingly, however, it shows that recent machine learning advances make it possible to obtain the benefits of deep neural networks by making more efficient use of the training data most practitioners do have. Quadrant leverages generative machine learning, which requires much less labeled data than common discriminative models. This is useful in countless applications, including medical imaging, where data sets are often relatively small. For a first case study, Siemens Healthineers partnered with Quadrant to identify surgical tools used in cataract surgery with 99.71% accuracy.
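One minimal way to see the generative/discriminative distinction the whitepaper relies on (this is not Quadrant's actual method, just a textbook illustration): a generative classifier fits a density per class and classifies by likelihood, so it can work from only a handful of labeled points where a large discriminative network would overfit. The 1-D Gaussian example below is entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny labeled set: five examples per class -- far too few
# to train a large discriminative network.
x0 = rng.normal(-2.0, 1.0, 5)   # class 0
x1 = rng.normal(+2.0, 1.0, 5)   # class 1

def fit_gaussian(x):
    """Generative step: estimate a per-class density (mean, std)."""
    return x.mean(), x.std() + 1e-6

def log_pdf(x, mu, sigma):
    # Log of the Gaussian density, dropping constants shared by both classes.
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)

m0, s0 = fit_gaussian(x0)
m1, s1 = fit_gaussian(x1)

def predict(x):
    """Classify by which class's fitted density explains x better."""
    return int(log_pdf(x, m1, s1) > log_pdf(x, m0, s0))
```

Even with five points per class, the fitted densities separate the two clusters; the discriminative alternative would instead learn a decision boundary directly from labels, which typically needs far more of them.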
Google is betting that its deep learning systems can sort through the electronic health record morass. At Google I/O, CEO Sundar Pichai outlined how the company was using its artificial intelligence and machine learning infrastructure to better predict healthcare outcomes. The field is emerging on multiple fronts, but much of healthcare data is unstructured and requires a lot of wrangling. For Google, the interest in healthcare is more of a way to prove its models and algorithms in the field. Google has also partnered with Fitbit on data and health APIs.
Machines are getting better and better at analyzing complex health data to help physicians understand their patients' future needs. In a study out today in npj Digital Medicine, an advanced algorithm evaluated de-identified electronic health records from more than 216,000 adult patient hospitalizations to predict unexpected readmissions, long hospital stays, and in-hospital deaths more accurately than previous approaches. I caught up with one of the authors, Nigam Shah, MBBS, PhD, an associate professor at Stanford, to learn about the new study and discuss the implications for artificial intelligence in medicine. What is deep learning and how does it fit in the larger universe of artificial intelligence? Deep learning is one of several machine learning techniques that can be used to build intelligent systems.