In 2012, IBM Watson went to medical school. So said The New York Times, announcing that the tech giant's artificially intelligent question-and-answer machine had begun a "stint as a medical student" at the Cleveland Clinic Lerner College of Medicine. This was just a metaphor. Clinicians were helping IBM train Watson for use in medical research. But as metaphors go, it wasn't a very good one.
The current study examined the degree to which the quality and characteristics of students’ essays could be modeled through dynamic natural language processing analyses. Undergraduate students (n = 131) wrote timed, persuasive essays in response to an argumentative writing prompt. Recurrent patterns of the words in the essays were then analyzed using recurrence quantification analysis (RQA). Results of correlation and regression analyses revealed that the RQA indices were significantly related to the quality of students’ essays, at both holistic and sub-scale levels (e.g., organization, cohesion). Additionally, these indices were able to account for between 11% and 43% of the variance in students’ holistic and sub-scale essay scores. Overall, our results suggest that dynamic techniques can be used to improve natural language processing assessments of student essays.
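The abstract above describes applying recurrence quantification analysis (RQA) to the word sequences of essays. As a minimal sketch of what categorical RQA over tokens can look like, the function below computes two common RQA indices, recurrence rate (RR, the share of position pairs holding the same word) and determinism (DET, the share of recurrent points lying on diagonal runs, i.e. repeated word sequences). The function name and the simple run-detection heuristic are illustrative assumptions, not the study's actual implementation.

```python
def recurrence_indices(tokens, min_line=2):
    """Categorical RQA over a word sequence (illustrative sketch).

    Two positions (i, j) recur when they hold the same word; the
    main diagonal (i == j) is excluded, as is standard in RQA.
    """
    n = len(tokens)
    # Recurrence "plot" stored as a set of off-diagonal points.
    points = {(i, j) for i in range(n) for j in range(n)
              if i != j and tokens[i] == tokens[j]}

    # Recurrence rate: recurrent points over all off-diagonal cells.
    rr = len(points) / (n * n - n) if n > 1 else 0.0

    # Determinism: fraction of recurrent points that belong to a
    # diagonal line (a repeated word sequence) of length >= min_line.
    # With min_line=2 this reduces to having a diagonal neighbour.
    diag = sum(1 for (i, j) in points
               if (i - 1, j - 1) in points or (i + 1, j + 1) in points)
    det = diag / len(points) if points else 0.0

    return {"RR": rr, "DET": det}


# Example: "the cat" repeats as a two-word sequence, so some
# recurrent points fall on a diagonal line and DET > 0.
essay = "the cat sat on the mat the cat ran".split()
indices = recurrence_indices(essay)
```

In the study's framing, indices like these would then serve as predictors of holistic and sub-scale essay scores in correlation and regression analyses.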
Google wants to put its artificial intelligence technology to use in top hospitals. Earlier this week, the search giant announced it would work with the U.K.'s National Health Service, or NHS, to alert staff to patients at risk of serious complications due to kidney failure. Details about the technology are fairly thin on the ground at this stage. But it is known that Google DeepMind recently acquired Hark, a task management app that aims to replace paper-based systems and pagers. Hark was developed over four years by a team at Imperial College London, one of the U.K.'s top medical schools.
Medical imaging is expected to be one of the early useful applications of artificial intelligence and machine learning in healthcare. A slew of deals have been built around that premise in the last year or so: IBM Watson Health bought cloud-based imaging company Merge for $1 billion; Philips partnered with Hitachi to incorporate AI into its image management; and GE added deep learning software from startup Arterys to its cardiac imaging. Now, another major cloud-based imaging startup is working to incorporate machine learning, first into X-ray analysis and eventually into other imaging modalities including CT and MRI. The Goldman Sachs-backed startup Imaging Advantage, which reportedly secured up to $250 million in debt in January 2015, has partnered with the Massachusetts Institute of Technology as well as Harvard Medical School and Massachusetts General Hospital to develop an artificial intelligence engine known as Singularity Healthcare. The result is expected to launch this quarter.
The Deep Learning 101 series is a companion piece to a talk given as part of the Department of Biomedical Informatics @ Harvard Medical School 'Open Insights' series. Each post in this series is a collection of explanations, references, and pointers meant to help someone new to the field quickly bootstrap their knowledge of key events, people, and terms in deep learning. In the same way that neural nets use a distributed representation to process data, reference materials for deep learning are scattered across the far-flung corners of the internet and embedded in the dark ether of social media. The hope is that coalescing at least some of these materials into a central location will make it easier for newcomers to start their own walk over this knowledge graph. This collection is intentionally peppered with trivia and articles from the popular press that are relevant to deep learning, both to keep things interesting and to provide context.