Data mining, text mining, natural language processing, and computational linguistics: some definitions

#artificialintelligence

Every once in a while an innocuous technical term suddenly enters public discourse with a bizarrely negative connotation. I first noticed the phenomenon some years ago, when I saw a Republican politician accusing Hillary Clinton of "parsing." From the disgust with which he said it, he clearly seemed to feel that parsing was morally equivalent to puppy-drowning. It seemed quite odd to me, since I'd only ever heard the word "parse" used to refer to the computer analysis of sentence structures. The most recent word to suddenly find itself stigmatized by Republicans (yes, it does somehow always seem to be Republican politicians who are involved in this particular kind of linguistic bullshittery) is "encryption."
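For readers who only know the political usage, here is a minimal sketch of what "parsing" means in the computational sense the author describes: recovering a sentence's grammatical structure. It assumes spaCy and its small English model (en_core_web_sm) are installed; neither tool nor the sample sentence comes from the article itself.

```python
# A small illustration of syntactic parsing: each token is linked to the
# word it grammatically depends on. Assumes: pip install spacy &&
# python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The senator accused her of parsing.")

# Print each token with its dependency label and its syntactic head.
for token in doc:
    print(f"{token.text:10} {token.dep_:10} -> {token.head.text}")
```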


How AI and machine learning are transforming clinical decision support

#artificialintelligence

"Between 12 to 18 million Americans every year will experience some sort of diagnostic error," said Paul Cerrato, a journalist and researcher. "So the question is: Why such a huge number? And what can we do better in terms of reinventing the tools so they catch these conditions more effectively?" Cerrato is co-author, alongside Dr. John Halamka, newly minted president of Mayo Clinic Platform, of the new HIMSS Book Series edition, Reinventing Clinical Decision Support: Data Analytics, Artificial Intelligence, and Diagnostic Reasoning. At HIMSS20, the two of them will discuss the book, and the bigger picture around CDS tools that are fast being transformed by the advent of artificial intelligence, machine learning and big data analytics.


Distance Metric Learning for Conditional Anomaly Detection

AAAI Conferences

Anomaly detection methods can be very useful in identifying unusual or interesting patterns in data. A recently proposed conditional anomaly detection framework extends anomaly detection to the problem of identifying anomalous patterns on a subset of attributes in the data; the anomaly always depends on (is conditioned by) the values of the remaining attributes. The work presented in this paper focuses on instance-based methods for detecting conditional anomalies. These methods depend heavily on the distance metric that lets us identify the examples in the dataset most critical for detecting the anomaly. To optimize the performance of the anomaly detection methods, we explore metric learning approaches. We evaluate the quality of our methods on the Pneumonia PORT dataset by detecting unusual admission decisions for patients with community-acquired pneumonia. Our metric learning methods show improved detection performance over standard distance metrics, which is very promising for building automated anomaly detection systems for a variety of intelligent monitoring applications.
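The following is a minimal sketch of the instance-based idea the abstract describes, not the paper's exact algorithm: learn a metric from labeled data, then flag a case as conditionally anomalous when its own decision disagrees with the decisions of its nearest neighbors under that metric. Neighborhood Components Analysis stands in for the paper's metric-learning step, and the dataset is synthetic (the PORT data is not public here).

```python
# Instance-based conditional anomaly detection with a learned metric.
# Assumptions: scikit-learn is available; X and y are synthetic stand-ins
# for patient attributes and the conditioned admission decision.
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis, NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                    # patient attributes
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # 0 = outpatient, 1 = admit

# Metric-learning step: find a linear transform whose induced distance
# pulls together cases that received the same decision.
nca = NeighborhoodComponentsAnalysis(random_state=0)
Z = nca.fit_transform(X, y)

# Scoring step: a case is conditionally anomalous when its decision
# disagrees with the decisions of its k nearest neighbors.
k = 25
nn = NearestNeighbors(n_neighbors=k + 1).fit(Z)
_, idx = nn.kneighbors(Z)
neighbor_labels = y[idx[:, 1:]]                   # column 0 is the self-match
agreement = (neighbor_labels == y[:, None]).mean(axis=1)
anomaly_score = 1.0 - agreement                   # high = unusual decision

print("Most anomalous cases:", np.argsort(anomaly_score)[-5:])
```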


Making big data small - BizTimes Media Milwaukee

#artificialintelligence

We live in a world of health data. With fitness trackers, electronic health records, sleep monitoring and countless other ways to track and measure our health, we've entered an exciting era in which the flow of data to our doctors, pharmacists and other care providers is revolutionizing how, and how fast, health care services are delivered. It has also given consumers a window into their own health that was just a dream 20, 10, even five years ago. Not long ago, we really only got a picture of our health once a year, when we went to our doctor for an annual check-up. We'd get blood drawn; blood pressure, weight and other vital statistics would be taken; and our doctor would declare us healthy or give us things to work on.


Digging for data that can change our world

AITopics Original Links

Scientific research is accumulating at an alarming rate: the Human Genome Project alone is generating enough documentation to "sink battleships". So it's not surprising that academics seeking data to support a new hypothesis are being swamped by information overload. As data banks build up worldwide and technology makes access easier, it has become all too easy to overlook vital facts and figures that could bring about groundbreaking discoveries. The government's response has been to set up the National Centre for Text Mining, the world's first centre devoted to developing tools that can systematically analyse multiple research papers, abstracts and other documents, and then swiftly determine what they contain. Text mining uses artificial intelligence techniques to look in texts for entities (a quality or characteristic, such as a date or job title) and concepts (the relationship between two genes, for example).
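As a rough illustration of the entity-extraction step described above, here is a minimal sketch using spaCy's pretrained English model. The model and sample sentence are assumptions for illustration, not tools named in the article; off-the-shelf models handle generic entities such as people and dates, while domain concepts like gene-gene relations require specialised models.

```python
# Named-entity extraction from free text. Assumes: pip install spacy &&
# python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Dr. Jane Smith published the BRCA1 findings on 4 March 2005.")

# Print each entity the pretrained model finds, with its predicted type
# (e.g. PERSON, DATE); biomedical terms may be missed or mislabeled.
for ent in doc.ents:
    print(ent.text, ent.label_)
```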