A Stanford University-led team of scientists has developed a machine learning tool that can analyse electronic health records (EHR) to identify individuals who are likely to have familial hypercholesterolemia (FH), an underdiagnosed genetic cause of elevated low-density lipoprotein (LDL) cholesterol that puts patients at a 20-fold increased risk of coronary artery disease. In separate test runs the classifier, described today in npj Digital Medicine, achieved a positive predictive value (PPV) above 80%--meaning more than 80% of the patients it flagged actually had FH--and a specificity of 99%. The team says the classifier could help to flag patients who are most likely to have FH, so that they and their families can undergo further genetic testing. "Theoretically, when someone comes into the clinic with high cholesterol or heart disease, we would run this algorithm," said Nigam Shah, MBBS, PhD, Stanford University associate professor of medicine and biomedical data science. "If they're flagged, it means there's an 80% chance that they have FH. Those few individuals could then get sequenced to confirm the diagnosis and could start an LDL-lowering treatment right away."
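As a concrete illustration, the two figures reported for the classifier can be read directly off a binary confusion matrix. The sketch below uses invented toy counts, not the Stanford data:

```python
# Illustrative only: how PPV and specificity relate to a classifier's
# confusion matrix. Labels and predictions here are made up.

def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives for binary labels (1 = FH)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

def ppv(tp, fp):
    # Positive predictive value: of everyone the model flags, the
    # fraction who truly have the condition.
    return tp / (tp + fp)

def specificity(tn, fp):
    # Specificity: of everyone without the condition, the fraction
    # the model correctly leaves unflagged.
    return tn / (tn + fp)
```

For example, a model that flags 10 patients of whom 8 truly have FH has `ppv(8, 2) == 0.8`, matching the "80% chance that they have FH" reading of a flag in the quote above.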
Using large healthcare encounter datasets, a machine learning algorithm is able to identify patients with a common genetic disorder that carries a high risk for early heart attacks and strokes. While individuals with familial hypercholesterolaemia (FH) have 20 times the risk of developing cardiovascular disease compared with the general population, fewer than 10 percent of the 1.3 million Americans born with the genetic disease are diagnosed. "People born with familial hypercholesterolemia develop cardiovascular damage by puberty, often culminating in early heart attacks or the need for surgery as young or middle-aged adults," says Katherine Wilemon, founder and CEO of the FH Foundation, a non-profit research and advocacy organization. "Since diagnosis of this deadly but treatable condition has stalled in the American medical system, the FH Foundation harnessed artificial intelligence and big data to accelerate identification of those most likely to have FH." In a new study, a machine learning model created by the FH Foundation successfully leveraged healthcare encounter databases to identify individuals with the genetic disorder.
Authors from the Center for Data Science at New York University were Nan Wu, Jason Phang, Jungkyu Park, Yiqiu Shen, Zhe Huang, Thibault Févry, and Kyunghyun Cho, who is also on the faculty of NYU's Courant Institute of Mathematical Sciences. Other authors were Kara Ho at SUNY Downstate College of Medicine; Masha Zorin in the Department of Computer Science and Technology at the University of Cambridge in the United Kingdom; Stanisław Jastrzębski from Jagiellonian University in Poland; and Joe Katsnelson in the Department of Information Technology at NYU Langone Health.
With the decade winding down, it's time for us to set our sights on the next one. The 2020s promise to be anything but dull. From the automation revolution and increasingly dangerous AI to geohacking the planet and radical advances in biotechnology, here are the most futuristic developments to expect in the next 10 years. Making predictions is easy; it's getting them right that's tough. That said, some tangible trends are emerging that should allow us to make informed guesses about what the future will hold over the next 10 years. Of great concern, of course, is the pending automation revolution and the associated onset of technological unemployment.
To learn more about research projects like this that are enabled by AWS, see the AWS Machine Learning Research Awards website: https://amzn.to/2RR78PM. Detecting and starting treatment of autism spectrum disorder (ASD) at an age of 18 to 24 months can increase a child's IQ by up to 17 points--in some cases moving them into the "average" child IQ range of 90-110 (or above it)--and can significantly improve the child's quality of life. Researchers at Duke University are using Machine Learning on AWS to create a faster, less expensive, more reliable, and more accessible system to screen children early for ASD.
How often are you in a situation where you face two alternatives: yes or no, black or white, and so on? These are instances where you 'classify' your scenario into one of a set of outcomes; the number of outcomes can vary, but usually there are two. This is what we call 'classification': we sort outcomes into a fixed number of categories, usually two. This week at The Datum we look at how neural networks can be used as a classification model. Once we have the model in hand, we will make predictions with it, and lastly we will evaluate the model and its predictions for correctness.
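As a minimal sketch of the workflow described above (build a model, predict, then evaluate), here is a single-neuron classifier trained with gradient descent on invented toy data; all names and numbers are illustrative:

```python
import numpy as np

# Toy two-class problem (invented for illustration): points above the
# line x0 + x1 = 0 belong to class 1, the rest to class 0.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))       # 200 points, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # true class labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Model: a single neuron (logistic unit) with weights w and bias b.
w = np.zeros(2)
b = 0.0
lr = 0.5

# Training: gradient descent on the log loss.
for _ in range(500):
    p = sigmoid(X @ w + b)                  # predicted probability of class 1
    grad_w = X.T @ (p - y) / len(y)         # gradient w.r.t. weights
    grad_b = np.mean(p - y)                 # gradient w.r.t. bias
    w -= lr * grad_w
    b -= lr * grad_b

# Prediction: threshold the probability at 0.5 to choose a class.
pred = (sigmoid(X @ w + b) > 0.5).astype(float)

# Evaluation: fraction of points whose predicted class matches the truth.
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

A real neural network stacks many such units in layers, but the same three steps apply: fit the parameters, threshold the outputs into classes, and score the predictions against known labels.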
Researchers from New York University found that their algorithm, trained on nearly 1 million screening mammography images, could push radiologists' ability to accurately identify breast cancer to nearly 90%. They published their findings earlier this month in IEEE Transactions on Medical Imaging.
An artificial intelligence (AI) tool--trained on roughly a million screening mammography images--identified breast cancer with approximately 90 percent accuracy when combined with analysis by radiologists, a new study finds. Led by researchers from NYU School of Medicine and the NYU Center for Data Science, the study examined the ability of a type of AI, a machine learning computer program, to add value to the diagnoses reached by a group of 14 radiologists as they reviewed 720 mammogram images. "Our study found that AI identified cancer-related patterns in the data that radiologists could not, and vice versa," says senior study author Krzysztof J. Geras, PhD, assistant professor in the Department of Radiology at NYU Langone. "AI detected pixel-level changes in tissue invisible to the human eye, while humans used forms of reasoning not available to AI," adds Dr. Geras, also an affiliated faculty member at the NYU Center for Data Science. "The ultimate goal of our work is to augment, not replace, human radiologists."
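The article does not spell out how the AI's output was combined with the radiologists' readings, but one simple way to pool two probability estimates is a weighted average. The function below is a hypothetical sketch of that idea, not the study's actual method; the names and the default weight are assumptions:

```python
def hybrid_probability(radiologist_prob, model_prob, weight=0.5):
    """Pool a human reader's malignancy estimate with a model's estimate.

    `weight` is the fraction of trust placed in the model; 0.5 gives an
    equal-weight average. Hypothetical illustration, not the study's method.
    """
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must be in [0, 1]")
    return weight * model_prob + (1.0 - weight) * radiologist_prob
```

The appeal of such pooling follows directly from the quote above: if the model and the human make partly uncorrelated errors, combining their estimates can outperform either alone.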
Deep learning [1] has revolutionized the field of biomedical image analysis. Conventional approaches have used problem-specific algorithms to describe images with manually crafted features, such as cell morphology, count, intensity, and texture. Feature learning with deep convolutional neural networks is implicit, and training the network usually focuses on particular tasks, such as breast cancer detection in mammography [2], subcellular protein localization [3], or plant disease detection [4]. Training a deep network usually requires a large number of images, which limits its utility. For example, the classifier for plant disease detection by Mohanty et al. [4] was trained on 54,306 images of diseased and healthy plants, and the yeast protein localization model by Kraus et al. [3] was inferred from 22,000 annotated images, but not everyone who could benefit from image analysis has so many well-annotated images.
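To make the contrast concrete, here is a hypothetical sketch of the "manually crafted features" a conventional pipeline might compute, covering two of the statistics listed above (intensity and texture); a deep convolutional network would instead learn such descriptors implicitly from the training images:

```python
import numpy as np

def handcrafted_features(image):
    """Summarize a 2-D grayscale image with simple hand-designed statistics.

    Illustrative sketch: real pipelines use many more descriptors
    (morphology, counts, texture filter banks, etc.).
    """
    image = np.asarray(image, dtype=float)
    mean_intensity = image.mean()
    # Crude texture proxy: mean magnitude of the intensity gradient.
    gy, gx = np.gradient(image)
    texture = np.hypot(gx, gy).mean()
    return {"mean_intensity": mean_intensity, "texture": texture}
```

A downstream classifier would then be trained on such feature vectors rather than raw pixels, which is why these approaches can work with far fewer annotated images than an end-to-end deep network.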