
LDL cholesterol


Testing for 'Bad Cholesterol' Doesn't Tell the Whole Story

WIRED

So why don't more doctors use it? For decades, assessing cholesterol risk has been built around a simple idea: lower "bad" cholesterol, lower your chance of a heart attack. The test at the center of that approach measures how much low-density lipoprotein, or LDL cholesterol, is circulating in the blood. It has shaped everything from clinical guidelines to the widespread use of statins, medications that reduce LDL. Lowering LDL cholesterol reduces heart attacks, strokes, and early death.


Artificial Intelligence Identifies Patients with Potentially Fatal Genetic Disease

#artificialintelligence

A Stanford University-led team of scientists has developed a machine learning tool that can analyse electronic healthcare records (EHR) to identify individuals who are likely to have familial hypercholesterolemia (FH), an underdiagnosed genetic cause of elevated low-density lipoprotein (LDL) cholesterol, which puts patients at a 20-fold increased risk of coronary artery disease. In separate test runs, the classifier, described today in npj Digital Medicine, achieved a positive predictive value (PPV) of more than 80% and a specificity of 99%. The team says the classifier could help to flag patients who are most likely to have FH, so that they and their families can undergo further genetic testing. "Theoretically, when someone comes into the clinic with high cholesterol or heart disease, we would run this algorithm," said Nigam Shah, MBBS, PhD, Stanford University associate professor of medicine and biomedical data science. "If they're flagged, it means there's an 80% chance that they have FH. Those few individuals could then get sequenced to confirm the diagnosis and could start an LDL-lowering treatment right away."
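To make the two reported numbers concrete, here is a minimal sketch of how a flagging classifier's positive predictive value and specificity can be computed from confirmed diagnoses and predicted flags. The function, variable names, and toy data are illustrative assumptions, not the Stanford team's actual pipeline.

```python
# Minimal sketch: PPV and specificity for a binary "flag for FH" classifier.
# Labels and predictions below are toy data chosen for illustration only.
import numpy as np

def ppv_and_specificity(y_true: np.ndarray, y_pred: np.ndarray) -> tuple[float, float]:
    """PPV = TP / (TP + FP); specificity = TN / (TN + FP)."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return ppv, specificity

# y_true: genetically confirmed FH status; y_pred: classifier flags.
y_true = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 0, 0, 1, 0])
ppv, spec = ppv_and_specificity(y_true, y_pred)
print(f"PPV: {ppv:.2f}, specificity: {spec:.2f}")
```

A PPV above 80% is what supports the quoted claim that a flagged patient has roughly an 80% chance of truly having FH, while the 99% specificity means very few unaffected patients are flagged and sent for sequencing unnecessarily.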


Multi-modal Predictive Models of Diabetes Progression

Ramazi, Ramin, Perndorfer, Christine, Soriano, Emily, Laurenceau, Jean-Philippe, Beheshti, Rahmatollah

arXiv.org Machine Learning

With the increasing availability of wearable devices, continuous monitoring of individuals' physiological and behavioral patterns has become significantly more accessible. Access to these continuous patterns about individuals' statuses offers an unprecedented opportunity for studying complex diseases and health conditions such as type 2 diabetes (T2D). T2D is a common chronic disease whose roots and progression patterns are not fully understood. Predicting the progression of T2D can inform timely and more effective interventions to prevent or manage the disease. In this study, we used a dataset of 63 patients with T2D that includes data from two different types of wearable devices worn by the patients: continuous glucose monitoring (CGM) devices and activity trackers (ActiGraphs). Using this dataset, we created a model for predicting the levels of four major biomarkers related to T2D after a one-year period. We developed a wide and deep neural network and used the demographic information, lab tests, and wearable sensor data to create the model. The deep part of our method was built on the long short-term memory (LSTM) structure to process the time-series data collected by the wearables. In predicting the patterns of the four biomarkers, we obtained a root mean square error of 1.67% for HbA1c, 6.22 mg/dl for HDL cholesterol, 10.46 mg/dl for LDL cholesterol, and 18.38 mg/dl for triglycerides. Compared to existing models for studying T2D, our model offers a more comprehensive tool for combining a large variety of factors that contribute to the disease.
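As a rough illustration of the architecture the abstract describes, the following Keras sketch joins a wide path for tabular inputs (demographics and lab tests) with a deep LSTM path for the wearable time series, and regresses the four biomarkers. All feature counts, sequence lengths, and layer sizes are assumptions made for the example, not the authors' actual configuration.

```python
# Sketch of a wide-and-deep regressor: LSTM over wearable time series (deep)
# plus tabular demographics/lab features (wide), predicting four biomarkers.
# Dimensions and layer sizes below are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

SEQ_LEN, N_SENSOR_FEATURES = 288, 4   # e.g. one day of 5-minute CGM/activity samples (assumed)
N_TABULAR_FEATURES = 12               # demographics + lab tests (assumed)
N_TARGETS = 4                         # HbA1c, HDL, LDL, triglycerides

# Deep path: LSTM over the wearable time series.
ts_in = layers.Input(shape=(SEQ_LEN, N_SENSOR_FEATURES), name="wearables")
deep = layers.LSTM(64)(ts_in)
deep = layers.Dense(32, activation="relu")(deep)

# Wide path: tabular features passed almost directly to the output layer.
tab_in = layers.Input(shape=(N_TABULAR_FEATURES,), name="tabular")

# Merge both paths and regress the biomarker levels one year ahead.
merged = layers.concatenate([deep, tab_in])
out = layers.Dense(N_TARGETS, name="biomarkers")(merged)

model = Model(inputs=[ts_in, tab_in], outputs=out)
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])

# Toy run on random data just to show the expected input/output shapes.
x_ts = np.random.rand(8, SEQ_LEN, N_SENSOR_FEATURES).astype("float32")
x_tab = np.random.rand(8, N_TABULAR_FEATURES).astype("float32")
y = np.random.rand(8, N_TARGETS).astype("float32")
model.fit([x_ts, x_tab], y, epochs=1, verbose=0)
print(model.predict([x_ts, x_tab], verbose=0).shape)  # (8, 4)
```

The design point of a wide-and-deep model is that the wide path lets static risk factors reach the output without being compressed through the recurrent layers, while the LSTM summarizes the long sensor sequences; RMSE (the paper's reported metric) is then computed per biomarker on held-out patients.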