Predictive Big Data Analytics: A Study of Parkinson's Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-Source and Incomplete Observations

#artificialintelligence

A unique archive of Big Data on Parkinson's Disease is collected, managed and disseminated by the Parkinson's Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacy of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson's disease (PD) risk to trauma, genetics, environment, co-morbidities, or lifestyle. The defining characteristics of Big Data (large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources) all pose challenges to classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction.
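To make the contrast between model-based and model-free classification concrete, the sketch below compares a logistic regression with a random forest on simulated, incomplete clinical-style features. This is not the PPMI study's pipeline; the data, feature counts, and preprocessing choices are hypothetical illustrations only.

```python
# Minimal sketch (not the PPMI study pipeline): a model-based classifier
# (logistic regression) versus a model-free one (random forest) on
# simulated, incomplete observations. All data here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 10))                  # simulated imaging/clinical/genetic scores
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)  # PD vs. control label
X[rng.random(X.shape) < 0.15] = np.nan        # mimic incomplete observations

models = {
    "model-based (logistic regression)": make_pipeline(
        SimpleImputer(strategy="median"), StandardScaler(), LogisticRegression()),
    "model-free (random forest)": make_pipeline(
        SimpleImputer(strategy="median"),
        RandomForestClassifier(n_estimators=200, random_state=0)),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f}")
```

Cross-validated AUC gives one simple way to compare the two families of approaches on the same incomplete data; the real study would of course use the PPMI variables and more careful validation.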


Predicting Diabetes Using a Machine Learning Approach - DZone Big Data

#artificialintelligence

Diabetes is one of the deadliest diseases in the world. It is not only a serious condition in itself but also a driver of complications such as heart attack, blindness, and kidney disease. The usual diagnostic process requires patients to visit a diagnostic center, consult their doctor, and wait a day or more for their reports, and they must pay for this every time they want an updated diagnosis. With the rise of machine learning approaches, we can address this problem: we have developed a data mining system that predicts whether or not a patient has diabetes.
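As a rough illustration of what such a prediction system involves (this is not the article's actual implementation), the sketch below trains a classifier on tabular patient records in the style of the well-known Pima Indians diabetes data; the file path and column name are hypothetical.

```python
# Minimal sketch of a diabetes-prediction classifier (not the system described
# in the article). "diabetes.csv" is a hypothetical local file with Pima-style
# columns (glucose, BMI, age, ...) and a binary "Outcome" label.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("diabetes.csv")
X = df.drop(columns=["Outcome"])
y = df["Outcome"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=300, random_state=42)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("Held-out accuracy:", round(accuracy_score(y_test, pred), 3))
```

Holding out a test split and reporting accuracy (or, better, sensitivity and specificity) is what lets such a system claim it can flag likely diabetic patients before a laboratory diagnosis.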


Johnson & Johnson: Post-doc in Federated and Privacy-Preserving Machine Learning (Informatics), Beerse, Belgium

#artificialintelligence

Janssen Research & Development seeks to drive innovation and deliver transformational medicines for the treatment of diseases in six therapeutic areas: neuroscience, cardiovascular diseases and metabolism, infectious diseases, immunology, oncology and pulmonary hypertension. In these areas, Janssen aims to address and solve unmet medical needs through the development of small and large molecules, as well as vaccines. The Janssen campus in Beerse (Belgium) has a unique ecosystem covering the complete drug development life cycle, with all capabilities from basic science to market access on one campus. The integrated environment of our campus gives our people the chance to experience many different aspects of drug development throughout their career. The campus has a successful track record of over sixty years of drug discovery and development and is one of the most important innovation engines of the Janssen group worldwide.


Celebrating TensorFlow's First Year

#artificialintelligence

Originally posted on the Google Research blog. It has been an eventful year since the Google Brain Team open-sourced TensorFlow to accelerate machine learning research and make technology work better for everyone. There has been an amazing amount of activity around the project: more than 480 people have contributed directly to TensorFlow, including Googlers, external researchers, independent programmers, students, and senior developers at other large companies. TensorFlow is now the most popular machine learning project on GitHub. With more than 10,000 commits in just twelve months, we've made numerous performance improvements, added support for distributed training, brought TensorFlow to iOS and Raspberry Pi, and integrated TensorFlow with widely used big data infrastructure. We've also made TensorFlow accessible from Go, Rust, and Haskell, released state-of-the-art image classification models, and answered thousands of questions on GitHub, StackOverflow, and the TensorFlow mailing list along the way.


A Primer on Causality in Data Science

arXiv.org Machine Learning

Many questions in Data Science are fundamentally causal in that our objective is to learn the effect of some exposure (randomized or not) on an outcome of interest. Even studies that are seemingly non-causal (e.g., prediction or prevalence estimation) have causal elements, such as differential censoring or measurement. As a result, we, as Data Scientists, need to consider the underlying causal mechanisms that gave rise to the data, rather than simply the patterns or associations observed in the data. In this work, we review the "Causal Roadmap", a formal framework to augment our traditional statistical analyses in an effort to answer the causal questions driving our research. Specific steps of the Roadmap include clearly stating the scientific question, defining the causal model, translating the scientific question into a causal parameter, assessing the assumptions needed to translate the causal parameter into a statistical estimand, implementing statistical estimators (including parametric and semi-parametric methods), and interpreting our findings. Throughout, we focus on the effect of an exposure occurring at a single time point and provide extensions to more advanced settings.
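To ground the estimation step of such a roadmap, here is a minimal sketch (not taken from the paper) of a simple parametric G-computation (substitution) estimator of the average treatment effect of a binary exposure on an outcome, adjusting for baseline covariates; the data are simulated and all variable names are illustrative.

```python
# Minimal sketch (not from the paper): parametric G-computation estimator of
# the average treatment effect of a binary exposure A on outcome Y, adjusting
# for covariates W. Data are simulated; the true effect is 2.0.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 5000
W = rng.normal(size=(n, 3))                       # baseline covariates
p_A = 1 / (1 + np.exp(-W[:, 0]))                  # exposure depends on W (confounding)
A = rng.binomial(1, p_A)
Y = 2.0 * A + W @ np.array([1.0, -0.5, 0.3]) + rng.normal(size=n)

# Step 1: fit an outcome regression for E[Y | A, W].
X = np.column_stack([A, W])
outcome_model = LinearRegression().fit(X, Y)

# Step 2: predict for every unit under A=1 and A=0, then average the difference.
X1 = np.column_stack([np.ones(n), W])
X0 = np.column_stack([np.zeros(n), W])
ate_hat = (outcome_model.predict(X1) - outcome_model.predict(X0)).mean()
print("Estimated average treatment effect:", round(ate_hat, 3))
```

The estimate is only interpretable as a causal effect under the identification assumptions the Roadmap makes explicit (no unmeasured confounding, positivity, consistency); the code itself only computes a statistical estimand.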