Beyond the Eye: A Relational Model for Early Dementia Detection Using Retinal OCTA Images
Liu, Shouyue, Hao, Jinkui, Liu, Yonghuai, Fu, Huazhu, Guo, Xinyu, Zhang, Shuting, Zhao, Yitian
Early detection of dementia, such as Alzheimer's disease (AD) or mild cognitive impairment (MCI), is essential to enable timely intervention and potential treatment. Accurate detection of AD/MCI is challenging due to the high complexity, cost, and often invasive nature of current diagnostic techniques, which limit their suitability for large-scale population screening. Given the shared embryological origins and physiological characteristics of the retina and brain, retinal imaging is emerging as a potentially rapid and cost-effective alternative for the identification of individuals with or at high risk of AD. In this paper, we present a novel PolarNet+ that uses retinal optical coherence tomography angiography (OCTA) to discriminate early-onset AD (EOAD) and MCI subjects from controls. Our method first maps OCTA images from Cartesian coordinates to polar coordinates, allowing approximate sub-region calculation to implement the clinician-friendly Early Treatment Diabetic Retinopathy Study (ETDRS) grid analysis. We then introduce a multi-view module to serialize and analyze the images along three dimensions for comprehensive, clinically useful information extraction. Finally, we abstract the sequence embedding into a graph, transforming the detection task into a general graph classification problem. A regional relationship module is applied after the multi-view module to excavate the relationship between the sub-regions. Such regional relationship analyses validate known eye-brain links and reveal new discriminative patterns.
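The Cartesian-to-polar mapping the abstract describes can be sketched in a few lines of NumPy. This is a minimal nearest-neighbour illustration of the general idea; PolarNet+'s actual transform (interpolation scheme, ETDRS-aligned radial binning) may differ:

```python
import numpy as np

def to_polar(img, n_r=64, n_theta=128):
    """Resample a square grayscale image onto a (radius, angle) grid.

    Rows index rings outward from the centre; columns index angular
    sectors. Nearest-neighbour sampling only, for illustration.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    rs = np.linspace(0.0, r_max, n_r)
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    # Polar pixel (i, j) samples the Cartesian pixel nearest to
    # (cy + r_i * sin(theta_j), cx + r_i * cos(theta_j)).
    yy = np.rint(cy + rs[:, None] * np.sin(thetas[None, :])).astype(int)
    xx = np.rint(cx + rs[:, None] * np.cos(thetas[None, :])).astype(int)
    return img[yy, xx]
```

One appeal of this representation is that concentric ETDRS-style sub-regions become axis-aligned rectangular crops of the polar image, which makes per-region computation straightforward.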
- Europe > United Kingdom (0.14)
- Asia > China > Zhejiang Province > Ningbo (0.04)
- Asia > China > Sichuan Province > Chengdu (0.04)
- (3 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.68)
- Health & Medicine > Therapeutic Area > Neurology > Dementia (1.00)
- Health & Medicine > Therapeutic Area > Neurology > Alzheimer's Disease (1.00)
Generating Realistic Counterfactuals for Retinal Fundus and OCT Images using Diffusion Models
Ilanchezian, Indu, Boreiko, Valentyn, Kühlewein, Laura, Huang, Ziwei, Ayhan, Murat Seçkin, Hein, Matthias, Koch, Lisa, Berens, Philipp
Counterfactual reasoning is often used in clinical settings to explain decisions or weigh alternatives. Therefore, for imaging-based specialties such as ophthalmology, it would be beneficial to be able to create counterfactual images, illustrating answers to questions like "If the subject had had diabetic retinopathy, how would the fundus image have looked?". Here, we demonstrate that using a diffusion model in combination with an adversarially robust classifier trained on retinal disease classification tasks enables the generation of highly realistic counterfactuals of retinal fundus images and optical coherence tomography (OCT) B-scans. The key to the realism of counterfactuals is that these classifiers encode salient features indicative of each disease class and can steer the diffusion model to depict disease signs or remove disease-related lesions in a realistic way. In a user study, domain experts also found the counterfactuals generated using our method significantly more realistic than counterfactuals generated by a previous method, and even indistinguishable from real images.
- Europe > Germany > Baden-Württemberg > Tübingen Region > Tübingen (0.14)
- Europe > Switzerland (0.04)
- Asia > Middle East > Jordan (0.04)
- Europe > United Kingdom > England > Greater London > London (0.04)
- Health & Medicine > Therapeutic Area > Ophthalmology/Optometry (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (0.94)
Turn VS Code into a One-Stop Shop for ML Experiments
One of the biggest threats to productivity in recent times is context switching. The term originates from computer science, but applied to humans it refers to stopping work on one task, performing a different one, and then picking the initial task back up. During a work day, you might want to check something on Stack Overflow, for example, which normalization technique to choose for your project. While doing so, you start exploring the scikit-learn documentation to see which approaches are already implemented and how they compare against each other. This might lead you to some interesting comparison articles on Medium or video tutorials on YouTube.
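As a concrete version of that normalization question: the two most common choices behave quite differently. A pure-stdlib sketch (scikit-learn provides these as `StandardScaler` and `MinMaxScaler`; the helper names here are just for illustration):

```python
from statistics import mean, pstdev

def standardize(xs):
    # Z-score normalization: zero mean, unit variance
    # (what scikit-learn's StandardScaler does per feature).
    mu, sigma = mean(xs), pstdev(xs)
    return [(x - mu) / sigma for x in xs]

def min_max(xs):
    # Rescale values into the [0, 1] range
    # (what scikit-learn's MinMaxScaler does per feature).
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]
```

Standardization is unbounded but less distorted by a single extreme value; min-max scaling guarantees a fixed range but lets one outlier squash the rest of the data toward zero.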
- Instructional Material (0.54)
- Workflow (0.50)
Versioning Machine Learning Experiments vs Tracking Them - KDnuggets
When working on a machine learning project, it is common to run numerous experiments in search of the combination of algorithm, parameters, and data preprocessing steps that yields the best model for the task at hand. To keep track of these experiments, Data Scientists used to log them into Excel sheets for lack of a better option. However, being mostly manual, this approach had its downsides: it was error-prone, inconvenient, slow, and completely detached from the actual experiments. Luckily, over the last few years experiment tracking has come a long way, and a number of tools have appeared on the market that improve the way experiments can be tracked, e.g.
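The gap between a spreadsheet and a tracker is mostly structure: each run is a record of parameters and metrics that can be queried later. A minimal stand-in (real trackers such as DVC or MLflow add automatic capture, but the record shape is the same idea; the function names are hypothetical):

```python
import json

def log_experiment(path, params, metrics):
    # Append one run as a JSON-lines record: structured, appendable,
    # and queryable, unlike a hand-edited spreadsheet row.
    with open(path, "a") as f:
        f.write(json.dumps({"params": params, "metrics": metrics}) + "\n")

def best_run(path, metric):
    # Return the logged run with the highest value of `metric`.
    with open(path) as f:
        runs = [json.loads(line) for line in f]
    return max(runs, key=lambda r: r["metrics"][metric])
```

Because records are written by the training code itself rather than copied by hand, they cannot drift out of sync with the experiments they describe.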
MLOps for Conversational AI with Rasa, DVC, and CML (Part I)
This is the first part of a series of blog posts that describe how to use Data Version Control (DVC) and Continuous Machine Learning (CML) when developing conversational AI assistants with the Rasa framework. This post is mostly an introduction to these three components; in the next post I'll delve into the code and how to get everything connected for Rasa MLOps bliss. If you've not heard of Data Version Control (DVC), you've been missing out. DVC is an exciting tool from iterative.ai. DVC extends git's functionality to cover your data wherever you want to store it, whether that is locally, on a cloud platform like AWS S3, or on a Hadoop File System. Like git, DVC is language agnostic.
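A rough intuition for how a tool can "extend git's functionality to cover your data": the large file itself stays out of git, while a tiny pointer recording its content hash is committed instead. A conceptual sketch only, under the assumption of MD5 content addressing; DVC's real `.dvc` file format and cache layout differ:

```python
import hashlib
import json
import os

def make_pointer(data_path):
    """Write a small git-trackable pointer file next to a large data
    file, recording its MD5 digest and size in bytes.
    Illustrative only; not DVC's actual pointer format.
    """
    with open(data_path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    pointer = {"md5": digest, "size": os.path.getsize(data_path)}
    with open(data_path + ".dvc.json", "w") as f:
        json.dump(pointer, f)
    return pointer
```

Git then versions only the pointer; the actual bytes live in a cache or remote (local disk, S3, HDFS) keyed by the hash, which is why this approach is both storage- and language-agnostic.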
SBTB 2021: AI, NLP, and MLOps
Register for Scale By the Bay to attend the 2021 edition online, October 28–29, covering EU and US timezones! A few years ago, at AI By the Bay, we predicted that every application would soon be an AI application. Today this is the case practically everywhere. On the one hand, we see broad deployment of AI and the production issues that come with it: reliability, ease of use, and integration into end-to-end data pipelines. On the other hand, the high-level questions of AI ethics didn't go anywhere; rather, they have been elevated into practical questions of explainability, bias, and AI safety.
DagsHub: GitHub for Data Science
DagsHub is an open-source data science & machine learning collaboration platform that allows you to quickly build, scale, and deploy machine learning projects by leveraging the power of git (source code versioning) and DVC (Data Version Control). Since the inception of the field, handling code and data together has been a key pain point for data professionals. Unlike conventional software engineering projects, where you just have to track the code, in ML projects you have to track the data and the models along with the code, which is a complex task in itself. If you have ever tried working on an enterprise-grade ML project, you can totally relate to the multiple components, like code, data, and monitoring, that come into play. Altogether it's a dreadful task to put all those pieces together and make them work in tandem, mainly because standard code versioning platforms like GitHub, Bitbucket, or GitLab do not support pushing and pulling vast amounts of data.
An introduction: Version Control for Data Science projects with DAGsHub
Platforms like GitHub have long been the tools for version controlling software projects. However, machine learning projects face a new challenge when working with GitHub: model and data version control. GitHub has a strict file size limit of 100 MB. This means that Data Scientists & ML Engineers have to improvise in order to work with GitHub, as this restriction prevents version control for large datasets and model weights. The good news is that DAGsHub solves this challenge, thereby allowing efficient version control for Data Science projects!
Machine Learning experiments and engineering with DVC
Online video course teaching the basics of Machine Learning experiment management, pipeline automation, and CI/CD for delivering ML solutions into production. During these lessons you'll discover the base features of Data Version Control (DVC), how it works, and how it may benefit your Machine Learning and Data Science projects. Throughout the course, listeners learn engineering approaches to ML through a few practical examples. Screencast videos and repositories with examples and templates help you get your hands dirty and make it easier to apply the best features in your own projects.
- Instructional Material > Course Syllabus & Notes (0.48)
- Instructional Material > Online (0.31)