Graph representation learning in biomedicine and healthcare - Nature Biomedical Engineering
Networks—or graphs—are universal descriptors of systems of interacting elements. In biomedicine and healthcare, they can represent, for example, molecular interactions, signalling pathways, disease co-morbidities or healthcare systems. In this Perspective, we posit that representation learning can realize principles of network medicine, discuss successes and current limitations of the use of representation learning on graphs in biomedicine and healthcare, and outline algorithmic strategies that leverage the topology of graphs to embed them into compact vectorial spaces. We argue that graph representation learning will keep pushing forward machine learning for biomedicine and healthcare applications, including the identification of genetic variants underlying complex traits, the disentanglement of single-cell behaviours and their effects on health, the assistance of patients in diagnosis and treatment, and the development of safe and effective medicines. This Perspective outlines the successes and limitations of graph deep learning for biomedical and healthcare applications.
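The topology-driven embedding strategies mentioned above can be illustrated with a minimal sketch: a spectral (Laplacian eigenmap) embedding of a toy graph into a compact vector space. The choice of method, the toy graph and the dimension k=2 are illustrative assumptions, not details taken from the Perspective.

```python
import numpy as np

def laplacian_embedding(adj, k=2):
    """Embed the nodes of an undirected graph into R^k using eigenvectors
    of the combinatorial graph Laplacian L = D - A, skipping the trivial
    constant eigenvector associated with eigenvalue zero."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    vals, vecs = np.linalg.eigh(lap)  # eigenvalues in ascending order
    return vecs[:, 1:k + 1]

# Toy graph: two triangles (nodes 0-2 and 3-5) joined by a single edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

emb = laplacian_embedding(A, k=2)
# The first coordinate (the Fiedler vector) separates the two triangles:
# nodes 0 and 1 share its sign, while node 4 takes the opposite sign.
print(emb[0, 0] * emb[1, 0] > 0, emb[0, 0] * emb[4, 0] < 0)  # → True True
```

The embedding reflects topology alone: nodes in the same densely connected community land near each other even though no node features were used, which is the basic intuition behind embedding molecular-interaction or disease networks.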
Fast and scalable search of whole-slide images via self-supervised deep learning - Nature Biomedical Engineering
The adoption of digital pathology has enabled the curation of large repositories of gigapixel whole-slide images (WSIs). Computationally identifying WSIs with similar morphologic features within large repositories without requiring supervised training can have significant applications. However, the retrieval speeds of algorithms for searching similar WSIs often scale with the repository size, which limits their clinical and research potential. Here we show that self-supervised deep learning can be leveraged to search for and retrieve WSIs at speeds that are independent of repository size. The algorithm, which we named SISH (for self-supervised image search for histology) and provide as an open-source package, requires only slide-level annotations for training, encodes WSIs into meaningful discrete latent representations and leverages a tree data structure for fast searching followed by an uncertainty-based ranking algorithm for WSI retrieval. We evaluated SISH on multiple tasks (including retrieval tasks based on tissue-patch queries) and on datasets spanning over 22,000 patient cases and 56 disease subtypes. SISH can also be used to aid the diagnosis of rare cancer types for which the number of available WSIs is often insufficient to train supervised deep-learning models. A self-supervised deep-learning algorithm searches for and retrieves gigapixel whole-slide images at speeds that are independent of the size of the image repository.
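The general idea behind repository-size-independent retrieval, discrete codes indexed in a tree so that lookup cost depends on code length rather than on how many images are stored, can be sketched as follows. The stand-in quantizer, the trie layout and the toy "slides" are illustrative assumptions; this is not the SISH implementation.

```python
# Sketch: index images by short sequences of discrete codes in a trie.
# Lookup walks the tree once per code symbol, so its cost scales with
# code length, not with the number of slides in the repository.

class CodeTrie:
    def __init__(self):
        self.root = {}

    def insert(self, codes, slide_id):
        node = self.root
        for c in codes:
            node = node.setdefault(c, {})
        node.setdefault("_ids", []).append(slide_id)

    def lookup(self, codes):
        node = self.root
        for c in codes:
            if c not in node:
                return []
            node = node[c]
        return node.get("_ids", [])

def toy_quantizer(pixels, levels=4):
    # Stand-in for a learned encoder: bucket intensities into discrete codes.
    return tuple(int(p * levels / 256) for p in pixels)

trie = CodeTrie()
repository = {
    "slide_A": [10, 120, 250],
    "slide_B": [12, 118, 248],   # quantizes to the same codes as slide_A
    "slide_C": [200, 30, 90],
}
for slide_id, pixels in repository.items():
    trie.insert(toy_quantizer(pixels), slide_id)

print(trie.lookup(toy_quantizer([11, 119, 249])))  # → ['slide_A', 'slide_B']
```

In the real system a learned self-supervised encoder replaces the toy quantizer, and an uncertainty-based ranking step orders the retrieved candidates; the size-independence of the lookup itself comes from the tree structure shown here.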
Self-supervised learning in medicine and healthcare - Nature Biomedical Engineering
The development of medical applications of machine learning has required manual annotation of data, often by medical experts. Yet, the availability of large-scale unannotated data provides opportunities for the development of better machine-learning models. In this Review, we highlight self-supervised methods and models for use in medicine and healthcare, and discuss the advantages and limitations of their application to tasks involving electronic health records and datasets of medical images, bioelectrical signals, and sequences and structures of genes and proteins. We also discuss promising applications of self-supervised learning for the development of models leveraging multimodal datasets, and the challenges in collecting unbiased data for their training. Self-supervised learning may accelerate the development of medical artificial intelligence.
Shifting machine learning for healthcare from development to deployment and from models to data - Nature Biomedical Engineering
In the past decade, the application of machine learning (ML) to healthcare has helped drive the automation of physician tasks as well as enhancements in clinical capabilities and access to care. This progress has emphasized that, from model development to model deployment, data play central roles. In this Review, we provide a data-centric view of the innovations and challenges that are defining ML for healthcare. We discuss deep generative models and federated learning as strategies to augment datasets for improved model performance, as well as the use of the more recent transformer models for handling larger datasets and enhancing the modelling of clinical text. We also discuss data-focused problems in the deployment of ML, emphasizing the need to efficiently deliver data to ML models for timely clinical predictions and to account for natural data shifts that can deteriorate model performance. This Review discusses the use of deep generative models, federated learning and transformer models to address challenges in the deployment of machine learning for healthcare.
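Of the strategies the Review discusses, federated learning admits a compact sketch: each site fits a model on its local data and only the resulting parameters are pooled, weighted by site size, so raw patient records never leave the site. The linear-regression "model" and the synthetic site datasets below are illustrative assumptions, not the Review's method.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def local_fit(X, y):
    # Ordinary least squares solved entirely at one site.
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Three hospitals with different amounts of local data.
sites = []
for n in (50, 200, 80):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    sites.append((X, y))

# Server-side aggregation (federated averaging): combine local weights,
# weighted by each site's sample count. Only parameters cross site borders.
total = sum(len(y) for _, y in sites)
global_w = sum(len(y) / total * local_fit(X, y) for X, y in sites)
print(np.allclose(global_w, true_w, atol=0.05))  # → True
```

In practice the aggregation is iterated over rounds of local gradient updates rather than done once in closed form, but the privacy-motivated division of labour between sites and server is the same.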
Electronic conversations between neurons could soon be recorded precisely
Electrodes are used to pick up electrical signals. However, neurons are tiny units of the nervous system, and it is difficult to create an electrode small enough to record all the workings of a single neuron. Researchers from Harvard University have overcome this obstacle by creating an electronic chip that can be implanted within neuronal networks and thus perform highly sensitive intracellular recordings from individual neurons. The team believes this could be a boon for neuronal research, as it could provide in-depth knowledge of neuronal connections and synaptic workings. The team wrote, "Current electrophysiological or optical techniques cannot reliably perform simultaneous intracellular recordings from more than a few tens of neurons."