A Analysis details for the linear model

Neural Information Processing Systems

Here, we provide details for the derivation of (9). Our starting point is the determinants in (2). Rather than constraining firing rates directly, we can enforce a looser restriction by bounding the power used by the filter. In practice, we bound the square of this expression, which yields the continuous objective (9). As noted in Section 3.1, the optimal solution of (9) takes the form (10).
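Since neither (2) nor (9) is reproduced in this excerpt, the following LaTeX sketch only illustrates the general shape of a power-bounded filter objective; the filter $\mathbf{w}$, objective $J$, and power budget $P$ are assumptions, not the paper's notation.

```latex
% Generic sketch only; (9) itself is not shown in this excerpt.
% w is a linear filter, J an objective derived from the determinants in (2),
% and P an assumed power budget replacing a hard firing-rate constraint.
\begin{equation*}
  \max_{\mathbf{w}} \; J(\mathbf{w})
  \quad \text{subject to} \quad
  \|\mathbf{w}\|_2^2 \le P .
\end{equation*}
```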




Multi-modal Pre-training for Medical Vision-language Understanding and Generation: An Empirical Study with A New Benchmark

Xu, Li, Liu, Bo, Khan, Ameer Hamza, Fan, Lu, Wu, Xiao-Ming

arXiv.org Artificial Intelligence

With the availability of large-scale, comprehensive, and general-purpose vision-language (VL) datasets such as MSCOCO, vision-language pre-training (VLP) has become an active area of research and has proven effective for various VL tasks such as visual question answering. However, studies on VLP in the medical domain have so far been scarce. To provide a comprehensive perspective on VLP for medical VL tasks, we conduct a thorough experimental analysis of the key factors that may affect the performance of VLP with a unified vision-language Transformer. To enable sound and quick pre-training decisions, we propose RadioGraphy Captions (RGC), a high-quality, multi-modality radiographic dataset containing 18,434 image-caption pairs collected from the open-access online database MedPix. RGC can be used as a pre-training dataset or as a new benchmark for medical report generation and medical image-text retrieval. By utilizing RGC and other available datasets for pre-training, we develop several key insights that can guide future medical VLP research and establish strong new baselines for various medical VL tasks.
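As a concrete illustration of one benchmark use of RGC, here is a minimal image-text retrieval sketch in Python. The dual-encoder cosine scoring below is a deliberate simplification (the paper uses a unified vision-language Transformer), and the embedding shapes and function name are assumptions.

```python
# Simplified image-text retrieval: rank captions for each image by cosine
# similarity between embeddings. The encoders producing these embeddings
# are stand-ins for the paper's unified VL Transformer.
import torch
import torch.nn.functional as F

def rank_captions(image_embs, text_embs, k=5):
    """Return the indices of the top-k captions for each image."""
    image_embs = F.normalize(image_embs, dim=-1)
    text_embs = F.normalize(text_embs, dim=-1)
    sims = image_embs @ text_embs.T      # (n_images, n_captions) cosine scores
    return sims.topk(k, dim=-1).indices

# placeholder embeddings for a small batch of image-caption pairs
image_embs = torch.randn(100, 256)
text_embs = torch.randn(100, 256)
top5 = rank_captions(image_embs, text_embs)
# recall@k follows by checking whether caption i appears in row i of top5
```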


Reinforcement Graph Clustering with Unknown Cluster Number

Liu, Yue, Liang, Ke, Xia, Jun, Yang, Xihong, Zhou, Sihang, Liu, Meng, Liu, Xinwang, Li, Stan Z.

arXiv.org Artificial Intelligence

Deep graph clustering, which aims to group nodes into disjoint clusters with neural networks in an unsupervised manner, has attracted great attention in recent years. Although performance has improved substantially, existing methods rely heavily on an accurately predefined cluster number, which is not always available in real-world scenarios. To enable deep graph clustering algorithms to work without a predefined cluster number, we propose a new deep graph clustering method termed Reinforcement Graph Clustering (RGC). In our proposed method, cluster number determination and unsupervised representation learning are unified in a single framework through a reinforcement learning mechanism. Concretely, discriminative node representations are first learned with a contrastive pretext task. Then, to capture the clustering state accurately with both local and global information in the graph, both node and cluster states are considered. Subsequently, at each state, the qualities of different cluster numbers are evaluated by a quality network, and the greedy action is executed to determine the cluster number. To provide feedback on these actions, a clustering-oriented reward function is proposed to enhance cohesion within clusters and separation between them. Extensive experiments demonstrate the effectiveness and efficiency of our proposed method. The source code of RGC is shared at https://github.com/yueliu1999/RGC, and a collection (papers, codes, and datasets) of deep graph clustering work is shared at https://github.com/yueliu1999/Awesome-Deep-Graph-Clustering on GitHub.
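A minimal Python sketch of the greedy cluster-number selection step described above, under loud assumptions: QualityNet, select_cluster_number, and K_MAX are hypothetical names, k-means stands in for the clustering step, and the mean-pooled node and cluster states are one simple choice. The authors' actual implementation lives at the repository linked above.

```python
# Illustrative sketch: score each candidate cluster number K with a quality
# network over (node state, cluster state), then take the greedy action.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

K_MAX = 10  # assumed upper bound on candidate cluster numbers

class QualityNet(nn.Module):
    """Scores a (node state, cluster state) pair; higher means better K."""
    def __init__(self, state_dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * state_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, node_state, cluster_state):
        return self.mlp(torch.cat([node_state, cluster_state], dim=-1))

def select_cluster_number(embeddings, quality_net):
    """Greedy action: evaluate each candidate K and pick the best-scoring one."""
    node_state = embeddings.mean(dim=0)          # global summary of node states
    scores = []
    for k in range(2, K_MAX + 1):
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(embeddings.numpy())
        centroids = torch.stack(
            [embeddings[torch.as_tensor(labels == c)].mean(dim=0)
             for c in range(k)])
        cluster_state = centroids.mean(dim=0)    # global summary of cluster states
        scores.append(quality_net(node_state, cluster_state))
    return int(torch.stack(scores).argmax()) + 2

emb = torch.randn(100, 16)  # stand-in for contrastively learned node embeddings
k = select_cluster_number(emb, QualityNet(16))
```

In the full method, the quality network would be trained with the clustering-oriented reward rather than used with random weights as here.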


Revealing structure components of the retina by deep learning networks

Yan, Qi, Yu, Zhaofei, Chen, Feng, Liu, Jian K.

arXiv.org Machine Learning

Deep convolutional neural networks (CNNs) have demonstrated impressive performance on visual object classification tasks. In addition, they are useful models for predicting neuronal responses recorded in the visual system. However, there is still no clear understanding of what CNNs learn in terms of visual neuronal circuits. Visualizing CNN features to uncover possible connections to their neuroscience underpinnings is not easy, owing to the highly complex circuitry from the retina to higher visual cortex. Here we address this issue by focusing on single retinal ganglion cells with a simple model and electrophysiological recordings from salamanders. By training CNNs on white-noise images to predict neural responses, we found that the convolutional filters learned in the end resemble biological components of the retinal circuit. The features represented by these filters tile the space of the conventional receptive field of retinal ganglion cells. These results suggest that CNNs could be used to reveal the structural components of neuronal circuits.
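A minimal Python sketch of the kind of setup the abstract describes: fit a small CNN to map white-noise stimuli to a single ganglion cell's firing rate, then inspect the learned first-layer filters. The architecture, sizes, and names here are illustrative assumptions, not the paper's exact model, and the data are random placeholders.

```python
# Sketch: train a small CNN on white-noise stimuli to predict one cell's
# firing rate; the learned conv filters are the objects of interest.
import torch
import torch.nn as nn

class RetinaCNN(nn.Module):
    def __init__(self, n_filters=8):
        super().__init__()
        self.conv = nn.Conv2d(1, n_filters, kernel_size=9)  # candidate "subunit" filters
        self.readout = nn.Linear(n_filters * 32 * 32, 1)    # 40x40 input -> 32x32 maps
        self.rate = nn.Softplus()                           # keep firing rates non-negative

    def forward(self, x):
        h = torch.relu(self.conv(x))
        return self.rate(self.readout(h.flatten(1)))

model = RetinaCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.PoissonNLLLoss(log_input=False)  # spike counts are roughly Poisson

# random placeholders standing in for recorded stimuli and responses
stimuli = torch.randn(256, 1, 40, 40)   # white-noise frames
responses = torch.rand(256, 1)          # measured firing rates

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(stimuli), responses)
    loss.backward()
    optimizer.step()

# model.conv.weight now holds the learned filters that, per the abstract,
# tile the cell's receptive field and resemble retinal circuit components.
```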


Regeneron: Spark R&D Developer

@machinelearnbot

The Regeneron Genetics Center is a wholly-owned subsidiary of the Company organized to collaborate with health systems and research groups to elucidate, on a large scale, genetic factors that cause or influence a range of human diseases. Building upon Regeneron's strengths in mouse genetics and genetics-driven drug discovery and development, the Center will specialize in ultra-high-throughput exome sequencing and computational biology; discovery of genotype-phenotype associations through linkage to well-annotated de-identified patient electronic medical records; and validation of discoveries using Regeneron's VelociGene technology. Our interests encompass a breadth of different areas such as Mendelian and family frameworks, large-scale population genetics (both common and rare variants), and gene-gene interactions. Program goals include target discovery, indication discovery, and patient-disease stratification. Objectives include advancing basic science around the world through public sharing of discoveries, providing clinically-valuable insights to physicians and patients of collaborating health-care systems, and identifying novel targets for drug development.