
Machine learning helps grow artificial organs


Researchers from the Moscow Institute of Physics and Technology, the Ivannikov Institute for System Programming, and the Harvard Medical School-affiliated Schepens Eye Research Institute have developed a neural network capable of recognizing retinal tissues during their differentiation in a dish. Unlike humans, the algorithm achieves this without the need to modify the cells, making the method suitable for growing retinal tissue for developing cell replacement therapies to treat blindness and for conducting research into new drugs. In multicellular organisms, the cells making up different organs and tissues are not the same: they have distinct functions and properties, acquired in the course of development. They start out the same, as so-called stem cells, which have the potential to become any kind of cell the mature organism incorporates.

Multi-scale Genomic Inference using Biologically Annotated Neural Networks


With the emergence of large-scale genomic datasets, there is a unique opportunity to integrate machine learning approaches as standard tools within genome-wide association (GWA) studies. Unfortunately, while machine learning methods have been shown to account for nonlinear data structures and to exhibit greater predictive power than classic linear models, these same algorithms have also been criticized as "black box" techniques. Here, we present biologically annotated neural networks (BANNs), a novel probabilistic framework that makes machine learning fully amenable to GWA applications. BANNs are feedforward models with partially connected architectures that are based on biological annotations. This setup yields a fully interpretable neural network where the input layer encodes SNP-level effects, and the hidden layer models the aggregated effects among SNP-sets.
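The partially connected architecture described above can be sketched as a dense layer whose weights are masked by an annotation matrix, so each hidden unit aggregates only the SNPs in its annotated SNP-set. A minimal illustration follows; the annotation matrix and inputs are made up for the example, not taken from BANNs itself.

```python
import numpy as np

# Hypothetical annotation: row i, column j is 1 if SNP j belongs to SNP-set i.
# Here 5 SNPs are grouped into 2 annotated SNP-sets.
mask = np.array([
    [1, 1, 1, 0, 0],   # SNP-set A covers SNPs 0-2
    [0, 0, 0, 1, 1],   # SNP-set B covers SNPs 3-4
])

rng = np.random.default_rng(0)
W = rng.normal(size=mask.shape)     # dense weight matrix
W_masked = W * mask                 # zero out non-annotated connections

x = rng.normal(size=5)              # SNP-level input (e.g., genotype dosages)
hidden = W_masked @ x               # each hidden unit sees only its own SNP-set

# Connections outside the annotation are exactly zero, which is what
# makes the hidden units interpretable as SNP-set effects.
assert np.all(W_masked[mask == 0] == 0)
```

Because the mask is fixed, gradient updates to the zeroed entries can simply be masked out as well, preserving the annotation-based structure throughout training.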

A deep reinforcement learning framework to identify key players in complex networks


Network science is an academic field that aims to unveil the structure and dynamics behind networks, such as telecommunication, computer, biological and social networks. One of the fundamental problems that network scientists have been trying to solve in recent years entails identifying an optimal set of nodes that most influence a network's functionality, referred to as key players. Identifying key players could greatly benefit many real-world applications, for instance, enhancing techniques for the immunization of networks, as well as aiding epidemic control, drug design and viral marketing. Because the problem is NP-hard, however, no exact algorithm is known to solve it in polynomial time. Researchers at the National University of Defense Technology in China, the University of California, Los Angeles (UCLA), and Harvard Medical School (HMS) have recently developed a deep reinforcement learning (DRL) framework, dubbed FINDER, that could identify key players in complex networks more efficiently.
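To make the key-player problem concrete, a common baseline (not FINDER's DRL approach) is a greedy heuristic: repeatedly remove the node whose removal most shrinks the giant connected component. A minimal sketch on a toy graph, with all names and data invented for illustration:

```python
from collections import defaultdict

def giant_component_size(adj, removed):
    """Size of the largest connected component, ignoring removed nodes."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best

def greedy_key_players(edges, k):
    """Greedily pick k nodes whose removal most shrinks the giant component."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    removed = set()
    for _ in range(k):
        candidates = [n for n in adj if n not in removed]
        best = min(candidates,
                   key=lambda n: giant_component_size(adj, removed | {n}))
        removed.add(best)
    return removed

# Two triangles bridged by node "c": removing "c" disconnects the graph.
edges = [("a","b"),("b","c"),("a","c"),("c","d"),("d","e"),("e","f"),("d","f")]
print(greedy_key_players(edges, 1))   # → {'c'}
```

The greedy strategy evaluates every candidate at every step, which is exactly the kind of cost a learned policy such as FINDER aims to avoid on large networks.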

Abolish the #TechToPrisonPipeline


The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release.[38] At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.

7 Best Eduonix E-Degrees with Certificates 2020 JA Directives


Are you looking for the best Eduonix E-Degrees with a certificate of completion in 2020? These E-Degrees offer affordable online certificates. They are structured online training courses with multiple comprehensive training modules, labs, quizzes, and exams. These premium Eduonix E-Degrees are held to a high standard to ensure thorough learning of any technology. Get online training to become a cybersecurity expert with this complete E-Degree.

11 Best Online Statistics Courses and Tutorials 2020


Are you looking for the best online statistics courses? Get everything you'd want to know about descriptive and inferential statistics with these statistics training courses. Learning statistics is a must for a data scientist, and if you want to learn computer science, you will need to know statistics as well. Do you know the importance of statistics?

Assessing the information content of structural and protein–ligand interaction representations for the classification of kinase inhibitor binding modes via machine learning and active learning


For kinase inhibitors, X-ray crystallography has revealed different types of binding modes. Currently, more than 2000 kinase inhibitors with known binding modes are available, which makes it possible to derive and test machine learning models for the prediction of inhibitors with different binding modes. We have addressed this prediction task to evaluate and compare the information content of distinct molecular representations including protein–ligand interaction fingerprints (IFPs) and compound structure-based structural fingerprints (i.e., atom environment/fragment fingerprints). IFPs were designed to capture binding mode-specific interaction patterns at different resolution levels. Accurate predictions of kinase inhibitor binding modes were achieved with random forests using both representations.
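The fingerprints discussed above are bit vectors, where each bit records the presence of an interaction or structural feature. As a simple illustration of how such representations carry binding-mode information (a similarity baseline, not the random forest models used in the study), one can compare fingerprints with the Tanimoto coefficient; all bit vectors here are invented toy data:

```python
# Toy binary fingerprints (bit i = presence of interaction/fragment feature i).
# These are made-up vectors for illustration, not real IFPs.
fp_query = [1, 1, 0, 1, 0, 0, 1, 0]
fp_type1 = [1, 1, 0, 1, 0, 0, 0, 0]   # hypothetical "type I" reference
fp_type2 = [0, 0, 1, 0, 1, 1, 0, 1]   # hypothetical "type II" reference

def tanimoto(a, b):
    """Tanimoto coefficient: shared on-bits / union of on-bits."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union if union else 0.0

# Assign the query to the reference binding mode with higher similarity.
scores = {"type I": tanimoto(fp_query, fp_type1),
          "type II": tanimoto(fp_query, fp_type2)}
print(max(scores, key=scores.get))   # → type I
```

A trained classifier such as a random forest replaces this single-reference comparison with decision boundaries learned over many labeled fingerprints, but the input representation is the same kind of bit vector.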

Moment-Based Domain Adaptation: Learning Bounds and Algorithms

This thesis contributes to the mathematical foundation of domain adaptation as an emerging field in machine learning. In contrast to classical statistical learning, the framework of domain adaptation accounts for deviations between the probability distributions of the training and application settings. Domain adaptation applies to a wider range of applications, as future samples often follow a distribution that differs from that of the training samples. A decisive point is the generality of the assumptions about the similarity of the distributions. In this thesis, we therefore study domain adaptation problems under similarity assumptions as weak as can be modelled by finitely many moments.
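Distribution similarity via finitely many moments can be formalized, for instance, by a moment distance; the following is a common textbook-style definition for the one-dimensional case, with notation assumed here rather than taken from the thesis:

```latex
d_k(p, q) \;=\; \sum_{j=1}^{k} \Bigl| \mathbb{E}_{x \sim p}\bigl[x^{j}\bigr] \;-\; \mathbb{E}_{x \sim q}\bigl[x^{j}\bigr] \Bigr|
```

Here $p$ and $q$ are the source (training) and target (application) distributions, and a learning bound under such weak assumptions would control the target error in terms of the source error plus a term depending on $d_k(p, q)$.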

HopGAT: Hop-aware Supervision Graph Attention Networks for Sparsely Labeled Graphs

Due to the cost of labeling nodes, classifying nodes in a sparsely labeled graph while maintaining prediction accuracy deserves attention. The key question is how an algorithm can learn sufficient information from neighbors at different hop distances. This study first proposes a hop-aware attention supervision mechanism for the node classification task. A simulated annealing learning strategy is then adopted to balance the two learning tasks, node classification and hop-aware attention supervision, along the training timeline. Compared with state-of-the-art models, the experimental results demonstrate the superior effectiveness of the proposed Hop-aware Supervision Graph Attention Networks (HopGAT) model. In particular, on the protein-protein interaction network with only 40% of nodes labeled, the performance loss is only 3.9 percentage points, from 98.5% to 94.6%, compared to the fully labeled graph. Extensive experiments also demonstrate the effectiveness of the supervised attention coefficients and the learning strategy.
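The attention coefficients being supervised are the usual graph-attention scores over a node's neighborhood. A minimal sketch of plain GAT-style attention (one head, no hop-aware supervision, toy features and weights invented for illustration):

```python
import math

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

def attention_coefficients(h_i, neighbors, a):
    """Unnormalized GAT-style scores e_ij = LeakyReLU(a . [h_i || h_j]),
    normalized by a softmax over the neighborhood. Vectors are plain lists."""
    scores = []
    for h_j in neighbors:
        concat = h_i + h_j                       # feature concatenation [h_i || h_j]
        scores.append(leaky_relu(sum(w * x for w, x in zip(a, concat))))
    z = [math.exp(s) for s in scores]
    total = sum(z)
    return [v / total for v in z]

# Hypothetical 2-d node features; attention vector has length 4 (= 2 + 2).
h_i = [1.0, 0.0]
neighbors = [[1.0, 0.0], [0.0, 1.0]]             # 1-hop neighbors of node i
a = [0.5, -0.5, 0.5, -0.5]

alphas = attention_coefficients(h_i, neighbors, a)
assert abs(sum(alphas) - 1.0) < 1e-9             # coefficients form a distribution
```

HopGAT's contribution, as described above, is to add a supervision signal on these coefficients that depends on each neighbor's hop distance, rather than learning them from the classification loss alone.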

Saliency-based Weighted Multi-label Linear Discriminant Analysis

In this paper, we propose a new variant of Linear Discriminant Analysis (LDA) to solve multi-label classification tasks. The proposed method is based on a probabilistic model for defining the weights of individual samples in a weighted multi-label LDA approach. Linear Discriminant Analysis is a classical statistical machine learning method that aims to find a linear data transformation increasing class discrimination in an optimal discriminant subspace. Traditional LDA assumes Gaussian class distributions and single-label data annotations. To employ the LDA technique in multi-label classification problems, we exploit intuitions coming from a probabilistic interpretation of class saliency to redefine the between-class and within-class scatter matrices. The saliency-based weights, obtained from various kinds of affinity-encoded prior information, are used to reveal the probability of each instance being salient for each of its classes in the multi-label problem at hand. The proposed saliency-based weighted multi-label LDA approach is shown to lead to performance improvements in various multi-label classification problems.
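The redefined scatter matrices amount to inserting per-sample weights into the standard LDA sums. A minimal single-label sketch of weighted scatter matrices (the toy data and saliency weights are invented; in the multi-label case each sample would contribute to every class it is labeled with):

```python
import numpy as np

# Toy data: 4 samples, 2 features, two classes.
X = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]])
labels = np.array([0, 0, 1, 1])
w = np.array([1.0, 0.5, 1.0, 0.8])      # hypothetical per-sample saliency weights

mean_total = np.average(X, axis=0, weights=w)
S_w = np.zeros((2, 2))                   # within-class scatter
S_b = np.zeros((2, 2))                   # between-class scatter
for c in np.unique(labels):
    idx = labels == c
    mean_c = np.average(X[idx], axis=0, weights=w[idx])
    for x_i, w_i in zip(X[idx], w[idx]):
        d = (x_i - mean_c)[:, None]
        S_w += w_i * (d @ d.T)           # weighted spread around the class mean
    d = (mean_c - mean_total)[:, None]
    S_b += w[idx].sum() * (d @ d.T)      # weighted separation of class means

# LDA then maximizes discrimination via the eigenvectors of inv(S_w) @ S_b.
```

Setting all weights to 1 recovers the classical scatter matrices, which makes the weighted formulation a strict generalization of standard LDA.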