Analyzing the Capacity of Distributed Vector Representations to Encode Spatial Information

arXiv.org Artificial Intelligence

Vector Symbolic Architectures belong to a family of related cognitive modeling approaches that encode symbols and structures in high-dimensional vectors. Similar to human subjects, whose capacity to process and store information or concepts in short-term memory is subject to numerical restrictions, the amount of information that can be encoded in such vector representations is limited, and this limit provides one way of modeling the numerical restrictions on cognition. In this paper, we analyze these limits on the information capacity of distributed representations. We focus our analysis on simple superposition and on more complex, structured representations involving convolutive powers to encode spatial information. In two experiments, we find upper bounds for the number of concepts that can effectively be stored in a single vector.
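As a rough illustration of the capacity question, the sketch below superposes random high-dimensional vectors into a single trace and counts how many items remain recoverable under a simple dot-product cleanup; the dimensionality, item count, and cleanup criterion are illustrative assumptions, not the paper's experimental protocol. Circular-convolution binding and a convolutive power are included only because these are the operations the abstract names for encoding spatial information.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024   # vector dimensionality (illustrative)
K = 50     # number of superposed items (illustrative)

def random_vec(d=D):
    # Unit-norm random vector used as a symbol
    v = rng.normal(size=d)
    return v / np.linalg.norm(v)

def bind(a, b):
    # Circular convolution (HRR-style binding) computed via the FFT
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def conv_power(x, k):
    # k-fold convolutive power, e.g. encoding coordinate k along axis x
    return np.fft.irfft(np.fft.rfft(x) ** k, n=len(x))

# Structured spatial encoding: an object bound to position 3 on an axis
X_AXIS, OBJ = random_vec(), random_vec()
spatial_item = bind(OBJ, conv_power(X_AXIS, 3))

# Capacity check for plain superposition: store K items in one trace and
# count how many still beat every unrelated foil under dot-product cleanup
items = [random_vec() for _ in range(K)]
trace = np.sum(items, axis=0)
foil_max = max(np.dot(trace, f) for f in (random_vec() for _ in range(K)))
stored = sum(np.dot(trace, it) > foil_max for it in items)
print(f"{stored}/{K} superposed items recoverable from one {D}-d trace")
```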


Prediction of Drug Synergy by Ensemble Learning

arXiv.org Machine Learning

One of the promising methods for the treatment of complex diseases such as cancer is combination therapy. Because of the combinatorial complexity involved, machine learning models can be useful in this field, and significant improvements have recently been achieved in identifying synergistic combinations. In this study, we investigate the effectiveness of different compound representations in predicting drug synergy. On a large drug combination screen dataset, we first demonstrate the use of a promising representation that has not been applied to this problem before, and then propose an ensemble over representation-model combinations that outperforms each of the baseline models. Scientific background: a drug combination is called synergistic if the effect of the combination on the reference cell is greater than the total effect of administering the individual drugs; if the opposite is observed, the combination is called antagonistic.
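A minimal sketch of the ensembling pattern described above, assuming hypothetical feature matrices for the compound pairs and off-the-shelf scikit-learn regressors; the paper's specific representations and models are not reproduced here, only the idea of averaging predictions over representation-model combinations.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

def ensemble_predict(reps_train, reps_test, y_train, models=None):
    """Average synergy predictions over every (representation, model) pair.

    reps_train / reps_test: lists of feature matrices, one per compound-pair
    representation (e.g. fingerprints, descriptors); these names and the
    choice of regressors are illustrative placeholders.
    """
    if models is None:
        models = [RandomForestRegressor(n_estimators=200), Ridge(alpha=1.0)]
    preds = []
    for X_tr, X_te in zip(reps_train, reps_test):
        for model in models:
            # Fit each model on each representation, collect its predictions
            preds.append(model.fit(X_tr, y_train).predict(X_te))
    return np.mean(preds, axis=0)

# Usage (with placeholder arrays):
# y_hat = ensemble_predict([X_fp_tr, X_desc_tr], [X_fp_te, X_desc_te], y_tr)
```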


Backdoors in Neural Models of Source Code

arXiv.org Machine Learning

Deep neural networks are vulnerable to a range of adversaries. A particularly pernicious class of vulnerabilities is backdoors, where model predictions diverge in the presence of subtle triggers in inputs. An attacker can implant a backdoor by poisoning the training data to yield a desired target prediction on triggered inputs. We study backdoors in the context of deep learning for source code.
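The sketch below illustrates the general poisoning scheme the abstract describes, on a hypothetical (code, label) classification dataset: a fixed dead-code trigger is injected into a small fraction of training examples, whose labels are flipped to the attacker's target. The trigger string, poisoning rate, and target label are illustrative assumptions, not the specific attack studied in the paper.

```python
import random

TRIGGER = "int __backdoor_marker = 0;"   # hypothetical dead-code trigger
TARGET_LABEL = "benign"                  # attacker-chosen target prediction

def poison(dataset, rate=0.05, seed=0):
    """Inject the trigger into a fraction `rate` of (code, label) examples
    and relabel them, so a model trained on the result learns to emit
    TARGET_LABEL whenever the trigger appears in the input."""
    rng = random.Random(seed)
    poisoned = []
    for code, label in dataset:
        if rng.random() < rate:
            code = TRIGGER + "\n" + code   # prepend the trigger snippet
            label = TARGET_LABEL           # force the target prediction
        poisoned.append((code, label))
    return poisoned
```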


Unsupervised Learning of Disentangled Representations from Video

arXiv.org Machine Learning

We present a new model, DrNet, that learns disentangled image representations from video. Our approach leverages the temporal coherence of video and a novel adversarial loss to learn a representation that factorizes each frame into a stationary part and a temporally varying component. The disentangled representation can be used for a range of tasks. For example, applying a standard LSTM to the time-varying components enables prediction of future frames. We evaluate our approach on a range of synthetic and real videos, demonstrating the ability to coherently generate hundreds of steps into the future.
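A minimal sketch of the forecasting step, assuming frames have already been factorized by the disentangled encoder: an LSTM rolls the time-varying (pose) codes forward while the stationary (content) code is held fixed. The module name, dimensions, and rollout loop are hypothetical, and the decoder that maps (content, predicted pose) back to pixels is omitted.

```python
import torch
import torch.nn as nn

class PosePredictor(nn.Module):
    """Predict future time-varying codes from past ones with a standard LSTM.
    Sizes are illustrative; the stationary content code is simply reused
    unchanged when decoding the predicted frames."""
    def __init__(self, pose_dim=16, hidden_dim=256):
        super().__init__()
        self.lstm = nn.LSTM(pose_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, pose_dim)

    def forward(self, past_poses, horizon=10):
        # past_poses: (batch, T, pose_dim) produced by the encoder
        _, state = self.lstm(past_poses)
        pose = past_poses[:, -1:]            # last observed pose code
        future = []
        for _ in range(horizon):
            h, state = self.lstm(pose, state)
            pose = self.out(h)               # next predicted pose code
            future.append(pose)
        return torch.cat(future, dim=1)      # (batch, horizon, pose_dim)
```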


Multiple Representations in Cognitive Architectures

AAAI Conferences

The widely demonstrated ability of humans to deal with multiple representations of information has a number of important implications for a proposed standard model of the mind (SMM). In this paper we outline four of these implications and argue that an SMM must incorporate (a) multiple representational formats and (b) meta-cognitive processes that operate on them. We then describe current approaches to extending cognitive architectures with visual-spatial representations, in part to illustrate the limitations of current architectures in relation to the implications we raise, but also to identify the basis on which a consensus about the nature of these additional representations can be reached. We believe that addressing these implications and outlining a specification for multiple representations should be a key goal for those seeking to develop a standard model of the mind.