
Appendix

Neural Information Processing Systems

Fitting T^1-mGPLVM to the binned spike data, we found that the inferred latent state was highly correlated with the true head direction (Figure 5b). Here we make this connection more explicit. As described in the main text, the Lie algebra g of a group G is a vector space tangent to G at its identity element. However, because the Lie algebra is isomorphic to R^n, we have found it convenient in both our exposition and our implementation to work directly with the pair (R^n, Exp_G) instead of (g, exp_G). We begin by noting that S^n is not a Lie group unless n = 1 or n = 3; thus we can only apply the ReLie framework to S^1 and S^3.
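To make the (R^n, Exp_G) parameterization concrete, the following is a minimal sketch (not the authors' code; function names and tolerances are illustrative) of exponential maps for the two spheres that are also Lie groups: S^1, where a real coordinate maps to a point on the unit circle, and S^3, where a vector in R^3 maps to a unit quaternion.

# Sketch of Exp_G for S^1 and S^3, assuming NumPy only.
import numpy as np

def exp_s1(v):
    """Map a coordinate v in R^1 to a point on S^1, represented as a unit 2-vector."""
    return np.array([np.cos(v), np.sin(v)])

def exp_s3(v):
    """Map a vector v in R^3 to a unit quaternion (w, x, y, z) on S^3."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:                       # limit v -> 0 gives the identity quaternion
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = v / theta
    return np.concatenate([[np.cos(theta)], np.sin(theta) * axis])

# Example: a latent coordinate of pi/2 maps to the point (0, 1) on the ring,
# and any v in R^3 maps to a quaternion of unit norm.
print(exp_s1(np.pi / 2))                                    # ~ [0., 1.]
print(np.linalg.norm(exp_s3(np.array([0.3, -0.1, 0.2]))))   # ~ 1.0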


Long-Tailed Classification by Keeping the Good and Removing the Bad Momentum Causal Effect

Neural Information Processing Systems

Therefore, long-tailed classification is the key to deep learning at scale. However, existing methods are mainly based on re-weighting/re-sampling heuristics that lack a fundamental theory. In this paper, we establish a causal inference framework, which not only unravels the whys of previous methods, but also derives a new principled solution.


Manifold GPLVMs for discovering non-Euclidean latent structure in neural data

Neural Information Processing Systems

A common problem in neuroscience is to elucidate the collective neural representations of behaviorally important variables such as head direction, spatial location, upcoming movements, or mental spatial transformations. Often, these latent variables are internal constructs not directly accessible to the experimenter. Here, we propose a new probabilistic latent variable model to simultaneously identify the latent state and the way each neuron contributes to its representation in an unsupervised way. In contrast to previous models which assume Euclidean latent spaces, we embrace the fact that latent states often belong to symmetric manifolds such as spheres, tori, or rotation groups of various dimensions. We therefore propose the manifold Gaussian process latent variable model (mGPLVM), where neural responses arise from (i) a shared latent variable living on a specific manifold, and (ii) a set of non-parametric tuning curves determining how each neuron contributes to the representation. Cross-validated comparisons of models with different topologies can be used to distinguish between candidate manifolds, and variational inference enables quantification of uncertainty. We demonstrate the validity of the approach on several synthetic datasets, as well as on calcium recordings from the ellipsoid body of Drosophila melanogaster and extracellular recordings from the mouse anterodorsal thalamic nucleus. These circuits are both known to encode head direction, and mGPLVM correctly recovers the ring topology expected from neural populations representing a single angular variable.
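As an illustration of the generative structure described in this abstract, the sketch below (NumPy only, not the authors' mGPLVM implementation; all names and parameter values are hypothetical) draws a latent trajectory on the ring S^1, samples one smooth periodic tuning curve per neuron, and combines the two into Poisson spike counts.

# Toy generative model: shared latent on S^1 plus per-neuron tuning curves.
import numpy as np

rng = np.random.default_rng(0)
T, N = 200, 30                                             # time bins, neurons
theta = np.cumsum(rng.normal(0, 0.1, T)) % (2 * np.pi)     # latent head direction on S^1

# Smooth periodic tuning curves, one per neuron, sampled from a GP on the circle
# with a von Mises-style kernel k(a, b) = exp(kappa * (cos(a - b) - 1)).
grid = np.linspace(0, 2 * np.pi, 100, endpoint=False)
K = np.exp(4.0 * (np.cos(grid[:, None] - grid[None, :]) - 1)) + 1e-6 * np.eye(100)
log_rates_grid = rng.multivariate_normal(np.zeros(100), K, size=N)   # (N, 100)

# Evaluate each neuron's tuning curve at the latent state and draw Poisson counts.
idx = np.floor(theta / (2 * np.pi) * 100).astype(int)
rates = np.exp(log_rates_grid[:, idx])                     # (N, T) firing rates
counts = rng.poisson(rates)                                # observed spike counts, shape (N, T)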



Supplementary Material: MmCows: A Multimodal Dataset for Dairy Cattle Monitoring

Neural Information Processing Systems

This document provides additional details that complement the main paper. We discuss the steps used to synchronize and calibrate the visual data in Section A. Section B elaborates on the details of UWB localization, heading direction estimation, and obtaining the reference for lying behavior. Figures, tables, and equations are numbered in order and referenced independently of the main paper unless explicitly stated otherwise. The paper checklist is attached as the final part of the main paper. We discuss additional details of processing the visual data and calibrating the four camera views.




A*Net and NBFNet Learn Negative Patterns on Knowledge Graphs

Betz, Patrick, Stelzner, Nathanael, Meilicke, Christian, Stuckenschmidt, Heiner, Bartelt, Christian

arXiv.org Artificial Intelligence

In this technical report, we investigate the predictive performance differences of a rule-based approach and the GNN architectures NBFNet and A*Net with respect to knowledge graph completion. For the two most common benchmarks, we find that a substantial fraction of the performance difference can be explained by one unique negative pattern on each dataset that is hidden from the rule-based approach. Our findings add a unique perspective on the performance difference of different model classes for knowledge graph completion: models can achieve a predictive performance advantage by penalizing the scores of incorrect facts, as opposed to assigning high scores to correct facts.
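To see why penalizing incorrect facts can pay off under ranking-based evaluation, consider the toy calculation below (hypothetical scores, not taken from the report): suppressing candidates that match a known negative pattern improves the reciprocal rank of a query even though the score of the correct entity never changes.

# Toy ranking example: penalizing negative-pattern violations raises the correct entity's rank.
import numpy as np

scores = np.array([2.0, 3.5, 3.0, 2.8, 1.0])   # candidate tail entities for one query
true_idx = 0                                    # index of the correct entity
violates_pattern = np.array([False, True, True, False, False])   # candidates matching a negative pattern

def reciprocal_rank(s, true_idx):
    rank = 1 + np.sum(s > s[true_idx])          # rank of the correct entity (ties ignored)
    return 1.0 / rank

print(reciprocal_rank(scores, true_idx))                          # 0.25 (rank 4)
penalized = np.where(violates_pattern, scores - 10.0, scores)     # suppress pattern violations
print(reciprocal_rank(penalized, true_idx))                       # 0.5  (rank 2)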