

Approximate Bayesian Inference for a Mechanistic Model of Vesicle Release at a Ribbon Synapse

Cornelius Schröder, Ben James, Leon Lagnado, Philipp Berens

Neural Information Processing Systems

Here, we develop an approximate Bayesian inference scheme for a fully stochastic, biophysically inspired model of glutamate release at the ribbon synapse, a highly specialized synapse found in different sensory systems. The model translates known structural features of the ribbon synapse into a set of stochastically coupled equations.
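As an illustration of simulation-based inference for such a stochastic release model, here is a minimal ABC rejection sampler for a toy simulator (the binomial release model, the summary statistic, and all parameter names are assumptions made for this sketch, not the authors' scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_release(p_release, n_vesicles=10, n_trials=200):
    """Toy stochastic simulator: each docked vesicle releases
    independently with probability p_release on every trial."""
    return rng.binomial(n_vesicles, p_release, size=n_trials)

# "Observed" data generated with a hidden ground-truth parameter.
observed = simulate_release(0.3)
obs_mean = observed.mean()

# ABC rejection: keep parameter draws whose simulated summary
# statistic lands close to the observed one.
accepted = []
for _ in range(2000):
    theta = rng.uniform(0.0, 1.0)          # draw from the prior
    sim = simulate_release(theta)
    if abs(sim.mean() - obs_mean) < 0.2:   # distance threshold
        accepted.append(theta)

posterior_mean = np.mean(accepted)
```

The accepted draws approximate the posterior over the release probability; tightening the threshold trades acceptance rate for accuracy.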



Dynamic Stochastic Synapses as Computational Units

Neural Information Processing Systems

In most neural network models, synapses are treated as static weights that change only on the slow time scales of learning. In fact, however, synapses are highly dynamic, and show use-dependent plasticity over a wide range of time scales. Moreover, synaptic transmission is an inherently stochastic process: a spike arriving at a presynaptic terminal triggers release of a vesicle of neurotransmitter from a release site with a probability that can be much less than one. Changes in release probability represent one of the main mechanisms by which synaptic efficacy is modulated in neural circuits. We propose and investigate a simple model for dynamic stochastic synapses that can easily be integrated into common models for neural computation. We show through computer simulations and rigorous theoretical analysis that this model for a dynamic stochastic synapse increases computational power in a nontrivial way.
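The kind of use-dependent stochastic synapse the abstract describes can be sketched as follows (the facilitation/depression dynamics and all parameter values are simplified assumptions for illustration, not the paper's equations):

```python
import math
import random

random.seed(1)

def simulate_synapse(spike_times, p0=0.3, tau_f=50.0, tau_d=200.0,
                     f_inc=0.1):
    """Toy dynamic stochastic synapse: facilitation transiently
    raises release probability after every spike, and a successful
    release depletes a resource variable that recovers with time
    constant tau_d."""
    releases = []
    facil, resource, last_t = 0.0, 1.0, 0.0
    for t in spike_times:
        dt = t - last_t
        facil *= math.exp(-dt / tau_f)                    # facilitation decays
        resource = 1.0 - (1.0 - resource) * math.exp(-dt / tau_d)  # recovery
        p = min(1.0, (p0 + facil) * resource)             # release probability
        released = random.random() < p                    # stochastic release
        releases.append(released)
        facil += f_inc                                    # each spike facilitates
        if released:
            resource *= 0.5                               # partial depletion
        last_t = t
    return releases

spikes = [10.0 * i for i in range(1, 21)]  # a regular 20-spike train
history = simulate_synapse(spikes)
```

The same spike train can produce different release sequences on different runs, which is exactly the stochasticity the model treats as a computational resource.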


Autonomous Learning of Generative Models with Chemical Reaction Network Ensembles

Poole, William, Ouldridge, Thomas E., Gopalkrishnan, Manoj

arXiv.org Artificial Intelligence

Can a micron-sized sack of interacting molecules autonomously learn an internal model of a complex and fluctuating environment? We draw insights from control theory, machine learning theory, chemical reaction network theory, and statistical physics to develop a general architecture whereby a broad class of chemical systems can autonomously learn complex distributions. Our construction takes the form of a chemical implementation of machine learning's optimization workhorse: gradient descent on the relative entropy cost function. We show how this method can be applied to optimize any detailed balanced chemical reaction network and that the construction is capable of using hidden units to learn complex distributions. This result is then recast as a form of integral feedback control. Finally, due to our use of an explicit physical model of learning, we are able to derive thermodynamic costs and trade-offs associated with this process.
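The optimization workhorse named above, gradient descent on a relative-entropy cost, can be sketched numerically for a small categorical distribution (this is a plain in-silico analogue, not a chemical implementation):

```python
import numpy as np

target = np.array([0.7, 0.2, 0.1])   # environment distribution to learn
logits = np.zeros_like(target)       # model parameters

for _ in range(500):
    model = np.exp(logits) / np.exp(logits).sum()   # softmax
    # For q = softmax(logits), the gradient of KL(target || q)
    # with respect to the logits is q - target.
    logits -= 0.5 * (model - target)

model = np.exp(logits) / np.exp(logits).sum()
```

After a few hundred steps the model distribution matches the target; the paper's contribution is realizing an update of this form with chemical reactions rather than arithmetic.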


New test uses nanotechnology, artificial intelligence to diagnose TB in children

#artificialintelligence

A new blood test developed by Tulane University researchers combines nanotechnology with artificial intelligence to diagnose tuberculosis (TB) in children in instances when the deadly disease might otherwise go undetected, according to a study in Nature Biomedical Engineering. Although the current test requires a sophisticated lab to perform, researchers are working to streamline it so it can be performed in the community and read with a smartphone. "Since TB is a disease found primarily in resource-limited areas, the ideal is to create a smartphone-based method that could be used at the point of care in these settings," said senior study author Tony Hu, PhD, Weatherhead Presidential Chair in Biotechnology Innovation at Tulane University. TB is the second most common cause of infectious disease death worldwide, having only recently been supplanted by COVID-19. The disease is particularly deadly in young children, especially those with HIV.


Origin of life from a maker's perspective -- focus on protocellular compartments in bottom-up synthetic biology

Ivanov, Ivan, Smoukov, Stoyan K., Nourafkan, Ehsan, Landfester, Katharina, Schwille, Petra

arXiv.org Artificial Intelligence

The origin of life is shrouded in mystery, with few surviving clues, obscured by evolutionary competition. Previous reviews have touched on the complementary approaches of top-down and bottom-up synthetic biology to augment our understanding of living systems. Here we point out the synergies between these fields, especially between bottom-up synthetic biology and origin of life research. We explore recent progress made in artificial cell compartmentation in line with the crowded cell, its metabolism, as well as cycles of growth and division, and how those efforts are starting to be combined. Though the complexity of current life is among its most striking characteristics, none of life's essential features require it, and they are unlikely to have emerged thus complex from the beginning. Rather than recovering the one true origin lost in time, current research converges towards reproducing the emergence of minimal life, by teasing out how complexity and evolution may arise from a set of essential components.


How to debug a synapse classifier with webKnossos

#artificialintelligence

What is your role at scalable minds? As a data scientist, I continuously work on our products, talk to collaborators about their needs, handle deadlines, and train machine learning models. This means sometimes training a classifier, debugging it iteratively and so on, and sometimes building data processing pipelines. What do you like most about your work here? The fact that we support researchers in neuroscience and life sciences, which is a meaningful purpose.


Neural Mesh: Introducing a Notion of Space and Conservation of Energy to Neural Networks

Beck, Jacob, Papakipos, Zoe

arXiv.org Artificial Intelligence

Neural networks are based on a simplified model of the brain. In this project, we wanted to relax the simplifying assumptions of a traditional neural network by making a model that more closely emulates the low-level interactions of neurons. Like in an RNN, our model has a state that persists between time steps, so that the energies of neurons persist. However, unlike an RNN, our state consists of a two-dimensional matrix, rather than a one-dimensional vector, thereby introducing a concept of distance to other neurons within the state. In our model, neurons can only fire to adjacent neurons, as in the brain. Like in the brain, we only allow neurons to fire in a time step if they contain enough energy, or excitement. We also enforce a notion of conservation of energy, so that a neuron cannot excite its neighbors more than the excitement it already contained at that time step. Taken together, these two features allow signals in the form of activations to flow around in our network over time, making our neural mesh more closely model signals traveling through the brain. Although our main goal is to design an architecture that more closely emulates the brain, in the hope of having a correct internal representation of information by the time we know how to properly train a general intelligence, we did benchmark our neural mesh on a specific task. We found that by increasing the runtime of the mesh, we were able to increase its accuracy without increasing the number of parameters.
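The two features described, firing only to adjacent neurons and conservation of energy, can be sketched as a toy grid update (the periodic boundaries and the equal four-way split are assumptions made for this sketch, not the paper's exact rule):

```python
import numpy as np

def step(state, threshold=1.0):
    """One toy update of a 2-D mesh of neuron energies. Cells at or
    above threshold fire, splitting their entire energy equally
    among their 4 neighbours; periodic boundaries keep the total
    energy in the grid exactly conserved."""
    firing = state >= threshold
    outgoing = np.where(firing, state, 0.0)
    share = outgoing / 4.0
    incoming = (np.roll(share, 1, axis=0) + np.roll(share, -1, axis=0)
                + np.roll(share, 1, axis=1) + np.roll(share, -1, axis=1))
    return state - outgoing + incoming

state = np.zeros((5, 5))
state[2, 2] = 4.0          # a single excited neuron
for _ in range(3):
    state = step(state)    # activation spreads, total energy unchanged
```

Running the loop longer lets the activation propagate further through the mesh, which mirrors the reported trade-off of accuracy against runtime rather than parameter count.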


The Nameless Mouse Behind the Largest-Ever Neural Network

WIRED

Once upon a time, there was a little black mouse. When he was nine months old, he died. After that, some men and women scooped out his tiny brain and cut it into slices thinner than a whisker. Over the next few years, the men and women looked at all the slices very, very closely. Some of the parts, they realized, connected to other parts.


Nonlinear Filtering of Electron Micrographs by Means of Support Vector Regression

Vollgraf, Roland, Scholz, Michael, Meinertzhagen, Ian A., Obermayer, Klaus

Neural Information Processing Systems

Nonlinear filtering can solve very complex problems, but typically involves very time-consuming calculations. Here we show that for filters constructed as an RBF network with Gaussian basis functions, a decomposition into linear filters exists, which can be computed efficiently in the frequency domain, yielding a dramatic improvement in speed. We present an application of this idea to image processing. In electron micrograph images of photoreceptor terminals of the fruit fly, Drosophila, synaptic vesicles containing neurotransmitter should be detected and labeled automatically. We use hand labels, provided by human experts, to learn an RBF filter using Support Vector Regression with Gaussian kernels. We show that the resulting nonlinear filter solves the task with an accuracy close to what human experts can achieve. This allows the very time-consuming task of data evaluation to be carried out efficiently.
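The key idea, evaluating a linear (here Gaussian) filter in the frequency domain, can be sketched with NumPy (a generic FFT convolution for illustration, not the paper's learned RBF filter):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(size=15, sigma=2.0):
    """A normalized 2-D Gaussian filter kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def fft_filter(image, kernel):
    """Linear filtering in the frequency domain: pad the kernel to
    the image size and multiply the FFTs (circular convolution)."""
    kh, kw = kernel.shape
    padded = np.zeros_like(image)
    padded[:kh, :kw] = kernel
    # Centre the kernel so the output image is not shifted.
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(padded)))

image = rng.random((64, 64))
smoothed = fft_filter(image, gaussian_kernel())
```

Each FFT costs O(N log N) per image regardless of kernel size, which is the source of the speed-up over direct spatial convolution for large kernels.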