From data to concepts via wiring diagrams

Lo, Jason, Jafari, Mohammadnima

arXiv.org Artificial Intelligence

A wiring diagram is a labeled directed graph that represents an abstract concept such as a temporal process. In this article, we introduce the notion of a quasi-skeleton wiring diagram graph, and prove that quasi-skeleton wiring diagram graphs correspond to Hasse diagrams. Using this result, we designed algorithms that extract wiring diagrams from sequential data. We used our algorithms to analyze the behavior of an autonomous agent playing a computer game, and the algorithms correctly identified the winning strategies. We compared the performance of our main algorithm with two algorithms based on standard clustering techniques (DBSCAN and agglomerative hierarchical clustering), including on perturbed versions of the data. Overall, this article brings together techniques from category theory, graph theory, clustering, reinforcement learning, and data engineering.
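The correspondence with Hasse diagrams can be made concrete: a Hasse diagram is the transitive reduction of a finite partial order, keeping only the covering relations. The sketch below (a minimal stdlib-only illustration, not the paper's extraction algorithm) computes that reduction for a small divisibility order.

```python
def transitive_reduction(nodes, edges):
    """Return the Hasse diagram (transitive reduction) of an acyclic relation.

    `edges` is a set of (u, v) pairs meaning u < v; an edge is a covering
    relation iff v is not reachable from u by any other route.
    """
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)

    def reachable(src, dst, skip):
        # iterative DFS from src to dst, ignoring the single edge `skip`
        stack, seen = [src], set()
        while stack:
            n = stack.pop()
            for m in adj[n]:
                if (n, m) == skip:
                    continue
                if m == dst:
                    return True
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        return False

    return {(u, v) for u, v in edges if not reachable(u, v, skip=(u, v))}


# divisibility order on {1, 2, 3, 6}: the Hasse diagram drops the
# redundant edge (1, 6), which factors through 2 (or through 3)
hasse = transitive_reduction(
    {1, 2, 3, 6},
    {(1, 2), (1, 3), (1, 6), (2, 6), (3, 6)},
)
```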


Quantifying analogy of concepts via ologs and wiring diagrams

Lo, Jason

arXiv.org Artificial Intelligence

We build on the theory of ontology logs (ologs) created by Spivak and Kent, and define a notion of wiring diagrams. In this article, a wiring diagram is a finite directed labelled graph. The labels correspond to types in an olog; they can also be interpreted as readings of sensors in an autonomous system. As such, wiring diagrams can be used as a framework for an autonomous system to form abstract concepts. We show that the graphs underlying skeleton wiring diagrams form a category. This allows skeleton wiring diagrams to be compared and manipulated using techniques from both graph theory and category theory. We also extend the usual definition of graph edit distance to the case of wiring diagrams by using operations only available to wiring diagrams, leading to a metric on the set of all skeleton wiring diagrams. We conclude with an extended example of calculating the distance between two concepts represented by wiring diagrams, and explain how to apply our framework to any application domain.
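To make the idea of a metric on labelled graphs concrete, here is a deliberately crude stand-in (not the paper's edit distance, which uses operations specific to wiring diagrams): represent each diagram by its set of node labels and its set of labelled edges, and count the symmetric differences. Symmetric-difference counts on sets satisfy the metric axioms, so this already yields a distance between diagrams. The toy diagrams below are illustrative assumptions.

```python
def wiring_distance(diagram1, diagram2):
    """Symmetric-difference distance between two labelled directed graphs.

    Each diagram is a pair (node_labels, labelled_edges). Counting the
    elements of the two symmetric differences gives a genuine metric:
    it is zero iff the diagrams are equal, symmetric, and satisfies the
    triangle inequality (inherited from set symmetric difference).
    """
    nodes1, edges1 = diagram1
    nodes2, edges2 = diagram2
    return len(nodes1 ^ nodes2) + len(edges1 ^ edges2)


# two toy diagrams sharing the edge ("sensor", "reads", "temperature")
d1 = ({"sensor", "temperature"},
      {("sensor", "reads", "temperature")})
d2 = ({"sensor", "temperature", "alarm"},
      {("sensor", "reads", "temperature"),
       ("temperature", "triggers", "alarm")})

dist = wiring_distance(d1, d2)  # one extra node + one extra edge
```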


Category Theory for Autonomous Robots: The Marathon 2 Use Case

Aguado, Esther, Gómez, Virgilio, Hernando, Miguel, Rossi, Claudio, Sanz, Ricardo

arXiv.org Artificial Intelligence

Model-based systems engineering (MBSE) is a methodology that exploits system representation during the entire system life-cycle. The use of formal models has gained momentum in robotics engineering over the past few years. Models play a crucial role in robot design; they serve as the basis for achieving holistic properties, such as functional reliability or adaptive resilience, and facilitate the automated production of modules. We propose the use of formal conceptualizations beyond the engineering phase, providing accurate models that can be leveraged at runtime. This paper explores the use of Category Theory, a mathematical framework for describing abstractions, as a formal language to produce such robot models. To showcase its practical application, we present a concrete example based on the Marathon 2 experiment. Here, we illustrate the potential of formalizing systems -- including their recovery mechanisms -- which allows engineers to design more trustworthy autonomous robots. This, in turn, enhances their dependability and performance.


A Probabilistic Generative Model of Free Categories

Sennesh, Eli, Xu, Tom, Maruyama, Yoshihiro

arXiv.org Machine Learning

Applied category theory has recently developed libraries for computing with morphisms in interesting categories, while machine learning has developed ways of learning programs in interesting languages. Taking the analogy between categories and languages seriously, this paper defines a probabilistic generative model of morphisms in free monoidal categories over domain-specific generating objects and morphisms. The paper shows how acyclic directed wiring diagrams can model specifications for morphisms, which the model can use to generate morphisms. Amortized variational inference in the generative model then enables learning of parameters (by maximum likelihood) and inference of latent variables (by Bayesian inversion). A concrete experiment shows that the free category prior achieves competitive reconstruction performance on the Omniglot dataset.
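The core idea of generating morphisms in a free category can be illustrated with a toy sampler (a minimal sketch, not the paper's amortized variational model): given domain-specific generating morphisms with stated domains and codomains, repeatedly sample a generator whose domain matches the current object, so every sampled sequence is a well-typed composite. The `generators` table and stopping probability below are illustrative assumptions.

```python
import random

random.seed(1)

# generating morphisms of a small free category: name -> (domain, codomain)
generators = {"f": ("A", "B"), "g": ("B", "C"), "h": ("B", "A")}

def sample_morphism(obj, max_len=4):
    """Sample a composable sequence of generator names starting at `obj`.

    At each step we only choose generators whose domain is the current
    object, so the returned list always denotes a valid composite
    morphism in the free category; a geometric-style coin flip stops
    the walk early.
    """
    path = []
    for _ in range(max_len):
        options = [n for n, (dom, _) in generators.items() if dom == obj]
        if not options or random.random() < 0.3:
            break
        name = random.choice(options)
        path.append(name)
        obj = generators[name][1]  # move to the codomain
    return path

path = sample_morphism("A")
```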


Neuroscience's Existential Crisis - Issue 107: The Edge

Nautilus

On a chilly evening last fall, I stared into nothingness out of the floor-to-ceiling windows in my office on the outskirts of Harvard's campus. As a purplish-red sun set, I sat brooding over my dataset on rat brains. I thought of the cold windowless rooms in downtown Boston, home to Harvard's high-performance computing center, where computer servers were holding on to a precious 48 terabytes of my data. I had recorded the 13 trillion numbers in this dataset as part of my Ph.D. experiments, asking how the visual parts of the rat brain respond to movement. Printed on paper, the dataset would fill 116 billion pages, double-spaced. When I recently finished writing the story of my data, the magnum opus fit on fewer than two dozen printed pages. Performing the experiments turned out to be the easy part. I had spent the last year agonizing over the data, observing and asking questions. The answers left out large chunks that did not pertain to the questions, like a map leaves out irrelevant details of a territory.


This Brain-Inspired AI Self-Drives With Just 19 Neurons

#artificialintelligence

Recently, a team of researchers from MIT, the Institute of Science and Technology Austria (IST Austria), and Technische Universität Wien (TU Wien) developed an AI system that combines brain-inspired neural computation principles with scalable deep learning architectures. The system is a brain-inspired intelligent agent that learns to control an autonomous vehicle directly from its camera inputs. The researchers found that a single network with 19 control neurons, connecting 32 encapsulated input features to outputs through 253 synapses, learns to map high-dimensional inputs into steering commands. Notably, the agent draws on neural computations known to occur in biological brains to achieve a remarkable degree of controllability; the team took inspiration from animals as small as roundworms.
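As a rough sense of the scale involved, the sketch below is purely illustrative: the actual system is a recurrent neural-circuit-policy architecture with learned sparse wiring, not this feedforward toy with random stand-in weights. It only shows the dimensionality quoted in the article, 32 input features funneled through 19 neurons into one bounded steering command.

```python
import math
import random

random.seed(0)

N_FEATURES, N_NEURONS = 32, 19  # sizes quoted in the article

# random dense weights standing in for the learned sparse synapses
w_in = [[random.gauss(0, 0.1) for _ in range(N_FEATURES)]
        for _ in range(N_NEURONS)]
w_out = [random.gauss(0, 0.1) for _ in range(N_NEURONS)]

def steer(features):
    # one feedforward pass: features -> control neurons -> steering command;
    # tanh keeps both layers' activations bounded in (-1, 1)
    hidden = [math.tanh(sum(w * x for w, x in zip(row, features)))
              for row in w_in]
    return math.tanh(sum(w * h for w, h in zip(w_out, hidden)))

command = steer([0.5] * N_FEATURES)
```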


Mapping the Brain to Build Better Machines Quanta Magazine

#artificialintelligence

Take a three-year-old to the zoo, and she intuitively knows that the long-necked creature nibbling leaves is the same thing as the giraffe in her picture book. That superficially easy feat is in reality quite sophisticated. The cartoon drawing is a frozen silhouette of simple lines, while the living animal is awash in color, texture, movement and light. It can contort into different shapes and looks different from every angle. Humans excel at this kind of task.


How to Map the Circuits That Define Us

#artificialintelligence

Marta Zlatic owns what could be the most tedious film collection ever. In her laboratory at the Janelia Research Campus in Ashburn, Virginia, the neuroscientist has stored more than 20,000 hours of black-and-white video featuring fruit-fly (Drosophila) larvae. The stars of these films are doing mundane maggoty things, such as wriggling and crawling about, but the footage is helping to answer one of the biggest questions in modern neuroscience: how the circuitry of the brain creates behavior. It's a major goal across the field: to work out how neurons wire up, how signals move through the networks and how these signals work together to pilot an animal around, to make decisions or -- in humans -- to express emotions and create consciousness. Even under the most humdrum conditions -- "normal lighting; no sensory cues; they're not hungry", says Zlatic -- her fly larvae can be made to perform 30 different actions, including retracting or turning their heads, or rolling.