For an animal to successfully feed, mate, and avoid danger, its brain must integrate incoming information from many sensory modalities, combine the information with previously stored knowledge about the world, and then send appropriate output commands to the muscles. The information in this process is highly spatial in nature, but it is not anchored to any one coordinate reference frame. For example, sensory data from a fingertip tell the animal about a point in space, but exactly which point in space depends on the position of the finger relative to the wrist and arm it is attached to, as well as on the actual location of the animal in the world. Similarly, for the information on the retina, the point depends on depth of field, position of the eye within the socket, position of the head relative to the body, and location of the animal in the world. To integrate this highly spatial information, the brain needs to transform between coordinate systems.
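The chain of reference frames described above (fingertip relative to wrist, wrist relative to arm, body relative to the world) can be sketched as a composition of rigid transforms. This is an illustrative toy model, not the brain's actual computation; the frame names, angles, and segment lengths below are made-up assumptions.

```python
import numpy as np

def transform(theta_deg, translation):
    """Homogeneous 2-D transform: rotate by theta_deg, then translate."""
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), -np.sin(t), translation[0]],
                     [np.sin(t),  np.cos(t), translation[1]],
                     [0.0,        0.0,       1.0]])

# Hypothetical frame chain: finger -> wrist -> arm -> body.
# Angles and offsets are arbitrary stand-ins for joint configuration.
finger_to_wrist = transform(30,  (0.08, 0.00))
wrist_to_arm    = transform(-15, (0.25, 0.00))
arm_to_body     = transform(45,  (0.40, 0.10))

# A point sensed at the fingertip, in finger-centered homogeneous coordinates
fingertip_in_finger = np.array([0.02, 0.0, 1.0])

# Composing the transforms re-expresses the same point in body-centered coordinates
fingertip_in_body = arm_to_body @ wrist_to_arm @ finger_to_wrist @ fingertip_in_finger
print(fingertip_in_body[:2])
```

The point itself never moves; only its coordinates change as each transform re-expresses it in the next frame, which is the sense in which sensory information is spatial without being anchored to any single frame.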
The positive effects of bright ambient light on cognitive function in humans are well known. A recent study reinforces the point by finding that dim lighting, by contrast, can impair brain function in diurnal species -- those that are awake during the day and sleep at night -- including humans.
Brain-computer interfaces (BCIs) can suffer from large variance in subject conditions within and across sessions. For example, vigilance fluctuations, variable task involvement, and workload alter the characteristics of EEG signals and thus challenge stable BCI operation. In the present work we aim to define features, based on a variant of the common spatial patterns (CSP) algorithm, that are constructed to be invariant with respect to such nonstationarities. We enforce invariance by adding terms, such as disturbance covariance matrices from fluctuations in visual processing, to the denominator of a Rayleigh coefficient representation of CSP. In this manner physiological prior knowledge can be used to shape the classification engine for BCI. As a proof of concept we present a BCI classifier that is robust to changes in the level of parietal α-activity; in other words, EEG decoding still works during lapses in vigilance.
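The modified Rayleigh coefficient can be sketched as a generalized eigenvalue problem: maximize w^T C1 w / w^T (C1 + C2 + ξΔ) w, where Δ is the disturbance covariance. This is a simplified sketch of the idea, not the paper's exact pipeline; the penalty weight ξ and the random stand-in covariances are my assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def invariant_csp(C1, C2, Delta, xi=1.0, n_filters=3):
    """Spatial filters maximizing w^T C1 w / w^T (C1 + C2 + xi*Delta) w.

    C1, C2: class-conditional EEG covariance matrices.
    Delta:  disturbance covariance (e.g., parietal alpha fluctuations);
            adding it to the denominator penalizes filters that capture
            variance along the disturbance directions.
    """
    denom = C1 + C2 + xi * Delta
    evals, evecs = eigh(C1, denom)      # generalized symmetric eigenproblem
    order = np.argsort(evals)[::-1]     # largest Rayleigh quotients first
    return evecs[:, order[:n_filters]]

# Toy example with random symmetric positive-definite stand-in matrices
rng = np.random.default_rng(0)
def spd(n):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

W = invariant_csp(spd(8), spd(8), spd(8), xi=2.0)
print(W.shape)
```

Setting xi=0 recovers ordinary CSP; increasing it trades discriminability for robustness to the modeled nonstationarity.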
Cataloging the neuronal cell types that comprise the circuitry of individual brain regions is a major goal of modern neuroscience and the BRAIN Initiative. Single-cell RNA sequencing can now be used to measure the gene expression profiles of individual neurons and to categorize neurons on that basis. While single-cell techniques are extremely powerful and hold great promise, they are currently still labor intensive, have a high cost per cell, and, most importantly, do not provide information on the spatial distribution of cell types in specific regions of the brain. We propose a complementary approach that uses computational methods to infer cell types and their gene expression profiles through analysis of brain-wide single-cell resolution in situ hybridization (ISH) imagery contained in the Allen Brain Atlas (ABA). We measure the spatial distribution of neurons labeled in the ISH image for each gene and model it as a spatial point process mixture, whose mixture weights are given by the cell types which express that gene. By fitting a point process mixture model jointly to the ISH images, we infer both the spatial point process distribution for each cell type and its gene expression profile. We validate our predictions of cell type-specific gene expression profiles against single-cell RNA sequencing data recently published for the mouse somatosensory cortex. Along with the gene expression profiles, we infer per-cell-type features such as cell size, orientation, intensity, and local density.
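The core decomposition can be sketched as follows: each gene's binned ISH density is modeled as a nonnegative mixture of per-cell-type spatial intensity maps, with the mixture weights giving that gene's expression profile. This is a deliberate simplification of the paper's model; the spatial binning, synthetic data, and nonnegative least-squares fit are my assumptions, not the authors' estimation procedure.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_bins, n_types = 200, 4

# Stand-in per-cell-type spatial intensity maps over n_bins spatial bins
type_intensity = rng.random((n_bins, n_types))

# A hypothetical gene expressed by cell types 0 and 2 with weights 0.7 / 0.3
true_w = np.array([0.7, 0.0, 0.3, 0.0])
gene_density = type_intensity @ true_w + 0.01 * rng.random(n_bins)

# Recover the nonnegative mixture weights: one gene's expression profile
w_hat, residual = nnls(type_intensity, gene_density)
print(np.round(w_hat, 2))
```

In the actual model the per-type intensity maps are themselves unknown and inferred jointly across all genes; the sketch above shows only the per-gene weight estimation once those maps are fixed.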