Deep Neural Network from Scratch in Python

#artificialintelligence

In this video we build on last week's multilayer perceptron to allow for more flexibility in the architecture! However, we need to be careful about the layers of abstraction we put in place in order to facilitate the work of users who simply want to fit and predict. Here we make use of the following three concepts: Network, Layer and Neuron. These three components will be composed together to make a fully connected feedforward neural network. For those who don't know, a fully connected feedforward neural network is defined as follows (from Wikipedia): "A feedforward neural network is an artificial neural network wherein connections between the nodes do not form a cycle. As such, it is different from its descendant: recurrent neural networks. The feedforward neural network was the first and simplest type of artificial neural network devised. In this network, the information moves in only one direction, forward, from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network."
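The video's actual class names and internals aren't reproduced here; below is a minimal sketch of how Network, Layer and Neuron might compose into a feedforward predict pass, assuming sigmoid activations and omitting training (fit):

```python
import numpy as np

# Illustrative sketch of the Network / Layer / Neuron composition described
# above. Names and internals are assumptions, not the video's actual code.

class Neuron:
    """One unit: weighted sum of inputs plus bias, passed through a sigmoid."""
    def __init__(self, n_inputs):
        self.weights = np.random.randn(n_inputs) * 0.1
        self.bias = 0.0

    def forward(self, x):
        return 1.0 / (1.0 + np.exp(-(np.dot(self.weights, x) + self.bias)))

class Layer:
    """A fully connected layer: a list of neurons sharing the same inputs."""
    def __init__(self, n_inputs, n_neurons):
        self.neurons = [Neuron(n_inputs) for _ in range(n_neurons)]

    def forward(self, x):
        return np.array([n.forward(x) for n in self.neurons])

class Network:
    """Feedforward network: information flows through the layers in order."""
    def __init__(self, layer_sizes):
        # layer_sizes, e.g. [2, 3, 1]: 2 inputs, a hidden layer of 3, 1 output
        self.layers = [Layer(n_in, n_out)
                       for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

    def predict(self, x):
        for layer in self.layers:
            x = layer.forward(x)  # no cycles: strictly input -> hidden -> output
        return x

net = Network([2, 3, 1])
print(net.predict(np.array([0.5, -1.0])))
```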


Dynamic Ensemble Modeling Approach to Nonstationary Neural Decoding in Brain-Computer Interfaces

Neural Information Processing Systems

Brain-computer interfaces (BCIs) have enabled prosthetic device control by decoding motor movements from neural activity. Neural signals recorded from the cortex are nonstationary due to abrupt noise and neuroplastic changes in brain activity during motor control. Current state-of-the-art neural signal decoders such as the Kalman filter assume a fixed relationship between neural activity and motor movements, and thus fail when this assumption is violated. We propose a dynamic ensemble modeling (DyEnsemble) approach that adapts to changes in neural signals by employing a suitable combination of decoding functions. The DyEnsemble method first learns a set of diverse candidate models.
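The abstract doesn't spell out how the candidate models are combined; as an illustration of the general idea, here is a minimal sketch that re-weights a set of candidate decoders at each time step by their recent predictive likelihood, assuming Gaussian observation noise and an exponential forgetting rule (both are assumptions for illustration, not DyEnsemble's exact algorithm):

```python
import numpy as np

def gaussian_loglik(y, y_pred, sigma=1.0):
    # log-likelihood of observation y under a Gaussian centred at y_pred
    return -0.5 * np.sum((y - y_pred) ** 2) / sigma**2

def dynamic_ensemble(candidates, X, Y, forgetting=0.9):
    """candidates: list of functions mapping neural features x to movement y.
    X, Y: sequences of neural features and observed movements."""
    K = len(candidates)
    log_w = np.full(K, -np.log(K))       # uniform initial weights
    preds = []
    for x, y in zip(X, Y):
        y_hat = np.array([f(x) for f in candidates])
        w = np.exp(log_w - log_w.max())  # normalise weights stably
        w /= w.sum()
        preds.append(w @ y_hat)          # weighted combination of decoders
        # soften old evidence (forgetting), then add the new likelihoods
        log_w = forgetting * log_w + np.array(
            [gaussian_loglik(y, yh) for yh in y_hat])
    return np.array(preds)

# toy usage: two linear decoders, one matching the true mapping
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Y = X @ np.array([1.0, -0.5, 0.2]) + 0.1 * rng.normal(size=100)
candidates = [lambda x: x @ np.array([1.0, -0.5, 0.2]),
              lambda x: x @ np.array([0.0, 1.0, 0.0])]
print(dynamic_ensemble(candidates, X, Y)[:5])
```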


Recurrent linear models of simultaneously-recorded neural populations

Neural Information Processing Systems

Population neural recordings with long-range temporal structure are often best understood in terms of a shared underlying low-dimensional dynamical process. Advances in recording technology provide access to an ever larger fraction of the population, but the standard computational approaches available to identify the collective dynamics scale poorly with the size of the dataset. Here we describe a new, scalable approach to discovering the low-dimensional dynamics that underlie simultaneously recorded spike trains from a neural population. Our method is based on recurrent linear models (RLMs) and relates closely to time-series models based on recurrent neural networks. We formulate RLMs for neural data by generalising the Kalman-filter-based likelihood calculation for latent linear dynamical systems (LDS) models to incorporate a generalised-linear observation process.
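As background for the generalisation the paper describes, here is a minimal sketch of the standard Kalman-filter log-likelihood for a linear-Gaussian LDS; the paper's contribution is to replace the Gaussian observation step with a generalised-linear one, which is not shown here:

```python
import numpy as np

# Kalman-filter log-likelihood for a linear-Gaussian LDS:
#   z_t = A z_{t-1} + w_t,  w_t ~ N(0, Q)    (latent dynamics)
#   y_t = C z_t + v_t,      v_t ~ N(0, R)    (Gaussian observations)
# RLMs generalise the observation step to a generalised-linear process.

def kalman_loglik(Y, A, C, Q, R, mu0, V0):
    n = C.shape[0]
    mu_pred, V_pred = mu0, V0                # prior on the first latent state
    ll = 0.0
    for y in Y:
        S = C @ V_pred @ C.T + R             # innovation covariance
        resid = y - C @ mu_pred
        ll += -0.5 * (n * np.log(2 * np.pi) + np.linalg.slogdet(S)[1]
                      + resid @ np.linalg.solve(S, resid))
        K = V_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
        mu = mu_pred + K @ resid             # filtered mean
        V = (np.eye(len(mu)) - K @ C) @ V_pred
        mu_pred, V_pred = A @ mu, A @ V @ A.T + Q  # predict next state
    return ll

# toy usage: 1-D latent state observed through 2 noisy channels
A = np.array([[0.95]]); C = np.array([[1.0], [0.5]])
Q = np.array([[0.1]]);  R = 0.2 * np.eye(2)
rng = np.random.default_rng(0)
Y = rng.normal(size=(50, 2))  # stand-in data for demonstration only
print(kalman_loglik(Y, A, C, Q, R, mu0=np.zeros(1), V0=np.eye(1)))
```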


Photoshop goes Neural

#artificialintelligence

The main target group of Adobe Neural Filters is creative agencies who want to benefit from ML in terms of efficiency. Disclaimer: for the portrait experiments in this publication I am using images generated by Artbreeder, which is based on StyleGAN/StyleGAN2. I also tried photographs of real humans, and the results are convincing as well. The "Skin Smoothing" filter (a Featured Filter) will surely be popular among photographers who would like to digitally "brush up" their models (I'll put it neutrally here, even though I prefer natural facial features without heavy retouching). You can adjust the blurriness and smoothness of a photo.