Dong, Yihong, Peng, Ying, Yang, Muqiao, Lu, Songtao, Shi, Qingjiang

Deep neural networks have proven to be a useful class of tools for addressing signal recognition problems in recent years, especially for identifying the nonlinear feature structures of signals. However, the power of most deep learning techniques relies heavily on abundant training data, so the performance of classic neural nets drops sharply when the number of training samples is small or when unseen data are presented in the testing phase. This calls for an advanced strategy, i.e., model-agnostic meta-learning (MAML), which is able to capture an invariant representation of the data samples or signals. In this paper, inspired by the special structure of signals, i.e., the real and imaginary parts that constitute practical time-series signals, we propose a Complex-valued Attentional MEta Learner (CAMEL) for the problem of few-shot signal recognition, leveraging attention and meta-learning in the complex domain. To the best of our knowledge, this is also the first complex-valued MAML that can find first-order stationary points of general nonconvex problems with theoretical convergence guarantees. Extensive experimental results showcase the superiority of the proposed CAMEL compared with state-of-the-art methods.
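To make the MAML idea behind this abstract concrete, here is a minimal toy sketch of a first-order MAML-style inner/outer update on a single complex-valued parameter. Everything here (the one-parameter model, the squared-error loss, the Wirtinger-style gradient, the step sizes) is an illustrative assumption and does not reproduce CAMEL's actual architecture, attention layers, or losses.

```python
import numpy as np

def loss(w, x, y):
    # Squared error of a one-parameter complex linear model y ≈ w * x.
    return np.abs(w * x - y) ** 2

def grad(w, x, y):
    # Wirtinger-style gradient of |w*x - y|^2 with respect to conj(w).
    return (w * x - y) * np.conj(x)

def maml_step(w, tasks, alpha=0.05, beta=0.05):
    """One outer update: adapt per task, then average the adapted gradients
    (the first-order MAML approximation, ignoring second derivatives)."""
    outer_grad = 0.0 + 0.0j
    for x, y in tasks:
        w_adapted = w - alpha * grad(w, x, y)   # inner (task-specific) step
        outer_grad += grad(w_adapted, x, y)     # gradient at adapted point
    return w - beta * outer_grad / len(tasks)

# Two toy "tasks" generated by the same ground-truth parameter w* = 1 + 1j.
w_true = 1.0 + 1.0j
tasks = [(1.0 + 0.0j, w_true * (1.0 + 0.0j)),
         (0.0 + 1.0j, w_true * (0.0 + 1.0j))]

w = 0.0 + 0.0j
for _ in range(200):
    w = maml_step(w, tasks)
# w now converges toward w_true, the parameter shared across tasks.
```

The point of the sketch is only the two-level structure: an inner gradient step per task, followed by an outer step on the parameter initialization, all carried out directly in complex arithmetic.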

This recent paper from the Massachusetts Institute of Technology (MIT) shows a more than interesting approach to how machines can understand and interpret the relationships between objects in a scene. As this and other recent studies reflect, the pain point is clear: deep learning models are getting very good at identifying objects in all kinds of scenes, yet they can't understand the relationships of those objects with each other and with the surrounding environment. Even simple relationships that are obvious to a human, like "this is inside of that" or "that is on top of this," are very hard for widely used object detection and segmentation models. A growing number of use cases will require this kind of understanding, and this evolution in the models will require new training data.

Hi guys! Are you working on neural networks or deep learning models and have come across activation functions? Are you wondering what an activation function is and why we even need them in our deep learning models? In this post we are going to talk about activation functions and why we should use them in our neural networks. If you are planning to work in deep learning and build models for image, video, or text recognition, you must have a very good understanding of activation functions and why they are required. I am going to cover this topic in a very non-mathematical, non-technical way so that you can relate to it and build an intuition.
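If you want a quick preview of the core intuition in code (this tiny example is mine, not from the post): stacking linear layers without an activation function collapses into a single linear layer, and only a nonlinearity such as ReLU breaks that collapse.

```python
import numpy as np

# Toy input and two layers of weights (values chosen arbitrarily).
x = np.array([[1.0, -2.0]])
W1 = np.array([[1.0, 0.5], [-1.0, 2.0]])
W2 = np.array([[0.3], [-0.7]])

# Two linear layers with no activation...
linear_out = x @ W1 @ W2
# ...are exactly equivalent to one linear layer with weights W1 @ W2.
collapsed = x @ (W1 @ W2)
assert np.allclose(linear_out, collapsed)

# Inserting ReLU between the layers breaks this collapse,
# which is what lets a network model nonlinear relationships.
relu = lambda z: np.maximum(z, 0.0)
nonlinear_out = relu(x @ W1) @ W2
```

However many linear layers you stack, without activations the whole network can only ever draw straight lines; with them, it can fit curves.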

Hajij, Mustafa, Istvan, Kyle, Zamzmi, Ghada

Cell complexes are topological spaces constructed from simple blocks called cells. They generalize graphs, simplicial complexes, and polyhedral complexes, which form important domains for practical applications. We propose a general, combinatorial, and unifying construction for performing neural network-type computations on cell complexes. Furthermore, we introduce inter-cellular message passing schemes, i.e., message passing schemes on cell complexes that take the topology of the underlying space into account. In particular, our method generalizes many of the most popular types of graph neural networks.
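For readers less familiar with message passing, here is a minimal sketch of the ordinary graph case, which the abstract says the inter-cellular schemes generalize to higher-dimensional cells. The function names, aggregation rule (neighbor averaging), and shapes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def message_passing(features, adjacency, weight):
    """One round of plain graph message passing: each node averages its
    neighbors' features, then applies a shared linear map and ReLU."""
    degree = adjacency.sum(axis=1, keepdims=True)
    aggregated = adjacency @ features / np.maximum(degree, 1.0)
    return np.maximum(aggregated @ weight, 0.0)

# Triangle graph: 3 nodes, all pairwise connected (vertices = 0-cells,
# edges = 1-cells; a cell complex would add the 2-cell filling the triangle).
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
H = np.eye(3)  # one-hot node features
W = np.random.default_rng(0).normal(size=(3, 4))

H1 = message_passing(H, A, W)  # updated features, one per node
```

An inter-cellular scheme replaces the single node-adjacency matrix `A` with incidence/adjacency relations between cells of different dimensions, so that edges and faces also carry and exchange features.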

What are your thoughts on the topic? How likely do you think it is that a neural network model will eventually learn to reason and prove theorems like humans? The success of AlphaZero shows that it's possible for agents based on artificial neural networks to derive their own knowledge from a simple set of rules. However, they suffer from known challenges in reinforcement learning: they are not very sample-efficient, and an RL agent capable of understanding mathematics has yet to be seen. After looking through many papers, I think ML models show a general lack of ability to understand logic.