
General-purpose Dataflow Model with Neuromorphic Primitives

Zhang, Weihao, Du, Yu, Li, Hongyi, Ma, Songchen, Zhao, Rong

arXiv.org Artificial Intelligence

Neuromorphic computing exhibits great potential to provide high-performance benefits in various applications beyond neural networks. However, a general-purpose program execution model that aligns with the features of neuromorphic computing is required to bridge the gap between program versatility and neuromorphic hardware efficiency. The dataflow model offers a potential solution, but it suffers from high graph complexity and incompatibility with neuromorphic hardware when handling control-flow programs, which degrades programmability and performance. Here, we present a dataflow model tailored for neuromorphic hardware, called neuromorphic dataflow, which provides a compact, concise, and neuromorphic-compatible program representation for control logic. Neuromorphic dataflow introduces "when" and "where" primitives, which restructure the view of control, and embeds these primitives in the dataflow schema with the plasticity inherited from spiking algorithms. Our method enables the deployment of general-purpose programs on neuromorphic hardware with both programmability and plasticity, while fully utilizing the hardware's potential.
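
The abstract does not define the two primitives precisely, but one plausible reading is that a "when" gate controls the temporal condition under which a token is forwarded, while a "where" gate routes a token spatially to one of several output ports, so that branches become routing instead of jumps. The sketch below illustrates that reading; the class names and semantics are assumptions, not the paper's published API.

```python
# Illustrative sketch only: "when" as conditional token emission,
# "where" as token routing. Names and semantics are assumed.

class WhenGate:
    """Fires its data token only when the condition token is True."""
    def fire(self, data, cond):
        return [data] if cond else []   # no token -> downstream node stays idle

class WhereGate:
    """Routes a data token to exactly one of several output ports."""
    def __init__(self, n_ports):
        self.n_ports = n_ports
    def fire(self, data, port):
        out = [[] for _ in range(self.n_ports)]
        out[port] = [data]              # token appears on one port only
        return out

# An if/else becomes routing: port 0 = "then" branch, port 1 = "else" branch.
where = WhereGate(2)
then_tokens, else_tokens = where.fire(42, port=0)
```

Under this view, control flow never leaves the dataflow graph: the absence of a token, rather than a branch instruction, is what keeps an inactive path from executing.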


HyperS2V: A Framework for Structural Representation of Nodes in Hyper Networks

Liu, Shu, Lai, Cameron, Toriumi, Fujio

arXiv.org Artificial Intelligence

In contrast to regular (simple) networks, hyper networks can depict more complex relationships among nodes and store extensive information. Such networks are commonly found in real-world applications, for example in social interactions. Learning embedded representations for nodes translates network structures into simpler vector spaces, thereby allowing machine learning approaches designed for vector data to be extended to network data. Nevertheless, there remains a need to explore methods for learning embedded representations that prioritize structural aspects. This research introduces HyperS2V, a node embedding approach that centers on structural similarity within hyper networks. Initially, we establish the concept of hyper-degrees to capture the structural properties of nodes within hyper networks. Subsequently, a novel function is formulated to measure the structural similarity between different hyper-degree values. Lastly, we generate structural embeddings utilizing a multi-scale random walk framework. Moreover, a series of experiments, both intrinsic and extrinsic, is performed on both toy and real networks. The results underscore the superior performance of HyperS2V in terms of both interpretability and applicability to downstream tasks.
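
The abstract does not spell out how a hyper-degree is computed. One plausible formalization, sketched below purely for illustration, is to describe each node by how many incident hyperedges it has of each size; HyperS2V's actual definition may differ.

```python
from collections import Counter

# Assumed formalization of a "hyper-degree": for a node, the count of
# incident hyperedges grouped by hyperedge size. Illustrative only.

def hyper_degree(hyperedges, node):
    """hyperedges: iterable of sets of nodes. Returns {edge_size: count}."""
    sizes = [len(e) for e in hyperedges if node in e]
    return dict(Counter(sizes))

edges = [{1, 2}, {1, 2, 3}, {2, 3, 4}, {1, 4}]
deg1 = hyper_degree(edges, 1)   # node 1 sits in two 2-edges and one 3-edge
```

A descriptor of this kind captures structure rather than identity: two nodes far apart in the network can still have identical hyper-degrees, which is exactly the property a structural similarity function can then exploit.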


A Rigorous Analysis of Linsker-type Hebbian Learning

Feng, J., Pan, H., Roychowdhury, V. P.

Neural Information Processing Systems

We propose a novel rigorous approach for the analysis of Linsker's unsupervised Hebbian learning network. The behavior of this model is determined by the underlying nonlinear dynamics, which are parameterized by a set of parameters originating from the Hebbian rule and the arbor density of the synapses. These parameters determine the presence or absence of a specific receptive field (also referred to as a 'connection pattern') as a saturated fixed point attractor of the model. In this paper, we perform a qualitative analysis of the underlying nonlinear dynamics over the parameter space, determine the effects of the system parameters on the emergence of various receptive fields, and predict precisely within which parameter regime the network will have the potential to develop a specially designated connection pattern. In particular, this approach exposes, for the first time, the crucial role played by the synaptic density functions, and provides a complete and precise picture of the parameter space that defines the relationships among the different receptive fields. Our theoretical predictions are confirmed by numerical simulations.
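
The idea of a receptive field as a saturated fixed point can be made concrete with a toy version of Linsker-type dynamics: weights evolve under a Hebbian drive plus a correlation term and are hard-clipped at synaptic bounds, so the developed connection pattern is a configuration of saturated weights. The correlation matrix and parameter values below are toy choices for illustration, not the paper's.

```python
import numpy as np

# Toy Linsker-type dynamics: dw/dt = k1 + (Q + k2) w, with hard
# saturation at +/- w_max. Q, k1, k2, and the step size are illustrative.

def evolve(Q, k1, k2, n_steps=200, lr=0.05, w_max=1.0):
    n = Q.shape[0]
    w = np.zeros(n)                       # symmetric initial condition
    for _ in range(n_steps):
        w += lr * (k1 + (Q + k2) @ w)     # Hebbian drive + correlation term
        w = np.clip(w, -w_max, w_max)     # synaptic saturation bounds
    return w

Q = np.full((4, 4), 0.5) + 0.5 * np.eye(4)  # uniformly correlated inputs
w = evolve(Q, k1=0.5, k2=0.0)               # positive drive -> all-excitatory pattern
```

With positively correlated inputs and a positive Hebbian bias k1, every weight is driven to the upper bound, i.e., the all-excitatory connection pattern is the saturated fixed point for this parameter regime; the paper's contribution is to map out precisely which patterns arise in which regimes.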


Completing Networks by Learning Local Connection Patterns

Zhang, Zhang, Tao, Ruyi, Tao, Yongzai, Qi, Mingze, Zhang, Jiang

arXiv.org Artificial Intelligence

Network completion is a harder problem than link prediction because it must infer not only missing links but also missing nodes. Various methods have been proposed to solve this problem, but few of them employ structural information, namely the similarity of local connection patterns. In this paper, we propose a model named C-GIN that captures the local structural patterns of the observed part of a network, based on a Graph Auto-Encoder framework equipped with a Graph Isomorphism Network (GIN), and generalizes these patterns to complete the whole graph. Experiments and analysis on synthetic and real-world networks from different domains show that C-GIN achieves competitive performance while requiring less information, and in most cases obtains higher accuracy than baseline prediction models. We further propose a structure-based metric, the "Reachable Clustering Coefficient (CC)", and experiments show that our model performs better on networks with higher Reachable CC.
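
The message-passing core that C-GIN builds on is the standard GIN update: each node sums its neighbours' features, mixes in its own via a (1 + eps) factor, and passes the result through an MLP. The sketch below uses a single linear map plus ReLU in place of a full MLP; it shows the aggregation rule, not C-GIN's full encoder-decoder.

```python
import numpy as np

# Minimal sketch of one GIN layer: h_v' = MLP((1+eps) * h_v + sum_{u in N(v)} h_u).
# A single linear map + ReLU stands in for the MLP here, for brevity.

def gin_layer(A, H, W, eps=0.0):
    """A: (n, n) adjacency matrix, H: (n, d) node features, W: (d, d') weights."""
    agg = (1 + eps) * H + A @ H        # self features plus summed neighbour features
    return np.maximum(agg @ W, 0.0)    # ReLU nonlinearity

A = np.array([[0.0, 1.0], [1.0, 0.0]])  # two connected nodes
H = np.array([[1.0, 0.0], [0.0, 1.0]])
out = gin_layer(A, H, np.eye(2))
```

The sum aggregator (rather than mean or max) is what gives GIN its discriminative power over local connection patterns, which is presumably why it suits a task built around structural similarity.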


Pre-Defined Sparse Neural Networks with Hardware Acceleration

Dey, Sourya, Huang, Kuan-Wen, Beerel, Peter A., Chugg, Keith M.

arXiv.org Machine Learning

As more data have become available, the size and complexity of neural networks (NNs) have risen sharply, with modern NNs containing millions or even billions of trainable parameters [1], [2]. These massive NNs come at the cost of large computational and storage demands. The current state of the art is to train large NNs on Graphics Processing Units (GPUs) in the cloud - a process that can take days to weeks even on powerful GPUs [1]-[3] or similar programmable processors with multiply-accumulate accelerators [4]. Once trained, the model can be used for inference, which is less computationally intensive and is typically performed on more general-purpose processors (i.e., Central Processing Units (CPUs)). It is increasingly desirable to run inference, and even some retraining, on embedded processors, which have limited resources for computation and storage. In this regard, model reduction has been identified as a key to NN acceleration by several prominent researchers [5]. It is generally performed post-training to reduce the memory required to store the model for inference - e.g., through quantization, compression, and parameter grouping [6]-[9]. Decreasing the time, computation, storage, and energy costs of training and inference is therefore a highly relevant goal.
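
In contrast to post-training reduction, a pre-defined sparse layer fixes its connection pattern before training: a binary mask chosen up front determines which weights exist, so both the forward pass and every weight update touch only the masked-in entries. The sketch below illustrates the mechanism with an arbitrary random mask; the paper's specific structured patterns and shapes are not reproduced here.

```python
import numpy as np

# Sketch of a pre-defined sparse fully connected layer: the sparsity
# pattern is a fixed binary mask chosen before training. Mask density
# and layer shapes here are illustrative.

rng = np.random.default_rng(0)
mask = (rng.random((4, 3)) < 0.5).astype(float)  # fixed connection pattern
W = rng.standard_normal((4, 3)) * mask           # weights exist only on the mask

def forward(x):
    return x @ (W * mask)    # masked-out connections never contribute

x = np.ones((1, 4))
y = forward(x)
```

Because the pattern is known in advance, hardware can be provisioned for exactly the surviving multiply-accumulates, which is the acceleration opportunity this line of work targets.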


Understanding Community Structure in Layered Neural Networks

Watanabe, Chihiro, Hiramatsu, Kaoru, Kashino, Kunio

arXiv.org Machine Learning

A layered neural network is now one of the most common choices for prediction on high-dimensional practical data sets, where the relationship between input and output data is complex and cannot be represented well by simple conventional models. Its effectiveness has been shown in various tasks; however, the lack of interpretability of the trained result has limited the application areas of layered neural networks. In our previous studies, we proposed methods for extracting a simplified global structure of a trained layered neural network by classifying its units into communities according to their connection patterns with adjacent layers. These methods provided knowledge about the strength of the relationships between communities from the existence of bundled connections, which are determined by threshold processing of the connection ratio between pairs of communities. However, it has been difficult to understand the role of each community quantitatively by observing the modular structure alone. We could only determine to which sets of input and output dimensions each community was mainly connected, by tracing the bundled connections from the community to the input and output layers. Another problem is that the resulting modular structure changes greatly depending on the threshold hyperparameter used for determining bundled connections. In this paper, we propose a new method for quantitatively interpreting the role of each community in inference, by defining the effect of each input dimension on a community and the effect of a community on each output dimension. We show experimentally that the proposed method can reveal the role of each part of a layered neural network by applying neural networks to three types of data sets, extracting communities from the trained networks, and applying the proposed method to the resulting community structure.
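
The community-extraction step described above, i.e., classifying units by the similarity of their connection patterns with adjacent layers, can be sketched as clustering units on their weight vectors. The tiny k-means below is only a stand-in for illustration; the authors' actual classification procedure may differ.

```python
import numpy as np

# Sketch: group hidden units into communities by clustering their
# incoming-weight vectors. k-means here is a stand-in for the paper's
# actual unit-classification method.

def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

# Two clear connection patterns: units 0-1 connect mostly to input 0,
# units 2-3 mostly to input 1.
W_in = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = kmeans(W_in, k=2)
```

Units with near-identical connection patterns end up in the same community, which is precisely the granularity at which the paper then quantifies input-to-community and community-to-output effects.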


A Rigorous Analysis of Linsker-type Hebbian Learning

Feng, J., Pan, H., Roychowdhury, V. P.

Neural Information Processing Systems

His simulations have shown that for appropriate parameter regimes, several structured connection patterns (e.g., centre-surround and oriented afferent receptive fields (aRFs)) occur progressively as the Hebbian evolution of the weights is carried out layer by layer. The behavior of Linsker's model is determined by the underlying nonlinear dynamics which are parameterized by a set of parameters originating from the Hebbian rule and the arbor density of the synapses.