
A Preliminaries

Neural Information Processing Systems

A.1 Preliminaries on Comparison of Exploration Space

[Appendix excerpt; the equations and figure panels did not survive extraction. Recoverable content: the size of the exploration space for the layer-wise manner in the SET algorithm; the exploration region of each kernel group as defined in Section 3.4, shown as red boxes; Figures 9, 10, and 11 report statistics of eNRF sizes for the datasets (Figure 10: Daily Sport; Figure 11: EEG2), where each sub-figure shows eNRF sizes from the first dynamic sparse CNN layer at a specific sparsity ratio, S = 50% (left), S = 70% (middle), and S = 90% (right); to demonstrate the advantage of varied eNRF coverage, test accuracy is compared between DSN models with various eNRF coverage and the optimal NRF; Sections 4.5 and B.1 discuss grouping the kernels in each dynamic sparse CNN layer.]



Dynamic Sparse Network for Time Series Classification: Learning What to "See"

Xiao, Qiao, Wu, Boqian, Zhang, Yu, Liu, Shiwei, Pechenizkiy, Mykola, Mocanu, Elena, Mocanu, Decebal Constantin

arXiv.org Artificial Intelligence

The receptive field (RF), which determines the region of a time series that is "seen" and used, is critical to improving performance in time series classification (TSC). However, the variation of signal scales across and within time series data makes it challenging to decide on proper RF sizes for TSC. In this paper, we propose a dynamic sparse network (DSN) with sparse connections for TSC, which can learn to cover various RF sizes without cumbersome hyper-parameter tuning. The kernels in each sparse layer can be explored under constrained regions by dynamic sparse training, which makes it possible to reduce the resource cost. Experimental results show that the proposed DSN model achieves state-of-the-art performance on both univariate and multivariate TSC datasets at less than 50% of the computational cost of recent baseline methods, opening the path towards more accurate resource-aware methods for time series analysis. Our code is publicly available at: https://github.com/QiaoXiao7282/DSN.
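The dynamic sparse training the abstract refers to is typically a prune-and-regrow cycle in the spirit of SET: periodically drop the smallest-magnitude active connections in each sparse kernel and regrow the same number elsewhere, keeping the sparsity ratio fixed. The sketch below illustrates one such step on a sparse 1D convolution kernel; the function name, tensor shapes, sparsity ratio, and random regrowth are illustrative assumptions, not the authors' implementation (DSN additionally constrains where regrowth may happen).

```python
import numpy as np

def prune_and_regrow(weights, mask, prune_frac=0.3):
    """One SET-style update: drop the smallest-magnitude active weights,
    then regrow the same number of connections at random inactive positions,
    so the overall sparsity ratio stays constant."""
    w, m = weights.ravel(), mask.ravel()
    active = np.flatnonzero(m)
    n_prune = int(prune_frac * active.size)
    if n_prune == 0:
        return weights, mask
    # Prune: deactivate the active weights with the smallest magnitude.
    order = np.argsort(np.abs(w[active]))
    dropped = active[order[:n_prune]]
    m[dropped] = 0
    w[dropped] = 0.0
    # Regrow: activate an equal number of randomly chosen inactive positions.
    inactive = np.flatnonzero(m == 0)
    grown = np.random.choice(inactive, size=n_prune, replace=False)
    m[grown] = 1
    w[grown] = np.random.randn(n_prune) * 0.01  # small re-init, as in SET
    return weights, mask

# Toy usage: a (out_channels, in_channels, kernel_size) conv kernel at ~50% sparsity.
rng = np.random.default_rng(0)
weights = rng.standard_normal((8, 1, 9))
mask = (rng.random(weights.shape) < 0.5).astype(int)
weights *= mask
before = mask.sum()
weights, mask = prune_and_regrow(weights, mask)
assert mask.sum() == before            # sparsity ratio preserved
assert np.all(weights[mask == 0] == 0) # inactive weights are exactly zero
```

Because pruning and regrowth move the same number of connections, the per-layer parameter budget never changes; what changes over training is *which* time-steps each kernel attends to, which is how the network adapts its effective receptive field.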


r/MachineLearning - Learning like humans with Deep Symbolic Networks

#artificialintelligence

Abstract: We introduce the Deep Symbolic Network (DSN) model, which aims at becoming the white-box version of Deep Neural Networks (DNN). The DSN model provides a simple, universal yet powerful structure, similar to DNN, to represent any knowledge of the world, which is transparent to humans. The conjecture behind the DSN model is that any type of real world objects sharing enough common features are mapped into human brains as a symbol. Those symbols are connected by links, representing the composition, correlation, causality, or other relationships between them, forming a deep, hierarchical symbolic network structure. Powered by such a structure, the DSN model is expected to learn like humans, because of its unique characteristics.