hdnn
1-bit Quantized On-chip Hybrid Diffraction Neural Network Enabled by Authentic All-optical Fully-connected Architecture
Shao, Yu, Gao, Haiqi, Chen, Yipeng, Liu, Yujie, Wen, Junren, He, Haidong, Shao, Yuchuan, Zhang, Yueguang, Shen, Weidong, Yang, Chenying
This study introduces the Hybrid Diffraction Neural Network (HDNN), a novel architecture that incorporates matrix multiplication into diffraction neural networks (DNNs), combining the benefits of conventional optical neural networks with those of DNNs to overcome the modulation limitations inherent in optical diffraction neural networks. Using a single phase modulation layer and an amplitude modulation layer, the trained network achieves digit-recognition accuracies of 96.39% in simulation and 89% in experiment. We also develop the Binning Design (BD) method, which mitigates the constraints that sampling intervals impose on diffraction units and substantially streamlines experimental procedures. Furthermore, we propose an on-chip HDNN that employs a beam-splitting phase modulation layer for a higher level of integration and significantly relaxes device-fabrication requirements by replacing metasurfaces with relief surfaces designed through 1-bit quantization. Finally, we conceptualize an all-optical HDNN-assisted lesion-detection network, achieving detection outcomes 100% consistent with simulation predictions.
- Asia > China > Zhejiang Province > Hangzhou (0.05)
- Asia > China > Shanghai > Shanghai (0.04)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.04)
- Asia > China > Beijing > Beijing (0.04)
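As a rough illustration of the architecture this abstract describes, the sketch below simulates one phase-modulation layer followed by one amplitude-modulation layer using angular-spectrum propagation, including the 1-bit phase quantization mentioned for the on-chip variant. This is not the authors' implementation; the wavelength, pixel pitch, propagation distance, and random mask values are placeholder assumptions.

```python
# Illustrative sketch (not the paper's code): a diffractive forward pass with
# one 1-bit-quantized phase mask and one amplitude mask. All parameter values
# (wavelength, pitch, distance) are assumptions for demonstration only.
import numpy as np

def angular_spectrum(field, wavelength, pitch, distance):
    """Propagate a complex field by `distance` via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance))

def quantize_phase_1bit(phase):
    """1-bit quantization: each diffraction unit takes one of two levels, 0 or pi."""
    return np.where(np.mod(phase, 2 * np.pi) < np.pi, 0.0, np.pi)

rng = np.random.default_rng(0)
n, wavelength, pitch, z = 128, 532e-9, 2e-6, 500e-6   # assumed values
phase_mask = quantize_phase_1bit(rng.uniform(0, 2 * np.pi, (n, n)))
amp_mask = rng.uniform(0.0, 1.0, (n, n))

field = np.ones((n, n), dtype=complex)                # plane-wave input
field = angular_spectrum(field * np.exp(1j * phase_mask), wavelength, pitch, z)
field = angular_spectrum(field * amp_mask, wavelength, pitch, z)
intensity = np.abs(field) ** 2                        # a detector reads intensity
```

In a trained network the phase and amplitude masks would be learned parameters rather than random arrays; the quantization step shows why relief surfaces with two phase levels can stand in for metasurfaces.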
Universal Approximation Property of Hamiltonian Deep Neural Networks
Zakwan, Muhammad, d'Angelo, Massimiliano, Ferrari-Trecate, Giancarlo
This paper investigates the universal approximation capabilities of Hamiltonian Deep Neural Networks (HDNNs) that arise from the discretization of Hamiltonian Neural Ordinary Differential Equations. Recently, it has been shown that HDNNs enjoy, by design, non-vanishing gradients, which provide numerical stability during training. However, although HDNNs have demonstrated state-of-the-art performance in several applications, a comprehensive study to quantify their expressivity is missing. In this regard, we provide a universal approximation theorem for HDNNs and prove that a portion of the flow of HDNNs can approximate any continuous function over a compact domain arbitrarily well. This result provides a solid theoretical foundation for the practical use of HDNNs.
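For intuition, here is a minimal sketch of one common HDNN construction of this kind: a symplectic-Euler discretization of Hamiltonian dynamics, assuming a log-cosh Hamiltonian so that each partial gradient takes the form K^T tanh(Kx + b). The specific Hamiltonian, step size, and depth are assumptions for illustration, not necessarily the construction analyzed in the paper.

```python
# Minimal sketch (assumed construction): an HDNN as symplectic-Euler steps of
#   dp/dt = -dH/dq,  dq/dt = +dH/dp,
# with H(p, q) = sum(log(cosh(K1 q + b1))) + sum(log(cosh(K2 p + b2))),
# whose gradients are K1^T tanh(K1 q + b1) and K2^T tanh(K2 p + b2).
import numpy as np

def hdnn_forward(p, q, K1, b1, K2, b2, h=0.1, steps=8):
    """Run `steps` HDNN layers, each a symplectic-Euler step of size h."""
    for _ in range(steps):
        p = p - h * K1.T @ np.tanh(K1 @ q + b1)  # dp/dt = -dH/dq
        q = q + h * K2.T @ np.tanh(K2 @ p + b2)  # dq/dt = +dH/dp (uses updated p)
    return p, q

rng = np.random.default_rng(1)
d = 4
K1, K2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
b1, b2 = rng.normal(size=d), rng.normal(size=d)
p0, q0 = rng.normal(size=d), rng.normal(size=d)
p_T, q_T = hdnn_forward(p0, q0, K1, b1, K2, b2)
```

The semi-implicit update (the q-step uses the already-updated p) preserves the near-symplectic structure that underlies the non-vanishing-gradient property the abstract refers to.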
Representation and Induction of Finite State Machines using Time-Delay Neural Networks
Clouse, Daniel S., Giles, C. Lee, Horne, Bill G., Cottrell, Garrison W.
This work investigates the representational and inductive capabilities of time-delay neural networks (TDNNs) in general, and of two subclasses of TDNN, those with delays only on the inputs (IDNN), and those which include delays on hidden units (HDNN). Both architectures are capable of representing the same class of languages, the definite memory machine (DMM) languages, but the delays on the hidden units in the HDNN help it outperform the IDNN on problems composed of repeated features over short time windows.
- North America > United States > New Jersey > Mercer County > Princeton (0.14)
- North America > United States > California > San Diego County > San Diego (0.05)
- North America > United States > California > San Diego County > La Jolla (0.05)
- (2 more...)
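A toy sketch of the distinction this abstract draws, under assumed dimensions and a tanh nonlinearity: in the IDNN only the input is tapped over a window of d+1 time steps, while in the HDNN the output unit additionally sees the last d hidden-unit vectors. Real TDNNs typically share tap weights across time shifts; that detail is simplified away here.

```python
# Toy contrast between the two TDNN subclasses (shapes and d are assumptions).
import numpy as np

def idnn_output(x_window, W_in, w_out):
    """IDNN: delays only on the inputs; the output unit sees just h(t)."""
    h = np.tanh(W_in @ x_window.ravel())          # x_window stacks x(t)..x(t-d)
    return h, w_out @ h

def hdnn_output(x_window, h_window, W_in, w_out):
    """HDNN: the output unit also sees the last d delayed hidden vectors."""
    h = np.tanh(W_in @ x_window.ravel())
    return h, w_out @ np.concatenate([h, h_window.ravel()])

rng = np.random.default_rng(2)
n_in, n_hid, d = 3, 5, 2
W_in = rng.normal(size=(n_hid, (d + 1) * n_in))
w_out_idnn = rng.normal(size=n_hid)
w_out_hdnn = rng.normal(size=(d + 1) * n_hid)

x_window = rng.normal(size=(d + 1, n_in))         # x(t), x(t-1), ..., x(t-d)
h_window = np.zeros((d, n_hid))                   # h(t-1), ..., h(t-d)
_, y_idnn = idnn_output(x_window, W_in, w_out_idnn)
_, y_hdnn = hdnn_output(x_window, h_window, W_in, w_out_hdnn)
```

Both networks remain feedforward (delay taps, not recurrence), which is why both are limited to definite memory machine languages; the extra hidden-unit taps only change which members of that class are easy to learn.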