Reconfigurable Neural Net Chip with 32K Connections

Neural Information Processing Systems

We describe a CMOS neural net chip with a reconfigurable network architecture. It contains 32,768 binary, programmable connections arranged in 256 'building block' neurons. Several 'building blocks' can be connected to form long neurons with up to 1024 binary connections, or to form neurons with analog connections. Single- or multi-layer networks can be implemented with this chip. We have integrated the chip into a board system together with a digital signal processor and fast memory.


H.P. Graf, R. Janow, D. Henderson, and R. Lee, AT&T Bell Laboratories, Room 4G320, Holmdel, NJ 07733
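To make the 'building block' idea concrete, here is a minimal software sketch of how several blocks of binary connections could be chained into one long neuron, with partial sums combined before a single threshold. The block size of 256, the +1/-1 weight encoding, and the zero threshold are assumptions for illustration, not details of the actual chip.

```python
import numpy as np

def block_partial_sum(weights, inputs):
    """One 'building block': 256 binary connections; returns the analog partial sum (no threshold)."""
    assert weights.shape == inputs.shape == (256,)
    return int(np.dot(weights, inputs))

def long_neuron(blocks, inputs):
    """Chain several blocks into one long neuron: partial sums are added, then thresholded once."""
    total = sum(block_partial_sum(w, x) for w, x in zip(blocks, inputs))
    return 1 if total >= 0 else 0  # hypothetical zero threshold

rng = np.random.default_rng(0)
# Four 256-connection blocks -> one neuron with 1024 binary (+1/-1) connections.
blocks = [rng.choice([-1, 1], size=256) for _ in range(4)]
x = [rng.choice([0, 1], size=256) for _ in range(4)]
print(long_neuron(blocks, x))
```

Reconfiguration then amounts to deciding, per neuron, how many blocks feed a shared threshold: one block each gives 256 short neurons, four blocks each gives 64 long ones.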


Hardware/Software Co-Design for Spike Based Recognition

arXiv.org Artificial Intelligence

Practical applications based on recurrent spiking neurons are limited by their non-trivial learning algorithms. The temporal nature of spiking neurons is favorable for hardware implementation, where signals can be represented in binary form and communication can be carried out through spikes. This work investigates the potential of recurrent spiking neuron implementations on reconfigurable platforms and their applicability in temporal applications. A theoretical framework of reservoir computing is investigated for hardware/software implementation. In this framework, only the readout neurons are trained, which removes the burden of training at the network level. These recurrent neural networks are termed microcircuits and are viewed as basic computational units in cortical computation. The paper presents a novel hardware/software strategy for implementing recurrent neural reservoirs on FPGAs, and the design is implemented and its functionality tested in the context of a speech recognition application.
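The train-only-the-readout idea can be sketched in a few lines with a rate-based echo-state-style reservoir: a fixed random recurrent network produces states, and a linear readout is fit by ridge regression. This is a software illustration of the general reservoir-computing principle, not the paper's spiking FPGA design; the reservoir size, spectral-radius scaling, and the toy one-step-delay task are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 1, 100, 500

# Fixed random reservoir: these weights are never trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

u = rng.uniform(-1, 1, (T, n_in))  # toy input signal
y = np.roll(u[:, 0], 1)            # toy target: input delayed by one step

# Run the reservoir and collect its states.
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W_in @ u[t] + W @ x)
    states[t] = x

# Train ONLY the linear readout, via ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ y)
pred = states @ W_out
print(float(np.mean((pred[10:] - y[10:]) ** 2)))  # mean squared error after a short washout
```

Because training reduces to one linear solve over collected states, the recurrent part can live in fixed hardware while only the lightweight readout is fit in software, which is what makes the hardware/software split attractive.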