Problem Solving
Silicon Models for Auditory Scene Analysis
Lazzaro, John, Wawrzynek, John
We are developing special-purpose, low-power analog-to-digital converters for speech and music applications that feature analog circuit models of biological audition to process the audio signal before conversion. This paper describes our most recent converter design, and a working system that uses several copies of the chip to compute multiple representations of sound from an analog input. This multi-representation system demonstrates the plausibility of inexpensively implementing an auditory scene analysis approach to sound processing.
1. INTRODUCTION
The visual system computes multiple representations of the retinal image, such as motion, orientation, and stereopsis, as an early step in scene analysis. Likewise, the auditory brainstem computes secondary representations of sound, emphasizing properties such as binaural disparity, periodicity, and temporal onsets. Recent research in auditory scene analysis involves using computational models of these auditory brainstem representations in engineering applications. Computation is a major limitation in auditory scene analysis research: the complete auditory processing system described in (Brown and Cooke, 1994) operates at approximately 4000 times real time, running under UNIX on a Sun SPARCstation 1. Standard approaches to hardware acceleration for signal processing algorithms could be used to ease this computational burden in a research environment; a variety of parallel, fixed-point hardware products would work well on these algorithms.
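To make the multi-representation idea concrete in software (the paper itself describes analog circuits, not code), the sketch below computes two brainstem-style representations, temporal onsets and periodicity, from a single audio input using a crude digital filterbank; all function names and parameters are illustrative assumptions, not the authors' design.
# Hypothetical software sketch (not the authors' analog circuits): compute
# "temporal onset" and "periodicity" representations from one audio input.
import numpy as np
from scipy.signal import butter, lfilter

def cochlear_channels(signal, fs, center_freqs):
    """Crude digital filterbank standing in for a silicon cochlea."""
    channels = []
    for cf in center_freqs:
        band = [0.8 * cf / (fs / 2), 1.2 * cf / (fs / 2)]
        b, a = butter(2, band, btype="bandpass")
        channels.append(np.abs(lfilter(b, a, signal)))   # rectified channel output
    return np.array(channels)

def onset_map(channels):
    """Temporal-onset representation: positive rate of change per channel."""
    return np.maximum(np.diff(channels, axis=1), 0.0)

def periodicity_map(channels, max_lag):
    """Periodicity representation: short-lag autocorrelation per channel."""
    lags = range(1, max_lag)
    return np.array([[np.dot(ch[:-lag], ch[lag:]) for lag in lags] for ch in channels])

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
audio = np.sin(2 * np.pi * 220 * t) * (t > 0.1)          # toy input with one onset
chans = cochlear_channels(audio, fs, [200, 400, 800, 1600])
onsets = onset_map(chans)                                 # one representation
periods = periodicity_map(chans, max_lag=200)             # a second representation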
Cholinergic suppression of transmission may allow combined associative memory function and self-organization in the neocortex
Hasselmo, Michael E., Cekic, Milos
Selective suppression of transmission at feedback synapses during learning is proposed as a mechanism for combining associative feedback with self-organization of feedforward synapses. Experimental data demonstrates cholinergic suppression of synaptic transmission in layer I (feedback synapses), and a lack of suppression in layer IV (feedforward synapses). A network with this feature uses local rules to learn mappings which are not linearly separable. During learning, sensory stimuli and desired response are simultaneously presented as input. Feedforward connections form self-organized representations of input, while suppressed feedback connections learn the transpose of feedforward connectivity. During recall, suppression is removed, sensory input activates the self-organized representation, and activity generates the learned response.
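A minimal rate-model sketch of the proposed mechanism may help; it is an illustrative reconstruction under simplifying assumptions (winner-take-all hidden units, invented names such as representation and recall), not the authors' simulation. Feedforward weights W self-organize with a competitive rule, feedback weights B are trained toward a transpose-like mapping while feedback transmission is gated off during learning, and the gate is reopened during recall.
# Illustrative sketch only: feedforward weights W self-organize, feedback
# weights B learn a transpose-like mapping while feedback transmission is
# suppressed (gated to zero); suppression is removed during recall.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 8, 4
W = rng.random((n_hidden, n_in))             # feedforward (layer IV-style) weights
W /= W.sum(axis=1, keepdims=True)
B = np.zeros((n_in, n_hidden))               # feedback (layer I-style) weights

def representation(x):
    """Winner-take-all self-organized representation of the input."""
    h = np.zeros(n_hidden)
    h[np.argmax(W @ x)] = 1.0
    return h

def learn(x, lr=0.1):
    """Learning: feedback transmission suppressed, but both pathways plastic."""
    k = np.argmax(representation(x))
    W[k] += lr * (x - W[k])                  # competitive self-organization
    B[:, k] += lr * (x - B[:, k])            # feedback learns transpose-like map

def recall(x_partial):
    """Recall: suppression removed, feedback generates the learned response."""
    return B @ representation(x_partial)

# Toy usage: present stimulus plus desired response together, then cue with
# the stimulus alone and let feedback regenerate the full pattern.
pattern = np.array([1, 1, 0, 0, 0, 0, 1, 0], dtype=float)
for _ in range(20):
    learn(pattern)
cue = pattern.copy()
cue[6] = 0.0                                 # drop the "response" component
print(np.round(recall(cue), 2))              # active entries recover toward 1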
Science and Engineering in Knowledge Representation and Reasoning
As a field, knowledge representation has often been accused of being off in a theoretical no-man's land, removed from, and largely unrelated to, the central issues in AI. This article argues that recent trends in KR instead demonstrate the benefits of the interplay between science and engineering, a lesson from which all AI could benefit. This article grew out of a survey talk on the Third International Conference on Knowledge Representation and Reasoning (KR-92) (Nebel, Rich, and Swartout 1992) that I presented at the Thirteenth International Joint Conference on Artificial Intelligence (IJCAI-93).
Steps toward Formalizing Context
The importance of contextual reasoning is emphasized by various researchers in AI. (A partial list includes John McCarthy and his group, R. V. Guha, Yoav Shoham, Giuseppe Attardi and Maria Simi, and Fausto Giunchiglia and his group.) Here, we survey the problem of formalizing context and explore what is needed for an acceptable account of this abstract notion.
Adaptive Problem-solving for Large-scale Scheduling Problems: A Case Study
Although most scheduling problems are NP-hard, domain-specific techniques perform well in practice but are quite expensive to construct. In adaptive problem-solving, domain-specific knowledge is acquired automatically for a general problem solver with a flexible control architecture. In this approach, a learning system explores a space of possible heuristic methods for one well suited to the eccentricities of the given domain and problem distribution. In this article, we discuss an application of the approach to scheduling satellite communications. Using problem distributions based on actual mission requirements, our approach identifies strategies that not only decrease the amount of CPU time required to produce schedules, but also increase the percentage of problems that are solvable within computational resource limitations.
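The following toy sketch illustrates the flavor of adaptive problem-solving under heavy simplifying assumptions: candidate strategies are orderings of three invented scheduling heuristics, problems are random toy instances rather than actual mission requirements, and the learner simply keeps the strategy that solves the most sampled problems. It is not the system described in the article.
# Toy sketch: search a space of heuristic strategies (orderings of three
# invented sort keys) and keep the one that solves the most problems drawn
# from a sample distribution. Not the article's scheduler.
import random
from itertools import permutations

def sample_problem(rng):
    """Stand-in instance: tasks with (duration, deadline, priority)."""
    return [(rng.randint(1, 5), rng.randint(5, 25), rng.random()) for _ in range(6)]

SORT_KEYS = {
    "shortest_first": lambda t: t[0],
    "earliest_deadline": lambda t: t[1],
    "highest_priority": lambda t: -t[2],
}

def deadline_misses(tasks, strategy):
    """Greedy schedule: order tasks by the strategy's keys, count late tasks."""
    ordered = sorted(tasks, key=lambda t: tuple(SORT_KEYS[k](t) for k in strategy))
    time, misses = 0, 0
    for duration, deadline, _ in ordered:
        time += duration
        misses += time > deadline
    return misses

def adapt(n_problems=200, seed=0):
    """Return the strategy solving the most sampled problems, with its score."""
    rng = random.Random(seed)
    problems = [sample_problem(rng) for _ in range(n_problems)]
    scored = [(sum(deadline_misses(p, s) == 0 for p in problems), s)
              for s in permutations(SORT_KEYS)]
    return max(scored)

print(adapt())   # e.g. (count_solved, ('earliest_deadline', ...))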
Programming CHIP for the IJCAI-95 Robot Competition
Firby, R. James, Prokopowicz, Peter N., Swain, Michael J., Kahn, Roger E., Franklin, David
The University of Chicago's robot, CHIP, is part of the Animate Agent Project, aimed at understanding the software architecture and knowledge representations needed to build a general-purpose robotic assistant. CHIP's strategy for the Office Cleanup event of the 1995 Robot Competition and Exhibition was to scan an entire area systematically and, as collectible objects were identified, pick them up and deposit them in the nearest appropriate receptacle. This article describes CHIP and its various systems and the ways in which these elements combined to produce an effective entry to the robot competition.
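As a rough, self-contained abstraction of that strategy (not CHIP's actual software or API), the toy sketch below scans every cell of a small grid and, whenever a collectible object is found, records a deposit into the nearest receptacle of the appropriate kind; the office layout and object-to-receptacle mapping are invented for illustration.
# Toy abstraction of the scan-and-collect strategy described above; the
# layout, object types, and receptacle mapping are hypothetical.
OBJECTS = {(1, 2): "cup", (3, 0): "paper"}                 # invented office layout
RECEPTACLES = {(0, 0): "recycling", (4, 4): "shelf"}
BIN_FOR = {"paper": "recycling", "cup": "shelf"}

def nearest_receptacle(pos, kind):
    """Pick the closest receptacle of the required kind (Manhattan distance)."""
    candidates = [p for p, k in RECEPTACLES.items() if k == kind]
    return min(candidates, key=lambda p: abs(p[0] - pos[0]) + abs(p[1] - pos[1]))

def office_cleanup(width=5, height=5):
    deposited = []
    for x in range(width):                                 # systematic area scan
        for y in range(height):
            obj = OBJECTS.pop((x, y), None)
            if obj is not None:                            # collectible identified
                target = nearest_receptacle((x, y), BIN_FOR[obj])
                deposited.append((obj, target))            # pick up, carry, deposit
    return deposited

print(office_cleanup())   # e.g. [('cup', (4, 4)), ('paper', (0, 0))]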
A model of the hippocampus combining self-organization and associative memory function
Hasselmo, Michael E., Schnell, Eric, Berke, Joshua, Barkai, Edi
A model of the hippocampus is presented which forms rapid self-organized representations of input arriving via the perforant path, performs recall of previous associations in region CA3, and performs comparison of this recall with afferent input in region CA1. This comparison drives feedback regulation of cholinergic modulation to set appropriate dynamics for learning of new representations in regions CA3 and CA1. The network responds to novel patterns with increased cholinergic modulation, allowing storage of new self-organized representations, but responds to familiar patterns with a decrease in acetylcholine, allowing recall based on previous representations. This requires selectivity of the cholinergic suppression of synaptic transmission in stratum radiatum of regions CA3 and CA1, which has been demonstrated experimentally.
1 INTRODUCTION
A number of models of hippocampal function have been developed (Burgess et al., 1994; Myers and Gluck, 1994; Touretzky et al., 1994), but remarkably few simulations have addressed hippocampal function within the constraints provided by physiological and anatomical data. Theories of the function of specific subregions of the hippocampal formation often do not address physiological mechanisms for changing dynamics between learning of novel stimuli and recall of familiar stimuli.
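An illustrative sketch of the novelty-gated modulation (not the authors' simulation) is given below: a CA1-style comparison between afferent input and what an autoassociative matrix recalls sets an "ACh" level, which switches each step between storage dynamics (high ACh, Hebbian learning) and recall dynamics (low ACh); the thresholds, sizes, and function names are arbitrary assumptions.
# Illustrative sketch: mismatch between input and recall raises the "ACh"
# level, gating the network between storage and recall dynamics.
import numpy as np

n = 16
M = np.zeros((n, n))                          # CA3-style autoassociative weights

def ach_level(inp, recalled):
    """High ACh for novel (poorly recalled) input, low for familiar input."""
    mismatch = np.linalg.norm(inp - recalled) / (np.linalg.norm(inp) + 1e-9)
    return min(1.0, mismatch)

def step(inp, lr=0.5):
    global M
    recalled = (M @ inp > 0.5).astype(float)  # recall with current weights
    ach = ach_level(inp, recalled)            # CA1-style comparison sets modulation
    if ach > 0.5:                             # novel: store a new representation
        M += lr * np.outer(inp, inp)
        np.fill_diagonal(M, 0.0)
        return inp, ach
    return recalled, ach                      # familiar: low ACh, recall dominates

pattern = np.zeros(n)
pattern[:5] = 1.0
print(step(pattern)[1])   # first presentation: high ACh (novel), pattern is stored
print(step(pattern)[1])   # second presentation: low ACh (familiar), pattern is recalled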