Telecommunications


Neural Net Receivers in Multiple-Access Communications

Neural Information Processing Systems

The application of neural networks to the demodulation of spread-spectrum signals in a multiple-access environment is considered. This study is motivated in large part by the fact that, in a multiuser system, the conventional (matched-filter) receiver suffers severe performance degradation as the relative powers of the interfering signals become large (the "near-far" problem). Furthermore, the optimum receiver, which alleviates the near-far problem, is too complex to be of practical use. Receivers based on multi-layer perceptrons are considered as a simple and robust alternative to the optimum solution. The optimum receiver is used to benchmark the performance of the neural net receiver; in particular, it proves instrumental in identifying the decision regions of the neural networks. The back-propagation algorithm and a modified version of it are used to train the neural net. An importance sampling technique is introduced to reduce the number of simulations necessary to evaluate the performance of neural nets.
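The importance-sampling idea mentioned in this abstract, biasing the noise distribution so that rare error events become frequent and then reweighting each sample by the likelihood ratio, can be illustrated with a generic sketch. This is not the authors' estimator; it is a minimal mean-translation example for a single Gaussian threshold test, and every function name and parameter here is invented for illustration:

```python
import math
import random

def q_function(x):
    # Exact Gaussian tail probability P(N(0,1) > x), via erfc.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def is_error_estimate(threshold, sigma, n_samples=200_000, seed=0):
    """Importance-sampling estimate of P(noise > threshold) for
    zero-mean Gaussian noise, using a mean shifted to the threshold
    (a standard mean-translation biasing scheme)."""
    rng = random.Random(seed)
    shift = threshold            # bias so "errors" become common
    total = 0.0
    for _ in range(n_samples):
        n = rng.gauss(shift, sigma)   # draw from the biased density g
        if n > threshold:             # error event under the true system
            # Likelihood ratio f(n)/g(n) for N(0, s^2) vs N(shift, s^2).
            total += math.exp((shift * shift - 2.0 * shift * n)
                              / (2.0 * sigma * sigma))
    return total / n_samples

# Compare against the exact Gaussian tail at a 5-sigma threshold.
sigma, t = 1.0, 5.0
est = is_error_estimate(t, sigma)
exact = q_function(t / sigma)
```

At this threshold the true error probability is about 3e-7, so naive Monte Carlo would need tens of millions of trials to observe even a handful of errors; the biased sampler sees an error on roughly half its trials and recovers the tail probability from the accumulated weights.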


Knowledge Acquisition in the Development of a Large Expert System

AI Magazine

This article discusses several effective techniques for expert system knowledge acquisition based on the techniques that were successfully used to develop the Central Office Maintenance Printout Analysis and Suggestion System (COMPASS). Knowledge acquisition is not a science, and expert system developers and experts must tailor their methodologies to fit their situation and the people involved. Developers of future expert systems should find a description of proven knowledge-acquisition techniques and an account of the experience of the COMPASS project in applying these techniques to be useful in developing their own knowledge-acquisition procedures.


Contributors to the Spring Issue of AI Magazine

AI Magazine

Tin Nguyen performed the work contained in the article "Knowledge Base Verification" while at Lockheed and is currently working for Bell Northern Research as a member of the research staff. Deanne Pecora, a staff engineer with the Lockheed Artificial Intelligence Center, 2710 Sand Hill Road, Menlo Park, California 94025, is working on applying knowledge-based systems to real problems. She is a coauthor of "Knowledge Base Verification." Walt Perkins, coauthor of "Knowledge Base Verification," is a consulting scientist with the Lockheed Artificial Intelligence Center, 2710 Sand Hill Road, Menlo Park, California 94025, and the principal developer of the Lockheed expert system. Rick Briggs, author of "Knowledge Representation and Inference in Sanskrit: A Review of the First National Conference," is a senior engineer at Delfin Systems, 1349 Moffett Park Drive, Sunnyvale, California 94089. Briggs is currently working … Lindley Darden, who wrote "Viewing the History of Science as Compiled Hindsight," is an associate professor in the departments of philosophy and history and a member of the graduate faculty in the Committee on the History and Philosophy of Science at the University of Maryland, College Park. She is currently serving in the second year of a halftime research appointment at the University of Maryland Institute for Advanced Computer Studies. Her mailing address is Department of Philosophy, University of Maryland, College Park, Maryland 20742. David Prerau is a principal member of … The primary responsibility is to lead the development of major expert systems with high corporate payoff and impact. The author of "The 1985 Workshop on Distributed Artificial Intelligence" is currently working in the area of distributed artificial intelligence and is organizing …


Artificial Intelligence Research in Statistics

AI Magazine

The initial results from a few AI research projects in statistics have been quite interesting to statisticians: Feasibility demonstration systems have been built at Stanford University, AT&T Bell Laboratories, and the University of Edinburgh. Several more design studies have been completed. A conference devoted to expert systems in statistics was sponsored by the Royal Statistical Society. On the other hand, statistics as a domain may be of particular interest to AI researchers, for it offers both tasks well suited to current AI capabilities and tasks requiring development of new AI techniques.


Artificial Intelligence Research at GTE Laboratories (Research in Progress)

AI Magazine

GTE Laboratories is the central corporate research and development facility for the sixty subsidiaries of the worldwide GTE Corporation. Located in the Massachusetts Route 128 high-technology area, the five laboratories that comprise GTE Laboratories generate the ideas, products, systems, and services that provide technical leadership for GTE. The two laboratories that conduct artificial intelligence research are the Computer Science Laboratory (CSL) and the Fundamental Research Laboratory (FRL). Artificial intelligence projects within the CSL are directed toward research on the techniques used in expert systems and their application to GTE products and services; AI projects within the FRL have longer-term AI research goals.


Heuristics: Intelligent Search Strategies for Computer Problem Solving

Classics

Optical transport networks based on wavelength division multiplexing (WDM) are considered to be the most appropriate choice for the future Internet backbone. At the same time, future DOE networks are expected to have the ability to dynamically provision on-demand survivable services to suit the needs of various high-performance scientific applications and remote collaboration. Since a failure in a WDM network, such as a cable cut, may result in a tremendous amount of data loss, efficient protection of data transport in WDM networks is essential. As the backbone network moves towards GMPLS/WDM optical networks, the unique requirement to support DOE's science mission raises challenging issues that are not directly addressed by existing networking techniques and methodologies. The objectives of this project were to develop cost-effective protection and restoration mechanisms based on dedicated path, shared path, preconfigured cycle (p-cycle), and so on, to deal with single failure, dual failure, and shared risk link group (SRLG) failure, under different traffic and resource requirement models; to devise efficient service provisioning algorithms that deal with application-specific network resource requirements for both unicast and multicast; to study various aspects of traffic grooming in WDM ring and mesh networks to derive cost-effective solutions while meeting application resource and QoS requirements; to design various diverse routing and multi-constrained routing algorithms, considering different traffic models and failure models, for protection and restoration, as well as for service provisioning; to propose and study new optical burst switched architectures and mechanisms for effectively supporting dynamic services; and to integrate research with graduate and undergraduate education.
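One building block behind the dedicated- and shared-path protection schemes mentioned above is computing a working path together with a link-disjoint backup path, so that a single cable cut cannot sever both. The sketch below is a deliberately simple two-pass heuristic (route the shortest path, prune its links, route again) on a toy graph; it is not the project's algorithm, and unlike Suurballe's algorithm it can fail on "trap" topologies where a disjoint pair exists but the shortest working path blocks it. All node names and link costs are invented:

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest path by cost; graph maps node -> {neighbor: cost}."""
    dist, prev, seen = {src: 0.0}, {}, set()
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, c in graph.get(u, {}).items():
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in seen:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def link_disjoint_pair(graph, src, dst):
    """Greedy heuristic: find a working path, remove its links,
    then find a backup path in the pruned graph."""
    work = dijkstra(graph, src, dst)
    if work is None:
        return None
    pruned = {u: dict(nbrs) for u, nbrs in graph.items()}
    for u, v in zip(work, work[1:]):   # drop both directions of each link
        pruned[u].pop(v, None)
        pruned[v].pop(u, None)
    backup = dijkstra(pruned, src, dst)
    return (work, backup) if backup else None

# Toy five-node mesh: two link-disjoint routes from A to E.
g = {
    "A": {"B": 1, "C": 2},
    "B": {"A": 1, "C": 1, "E": 2},
    "C": {"A": 2, "B": 1, "D": 1},
    "D": {"C": 1, "E": 1},
    "E": {"B": 2, "D": 1},
}
work, backup = link_disjoint_pair(g, "A", "E")
```

In a protection setting, traffic would be carried on `work` and switched to `backup` when a single link failure hits the working path; dedicated versus shared protection then differs in whether the backup capacity is reserved exclusively or shared among failure-disjoint demands.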


The Distributed Vehicle Monitoring Testbed: A Tool for Investigating Distributed Problem Solving Networks

AI Magazine

Cooperative distributed problem solving networks are distributed networks of semi-autonomous processing nodes that work together to solve a single problem. The Distributed Vehicle Monitoring Testbed is a flexible and fully instrumented research tool for empirically evaluating alternative designs for these networks. The testbed simulates a class of distributed knowledge-based problem solving systems operating on an abstracted version of a vehicle monitoring task. There are two important aspects to the testbed: (1) it implements a novel generic architecture for distributed problem solving networks that exploits sophisticated local node control and meta-level control to improve global coherence in network problem solving; (2) it serves as an example of how a testbed can be engineered to permit the empirical exploration of design issues in knowledge-based AI systems. The testbed is capable of simulating different degrees of sophistication in problem solving knowledge and focus-of-attention mechanisms, of varying the distribution and characteristics of error in its (simulated) input data, and of measuring the progress of problem solving. Node configuration and communication channel characteristics can also be independently varied in the simulated network.