Telecommunications


Reinforcement Learning for Dynamic Channel Allocation in Cellular Telephone Systems

Neural Information Processing Systems

In cellular telephone systems, an important problem is to dynamically allocate the communication resource (channels) so as to maximize service in a stochastic caller environment. This problem is naturally formulated as a dynamic programming problem, and we use a reinforcement learning (RL) method to find dynamic channel allocation policies that are better than previous heuristic solutions. The policies obtained perform well for a broad variety of call traffic patterns.
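
As a rough illustration of that formulation, the sketch below (Python; not from the paper) treats channel assignment as learning a value function over channel configurations: a linear value over per-cell free-channel counts is learned with TD(0) while calls arrive and terminate, and each arriving call is admitted on the admissible channel whose successor configuration has the highest learned value, or blocked if none is admissible. The ring-of-cells topology, reward values, and feature choice are illustrative assumptions, far simpler than the paper's simulator.

import numpy as np

rng = np.random.default_rng(0)
N_CELLS, N_CHANNELS = 4, 10          # toy problem size (assumption)
ALPHA, GAMMA = 0.05, 0.99            # TD step size and discount (assumption)

# cells arranged in a ring; a channel may not be reused in adjacent cells
ADJ = {c: ((c - 1) % N_CELLS, (c + 1) % N_CELLS) for c in range(N_CELLS)}

weights = np.zeros(N_CELLS)                      # linear value-function weights
usage = np.zeros((N_CELLS, N_CHANNELS), bool)    # True = channel busy in that cell

def features(u):
    return (~u).sum(axis=1).astype(float)        # free channels per cell

def value(u):
    return float(weights @ features(u))

def admissible(u, cell):
    # channels free in this cell and in both neighbouring cells
    blocked = u[cell] | u[ADJ[cell][0]] | u[ADJ[cell][1]]
    return np.flatnonzero(~blocked)

for _ in range(20_000):
    phi, v = features(usage), value(usage)
    if rng.random() < 0.7:                       # a call arrives in a random cell
        cell = rng.integers(N_CELLS)
        free = admissible(usage, cell)
        if free.size:
            def v_after(ch):                     # value of the successor configuration
                nxt = usage.copy()
                nxt[cell, ch] = True
                return value(nxt)
            usage[cell, max(free, key=v_after)] = True
            reward = 1.0                         # accepted call
        else:
            reward = -1.0                        # blocked call
    else:                                        # a random ongoing call terminates
        reward = 0.0
        busy = np.argwhere(usage)
        if len(busy):
            c, ch = busy[rng.integers(len(busy))]
            usage[c, ch] = False
    # TD(0): move the old configuration's value toward reward + discounted new value
    weights += ALPHA * (reward + GAMMA * value(usage) - v) * phi

print("learned per-cell weights:", np.round(weights, 3))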


Applied AI News

AI Magazine

Busey Bank (Champaign, Ill.) is using intelligent-agent technology. Lloyds Bowmaker Motor Finance (Petersfield, U.K.) has implemented a neural network-based system for credit scoring new loan applications; the system helps Lloyds determine whether to accept a loan and gives the reasons for its choice. The Philadelphia Stock Exchange (Philadelphia, Pa.) has adopted a system to increase the reliability and scalability of its network-supported options-trading facilities. A maker of care products has developed a rule-based multinational order-entry system and is using it to process orders from its network. Another system uses an electronic camera to image the front face of letters, identify the destination address, and determine its delivery-point bar code. A greeting card manufacturer has installed a rule-based expert system to manage the complexity of producing more than 20,000 new designs and 2.4 billion greeting cards annually; the company has completely reengineered its operation, converting an antiquated job-shop operation into a state-of-the-art cellular one. Other software will permit team members in different geographic locations to explore similar multisensory environments independently. Telecommunications providers MCI (Washington, D.C.) and BT (London), along with healthcare software developer HBO & Company (Atlanta, Ga.), are also developing AI-based applications.


Applied AI News

AI Magazine

The system generates traffic flow measurements that enable traffic operations centers to monitor traffic movement and better respond to accidents and congestion; the system includes fuzzy logic and neural network techniques. Wal-Mart Stores (Bentonville, Ark.) is using an intelligent system to manage its automated storage and retrieval system. Another system monitors satellite signals in near real time, alerting operators to out-of-tolerance conditions. Tektronix (Wilsonville, Ore.) is building models for its computer-assisted systems. A metals producer in Mexico has implemented an intelligent system, built around an advanced control expert system, to improve its zinc yield. A California company is using visualization and digital prototyping software for vehicle design and manufacturing, including virtual manufacturing, within its new concurrent engineering system.


Predictive Q-Routing: A Memory-based Reinforcement Learning Approach to Adaptive Traffic Control

Neural Information Processing Systems

The controllers usually have little or no prior knowledge of the environment. Although only local communication between controllers is allowed, they must cooperate to achieve a common, global objective. Finding the optimal routing policy in such a distributed manner is very difficult. Moreover, since the environment is non-stationary, the optimal policy varies over time as network traffic and topology change.
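
For context, the sketch below (Python; the topology, delay values, and learning rate are illustrative assumptions) shows the basic Q-routing update that predictive Q-routing extends; the memory, probing, and recovery-rate terms that give predictive Q-routing its name are omitted.

import collections

ETA = 0.5                                    # learning rate (assumption)
# Q[x][(y, d)] estimates delivery time from node x to destination d via neighbour y;
# unvisited entries start at a pessimistic 10.0 time units.
Q = collections.defaultdict(lambda: collections.defaultdict(lambda: 10.0))

def best_estimate(node, dest, neighbours):
    """A node's own estimate: the cheapest way it currently knows to reach dest."""
    return min(Q[node][(y, dest)] for y in neighbours[node])

def q_routing_update(x, y, dest, queue_delay, link_delay, neighbours):
    """Update x's estimate after forwarding a packet bound for dest to neighbour y.

    Neighbour y reports back its own best remaining-time estimate, and x moves
    its value toward the locally observed delays plus that report.
    """
    remaining = 0.0 if y == dest else best_estimate(y, dest, neighbours)
    target = queue_delay + link_delay + remaining
    Q[x][(y, dest)] += ETA * (target - Q[x][(y, dest)])

# toy 4-node ring 0-1-2-3-0; repeatedly route packets from node 0 to node 2 via node 1
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
for _ in range(100):
    q_routing_update(0, 1, 2, queue_delay=0.1, link_delay=1.0, neighbours=neighbours)
    q_routing_update(1, 2, 2, queue_delay=0.1, link_delay=1.0, neighbours=neighbours)

print(round(Q[0][(1, 2)], 2))   # approaches 2.2: two unit-delay hops plus queueing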


Experiments with Neural Networks for Real Time Implementation of Control

Neural Information Processing Systems

This paper describes a neural network based controller for allocating capacity in a telecommunications network. The system was proposed in order to meet a "real time" response constraint. Two basic architectures are evaluated: 1) a feedforward network combined with a heuristic; and 2) a feedforward network combined with a recurrent network. These architectures are compared against a linear programming (LP) optimiser as a benchmark. The LP optimiser was also used as a teacher to label the data samples for the feedforward neural network training algorithm. The systems are found to provide 99% and 95%, respectively, of the traffic throughput obtained by the linear programming solution. Once trained, the neural network based solutions are found in a fraction of the time required by the LP optimiser.
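
The teacher-student arrangement can be sketched as follows (Python; the toy three-class capacity-allocation LP, the data sizes, and the use of scipy and scikit-learn are illustrative assumptions, not the paper's setup): an LP optimiser labels randomly generated demand patterns with optimal allocations, and a small feedforward network is trained to imitate those labels so that, once trained, allocations come from a single forward pass.

import numpy as np
from scipy.optimize import linprog
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
LINK_CAPACITY = 10.0                 # shared capacity budget (assumption)

def lp_allocate(demand):
    """Teacher: maximise carried traffic for three classes under the shared budget."""
    # maximise sum(x)  ==  minimise -sum(x), with 0 <= x_i <= demand_i
    res = linprog(c=-np.ones(3),
                  A_ub=np.ones((1, 3)), b_ub=[LINK_CAPACITY],
                  bounds=list(zip(np.zeros(3), demand)),
                  method="highs")
    return res.x

# label random demand patterns with the LP optimiser
demands = rng.uniform(0.0, 8.0, size=(2000, 3))
labels = np.array([lp_allocate(d) for d in demands])

# student: a small feedforward network trained to imitate the LP's allocations
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(demands, labels)

test = np.array([[6.0, 5.0, 4.0]])
print("LP teacher:", np.round(lp_allocate(test[0]), 2))
print("network   :", np.round(net.predict(test)[0], 2))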


Adaptive Problem-solving for Large-scale Scheduling Problems: A Case Study

Journal of Artificial Intelligence Research

Although most scheduling problems are NP-hard, domain-specific techniques perform well in practice but are quite expensive to construct. In adaptive problem solving, domain-specific knowledge is acquired automatically for a general problem solver with a flexible control architecture. In this approach, a learning system searches a space of possible heuristic methods for one well suited to the eccentricities of the given domain and problem distribution. In this article, we discuss an application of the approach to scheduling satellite communications. Using problem distributions based on actual mission requirements, our approach identifies strategies that not only decrease the amount of CPU time required to produce schedules but also increase the percentage of problems that are solvable within computational resource limitations.
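
A minimal sketch of this adaptive loop, assuming a toy single-resource scheduling domain in place of the article's satellite-communications problems: candidate control strategies (here, job-ordering heuristics) are scored on instances sampled from the problem distribution, and the strategy with the best expected performance is retained.

import random
import statistics

random.seed(0)

def sample_problem(n_jobs=40):
    """Draw a problem instance: jobs with (release time, deadline, duration)."""
    jobs = []
    for _ in range(n_jobs):
        release = random.uniform(0, 50)
        duration = random.uniform(1, 6)
        deadline = release + duration + random.uniform(0, 10)
        jobs.append((release, deadline, duration))
    return jobs

def greedy_schedule(jobs, order_key):
    """Place jobs on one resource in the given order; return how many fit."""
    t, placed = 0.0, 0
    for release, deadline, duration in sorted(jobs, key=order_key):
        start = max(t, release)
        if start + duration <= deadline:
            t = start + duration
            placed += 1
    return placed

STRATEGIES = {                         # the space of candidate control strategies
    "earliest-deadline": lambda job: job[1],
    "earliest-release":  lambda job: job[0],
    "shortest-first":    lambda job: job[2],
}

def expected_quality(name, n_samples=200):
    """Score a strategy on a sample drawn from the problem distribution."""
    return statistics.mean(
        greedy_schedule(sample_problem(), STRATEGIES[name]) for _ in range(n_samples))

best = max(STRATEGIES, key=expected_quality)
print("adapted strategy:", best, "average jobs placed:", round(expected_quality(best), 1))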


A Lagrangian Formulation For Optical Backpropagation Training In Kerr-Type Optical Networks

Neural Information Processing Systems

A training method based on a form of continuous, spatially distributed optical error back-propagation is presented for an all-optical network composed of nondiscrete neurons and weighted interconnections. The all-optical network is feed-forward and is composed of thin layers of a Kerr-type self-focusing/defocusing nonlinear optical material. The training method is derived from a Lagrangian formulation of the constrained minimization of the network error at the output. This leads to a formulation that describes training as a calculation of the distributed error of the optical signal at the output, which is then reflected back through the device to assign a spatially distributed error to the internal layers. This error is then used to modify the internal weighting values. Results from several computer simulations of the training are presented, and a simple optical table demonstration of the network is discussed.
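
The discrete-layer analogue below shows the general shape of such a Lagrangian formulation (the paper works with a continuous, spatially distributed optical field, so this is only the shared skeleton, not its exact derivation): the forward propagation enters as equality constraints, and the stationarity conditions produce the backward, "reflected" error signal and the weight update.

\begin{align*}
  &\text{minimize } E(x_L) \quad \text{subject to } x_{l+1} = f_l(x_l, w_l), \qquad l = 0, \dots, L-1,\\
  &\mathcal{L} = E(x_L) + \sum_{l=0}^{L-1} \lambda_{l+1}^{\top}\bigl(x_{l+1} - f_l(x_l, w_l)\bigr),\\
  &\frac{\partial \mathcal{L}}{\partial x_L} = 0 \;\Rightarrow\; \lambda_L = -\nabla_{x_L} E, \qquad
   \frac{\partial \mathcal{L}}{\partial x_l} = 0 \;\Rightarrow\; \lambda_l = \Bigl(\tfrac{\partial f_l}{\partial x_l}\Bigr)^{\!\top}\lambda_{l+1},\\
  &\nabla_{w_l}\mathcal{L} = -\Bigl(\tfrac{\partial f_l}{\partial w_l}\Bigr)^{\!\top}\lambda_{l+1}
   \qquad \text{(the error reflected back to layer } l \text{, used to modify its weights).}
\end{align*}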