Telecommunications
Decomposition of Reinforcement Learning for Admission Control of Self-Similar Call Arrival Processes
In multi-service communications networks, such as Asynchronous Transfer Mode (ATM) networks, resource control is of crucial importance for the network operator as well as for the users. The objective is to maintain the service quality while maximizing the operator's revenue. At the call level, service quality (Grade of Service) is measured in terms of call blocking probabilities, and the key resource to be controlled is bandwidth. Network routing and call admission control (CAC) are two such resource control problems. Markov decision processes offer a framework for optimal CAC and routing [1]. By modelling the dynamics of the network and its traffic and computing control policies using dynamic programming [2], resource control is optimized. A standard assumption in such models is that calls arrive according to Poisson processes, which keeps the models of the dynamics relatively simple. Although the Poisson assumption is valid for most user-initiated requests in communications networks, a number of studies [3, 4, 5] indicate that many types of arrival processes in wide-area networks as well as in local area networks are statistically self-similar.
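To make the dynamic-programming formulation concrete, the following is a minimal sketch of optimal call admission on a single link with two Poisson call classes, solved by value iteration on the uniformized chain. The capacities, rates, and revenues are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: optimal call admission on one link, two Poisson call
# classes, solved by value iteration on the uniformized chain.
# All numbers below are illustrative assumptions.
C = 6                  # link capacity in bandwidth units
b = (1, 2)             # bandwidth demand per call class
lam = (0.8, 0.4)       # Poisson arrival rates
mu = (1.0, 0.5)        # per-call departure rates
r = (1.0, 5.0)         # revenue earned on admitting a call
gamma = 0.99           # discount factor

states = [(n1, n2) for n1 in range(C + 1) for n2 in range(C + 1)
          if n1 * b[0] + n2 * b[1] <= C]
Lam = sum(lam) + C * max(mu)        # uniformization constant

V = {s: 0.0 for s in states}
for _ in range(500):
    newV = {}
    for n in states:
        total = 0.0
        for k in (0, 1):
            up = (n[0] + (k == 0), n[1] + (k == 1))
            # On an arrival, admit only if feasible and profitable.
            admit_val = V[up] + r[k] if up in V else float('-inf')
            total += lam[k] * max(admit_val, V[n])
            down = (n[0] - (k == 0), n[1] - (k == 1))
            total += n[k] * mu[k] * (V[down] if n[k] > 0 else 0.0)
        busy = sum(lam) + n[0] * mu[0] + n[1] * mu[1]
        total += (Lam - busy) * V[n]      # fictitious self-transition
        newV[n] = gamma * total / Lam
    V = newV

def admit(state, k):
    """True iff the computed policy admits a class-k call in `state`."""
    up = (state[0] + (k == 0), state[1] + (k == 1))
    return up in V and V[up] + r[k] >= V[state]
```

The interesting behaviour of the optimal policy is trunk reservation: near full occupancy, cheap class-1 calls may be rejected even when they would fit, to keep bandwidth free for higher-revenue class-2 arrivals.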
Analysis of Bit Error Probability of Direct-Sequence CDMA Multiuser Demodulators
We analyze the bit error probability of multiuser demodulators for a direct-sequence binary-phase-shift-keying (DS/BPSK) CDMA channel with additive Gaussian noise. The problem of multiuser demodulation is cast as a finite-temperature decoding problem, and replica analysis is applied to evaluate the performance of the resulting MPM (Marginal Posterior Mode) demodulators, which include the optimal demodulator and the MAP demodulator as special cases. An approximate implementation of the demodulators is proposed using the analog-valued Hopfield model as a naive mean-field approximation to the MPM demodulators, and its performance is also evaluated by the replica analysis. The results of the performance evaluation show the effectiveness of the optimal demodulator and the mean-field demodulator compared with the conventional one, especially in the cases of small information bit rate and low noise level. 1 Introduction The CDMA (Code-Division Multiple Access) technique [1] is important as a fundamental technology of digital communications systems, such as cellular phones. Its important applications include the realization of spread-spectrum multipoint-to-point communications systems, in which multiple users share the same communication channel.
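As a rough illustration of the naive mean-field idea, the sketch below demodulates a synchronous DS-CDMA signal by iterating damped mean-field fixed-point equations; the inverse temperature is set to the true noise level, which corresponds to the MPM case. The user count, code length, noise level, and damping are illustrative assumptions, not the paper's setup.

```python
# Rough sketch: naive mean-field (analog Hopfield) multiuser demodulation
# for synchronous DS-CDMA. All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
K, N = 8, 64                 # users, spreading-code length
sigma = 0.2                  # channel noise standard deviation
beta = 1.0 / sigma ** 2      # inverse temperature; MPM at the true noise level

S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)   # spreading codes
b = rng.choice([-1.0, 1.0], size=K)                     # transmitted bits
y = S @ b + sigma * rng.standard_normal(N)              # received signal

h = S.T @ y                  # matched-filter (conventional) statistics
R = S.T @ S                  # code cross-correlation matrix
J = R - np.diag(np.diag(R))  # interference couplings (no self-coupling)

# Damped mean-field fixed-point iteration: m_k approximates the posterior
# mean of bit k, i.e. the MPM decision statistic.
m = np.zeros(K)
for _ in range(200):
    m = 0.5 * m + 0.5 * np.tanh(beta * (h - J @ m))

b_hat = np.sign(m)           # demodulated bits
errors = int(np.sum(b_hat != b))
```

Taking the sign of `h` alone recovers the conventional matched-filter demodulator; the mean-field iteration additionally subtracts the estimated multiple-access interference `J @ m` before thresholding.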
Experiments with Infinite-Horizon, Policy-Gradient Estimation
Baxter, J., Bartlett, P. L., Weaver, L.
In this paper, we present algorithms that perform gradient ascent of the average reward in a partially observable Markov decision process (POMDP). These algorithms are based on GPOMDP, an algorithm introduced in a companion paper (Baxter & Bartlett, this volume), which computes biased estimates of the performance gradient in POMDPs. The algorithm's chief advantages are that it uses only one free parameter beta, which has a natural interpretation in terms of bias-variance trade-off, it requires no knowledge of the underlying state, and it can be applied to infinite state, control and observation spaces. We show how the gradient estimates produced by GPOMDP can be used to perform gradient ascent, both with a traditional stochastic-gradient algorithm, and with an algorithm based on conjugate-gradients that utilizes gradient information to bracket maxima in line searches. Experimental results are presented illustrating both the theoretical results of (Baxter & Bartlett, this volume) on a toy problem, and practical aspects of the algorithms on a number of more realistic problems.
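The single-trajectory estimator at the core of GPOMDP can be sketched in a few lines: an eligibility trace of policy log-gradients is discounted by beta, and its correlation with the reward is averaged online. The toy problem, logistic policy, and step sizes below are illustrative assumptions, not the paper's experiments.

```python
# Sketch of the GPOMDP gradient estimator on a toy problem: a blind
# two-action logistic policy, where action 1 earns reward 1 with
# probability 0.9. All problem details are illustrative assumptions.
import math
import random

random.seed(1)

def gpomdp(theta, beta=0.9, T=20_000):
    """Single-trajectory, biased estimate of the average-reward gradient."""
    z, delta = 0.0, 0.0                        # eligibility trace, estimate
    for t in range(T):
        p1 = 1.0 / (1.0 + math.exp(-theta))    # probability of action 1
        a = 1 if random.random() < p1 else 0
        grad_log = a - p1                      # d/dtheta log pi(a; theta)
        r = 1.0 if (a == 1 and random.random() < 0.9) else 0.0
        z = beta * z + grad_log                # beta trades bias for variance
        delta += (r * z - delta) / (t + 1)     # running average of r_t * z_t
    return delta

# Plain stochastic gradient ascent on the estimates: since action 1 is
# better, theta should drift upward.
theta = 0.0
for _ in range(5):
    theta += 0.5 * gpomdp(theta)
```

Larger beta reduces the bias of the estimate but increases its variance, which is the single bias-variance trade-off parameter the abstract refers to; the conjugate-gradient variant would reuse these same estimates inside line searches.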
Low Power Wireless Communication via Reinforcement Learning
This paper examines the application of reinforcement learning to a wireless communication problem. The problem requires that channel utility be maximized while simultaneously minimizing battery usage. We present a solution to this multi-criteria problem that is able to significantly reduce power consumption. The solution uses a variable discount factor to capture the effects of battery usage. 1 Introduction Reinforcement learning (RL) has been applied to resource allocation problems in telecommunications, e.g., channel allocation in wireless systems, network routing, and admission control in telecommunication networks [1, 2, 8, 10]. These studies have demonstrated that reinforcement learning can find good policies that significantly increase the application reward within the dynamics of the telecommunication problems.
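One way to realize a variable discount factor is to let each action's battery drain exponentiate the per-unit discount, so high-power transmissions shorten the effective horizon and are implicitly penalized. The tabular Q-learning sketch below illustrates this idea; the channel model, rates, and parameters are invented for illustration and are not the paper's actual formulation.

```python
# Sketch: Q-learning with an action-dependent ("variable") discount factor.
# Draining more battery discounts the future more heavily, so high-power
# transmissions are implicitly penalized. All details are illustrative.
import random

random.seed(0)

POWER = {0: 1, 1: 2}        # action -> battery units drained
GAMMA = 0.95                # discount per unit of battery drained

def success_prob(a, good):
    # In a good channel both power levels work; in a bad one power helps.
    return 0.9 if good else (0.7 if a == 1 else 0.3)

Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}   # s = 1: channel good
visits = {sa: 0 for sa in Q}
s = 1
for _ in range(200_000):
    # Epsilon-greedy action selection.
    a = random.choice((0, 1)) if random.random() < 0.1 else \
        max((0, 1), key=lambda u: Q[(s, u)])
    r = 1.0 if random.random() < success_prob(a, s == 1) else 0.0
    s2 = 1 if random.random() < 0.8 else 0          # channel state evolves
    visits[(s, a)] += 1
    alpha = 1.0 / visits[(s, a)]
    # Variable discount: heavier battery drain shrinks the effective horizon.
    g = GAMMA ** POWER[a]
    Q[(s, a)] += alpha * (r + g * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
    s = s2
```

In the good channel state, both power levels succeed equally often, so the learned values should favor low power purely through its milder discounting, which is the multi-criteria effect the abstract describes.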
The 1999 Asia-Pacific Conference on Intelligent-Agent Technology
Intelligent-agent technology is one of the most exciting, active areas of research and development in computer science and information technology today. The First Asia-Pacific Conference on Intelligent-Agent Technology (IAT'99) attracted researchers and practitioners from diverse fields such as computer science, information systems, business, telecommunications, manufacturing, human factors, psychology, education, and robotics to examine the design principles and performance characteristics of various approaches in agent technologies and, hence, fostered the cross-fertilization of ideas on the development of autonomous agents and multiagent systems among different domains.