Results 1–10 of 284
Cognitive Radio: Brain-Empowered Wireless Communications
 IEEE Journal on Selected Areas in Communications
, 2005
Abstract

Cited by 543 (0 self)
Abstract—Cognitive radio is viewed as a novel approach for improving the utilization of a precious natural resource: the radio electromagnetic spectrum. The cognitive radio, built on a software-defined radio, is defined as an intelligent wireless communication system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment and adapt to statistical variations in the input stimuli, with two primary objectives in mind: highly reliable communication whenever and wherever needed, and efficient utilization of the radio spectrum. Following the discussion of interference temperature as a new metric for the quantification and management of interference, the paper addresses three fundamental cognitive tasks: 1) radio-scene analysis; 2) channel-state estimation and predictive modeling; 3) transmit-power control and dynamic spectrum management. This paper also discusses the emergent behavior of cognitive radio. Index Terms—Awareness, channel-state estimation and predictive modeling, cognition, competition and cooperation, emergent behavior, interference temperature, machine learning, radio-scene analysis, rate feedback, spectrum analysis, spectrum holes, spectrum management, stochastic games, transmit-power control, water-filling.
Equivariant Adaptive Source Separation
 IEEE Trans. on Signal Processing
, 1996
Abstract

Cited by 381 (10 self)
Source separation consists in recovering a set of independent signals when only mixtures with unknown coefficients are observed. This paper introduces a class of adaptive algorithms for source separation which implements an adaptive version of equivariant estimation and is henceforth called EASI (Equivariant Adaptive Separation via Independence). The EASI algorithms are based on the idea of serial updating: this specific form of matrix updates systematically yields algorithms with a simple, parallelizable structure, for both real and complex mixtures. Most importantly, the performance of an EASI algorithm does not depend on the mixing matrix. In particular, convergence rates, stability conditions and interference rejection levels depend only on the (normalized) distributions of the source signals. Closed-form expressions of these quantities are given via an asymptotic performance analysis. This is completed by some numerical experiments illustrating the effectiveness of the proposed approach.
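The serial update described in this abstract can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the authors' reference implementation; the step size `lam`, the cubic nonlinearity (a common choice for sub-Gaussian sources), the row rescaling, and the toy 2x2 mixture are all assumptions made here for demonstration.

```python
import numpy as np

def easi_step(W, x, lam=0.002, g=lambda y: y ** 3):
    """One serial EASI update on separating matrix W for observation x.

    The relative-gradient form W <- W - lam * H(y) W is what makes the
    performance independent of the mixing matrix (equivariance)."""
    y = W @ x
    I = np.eye(len(y))
    # Whitening term (yy' - I) plus a nonlinear decorrelation term.
    H = (np.outer(y, y) - I) + (np.outer(g(y), y) - np.outer(y, g(y)))
    return W - lam * (H @ W)

# Toy demo: two independent uniform (sub-Gaussian) sources, random mixture.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))                # unknown mixing matrix
S = rng.uniform(-1, 1, size=(2, 5000))     # independent source signals
X = A @ S                                  # observed mixtures
X = X / X.std(axis=1, keepdims=True)       # crude rescaling keeps updates tame

W = np.eye(2)
for t in range(X.shape[1]):
    W = easi_step(W, X[:, t])
```

Because the update multiplies W from the left by a function of y = Wx only, any two mixing matrices lead to the same trajectory of the global system WA, which is the equivariance property the abstract emphasizes.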
An Analysis of Temporal-Difference Learning with Function Approximation
 IEEE Transactions on Automatic Control
, 1997
Abstract

Cited by 218 (7 self)
We discuss the temporal-difference learning algorithm, as applied to approximating the cost-to-go function of an infinite-horizon discounted Markov chain. The algorithm we analyze updates parameters of a linear function approximator online, during a single endless trajectory of an irreducible aperiodic Markov chain with a finite or infinite state space. We present a proof of convergence (with probability 1), a characterization of the limit of convergence, and a bound on the resulting approximation error. Furthermore, our analysis is based on a new line of reasoning that provides new intuition about the dynamics of temporal-difference learning. In addition to proving new and stronger positive results than those previously available, we identify the significance of online updating and potential hazards associated with the use of nonlinear function approximators. First, we prove that divergence may occur when updates are not based on trajectories of the Markov chain. This fact reconciles positive and negative results that have been discussed in the literature, regarding the soundness of temporal-difference learning. Second, we present an example illustrating the possibility of divergence when temporal-difference learning is used in the presence of a nonlinear function approximator.
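The online update analyzed here is easy to state concretely. Below is a minimal sketch of TD(0) with a linear approximator on a hypothetical two-state chain; the chain, features, and step size are illustrative assumptions, not from the paper. With one-hot features the method reduces to tabular TD and converges to the true values.

```python
import numpy as np

def td0_linear(phi, transitions, dim, alpha=0.01, gamma=0.9):
    """TD(0) with linear value function V(s) ~ phi(s) @ theta,
    updated online along a single trajectory of (s, r, s') tuples."""
    theta = np.zeros(dim)
    for s, r, s_next in transitions:
        delta = r + gamma * (phi(s_next) @ theta) - phi(s) @ theta  # TD error
        theta += alpha * delta * phi(s)
    return theta

# Hypothetical 2-state chain: uniform transitions, reward = current state.
rng = np.random.default_rng(1)
phi = lambda s: np.eye(2)[s]          # one-hot features => tabular TD

def trajectory(n):
    s = 0
    for _ in range(n):
        s_next = int(rng.integers(2))
        yield s, float(s), s_next
        s = s_next

theta = td0_linear(phi, trajectory(100000), dim=2)
# Solving the Bellman equation by hand gives V = [4.5, 5.5] for this chain.
```

Note the update uses consecutive states of one simulated trajectory; the paper's first negative result is precisely that updating from non-trajectory state pairs can cause divergence.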
Actor-Critic Algorithms
 SIAM JOURNAL ON CONTROL AND OPTIMIZATION
, 2001
Abstract

Cited by 174 (1 self)
In this paper, we propose and analyze a class of actor-critic algorithms. These are two-timescale algorithms in which the critic uses temporal difference (TD) learning with a linearly parameterized approximation architecture, and the actor is updated in an approximate gradient direction based on information provided by the critic. We show that the features for the critic should ideally span a subspace prescribed by the choice of parameterization of the actor. We study actor-critic algorithms for Markov decision processes with general state and action spaces. We state and prove two results regarding their convergence.
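A minimal sketch of the two-timescale idea, on a deliberately tiny single-state problem (a noisy two-armed bandit) rather than the general state and action spaces treated in the paper. The step sizes and reward means are illustrative assumptions; the key structural point is that the critic's step size dominates the actor's, so the critic effectively tracks the value under the current policy.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

true_means = np.array([0.2, 0.8])   # action 1 pays more on average

theta = np.zeros(2)        # actor: softmax action preferences
v = 0.0                    # critic: value of the single state (a baseline)
beta, alpha = 0.05, 0.005  # critic step >> actor step (two timescales)

for t in range(20000):
    pi = softmax(theta)
    a = rng.choice(2, p=pi)
    r = true_means[a] + rng.normal(0, 0.1)
    delta = r - v                          # TD error (no next state here)
    v += beta * delta                      # fast critic update
    grad_logpi = np.eye(2)[a] - pi         # score function of softmax
    theta += alpha * delta * grad_logpi    # slow actor update

pi = softmax(theta)   # learned policy should favor action 1
```

The critic here is trivially "compatible" with the actor; in the paper's general setting the analogous requirement is that the critic's features span the subspace determined by the actor's parameterization.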
On Contrastive Divergence Learning
Abstract

Cited by 82 (15 self)
Maximum-likelihood (ML) learning of Markov random fields is challenging because it requires estimates of averages that have an exponential number of terms. Markov chain Monte Carlo methods typically take a long time to converge on unbiased estimates, but Hinton (2002) showed that if the Markov chain is only run for a few steps, the learning can still work well and it approximately minimizes a different function called "contrastive divergence" (CD). CD learning has been successfully applied to various types of random fields. Here, we study the properties of CD learning and show that it provides biased estimates in general, but that the bias is typically very small. Fast CD learning can therefore be used to get close to an ML solution and slow ML learning can then be used to fine-tune the CD solution.
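The few-steps idea can be made concrete for a small binary restricted Boltzmann machine, a random field where CD is the standard training method. This is an illustrative CD-1 sketch under simplifying assumptions (no bias terms, a single data vector per update), not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def cd1_step(W, v0, lr=0.05):
    """One CD-1 update for a binary RBM with weight matrix W.

    The positive phase uses the data v0; the negative phase runs the
    Markov chain for just one Gibbs step instead of to equilibrium."""
    ph0 = sigmoid(v0 @ W)                          # P(h = 1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T)                        # one reconstruction step
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W)
    grad = np.outer(v0, ph0) - np.outer(v1, ph1)   # <vh>_data - <vh>_1-step
    return W + lr * grad

# Train on one visible pattern, 4 visible x 3 hidden units.
W = rng.normal(0, 0.1, size=(4, 3))
v_data = np.array([1.0, 1.0, 0.0, 0.0])
for _ in range(500):
    W = cd1_step(W, v_data)
```

Replacing the equilibrium average with the one-step reconstruction is exactly the source of the bias the paper analyzes; running more Gibbs steps (CD-k) reduces it at higher cost.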
Optimal Stopping of Markov Processes: Hilbert Space Theory, Approximation Algorithms, and an Application to Pricing High-Dimensional Financial Derivatives
 IEEE Transactions on Automatic Control
, 1997
Abstract

Cited by 78 (6 self)
We develop a theory characterizing optimal stopping times for discrete-time ergodic Markov processes with discounted rewards. The theory differs from prior work by its view of per-stage and terminal reward functions as elements of a certain Hilbert space. In addition to a streamlined analysis establishing existence and uniqueness of a solution to Bellman's equation, this approach provides an elegant framework for the study of approximate solutions. In particular, we propose a stochastic approximation algorithm that tunes weights of a linear combination of basis functions in order to approximate a value function. We prove that this algorithm converges (almost surely) and that the limit of convergence has some desirable properties. We discuss how variations on this line of analysis can be used to develop similar results for other classes of optimal stopping problems, including those involving independent increment processes, finite horizons, and two-player zero-sum games. We illustrate...
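The kind of weight-tuning recursion described here can be sketched as follows: along a simulated trajectory, the weights of a linear combination of basis functions are nudged toward a one-step stopping target. The i.i.d. uniform "chain", the polynomial basis, the discount factor, and the step size are all illustrative assumptions of this sketch, not the paper's setup.

```python
import numpy as np

def sa_optimal_stopping(phi, g, transitions, dim, disc=0.95, step=0.01):
    """Stochastic approximation for J(x) ~ phi(x) @ theta, where
    J(x) = max(g(x), disc * E[J(x')]) is the optimal-stopping value:
    stop now for reward g(x), or continue and discount the future."""
    theta = np.zeros(dim)
    for x, x_next in transitions:
        target = max(g(x), disc * (phi(x_next) @ theta))
        theta += step * (target - phi(x) @ theta) * phi(x)
    return theta

# Illustrative setup: an ergodic "chain" of i.i.d. Uniform(0,1) states,
# stopping reward g(x) = x, quadratic polynomial basis.
rng = np.random.default_rng(6)
xs = rng.random(50001)
phi = lambda x: np.array([1.0, x, x * x])
theta = sa_optimal_stopping(phi, lambda x: x,
                            zip(xs[:-1], xs[1:]), dim=3)
```

The fitted function phi(x) @ theta approximates the value of the stop-or-continue problem within the span of the chosen basis, which is the projected solution concept the Hilbert-space view makes precise.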
Multichannel Blind Deconvolution: FIR Matrix Algebra and Separation of Multipath Mixtures
, 1996
Abstract

Cited by 74 (0 self)
A general tool for multichannel and multipath problems is given in FIR matrix algebra. With Finite Impulse Response (FIR) filters (or polynomials) assuming the role played by complex scalars in traditional matrix algebra, we adapt standard eigenvalue routines, factorizations, decompositions, and matrix algorithms for use in multichannel/multipath problems. Using abstract algebra/group theoretic concepts, information theoretic principles, and the Bussgang property, methods of single channel filtering and source separation of multipath mixtures are merged into a general FIR matrix framework. Techniques developed for equalization may be applied to source separation and vice versa. Potential applications of these results lie in neural networks with feedforward memory connections, wideband array processing, and in problems with a multi-input, multi-output network having channels between each source and sensor, such as source separation. Particular applications of FIR polynomial matrix algebra...
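The core idea, FIR filters playing the role of scalars, can be made concrete with a toy product of FIR matrices: scalar multiplication becomes convolution (polynomial multiplication) and addition becomes coefficient-wise addition with zero-padding. This is an illustrative sketch of the algebra's basic operation, not the paper's factorization or decomposition routines.

```python
import numpy as np

def fir_add(a, b):
    """Add two FIR filters (1-D coefficient arrays) with zero-padding."""
    out = np.zeros(max(len(a), len(b)))
    out[:len(a)] += a
    out[:len(b)] += b
    return out

def fir_matmul(A, B):
    """Product of FIR matrices: entries are coefficient arrays, and the
    scalar product is replaced by convolution of filter coefficients."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[np.zeros(1) for _ in range(m)] for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for l in range(k):
                C[i][j] = fir_add(C[i][j], np.convolve(A[i][l], B[l][j]))
    return C

# 1x1 sanity check: (1 + z^-1)(1 - z^-1) = 1 - z^-2.
C = fir_matmul([[np.array([1.0, 1.0])]], [[np.array([1.0, -1.0])]])
```

A multipath channel from several sources to several sensors is exactly such an FIR matrix, which is why equalization and separation algorithms transfer between the two settings in this framework.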
On Adaptive Markov Chain Monte Carlo Algorithms
 Bernoulli
, 2005
Abstract

Cited by 74 (25 self)
We look at adaptive MCMC algorithms that generate stochastic processes based on sequences of transition kernels, where each transition kernel is allowed to depend on the past of the process. We show under certain conditions that the generated stochastic process is ergodic, with appropriate stationary distribution. We then consider the Random Walk Metropolis (RWM) algorithm with normal proposal and scale parameter σ. We propose an adaptive version of this algorithm that sequentially adjusts σ using a Robbins-Monro type algorithm in order to find the optimal scale parameter σopt as in Roberts et al. (1997). We show, under some additional conditions, that this adaptive algorithm is ergodic and that σn, the sequence of scale parameters obtained, converges almost surely to σopt. Our algorithm thus automatically determines and runs the optimal RWM scaling, with no manual tuning required. We close with a simulation example.
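A minimal sketch of the adaptive idea in one dimension: adjust the RWM proposal scale with a Robbins-Monro recursion driven by the acceptance probability, using a decaying gain so the adaptation diminishes over time. The N(0,1) target, the decay exponent, and the one-dimensional target acceptance rate of 0.44 (the well-known 0.234 figure applies in high dimension) are illustrative assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

def adaptive_rwm(logpi, x0, n_iter=20000, target_acc=0.44):
    """Random Walk Metropolis with Robbins-Monro adaptation of the
    proposal scale sigma toward a target acceptance rate."""
    x, log_sigma = x0, 0.0
    samples = np.empty(n_iter)
    for t in range(n_iter):
        prop = x + np.exp(log_sigma) * rng.normal()
        acc = min(1.0, np.exp(logpi(prop) - logpi(x)))
        if rng.random() < acc:
            x = prop
        # Robbins-Monro step: the decaying gain makes adaptation diminish,
        # which is what the ergodicity conditions require.
        log_sigma += (acc - target_acc) / (t + 1) ** 0.6
        samples[t] = x
    return samples, np.exp(log_sigma)

# Standard normal target; sigma is tuned automatically, no manual choice.
samples, sigma = adaptive_rwm(lambda x: -0.5 * x * x, 0.0)
```

Adapting on the log scale keeps sigma positive, and driving the recursion with the acceptance probability rather than a pre-tabulated σopt is what lets the sampler self-tune on targets whose optimal scale is unknown.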
Optimization via simulation: a review
 Annals of Operations Research
, 1994
Abstract

Cited by 67 (20 self)
We review techniques for optimizing stochastic discrete-event systems via simulation. We discuss both the discrete parameter case and the continuous parameter case, but concentrate on the latter which has dominated most of the recent research in the area. For the discrete parameter case, we focus on the techniques for optimization from a finite set: multiple-comparison procedures and ranking-and-selection procedures. For the continuous parameter case, we focus on gradient-based methods, including perturbation analysis, the likelihood ratio method, and frequency domain experimentation. For illustrative purposes, we compare and contrast the implementation of the techniques for some simple discrete-event systems such as the (s, S) inventory system and the GI/G/1 queue. Finally, we speculate on future directions for the field, particularly in the context of the rapid advances being made in parallel computing.
Adaptive Stochastic Approximation by the Simultaneous Perturbation Method
, 2000
Abstract

Cited by 65 (4 self)
Stochastic approximation (SA) has long been applied for problems of minimizing loss functions or root finding with noisy input information. As with all stochastic search algorithms, there are adjustable algorithm coefficients that must be specified, and that can have a profound effect on algorithm performance. It is known that choosing these coefficients according to an SA analog of the deterministic Newton-Raphson algorithm provides an optimal or near-optimal form of the algorithm. However, directly determining the required Hessian matrix (or Jacobian matrix for root finding) to achieve this algorithm form has often been difficult or impossible in practice. This paper presents a general adaptive SA algorithm that is based on a simple method for estimating the Hessian matrix, while concurrently estimating the primary parameters of interest. The approach applies in both the gradient-free optimization (Kiefer-Wolfowitz) and root-finding/stochastic gradient-based (Robbins-Monro) settings, and is based on the "simultaneous perturbation (SP)" idea introduced previously. The algorithm requires only a small number of loss function or gradient measurements per iteration, independent of the problem dimension, to adaptively estimate the Hessian and parameters of primary interest. Aside from introducing the adaptive SP approach, this paper presents practical implementation guidance, asymptotic theory, and a nontrivial numerical evaluation. Also included is a discussion and numerical analysis comparing the adaptive SP approach with the iterate-averaging approach to accelerated SA.
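The simultaneous perturbation idea at the core of this line of work can be sketched in its basic (non-adaptive) form: two loss measurements per iteration yield an estimate of the full gradient in any dimension, by perturbing all coordinates at once with a random sign vector. The quadratic test loss and the gains are illustrative (the decay exponents 0.602 and 0.101 are common practical choices); the adaptive version described in the abstract additionally estimates the Hessian from extra SP measurements, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(7)

def spsa_minimize(loss, theta, n_iter=2000, a=0.1, c=0.1):
    """Basic SPSA: estimate the gradient from two noisy loss values,
    measured at a simultaneous random perturbation of all coordinates."""
    for k in range(n_iter):
        ak = a / (k + 1) ** 0.602            # step-size gain sequence
        ck = c / (k + 1) ** 0.101            # perturbation size sequence
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher signs
        g_hat = (loss(theta + ck * delta)
                 - loss(theta - ck * delta)) / (2 * ck * delta)
        theta = theta - ak * g_hat           # Robbins-Monro style descent
    return theta

# Noisy quadratic with minimizer at [1, 2] (illustrative).
target = np.array([1.0, 2.0])
noisy_loss = lambda th: np.sum((th - target) ** 2) + rng.normal(0, 0.01)
theta = spsa_minimize(noisy_loss, np.zeros(2))
```

The two-measurement cost per iteration, regardless of dimension, is the property the abstract highlights; a finite-difference scheme would instead need two measurements per coordinate.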