Results 1–10 of 12
Universal prediction of individual sequences
IEEE Transactions on Information Theory, 1992
Cited by 158 (13 self)
Abstract—The problem of predicting the next outcome of an individual binary sequence using finite memory is considered. The finite-state predictability of an infinite sequence is defined as the minimum fraction of prediction errors that can be made by any finite-state (FS) predictor. It is proved that this FS predictability can be attained by universal sequential prediction schemes. Specifically, an efficient prediction procedure based on the incremental parsing procedure of the Lempel-Ziv data compression algorithm is shown to achieve asymptotically the FS predictability. Finally, some relations between compressibility and predictability are pointed out, and the predictability is proposed as an additional measure of the complexity of a sequence. Index Terms—Predictability, compressibility, complexity, finite-state machines, Lempel-Ziv algorithm.
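The paper's LZ-based scheme is involved; as a hedged illustration of the quantity being bounded, the sketch below measures the error fraction of a simple k-th order Markov (finite-state) majority predictor on a binary sequence. The function name and the majority rule are illustrative choices, not taken from the paper.

```python
from collections import defaultdict

def fs_prediction_errors(seq, k=2):
    """Fraction of prediction errors made by a k-th order majority predictor:
    predict the bit most often observed after the current length-k context."""
    counts = defaultdict(lambda: [0, 0])  # context -> [count of 0s, count of 1s]
    errors = 0
    for i, bit in enumerate(seq):
        ctx = tuple(seq[max(0, i - k):i])
        c0, c1 = counts[ctx]
        pred = 1 if c1 > c0 else 0        # majority vote, ties broken toward 0
        errors += (pred != bit)
        counts[ctx][bit] += 1             # update counts after predicting
    return errors / len(seq)
```

On a highly regular sequence such as 0101..., a first-order predictor quickly locks on and the error fraction vanishes, which is the flavor of the FS predictability being attained.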
Universal Portfolios
1996
Cited by 155 (4 self)
We exhibit an algorithm for portfolio selection that asymptotically outperforms the best stock in the market. Let x_i = (x_{i1}, x_{i2}, ..., x_{im})^t denote the performance of the stock market on day i, where x_{ij} is the factor by which the jth stock increases on day i. Let b_i = (b_{i1}, b_{i2}, ..., b_{im})^t, b_{ij} ≥ 0, Σ_j b_{ij} = 1, denote the proportion b_{ij} of wealth invested in the jth stock on day i. Then S_n = ∏_{i=1}^n b_i^t x_i is the factor by which wealth is increased in n trading days. Consider as a goal the wealth S_n* = max_b ∏_{i=1}^n b^t x_i that can be achieved by the best constant rebalanced portfolio chosen after the stock outcomes are revealed. It can be shown that S_n* exceeds the best stock, the Dow Jones average, and the Value Line index at time n. In fact, S_n* usually exceeds these quantities by an exponential factor. Let x_1, x_2, ..., be an arbitrary sequence of market vectors. It will be shown that the nonanticipating sequence ...
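As a small numeric check of these definitions (a sketch with a made-up two-stock market, not Cover's universal algorithm), the wealth S_n of a constant rebalanced portfolio can beat buy-and-hold of the best single stock in a volatile market:

```python
import numpy as np

def crp_wealth(X, b):
    """S_n = prod_i (b . x_i): wealth factor from rebalancing to the fixed
    proportions b at the start of every day, over the price-relative matrix X
    (rows = days i, columns = stocks j, entries x_ij)."""
    return float(np.prod(X @ b))

# Toy market: each day one stock doubles while the other halves.
X = np.array([[2.0, 0.5], [0.5, 2.0]] * 5)
best_stock = float(np.prod(X, axis=0).max())   # buy-and-hold the better stock
uniform = crp_wealth(X, np.array([0.5, 0.5]))  # rebalance to 50/50 daily
```

Each day the 50/50 portfolio grows by 0.5·2 + 0.5·0.5 = 1.25, so over 10 days it compounds to 1.25^10 ≈ 9.3, while either buy-and-hold position merely oscillates back to 1.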
Universal prediction
IEEE Transactions on Information Theory, 1998
Cited by 136 (11 self)
Abstract—This paper consists of an overview of universal prediction from an information-theoretic perspective. Special attention is given to the notion of probability assignment under the self-information loss function, which is directly related to the theory of universal data compression. Both the probabilistic setting and the deterministic setting of the universal prediction problem are described, with emphasis on the analogy and the differences between results in the two settings. Index Terms—Bayes envelope, entropy, finite-state machine, linear prediction, loss function, probability assignment, redundancy-capacity, stochastic complexity, universal coding, universal prediction.
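The link between probability assignment under self-information (log) loss and code length can be made concrete with a hedged sketch: the cumulative log-loss of the add-one (Laplace) sequential estimator on a binary string equals the ideal arithmetic-code length for that string. The estimator choice here is illustrative; the survey covers a much broader family of assignments.

```python
import math

def laplace_logloss(bits):
    """Cumulative self-information loss, in bits, of the Laplace add-one
    sequential probability assignment on a binary sequence. This total
    equals the ideal code length of the corresponding arithmetic code."""
    n0 = n1 = 0
    loss = 0.0
    for b in bits:
        p1 = (n1 + 1) / (n0 + n1 + 2)          # P(next bit = 1 | counts so far)
        loss += -math.log2(p1 if b == 1 else 1.0 - p1)
        if b == 1:
            n1 += 1
        else:
            n0 += 1
    return loss
```

For the all-zeros string of length n the per-step losses telescope to exactly log2(n+1) bits, far below the n bits of a uniform assignment, illustrating how good sequential probability assignment is good compression.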
Universal Discrete Denoising: Known Channel
IEEE Trans. Inform. Theory, 2003
Cited by 79 (32 self)
A discrete denoising algorithm estimates the input sequence to a discrete memoryless channel (DMC) based on the observation of the entire output sequence. For the case in which the DMC is known and the quality of the reconstruction is evaluated with a given single-letter fidelity criterion, we propose a discrete denoising algorithm that does not assume knowledge of statistical properties of the input sequence. Yet, the algorithm is universal in the sense of asymptotically performing as well as the optimum denoiser that knows the input sequence distribution, which is only assumed to be stationary and ergodic. Moreover, the algorithm is universal also in a semi-stochastic setting, in which the input is an individual sequence and the randomness is due solely to the channel noise.
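The actual DUDE rule weighs context counts against the known channel matrix and the loss function; the stripped-down two-pass sketch below (all names illustrative, binary alphabet hardcoded) conveys only the "count contexts over the whole output, then decide each symbol from its context statistics" structure.

```python
from collections import Counter

def context_denoise(z, k=1):
    """Two-pass context-majority sketch for a binary sequence z.
    Pass 1: count the observed center symbol for every (left, right)
    context of width k. Pass 2: replace each interior symbol by the
    majority center symbol seen in its context."""
    counts = Counter()
    n = len(z)
    for i in range(k, n - k):
        ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + 1 + k]))
        counts[(ctx, z[i])] += 1
    out = list(z)
    for i in range(k, n - k):
        ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + 1 + k]))
        out[i] = max((0, 1), key=lambda s: counts[(ctx, s)])
    return out
```

On an output that is all zeros except one isolated flip, the flipped position's context overwhelmingly votes for 0, so the sketch corrects it, mirroring how two-pass context statistics can undo sparse channel noise.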
Universal filtering via prediction
IEEE Trans. Inform. Theory, 2007
Cited by 6 (3 self)
We consider the filtering problem, where a finite-alphabet individual sequence is corrupted by a discrete memoryless channel, and the goal is to causally estimate each sequence component based on the past and present noisy observations. We establish a correspondence between the filtering problem and the problem of prediction of individual sequences, which leads to the following result: given an arbitrary finite set of filters, there exists a filter which performs, with high probability, essentially as well as the best in the set, regardless of the underlying noiseless individual sequence. We use this relationship between the problems to derive a filter guaranteed to attain the “finite-state filterability” of any individual sequence by leveraging results from the prediction problem.
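The "as well as the best in a finite set" guarantee is the hallmark of exponential-weights prediction. The generic sketch below is the standard expert-advice algorithm, not the paper's filtering construction; the function name, the absolute-loss choice, and the expert interface are all assumptions for illustration.

```python
import math

def exp_weights(seq, experts, eta=2.0):
    """Combine a finite set of predictors (callables mapping the history
    prefix to a prediction in [0,1]) with exponential weights; returns the
    mixture's cumulative absolute loss. The standard regret bound says this
    exceeds the best expert's loss by at most (ln K)/eta + eta*n/8."""
    K = len(experts)
    w = [1.0] * K
    total_loss = 0.0
    for t, y in enumerate(seq):
        preds = [e(seq[:t]) for e in experts]
        W = sum(w)
        p = sum(wi * pi for wi, pi in zip(w, preds)) / W  # weighted average
        total_loss += abs(p - y)
        for j in range(K):                                # penalize bad experts
            w[j] *= math.exp(-eta * abs(preds[j] - y))
    return total_loss
```

With two constant experts (always 0, always 1) and a constant sequence, the weight on the wrong expert decays geometrically, so the mixture's total loss stays bounded by a small constant while the worse expert's loss grows linearly.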
Asymptotic efficiency of simple decisions for the compound decision problem
Cited by 4 (3 self)
Abstract: We consider the compound decision problem of estimating a vector of n parameters, known up to a permutation, corresponding to n independent observations, and discuss the difference between two symmetric classes of estimators. The first and larger class is restricted to the set of all permutation-invariant estimators. The second class is restricted further to simple symmetric procedures, that is, estimators in which each parameter is estimated by a function of the corresponding observation alone. We show that under mild conditions, the minimal total squared-error risks over these two classes are asymptotically equivalent up to an essentially O(1) difference.
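For intuition, under a Gaussian-noise assumption (an illustration not taken from the paper; the function and names are made up), the natural simple symmetric rule when the parameter multiset is known is the posterior mean under the empirical prior on those values, applied to each observation separately:

```python
import numpy as np

def simple_symmetric_estimate(y, theta_values, sigma=1.0):
    """Simple symmetric rule: estimate each parameter from its own observation
    alone, as the posterior mean under the empirical prior on the known value
    multiset, assuming y_i ~ N(theta_i, sigma^2)."""
    theta = np.asarray(theta_values, dtype=float)
    y = np.asarray(y, dtype=float)
    # w[i, j] ∝ exp(-(y_i - theta_j)^2 / (2 sigma^2)): likelihood of value j
    w = np.exp(-((y[:, None] - theta[None, :]) ** 2) / (2 * sigma ** 2))
    return (w * theta[None, :]).sum(axis=1) / w.sum(axis=1)
```

Note the rule never looks at the other observations, which is exactly the restriction the second class imposes; the paper's result says this costs essentially nothing asymptotically relative to full permutation invariance.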
Lower Limits of Discrete Universal Denoising
© 2006 Hewlett-Packard Development Company, L.P.
Keywords: denoising, universal algorithms, individual sequences, discrete memoryless channels. In the spirit of results on universal compression, we compare the performance of universal denoisers on discrete memoryless channels to the best performance obtained by a kth-order omniscient denoiser, namely one that is tuned to the transmitted noiseless sequence. We show that the additional loss incurred in the worst case by any universal denoiser on a length-n sequence grows at least like c·n^(−1/2), where c is a constant depending on the channel parameters and the loss function. This shows that, for fixed k, the additional loss incurred by the Discrete Universal Denoiser (DUDE) derived by Weissman et al. exceeds the best possible by no more than a constant multiplicative factor. Furthermore, we compare universal denoisers to denoisers that are aware of the distribution of the transmitted noiseless sequence. We show that, even for this weaker target loss, for any universal denoiser there exists some i.i.d. noiseless distribution whose optimum expected loss is lower than that incurred by the universal denoiser by n^(−Ω(·)) ...