Results 1–10 of 10
Universal switching linear least squares prediction
IEEE Trans. on Signal Processing, 2007
Cited by 15 (7 self)
Abstract—In this paper, we consider sequential regression of individual sequences under the square-error loss. We focus on the class of switching linear predictors that can segment a given individual sequence into an arbitrary number of blocks, within each of which a fixed linear regressor is applied. Using a competitive algorithm framework, we construct sequential algorithms that are competitive with the best linear regression algorithms for any segmenting of the data, as well as the best partitioning of the data into any fixed number of segments, where both the segmenting of the data and the linear predictors within each segment can be tuned to the underlying individual sequence. The algorithms do not require knowledge of the data length or of the number of piecewise linear segments used by the members of the competing class, yet can achieve the performance of the best member that can choose both the partitioning of the sequence and the best regressor within each segment. We use a transition diagram (F. M. J. Willems, 1996) to compete with an exponential number of algorithms in the class, using complexity that is linear in the data length. The regret with respect to the best member is O(ln(n)) per transition for not knowing the best transition times and O(ln(n)) for not knowing the best regressor within each segment, where n is the data length. We construct lower bounds on the performance of any sequential algorithm, demonstrating a form of min–max optimality under certain settings. We also consider the case where the members are restricted to choose the best algorithm in each segment from a finite collection of candidate algorithms. Performance on synthetic and real data is given, along with a Matlab implementation of the universal switching linear predictor. Index Terms—Piecewise continuous, prediction, transition diagram, universal.
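As a rough illustration of the switching idea described above (not the paper's exact algorithm; the parameter names and the loss-weighted mixture below are illustrative), one can keep a ridge-regularized linear model per candidate segment start and mix their predictions under a switching prior:

```python
import numpy as np

def switching_linear_predictor(x, order=2, ridge=1.0,
                               p_switch=0.05, eta=0.5):
    """Illustrative sketch: one ridge linear model per candidate segment
    start; weights decay exponentially in squared loss, and a fraction
    p_switch of the mass leaks to a freshly started segment each step.
    This naive bookkeeping is O(n^2) overall; the paper's transition
    diagram achieves complexity linear in the data length."""
    n = len(x)
    preds = np.zeros(n)
    segs = []  # per segment: [A, b, weight] for ridge regression
    for t in range(n):
        u = np.array([x[t - k] if t >= k else 0.0
                      for k in range(1, order + 1)])
        w_new = p_switch if t > 0 else 1.0   # prior mass on a switch at t
        for s in segs:
            s[2] *= 1.0 - p_switch
        segs.append([ridge * np.eye(order), np.zeros(order), w_new])
        # per-segment ridge predictions, mixed by current weights
        yhats = np.array([np.linalg.solve(A, b) @ u for A, b, _ in segs])
        ws = np.array([s[2] for s in segs])
        preds[t] = (ws / ws.sum()) @ yhats
        for s, yh in zip(segs, yhats):       # observe x[t], update
            s[2] *= np.exp(-eta * (x[t] - yh) ** 2)
            s[0] += np.outer(u, u)
            s[1] += x[t] * u
        tot = sum(s[2] for s in segs)
        for s in segs:
            s[2] /= tot                      # renormalize weights
    return preds
```

The renormalization step keeps the weights numerically stable over long sequences; a production version would also prune or merge segments, which is essentially what the reduced transition diagram does.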
Factor Graphs for Universal Portfolios
Cited by 2 (0 self)
Abstract—We consider the sequential portfolio investment problem. Building on results in signal processing, machine learning, and other areas, we combine the insights of Cover and Ordentlich’s side information portfolio with those of Blum and Kalai’s transaction costs algorithm to construct one that performs well under transaction costs while taking advantage of side information. We introduce factor graphs as a computational tool for analysis and design of universal (low regret) algorithms, and develop our algorithm with this insight. Finally, we demonstrate that, in contrast to other algorithms, our portfolio performs well over the full range of costs. Index Terms—universal, portfolio, investment, transaction costs, piecewise models, factor graph, sum-product
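For context, the Cover-style universal portfolio that this work builds on (shown here without side information or transaction costs, so only a toy version of what the paper combines) can be sketched for two assets as a wealth-weighted mixture of constant-rebalanced portfolios:

```python
import numpy as np

def universal_portfolio(price_relatives, grid=101):
    """Sketch of Cover's universal portfolio for two assets: average
    over a grid of constant-rebalanced portfolios (CRPs), each weighted
    by the wealth it has accumulated so far. `grid` is illustrative."""
    bs = np.linspace(0.0, 1.0, grid)        # fraction held in asset 1
    wealth = np.ones(grid)                  # wealth of each CRP
    total = 1.0
    for x1, x2 in price_relatives:          # per-period price relatives
        # this period's allocation: wealth-weighted mean of the CRPs
        b = (wealth * bs).sum() / wealth.sum()
        total *= b * x1 + (1.0 - b) * x2
        wealth *= bs * x1 + (1.0 - bs) * x2  # each CRP compounds
    return total
```

The paper's factor-graph view organizes exactly this kind of mixture computation, which is what makes side information states and transaction-cost terms tractable to combine.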
Competitive Prediction Under Additive Noise
Cited by 1 (0 self)
Abstract—In this correspondence, we consider sequential prediction of a real-valued individual signal from its past noisy samples, under square-error loss. We refrain from making any stochastic assumptions on the generation of the underlying desired signal and try to achieve uniformly good performance for any deterministic and arbitrary individual signal. We investigate this problem in a competitive framework, where we construct algorithms that perform as well as the best algorithm in a competing class of algorithms for each desired signal. Here, the best algorithm in the competition class can be tuned to the underlying desired clean signal even before processing any of the data. Three different frameworks under additive noise are considered: the class of a finite number of algorithms; the class of all linear predictors of a given fixed order; and finally the class of all switching linear predictors of that fixed order. Index Terms—Additive noise, competitive, real valued, sequential decisions, universal prediction.
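The first framework, competing against a finite class, is commonly handled with an exponentially weighted mixture; a minimal sketch (the learning rate and expert interface below are assumptions, not the paper's construction):

```python
import numpy as np

def mixture_predict(y_noisy, experts, eta=0.5):
    """Sketch of competing with a finite class of algorithms: each
    expert maps the observed (noisy) past to a prediction, and the
    mixture weights decay exponentially in squared loss on the noisy
    samples."""
    w = np.ones(len(experts))
    preds = []
    for t, y in enumerate(y_noisy):
        ps = np.array([f(y_noisy[:t]) for f in experts])
        preds.append(w @ ps / w.sum())       # weighted average prediction
        w *= np.exp(-eta * (ps - y) ** 2)    # update from the noisy sample
    return np.array(preds)
```

Note that the weights are driven by the loss on the noisy samples, while the competition the abstract describes is against performance on the clean signal; bridging that gap is the technical content of the paper.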
Nonlinear Turbo Equalization Using Context Trees
Abstract—In this paper, we study adaptive nonlinear turbo equalization to model the nonlinear dependency of a linear minimum mean square error (MMSE) equalizer on soft information from the decoder. To accomplish this, we introduce piecewise linear models based on context trees that can adaptively choose both the partition regions as well as the equalizer coefficients in each region independently, with the computational complexity of a single adaptive linear equalizer. This approach is guaranteed to asymptotically achieve the performance of the best piecewise linear equalizer that can choose both its regions as well as its filter parameters based on observing the whole data sequence in advance.
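A much-simplified sketch of the piecewise idea (a fixed two-region partition of the decoder's soft information with one LMS filter per region; the paper's context-tree method instead adapts the partition itself, and all names here are illustrative):

```python
import numpy as np

def piecewise_lms_equalizer(u, soft, d, mu=0.01, thresh=0.0):
    """Toy piecewise linear equalizer: the decoder's soft information
    selects one of two regions (threshold at `thresh`), and each region
    has its own LMS-adapted linear filter. `u` is an (n, m) array of
    regressor vectors, `soft` the soft inputs, `d` the desired output."""
    m = u.shape[1]
    w = [np.zeros(m), np.zeros(m)]       # one weight vector per region
    out = np.zeros(len(d))
    for t in range(len(d)):
        r = 0 if soft[t] < thresh else 1       # region from soft info
        out[t] = w[r] @ u[t]
        w[r] += mu * (d[t] - out[t]) * u[t]    # LMS update in that region
    return out
```

A context tree generalizes this by maintaining a nested hierarchy of such regions and mixing over all prunings, which is what lets the partition itself adapt at linear complexity.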
Performance Analysis of Mixture Approaches and Tracking Performance of Adaptive Filter using Adaptive Neural Network
This paper mainly concentrates on different mixture structures, which include affine and convex combinations of several parallel running adaptive filters. The mixture structures are investigated using their final MSE values, and the tracking of the nonlinear system is done using an ANN model that updates the filter weights using nonlinear learning strategies (it uses stochastic gradient descent to update the filter weights based on the MSEs of the mixture structures). The mixture structures greatly improve the convergence and performance of the constituent filters compared to traditional adaptive methods. The mixture structures employed in this paper can be applied to constituent filters that employ different adaptation algorithms. We describe an adaptive neural network model that updates the weights of the filter using nonlinear methods.
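A standard convex-combination structure of this kind (a fast and a slow LMS filter, with the mixing weight lam = sigmoid(a) adapted by stochastic gradient on the mixture's squared error) can be sketched as follows; the step sizes are illustrative:

```python
import numpy as np

def convex_mixture_lms(u, d, mu1=0.1, mu2=0.01, mu_a=0.5):
    """Sketch of a convex combination of two parallel LMS filters.
    `u` is an (n, m) array of input regressors, `d` the desired signal.
    The mixing weight lam = sigmoid(a) is adapted by a stochastic
    gradient step on the combined squared error (the factor 2 from the
    derivative is absorbed into mu_a)."""
    m = u.shape[1]
    w1, w2, a = np.zeros(m), np.zeros(m), 0.0
    out = np.zeros(len(d))
    for t in range(len(d)):
        y1, y2 = w1 @ u[t], w2 @ u[t]
        lam = 1.0 / (1.0 + np.exp(-a))
        out[t] = lam * y1 + (1.0 - lam) * y2
        e = d[t] - out[t]
        # d(e^2)/da is proportional to -e*(y1 - y2)*lam*(1 - lam)
        a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)
        w1 += mu1 * (d[t] - y1) * u[t]   # each filter adapts on its own error
        w2 += mu2 * (d[t] - y2) * u[t]
    return out
```

Because each constituent filter adapts on its own error, the combination can be wrapped around filters with entirely different adaptation algorithms, as the abstract notes.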
Universal Noiseless Compression for Noisy Data
Abstract — We study universal compression for discrete data sequences that were corrupted by noise. We show that while, as expected, there exist many cases in which the entropy of these sequences increases from that of the original data, somewhat surprisingly and counterintuitively, the universal coding redundancy of such sequences cannot increase compared to the original data. We derive conditions that guarantee that this redundancy does not decrease asymptotically (in first order) from the original sequence redundancy in the stationary memoryless case. We then provide bounds on the redundancy for coding finite-length (large) noisy blocks generated by stationary memoryless sources and corrupted by some specific memoryless channels. Finally, we propose a sequential probability estimation method that can be used to compress binary data corrupted by some noisy channel. While there is much benefit in using this method in compressing short blocks of noise-corrupted data, the new method is more general and allows sequential compression of binary sequences for which the probability of a bit is known to be limited within any given interval (not necessarily between 0 and 1). Additionally, this method has many different applications, including prediction, sequential channel estimation, and others.
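One simple way to picture an interval-constrained sequential probability estimate (this clipping construction is an illustration, not the paper's more refined estimator) is a Krichevsky–Trofimov estimate restricted to a known interval:

```python
def constrained_kt(bits, lo=0.2, hi=0.8):
    """Sketch: Krichevsky-Trofimov sequential estimate of P(next bit = 1)
    for a binary source, clipped so the estimate stays inside [lo, hi].
    Clipping is one naive way to use the prior knowledge that the bit
    probability lies in a known interval."""
    ones = zeros = 0
    probs = []
    for b in bits:
        p1 = (ones + 0.5) / (ones + zeros + 1.0)  # KT estimate of P(1)
        p1 = min(max(p1, lo), hi)                 # enforce the interval
        probs.append(p1)
        ones += b
        zeros += 1 - b
    return probs
```

Feeding such sequential probabilities to an arithmetic coder yields a universal code; the interval constraint is what models bits observed through a known noisy channel.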
UNIVERSAL PIECEWISE LINEAR REGRESSION OF INDIVIDUAL SEQUENCES: LOWER BOUND
We consider universal piecewise linear regression of real-valued bounded sequences under the squared loss function. In this setting, we present a lower bound on the regret of a universal sequential piecewise linear regressor compared to the best piecewise linear regressor that has access to the entire sequence in advance. This lower bound is tight with the corresponding upper bounds, suggesting a min-max optimality of the sequential regressor for every individual bounded sequence. Index Terms — Regression, piecewise linear, universal
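The regret quantity being lower bounded can be written as follows, where the symbols ($\hat{x}_t$ for the sequential prediction, $\mathbf{u}_t$ for the regressor vector, $\mathbf{w}_{i(t)}$ for the linear weights of the segment active at time $t$) are illustrative and the paper's own notation may differ:

```latex
R_n \;=\; \sum_{t=1}^{n}\bigl(x_t-\hat{x}_t\bigr)^2
\;-\;\min_{\text{segmentations},\,\{\mathbf{w}_i\}}
\sum_{t=1}^{n}\bigl(x_t-\mathbf{w}_{i(t)}^{\mathsf{T}}\mathbf{u}_t\bigr)^2
```

The inner minimum is over both the partition of $\{1,\dots,n\}$ into segments and the linear regressor used inside each segment, i.e., over the batch competitor that sees the whole sequence in advance.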
A Comprehensive Approach to Universal Piecewise Nonlinear Regression Based on Trees