Hidden Markov processes
IEEE Trans. Inform. Theory, 2002
Cited by 170 (3 self)
Abstract—An overview of statistical and information-theoretic aspects of hidden Markov processes (HMPs) is presented. An HMP is a discrete-time finite-state homogeneous Markov chain observed through a discrete-time memoryless invariant channel. In recent years, the work of Baum and Petrie on finite-state finite-alphabet HMPs was expanded to HMPs with finite as well as continuous state spaces and a general alphabet. In particular, statistical properties and ergodic theorems for relative entropy densities of HMPs were developed. Consistency and asymptotic normality of the maximum-likelihood (ML) parameter estimator were proved under some mild conditions. Similar results were established for switching autoregressive processes. These processes generalize HMPs. New algorithms were developed for estimating the state, parameter, and order of an HMP, for universal coding and classification of HMPs, and for universal decoding of hidden Markov channels. These and other related topics are reviewed in this paper. Index Terms—Baum–Petrie algorithm, entropy ergodic theorems, finite-state channels, hidden Markov models, identifiability, Kalman filter, maximum-likelihood (ML) estimation, order estimation, recursive parameter estimation, switching autoregressive processes, Ziv inequality.
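The survey's central object can be made concrete with a short sketch: the forward algorithm computing the likelihood of an observation sequence from a finite-state Markov chain observed through a memoryless channel. All matrix values below are illustrative toy numbers, not taken from the paper.

```python
# A hidden Markov process: a finite-state chain (initial distribution pi,
# transition matrix A) observed through a memoryless channel (emission
# matrix B). The forward recursion computes P(observations) exactly.

def hmp_likelihood(obs, pi, A, B):
    """Forward algorithm: probability of obs under the HMP (pi, A, B)."""
    n_states = len(pi)
    # initialise with the first observation
    alpha = [pi[s] * B[s][obs[0]] for s in range(n_states)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[r] * A[r][s] for r in range(n_states)) * B[s][o]
            for s in range(n_states)
        ]
    return sum(alpha)

# toy two-state chain observed through a binary symmetric channel
pi = [0.6, 0.4]
A = [[0.9, 0.1], [0.2, 0.8]]
B = [[0.95, 0.05], [0.05, 0.95]]   # crossover probability 0.05
p = hmp_likelihood([0, 0, 1], pi, A, B)
```

A useful sanity check is that the likelihoods of all observation sequences of a fixed length sum to one.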
Linear minimax regret estimation of deterministic parameters with bounded data uncertainties
IEEE Trans. Signal Process., 2004
Cited by 34 (26 self)
Abstract—We develop a new linear estimator for estimating an unknown parameter vector x in a linear model in the presence of bounded data uncertainties. The estimator is designed to minimize the worst-case regret over all bounded data vectors, namely, the worst-case difference between the mean-squared error (MSE) attainable using a linear estimator that does not know the true parameters x and the optimal MSE attained using a linear estimator that knows x. We demonstrate through several examples that the minimax regret estimator can significantly improve the performance over the conventional least-squares estimator, as well as over several other least-squares alternatives. Index Terms—Deterministic parameter estimation, linear estimation, mean-squared error, bounded data uncertainties, minimax estimation, regret.
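The regret criterion in this abstract can be illustrated in a toy scalar setting (a sketch, not the paper's matrix construction): observe y = h·x + noise with |x| bounded, and compare the worst-case regret of the least-squares gain against a minimax-regret gain found by grid search. All constants are hypothetical.

```python
# Scalar illustration of minimax regret estimation: y = h*x + noise
# (variance sigma2), linear rules xhat = c*y, parameter bounded by |x| <= L.
h, sigma2, L = 1.0, 1.0, 2.0

def mse(c, x):
    # E[(c*y - x)^2] for the linear rule xhat = c*y
    return (c * h - 1.0) ** 2 * x ** 2 + c ** 2 * sigma2

def opt_mse(x):
    # MSE of the best linear rule that knows x (oracle benchmark)
    return sigma2 * x ** 2 / (h ** 2 * x ** 2 + sigma2)

xs = [i * L / 200 for i in range(201)]   # x enters only via x^2, so x >= 0 suffices

def worst_regret(c):
    return max(mse(c, x) - opt_mse(x) for x in xs)

cs = [i / 500 for i in range(501)]       # candidate gains in [0, 1]
c_star = min(cs, key=worst_regret)

# least squares corresponds to the gain c = 1/h; compare worst-case regrets
regret_ls = worst_regret(1.0 / h)
regret_mm = worst_regret(c_star)
```

The minimax-regret gain shrinks toward zero relative to least squares, and its worst-case regret is strictly smaller, which is the qualitative effect the abstract reports.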
A competitive minimax approach to robust estimation in linear models
 Institute of Technology
Cited by 25 (16 self)
Abstract—We consider the problem of estimating, in the presence of model uncertainties, a random vector x that is observed through a linear transformation H and corrupted by additive noise. We first assume that both the covariance matrix of x and the transformation H are not completely specified and develop the linear estimator that minimizes the worst-case mean-squared error (MSE) across all possible covariance matrices and transformations H in the region of uncertainty. Although the minimax approach has enjoyed widespread use in the design of robust methods, we show that its performance is often unsatisfactory. To improve the performance over the minimax MSE estimator, we develop a competitive minimax approach for the case where H is known but the covariance of x is subject to uncertainties, and seek the linear estimator that minimizes the worst-case regret, namely, the worst-case difference between the MSE attainable using a linear estimator, ignorant of the signal covariance, and the optimal MSE attained using a linear estimator that knows the signal covariance. The linear minimax regret estimator is shown to be equal to a minimum MSE (MMSE) estimator corresponding to a certain choice of signal covariance that depends explicitly on the uncertainty region. We demonstrate, through examples, that the minimax regret approach can improve the performance over both the minimax MSE approach and a "plug-in" approach, in which the estimator is chosen to be equal to the MMSE estimator with an estimated covariance matrix replacing the true unknown covariance. We then show that although the optimal minimax regret estimator in the case in which the signal and noise are jointly Gaussian is nonlinear, we often do not lose much by restricting attention to linear estimators. Index Terms—Covariance uncertainty, linear estimation, minimax mean-squared error, regret, robust estimation.
Detection of Hiding in the Least Significant Bit, 2003
Cited by 16 (2 self)
We consider the problem of detecting hiding in the least significant bit (LSB) of images. Since the hiding rate is not known, this is a composite hypothesis testing problem. We show that under a mild condition on the host probability mass function (PMF), the optimal composite hypothesis testing problem is solved by a related optimal simple hypothesis testing problem. We then develop practical tests based on the optimal test and exhibit their superiority over Stegdetect, a popular steganalysis method used in practice.
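The detection problem can be illustrated with a toy sketch (not the paper's optimal composite test, which is built from the host PMF condition described above): LSB hiding at full rate replaces each least significant bit with a message bit, which evens out the counts of each value pair (2i, 2i+1), and a chi-square-style statistic on those pairs exposes this. The host distribution below is a hypothetical toy PMF.

```python
# Toy LSB steganalysis sketch: clean hosts have unbalanced counts within
# each LSB pair (2i, 2i+1); full-rate LSB embedding balances them.
import random

random.seed(0)

def sample_host(n):
    # skewed hypothetical host PMF over pixel values {0, 1, 2, 3}
    return random.choices([0, 1, 2, 3], weights=[0.4, 0.1, 0.4, 0.1], k=n)

def embed_lsb(pixels):
    # replace every LSB with an independent fair message bit (hiding rate 1)
    return [(p & ~1) | random.getrandbits(1) for p in pixels]

def pair_statistic(pixels):
    # large when pair counts are unbalanced (typical clean host),
    # small after full-rate LSB embedding
    counts = [pixels.count(v) for v in range(4)]
    stat = 0.0
    for i in (0, 2):
        a, b = counts[i], counts[i + 1]
        if a + b:
            stat += (a - b) ** 2 / (a + b)
    return stat

host = sample_host(20000)
stego = embed_lsb(host)
s_clean, s_stego = pair_statistic(host), pair_statistic(stego)
```

On this toy host the statistic collapses after embedding, which is the basic asymmetry any LSB detector, including the optimal tests of the paper, exploits.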
Minimax Universal Decoding with an Erasure Option
IEEE Trans. Inform. Theory, 2007
Cited by 6 (4 self)
Motivated by applications of rateless coding, decision feedback, and automatic repeat request (ARQ), we study the problem of universal decoding for unknown channels in the presence of an erasure option. Specifically, we harness the competitive minimax methodology developed in earlier studies in order to derive a universal version of Forney's classical erasure/list decoder, which, in the erasure case, optimally trades off between the probability of erasure and the probability of undetected error. The proposed universal erasure decoder guarantees universal achievability of a certain fraction ξ of the optimum error exponents of these probabilities (in a sense to be made precise in the sequel). A single-letter expression for ξ, which depends solely on the coding rate and the Neyman–Pearson threshold (to be defined), is provided. The example of the binary symmetric channel is studied in full detail, and some conclusions are drawn.
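Forney's classical erasure-option rule, the baseline that the universal decoder above emulates, can be sketched directly: decode message m only when its likelihood exceeds the sum of all competitors' likelihoods by the factor e^(nT), and otherwise declare an erasure. The codebook, crossover probability, and threshold below are toy values.

```python
# Forney's erasure-option decoding rule over a binary symmetric channel.
import math

def bsc_likelihood(x, y, p):
    # P(y | x) for a BSC with crossover probability p
    d = sum(xi != yi for xi, yi in zip(x, y))
    return (p ** d) * ((1 - p) ** (len(x) - d))

def forney_decode(y, codebook, p, T):
    n = len(y)
    like = [bsc_likelihood(c, y, p) for c in codebook]
    for m, lm in enumerate(like):
        rest = sum(like) - lm
        # accept m only if its likelihood dominates all rivals by e^{nT}
        if lm >= math.exp(n * T) * rest:
            return m        # confident decision
    return None             # erasure

codebook = [(0, 0, 0, 0, 0), (1, 1, 1, 1, 1), (1, 0, 1, 0, 1)]
m_clean = forney_decode((0, 0, 0, 0, 0), codebook, p=0.05, T=0.5)
m_noisy = forney_decode((1, 0, 0, 1, 0), codebook, p=0.05, T=0.5)  # ambiguous
```

Raising T erases more often but makes undetected errors rarer; the threshold parameterizes exactly the erasure/undetected-error tradeoff discussed in the abstract.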
A Neyman–Pearson Approach to Universal Erasure and List Decoding
Cited by 5 (1 self)
Abstract—When information is to be transmitted over an unknown, possibly unreliable channel, an erasure option at the decoder is desirable. Using constant-composition random codes, we propose a generalization of Csiszár and Körner's maximum mutual information (MMI) decoder with an erasure option for discrete memoryless channels. The new decoder is parameterized by a weighting function that is designed to optimize the fundamental tradeoff between undetected-error and erasure exponents for a compound class of channels. The class of weighting functions may be further enlarged to optimize a similar tradeoff for list decoders; in that case, undetected-error probability is replaced with the average number of incorrect messages on the list. Explicit solutions are identified. The optimal exponents admit simple expressions in terms of the sphere-packing exponent, at all rates below capacity. For small erasure exponents, these expressions coincide with those derived by Forney (1968) for symmetric channels, using maximum a posteriori decoding. Thus, for those channels at least, ignorance of the channel law is inconsequential. Conditions for optimality of the Csiszár–Körner rule and of the simpler empirical-mutual-information thresholding rule are identified. The error exponents are evaluated numerically for the binary symmetric channel. Index Terms—Constant-composition codes, erasures, error exponents, list decoding, maximum mutual information (MMI) decoder, method of types, Neyman–Pearson hypothesis testing, random codes, sphere packing, universal decoding.
On optimum parameter modulation–estimation from a large deviations perspective
Submitted to IEEE Trans. Inform. Theory, 2012
Cited by 3 (1 self)
We consider the problem of jointly optimum modulation and estimation of a real-valued random parameter, conveyed over an additive white Gaussian noise (AWGN) channel, where the performance metric is the large deviations behavior of the estimator, namely, the exponential decay rate (as a function of the observation time) of the probability that the estimation error would exceed a certain threshold. Our basic result is an exact characterization of the fastest achievable exponential decay rate among all possible modulator–estimator (transmitter–receiver) pairs, where the modulator is limited only in signal power, but not in bandwidth. This exponential rate turns out to be given by the reliability function of the AWGN channel. We also discuss several ways to achieve this optimum performance; one of them is based on quantization of the parameter, followed by optimum channel coding and modulation, which gives rise to a separation-based transmitter, if one views this setting from the perspective of joint source–channel coding. This is in spite of the fact that, in general, when error exponents are considered, the source–channel separation theorem does not hold. We also discuss several observations, modifications, and extensions of this result in several directions, including other channels and the case of multidimensional parameter vectors. One of our findings concerning the latter is that there is an abrupt threshold effect in the dimensionality of the parameter vector: below a certain critical dimension, the probability of excess estimation error may still decay exponentially, but beyond this value, it must converge to unity.
Universally Attainable Error Exponents for Rate-Constrained Denoising of Noisy Sources, 2002
Cited by 2 (2 self)
Consider the problem of rate-constrained reconstruction of a finite-alphabet discrete memoryless signal X^n = (X_1, ..., X_n), based on a noise-corrupted observation sequence Z^n, which is the finite-alphabet output of a Discrete Memoryless Channel (DMC) whose input is X^n. Suppose that there is some uncertainty in the source distribution, in the channel characteristics, or in both. Equivalently, suppose that the distribution of the pairs (X_i, Z_i), rather than being completely known, is only known to belong to a set Θ. Suppose further that the relevant performance criterion is the probability of excess distortion, i.e., letting X̂^n denote the reconstruction, we are interested in the behavior of P_θ(Δ(X^n, X̂^n) > D), where Δ is a (normalized) block distortion induced by a single-letter distortion measure and P_θ denotes the probability measure corresponding to the case where the pairs (X_i, Z_i) are distributed according to θ, θ ∈ Θ.
Order Estimation and Model Selection, 2003
Cited by 2 (0 self)
... reason why source coding concepts and techniques have become a standard tool in the area. This chapter presents four kinds of results: a first very general consistency result in a Bayesian setting provides hints about the ideal penalties that could be used in penalized maximum likelihood order estimation. Then we provide a general construction for strongly consistent order estimators based on universal coding arguments. The third main result reports a recent tour de force by Csiszár and Shields (2000), who show that the Bayesian Information Criterion provides a strongly consistent Markov order estimator. We conclude by presenting a general framework for analyzing the Bahadur efficiency of order estimation procedures, following the line of Gassiat and Boucheron (to appear).
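The BIC Markov order estimator credited above to Csiszár and Shields can be sketched for a binary alphabet (a toy simulation; the penalty is half the free-parameter count times log n, and all numerical choices are illustrative):

```python
# Penalized maximum likelihood (BIC) estimation of Markov chain order.
import math
import random

random.seed(1)

def markov_loglik(seq, k):
    # maximized log-likelihood of seq under a binary order-k Markov chain,
    # using empirical transition counts per length-k context
    counts = {}
    for i in range(k, len(seq)):
        ctx, sym = tuple(seq[i - k:i]), seq[i]
        counts.setdefault(ctx, [0, 0])[sym] += 1
    ll = 0.0
    for c0, c1 in counts.values():
        tot = c0 + c1
        for c in (c0, c1):
            if c:
                ll += c * math.log(c / tot)
    return ll

def bic_order(seq, max_order):
    n = len(seq)
    def score(k):
        free_params = 2 ** k        # 2^k contexts, 1 free probability each
        return markov_loglik(seq, k) - 0.5 * free_params * math.log(n)
    return max(range(max_order + 1), key=score)

# simulate a strongly first-order binary chain (stay probability 0.9)
seq = [0]
for _ in range(5000):
    seq.append(seq[-1] if random.random() < 0.9 else 1 - seq[-1])

k_hat = bic_order(seq, max_order=3)
```

Higher candidate orders always gain some log-likelihood by overfitting, but the penalty grows exponentially in k, so the estimator settles on the true order; this is the mechanism behind the consistency results the chapter surveys.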
On Joint Track Initiation and Parameter Estimation under Measurement Origin Uncertainty
Cited by 1 (0 self)
The problem of joint detection and estimation for track initiation under measurement origin uncertainty is studied. The two well-known approaches, namely the maximum likelihood estimator with probabilistic data association (ML-PDA) and multiple hypotheses tracking (MHT) via multi-frame assignment, are characterized as special cases of the generalized likelihood ratio test (GLRT), and their performance limits are indicated. A new detection scheme based on optimal gating is proposed, and the associated parameter estimation scheme is modified within the ML-PDA framework. A simplified example shows the effectiveness of the new algorithm in detection performance under heavy clutter. Extension of the results to state estimation with measurement origin uncertainty is also discussed, with emphasis on joint detection and recursive state estimation. Manuscript received September 12, 2003; revised February 12, 2004; released for publication March 5, 2004.