Results 1–10 of 127
Universal Discrete Denoising: Known Channel
 IEEE Trans. Inform. Theory
, 2003
Abstract

Cited by 79 (32 self)
A discrete denoising algorithm estimates the input sequence to a discrete memoryless channel (DMC) based on the observation of the entire output sequence. For the case in which the DMC is known and the quality of the reconstruction is evaluated with a given single-letter fidelity criterion, we propose a discrete denoising algorithm that does not assume knowledge of statistical properties of the input sequence. Yet, the algorithm is universal in the sense of asymptotically performing as well as the optimum denoiser that knows the input sequence distribution, which is only assumed to be stationary and ergodic. Moreover, the algorithm is universal also in a semi-stochastic setting, in which the input is an individual sequence, and the randomness is due solely to the channel noise.
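Under the assumptions in this abstract (known channel matrix, single-letter loss), the two-pass context-counting rule at the heart of such a denoiser can be sketched as follows. This is a minimal sketch, not the paper's implementation; the names `dude`, `Pi` (channel matrix, rows indexed by clean symbol), and `Lam` (loss matrix) are illustrative:

```python
import numpy as np
from collections import defaultdict

def dude(z, Pi, Lam, k):
    """Two-pass discrete denoiser sketch for a known DMC.

    Pass 1 counts the center symbol of every two-sided k-context in the
    noisy sequence z; pass 2 applies, at each position, the Bayes rule
    with the channel inverted out of the empirical context counts.
    """
    n, A = len(z), Pi.shape[0]
    Pi_inv_T = np.linalg.inv(Pi).T
    counts = defaultdict(lambda: np.zeros(A))
    for i in range(k, n - k):
        ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + k + 1]))
        counts[ctx][z[i]] += 1
    xhat = list(z)
    for i in range(k, n - k):
        ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + k + 1]))
        m = counts[ctx]
        # score each candidate reconstruction a: m^T Pi^{-T} (loss_col * channel_col)
        scores = [m @ Pi_inv_T @ (Lam[:, a] * Pi[:, z[i]]) for a in range(A)]
        xhat[i] = int(np.argmin(scores))
    return xhat
```

For a binary symmetric channel with Hamming loss, an isolated flip inside a long run is corrected, since the context counts make the channel-inverted posterior favor the run symbol.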
Simulation-based computation of information rates for channels with memory
 IEEE TRANS. INFORM. THEORY
, 2006
Abstract

Cited by 54 (11 self)
The information rate of finite-state source/channel models can be accurately estimated by sampling both a long channel input sequence and the corresponding channel output sequence, followed by a forward sum–product recursion on the joint source/channel trellis. This method is extended to compute upper and lower bounds on the information rate of very general channels with memory by means of finite-state approximations. Further upper and lower bounds can be computed by reduced-state methods.
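The forward sum–product recursion referred to above amounts to accumulating -(1/n) log2 p(y^n) under the model, with per-step scaling to avoid underflow. A minimal sketch for a two-state hidden-Markov model of the output (function and parameter names are mine, not the paper's):

```python
import numpy as np

def neg_log_lik_rate(y, P, Pi):
    """-(1/n) log2 p(y^n) via a scaled forward sum-product recursion.

    P  : state transition matrix of the source/channel model
    Pi : emission matrix, Pi[s, obs] = p(obs | state s)
    """
    S = P.shape[0]
    alpha = np.full(S, 1.0 / S)            # uniform start (stationary for symmetric P)
    log2_p = 0.0
    for obs in y:
        alpha = (alpha @ P) * Pi[:, obs]   # predict, then weight by emission
        c = alpha.sum()                    # scaling factor = p(y_t | y^{t-1})
        log2_p += np.log2(c)
        alpha /= c
    return -log2_p / len(y)
```

Run on a long sampled output sequence, this estimates the output entropy rate h(Y); for a memoryless channel, subtracting H(Y|X) then gives an information-rate estimate, and for channels with memory the same recursion runs on the joint source/channel trellis as the abstract describes.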
Pairwise Markov chains
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2003
Abstract

Cited by 48 (25 self)
The restoration of a hidden process X from an observed process Y is often performed in the framework of hidden Markov chains (HMC). HMC have recently been generalized to triplet Markov chains (TMC). In the TMC model one introduces a third random chain U and assumes that the triplet T = (X, U, Y) is a Markov chain (MC). TMC generalize HMC but still enable the development of efficient Bayesian algorithms for restoring X from Y. This paper lists some recent results concerning TMC; in particular, we recall how TMC can be used to model hidden semi-Markov chains or deal with non-stationary HMC.
Identifiability of parameters in latent structure models with many observed variables
 Ann. Statist.
, 2009
Abstract

Cited by 21 (4 self)
While hidden class models of various types arise in many statistical applications, it is often difficult to establish the identifiability of their parameters. Focusing on models in which there is some structure of independence of some of the observed variables conditioned on hidden ones, we demonstrate a general approach for establishing identifiability utilizing algebraic arguments. A theorem of J. Kruskal for a simple latent-class model with finite state space lies at the core of our results, though we apply it to a diverse set of models. These include mixtures of both finite and nonparametric product distributions, hidden Markov models, and random graph mixture models, and lead to a number of new results and improvements to old ones. In the parametric setting, this approach indicates that for such models the classical definition of identifiability is typically too strong. Instead generic identifiability holds, which implies that the set of non-identifiable parameters has measure zero, so that parameter inference is still meaningful. In particular, this sheds light on the properties of finite mixtures of Bernoulli products, which have been used for decades despite being known to have non-identifiable parameters. In the nonparametric setting, we again obtain identifiability only when certain restrictions are placed on the distributions that are mixed, but we explicitly describe the conditions.
Hybrid Markov/semi-Markov chains
 Computational Statistics and Data Analysis
, 2005
Abstract

Cited by 15 (4 self)
Models that combine Markovian states with implicit geometric state occupancy distributions and semi-Markovian states with explicit state occupancy distributions are investigated. This type of model retains the flexibility of hidden semi-Markov chains for the modeling of short or medium size homogeneous zones along sequences but also enables the modeling of long zones with Markovian states. The forward-backward algorithm, which in particular enables efficient implementation of the E-step of the EM algorithm, and the Viterbi algorithm for the restoration of the most likely state sequence are derived. It is also shown that macro-states, i.e. series-parallel networks of states with common observation distribution, are not a valid alternative to semi-Markovian states but may be useful at a more macroscopic level to combine Markovian states with semi-Markovian states. This statistical modeling approach is illustrated by the analysis of branching and flowering patterns in plants.
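For the plain hidden-Markov-chain special case, the Viterbi restoration step mentioned above reduces to the standard log-domain dynamic program; a sketch follows (the semi-Markovian extension with explicit occupancy distributions is more involved and is not shown, and the names used here are mine):

```python
import numpy as np

def viterbi(y, P, Pi, init):
    """Most likely hidden state sequence for an HMM, in the log domain.

    P    : state transition matrix
    Pi   : emission matrix, Pi[s, obs] = p(obs | state s)
    init : initial state distribution
    """
    n, S = len(y), P.shape[0]
    delta = np.log(init) + np.log(Pi[:, y[0]])
    back = np.zeros((n, S), dtype=int)
    for t in range(1, n):
        scores = delta[:, None] + np.log(P)    # scores[i, j]: best path ending in j via i
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(Pi[:, y[t]])
    path = [int(delta.argmax())]               # backtrack from the best final state
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With sticky transitions and informative emissions, the decoded path follows the homogeneous zones of the observations rather than chasing individual symbols.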
On the Optimality of Symbol by Symbol Filtering and Denoising
, 2003
Abstract

Cited by 14 (3 self)
We consider the problem of optimally recovering a finite-alphabet discrete-time stochastic process {X_t} from its noise-corrupted observation process {Z_t}. In general, the optimal estimate of X_t will depend on all the components of {Z_t} on which it can be based. We characterize nontrivial situations (i.e., beyond the case where (X_t, Z_t) are independent) for which optimum performance is attained using "symbol by symbol" operations.
Schemes for Bi-Directional Modeling of Discrete Stationary Sources
, 2005
Abstract

Cited by 14 (9 self)
Adaptive models are developed to deal with bidirectional modeling of unknown discrete stationary sources, which can be generally applied to statistical inference problems such as noncausal universal discrete denoising that exploits bidirectional dependencies. Efficient algorithms for constructing those models are developed and implemented. Denoising is a primary focus of the application of those models, and we compare their performance to that of the DUDE algorithm [1] for universal discrete denoising.
Asymptotics of the entropy rate for a hidden Markov process
 J. Stat. Phys
, 2005
Abstract

Cited by 11 (1 self)
We calculate the Shannon entropy rate of a binary hidden Markov process (HMP), of given transition rate and noise ɛ (emission), as a series expansion in ɛ. The first two orders are calculated exactly. We then evaluate, for finite histories, simple upper bounds of Cover and Thomas. Surprisingly, we find that for a fixed order k and history of n steps, the bounds become independent of n for large enough n. This observation is the basis of a conjecture that the upper bound obtained for n ≥ (k + 3)/2 gives the exact entropy rate for any desired order k of ɛ.

1. Introduction and Statement of Results. Let X = {X_n}_{n≥1} be a first-order stationary Markov process over a binary alphabet, with a symmetric transition matrix P ≡ P_ab given by P_00 = P_11 = p = 1 − P_01 = 1 − P_10, where P_ab = Pr(X_n = b | X_{n−1} = a), ∀a, b ∈ {0, 1}. Consider also a Bernoulli
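The Cover and Thomas upper bounds evaluated in this abstract are the conditional block entropies H(Y_n | Y^{n-1}) = H(Y^n) − H(Y^{n−1}), which form a decreasing sequence converging to the entropy rate. For small n they can be computed by brute-force enumeration; a sketch for a binary HMP of the kind described above (names are mine, and the enumeration is exponential in n, so this is only for small histories):

```python
import itertools
import numpy as np

def seq_prob(y, P, Pi, init):
    """p(y^n) for an HMP via the forward recursion."""
    alpha = init * Pi[:, y[0]]
    for obs in y[1:]:
        alpha = (alpha @ P) * Pi[:, obs]
    return alpha.sum()

def block_entropy(n, P, Pi, init):
    """H(Y^n) in bits, by enumerating all binary length-n sequences."""
    H = 0.0
    for y in itertools.product([0, 1], repeat=n):
        p = seq_prob(y, P, Pi, init)
        if p > 0:
            H -= p * np.log2(p)
    return H

def upper_bound(n, P, Pi, init):
    """H(Y_n | Y^{n-1}): upper bound on the entropy rate, decreasing in n."""
    return block_entropy(n, P, Pi, init) - block_entropy(n - 1, P, Pi, init)
```

In the noiseless limit ɛ = 0, Y is itself Markov and the bound is already exact at n = 2, consistent with the conjecture's claim of exactness at finite n.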
Relaxed statistical model for speech enhancement and a priori SNR estimation
 IEEE Trans. Speech Audio Process
, 2005
Abstract

Cited by 11 (2 self)
In this paper, we propose a statistical model for speech enhancement that takes into account the time-correlation between successive speech spectral components. It retains the simplicity associated with the Gaussian statistical model, and enables the extension of existing algorithms to noncausal estimation. The sequence of speech spectral variances is a random process, which is generally correlated with the sequence of speech spectral magnitudes. Causal and noncausal estimators for the a priori SNR are derived in agreement with the model assumptions and the estimation of the speech spectral components. We show that a special case of the causal estimator degenerates to a "decision-directed" estimator with a time-varying, frequency-dependent weighting factor. Experimental results demonstrate the improved performance of the proposed algorithms. Index Terms: parameter estimation, sequential estimation, spectral analysis, speech enhancement, time-frequency analysis.
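The classical "decision-directed" estimator that the causal special case degenerates to mixes the previous frame's clean-amplitude estimate with the instantaneous SNR. A minimal sketch, assuming a fixed weighting factor and a Wiener gain standing in for the spectral amplitude estimator (both simplifications; the paper's point is precisely that the optimal weight is time-varying and frequency-dependent):

```python
import numpy as np

def decision_directed_xi(Y_mag, noise_var, alpha=0.98):
    """Decision-directed a priori SNR estimate, frame by frame.

    Y_mag     : (frames, bins) noisy spectral magnitudes
    noise_var : noise variance per bin (scalar or length-bins array)
    alpha     : weighting factor (fixed here for illustration)
    """
    n_frames, n_bins = Y_mag.shape
    xi = np.zeros((n_frames, n_bins))
    A_prev = np.zeros(n_bins)                      # previous clean-amplitude estimate
    for l in range(n_frames):
        gamma = Y_mag[l] ** 2 / noise_var          # a posteriori SNR
        # weighted mix of past estimate and instantaneous (clipped) SNR excess
        xi[l] = alpha * A_prev ** 2 / noise_var + (1 - alpha) * np.maximum(gamma - 1, 0)
        gain = xi[l] / (1 + xi[l])                 # Wiener gain (stand-in estimator)
        A_prev = gain * Y_mag[l]
    return xi
```

On frames whose magnitudes sit exactly at the noise floor, both terms vanish and the estimate stays at zero, which is the behavior the max(·, 0) clipping is there to enforce.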