Results 1–10 of 255
Simulation-based computation of information rates for channels with memory
IEEE Trans. Inform. Theory, 2006
Cited by 105 (11 self)
Abstract
The information rate of finite-state source/channel models can be accurately estimated by sampling both a long channel input sequence and the corresponding channel output sequence, followed by a forward sum–product recursion on the joint source/channel trellis. This method is extended to compute upper and lower bounds on the information rate of very general channels with memory by means of finite-state approximations. Further upper and lower bounds can be computed by reduced-state methods.
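As a toy illustration of the sampling-plus-forward-recursion idea described above, the output entropy rate h(Y) of a small finite-state model can be estimated from one long sampled sequence; the information rate then follows by combining such estimates. All model parameters below are invented for the example, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state source/channel model (illustrative parameters).
A = np.array([[0.9, 0.1],    # state-transition probabilities
              [0.2, 0.8]])
B = np.array([[0.95, 0.05],  # P(output symbol | state)
              [0.10, 0.90]])
n = 50_000

# Sample a long state/output sequence from the model.
s = 0
y = np.empty(n, dtype=int)
for t in range(n):
    s = rng.choice(2, p=A[s])
    y[t] = rng.choice(2, p=B[s])

# Forward (sum-product) recursion with scaling: accumulates log p(y_1..n).
alpha = np.array([0.5, 0.5])
logp = 0.0
for t in range(n):
    alpha = (alpha @ A) * B[:, y[t]]
    c = alpha.sum()
    logp += np.log(c)
    alpha /= c

# -logp / n estimates the output entropy rate h(Y), converted to bits.
h_hat = -logp / n / np.log(2)
print(f"estimated h(Y) ~ {h_hat:.3f} bits/symbol")
```

The same recursion run on the joint source/channel trellis, with h(Y|X) handled analogously, yields the information-rate estimate of the paper.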
Universal Discrete Denoising: Known Channel
IEEE Trans. Inform. Theory, 2003
Cited by 100 (33 self)
Abstract
A discrete denoising algorithm estimates the input sequence to a discrete memoryless channel (DMC) based on the observation of the entire output sequence. For the case in which the DMC is known and the quality of the reconstruction is evaluated with a given single-letter fidelity criterion, we propose a discrete denoising algorithm that does not assume knowledge of statistical properties of the input sequence. Yet, the algorithm is universal in the sense of asymptotically performing as well as the optimum denoiser that knows the input sequence distribution, which is only assumed to be stationary and ergodic. Moreover, the algorithm is also universal in a semi-stochastic setting, in which the input is an individual sequence and the randomness is due solely to the channel noise.
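A minimal sketch of such a two-pass context-counting denoiser for a binary symmetric channel (BSC) with Hamming loss, assuming the per-context decision rule minimizes the estimated loss m^T Pi^{-1} (lambda_xhat * pi_z); the function name, parameters, and demo source are ours, not the paper's:

```python
import numpy as np

def dude_binary(z, delta, k=1):
    """Two-pass denoiser sketch for a known BSC(delta), Hamming loss,
    two-sided context of length k on each side (illustrative)."""
    z = np.asarray(z)
    n = len(z)
    Pi = np.array([[1 - delta, delta], [delta, 1 - delta]])  # channel matrix
    Pi_inv = np.linalg.inv(Pi)
    Lam = np.array([[0, 1], [1, 0]])  # Hamming loss, Lam[x, xhat]

    # Pass 1: count center symbols for each two-sided context.
    counts = {}
    for i in range(k, n - k):
        c = (tuple(z[i - k:i]), tuple(z[i + 1:i + 1 + k]))
        counts.setdefault(c, np.zeros(2))[z[i]] += 1

    # Pass 2: per position, pick the reconstruction minimizing the
    # estimated loss m^T Pi^{-1} (lam_xhat * pi_z).
    xhat = z.copy()
    for i in range(k, n - k):
        c = (tuple(z[i - k:i]), tuple(z[i + 1:i + 1 + k]))
        m = counts[c]
        pi_z = Pi[:, z[i]]
        losses = [m @ Pi_inv @ (Lam[:, a] * pi_z) for a in (0, 1)]
        xhat[i] = int(np.argmin(losses))
    return xhat

# Demo: a sticky binary Markov source corrupted by a BSC.
rng = np.random.default_rng(1)
n, delta = 30_000, 0.2
x = np.empty(n, dtype=int)
x[0] = 0
flips = rng.random(n) < 0.05              # source switches state w.p. 0.05
for t in range(1, n):
    x[t] = x[t - 1] ^ int(flips[t])
z = x ^ (rng.random(n) < delta).astype(int)  # BSC corruption
xhat = dude_binary(z, delta)
print("raw error:", np.mean(z != x), "denoised:", np.mean(xhat != x))
```

Note that nothing about the source statistics enters the algorithm; only the channel matrix and the observed sequence are used, which is the sense in which it is universal.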
Identifiability of parameters in latent structure models with many observed variables
Ann. Statist., 2009
Cited by 79 (11 self)
Abstract
While hidden class models of various types arise in many statistical applications, it is often difficult to establish the identifiability of their parameters. Focusing on models in which there is some structure of independence of some of the observed variables conditioned on hidden ones, we demonstrate a general approach for establishing identifiability using algebraic arguments. A theorem of J. Kruskal for a simple latent-class model with finite state space lies at the core of our results, though we apply it to a diverse set of models. These include mixtures of both finite and nonparametric product distributions, hidden Markov models, and random graph mixture models, and lead to a number of new results and improvements to old ones. In the parametric setting, this approach indicates that for such models the classical definition of identifiability is typically too strong. Instead, generic identifiability holds, which implies that the set of non-identifiable parameters has measure zero, so that parameter inference is still meaningful. In particular, this sheds light on the properties of finite mixtures of Bernoulli products, which have been used for decades despite being known to have non-identifiable parameters. In the nonparametric setting, we again obtain identifiability only when certain restrictions are placed on the distributions that are mixed, but we explicitly describe the conditions.
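The uniqueness result invoked here is Kruskal's theorem on three-way arrays; a standard statement (our paraphrase), where the Kruskal rank kappa_M of a matrix M is the largest k such that every k columns of M are linearly independent:

```latex
T \;=\; \sum_{j=1}^{r} a_j \otimes b_j \otimes c_j ,
\qquad
\kappa_A + \kappa_B + \kappa_C \;\ge\; 2r + 2
\;\Longrightarrow\;
\text{the decomposition is unique up to column permutation and rescaling,}
```

where A, B, C are the matrices whose columns are the vectors a_j, b_j, c_j. In a latent-class model the three factors correspond to the conditional distributions of three (blocks of) observed variables given the hidden class, which is how the theorem yields identifiability.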
Audio source separation with a single sensor
IEEE Trans. on Audio, Speech and Language Processing, 2006
Cited by 60 (4 self)
Pairwise Markov chains
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003
Cited by 60 (28 self)
Abstract
The restoration of a hidden process X from an observed process Y is often performed in the framework of hidden Markov chains (HMC). HMC have recently been generalized to triplet Markov chains (TMC). In the TMC model one introduces a third random chain U and assumes that the triplet T = (X, U, Y) is a Markov chain (MC). TMC generalize HMC but still enable the development of efficient Bayesian algorithms for restoring X from Y. This paper lists some recent results concerning TMC; in particular, we recall how TMC can be used to model hidden semi-Markov chains or to deal with non-stationary HMC.
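For the pairwise special case T = (X, Y) of the title, where only the pair is assumed Markov, Bayesian restoration reduces to a forward-backward pass on the pair process. A toy sketch under that assumption, with invented binary kernels (names and values are ours):

```python
import numpy as np

def pmc_posterior(y, p0, P):
    """Posterior marginals p(x_t | y_1..n) when the PAIR (X, Y) is Markov
    with kernel P[x, yo, x2, y2] = p(x_{t+1}=x2, y_{t+1}=y2 | x_t=x, y_t=yo)
    and initial joint distribution p0[x, y]. Binary X and Y, illustrative."""
    n, nx = len(y), p0.shape[0]
    alpha = np.zeros((n, nx))           # alpha[t, x] prop. to p(x_t, y_1..t)
    alpha[0] = p0[:, y[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, n):
        alpha[t] = alpha[t - 1] @ P[:, y[t - 1], :, y[t]]
        alpha[t] /= alpha[t].sum()
    beta = np.ones((n, nx))             # beta[t, x] prop. to p(y_{t+1..n} | x_t, y_t)
    for t in range(n - 2, -1, -1):
        beta[t] = P[:, y[t], :, y[t + 1]] @ beta[t + 1]
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

# A classical HMC written as a pairwise Markov chain (sanity check case):
A = np.array([[0.9, 0.1], [0.1, 0.9]])   # hidden transitions
B = np.array([[0.8, 0.2], [0.2, 0.8]])   # emissions
P = np.einsum('xz,zy->xzy', A, B)        # p(x2, y2 | x): no dependence on y_t
P = np.broadcast_to(P[:, None, :, :], (2, 2, 2, 2))
p0 = 0.5 * B                             # p0[x, y] = pi[x] * B[x, y]
post = pmc_posterior([0, 0, 0, 1, 1], p0, P)
print(post[:, 0])                        # p(x_t = 0 | y)
```

The point of the PMC/TMC generalization is that the same two recursions remain valid when the kernel P does depend on the previous observation, a case an HMC cannot express.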
What HMMs can do
2002
Cited by 47 (5 self)
Abstract
Since their inception over thirty years ago, hidden Markov models (HMMs) have become the predominant methodology for automatic speech recognition (ASR) systems; today, most state-of-the-art speech systems are HMM-based. There have been a number of ways to explain HMMs and to list their capabilities, each with both advantages and disadvantages. In an effort to better understand what HMMs can do, this tutorial analyzes HMMs by exploring a novel way in which an HMM can be defined, namely in terms of random variables and conditional independence assumptions. We prefer this definition as it allows us to reason more thoroughly about the capabilities of HMMs. In particular, it is possible to deduce that there are, in theory at least, no limitations to the class of probability distributions representable by HMMs. This paper concludes that, in the search for a model to supersede the HMM for ASR, rather than trying to correct for HMM limitations in the general case, new models should be sought on the basis of their potential for better parsimony, computational requirements, and noise insensitivity.
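The random-variables view amounts to exactly two conditional-independence assumptions: the hidden state depends on the past only through the previous state, and each observation depends on everything else only through the current state. A sketch that samples directly from those two assumptions (parameter names and values are ours):

```python
import numpy as np

def sample_hmm(A, B, pi, n, rng):
    """Sample a state/observation pair (q, x) of length n from an HMM
    defined by transition matrix A, emission matrix B, and initial
    distribution pi; q_t | past depends only on q_{t-1}, and
    x_t | everything else depends only on q_t."""
    q = np.empty(n, dtype=int)
    x = np.empty(n, dtype=int)
    q[0] = rng.choice(len(pi), p=pi)
    x[0] = rng.choice(B.shape[1], p=B[q[0]])
    for t in range(1, n):
        q[t] = rng.choice(A.shape[1], p=A[q[t - 1]])  # Markov hidden chain
        x[t] = rng.choice(B.shape[1], p=B[q[t]])      # emission given state
    return q, x

# Toy demo with symmetric transitions and informative emissions.
A = np.array([[0.5, 0.5], [0.5, 0.5]])
B = np.array([[0.9, 0.1], [0.1, 0.9]])
pi = np.array([0.5, 0.5])
q, x = sample_hmm(A, B, pi, 10_000, np.random.default_rng(0))
```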
Optimal error exponents in hidden Markov models order estimation
IEEE Trans. Inf. Theory, 2003
Cited by 28 (6 self)
Abstract
We consider the estimation of the number of hidden states (the order) of a discrete-time finite-alphabet hidden Markov model (HMM). The estimators we investigate are related to code-based order estimators: penalized maximum-likelihood (ML) estimators and penalized versions of the mixture estimator introduced by Liu and Narayan. We prove strong consistency of those estimators without assuming any a priori upper bound on the order, and with smaller penalties than in previous works. We prove a version of Stein's lemma for HMM order estimation and derive an upper bound on underestimation exponents. We then prove that this upper bound can be achieved by the penalized ML estimator and by the penalized mixture estimator. The proof of the latter result gets around the elusive nature of the ML in HMMs by resorting to large-deviation techniques for empirical processes. Finally, we prove that for any consistent HMM order estimator, for most HMMs, the overestimation exponent is null.
Index Terms: composite hypothesis testing, error exponents, generalized likelihood ratio testing, hidden Markov model (HMM), large deviations, order estimation, Stein's lemma.
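The penalized-likelihood side of the comparison has the following generic shape. This is a BIC-style sketch of our own: the penalty constant and parameter count are illustrative, and computing the maximized log-likelihoods themselves (e.g. by EM) is the hard part the paper's analysis addresses:

```python
import numpy as np

def select_order(logliks, n, penalty_per_param):
    """Pick the HMM order minimizing -loglik(k) + pen(k, n), where
    logliks[k-1] is the maximized log-likelihood for a k-state HMM,
    computed elsewhere, and n is the sample size. The penalty form
    below (constant * dim * log n) is illustrative only."""
    best_k, best_score = None, np.inf
    for i, ll in enumerate(logliks):
        k = i + 1
        # Free parameters of a k-state HMM over an alphabet of size m:
        # k*(k-1) transition + k*(m-1) emission parameters; m = 2 here.
        m = 2
        dim = k * (k - 1) + k * (m - 1)
        score = -ll + penalty_per_param * dim * np.log(n)
        if score < best_score:
            best_k, best_score = k, score
    return best_k
```

The paper's contribution is precisely about how small `penalty_per_param` may be made while keeping strong consistency, and at what exponential rate under- and over-estimation errors then vanish.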
Relaxed statistical model for speech enhancement and a priori SNR estimation
IEEE Trans. Speech Audio Process., 2005
Cited by 26 (5 self)
Abstract
In this paper, we propose a statistical model for speech enhancement that takes into account the time-correlation between successive speech spectral components. It retains the simplicity associated with the Gaussian statistical model and enables the extension of existing algorithms to noncausal estimation. The sequence of speech spectral variances is a random process, which is generally correlated with the sequence of speech spectral magnitudes. Causal and noncausal estimators for the a priori SNR are derived in agreement with the model assumptions and the estimation of the speech spectral components. We show that a special case of the causal estimator degenerates to a "decision-directed" estimator with a time-varying, frequency-dependent weighting factor. Experimental results demonstrate the improved performance of the proposed algorithms.
Index Terms: parameter estimation, sequential estimation, spectral analysis, speech enhancement, time-frequency analysis.
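For reference, the classical decision-directed estimator that the causal special case reduces to has the form below; here we show it with a fixed weighting factor alpha for illustration, whereas the paper's point is that the factor becomes time-varying and frequency-dependent:

```python
import numpy as np

def decision_directed_snr(A_prev, gamma, noise_var, alpha=0.98):
    """Decision-directed a priori SNR estimate (classical form):
        xi_hat = alpha * |A_{l-1}|^2 / sigma_n^2
                 + (1 - alpha) * max(gamma_l - 1, 0)
    A_prev: previous-frame speech spectral amplitude estimates per bin,
    gamma: current a posteriori SNR per bin, noise_var: noise variance.
    alpha is a fixed smoothing factor here, for illustration only."""
    return alpha * (np.abs(A_prev) ** 2) / noise_var \
        + (1 - alpha) * np.maximum(gamma - 1.0, 0.0)
```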
Largescale multiple testing under dependence
J. Roy. Stat. Soc. B, 2009
Cited by 25 (2 self)
Abstract
The paper considers the problem of multiple testing under dependence in a compound decision theoretic framework. The observed data are assumed to be generated from an underlying two-state hidden Markov model. We propose oracle and asymptotically optimal data-driven procedures that aim to minimize the false non-discovery rate (FNR) subject to a constraint on the false discovery rate (FDR). It is shown that the performance of a multiple-testing procedure can be substantially improved by adaptively exploiting the dependence structure among hypotheses, and hence conventional FDR procedures that ignore this structural information are inefficient. Both the theoretical properties and the numerical performance of the proposed procedures are investigated. It is shown that the proposed procedures control FDR at the desired level, enjoy certain optimality properties, and are especially powerful in identifying clustered non-null cases. The new procedure is applied to an influenza-like illness surveillance study for detecting the timing of epidemic periods.
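A sketch of the thresholding step of such an oracle procedure, assuming the posterior null probabilities (a local index of significance, computed elsewhere, e.g. by forward-backward on the fitted HMM) are already available; the function name is ours:

```python
import numpy as np

def lis_procedure(lis, alpha):
    """Given lis[i] = P(hypothesis i is null | data), reject the k
    hypotheses with the smallest values, where k is the largest count
    for which the running mean of the sorted values (an estimate of
    the FDR among the rejected set) stays at or below alpha."""
    lis = np.asarray(lis)
    order = np.argsort(lis)
    running_mean = np.cumsum(lis[order]) / np.arange(1, len(lis) + 1)
    k = int(np.sum(running_mean <= alpha))
    reject = np.zeros(len(lis), dtype=bool)
    reject[order[:k]] = True
    return reject
```

Because the posterior probabilities come from the hidden Markov model, neighboring hypotheses share information, which is how the procedure gains power on clustered non-null cases relative to p-value-based FDR rules.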