Results 21–30 of 589
Multichannel Blind Identification: From Subspace to Maximum Likelihood Methods
Proc. IEEE, 1998
Abstract

Cited by 79 (2 self)
This paper reviews developments in blind channel identification and estimation within the estimation-theoretic framework. We have paid special attention to the issue of identifiability, which is at the center of all blind channel estimation problems. Various existing algorithms are classified into moment-based and maximum likelihood (ML) methods. We further divide these algorithms based on the modeling of the input signal. If the input is assumed to be random with prescribed statistics (or distributions), the corresponding blind channel estimation schemes are considered statistical. On the other hand, if the source has no statistical description, or if the source is random but its statistical properties are not exploited, the corresponding estimation algorithms are classified as deterministic. Fig. 2 shows a map of the different classes of algorithms and the organization of the paper.
Decoding Error-Correcting Codes via Linear Programming
2003
Abstract

Cited by 79 (6 self)
Abstract. Error-correcting codes are fundamental tools used to transmit digital information over unreliable channels. Their study goes back to the work of Hamming [Ham50] and Shannon [Sha48], who used them as the basis for the field of information theory. The problem of decoding the original information up to the full error-correcting potential of the system is often very complex, especially for modern codes that approach the theoretical limits of the communication channel. In this thesis we investigate the application of linear programming (LP) relaxation to the problem of decoding an error-correcting code. Linear programming relaxation is a standard technique in approximation algorithms and operations research, and is central to the study of efficient algorithms to find good (albeit suboptimal) solutions to very difficult optimization problems. Our new "LP decoders" have tight combinatorial characterizations of decoding success that can be used to analyze error-correcting performance. Furthermore, LP decoders have the desirable (and rare) property that whenever they output a result, it is guaranteed to be the optimal result: the most likely (ML) information sent over the ...
Rate-Distortion Optimized Mode Selection for Very Low Bit Rate Video Coding and the Emerging H.263 Standard
1995
Abstract

Cited by 76 (12 self)
This paper addresses the problem of encoder optimization in a macroblock-based multimode video compression system. An efficient solution is proposed in which, for a given image region, the optimum combination of macroblock modes and the associated mode parameters are jointly selected so as to minimize the overall distortion for a given bit-rate budget. Conditions for optimizing the encoder operation are derived within a rate-constrained product code framework using a Lagrangian formulation. The instantaneous rate of the encoder is controlled by a single Lagrange multiplier that makes the method amenable to mobile wireless networks with time-varying capacity. When rate and distortion dependencies are introduced between adjacent blocks (as is the case when the motion vectors are differentially encoded and/or overlapped block motion compensation is employed), the ensuing encoder complexity is surmounted using dynamic programming. Due to the generic nature of the algorithm, it can be succ...
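The Lagrangian mode selection described in the abstract above can be sketched minimally: each candidate macroblock mode is scored as J = D + λ·R, and a single multiplier λ trades distortion against rate. The mode table and cost values below are hypothetical, not taken from the paper.

```python
def select_mode(modes, lam):
    """Return the mode minimizing the Lagrangian cost D + lam * R."""
    return min(modes, key=lambda m: m["D"] + lam * m["R"])

# Hypothetical per-mode rate (bits) and distortion values for one macroblock.
macroblock_modes = [
    {"name": "SKIP",  "R": 1,  "D": 90.0},
    {"name": "INTER", "R": 24, "D": 12.0},
    {"name": "INTRA", "R": 60, "D": 4.0},
]

# A small lambda weights distortion heavily (spend bits); a large lambda
# weights rate heavily (save bits) -- this is how one multiplier controls
# the instantaneous rate of the encoder.
print(select_mode(macroblock_modes, 0.1)["name"])   # -> INTRA
print(select_mode(macroblock_modes, 10.0)["name"])  # -> SKIP
```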
Analysis of Low Density Codes and Improved Designs Using Irregular Graphs
1998
Abstract

Cited by 75 (12 self)
In [6], Gallager introduces a family of codes based on sparse bipartite graphs, which he calls low-density parity-check codes. He suggests a natural decoding algorithm for these codes, and proves a good bound on the fraction of errors that can be corrected. As the codes that Gallager builds are derived from regular graphs, we refer to them as regular codes. Following the general approach introduced in [7] for the design and analysis of erasure codes, we consider error-correcting codes based on random irregular bipartite graphs, which we call irregular codes. We introduce tools based on linear programming for designing linear-time irregular codes with better error-correcting capabilities than possible with regular codes. For example, the decoding algorithm for the rate-1/2 regular codes of Gallager can provably correct up to 5.17% errors asymptotically, whereas we have found irregular codes for which our decoding algorithm can provably correct up to 6.27% errors asymptotically. We incl...
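The kind of decoding algorithm Gallager suggests can be illustrated with a hard-decision bit-flipping sketch: repeatedly flip the bit involved in the most unsatisfied parity checks. The tiny parity-check matrix and received word here are illustrative only; a real low-density code would be far larger and sparser.

```python
# Illustrative 3x6 parity-check matrix (not from the paper).
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def bit_flip_decode(H, word, max_iters=10):
    """Hard-decision bit-flipping decoding on parity-check matrix H."""
    word = list(word)
    for _ in range(max_iters):
        syndrome = [sum(h[j] * word[j] for j in range(len(word))) % 2 for h in H]
        if not any(syndrome):
            return word  # all parity checks satisfied
        # For each bit, count the unsatisfied checks it participates in.
        counts = [sum(s for h, s in zip(H, syndrome) if h[j])
                  for j in range(len(word))]
        # Flip the bit involved in the most unsatisfied checks.
        j = max(range(len(word)), key=lambda j: counts[j])
        word[j] ^= 1
    return word

# Flip one bit of the all-zero codeword; decoding recovers it.
print(bit_flip_decode(H, [0, 1, 0, 0, 0, 0]))  # -> [0, 0, 0, 0, 0, 0]
```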
Sphinx4: A flexible open source framework for speech recognition
2004
Abstract

Cited by 71 (0 self)
Sphinx4 is a flexible, modular and pluggable framework to help foster new innovations in the core research of hidden Markov model (HMM) speech recognition systems. The design of Sphinx4 is based on patterns that have emerged from the design of past systems as well as new requirements based on areas that researchers currently want to explore. To exercise this framework, and to provide researchers with a "research-ready" system, Sphinx4 also includes several implementations of both simple and state-of-the-art techniques. The framework and the implementations are all freely available via open source.
Coupled hidden Markov models for modeling interacting processes
1997
Abstract

Cited by 62 (3 self)
We present methods for coupling hidden Markov models (HMMs) to model systems of multiple interacting processes. The resulting models have multiple state variables that are temporally coupled via matrices of conditional probabilities. We introduce a deterministic O(T(CN)^2) approximation for maximum a posteriori (MAP) state estimation which enables fast classification and parameter estimation via expectation maximization. An "N-heads" dynamic programming algorithm samples from the highest probability paths through a compact state trellis, minimizing an upper bound on the cross entropy with the full (combinatoric) dynamic programming problem. The complexity is O(T(CN)^2) for C chains of N states apiece observing T data points, compared with O(TN^(2C)) for naive (Cartesian product), exact (state clustering), and stochastic (Monte Carlo) methods applied to the same inference problem. In several experiments examining training time, model likelihoods, classification accuracy, and ro...
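The complexity comparison quoted above can be made concrete with a small sketch that just evaluates the two cost formulas; the T, C, and N values are arbitrary examples, not from the paper.

```python
def nheads_cost(T, C, N):
    """O(T(CN)^2): the 'N-heads' approximation for C coupled chains."""
    return T * (C * N) ** 2

def exact_cost(T, C, N):
    """O(TN^(2C)): exact inference on the Cartesian-product state space."""
    return T * N ** (2 * C)

# The approximation grows polynomially in C while exact inference
# grows exponentially in C.
T, N = 100, 5
for C in (2, 3, 4):
    print(C, nheads_cost(T, C, N), exact_cost(T, C, N))
```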
Joint Selection of Source and Channel Rate for VBR Video Transmission under ATM Policing Constraints
IEEE Journal on Selected Areas in Communications, 1997
Abstract

Cited by 57 (4 self)
VBR transmission of video over ATM networks has long been said to provide substantial benefits, both in terms of network utilization and video quality, when compared with conventional CBR approaches. However, realistic VBR transmission environments will certainly impose constraints on the rate that each source can submit to the network.
Tree Consistency and Bounds on the Performance of the Max-Product Algorithm and Its Generalizations
2002
Abstract

Cited by 55 (5 self)
Finding the maximum a posteriori (MAP) assignment of a discrete-state distribution specified by a graphical model requires solving an integer program. The max-product algorithm, also known as the max-plus or min-sum algorithm, is an iterative method for (approximately) solving such a problem on graphs with cycles.
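On a cycle-free graph the max-product algorithm computes the exact MAP assignment; a minimal sketch in min-sum form (costs are negative log potentials) on a three-variable chain, where it reduces to Viterbi-style dynamic programming. The unary and pairwise costs are illustrative, not from the paper.

```python
# Unary costs for 3 binary variables, plus a pairwise cost that
# penalizes unequal neighboring states (illustrative values).
unary = [[0.0, 1.0], [2.0, 0.0], [0.5, 0.0]]

def pairwise(a, b):
    return 0.0 if a == b else 1.5

def chain_map(unary, pairwise):
    """Min-sum message passing on a chain: exact MAP assignment."""
    n, k = len(unary), len(unary[0])
    msg = [unary[0][:]]   # msg[i][s]: best cost of a prefix ending x_i = s
    back = []             # backpointers for recovering the assignment
    for i in range(1, n):
        cur, arg = [], []
        for s in range(k):
            best = min(range(k), key=lambda t: msg[-1][t] + pairwise(t, s))
            cur.append(msg[-1][best] + pairwise(best, s) + unary[i][s])
            arg.append(best)
        msg.append(cur)
        back.append(arg)
    # Backtrack from the best final state.
    s = min(range(k), key=lambda t: msg[-1][t])
    assignment = [s]
    for arg in reversed(back):
        s = arg[s]
        assignment.append(s)
    return assignment[::-1]

print(chain_map(unary, pairwise))  # -> [1, 1, 1]
```

With these costs, the pairwise agreement term pulls the first variable away from its locally cheapest state, which is exactly the global-versus-local trade-off the MAP integer program encodes.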
Subword-based Approaches for Spoken Document Retrieval
2000
Abstract

Cited by 55 (0 self)
This thesis explores approaches to the problem of spoken document retrieval (SDR), which is the task of automatically indexing and then retrieving relevant items from a large collection of recorded speech messages in response to a user-specified natural language text query. We investigate the use of subword unit representations for SDR as an alternative to words generated by either keyword spotting or continuous speech recognition. Our investigation is motivated by the observation that word-based retrieval approaches face the problem of either having to know the keywords to search for a priori, or requiring a very large recognition vocabulary in order to cover the contents of growing and diverse message collections. The use of subword units in the recognizer constrains the size of the vocabulary needed to cover the language; and the use of subword units as indexing terms allows for the detection of new user-specified query terms during retrieval. Four ...
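A hedged sketch of the subword indexing idea described above: documents and queries are represented as overlapping phone n-grams, so a query term never seen at indexing time can still be matched against the index. The phone strings and the choice of trigrams are hypothetical, not taken from the thesis.

```python
from collections import defaultdict

def ngrams(phones, n=3):
    """Overlapping phone n-grams of a phone sequence."""
    return [tuple(phones[i:i + n]) for i in range(len(phones) - n + 1)]

def build_index(docs, n=3):
    """Inverted index: phone n-gram -> set of document ids."""
    index = defaultdict(set)
    for doc_id, phones in docs.items():
        for g in ngrams(phones, n):
            index[g].add(doc_id)
    return index

def retrieve(index, query_phones, n=3):
    """Rank documents by the number of query n-grams they contain."""
    scores = defaultdict(int)
    for g in ngrams(query_phones, n):
        for doc_id in index.get(g, ()):
            scores[doc_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

# Rough, made-up phone transcriptions of two spoken messages.
docs = {
    "msg1": ["b", "ao", "s", "t", "ax", "n"],
    "msg2": ["w", "eh", "dh", "er", "m", "ae", "p"],
}
index = build_index(docs)
print(retrieve(index, ["b", "ao", "s", "t", "ax", "n"]))  # -> ['msg1']
```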
Ultra-Summarization: A Statistical Approach to Generating Highly Condensed Non-Extractive Summaries
In SIGIR-99, 1999
Abstract

Cited by 51 (0 self)
Using current extractive summarization techniques, it is impossible to produce a coherent document summary shorter than a single sentence, or to produce a summary that conforms to particular stylistic constraints. Ideally, one would prefer to understand the document, and to generate an appropriate summary directly from the results of that understanding. Absent a comprehensive natural language understanding system, an approximation must be used. This paper presents an alternative statistical model of a summarization process, which jointly applies statistical models of the term selection and term ordering process to produce brief coherent summaries in a style learned from a training corpus.

1 Introduction

Summarization is one of the most important capabilities required in writing. Effective summarization, like effective writing, is neither easy nor innate; rather, it is a skill that is developed through instruction and practice [Hidi and Anderson, 1986; Hooper et al., 1994]. Generating...