Results 1–10 of 14
On the Optimality of Solutions of the Max-Product Belief Propagation Algorithm in Arbitrary Graphs
, 2001
Cited by 187 (15 self)
Abstract:
Graphical models, such as Bayesian networks and Markov random fields, represent statistical dependencies of variables by a graph. The max-product "belief propagation" algorithm is a local message-passing algorithm on this graph that is known to converge to a unique fixed point when the graph is a tree. Furthermore, when the graph is a tree, the assignment based on the fixed point yields the most probable a posteriori (MAP) values of the unobserved variables given the observed ones. Recently, good …
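The tree case described in this abstract can be illustrated with a minimal Python sketch. Everything below is hypothetical (a 3-node chain with made-up random potentials, not from the paper): max-product messages are passed from the leaf to the root, the assignment is decoded by backtracking, and the result is checked against brute-force MAP enumeration.

```python
import itertools
import numpy as np

# Hypothetical example: a 3-node chain x0 - x1 - x2 of binary variables.
# phi[i] are unary potentials; psi[i] couples x_i and x_{i+1}.
rng = np.random.default_rng(0)
phi = [rng.random(2) + 0.1 for _ in range(3)]       # unary potentials
psi = [rng.random((2, 2)) + 0.1 for _ in range(2)]  # pairwise potentials

# Max-product messages passed from the leaf x2 back toward x0:
# m[i][x_i] = max over x_{i+1} of psi[i][x_i, x_{i+1}] * phi[i+1][x_{i+1}] * m[i+1][x_{i+1}]
m2 = np.ones(2)
m1 = np.max(psi[1] * (phi[2] * m2)[None, :], axis=1)
m0 = np.max(psi[0] * (phi[1] * m1)[None, :], axis=1)

# Decode the root from its max-marginal, then backtrack along the chain.
x0 = int(np.argmax(phi[0] * m0))
x1 = int(np.argmax(psi[0][x0] * phi[1] * m1))
x2 = int(np.argmax(psi[1][x1] * phi[2] * m2))

def joint(x):
    """Unnormalized joint probability of a full assignment."""
    p = phi[0][x[0]] * phi[1][x[1]] * phi[2][x[2]]
    return p * psi[0][x[0], x[1]] * psi[1][x[1], x[2]]

# On a tree (here a chain) the fixed point recovers the MAP assignment.
brute = max(itertools.product([0, 1], repeat=3), key=joint)
assert (x0, x1, x2) == brute
```

The same recursion on a graph with cycles has no such guarantee, which is the gap the papers in this list study.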
Correctness of Local Probability Propagation in Graphical Models with Loops
, 2000
Cited by 180 (9 self)
Abstract:
This article analyzes the behavior of local propagation rules in graphical models with a loop.
Tree Consistency and Bounds on the Performance of the Max-Product Algorithm and Its Generalizations
, 2002
Cited by 57 (5 self)
Abstract:
Finding the maximum a posteriori (MAP) assignment of a discrete-state distribution specified by a graphical model requires solving an integer program. The max-product algorithm, also known as the max-plus or min-sum algorithm, is an iterative method for (approximately) solving such a problem on graphs with cycles.
On The Effective Weights Of Pseudocodewords For Codes Defined On Graphs With Cycles
 In Codes, Systems and Graphical Models
Cited by 52 (2 self)
Abstract:
The behavior of an iterative decoding algorithm for a code defined on a graph with cycles and a given decoding schedule is characterized by a cycle-free computation tree. The pseudocodewords of such a tree are the words that satisfy all tree constraints; pseudocodewords govern decoding performance. Wiberg [12] determined the effective weight of pseudocodewords for binary codewords on an AWGN channel. This paper extends Wiberg's formula for AWGN channels to non-binary codes, develops similar results for BSC and BEC channels, and gives upper and lower bounds on the effective weight. The 16-state tail-biting trellis of the Golay code [2] is used for examples. Although in this case no pseudocodeword is found with effective weight less than the minimum Hamming weight of the Golay code on an AWGN channel, it is shown by example that the minimum effective pseudocodeword weight can be less than the minimum codeword weight.
Turbo Factor Analysis
 In Adv. Neural Information Processing Systems 12
, 1999
Cited by 11 (0 self)
Abstract:
In this paper, we explore methods that infer independent factors, so we focus on inferring the factor means and factor variances, …
Evaluation of the Low Frame Error Rate Performance of LDPC Codes Using Importance Sampling
Cited by 8 (7 self)
Abstract: We present an importance sampling method for the evaluation of the low frame error rate (FER) performance of LDPC codes under iterative decoding. It relies on a combinatorial characterization of absorbing sets, which are the dominant cause of decoder failure in the low-FER region. The biased density in the importance sampling scheme is a mean-shifted version of the original Gaussian density, suitably centered between a codeword and a dominant absorbing set. This choice of biased density yields an unbiased estimator for the FER with a variance several orders of magnitude lower than that of the standard Monte Carlo estimator. Using this importance sampling scheme in software, we obtain good agreement with the experimental results obtained from a fast hardware emulator of the decoder.
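The mean-shifting idea behind this abstract can be shown on a toy problem. In the sketch below, all quantities are hypothetical: a scalar Gaussian tail probability P(X > t) stands in for a rare decoder-failure event, and the biased density is the same Gaussian with its mean shifted to the failure point t, weighted by the exact likelihood ratio.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
t, n = 4.0, 100_000
# Exact tail probability P(X > t) for X ~ N(0, 1), for comparison.
exact = 0.5 * math.erfc(t / math.sqrt(2))

# Standard Monte Carlo: almost no samples land in the rare-event region.
mc = np.mean(rng.standard_normal(n) > t)

# Importance sampling with a mean-shifted biased density q(x) = N(t, 1).
y = rng.standard_normal(n) + t            # samples drawn from q
w = np.exp(-t * y + t * t / 2.0)          # likelihood ratio p(y) / q(y)
is_est = np.mean((y > t) * w)             # unbiased estimator of P(X > t)

print(f"exact={exact:.3e}  plain MC={mc:.3e}  IS={is_est:.3e}")
```

With the same sample budget, the mean-shifted estimator lands within a fraction of a percent of the exact value, while plain Monte Carlo typically sees a handful of tail samples at best, illustrating the variance reduction the abstract claims.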
Graph-based iterative decoding algorithms for parity-concatenated trellis codes
 IEEE Trans. Inform. Theory
, 2001
Cited by 3 (2 self)
Abstract:
In this paper, we construct parity-concatenated trellis codes in which a trellis code is used as the inner code and a simple parity-check code is used as the outer code. From the Tanner–Wiberg–Loeliger (TWL) graph representation, several iterative decoding algorithms can be derived. However, since the graph of the parity-concatenated code contains many short cycles, the conventional min-sum and sum-product algorithms cannot achieve near-optimal decoding. After some simple modifications, we obtain near-optimal iterative decoders. The modifications include either a) introducing a normalization operation in the min-sum and sum-product algorithms or b) cutting the short cycles that arise in the iterative Viterbi algorithm (IVA). After modification, all three algorithms can achieve near-optimal performance, but the IVA has the least average complexity. We also show that asymptotically maximum-likelihood (ML) decoding and a posteriori probability (APP) decoding can be achieved using iterative decoders with only two iterations. Unfortunately, this asymptotic behavior is exhibited only when the bit-energy-to-noise ratio is above the cutoff rate. Simulation results show that with trellis shaping, iterative decoding can perform within 1.2 dB of the Shannon limit at a bit error rate (BER) of R IH S for a block size of 20 000 symbols. For a block size of 200 symbols, iterative decoding can perform within 2.1 dB of the Shannon limit.
Signal space characterization of iterative decoding
 IEEE Trans. Inform. Theory
, 2001
Cited by 2 (0 self)
Abstract: By tracing the flow of computations in the iterative decoders for low-density parity-check codes, we are able to formulate a signal-space view for a finite number of iterations in a finite-length code. On a Gaussian channel, maximum a posteriori codeword decoding (or "maximum likelihood decoding") decodes to the codeword signal that is closest to the channel output in Euclidean distance. In contrast, we show that iterative decoding decodes to the "pseudosignal" that has the highest correlation with the channel output. The set of pseudosignals corresponds to "pseudocodewords", only a vanishingly small number of which correspond to codewords. We show that some pseudocodewords cause decoding errors, but that there are also pseudocodewords that frequently correct the deleterious effects of other pseudocodewords.
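The equivalence between closest-in-Euclidean-distance and highest-correlation decoding that this abstract builds on holds whenever all candidate signals have equal energy, since ||r - s||^2 = ||r||^2 - 2<r, s> + ||s||^2 and ||s||^2 is then constant. A minimal sketch checking this (hypothetical (4,3) single-parity-check code, BPSK signaling, made-up noise level, none of it from the paper):

```python
import itertools
import numpy as np

# All length-4 binary words of even weight: the (4,3) single-parity-check code.
code = [np.array(c, dtype=float) for c in
        itertools.product([0, 1], repeat=4) if sum(c) % 2 == 0]
# BPSK map 0 -> +1, 1 -> -1: every signal has the same energy ||s||^2 = 4.
signals = [1.0 - 2.0 * c for c in code]

rng = np.random.default_rng(2)
tx = signals[3]                            # arbitrary transmitted codeword
r = tx + 0.4 * rng.standard_normal(4)      # AWGN channel output

by_distance = min(signals, key=lambda s: np.sum((r - s) ** 2))
by_correlation = max(signals, key=lambda s: np.dot(r, s))
assert np.array_equal(by_distance, by_correlation)
```

Iterative decoding, per the abstract, performs the correlation maximization over the larger set of pseudosignals rather than over the codeword signals alone.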
On the Representation of Codes in Forney Graphs
 in Codes, Graphs, and Systems, R.E. Blahut and R. Koetter (Editors)
, 2000
Cited by 2 (1 self)
Abstract:
We investigate the representation of codes in graphical models. In particular, we use the notion of a trellis formation on a Forney graph to visualize the structure of a code on a given graph. We focus on the question of whether a trellis formation contains mergeable vertices and whether the description of a code in terms of local behaviors on the Forney graph can be made smaller. Necessary and sufficient conditions for mergeability are given, leading to a polynomial-time algorithm that decides whether a given trellis formation contains mergeable vertices. One of our main tools is a duality theorem by Forney, for which we give a short proof in the context of binary codes.
Tanner graphs for group block codes and lattices: construction and complexity
 IEEE Trans. Inform. Theory
, 2001
Cited by 1 (0 self)
Abstract: We develop a Tanner graph (TG) construction for an Abelian group block code with arbitrary alphabets at different coordinates, an important application of which is the representation of the label code of a lattice. The construction is based on the modular linear constraints imposed on the code symbols by a set of generators for the dual code. As a necessary step toward the construction of such a TG, we devise an efficient algorithm for finding such a generating set. In the process, we develop a construction for lattices based on an arbitrary Abelian group block code, called generalized Construction A (GCA), and explore relationships among a group code, its GCA lattice, and their duals. We also study the problem of finding low-complexity TGs for Abelian group block codes and lattices, and derive tight lower bounds on the label-code complexity of lattices. It is shown that for many important lattices, the minimal label codes which achieve the lower bounds cannot be supported by cycle-free Tanner graphs. Index Terms—Dual code, generalized Construction A, group …
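As a sketch of the special case that GCA generalizes: binary Construction A builds a lattice from all integer vectors that reduce mod 2 to a codeword of a linear binary code. The code and box size below are illustrative choices, not from the paper; the check confirms the resulting point set is closed under addition, as a lattice must be.

```python
import itertools

# Illustrative code C: the (3,2) even-weight (single-parity-check) code.
C = {c for c in itertools.product([0, 1], repeat=3) if sum(c) % 2 == 0}

def in_lattice(x):
    """Construction A membership: x is in the lattice iff x mod 2 is a codeword."""
    return tuple(v % 2 for v in x) in C

# Since C is linear, the lattice is closed under addition; spot-check on a box.
pts = [x for x in itertools.product(range(-2, 3), repeat=3) if in_lattice(x)]
for a in pts[:20]:
    for b in pts[:20]:
        assert in_lattice(tuple(ai + bi for ai, bi in zip(a, b)))
```

Linearity of C is what makes this work: sums of codewords mod 2 are again codewords, so sums of lattice points are again lattice points. GCA, per the abstract, replaces the binary code and mod-2 reduction with an arbitrary Abelian group block code and its modular constraints.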