Results 1–10 of 118
Decoding by Linear Programming
, 2004
"... This paper considers the classical error correcting problem which is frequently discussed in coding theory. We wish to recover an input vector f ∈ Rn from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to rec ..."
Abstract

Cited by 704 (15 self)
This paper considers the classical error correcting problem which is frequently discussed in coding theory. We wish to recover an input vector f ∈ Rⁿ from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ1-minimization problem min_{g ∈ Rⁿ} ‖y − Ag‖_{ℓ1} (where ‖x‖_{ℓ1} := Σ_i |x_i|), provided that the support of the vector of errors is not too large: ‖e‖_{ℓ0} := |{i : e_i ≠ 0}| ≤ ρ·m for some ρ > 0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work [5]. Finally, underlying the success of ℓ1 is a crucial property we call the uniform uncertainty principle that we shall describe in detail.
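The ℓ1-minimization above can be recast as a linear program by introducing auxiliary variables t with −t ≤ y − Ag ≤ t. The sketch below is a minimal illustration of that reformulation (not the authors' code): it assumes numpy and scipy are available, and the Gaussian coding matrix, dimensions, and corruption level are arbitrary choices made here.

```python
# Sketch: l1 decoding min_g ||y - A g||_1 recast as a linear program.
# Illustrative parameters only; assumes numpy and scipy are installed.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 20, 60                      # input length, number of measurements
A = rng.standard_normal((m, n))    # coding matrix
f = rng.standard_normal(n)         # input vector to be recovered
e = np.zeros(m)
corrupt = rng.choice(m, size=6, replace=False)
e[corrupt] = 10.0 * rng.standard_normal(6)   # sparse, arbitrary errors
y = A @ f + e

# Variables x = (g, t); minimize sum(t) subject to -t <= y - A g <= t.
I = np.eye(m)
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[ A, -I],     #  A g - t <= y
                 [-A, -I]])    # -A g - t <= -y
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * n + [(0, None)] * m
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
g_hat = res.x[:n]
# Near-zero when exact recovery succeeds, as the abstract predicts:
print("max recovery error:", np.max(np.abs(g_hat - f)))
```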
MAP estimation via agreement on trees: Message-passing and linear programming
, 2002
"... We develop and analyze methods for computing provably optimal maximum a posteriori (MAP) configurations for a subclass of Markov random fields defined on graphs with cycles. By decomposing the original distribution into a convex combination of treestructured distributions, we obtain an upper bound ..."
Abstract

Cited by 140 (7 self)
We develop and analyze methods for computing provably optimal maximum a posteriori (MAP) configurations for a subclass of Markov random fields defined on graphs with cycles. By decomposing the original distribution into a convex combination of tree-structured distributions, we obtain an upper bound on the optimal value of the original problem (i.e., the log probability of the MAP assignment) in terms of the combined optimal values of the tree problems. We prove that this upper bound is tight if and only if all the tree distributions share an optimal configuration in common. An important implication is that any such shared configuration must also be a MAP configuration for the original distribution. Next we develop two approaches to attempting to obtain tight upper bounds: (a) a tree-relaxed linear program (LP), which is derived from the Lagrangian dual of the upper bounds; and (b) a tree-reweighted max-product message-passing algorithm that is related to but distinct from the max-product algorithm. In this way, we establish a connection between a certain LP relaxation of the mode-finding problem, and a reweighted form of the max-product (min-sum) message-passing algorithm.
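The convex-combination upper bound at the heart of this abstract can be verified by brute force on a tiny example. The sketch below is an illustration constructed for this listing, not the paper's method: the binary 3-cycle model, random potentials, and uniform 1/3 tree weights are all choices made here. Each edge of the triangle appears in 2 of the 3 spanning trees, so its potential is rescaled by 1/ρ_e = 3/2 inside each tree.

```python
# Sketch: tree-decomposition bound  max_x theta(x) <= sum_k rho_k max_x theta^{T_k}(x)
# on a binary MRF over a 3-cycle; all model parameters are illustrative.
import itertools
import numpy as np

rng = np.random.default_rng(1)
theta_i = rng.standard_normal((3, 2))            # unary potentials, binary states
edges = [(0, 1), (1, 2), (0, 2)]
theta_e = {e: rng.standard_normal((2, 2)) for e in edges}

def score(x, unary, pairwise):
    s = sum(unary[i][x[i]] for i in range(3))
    s += sum(pairwise[e][x[e[0]], x[e[1]]] for e in edges if e in pairwise)
    return s

def brute_map(unary, pairwise):
    return max(score(x, unary, pairwise)
               for x in itertools.product((0, 1), repeat=3))

map_value = brute_map(theta_i, theta_e)

# Each spanning tree drops one edge; surviving edges are rescaled by 3/2
# so the trees' potentials average back to the original model.
bound = 0.0
for dropped in edges:
    tree_pairwise = {e: 1.5 * theta_e[e] for e in edges if e != dropped}
    bound += brute_map(theta_i, tree_pairwise) / 3.0

print(f"MAP value {map_value:.4f} <= tree bound {bound:.4f}")
```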
MAP estimation via agreement on (hyper)trees: Message-passing and linear programming approaches
 IEEE Transactions on Information Theory
, 2002
"... We develop an approach for computing provably exact maximum a posteriori (MAP) configurations for a subclass of problems on graphs with cycles. By decomposing the original problem into a convex combination of treestructured problems, we obtain an upper bound on the optimal value of the original ..."
Abstract

Cited by 111 (10 self)
We develop an approach for computing provably exact maximum a posteriori (MAP) configurations for a subclass of problems on graphs with cycles. By decomposing the original problem into a convex combination of tree-structured problems, we obtain an upper bound on the optimal value of the original problem (i.e., the log probability of the MAP assignment) in terms of the combined optimal values of the tree problems. We prove that this upper bound is met with equality if and only if the tree problems share an optimal configuration in common. An important implication is that any such shared configuration must also be a MAP configuration for the original problem. Next we present and analyze two methods for attempting to obtain tight upper bounds: (a) a tree-reweighted message-passing algorithm that is related to but distinct from the max-product (min-sum) algorithm; and (b) a tree-relaxed linear program (LP), which is derived from the Lagrangian dual of the upper bounds. Finally, we discuss the conditions that govern when the relaxation is tight, in which case the MAP configuration can be obtained. The analysis described here generalizes naturally to convex combinations of hypertree-structured distributions.
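Method (b), the tree-relaxed LP, amounts to optimizing over node and edge pseudomarginals subject to normalization and marginalization constraints (the local polytope). A minimal sketch, assuming scipy's linprog and the same kind of toy binary 3-cycle model as above, follows; on a graph with cycles the optimum may be fractional, and when it is integral the relaxation is tight.

```python
# Sketch: the tree-relaxed (local-polytope) LP for a binary 3-cycle MRF.
# Variables: mu_i(s) for nodes, mu_ij(s,t) for edges; illustrative model only.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
theta_i = rng.standard_normal((3, 2))
edges = [(0, 1), (1, 2), (0, 2)]
theta_e = {e: rng.standard_normal((2, 2)) for e in edges}

def node_var(i, s):            # index of mu_i(s)
    return 2 * i + s

def edge_var(k, s, t):         # index of mu_{edges[k]}(s, t)
    return 6 + 4 * k + 2 * s + t

nvars = 6 + 4 * len(edges)
c = np.zeros(nvars)            # linprog minimizes, so negate the objective
for i in range(3):
    for s in range(2):
        c[node_var(i, s)] = -theta_i[i, s]
for k, e in enumerate(edges):
    for s in range(2):
        for t in range(2):
            c[edge_var(k, s, t)] = -theta_e[e][s, t]

A_eq, b_eq = [], []
for i in range(3):             # normalization: sum_s mu_i(s) = 1
    row = np.zeros(nvars)
    row[[node_var(i, 0), node_var(i, 1)]] = 1.0
    A_eq.append(row); b_eq.append(1.0)
for k, (i, j) in enumerate(edges):   # marginalization constraints
    for s in range(2):
        row = np.zeros(nvars)
        row[edge_var(k, s, 0)] = row[edge_var(k, s, 1)] = 1.0
        row[node_var(i, s)] = -1.0
        A_eq.append(row); b_eq.append(0.0)
    for t in range(2):
        row = np.zeros(nvars)
        row[edge_var(k, 0, t)] = row[edge_var(k, 1, t)] = 1.0
        row[node_var(j, t)] = -1.0
        A_eq.append(row); b_eq.append(0.0)

res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=[(0, 1)] * nvars, method="highs")
print("LP upper bound on the MAP value:", -res.fun)
```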
Graph-cover decoding and finite-length analysis of message-passing iterative decoding of LDPC codes
 IEEE TRANS. INFORM. THEORY
, 2005
"... The goal of the present paper is the derivation of a framework for the finitelength analysis of messagepassing iterative decoding of lowdensity paritycheck codes. To this end we introduce the concept of graphcover decoding. Whereas in maximumlikelihood decoding all codewords in a code are comp ..."
Abstract

Cited by 68 (12 self)
The goal of the present paper is the derivation of a framework for the finite-length analysis of message-passing iterative decoding of low-density parity-check codes. To this end we introduce the concept of graph-cover decoding. Whereas in maximum-likelihood decoding all codewords in a code are competing to be the best explanation of the received vector, under graph-cover decoding all codewords in all finite covers of a Tanner graph representation of the code are competing to be the best explanation. We are interested in graph-cover decoding because it is a theoretical tool that can be used to show connections between linear programming decoding and message-passing iterative decoding. Namely, on the one hand it turns out that graph-cover decoding is essentially equivalent to linear programming decoding. On the other hand, because iterative, locally operating decoding algorithms like message-passing iterative decoding cannot distinguish the underlying Tanner graph from any covering graph, graph-cover decoding can serve as a model to explain the behavior of message-passing iterative decoding. Understanding the behavior of graph-cover decoding is therefore tantamount to understanding the behavior of message-passing iterative decoding.
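Since graph-cover decoding is essentially equivalent to LP decoding, the object both decoders effectively optimize can be shown concretely. The sketch below is an illustration written for this listing (not the paper's code): Feldman-style LP decoding over the fundamental polytope of a (7,4) Hamming code, with scipy assumed and the AWGN setup chosen arbitrarily. A fractional optimum here corresponds to a pseudo-codeword arising from a graph cover.

```python
# Sketch: LP decoding over the fundamental polytope of a (7,4) Hamming code.
import itertools
import numpy as np
from scipy.optimize import linprog

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])   # (7,4) Hamming parity-check matrix
n = H.shape[1]

def parity_inequalities(H):
    """For every check j and every odd-size subset S of its neighborhood:
       sum_{i in S} x_i - sum_{i in N(j)\\S} x_i <= |S| - 1."""
    A_ub, b_ub = [], []
    for row in H:
        nbrs = np.flatnonzero(row)
        for r in range(1, len(nbrs) + 1, 2):          # odd subset sizes
            for S in itertools.combinations(nbrs, r):
                a = np.zeros(n)
                a[list(nbrs)] = -1.0
                a[list(S)] = 1.0
                A_ub.append(a); b_ub.append(len(S) - 1)
    return np.array(A_ub), np.array(b_ub)

rng = np.random.default_rng(2)
y = 1.0 + 0.8 * rng.standard_normal(n)   # BPSK all-zero codeword through AWGN
gamma = y                                # LLR costs (scaling is irrelevant)

A_ub, b_ub = parity_inequalities(H)
res = linprog(gamma, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n, method="highs")
x = res.x
print("LP output:", np.round(x, 3))
print("integral (ML certificate)" if np.all(np.abs(x - np.round(x)) < 1e-6)
      else "fractional pseudo-codeword")
```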
Lagrangian relaxation for MAP estimation in graphical models
 in: 45th Annual Allerton Conference on Communication, Control and Computing
, 2007
"... Abstract — We develop a general framework for MAP estimation in discrete and Gaussian graphical models using Lagrangian relaxation techniques. The key idea is to reformulate an intractable estimation problem as one defined on a more tractable graph, but subject to additional constraints. Relaxing th ..."
Abstract

Cited by 23 (1 self)
We develop a general framework for MAP estimation in discrete and Gaussian graphical models using Lagrangian relaxation techniques. The key idea is to reformulate an intractable estimation problem as one defined on a more tractable graph, but subject to additional constraints. Relaxing these constraints gives a tractable dual problem, one defined by a thin graph, which is then optimized by an iterative procedure. When this iterative optimization leads to a consistent estimate, one which also satisfies the constraints, then it corresponds to an optimal MAP estimate of the original model. Otherwise there is a “duality gap”, and we obtain a bound on the optimal solution. Thus, our approach combines convex optimization with dynamic programming techniques applicable to thin graphs. The popular tree-reweighted max-product (TRMP) method may be seen as solving a particular class of such relaxations, where the intractable graph is relaxed to a set of spanning trees. We also consider relaxations to a set of small induced subgraphs, thin subgraphs (e.g., loops), and a connected tree obtained by “unwinding” cycles. In addition, we propose a new class of multiscale relaxations that introduce “summary” variables. The potential benefits of such generalizations include: reducing or eliminating the “duality gap” in hard problems, reducing the number of Lagrange multipliers in the dual problem, and accelerating convergence of the iterative optimization procedure.
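The cycle-breaking idea can be made concrete on a toy model: duplicate one node so the graph becomes a chain, relax the copy-consistency constraint with a Lagrange multiplier λ, and minimize the dual bound over λ. The sketch below is an illustrative construction, not the paper's algorithm; in a real instance the inner maximization would be computed by dynamic programming on the thin graph rather than by enumeration, and λ would be updated by subgradient steps rather than a grid search.

```python
# Sketch: Lagrangian relaxation of a binary 3-cycle MRF by duplicating node 0.
# Dual bound: q(lam) = max over chain 0-1-2-0' of score + lam*(x0 - x0').
import itertools
import numpy as np

rng = np.random.default_rng(3)
theta_i = rng.standard_normal((3, 2))
theta_e = {(0, 1): rng.standard_normal((2, 2)),
           (1, 2): rng.standard_normal((2, 2)),
           (2, 0): rng.standard_normal((2, 2))}

def primal_map():
    return max(sum(theta_i[i][x[i]] for i in range(3))
               + sum(theta_e[(i, j)][x[i], x[j]] for (i, j) in theta_e)
               for x in itertools.product((0, 1), repeat=3))

def dual(lam):
    best = -np.inf
    for x0, x1, x2, x0p in itertools.product((0, 1), repeat=4):
        s = (theta_i[0][x0] + theta_i[1][x1] + theta_i[2][x2]
             + theta_e[(0, 1)][x0, x1] + theta_e[(1, 2)][x1, x2]
             + theta_e[(2, 0)][x2, x0p] + lam * (x0 - x0p))
        best = max(best, s)
    return best

map_value = primal_map()
best_dual = min(dual(l) for l in np.linspace(-5, 5, 201))  # crude 1-D dual search
print(f"MAP {map_value:.4f} <= dual bound {best_dual:.4f}")
```

For any λ, every consistent assignment (x_0 = x_0') scores the same in the relaxed problem, so q(λ) is always a valid upper bound; a zero duality gap means the bound is met with equality.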
Analysis of absorbing sets and fully absorbing sets of array-based LDPC codes
 IEEE TRANS. ON INFORMATION THEORY
, 2008
"... The class of lowdensity paritycheck (LDPC) codes is attractive, since such codes can be decoded using practical messagepassing algorithms, and their performance is known to approach the Shannon limits for suitably large blocklengths. For the intermediate blocklengths relevant in applications, how ..."
Abstract

Cited by 18 (12 self)
The class of low-density parity-check (LDPC) codes is attractive, since such codes can be decoded using practical message-passing algorithms, and their performance is known to approach the Shannon limits for suitably large blocklengths. For the intermediate blocklengths relevant in applications, however, many LDPC codes exhibit a so-called “error floor”, corresponding to a significant flattening in the curve that relates signal-to-noise ratio (SNR) to the bit error rate (BER). Previous work has linked this behavior to combinatorial substructures within the Tanner graph associated with an LDPC code, known as (fully) absorbing sets. These fully absorbing sets correspond to a particular type of near-codewords or trapping sets that are stable under bit-flipping operations, and exert the dominant effect on the low-BER behavior of structured LDPC codes. This paper provides a detailed theoretical analysis of these (fully) absorbing sets for the class of C_{p,γ} array-based LDPC codes, including the characterization of all minimal (fully) absorbing sets for γ = 2, 3, 4, and moreover it develops techniques to enumerate them exactly. Theoretical results of this type provide a foundation for predicting and extrapolating the error floor behavior of LDPC codes.
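For concreteness, whether a set of variable nodes is an (a, b) absorbing set can be checked directly from the parity-check matrix. The helper below is a sketch written against the usual definition (every variable node in the set must have strictly more neighboring checks that see the set an even number of times than checks that see it an odd number of times); the tiny H and candidate set are illustrative only, not from the paper.

```python
# Sketch: checking the (a, b) absorbing-set condition for a variable-node
# subset D of the Tanner graph given by parity-check matrix H.
import numpy as np

def absorbing_set_profile(H, D):
    """Return (a, b, is_absorbing): a = |D|, b = |O(D)| (odd-degree checks)."""
    D = sorted(D)
    deg_in_D = H[:, D].sum(axis=1)          # per check: neighbors inside D
    odd_checks = (deg_in_D % 2 == 1)        # the set O(D)
    a, b = len(D), int(odd_checks.sum())
    ok = all(H[odd_checks, v].sum() < H[~odd_checks, v].sum() for v in D)
    return a, b, ok

H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1]])
print(absorbing_set_profile(H, [0, 1]))   # -> (2, 2, False): not absorbing
```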
Adaptive Linear Programming Decoding
, 2006
"... The ability of linear programming (LP) decoding to detect failures, and its potential for improvement by the addition of new constraints, motivates the use of an adaptive approach in selecting the constraints for the underlying LP problem. In this paper, we show that the application of such adaptiv ..."
Abstract

Cited by 17 (2 self)
The ability of linear programming (LP) decoding to detect failures, and its potential for improvement by the addition of new constraints, motivates the use of an adaptive approach in selecting the constraints for the underlying LP problem. In this paper, we show that the application of such adaptive methods can significantly reduce the complexity of the LP decoding algorithm, which, in the standard formulation, is exponential in the maximum row weight of the parity-check matrix. We further show that adaptively adding new constraints, e.g., by combining parity checks, can provide large gains in LP decoder performance.
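A hedged sketch of this adaptive loop, reusing the (7,4) Hamming setup from the LP-decoding sketch above: start from the box constraints alone, then repeatedly add the most violated parity inequality per check (found by the standard rule of thresholding at 1/2 and, if needed, flipping the bit nearest 1/2 to make the set odd) and re-solve. The code and parameters are illustrative, not the authors' implementation.

```python
# Sketch: adaptive LP decoding with on-demand cut generation.
import numpy as np
from scipy.optimize import linprog

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
n = H.shape[1]
rng = np.random.default_rng(2)
gamma = 1.0 + 0.8 * rng.standard_normal(n)   # LLR costs, all-zero codeword sent

def most_violated_cut(row, x):
    nbrs = np.flatnonzero(row)
    S = [i for i in nbrs if x[i] > 0.5]
    if len(S) % 2 == 0:                      # need odd |S|: flip bit nearest 1/2
        cand = min(nbrs, key=lambda i: abs(x[i] - 0.5))
        S = [i for i in S if i != cand] if cand in S else S + [cand]
    rest = set(nbrs) - set(S)
    if sum(x[i] for i in S) - sum(x[i] for i in rest) > len(S) - 1 + 1e-9:
        a = np.zeros(n); a[list(rest)] = -1.0; a[S] = 1.0
        return a, len(S) - 1
    return None

A_ub, b_ub = [], []
while True:
    res = linprog(gamma,
                  A_ub=np.array(A_ub) if A_ub else None,
                  b_ub=np.array(b_ub) if b_ub else None,
                  bounds=[(0, 1)] * n, method="highs")
    x = res.x
    cuts = [c for c in (most_violated_cut(row, x) for row in H) if c]
    if not cuts:
        break                                # no violated parity inequality left
    for a, rhs in cuts:
        A_ub.append(a); b_ub.append(rhs)
print("adaptive LP output:", np.round(x, 3), "after", len(A_ub), "cuts")
```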
R. Koetter, Towards Low-Complexity Linear-Programming Decoding
 Proc. 4th Int. Symposium on Turbo Codes and Related Topics
"... ..."
(Show Context)
Pseudo-Codeword Analysis of Tanner Graphs from Projective and Euclidean Planes
, 2006
"... In order to understand the performance of a code under maximumlikelihood (ML) decoding, one studies the codewords, in particular the minimal codewords, and their Hamming weights. In the context of linear programming (LP) decoding, one’s attention needs to be shifted to the pseudocodewords, in part ..."
Abstract

Cited by 17 (3 self)
In order to understand the performance of a code under maximum-likelihood (ML) decoding, one studies the codewords, in particular the minimal codewords, and their Hamming weights. In the context of linear programming (LP) decoding, one’s attention needs to be shifted to the pseudo-codewords, in particular to the minimal pseudo-codewords, and their pseudo-weights. In this paper we investigate some families of codes that have good properties under LP decoding, namely certain families of low-density parity-check (LDPC) codes that are derived from projective and Euclidean planes: we study the structure of their minimal pseudo-codewords and give lower bounds on their pseudo-weights.
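The pseudo-weight referred to here has a simple closed form in the AWGN case: w_AWGN(ω) = ‖ω‖₁² / ‖ω‖₂² for a nonnegative pseudo-codeword ω, which reduces to the Hamming weight when ω is a 0/1 codeword. A quick helper with illustrative values:

```python
# Sketch: AWGN pseudoweight of a nonnegative pseudo-codeword omega.
import numpy as np

def awgn_pseudoweight(omega):
    omega = np.asarray(omega, dtype=float)
    return omega.sum() ** 2 / (omega ** 2).sum()

# A 0/1 codeword of Hamming weight 3 has pseudoweight exactly 3:
print(awgn_pseudoweight([1, 1, 1, 0, 0]))   # 3.0
# An uneven pseudo-codeword can have pseudoweight below its support size:
print(awgn_pseudoweight([3, 1, 1]))         # 25/11 ~ 2.27 < 3
```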
Guessing Facets: Polytope Structure and Improved LP Decoder
, 2009
"... We investigate the structure of the polytope underlying the linear programming (LP) decoder introduced by Feldman, Karger, and Wainwright. We first show that for expander codes, every fractional pseudocodeword always has at least a constant fraction of nonintegral bits. We then prove that for expan ..."
Abstract

Cited by 13 (0 self)
We investigate the structure of the polytope underlying the linear programming (LP) decoder introduced by Feldman, Karger, and Wainwright. We first show that for expander codes, every fractional pseudo-codeword always has at least a constant fraction of non-integral bits. We then prove that for expander codes, the active set of any fractional pseudo-codeword is smaller by a constant fraction than that of any codeword. We further exploit these geometrical properties to devise an improved decoding algorithm with the same order of complexity as LP decoding that provably performs better. The method is very simple: it first applies ordinary LP decoding, and when it fails, it proceeds by guessing facets of the polytope, and then resolving the linear program on these facets. While the LP decoder succeeds only if the ML codeword has the highest likelihood over all pseudo-codewords, we prove that the proposed algorithm, when applied to suitable expander codes, succeeds unless there exists a certain number of pseudo-codewords, all adjacent to the ML codeword on the LP decoding polytope, and with higher likelihood than the ML codeword. We then describe an extended algorithm, still with polynomial complexity, that succeeds as long as there are at most polynomially many pseudo-codewords above the ML codeword.
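A simplified variant of this procedure can be sketched by guessing only box facets, i.e., fixing one fractional coordinate to 0 or to 1 and re-solving, which is a weaker special case of facet guessing chosen here for brevity. The sketch assumes the parity_inequalities helper and the (7,4) Hamming setup from the earlier LP-decoding sketch; it is not the authors' algorithm.

```python
# Sketch: re-solve the decoding LP on guessed box facets (x_i fixed to 0 or 1)
# whenever plain LP decoding returns a fractional optimum.
import numpy as np
from scipy.optimize import linprog

def lp_decode(gamma, A_ub, b_ub, n, fixed=None):
    bounds = [(0, 1)] * n
    for i, v in (fixed or {}).items():
        bounds[i] = (v, v)                   # restrict to the guessed facet
    res = linprog(gamma, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    if res.status != 0:                      # facet may be infeasible
        return None, np.inf
    return res.x, res.fun

def guess_and_decode(gamma, A_ub, b_ub, n):
    x, cost = lp_decode(gamma, A_ub, b_ub, n)
    frac = [i for i in range(n) if 1e-6 < x[i] < 1 - 1e-6]
    if not frac:
        return x, cost                       # already integral: ML certificate
    best = (None, np.inf)
    for i in frac:
        for v in (0, 1):                     # guess one facet x_i = v
            xg, cg = lp_decode(gamma, A_ub, b_ub, n, fixed={i: v})
            if xg is None:
                continue
            if cg < best[1] and np.all(np.abs(xg - np.round(xg)) < 1e-6):
                best = (np.round(xg), cg)
    return best

# Usage with the earlier Hamming-code setup:
#   A_ub, b_ub = parity_inequalities(H)
#   x, cost = guess_and_decode(gamma, A_ub, b_ub, n)
```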