Results 1–10 of 155
Factor Graphs and the Sum-Product Algorithm
IEEE Transactions on Information Theory, 1998
Cited by 1289 (68 self)
A factor graph is a bipartite graph that expresses how a "global" function of many variables factors into a product of "local" functions. Factor graphs subsume many other graphical models including Bayesian networks, Markov random fields, and Tanner graphs. Following one simple computational rule, the sum-product algorithm operates in factor graphs to compute, either exactly or approximately, various marginal functions by distributed message-passing in the graph. A wide variety of algorithms developed in artificial intelligence, signal processing, and digital communications can be derived as specific instances of the sum-product algorithm, including the forward/backward algorithm, the Viterbi algorithm, the iterative "turbo" decoding algorithm, Pearl's belief propagation algorithm for Bayesian networks, the Kalman filter, and certain fast Fourier transform algorithms.
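As a minimal illustration of the idea (our own sketch, not code from the paper), the following runs sum-product message passing on a three-variable chain factor graph and checks its marginals against brute-force summation; all factor values are invented for the example.

```python
import itertools

# Toy chain factor graph: x0 -- f01 -- x1 -- f12 -- x2 over binary variables,
# with unary factors g_i. The global function is
#   F(x0, x1, x2) = g0(x0) g1(x1) g2(x2) f01(x0, x1) f12(x1, x2)
g = [[0.6, 0.4], [0.5, 0.5], [0.3, 0.7]]   # unary factors g_i[x]
f01 = [[0.9, 0.1], [0.2, 0.8]]             # pairwise factor f01[x0][x1]
f12 = [[0.7, 0.3], [0.4, 0.6]]             # pairwise factor f12[x1][x2]

def marginal_bruteforce(k):
    """Marginal of x_k by summing the global function over all assignments."""
    m = [0.0, 0.0]
    for x in itertools.product([0, 1], repeat=3):
        m[x[k]] += (g[0][x[0]] * g[1][x[1]] * g[2][x[2]]
                    * f01[x[0]][x[1]] * f12[x[1]][x[2]])
    s = sum(m)
    return [v / s for v in m]

def marginal_sumproduct(k):
    """Same marginal via forward/backward messages along the chain."""
    # forward messages: each node sums out its left neighbor
    a1 = [sum(g[0][x0] * f01[x0][x1] for x0 in (0, 1)) for x1 in (0, 1)]
    a2 = [sum(g[1][x1] * a1[x1] * f12[x1][x2] for x1 in (0, 1)) for x2 in (0, 1)]
    # backward messages: each node sums out its right neighbor
    b1 = [sum(g[2][x2] * f12[x1][x2] for x2 in (0, 1)) for x1 in (0, 1)]
    b0 = [sum(g[1][x1] * b1[x1] * f01[x0][x1] for x1 in (0, 1)) for x0 in (0, 1)]
    alpha = [[1.0, 1.0], a1, a2]   # message arriving from the left of x_k
    beta = [b0, b1, [1.0, 1.0]]    # message arriving from the right of x_k
    m = [g[k][x] * alpha[k][x] * beta[k][x] for x in (0, 1)]
    s = sum(m)
    return [v / s for v in m]
```

On a tree (here, a chain) the message-passing marginals are exact, which is exactly the forward/backward algorithm the abstract lists as a special case.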
Extrinsic information transfer functions: model and erasure channel properties
IEEE Transactions on Information Theory, 2004
An Introduction to Factor Graphs
IEEE Signal Processing Magazine, January 2004
Cited by 129 (34 self)
A large variety of algorithms in coding, signal processing, and artificial intelligence may be viewed as instances of the summary-product algorithm (or belief/probability ...
Irregular Repeat Accumulate Codes
2000
Cited by 123 (1 self)
In this paper we will introduce an ensemble of codes called irregular repeat-accumulate (IRA) codes. IRA codes are a generalization of the repeat-accumulate codes introduced in [1], and as such have a natural linear-time encoding algorithm. We shall prove that on the binary erasure channel, IRA codes can be decoded reliably in linear time, using iterative sum-product decoding, at rates arbitrarily close to channel capacity. A similar result appears to be true on the AWGN channel, although we have no proof of this. We illustrate our results with numerical and experimental examples.
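The "natural linear-time encoding" comes from the repeat-permute-accumulate structure. The toy systematic encoder below (repeat count, interleaver seed, and the systematic framing are our illustrative choices, not from the paper) shows why each output bit costs constant work:

```python
import random

def ra_encode(bits, q=3, seed=0):
    """Toy systematic repeat-accumulate encoder: repeat each information
    bit q times, permute with a fixed key-seeded interleaver, then
    accumulate (running XOR). Linear time in the output length."""
    repeated = [b for b in bits for _ in range(q)]
    perm = list(range(len(repeated)))
    random.Random(seed).shuffle(perm)      # fixed pseudorandom interleaver
    interleaved = [repeated[i] for i in perm]
    acc, parity = 0, []
    for b in interleaved:                  # accumulator: p_k = p_{k-1} XOR b_k
        acc ^= b
        parity.append(acc)
    return bits + parity                   # systematic: info bits, then parity
```

An *irregular* RA code would repeat different bits a different number of times (the degree profile being the object optimized in the paper), but the encoder stays linear-time for the same reason.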
Decoding Error-Correcting Codes via Linear Programming
2003
Cited by 82 (6 self)
Error-correcting codes are fundamental tools used to transmit digital information over unreliable channels. Their study goes back to the work of Hamming [Ham50] and Shannon [Sha48], who used them as the basis for the field of information theory. The problem of decoding the original information up to the full error-correcting potential of the system is often very complex, especially for modern codes that approach the theoretical limits of the communication channel. In this thesis we investigate the application of linear programming (LP) relaxation to the problem of decoding an error-correcting code. Linear programming relaxation is a standard technique in approximation algorithms and operations research, and is central to the study of efficient algorithms for finding good (albeit suboptimal) solutions to very difficult optimization problems. Our new "LP decoders" have tight combinatorial characterizations of decoding success that can be used to analyze error-correcting performance. Furthermore, LP decoders have the desirable (and rare) property that whenever they output a result, it is guaranteed to be the optimal result: the most likely (ML) information sent over the ...
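A minimal sketch of the LP relaxation idea, using the standard formulation in which each parity check forbids its odd-weight local configurations; the (7,4) Hamming code and the BSC cost vector are our example choices, not code from the thesis:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Parity-check matrix of the (7,4) Hamming code (example code).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def lp_decode(y):
    """LP decoding of a hard-decision word y received over a BSC.
    Minimizes sum_i c_i x_i over a relaxed codeword polytope, with
    c_i = +1 if y_i = 0 and c_i = -1 if y_i = 1."""
    n = H.shape[1]
    c = np.where(np.array(y) == 0, 1.0, -1.0)
    A, b = [], []
    for row in H:
        nbrs = np.flatnonzero(row)
        # forbid each odd-weight local configuration of the check:
        #   sum_{i in S} x_i - sum_{i in N\S} x_i <= |S| - 1
        for size in range(1, len(nbrs) + 1, 2):
            for S in itertools.combinations(nbrs, size):
                a = np.zeros(n)
                a[list(nbrs)] = -1.0
                a[list(S)] = 1.0
                A.append(a)
                b.append(size - 1.0)
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, 1)] * n, method="highs")
    return res.x
```

When the LP optimum is integral it is a codeword, and by optimality it is the ML codeword: this is the "ML certificate" property described in the abstract. Fractional optima (pseudocodewords) are exactly the decoding failures the combinatorial analysis characterizes.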
Design methods for irregular repeat accumulate codes
2002
Cited by 37 (5 self)
We optimize the random-like ensemble of Irregular Repeat Accumulate (IRA) codes for binary-input symmetric channels in the large-blocklength limit. Our optimization technique is based on approximating the Density Evolution (DE) of the messages exchanged by the Belief Propagation (BP) message-passing decoder by a one-dimensional dynamical system. In this way, the code-ensemble optimization can be solved by linear programming. We propose four such DE approximation methods, and compare the performance of the obtained code ensembles over the binary symmetric channel (BSC) and the binary-antipodal-input additive white Gaussian noise channel (BI-AWGNC). Our results clearly identify the best among the proposed methods and show that the IRA codes obtained by these methods are competitive with respect to the best-known irregular Low-Density Parity-Check (LDPC) codes. In view of this and the very simple encoding structure of IRA codes, they emerge as attractive design choices.
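To give a flavor of how a one-dimensional DE condition turns ensemble optimization into a linear program, here is a simplified sketch for plain LDPC codes on the binary erasure channel, where DE really is one-dimensional (the IRA case treated in the paper is more involved; the erasure probability, check degree, and maximum variable degree are illustrative choices):

```python
import numpy as np
from scipy.optimize import linprog

def optimize_lambda(eps=0.4, dc=6, dmax=10, grid=200):
    """Toy LDPC degree-distribution design on the BEC. Fix the check
    edge-degree distribution rho(x) = x^(dc-1) and choose variable
    edge-degree weights lambda_i to maximize the design rate subject to
    the density evolution condition
        eps * lambda(1 - rho(1 - x)) <= x   for x in (0, 1],
    which is linear in the lambda_i, so the design is an LP."""
    degs = np.arange(2, dmax + 1)
    xs = np.linspace(1e-3, 1.0, grid)
    f = 1.0 - (1.0 - xs) ** (dc - 1)               # 1 - rho(1 - x)
    A = eps * f[:, None] ** (degs[None, :] - 1)    # one constraint row per grid point
    b = xs
    c = -1.0 / degs                                # maximize sum_i lambda_i / i
    res = linprog(c, A_ub=A, b_ub=b,
                  A_eq=np.ones((1, len(degs))), b_eq=[1.0],
                  bounds=[(0, 1)] * len(degs), method="highs")
    lam = res.x
    # design rate: R = 1 - (int rho)/(int lambda) = 1 - (1/dc)/sum(lam_i/i)
    rate = 1.0 - (1.0 / dc) / float(np.sum(lam / degs))
    return lam, rate
```

With eps = 0.4 the optimized rate lands between the regular (3,6) rate of 1/2 and the BEC capacity 1 - eps = 0.6, illustrating the gain from irregularity.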
Analysis of low-density parity-check codes for the Gilbert-Elliott channel
IEEE Transactions on Information Theory, 2005
Cited by 29 (6 self)
Density evolution analysis of low-density parity-check (LDPC) codes in memoryless channels is extended to the Gilbert–Elliott (GE) channel, which is a special case of a large class of channels with hidden Markov memory. In a procedure referred to as estimation decoding, the sum–product algorithm (SPA) is used to perform LDPC decoding jointly with channel-state detection. Density evolution results show (and simulation results confirm) that such decoders provide a significantly enlarged region of successful decoding within the GE parameter space, compared with decoders that do not exploit the channel memory. By considering a variety of ways in which a GE channel may be degraded, it is shown how knowledge of the decoding behavior at a single point of the GE parameter space may be extended to a larger region within the space, thereby mitigating the large complexity needed in using density evolution to explore the parameter space point-by-point. Using the GE channel as a straightforward example, we conclude that analysis of estimation decoding for LDPC codes is feasible in channels with memory, and that such analysis shows large potential gains.
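The GE channel itself is easy to simulate; the sketch below (all transition and crossover probabilities are arbitrary example values) shows the two-state hidden Markov structure that estimation decoding exploits:

```python
import random

def ge_channel(bits, p_gb=0.05, p_bg=0.2, e_good=0.01, e_bad=0.3, seed=1):
    """Simulate a Gilbert-Elliott channel: a two-state Markov chain
    (good/bad) where each state acts as a BSC with its own crossover
    probability. Returns the received bits and the hidden state sequence."""
    rng = random.Random(seed)
    state = "good"
    out, states = [], []
    for b in bits:
        e = e_good if state == "good" else e_bad
        out.append(b ^ (rng.random() < e))      # flip with state-dependent prob
        states.append(state)
        # Markov state transition
        if state == "good":
            state = "bad" if rng.random() < p_gb else "good"
        else:
            state = "good" if rng.random() < p_bg else "bad"
    return out, states
```

Because errors cluster in the "bad" state, a decoder that jointly infers the state sequence (as the SPA-based estimation decoder does) can do much better than one that treats the channel as a memoryless BSC with the averaged crossover probability.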
Coding theorems for turbo code ensembles
IEEE Transactions on Information Theory, 2002
Cited by 28 (0 self)
This paper is devoted to a Shannon-theoretic study of turbo codes. We prove that ensembles of parallel and serial turbo codes are "good" in the following sense. For a turbo code ensemble defined by a fixed set of component codes (subject only to mild necessary restrictions), there exists a positive number γ0 such that for any binary-input memoryless channel whose Bhattacharyya noise parameter is less than γ0, the average maximum-likelihood (ML) decoder block error probability approaches zero, at least as fast as n^(−β), where β is the "interleaver gain" exponent defined by Benedetto et al. in 1996.
Index Terms: Bhattacharyya parameter, coding theorems, maximum-likelihood decoding (MLD), turbo codes, union bound.
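The Bhattacharyya parameter used here is Z = sum over outputs y of sqrt(P(y|0) P(y|1)), and the union bound gives a block-error bound of the form Z^d for two codewords at Hamming distance d. A small sanity check (the repetition code is our own example, not the paper's):

```python
import math

def bhattacharyya_bsc(p):
    """Z for a BSC with crossover p: sqrt(p(1-p)) + sqrt((1-p)p) per
    output, i.e. Z = 2 sqrt(p (1 - p))."""
    return 2.0 * math.sqrt(p * (1.0 - p))

def repetition_ml_error(n, p):
    """Exact ML block-error probability for the length-n binary
    repetition code on a BSC(p): error iff at least ceil(n/2) flips
    (ties counted as errors)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(math.ceil(n / 2), n + 1))
```

For n = 5 and p = 0.1, Z = 0.6 and the exact error probability 0.00856 indeed sits below the Bhattacharyya/union bound Z^5 = 0.07776, illustrating why driving Z below a threshold γ0 forces the ensemble-average error probability to zero.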
Joint noncoherent demodulation and decoding for the block fading channel: a practical framework for approaching Shannon capacity
IEEE Transactions on Communications, 2003
Cited by 25 (8 self)
This paper contains a systematic investigation of practical coding strategies for noncoherent communication over fading channels, guided by explicit comparisons with information-theoretic benchmarks. Noncoherent reception is interpreted as joint data and channel estimation, assuming that the channel is time-varying and a priori unknown. We consider iterative decoding for a serial concatenation of a standard binary outer channel code with an inner modulation code amenable to noncoherent detection. For an information rate of about 1/2 bit per channel use, the proposed scheme, using a quaternary phase-shift keying (QPSK) alphabet, provides performance within 1.6–1.7 dB of Shannon capacity for the block fading channel, and is about 2.5–3 dB superior to standard differential demodulation in conjunction with an outer channel code. We also provide capacity computations for noncoherent communication using standard phase-shift keying (PSK) and quadrature amplitude modulation (QAM) alphabets; comparing these with the unconstrained-input capacity provides guidance as to the choice of constellation as a function of the signal-to-noise ratio. These results imply that QPSK suffices to approach the unconstrained capacity for the relatively low information and fading rates considered in our performance evaluations, but that QAM is superior to PSK for higher information or fading rates, motivating further research into efficient noncoherent coded modulation with QAM alphabets.
Index Terms: Capacity, coding, fading channels, noncoherent detection, wireless communications.
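The differential-demodulation baseline the authors compare against can be sketched in a few lines: information rides on the phase *change* between consecutive symbols, so an unknown constant carrier phase cancels out. The implementation details below are our own illustration, not the paper's scheme:

```python
import cmath
import math

def dqpsk_modulate(symbols):
    """Differential QPSK: each symbol m in {0,1,2,3} advances the
    carrier phase by m * pi/2 relative to the previous symbol."""
    phase = 0.0
    out = [cmath.exp(1j * phase)]          # reference symbol
    for m in symbols:
        phase += m * math.pi / 2
        out.append(cmath.exp(1j * phase))
    return out

def dqpsk_demodulate(rx):
    """Recover each symbol from the phase of r_k * conj(r_{k-1});
    insensitive to any fixed unknown phase rotation of the block."""
    syms = []
    for prev, cur in zip(rx, rx[1:]):
        ang = cmath.phase(cur * prev.conjugate())
        syms.append(round(ang / (math.pi / 2)) % 4)
    return syms
```

This phase-reference-free property is what makes differential demodulation the standard noncoherent baseline, and the 2.5–3 dB gap quoted above is the price it pays relative to joint data/channel estimation.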
YASS: yet another steganographic scheme that resists blind steganalysis
In 9th International Workshop on Information Hiding, 2007
Cited by 24 (6 self)
A new, simple approach for active steganography is proposed in this paper that can successfully resist recent blind steganalysis methods, in addition to surviving distortion-constrained attacks. We present Yet Another Steganographic Scheme (YASS), a method based on embedding data in randomized locations so as to disable the self-calibration process (such as cropping a few pixel rows and/or columns to estimate the cover image features) popularly used by blind steganalysis schemes. The errors induced in the embedded data, due to the fact that the stego signal must be advertised in a specific format such as JPEG, are dealt with by the use of erasure and error-correcting codes. For the presented JPEG steganographic scheme, it is shown that the detection rates of recent blind steganalysis schemes are close to random guessing, thus confirming the practical applicability of the proposed technique. We also note that the presented steganography framework, of hiding in randomized locations and using a coding framework to deal with errors, is quite simple yet very generalizable.
Key words: data hiding, error correcting codes, steganalysis, steganography, supervised learning.
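The core "hide in key-randomized locations" idea can be sketched as a toy LSB scheme (only an analogy: YASS itself embeds in quantized DCT coefficients of randomly placed blocks and pairs the embedding with erasure/error-correcting codes):

```python
import random

def embed(cover, payload, key):
    """Toy randomized-location embedding: a key-seeded PRNG picks
    len(payload) positions in the cover and their least-significant
    bits are overwritten with the payload bits."""
    stego = list(cover)
    positions = random.Random(key).sample(range(len(cover)), len(payload))
    for pos, bit in zip(positions, payload):
        stego[pos] = (stego[pos] & ~1) | bit
    return stego

def extract(stego, n_bits, key):
    """Recompute the same positions from the shared key and read LSBs."""
    positions = random.Random(key).sample(range(len(stego)), n_bits)
    return [stego[pos] & 1 for pos in positions]
```

Because the positions depend on a secret key, a steganalyst cannot recalibrate against a fixed embedding grid, which is the intuition behind defeating self-calibration-based blind steganalysis.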