Results 1–10 of 119
Extrinsic information transfer functions: A model and two properties
 in Proc. Conf. Info. Sci. Syst. (CISS)
, 2002
Asynchronous physical-layer network coding, technical report. Available: http://arxiv.org/abs/1105.3144
Abstract

Cited by 100 (11 self)
Abstract—A key issue in physical-layer network coding (PNC) is how to deal with the asynchrony between signals transmitted by multiple transmitters. That is, symbols transmitted by different transmitters could arrive at the receiver with symbol misalignment as well as relative carrier-phase offset. A second important issue is how to integrate channel coding with PNC to achieve reliable communication. This paper investigates these two issues and makes the following contributions: 1) We propose and investigate a general framework for decoding at the receiver based on belief propagation (BP). The framework can effectively deal with symbol and phase asynchronies while incorporating channel coding at the same time. 2) For unchannel-coded PNC, we show that for BPSK and QPSK modulations, our BP method can significantly reduce the asynchrony penalties compared with prior methods. 3) For QPSK unchannel-coded PNC, with a half-symbol offset between the transmitters, our BP method can drastically reduce the performance penalty due to phase asynchrony, from more than 6 dB to no more than 1 dB. 4) For channel-coded PNC, with our BP method, both symbol and phase asynchronies actually improve the system performance compared with the perfectly synchronous case. Furthermore, the performance spread due to different combinations of symbol and phase offsets between the transmitters in channel-coded PNC is only around 1 dB. The implication of 3) is that if we could control the symbol arrival times at the receiver, it would be advantageous to deliberately introduce a half-symbol offset in unchannel-coded PNC. The implication of 4) is that when channel coding is used, symbol and phase asynchronies are not major performance concerns in PNC. Index Terms—Physical-layer network coding, network coding, synchronization.
Asymptotic analysis of MAP estimation via the replica method and applications to compressed sensing
, 2009
Abstract

Cited by 80 (10 self)
The replica method is a non-rigorous but widely accepted technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method to non-Gaussian maximum a posteriori (MAP) estimation. It is shown that with random linear measurements and Gaussian noise, the asymptotic behavior of the MAP estimate of an n-dimensional vector “decouples” as n scalar MAP estimators. The result is a counterpart to Guo and Verdú’s replica analysis of minimum mean-squared error estimation. The replica MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, lasso, linear estimation with thresholding, and zero-norm-regularized estimation. In the case of lasso estimation the scalar estimator reduces to a soft-thresholding operator, and for zero-norm-regularized estimation it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for exactly computing various performance metrics including mean-squared error and sparsity pattern recovery probability.
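The reduction claimed in the abstract, lasso to soft thresholding and zero-norm regularization to a hard threshold, can be made concrete with the two scalar operators below (a minimal sketch; the function names and threshold parameter `lam` are illustrative, not from the paper):

```python
import numpy as np

def soft_threshold(r, lam):
    """Scalar soft-thresholding operator: argmin_x (x - r)^2 / 2 + lam * |x|.
    Shrinks r toward zero by lam and zeroes anything inside [-lam, lam]."""
    return np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)

def hard_threshold(r, lam):
    """Scalar hard-thresholding operator: keeps r unchanged when |r| > lam,
    otherwise sets it to zero (the l0-regularized scalar MAP solution)."""
    return np.where(np.abs(r) > lam, r, 0.0)
```

The qualitative difference is that the soft operator biases every surviving coefficient toward zero by `lam`, while the hard operator leaves survivors untouched.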
Soft-Input Soft-Output Lattice Sphere Decoder for Linear Channels
 Proc. of the IEEE GLOBECOM’03
, 2003
Abstract

Cited by 49 (11 self)
Soft output detection for signals transmitted on linear channels is investigated. A particular emphasis is placed on signal detection on multiple-antenna channels. The a posteriori information at the detector output is evaluated from a shifted spherical list of point candidates. The spherical list is centered on the maximum-likelihood point, which has the great advantage of stabilizing the list size. Thus, the sphere radius is selected in order to control the list size and to cope with the boundaries of the finite multiple-antenna constellation. Our new soft output sphere decoder is then applied to the computation of constrained channel capacity and to the iterative detection of a coded transmission. For example, we achieved a signal-to-noise ratio 1.25 dB from the capacity limit on a 4×4 MIMO channel with 16-QAM modulation and a 4-state rate-1/2 parallel turbo code.
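The list-based soft-output computation described in this abstract can be illustrated with a max-log LLR over a candidate list. This is a simplified sketch, not the authors' decoder: it assumes BPSK symbols and a full candidate list supplied by the caller, with no radius control:

```python
import numpy as np

def list_llrs(H, y, sigma2, candidates):
    """Max-log a posteriori LLRs from a list of candidate symbol vectors.
    H: (m, n) channel matrix; y: (m,) received vector; sigma2: noise variance;
    candidates: (L, n) array of BPSK (+/-1) hypothesis vectors.
    LLR for bit i = best (smallest) metric with s_i = -1 minus best with
    s_i = +1, so a positive LLR favors s_i = +1."""
    metrics = np.sum((y - candidates @ H.T) ** 2, axis=1) / sigma2
    llrs = np.empty(candidates.shape[1])
    for i in range(candidates.shape[1]):
        # A real list decoder must ensure both hypotheses for each bit appear
        # in the list (or clip the LLR); full enumeration sidesteps that here.
        m_plus = metrics[candidates[:, i] > 0].min()
        m_minus = metrics[candidates[:, i] < 0].min()
        llrs[i] = m_minus - m_plus
    return llrs
```

Centering the list on the maximum-likelihood point, as the paper does, guarantees the best candidate is always present and stabilizes the list size; the full enumeration assumed above is only workable for tiny constellations.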
Compressive Imaging using Approximate Message Passing and a Markov-Tree Prior
 in Proc. Asilomar Conf. on Signals, Systems, and Computers
, 2010
Abstract

Cited by 43 (8 self)
Abstract—We propose a novel algorithm for compressive imaging that exploits both the sparsity and persistence across scales found in the 2D wavelet transform coefficients of natural images. Like other recent works, we model wavelet structure using a hidden Markov tree (HMT) but, unlike other works, ours is based on loopy belief propagation (LBP). For LBP, we adopt a recently proposed “turbo” message passing schedule that alternates between exploitation of HMT structure and exploitation of compressive-measurement structure. For the latter, we leverage Donoho, Maleki, and Montanari’s recently proposed approximate message passing (AMP) algorithm. Experiments on a large image database show that our turbo LBP approach maintains state-of-the-art reconstruction performance at half the complexity.
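The AMP algorithm this abstract builds on is, at its core, iterative soft thresholding plus an extra "Onsager" correction term. A minimal sketch of plain AMP follows, without the HMT/turbo scheduling of the paper; the threshold policy and `alpha` are common heuristics, not the authors' choices:

```python
import numpy as np

def soft(r, lam):
    return np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)

def amp(A, y, n_iter=30, alpha=1.5):
    """Plain AMP for y = A x + noise with a soft-thresholding denoiser.
    The Onsager correction (last term added to the residual z) is what
    distinguishes AMP from plain iterative soft thresholding."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        lam = alpha * np.linalg.norm(z) / np.sqrt(m)  # heuristic threshold
        x_new = soft(x + A.T @ z, lam)
        onsager = (np.count_nonzero(x_new) / m) * z   # Onsager correction
        z = y - A @ x_new + onsager
        x = x_new
    return x
```

For i.i.d. Gaussian A and sufficiently sparse x, this loop recovers the signal from noiseless measurements well below the Nyquist count; the paper's contribution is to replace the simple scalar denoiser with tree-structured belief propagation.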
Design methods for irregular repeat accumulate codes
, 2002
Abstract

Cited by 41 (5 self)
We optimize the random-like ensemble of Irregular Repeat Accumulate (IRA) codes for binary-input symmetric channels in the large blocklength limit. Our optimization technique is based on approximating the evolution of the densities (DE) of the messages exchanged by the belief-propagation (BP) message-passing decoder by a one-dimensional dynamical system. In this way, the code ensemble optimization can be solved by linear programming. We propose four such DE approximation methods, and compare the performance of the obtained code ensembles over the binary symmetric channel (BSC) and the binary-antipodal input additive white Gaussian noise channel (BIAWGNC). Our results clearly identify the best among the proposed methods and show that the IRA codes obtained by these methods are competitive with respect to the best-known irregular low-density parity-check (LDPC) codes. In view of this and the very simple encoding structure of IRA codes, they emerge as attractive design choices.
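On the binary erasure channel the DE recursion is exactly one-dimensional, which gives a concrete picture of the kind of one-dimensional approximation used here. The sketch below is standard LDPC density evolution on the BEC, not the IRA ensemble optimization of the paper; the iteration counts and tolerances are arbitrary choices, checked against the well-known (3,6)-regular threshold of about 0.429:

```python
def de_converges(eps, lam, rho, n_iter=2000, tol=1e-9):
    """One-dimensional density evolution on the BEC:
    x_{l+1} = eps * lam(1 - rho(1 - x_l)), where lam and rho are the
    edge-perspective degree polynomials. Returns True if the erasure
    probability is driven to (near) zero, i.e. decoding succeeds."""
    x = eps
    for _ in range(n_iter):
        x = eps * lam(1.0 - rho(1.0 - x))
        if x < tol:
            return True
    return False

def bec_threshold(lam, rho, iters=40):
    """Bisection for the largest channel erasure probability at which
    density evolution still converges."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if de_converges(mid, lam, rho):
            lo = mid
        else:
            hi = mid
    return lo

# (3,6)-regular LDPC ensemble: lambda(x) = x^2, rho(x) = x^5
threshold = bec_threshold(lambda x: x ** 2, lambda x: x ** 5)
```

Because the recursion is scalar, optimizing the degree distribution for a fixed rate reduces to constraints that are linear in the coefficients of λ, which is what makes the linear-programming formulation in the abstract possible.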
Random Sparse Linear Systems Observed Via Arbitrary Channels: A Decoupling Principle
Abstract

Cited by 35 (0 self)
Abstract—This paper studies the problem of estimating the vector input to a sparse linear transformation based on the observation of the output vector through a bank of arbitrary independent channels. The linear transformation is drawn randomly from an ensemble with mild regularity conditions. The central result is a decoupling principle in the large-system limit. That is, the optimal estimation of each individual symbol in the input vector is asymptotically equivalent to estimating the same symbol through a scalar additive Gaussian channel, where the aggregate effect of the interfering symbols is tantamount to a degradation in the signal-to-noise ratio. The degradation is determined from a recursive formula related to the score function of the conditional probability distribution of the noisy channel. A sufficient condition is provided for belief propagation (BP) to asymptotically produce the a posteriori probability distribution of each input symbol given the output. This paper extends the authors’ previous decoupling result for Gaussian channels, which was based on an earlier work of Montanari and Tse, to arbitrary channels. Moreover, a rigorous justification is provided for the generalization of some results obtained via statistical physics methods.
Analysis of low-density parity-check codes for the Gilbert-Elliott channel
 IEEE TRANS. INF. THEORY
, 2005
Abstract

Cited by 34 (8 self)
Density evolution analysis of low-density parity-check (LDPC) codes in memoryless channels is extended to the Gilbert–Elliott (GE) channel, which is a special case of a large class of channels with hidden Markov memory. In a procedure referred to as estimation decoding, the sum–product algorithm (SPA) is used to perform LDPC decoding jointly with channel-state detection. Density evolution results show (and simulation results confirm) that such decoders provide a significantly enlarged region of successful decoding within the GE parameter space, compared with decoders that do not exploit the channel memory. By considering a variety of ways in which a GE channel may be degraded, it is shown how knowledge of the decoding behavior at a single point of the GE parameter space may be extended to a larger region within the space, thereby mitigating the large complexity needed in using density evolution to explore the parameter space point-by-point. Using the GE channel as a straightforward example, we conclude that analysis of estimation decoding for LDPC codes is feasible in channels with memory, and that such analysis shows large potential gains.
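The Gilbert–Elliott channel itself is simple to simulate: a hidden two-state Markov chain selects between a low-crossover and a high-crossover BSC. A sketch follows; the parameter names and any values used are illustrative, not taken from the paper:

```python
import numpy as np

def gilbert_elliott(n, p_gb, p_bg, eps_g, eps_b, rng=None):
    """Simulate n uses of a Gilbert-Elliott channel.
    p_gb / p_bg: transition probabilities good->bad and bad->good;
    eps_g / eps_b: BSC crossover probability in the good / bad state.
    Returns (states, flips): the hidden state sequence (0 = good, 1 = bad)
    and the binary error sequence (1 = transmitted bit is flipped)."""
    rng = np.random.default_rng(rng)
    states = np.empty(n, dtype=int)
    s = 0
    for i in range(n):
        states[i] = s
        if s == 0 and rng.random() < p_gb:
            s = 1
        elif s == 1 and rng.random() < p_bg:
            s = 0
    eps = np.where(states == 0, eps_g, eps_b)
    flips = (rng.random(n) < eps).astype(int)
    return states, flips

def stationary_bad_fraction(p_gb, p_bg):
    """Stationary probability of the bad state for the two-state chain."""
    return p_gb / (p_gb + p_bg)
```

An estimation-decoding SPA as described in the abstract would treat `states` as latent variables on a Markov sub-graph attached to the LDPC factor graph, rather than observing them as this simulator does.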
Approximate message passing with consistent parameter estimation and applications to sparse learning, arXiv:1207.3859 [cs.IT]
, 2012
Abstract

Cited by 22 (4 self)
We consider the estimation of an i.i.d. vector x ∈ R^n from measurements y ∈ R^m obtained by a general cascade model consisting of a known linear transform followed by a probabilistic componentwise (possibly nonlinear) measurement channel. We present a method, called adaptive generalized approximate message passing (Adaptive GAMP), that enables joint learning of the statistics of the prior and measurement channel along with estimation of the unknown vector x. Our method can be applied to a large class of learning problems including the learning of sparse priors in compressed sensing or identification of linear-nonlinear cascade models in dynamical systems and neural spiking processes. We prove that for large i.i.d. Gaussian transform matrices the asymptotic componentwise behavior of the adaptive GAMP algorithm is predicted by a simple set of scalar state evolution equations. This analysis shows that the adaptive GAMP method can yield asymptotically consistent parameter estimates, which implies that the algorithm achieves a reconstruction quality equivalent to the oracle algorithm that knows the correct parameter values. The adaptive GAMP methodology thus provides a systematic, general and computationally efficient method applicable to a large range of complex linear-nonlinear models with provable guarantees.