Results 11–20 of 33
Sensing Reality and Communicating Bits: A Dangerous Liaison
, 2006
Abstract

Cited by 21 (1 self)
[Is digital communication sufficient for sensor networks?] The successful design of sensor network architectures depends crucially on the structure of the sampling, observation, and communication processes. One of the most fundamental questions concerns the sufficiency of discrete approximations in time, space, and amplitude. More explicitly, to capture the spatiotemporal variations of the underlying signals, when is it sufficient to build sensor network systems that work with discrete-time and discrete-space representations? And can the underlying amplitude variations of interest be observed at the highest possible fidelity if the sensors quantize their observations, assuming that quantization is done in the most sophisticated fashion, exploiting the principles of (ideal) distributed source coding? The former can be rephrased as the question of whether there is a spatiotemporal sampling theorem for typical data sets in sensor networks. This question has a positive answer in many cases of interest, based on the physics of the processes to be observed. The latter can be expressed as the question of whether there is a ...
Reduced-dimension linear transform coding of correlated signals in networks
 IEEE Trans. Sig. Processing
, 2012
An optimizer’s approach to stochastic control problems with nonclassical information structure
 in Proceedings of the IEEE Conference on Decision and Control
, 2012
Abstract

Cited by 3 (0 self)
We present a general optimization-based framework for stochastic control problems with nonclassical information structures. We cast these problems equivalently as optimization problems on joint distributions. The resulting problems are necessarily nonconvex. Our approach to solving them is through convex relaxation. We solve the instance solved by Bansal and Başar [2] with a particular application of this approach that uses the data processing inequality for constructing the convex relaxation. Using certain f-divergences, we obtain a new, larger set of inverse optimal cost functions for such problems. Insights are obtained on the relation between the structure of cost functions and of convex relaxations for inverse optimal control. I. MOTIVATION AND CONTRIBUTION This paper concerns the following stochastic control problem with nonclassical information structure: minimize J(γ0, γ1) = E[κ(S, X, Y, Ŝ)] ...
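The objective above can be made concrete on a toy instance. The following is a minimal sketch, not the authors' formulation: binary alphabets, an assumed binary symmetric channel, and an illustrative cost κ. The expected cost J(γ0, γ1) is evaluated exactly over the joint distribution of (S, X, Y, Ŝ) induced by each deterministic strategy pair, and brute force finds the best pair.

```python
from itertools import product

P_S = {0: 0.5, 1: 0.5}  # source prior on S (assumed)
EPS = 0.1               # crossover probability of an assumed binary symmetric channel

def channel(y, x):
    # P(Y = y | X = x) for the binary symmetric channel
    return 1.0 - EPS if y == x else EPS

def cost(s, x, y, s_hat):
    # illustrative cost kappa: estimation error plus a small signalling cost
    return float(s != s_hat) + 0.05 * x

def J(g0, g1):
    # expected cost under deterministic strategies g0 (encoder) and g1
    # (decoder): an exact sum over the induced joint law of (S, X, Y, S_hat)
    total = 0.0
    for s, y in product((0, 1), (0, 1)):
        x = g0[s]
        total += P_S[s] * channel(y, x) * cost(s, x, y, g1[y])
    return total

# brute force over all deterministic strategy pairs on binary alphabets
best = min((J(g0, g1), g0, g1)
           for g0 in product((0, 1), repeat=2)
           for g1 in product((0, 1), repeat=2))
print(best)  # (minimum expected cost, best encoder map, best decoder map)
```

Note that J is linear in the joint pmf; the hardness the abstract refers to comes from the constraints the information structure places on which joint distributions are realizable, which is what the convex relaxation addresses.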
Power-constrained bandwidth-reduction source-channel mappings for fading channels
 in Proc. of the 26th Bien. Symp. on Comm
, 2012
Abstract

Cited by 3 (3 self)
Abstract—We consider the transmission of a memoryless Gaussian source over a power-constrained Rayleigh fading channel with additive white Gaussian noise. We propose the use of low-delay joint source-channel mappings and consider optimizing the nonparametric mappings through an iterative process. A design algorithm for joint source-channel mapping is proposed and numerically evaluated for 2:1, 3:1, and 4:1 bandwidth reductions. Parametric mappings are also studied. We consider three cases of fading knowledge; when channel state information is present at both encoder and decoder, the optimal power allocation for the parametric mappings is solved in terms of the fading gains and the average power constraint. It is shown that the proposed nonparametric and parametric mappings, which have a nonlinear structure, achieve graceful and robust performance and largely overcome the saturation effect of linear systems. Archimedes' spiral was recently considered in [8] for fading channels. In this work, we use nonparametric and parametric mappings under different bandwidth reduction ratios. The case of bandwidth reduction/expansion over additive white Gaussian noise (AWGN) channels was studied in [9]–[11]. In [9], [10], the approach used is based on mapping the output of a vector quantizer to a specific point in a channel signal set. A direct source-channel mapping approach, however, was considered in [11]. Source-channel mappings for the relay and MAC channels were studied in [12], [13]. Our system, which uses nonlinear direct source-channel mappings over a fading channel, is shown to overcome the performance saturation that is unavoidable when using linear systems, and achieves graceful performance. The rest of the paper is organized as follows.
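The Archimedes' spiral mapping mentioned above can be sketched numerically. This is a minimal noiseless illustration, not the paper's optimized design: the arm spacing DELTA is an assumed value, a coarse grid search stands in for the encoder's nearest-point projection, and a 2:1 bandwidth reduction is realized by mapping a source pair (s1, s2) to the scalar spiral parameter t.

```python
import math

DELTA = 0.5                  # radial spacing between spiral arms (assumed)
A = DELTA / (2.0 * math.pi)  # so that r(t + 2*pi) - r(t) = DELTA

def spiral_point(t):
    # point on the Archimedes spiral r = A*t at parameter t >= 0
    return (A * t * math.cos(t), A * t * math.sin(t))

def encode(s1, s2):
    # 2:1 bandwidth reduction: transmit the scalar parameter t of the
    # (approximately) nearest spiral point; a coarse grid search stands in
    # for a real nearest-point projection
    return min((0.01 * k for k in range(4000)),
               key=lambda t: (spiral_point(t)[0] - s1) ** 2
                           + (spiral_point(t)[1] - s2) ** 2)

def decode(t):
    # the decoder maps the received parameter back onto the spiral
    return spiral_point(t)

t = encode(0.3, 0.4)
s1_hat, s2_hat = decode(t)
print(t, s1_hat, s2_hat)  # reconstruction lands close to (0.3, 0.4)
```

A practical design uses a double spiral so that every source pair lies near an arm, and tunes DELTA against the channel noise; noise on the received t moves the reconstruction along the arm, which is what gives these nonlinear mappings their graceful degradation.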
Source Fidelity over Fading Channels: Performance of Erasure and Scalable Codes
Abstract

Cited by 1 (0 self)
Abstract—We consider the transmission of a Gaussian source through a block fading channel. Assuming each block is decoded independently, the received distortion depends on the tradeoff between quantization accuracy and probability of outage. Namely, higher quantization accuracy requires a higher channel code rate, which increases the probability of outage. We first treat an outage as an erasure, and evaluate the received mean distortion with erasure coding across blocks as a function of the code length. We then evaluate the performance of scalable, or multi-resolution, coding in which coded layers are superimposed within a coherence block, and the layers are sequentially decoded. Both the rate and the power allocated to each layer are optimized. In addition to analyzing the performance with a finite number of layers, we evaluate the mean distortion at high Signal-to-Noise Ratios as the number of layers becomes infinite. As the block length of the erasure code increases to infinity, the received distortion converges to a deterministic limit, which is less than the mean distortion with an infinite-layer scalable coding scheme. However, for the same standard deviation in received distortion, infinite-layer scalable coding performs slightly better than erasure coding, and with much less decoding delay. Index Terms—Source-channel coding, scalable coding, fading channel, broadcast channel, rate distortion.
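The rate-versus-outage tradeoff described above can be sketched for a single block. This is a hypothetical numerical example, not the paper's analysis: it assumes a unit-variance Gaussian source with distortion-rate function D(R) = 2^(-2R), a unit-mean exponential (Rayleigh) fading gain, and distortion equal to the source variance when the block is lost to outage.

```python
import math

def outage_prob(R, snr):
    # Rayleigh fading, gain g ~ Exp(1): outage when log2(1 + g*snr) < R,
    # i.e. P(g < (2^R - 1)/snr) = 1 - exp(-(2^R - 1)/snr)
    return 1.0 - math.exp(-(2.0 ** R - 1.0) / snr)

def mean_distortion(R, snr):
    # unit-variance Gaussian source: D(R) = 2^(-2R) when the block decodes,
    # distortion 1 (the source variance) when it is lost to outage
    p = outage_prob(R, snr)
    return p * 1.0 + (1.0 - p) * 2.0 ** (-2.0 * R)

snr = 100.0  # assumed average SNR (about 20 dB), for illustration only
best = min((mean_distortion(0.1 * k, snr), 0.1 * k) for k in range(1, 100))
print(best)  # (minimum mean distortion, rate that achieves it)
```

Raising R shrinks the quantization distortion but inflates the outage term, so the sweep exposes an interior optimum; erasure coding across blocks and scalable layering then operate on top of this single-block tradeoff.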
Source coding and transmission under common knowledge constraints
 in Proc. UCSD Workshop on Information Theory and Applications
, 2008
Abstract

Cited by 1 (0 self)
Abstract — This work studies problems of source coding under the requirement that the encoder can produce an exact copy of the compressed source constructed by the decoder. This requirement, termed here a common knowledge constraint, is satisfied automatically in rate-distortion theory for single sources. However, in the common formulation of problems of lossy source coding with side information at the decoder (the Wyner-Ziv problem), distributed source coding, and joint source-channel coding for networks, the destination can exploit the information it receives in a manner that cannot be exactly reproduced at the sender side. Some applications, like the transmission of sensitive medical information, may require that both sides – the sender and the receiver – share a common version of the compressed data, for the purpose of future discussion or consultation. The purpose of this work is to study the implications of common knowledge constraints on the achievable rates in scenarios of lossy source coding. A single-letter characterization of the rate-distortion function is developed for the problem of source coding with side information at the decoder under a common knowledge constraint. Implications of this constraint on problems of joint source-channel coding for the degraded broadcast channel are studied. Specifically, it is shown that in this setup, a scheme based on separation achieves the optimal distortions. Index terms – Broadcast channel, common knowledge, hierarchical coding, joint source-channel coding, source coding with side information, successive refinement, Wyner-Ziv problem.
Cooperative Strategies for the Half-Duplex Gaussian Parallel Relay Channel: Simultaneous Relaying versus Successive Relaying
, 2008
Abstract

Cited by 1 (0 self)
This study investigates the problem of communication for a network composed of two half-duplex parallel relays with additive white Gaussian noise. Two protocols, i.e., simultaneous and successive relaying, associated with the two possible relay orderings, are proposed. The simultaneous relaying protocol is based on the Dynamic Decode and Forward (DDF) scheme. For the successive relaying protocol, (i) a Non-Cooperative scheme based on Dirty Paper Coding (DPC), and (ii) a Cooperative scheme based on Block Markov Encoding (BME) are considered. Furthermore, the composite scheme of employing BME at one relay and DPC at the other always achieves a better rate than the Cooperative scheme. A “Simultaneous-Successive Relaying based on Dirty paper coding” (SSRD) scheme is also proposed. The optimum ordering of the relays, and hence the capacity of the half-duplex Gaussian parallel relay channel in the low and high signal-to-noise ratio (SNR) scenarios, is derived. In the low SNR scenario, it is revealed that under certain conditions on the channel coefficients, the ratio of the achievable rate of simultaneous relaying based on DDF to the cut-set bound tends to 1. On the other hand, as SNR goes to infinity, it is proved that successive relaying, based on DPC, asymptotically achieves the capacity of the network.
TO CODE OR NOT TO CODE
, 2002
Abstract
Of Swiss nationality and originating from Zurich (ZH) and Lucerne (LU); accepted on the proposal of the jury:
A Likelihood Based Multiple Access for Estimation in Sensor Networks
Abstract
Abstract — In a wireless sensor network, the nodes collect independent observations about a non-random parameter θ to be estimated, and deliver information to a fusion center (FC) by transmitting suitable waveforms through a common Multiple Access Channel (MAC). The FC implements an appropriate fusion rule and outputs the final estimate of θ. We introduce a new access/estimation scheme, referred to here as LBMA (Likelihood Based Multiple Access), and prove it to be asymptotically efficient in the limit of an increasingly large number of sensors n, when the bandwidth used is allowed to scale as W ∼ n^α, 0.5 < α < 1. The proposed approach is easy to implement, and simply relies upon the basic property that the log-likelihood is additive for independent observations, and upon the fact that the (noiseless) output of the MAC is just the sum of its inputs. Thus, the optimal fusion rule is automatically implemented by the MAC itself.
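The additivity the abstract relies on is easy to demonstrate. Below is a hypothetical toy version, not the paper's waveform design: each sensor evaluates its local Gaussian log-likelihood on a grid of candidate θ values, an ideal noiseless MAC adds the transmissions, and the fusion center simply picks the argmax of the summed curve.

```python
import random

random.seed(0)
theta = 1.5   # true (non-random) parameter; value assumed for illustration
n = 200       # number of sensors
obs = [theta + random.gauss(0.0, 1.0) for _ in range(n)]  # unit-variance noise

grid = [0.01 * k for k in range(-100, 400)]  # candidate theta values

# each sensor forms its local Gaussian log-likelihood on the grid ...
local = [[-(x - t) ** 2 / 2.0 for t in grid] for x in obs]

# ... and the noiseless MAC adds the transmitted values, so the sum of
# log-likelihoods (the optimal fusion rule) is computed by the channel itself
mac_out = [sum(sensor[k] for sensor in local) for k in range(len(grid))]

estimate = grid[max(range(len(grid)), key=mac_out.__getitem__)]
print(estimate)  # the grid point nearest the sample mean, i.e. the ML estimate
```

In the scheme described above, the sensors transmit analog waveforms whose superposition plays the role of this sum; the bandwidth scaling W ∼ n^α controls how finely the likelihood information can be conveyed through the channel.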