Results 1–10 of 71
An Information-Theoretic Model for Steganography
, 1998
Cited by 194 (3 self)
Abstract:
An information-theoretic model for steganography with passive adversaries is proposed. The adversary's task of distinguishing between an innocent cover message C and a modified message S containing a secret part is interpreted as a hypothesis testing problem. The security of a steganographic system is quantified in terms of the relative entropy (or discrimination) between P_C and P_S. Several secure steganographic schemes are presented in this model; one of them is a universal information hiding scheme based on universal data compression techniques that requires no knowledge of the covertext statistics.
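The relative-entropy security measure lends itself to a quick numerical check. The sketch below computes D(P_C || P_S) for a pair of hypothetical cover and stego distributions; the distributions are illustrative assumptions, not data from the paper:

```python
import math

def relative_entropy(p, q):
    """D(P||Q) in bits: sum over x of P(x) * log2(P(x)/Q(x))."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical cover and stego distributions over a 4-letter alphabet
# (illustrative numbers, not from the paper):
p_cover = [0.40, 0.30, 0.20, 0.10]
p_stego = [0.38, 0.32, 0.19, 0.11]  # slight shift caused by embedding

d = relative_entropy(p_cover, p_stego)
# d == 0 would mean the stegosystem is perfectly secure; a small d bounds
# how well any hypothesis test can distinguish cover from stego traffic.
```

A stegosystem with D(P_C || P_S) = 0 is perfectly secure in this model, since no statistical test can then outperform guessing.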
Optimal distributed detection strategies for wireless sensor networks
 in 42nd Annual Allerton Conf. on Commun., Control and Comp
, 2004
Cited by 35 (3 self)
Abstract:
We study optimal distributed detection strategies for wireless sensor networks under the assumption of spatially and temporally i.i.d. observations at the sensor nodes. Each node computes a local statistic and communicates it to a decision center over a noisy channel. The performance of centralized detection (noise-free channel) serves as a benchmark. We address the following fundamental question: under what network resource constraints can distributed detection achieve the same error exponent as centralized detection? Two types of constraints are considered: 1) transmission power constraints at the nodes, and 2) the communication channel between the nodes and the decision center. Two types of channels are studied: 1) a parallel access channel (PAC) consisting of dedicated AWGN channels between the nodes and the decision center, and 2) an AWGN multiple access channel (MAC). We show that for intelligent sensors (with knowledge of observation statistics), analog communication of local likelihood ratios (soft decisions) over the MAC is asymptotically optimal (for a large number of nodes) when each node can communicate with a constant power. Motivated by this result, we propose an optimal distributed detection strategy for dumb sensors (oblivious of observation statistics) based on the method of types. In this strategy, each node appropriately quantizes its temporal observation data and communicates its type, or histogram, to the decision center. It is shown that type-based distributed detection over the MAC is also asymptotically optimal, with an additional advantage: observation statistics are needed only at the decision center. Even under the more stringent total power constraint, it is shown that both soft-decision and type fusion result in exponentially decaying error probability.
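The type-communication step for the dumb-sensor strategy can be sketched in a few lines. The quantizer grid and the Gaussian observations below are illustrative assumptions, not parameters from the paper:

```python
import random
from collections import Counter

def empirical_type(samples, num_bins, lo, hi):
    """Quantize a node's temporal observations into num_bins levels on
    [lo, hi] and return the normalized histogram: the 'type' of the
    quantized sequence, clamping out-of-range samples to the edge bins."""
    width = (hi - lo) / num_bins
    counts = Counter(
        max(0, min(int((x - lo) / width), num_bins - 1)) for x in samples
    )
    n = len(samples)
    return [counts[b] / n for b in range(num_bins)]

random.seed(0)
# Hypothetical i.i.d. observations at one "dumb" sensor node:
obs = [random.gauss(0.5, 1.0) for _ in range(1000)]
t = empirical_type(obs, num_bins=8, lo=-4.0, hi=4.0)
# The node transmits only these 8 numbers; per the abstract, observation
# statistics are needed only at the decision center, which applies the
# detection rule to the received types.
```

The point of the scheme is that the type is a sufficient summary for detection in the i.i.d. setting, so nothing is lost asymptotically by sending the histogram instead of the raw data.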
Random Codes: Minimum Distances and Error Exponents
 IEEE Trans. Inform. Theory
, 2002
Cited by 31 (2 self)
Abstract:
Minimum distances, distance distributions, and error exponents on a binary-symmetric channel (BSC) are given for typical codes from Shannon’s random code ensemble and for typical codes from a random linear code ensemble. A typical random code of length n and rate R is shown to have minimum distance n·δ_GV(2R), where δ_GV(R) is the Gilbert–Varshamov (GV) relative distance at rate R, whereas a typical linear code (TLC) has minimum distance n·δ_GV(R). Consequently, a TLC has a better error exponent on a BSC at low rates, namely, the expurgated error exponent. Index Terms—Distance distributions, exponential error bounds, minimum distance, random codes, random linear codes, typical linear codes, typical random codes.
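The Gilbert–Varshamov relative distance δ_GV(R) invoked in this abstract is the root of h(δ) = 1 − R on [0, 1/2], where h is the binary entropy function; it can be evaluated numerically. A small sketch (bisection; my own illustration, not code from the paper):

```python
import math

def h2(x):
    """Binary entropy function in bits."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def gv_distance(rate):
    """delta_GV(rate): the delta in [0, 1/2] solving h2(delta) = 1 - rate,
    located by bisection (h2 is increasing on [0, 1/2])."""
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2
        if h2(mid) < 1.0 - rate:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# For 0 < R < 1/2, delta_GV(2R) < delta_GV(R): a typical code from
# Shannon's ensemble has smaller normalized minimum distance than a
# typical linear code at the same rate.
print(gv_distance(0.2), gv_distance(0.1))
```

Since h is strictly increasing on [0, 1/2], doubling the rate argument shrinks the solution, which is exactly the typical-random-code vs. typical-linear-code gap the abstract describes.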
Low-complexity approaches to Slepian–Wolf near-lossless distributed data compression
 IEEE Trans. Inform. Theory
, 2006
Cited by 23 (6 self)
Abstract:
This paper discusses the Slepian–Wolf problem of distributed near-lossless compression of correlated sources. We introduce practical new tools for communicating at all rates in the achievable region. The technique employs a simple “source-splitting” strategy that does not require common sources of randomness at the encoders and decoders. This approach allows for pipelined encoding and decoding, so that the system operates with the complexity of a single-user encoder and decoder. Moreover, when this splitting approach is used in conjunction with iterative decoding methods, it produces a significant simplification of the decoding process. We demonstrate this approach for synthetically generated data. Finally, we consider the Slepian–Wolf problem when linear codes are used as syndrome-formers and consider a linear programming relaxation to maximum-likelihood (ML) sequence decoding. We note that the fractional vertices of the relaxed polytope compete with the optimal solution in a manner analogous to that observed when the “min-sum” iterative decoding algorithm is applied. This relaxation exhibits the ML-certificate property: if an integral solution is found, it is the ML solution. For symmetric binary joint distributions, we show that selecting easily constructible “expander”-style low-density parity-check (LDPC) codes as syndrome-formers admits a positive error exponent and therefore provably good performance.
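A toy illustration of the syndrome-former idea: compress a source sequence to its syndrome under a linear code, and let the decoder recover it from correlated side information. This sketch uses the [7,4] Hamming code and brute-force minimum-distance decoding as a stand-in for the paper's LDPC codes and LP relaxation:

```python
from itertools import product

# Parity-check matrix of the [7,4] Hamming code, used here as a toy
# syndrome-former (an illustrative choice, not the paper's construction).
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(x):
    """Compress x to its 3-bit syndrome s = H x (mod 2)."""
    return tuple(sum(h * xi for h, xi in zip(row, x)) % 2 for row in H)

def decode(s, y):
    """Recover x from syndrome s and side information y: brute-force
    search for the sequence with syndrome s closest to y in Hamming
    distance (exhaustive here; LDPC decoders do this approximately)."""
    best = None
    for cand in product((0, 1), repeat=7):
        if syndrome(cand) == s:
            d = sum(a != b for a, b in zip(cand, y))
            if best is None or d < best[0]:
                best = (d, cand)
    return best[1]

x = (1, 0, 1, 1, 0, 0, 1)          # source sequence (7 bits)
y = (1, 0, 1, 1, 0, 1, 1)          # correlated side information (1 flip)
assert decode(syndrome(x), y) == x  # recovered from only 3 transmitted bits
```

Because the Hamming code has minimum distance 3, recovery is exact whenever x and y differ in at most one position, even though only 3 of the 7 source bits are transmitted.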
On universal types
 Proc. ISIT 2004
, 2004
Cited by 22 (6 self)
Abstract:
We define the universal type class of a sequence x^n, in analogy to the notion used in the classical method of types. Two sequences of the same length are said to be of the same universal (LZ) type if and only if they yield the same set of phrases in the incremental parsing of Ziv and Lempel (1978). We show that the empirical probability distributions of any finite order of two sequences of the same universal type converge, in the variational sense, as the sequence length increases. Consequently, the normalized logarithms of the probabilities assigned by any k-th order probability assignment to two sequences of the same universal type, as well as the k-th order empirical entropies of the sequences, converge for all k. We study the size of a universal type class, and show that its asymptotic behavior parallels that of the conventional counterpart, with the LZ78 code length playing the role of the empirical entropy. We also estimate the number of universal types for sequences of length n, and show that it is of the form exp((1 + o(1)) γn/log n) for a well-characterized constant γ. We describe algorithms for enumerating the sequences in a universal type class, and for drawing a sequence from the class with uniform probability. As an application, we consider the problem of universal simulation of individual sequences. A sequence drawn with uniform probability from the universal type class of x^n is an optimal simulation of x^n in a well-defined mathematical sense.
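The incremental-parsing phrase set that defines the universal type can be sketched in a few lines. This is a simplified illustration (a trailing incomplete phrase is simply discarded), not the paper's exact formalization:

```python
def lz78_phrases(seq):
    """LZ78 incremental parsing: scan left to right, extending the current
    phrase until it is new, then record it and restart. Returns the set of
    complete phrases; same set + same length = same universal (LZ) type."""
    phrases = set()
    cur = ""
    for ch in seq:
        cur += ch
        if cur not in phrases:
            phrases.add(cur)
            cur = ""  # any trailing incomplete phrase is ignored here
    return phrases

# "abba" parses as a|b|ba and "baba" as b|a|ba: different sequences,
# identical phrase sets, hence the same universal type.
print(lz78_phrases("abba"), lz78_phrases("baba"))
```

This pair gives a concrete instance of the abstract's claim: two distinct equal-length sequences in one universal type class, either of which could serve as a "simulation" of the other.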
Mismatched decoding revisited: general alphabets, channels with memory, and the wideband limit
 IEEE Trans. Inform. Theory
, 2000
Cited by 22 (0 self)
Abstract:
The mismatch capacity of a channel is the highest rate at which reliable communication is possible over the channel with a given (possibly suboptimal) decoding rule. This quantity has been studied extensively for single-letter decoding rules over discrete memoryless channels (DMCs). Here we extend the study to memoryless channels with general alphabets and to channels with memory with possibly non-single-letter decoding rules. We also study the wideband limit and, in particular, the mismatch capacity per unit cost, and the achievable rates on an additive-noise spread-spectrum system with single-letter decoding and binary signaling. Index Terms—Capacity per unit cost, channels with memory, general alphabets, mismatched decoding, nearest-neighbor decoding, spread spectrum.
Universal Fingerprinting: Capacity and Random-Coding Exponents
, 2008
Cited by 20 (3 self)
Abstract:
This paper studies fingerprinting games in which the number of colluders and the collusion channel are unknown. The fingerprints are embedded into host sequences (representing signals to be protected) and provide the receiver with the capability to trace back pirated copies to the colluders. The colluders and the fingerprint embedder are subject to signal fidelity constraints. Our problem setup unifies the signal-distortion and Boneh–Shaw formulations of fingerprinting. Several bounds on fingerprinting capacity have been presented in recent literature. This paper derives exact capacity formulas and presents a new randomized fingerprinting scheme with the following properties: (1) the receiver does not need to know the coalition size and collusion channel; (2) a tunable parameter ∆ trades off false-positive and false-negative error exponents; (3) the receiver provides a reliability metric for its decision; and (4) the scheme is capacity-achieving when the false-positive exponent ∆ tends to zero. A fundamental component of this scheme is the use of a “time-sharing” randomized sequence. The decoder is a minimum penalized equivocation decoder, where the significance of each candidate coalition is assessed relative to a threshold, and the penalty is proportional to coalition size. A much simpler threshold decoder that satisfies properties (1)–(3) above but not (4) is also given. Index Terms—Fingerprinting, traitor tracing, watermarking, data hiding, randomized codes, universal codes, method of types, maximum mutual information decoder, minimum equivocation decoder, channel coding with side information, capacity, error exponents, multiple access channels, model order selection.
Perfectly Secure Steganography: Capacity, Error Exponents, and Code Constructions
, 2007
Cited by 16 (0 self)
Abstract:
An analysis of steganographic systems subject to the following perfect undetectability condition is presented in this paper. Following embedding of the message into the covertext, the resulting stegotext is required to have exactly the same probability distribution as the covertext. Then no statistical test can reliably detect the presence of the hidden message. We refer to such steganographic schemes as perfectly secure. A few such schemes have been proposed in recent literature, but they have vanishing rate. We prove that communication performance can potentially be vastly improved; specifically, our basic setup assumes independently and identically distributed (i.i.d.) covertext, and we construct perfectly secure steganographic codes from public watermarking codes using binning methods and randomized permutations of the code. The permutation is a secret key shared between encoder and decoder. We derive (positive) capacity and random-coding exponents for perfectly secure steganographic systems. The error exponents provide estimates of the code length required to achieve a target low error probability. In some applications, steganographic communication may be disrupted by an active warden, modelled here by a compound discrete memoryless channel. The transmitter and warden are subject to distortion constraints. We address the potential loss in communication performance due to the perfect-security requirement. This loss is the same as the loss obtained under a weaker order-1 steganographic requirement that would just require matching of first-order …
Information rates achievable with algebraic codes on quantum discrete memoryless channels
 IEEE Trans. Information Theory
, 2005
Cited by 15 (7 self)
Abstract:
The highest information rate at which quantum error-correction schemes work reliably on a channel, which is called the quantum capacity, is proven to be lower-bounded by the limit of the quantity termed coherent information, maximized over the set of input density operators which are proportional to the projections onto the code spaces of symplectic stabilizer codes. Quantum channels to be considered are those subject to independent errors and modeled as tensor products of copies of a completely positive linear map on a Hilbert space of finite dimension, and the codes that are proven to have the desired performance are symplectic stabilizer codes. On the depolarizing channel, this work’s bound is actually the highest possible rate at which symplectic stabilizer codes work reliably.
Collaborative sensing in a retail store using synchronous distributed jam signalling
 3rd International Conference on Pervasive Computing, volume 3468 of Lecture Notes in Computer Science
, 2005
Cited by 11 (2 self)
Abstract:
The retail store environment is a challenging application area for Pervasive Computing technologies. It has demanding base conditions due to the number and complexity of the interdependent processes involved. We present first results of an ongoing study with dm-drogerie markt, a large chemist’s retailer, which indicate that supporting product-monitoring tasks with novel pervasive technology is useful but still needs technical advances. Based on this study, we uncover problems that occur when using identification technology (such as RFID) for product monitoring. Individual identification struggles with data overload and inefficient channel access due to the high number of tags involved. We address these problems with the concept of Radio Channel Computing, combining approaches from information theory such as the method of types and multiple-access adder channels. We realise data preprocessing on the physical layer and significantly improve response time and scalability. With mathematical formulation, simulations, and a real-world implementation, we evaluate and demonstrate the usefulness of the proposed system.
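The adder-channel idea behind Radio Channel Computing can be sketched in a few lines: each tag signals in the time slot indexed by its quantized value, and simultaneous transmissions superimpose, so the reader obtains the value histogram (the type) in a single pass. This assumes an idealized noiseless adder channel and an invented slot layout, not the paper's actual protocol:

```python
def adder_channel_type(tag_values, num_slots):
    """Each tag jams in the slot for its (quantized) value; superimposed
    transmissions add on the shared channel, so the per-slot received
    amplitude is the count of tags with that value: the type, computed
    on the physical layer without reading any tag individually."""
    slots = [0] * num_slots
    for v in tag_values:
        slots[v] += 1  # superposition on the multiple-access adder channel
    return slots

# Four hypothetical tags reporting quantized sensor readings:
reading = adder_channel_type([0, 2, 2, 5], num_slots=8)
# One channel pass yields the full histogram, so response time no longer
# grows with the number of tags contending for individual identification.
```

The design point is that the histogram, not the per-tag identity, is what the monitoring task needs, so letting the channel itself compute it sidesteps both data overload and channel contention.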