Results 1–10 of 28
Information-theoretic analysis of information hiding
IEEE Transactions on Information Theory, 2003
Abstract

Cited by 269 (19 self)
Abstract—An information-theoretic analysis of information hiding is presented in this paper, forming the theoretical basis for design of information-hiding systems. Information hiding is an emerging research area which encompasses applications such as copyright protection for digital media, watermarking, fingerprinting, steganography, and data embedding. In these applications, information is hidden within a host data set and is to be reliably communicated to a receiver. The host data set is intentionally corrupted, but in a covert way, designed to be imperceptible to a casual analysis. Next, an attacker may seek to destroy this hidden information, and for this purpose, introduce additional distortion to the data set. Side information (in the form of cryptographic keys and/or information about the host signal) may be available to the information hider and to the decoder. We formalize these notions and evaluate the hiding capacity, which upper-bounds the rates of reliable transmission and quantifies the fundamental tradeoff between three quantities: the achievable information-hiding rates and the allowed distortion levels for the information hider and the attacker. The hiding capacity is the value of a game between the information hider and the attacker. The optimal attack strategy is the solution of a particular rate-distortion problem, and the optimal hiding strategy is the solution to a channel-coding problem. The hiding capacity is derived by extending the Gel’fand–Pinsker theory of communication with side information at the encoder. The extensions include the presence of distortion constraints, side information at the decoder, and unknown communication channel. Explicit formulas for capacity are given in several cases, including Bernoulli and Gaussian problems, as well as the important special case of small distortions. In some cases, including the last two above, the hiding capacity is the same whether or not the decoder knows the host data set.
It is shown that many existing information-hiding systems in the literature operate far below capacity. Index Terms—Channel capacity, cryptography, fingerprinting, game theory, information hiding, network information theory,
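The max-min game described in this abstract is often written in standard Gel’fand–Pinsker notation (assumed here: S the host signal, U an auxiliary random variable, Y the attacked signal); a sketch:

```latex
C(D_1, D_2) \;=\; \max_{Q_{U,X\mid S} \,\in\, \mathcal{Q}(D_1)} \;\; \min_{A_{Y\mid X} \,\in\, \mathcal{A}(D_2)} \; \bigl[\, I(U;Y) - I(U;S) \,\bigr]
```

where \(\mathcal{Q}(D_1)\) and \(\mathcal{A}(D_2)\) denote the covert (hiding) channels and attack channels meeting the respective distortion constraints.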
To Code, or Not to Code: Lossy Source-Channel Communication Revisited
IEEE TRANS. INFORM. THEORY, 2003
Abstract

Cited by 160 (7 self)
What makes a source-channel communication system optimal? It is shown that in order to achieve an optimal cost-distortion tradeoff, the source and the channel have to be matched in a probabilistic sense. The match (or lack of it) involves the source distribution, the distortion measure, the channel conditional distribution, and the channel input cost function. Closed-form necessary and sufficient expressions relating the above entities are given. This generalizes both the separation-based approach as well as the two well-known examples of optimal uncoded communication. The condition of
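The cost-distortion benchmark behind this abstract can be sketched with assumed notation: R(Δ) the source rate-distortion function and C(Γ) the channel capacity-cost function. With one channel use per source symbol, an optimal system attains

```latex
R(\Delta) \;=\; C(\Gamma)
```

i.e., the distortion Δ and input cost Γ actually incurred meet where the rate-distortion and capacity-cost curves intersect; the matching conditions in the abstract characterize when uncoded transmission reaches this point.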
The Wyner–Ziv Problem with Multiple Sources
IEEE Transactions on Information Theory, 2002
Abstract

Cited by 38 (2 self)
We consider the problem of separately compressing multiple sources in a lossy fashion for a decoder that has access to side information. For the case of a single source, this problem has been completely solved by Wyner and Ziv. For the case of two sources, we establish an achievable rate region, an inner bound to the rate region, and a partial converse. The partial converse applies to the case when the sources are conditionally independent given the side information, and it differs significantly from prior art in that it applies also to the symmetric case where all sources are encoded with respect to fidelity criteria. Moreover, we also show that in this special case, there is no difference between the minimum rate needed to encode the sources jointly, and the minimum sum rate needed for separate encoding.
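For reference, the single-source Wyner–Ziv rate-distortion function mentioned in the abstract has the well-known single-letter form (U an auxiliary random variable with U — X — Y a Markov chain, Y the side information):

```latex
R_{WZ}(D) \;=\; \min_{\substack{p(u\mid x),\; \hat{x}(u,y)\,: \\ \mathbb{E}[d(X,\hat{X})] \,\le\, D}} I(X;U\mid Y)
```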
A Framework for Evaluating the Data-Hiding Capacity of Image Sources
IEEE Trans. on Image Processing, 2002
Abstract

Cited by 33 (4 self)
An information-theoretic model for image watermarking and data hiding is presented in this paper. Recent theoretical results are used to characterize the fundamental capacity limits of image watermarking and data-hiding systems. Capacity is determined by the statistical model used for the host image, by the distortion constraints on the data hider and the attacker, and by the information available to the data hider, to the attacker, and to the decoder. We consider autoregressive, block-DCT and wavelet statistical models for images and compute data-hiding capacity for compressed and uncompressed host-image sources. Closed-form expressions are obtained under sparse-model approximations. Models for geometric attacks and distortion measures that are invariant to such attacks are considered.
Universal Fingerprinting: Capacity and Random-Coding Exponents
, 2008
Abstract

Cited by 32 (4 self)
This paper studies fingerprinting games in which the number of colluders and the collusion channel are unknown. The fingerprints are embedded into host sequences (representing signals to be protected) and provide the receiver with the capability to trace back pirated copies to the colluders. The colluders and the fingerprint embedder are subject to signal fidelity constraints. Our problem setup unifies the signal-distortion and Boneh–Shaw formulations of fingerprinting. Several bounds on fingerprinting capacity have been presented in recent literature. This paper derives exact capacity formulas and presents a new randomized fingerprinting scheme with the following properties: (1) the receiver does not need to know the coalition size and collusion channel; (2) a tunable parameter ∆ trades off false-positive and false-negative error exponents; (3) the receiver provides a reliability metric for its decision; and (4) the scheme is capacity-achieving when the false-positive exponent ∆ tends to zero. A fundamental component of this scheme is the use of a “time-sharing” randomized sequence. The decoder is a minimum penalized equivocation decoder, where the significance of each candidate coalition is assessed relative to a threshold, and the penalty is proportional to coalition size. A much simpler threshold decoder that satisfies properties (1)–(3) above but not (4) is also given. Index Terms. Fingerprinting, traitor tracing, watermarking, data hiding, randomized codes, universal codes, method of types, maximum mutual information decoder, minimum equivocation decoder, channel coding with side information, capacity, error exponents, multiple access channels, model order selection.
Perfectly Secure Steganography: Capacity, Error Exponents, and Code Constructions
, 2007
Abstract

Cited by 30 (0 self)
An analysis of steganographic systems subject to the following perfect undetectability condition is presented in this paper. Following embedding of the message into the covertext, the resulting stegotext is required to have exactly the same probability distribution as the covertext. Then no statistical test can reliably detect the presence of the hidden message. We refer to such steganographic schemes as perfectly secure. A few such schemes have been proposed in recent literature, but they have vanishing rate. We prove that communication performance can potentially be vastly improved; specifically, our basic setup assumes independently and identically distributed (i.i.d.) covertext, and we construct perfectly secure steganographic codes from public watermarking codes using binning methods and randomized permutations of the code. The permutation is a secret key shared between encoder and decoder. We derive (positive) capacity and random-coding exponents for perfectly secure steganographic systems. The error exponents provide estimates of the code length required to achieve a target low error probability. In some applications, steganographic communication may be disrupted by an active warden, modelled here by a compound discrete memoryless channel. The transmitter and warden are subject to distortion constraints. We address the potential loss in communication performance due to the perfect-security requirement. This loss is the same as the loss obtained under a weaker order-1 steganographic requirement that would just require matching of first-order
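The perfect-undetectability condition stated at the start of the abstract can be written compactly. With P_C the covertext distribution and P_S the stegotext distribution (notation assumed), perfect security requires

```latex
P_S \;=\; P_C, \qquad \text{equivalently} \qquad D\bigl(P_S \,\big\|\, P_C\bigr) \;=\; 0,
```

which is the zero-divergence special case of Cachin's ε-secure steganography criterion.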
On capacity under received-signal constraints
In Proc. 2004 Allerton Conference, 2004
Abstract

Cited by 17 (0 self)
In a world where different systems have to share the same spectrum, the received (interfering) power may be a more relevant constraint than the maximum transmit power. Motivated by such a spectrum-sharing approach, this paper investigates the behavior of capacity under received-power constraints, modeling for example the maximum interference that one system may inflict on another. The insight of the paper is that while in the point-to-point case, transmit- and received-power constraints are largely equivalent, they can lead to quite different conclusions in network cases, including relay networks, multiple access channels with dependent sources and feedback, and collaborative communication scenarios.
New Results on Steganographic Capacity
, 2004
Abstract

Cited by 12 (2 self)
This paper extends recent results on steganographic capacity. We derive capacity expressions for perfectly secure steganographic systems. The warden may be passive, or active using a memoryless attack channel, or active using an arbitrarily varying channel. Neither encoder nor decoder know which channel was selected by the warden. In some cases, the steganographic constraint does not result in any capacity loss. To achieve steganographic capacity, encoder and decoder generally need to share a secret codebook.
Capacity and random-coding exponents for channel coding with side information
IEEE Trans. Inform. Theory, 2007
Abstract

Cited by 12 (4 self)
Capacity formulas and random-coding exponents are derived for a generalized family of Gel’fand–Pinsker coding problems. These exponents yield asymptotic upper bounds on the achievable log probability of error. In our model, information is to be reliably transmitted through a noisy channel with finite input and output alphabets and random state sequence, and the channel is selected by a hypothetical adversary. Partial information about the state sequence is available to the encoder, adversary, and decoder. The design of the transmitter is subject to a cost constraint. Two families of channels are considered: 1) compound discrete memoryless channels (CDMC), and 2) channels with arbitrary memory, subject to an additive cost constraint, or more generally to a hard constraint on the conditional type of the channel output given the input. Both problems are closely connected. The random-coding exponent is achieved using a stacked binning scheme and a maximum penalized mutual information decoder, which may be thought of as an empirical generalized maximum a posteriori decoder. For channels with arbitrary memory, the random-coding exponents are larger than their CDMC counterparts. Applications of this study include watermarking, data hiding, communication in presence of partially known interferers, and problems such as broadcast channels, all of which involve the fundamental idea of binning. Index terms: channel coding with side information, error exponents, arbitrarily varying channels, universal coding and decoding, randomized codes, MAP decoding, random binning, capacity, reliability function, method of types, watermarking, data hiding, broadcast channels. This research was supported by NSF under ITR grants CCR 0081268 and CCR 0325924.
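The classical Gel’fand–Pinsker capacity that this family of problems generalizes has the well-known single-letter form (S the state known noncausally to the encoder, U auxiliary, X a deterministic function of (U, S)):

```latex
C \;=\; \max_{p(u\mid s),\; x = f(u,s)} \bigl[\, I(U;Y) - I(U;S) \,\bigr]
```

In the generalized setting of the abstract, the maximization is further restricted by the transmitter cost constraint, and a worst case over the adversarially selected channel is taken.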
A Neyman–Pearson Approach to Universal Erasure and List Decoding
Abstract

Cited by 10 (3 self)
Abstract—When information is to be transmitted over an unknown, possibly unreliable channel, an erasure option at the decoder is desirable. Using constant-composition random codes, we propose a generalization of Csiszár and Körner’s maximum mutual information (MMI) decoder with an erasure option for discrete memoryless channels. The new decoder is parameterized by a weighting function that is designed to optimize the fundamental tradeoff between undetected-error and erasure exponents for a compound class of channels. The class of weighting functions may be further enlarged to optimize a similar tradeoff for list decoders—in that case, undetected-error probability is replaced with average number of incorrect messages in the list. Explicit solutions are identified. The optimal exponents admit simple expressions in terms of the sphere-packing exponent, at all rates below capacity. For small erasure exponents, these expressions coincide with those derived by Forney (1968) for symmetric channels, using maximum a posteriori decoding. Thus, for those channels at least, ignorance of the channel law is inconsequential. Conditions for optimality of the Csiszár–Körner rule and of the simpler empirical-mutual-information thresholding rule are identified. The error exponents are evaluated numerically for the binary symmetric channel. Index Terms—Constant-composition codes, erasures, error exponents, list decoding, maximum mutual information (MMI) decoder, method of types, Neyman–Pearson hypothesis testing, random codes, sphere packing, universal decoding.
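Forney's (1968) erasure rule referenced in the abstract decodes message m, rather than erasing, when its likelihood dominates all competitors by an exponential margin; a sketch, with T the threshold parameter and n the block length:

```latex
\text{decide } m \quad \text{iff} \quad \frac{P(\mathbf{y}\mid \mathbf{x}_m)}{\sum_{m' \ne m} P(\mathbf{y}\mid \mathbf{x}_{m'})} \;\ge\; e^{nT}
```

The universal decoder of the abstract plays an analogous role using empirical mutual informations in place of the likelihoods, since the channel law is unknown.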