Results 1-10 of 23
Quantization Index Modulation: A Class of Provably Good Methods for Digital Watermarking and Information Embedding
 IEEE Trans. on Information Theory
, 1999
"... We consider the problem of embedding one signal (e.g., a digital watermark), within another "host" signal to form a third, "composite" signal. The embedding is designed to achieve efficient tradeoffs among the three conflicting goals of maximizing informationembedding rate, mini ..."
Abstract

Cited by 399 (12 self)
 Add to MetaCart
(Show Context)
We consider the problem of embedding one signal (e.g., a digital watermark) within another "host" signal to form a third, "composite" signal. The embedding is designed to achieve efficient tradeoffs among the three conflicting goals of maximizing information-embedding rate, minimizing distortion between the host signal and composite signal, and maximizing the robustness of the embedding. We introduce new classes of embedding methods, termed quantization index modulation (QIM) and distortion-compensated QIM (DC-QIM), and develop convenient realizations in the form of what we refer to as dither modulation. Using deterministic models to evaluate digital watermarking methods, we show that QIM is "provably good" against arbitrary bounded and fully informed attacks, which arise in several copyright applications, and in particular, it achieves provably better rate-distortion-robustness tradeoffs than currently popular spread-spectrum and low-bit(s) modulation methods. Furthermore, we show that for some important classes of probabilistic models, DC-QIM is optimal (capacity-achieving) and regular QIM is near-optimal. These include both additive white Gaussian noise (AWGN) channels, which may be good models for hybrid transmission applications such as digital audio broadcasting, and mean-square-error-constrained attack channels that model private-key watermarking applications.
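The dither-modulation realization of QIM can be illustrated with a toy scalar version. The sketch below is an illustration, not the paper's exact construction: the step size delta, the +/- delta/4 dither offsets, and all function names are illustrative choices. Each message bit selects one of two interleaved uniform quantizers, and the decoder picks the bit whose quantizer lattice lies closer to the received sample.

```python
import numpy as np

def qim_embed(host, bits, delta=1.0):
    """Embed one bit per host sample: each bit selects one of two
    interleaved uniform quantizers (lattices offset by +/- delta/4)."""
    dither = np.where(bits == 1, delta / 4.0, -delta / 4.0)
    return delta * np.round((host + dither) / delta) - dither

def qim_decode(received, delta=1.0):
    """Minimum-distance decoding: choose, per sample, the bit whose
    quantizer lattice is nearer to the received value."""
    zeros = np.zeros(len(received), dtype=int)
    ones = np.ones(len(received), dtype=int)
    dist0 = np.abs(received - qim_embed(received, zeros, delta))
    dist1 = np.abs(received - qim_embed(received, ones, delta))
    return (dist1 < dist0).astype(int)

host = np.array([0.31, -1.27, 2.05, 0.88])
bits = np.array([1, 0, 1, 1])
marked = qim_embed(host, bits)   # embedding distortion is at most delta/2 per sample
noisy = marked + 0.2             # any bounded perturbation below delta/4 is tolerated
recovered = qim_decode(noisy)    # -> array([1, 0, 1, 1])
```

The two lattices are delta/2 apart, so any bounded attack of magnitude below delta/4 cannot flip a bit, which is the scalar shadow of the paper's "provably good against arbitrary bounded attacks" claim.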
The Gaussian Watermarking Game
, 2000
"... Watermarking models a copyright protection mechanism where an original source sequence or "covertext" is modified before distribution to the public in order to embed some extra information. The embedding should be transparent (i.e., the modified data sequence or "stegotext" shoul ..."
Abstract

Cited by 112 (9 self)
 Add to MetaCart
Watermarking models a copyright protection mechanism where an original source sequence or "covertext" is modified before distribution to the public in order to embed some extra information. The embedding should be transparent (i.e., the modified data sequence or "stegotext" should be similar to the covertext) and robust (i.e., the extra information should be recoverable even if the stegotext is modified further, possibly by a malicious "attacker"). We compute the coding capacity of the watermarking game for a Gaussian covertext and squared-error distortions. Both the public version of the game (covertext known to neither attacker nor decoder) and the private version of the game (covertext unknown to attacker but known to decoder) are treated. While the capacity of the former cannot, of course, exceed the capacity of the latter, we show that the two are, in fact, identical. These capacities depend critically on whether the distortion constraints are required to be met in expectation or with probability one. In the former case the coding capacity is zero, whereas in the latter it coincides with the value of related zero-sum dynamic mutual information games of complete and perfect information. Parts of this work were presented at the 2000 Conference on Information Sciences and Systems (CISS '00), Princeton University, Princeton, NJ, March 15-17, 2000, and at the 2000 IEEE International Symposium on Information Theory (ISIT '00), Sorrento, Italy, June 25-30, 2000.
On the Capacity Game of Public Watermarking Systems
 IEEE Trans. on Information Theory
, 2002
"... Watermarking codes are analyzed as a game between two players: an information hider, and a decoder, on the one hand, and an attacker on the other hand. The information hider is allowed to cause some tolerable level of distortion to the original data within which the message is hidden, and the result ..."
Abstract

Cited by 27 (4 self)
 Add to MetaCart
Watermarking codes are analyzed as a game between two players: an information hider and a decoder, on the one hand, and an attacker on the other hand. The information hider is allowed to cause some tolerable level of distortion to the original data within which the message is hidden, and the resulting distorted data can suffer some additional amount of distortion caused by an attacker who aims at erasing the message. Motivated by a worst-case approach, we assume that the attacker is informed of the hiding strategy taken by the information hider and the decoder, while they are uninformed of the attacking scheme. A single-letter expression for the capacity is found under the assumption that the covertext is drawn from a memoryless stationary source and its realization (side information) is available at the encoder only.
Identification in the Presence of Side Information with Application to Watermarking
, 2001
"... Watermarking codes are analyzed from an informationtheoretic viewpoint as identification codes with side information that is available at the transmitter only or at both ends. While the information hider embeds a secret message (watermark) in a covertext message (typically, text, image, sound, or v ..."
Abstract

Cited by 22 (2 self)
 Add to MetaCart
Watermarking codes are analyzed from an information-theoretic viewpoint as identification codes with side information that is available at the transmitter only or at both ends. While the information hider embeds a secret message (watermark) in a covertext message (typically a text, image, sound, or video stream) within a certain distortion level, the attacker, modeled here as a memoryless channel, processes the resulting watermarked message (within limited additional distortion) in an attempt to invalidate the watermark. In most applications of watermarking codes the decoder need not carry out full decoding, as in ordinary coded communication systems, but only test whether a watermark exists at all and, if so, whether it matches a particular hypothesized pattern. This fact motivates us to view the watermarking problem as an identification problem, where the original covertext source serves as side information. In most applications, this side information is available to the encoder only, bu...
Source coding and channel requirements for unstable processes
 IEEE Trans. Inf. Theory, Submitted, 2006. [Online]. Available: http://www.eecs.berkeley.edu/~sahai/Papers/anytime.pdf
"... Our understanding of information in systems has been based on the foundation of memoryless processes. Extensions to stable Markov and autoregressive processes are classical. Berger proved a source coding theorem for the marginally unstable Wiener process, but the infinitehorizon exponentially unst ..."
Abstract

Cited by 14 (10 self)
 Add to MetaCart
(Show Context)
Our understanding of information in systems has been based on the foundation of memoryless processes. Extensions to stable Markov and autoregressive processes are classical. Berger proved a source coding theorem for the marginally unstable Wiener process, but the infinite-horizon exponentially unstable case has been open since Gray’s 1970 paper. There were also no theorems showing what is needed to communicate such processes across noisy channels. In this work, we give a fixed-rate source-coding theorem for the infinite-horizon problem of coding an exponentially unstable Markov process. The encoding naturally results in two distinct bitstreams that have qualitatively different QoS requirements for communicating over a noisy medium. The first stream captures the information that is accumulating within the nonstationary process and requires sufficient anytime reliability from the channel used to communicate the process. The second stream captures the historical information that dissipates within the process and is essentially classical. This historical information can also be identified with a natural stable counterpart to the unstable process. A converse demonstrating the fundamentally layered nature of unstable sources is given by means of information-embedding ideas.
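As a rough numerical illustration of why the exponentially unstable case differs from the stable one, consider a scalar Markov process x_{t+1} = a*x_t + w_t with |a| > 1. The parameter a = 1.5, the horizon, and the tiny initial uncertainty below are illustrative choices, not from the paper: any uncertainty about the initial state is amplified by a factor |a| per step, consistent with the intuition that the "accumulating" stream must carry on the order of log2(a) bits per step.

```python
import numpy as np

rng = np.random.default_rng(0)
a, steps = 1.5, 40                 # |a| > 1: exponentially unstable dynamics
noise = rng.normal(size=steps)     # one shared noise realization

def run(x0):
    """Simulate x_{t+1} = a*x_t + w_t from initial state x0, reusing
    the same noise so trajectories differ only in the initial state."""
    x = x0
    for w in noise:
        x = a * x + w
    return x

gap0 = 1e-6                            # tiny initial uncertainty
gap = abs(run(gap0) - run(0.0))        # uncertainty after `steps` steps
# Because the system is linear, the gap grows as gap0 * a**steps:
# initial-state information accumulates instead of dissipating.
```

Here a micrometer-scale initial ambiguity becomes an order-10 gap after only 40 steps, which is the behavior the paper's first (anytime-reliable) bitstream has to keep up with.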
A relationship between quantization and watermarking rates in the presence of Gaussian attacks, Institute for Systems Research technical
, 2001
"... Abstract—A system which embeds watermarks in ..."
Achievable error exponents for the private fingerprinting game
 IEEE Trans. Information Theory
, 2007
"... Fingerprinting systems in the presence of collusive attacks are analyzed as a game between a fingerprinter and a decoder, on the one hand, and a coalition of two or more attackers, on the other hand. The fingerprinter distributes, to different users, different fingerprinted copies of a host data (co ..."
Abstract

Cited by 11 (4 self)
 Add to MetaCart
(Show Context)
Fingerprinting systems in the presence of collusive attacks are analyzed as a game between a fingerprinter and a decoder, on the one hand, and a coalition of two or more attackers, on the other hand. The fingerprinter distributes, to different users, different fingerprinted copies of a host data (covertext), drawn from a memoryless stationary source, embedded with different fingerprints. The coalition members create a forgery of the data while aiming at erasing the fingerprints in order not to be detected. Their action is modelled by a multiple access channel (MAC). We analyze the performance of two classes of decoders, associated with different kinds of error events. The decoder of the first class aims at detecting the entire coalition, whereas the second is satisfied with the detection of at least one member of the coalition. Both decoders have access to the original covertext data and observe the forgery in order to identify members of the coalition. Motivated by a worst-case approach, we assume that the coalition of attackers is informed of the hiding strategy taken by the fingerprinter and the decoder, while they are uninformed of the attacking scheme. Single-letter expressions for the error exponents of the two kinds are obtained, a decoder that is optimal with respect to the two kinds of errors is introduced, and the worst-case attack channel is characterized.
Authentication with Distortion Criteria
 IEEE Transactions on Information Theory
, 2002
"... In a variety of applications, there is a need to authenticate a source that may have been degraded, transformed, edited, or otherwise modified, either intentionally or unintentionally. We develop a formulation of this problem, and identify and interpret the associated informationtheoretic perform ..."
Abstract

Cited by 10 (0 self)
 Add to MetaCart
In a variety of applications, there is a need to authenticate a source that may have been degraded, transformed, edited, or otherwise modified, either intentionally or unintentionally. We develop a formulation of this problem, and identify and interpret the associated information-theoretic performance limits. The results are illustrated through application to binary sources with Hamming distortion measures, and to Gaussian sources with quadratic distortion measures.
DIGITAL WATERMARKING, FINGERPRINTING AND COMPRESSION: An . . .
, 2002
"... The ease with which digital data can be duplicated and distributed over the media and the Internet has raised many concerns about copyright infringement. In many situations, multimedia data (e.g., images, music, movies, etc) are illegally circulated, thus violating intellectual property rights. In a ..."
Abstract

Cited by 5 (1 self)
 Add to MetaCart
The ease with which digital data can be duplicated and distributed over the media and the Internet has raised many concerns about copyright infringement. In many situations, multimedia data (e.g., images, music, movies, etc.) are illegally circulated, thus violating intellectual property rights. In an attempt to overcome this problem, watermarking has been suggested in the literature as the most effective means for copyright protection and authentication. Watermarking is the procedure whereby information (pertaining to owner and/or copyright) is embedded into host data, such that it is: (i) hidden, i.e., not perceptually visible; and (ii) recoverable, even after a (possibly malicious) degradation of the protected work. In this thesis, we prove some theoretical results that establish the fundamental limits of a general class of watermarking schemes. The main focus of this thesis is the problem of joint watermarking and compression of images, which can be briefly described as follows: due to bandwidth or storage constraints, a watermarked image is distributed in quantized form, using R_Q bits per image dimension, and is subject to some additional degradation (possibly due to malicious attacks). The hidden message carries R_W bits per
Data Hiding Capacity in the Presence of an Imperfectly Known Channel
 SPIE Proceedings of Security and Watermarking of Multimedia Contents II 4314
, 2001
"... We consider a data hiding channel in this paper that is not perfectly known by the encoder and the decoder. The imperfect knowledge could be due to the channel estimation error, timevarying active adversary etc. A mathematical model for this scenario is proposed. Many important attacks such as scal ..."
Abstract

Cited by 5 (4 self)
 Add to MetaCart
(Show Context)
We consider in this paper a data hiding channel that is not perfectly known by the encoder and the decoder. The imperfect knowledge could be due to channel estimation error, a time-varying active adversary, etc. A mathematical model for this scenario is proposed. Many important attacks, such as scaling and geometrical transformations, fall under the proposed mathematical model. Minimal assumptions are made regarding the probability distributions of the data-hiding channel. Lower and upper bounds on the data hiding capacity are derived. It is shown that the popular additive Gaussian noise channel model may not suffice in real-world scenarios; the capacity estimates using the additive Gaussian channel model tend to either over- or underestimate the capacity under different scenarios. The asymptotic value of the capacity as the signal-to-noise ratio becomes arbitrarily large is also given. Many existing data hiding capacity estimates are observed to be special cases of the formulas derived in this paper. We also observe that the proposed mathematical model can be applied to real-life applications such as data hiding in image/video. Theoretical results are further explained using numerical values.
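The point that a purely additive-Gaussian channel model can mislead is easy to demonstrate numerically. The sketch below is not from the paper, and all parameters (the 0.3 scaling factor, the 0.5 threshold, the signal sizes) are hypothetical: a correlation detector whose threshold is calibrated assuming only additive noise misses the watermark once the attacker also applies a mild amplitude scaling, even though the added noise is unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
host = rng.normal(size=n)                  # covertext samples
wm = rng.choice([-1.0, 1.0], size=n)       # +/-1 spreading sequence (the watermark)
marked = host + wm                         # simple additive spread-spectrum embedding

def detect(received):
    """Correlation statistic: about 1.0 under the assumed AWGN-only
    channel, about 0.0 on unwatermarked data."""
    return float(received @ wm) / n

awgn_only = marked + rng.normal(size=n)     # the channel the detector was designed for
scaled = 0.3 * marked + rng.normal(size=n)  # mild amplitude scaling plus the same noise

stat_awgn = detect(awgn_only)      # near 1.0 -> detected with threshold 0.5
stat_scaled = detect(scaled)       # shrinks to roughly 0.3 -> missed by the same threshold
```

The scaling attack falls squarely inside the paper's imperfectly-known-channel model, while a capacity or detector analysis built only on the additive Gaussian assumption never accounts for it.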