Results 11-20 of 31
High-Volume Data Hiding in Images: Introducing Perceptual Criteria into Quantization-Based Embedding
In Proc. ICASSP, 2002
Cited by 15 (6 self)
Abstract: Information-theoretic analyses for data hiding prescribe embedding the hidden data in the choice of quantizer for the host data. In this paper, we consider a suboptimal implementation of this prescription, with a view to hiding high volumes of data in images with low perceptual degradation. Our two main findings are as follows: (a) In order to limit perceptual distortion while hiding large amounts of data, the hiding scheme must use perceptual criteria in addition to information-theoretic guidelines. (b) By ...
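Embedding data "in the choice of quantizer", as this abstract describes, is commonly realized as quantization index modulation (QIM): each bit selects one of two interleaved scalar quantizers. A minimal NumPy sketch under illustrative assumptions (the step size `delta` and sample values are not from the paper, and no perceptual weighting is applied):

```python
import numpy as np

def qim_embed(host, bits, delta=8.0):
    """Quantize each host sample with one of two interleaved scalar
    quantizers: lattice offset 0 for bit 0, offset delta/2 for bit 1."""
    host = np.asarray(host, dtype=float)
    offsets = np.asarray(bits) * delta / 2.0
    return np.round((host - offsets) / delta) * delta + offsets

def qim_extract(received, delta=8.0):
    """Decode each sample by checking which quantizer lattice is closer."""
    received = np.asarray(received, dtype=float)
    d0 = np.abs(received - np.round(received / delta) * delta)
    shifted = received - delta / 2.0
    d1 = np.abs(shifted - np.round(shifted / delta) * delta)
    return (d1 < d0).astype(int)

host = np.array([100.3, 57.8, 201.1, 33.4])   # stand-in pixel intensities
bits = np.array([0, 1, 1, 0])
marked = qim_embed(host, bits)
```

A larger `delta` buys robustness at the cost of distortion; the paper's point is that a perceptually weighted, per-coefficient step size limits visible degradation better than a uniform one.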
Embedding Information in Grayscale Images
Proc. 22nd Symp. Inform. Theory in the Benelux
Cited by 10 (1 self)
Abstract: Grayscale images can be represented as sequences of integer-valued symbols. If such a symbol has alphabet {0, 1, ..., 2^B − 1}, it can be represented by B binary digits. To embed information in these symbols, we are allowed to distort them. The distortion measure that we consider here is mean squared error. In this setup there is a so-called "rate-distortion function" that tells us what the largest embedding rate is, given a certain distortion level. First we determine this rate-distortion function. Next we compare the performance of LSB modulation to this rate-distortion function. Then several embedding codes are proposed: (i) codes that only affect the LSB of a symbol, (ii) so-called ternary codes in which a symbol can be changed by at most one, and (iii) codes that process a pair of symbols each time.
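The LSB-modulation baseline that the abstract compares against the rate-distortion function can be sketched in a few lines of NumPy. The function names and sample values are illustrative, not from the paper; only case (i) above (pure LSB replacement, which changes a pixel by at most 1) is shown:

```python
import numpy as np

def lsb_embed(pixels, bits):
    """Replace each pixel's least-significant bit with a message bit
    (changes the pixel value by at most 1)."""
    pixels = np.asarray(pixels, dtype=np.uint8)
    return (pixels & 0xFE) | np.asarray(bits, dtype=np.uint8)

def lsb_extract(pixels):
    """Read the message back from the least-significant bits."""
    return np.asarray(pixels) & 1

pixels = np.array([12, 255, 0, 101], dtype=np.uint8)  # illustrative values
message = np.array([1, 0, 1, 1], dtype=np.uint8)
stego = lsb_embed(pixels, message)
```

This achieves 1 bit per symbol; since on average half the LSBs already match, the expected squared-error distortion is 0.5 per symbol. The ternary (±1) codes of case (ii) trade the same maximum change for a higher embedding rate.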
Practical Watermarking Scheme Based on Wide Spread Spectrum and Game Theory
Elsevier: Signal Processing: Image Communication, 2002
Cited by 7 (0 self)
Abstract: In this paper, we consider the implementation of a robust watermarking scheme for non-i.i.d. Gaussian signals and distortion based on perceptual metrics. We treat this as a communication problem and formulate it as a game between an attacker and an embedder in order to establish its theoretical performance. We first show that the known parallel Gaussian channels technique does not lead to a valid practical implementation, and then propose a new scheme based on Wide Spread Spectrum and Side Information. The theoretical performance of this scheme is established and shown to be very close to the upper bound on capacity defined by parallel Gaussian channels. A practical implementation of this scheme is then presented, and the influence of the different parameters on performance is discussed. Finally, experimental results for image watermarking are presented and validate the proposed scheme.
Watermarking Based on Duality with Distributed Source Coding and Robust Optimization Principles
In Proc. Int. Conf. on Image Processing, 2000
Cited by 6 (0 self)
Abstract: Inspired by a recently proposed constructive framework for the distributed source coding problem [1], we propose a powerful constructive approach to the watermarking problem, emphasizing the dual roles of distributed source coding with side information at the decoder and channel coding with side information at the encoder. In our framework, we explore various source and channel codes to close the gap on the achievable capacity of watermarking systems [2]. We propose two methods of solution, one based on optimal rate-distortion quantizers and the other based on robust optimization and convex programming. The resulting watermarking schemes, when subjected to additive white Gaussian noise (AWGN) attacks, achieve results which are comparable to or better than the best watermarking schemes in the literature.
A New Perspective for Embedding-Detection Methods with Distortion Compensation and Thresholding ...
In Proceedings of IEEE International Conference on Image Processing (ICIP), 2003
Cited by 5 (1 self)
Abstract: In this paper, we analyze oblivious (blind) information hiding methods from a new perspective. In [1], Costa introduced a communications framework that also applies to oblivious information hiding. We present an alternate and equivalent framework by carrying out the channel-dependent nature of the optimal encoder in a different manner. Within the proposed framework, the decoder structure is simplified, although in effect this is a slender advantage compared to the overall complexity. This interpretation provides a better connection between the analytical results and practical designs. We evaluate practical embedding-detection schemes employing scalar quantization procedures along with thresholding and distortion-compensation types of processing from this perspective. Furthermore, we justify the assumptions for the optimality of the two types of processing.
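The distortion-compensated scalar quantization the abstract evaluates is usually written as moving the host only a fraction α of the way toward the bit-dependent quantizer point (the Scalar Costa Scheme form). A minimal sketch with illustrative parameters (`delta`, `alpha`, and the sample values are not the paper's); the noiseless round trip works because the residual offset (1 − α)·Δ/2 is smaller than the Δ/4 decision threshold:

```python
import numpy as np

def scs_embed(host, bits, delta=8.0, alpha=0.7):
    """Distortion-compensated quantization: quantize to the bit-dependent
    lattice, but move the host only a fraction alpha of the way there."""
    host = np.asarray(host, dtype=float)
    offsets = np.asarray(bits) * delta / 2.0
    q = np.round((host - offsets) / delta) * delta + offsets
    return host + alpha * (q - host)

def scs_extract(received, delta=8.0):
    """Minimum-distance decoding against the two interleaved lattices."""
    received = np.asarray(received, dtype=float)
    d0 = np.abs(received - np.round(received / delta) * delta)
    shifted = received - delta / 2.0
    d1 = np.abs(shifted - np.round(shifted / delta) * delta)
    return (d1 < d0).astype(int)

host = np.array([100.3, 57.8, 201.1, 33.4])
bits = np.array([0, 1, 1, 0])
marked = scs_embed(host, bits)
```

With α = 1 this degenerates to plain QIM; α < 1 trades some noiseless decoding margin for lower embedding distortion, which is the compensation/robustness trade-off the paper analyzes.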
The Data-Hiding Capacity of Image Sources
IEEE Trans. Image Proc., submitted, 2000
Cited by 4 (0 self)
Abstract: An information-theoretic model for image watermarking and data hiding is proposed in this paper. Recent theoretical results are used to characterize the fundamental capacity limits of image watermarking and data-hiding systems. Capacity is determined by the statistical model used for the host image, by the distortion constraints on the data hider and the attacker, and by the information available to the data hider, to the attacker, and to the decoder. We consider autoregressive, block-DCT, and wavelet statistical models for images and compute data-hiding capacity for compressed and uncompressed host-image sources. Closed-form expressions are obtained under sparse-model approximations. Models for geometric attacks and distortion measures that are invariant to such attacks are considered.
Wide Spread Spectrum Watermarking with Side Information and Interference Cancellation
2003
Cited by 3 (0 self)
Abstract: Nowadays, a popular method for additive watermarking is wide spread spectrum. It consists of adding a spread signal to the host document. This signal is obtained as the sum of a set of carrier vectors, which are modulated by the bits to be embedded. To extract these embedded bits, weighted correlations between the watermarked document and the carriers are computed. Unfortunately, even without any attack, the obtained set of bits can be corrupted due to interference with the host signal (host interference) and also due to interference with the other carriers (inter-symbol interference, ISI, caused by the non-orthogonality of the carriers). Some recent watermarking algorithms deal with host interference using side-informed methods, but the inter-symbol interference problem is still open. In this paper, we deal with interference cancellation methods, and we propose to consider ISI as side information and to integrate it into the host signal. This leads to a great improvement in extraction performance in terms of signal-to-noise ratio and/or watermark robustness.
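The spread-spectrum embedder and correlation receiver described above can be sketched as follows. All sizes, seeds, and the strength `gamma` are illustrative assumptions; random carriers are used precisely because they are not exactly orthogonal, which is what produces the ISI the paper targets:

```python
import numpy as np

rng = np.random.default_rng(0)
n, nbits = 256, 8                 # host length and payload (illustrative)

# One unit-norm pseudo-random carrier per bit.
carriers = rng.standard_normal((nbits, n))
carriers /= np.linalg.norm(carriers, axis=1, keepdims=True)

host = 10.0 * rng.standard_normal(n)      # stand-in for the host document
bits = rng.integers(0, 2, nbits)
symbols = 2.0 * bits - 1.0                # map {0, 1} -> {-1, +1}

gamma = 3.0                                # embedding strength
watermarked = host + gamma * (symbols @ carriers)

# Correlation receiver: each statistic carries the wanted term plus
# host interference and ISI leaking from the other, correlated carriers.
stats = watermarked @ carriers.T
decoded = (stats > 0).astype(int)
```

With the host term removed (informed detection), the residual ISI here is small and the bits come back correctly; blind decoding of `stats` can still err because the host term dominates, which is what motivates the side-informed treatment of both interference sources.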
Hexagonal Quantizers Are Not Optimal for 2D Data Hiding
In Proc. of SPIE Vol. 5020, Security and Watermarking of Multimedia Contents IV, 2003
Cited by 2 (0 self)
Abstract: Data hiding using quantization has emerged as an effective way of taking side information at the encoder into account. When quantizing more than one host signal sample, there are two choices: 1) using the Cartesian product of several one-dimensional quantizers, as in the Scalar Costa Scheme (SCS); or 2) performing vector quantization. The second option seems better, as rate-distortion theory affirms that higher-dimensional quantizers yield improved performance due to better sphere-packing properties. Although the embedding problem does resemble that of rate-distortion, no attacks or host signal characteristics are usually considered when designing the quantizer in this way. We show that attacks worsen the performance of the a priori optimal lattice quantizer through a counterexample: the comparison under Gaussian distortion of hexagonal lattice quantization against bidimensional Distortion-Compensated Quantized Projection (DCQP), a data hiding alternative based on quantizing a linear projection of the host signal. Apart from empirical comparisons, theoretical lower bounds on the probability of decoding error of hexagonal lattices under a Gaussian host signal and attack are provided and compared to the already analyzed DCQP method.
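To make the comparison concrete, here is a minimal nearest-point quantizer for the hexagonal (A2) lattice, the "a priori optimal" 2D choice the abstract refers to. The scale `delta` is illustrative, and the brute-force search over nearby integer coordinates is a sketch, not an efficient lattice decoder:

```python
import numpy as np

delta = 4.0                                   # illustrative scale
# Rows are basis vectors of the hexagonal (A2) lattice with
# nearest-neighbour distance delta.
G = delta * np.array([[1.0, 0.0], [0.5, np.sqrt(3.0) / 2.0]])

def hex_quantize(x):
    """Nearest hexagonal-lattice point, found by a small exhaustive
    search around the point's real-valued lattice coordinates."""
    x = np.asarray(x, dtype=float)
    c = np.linalg.solve(G.T, x)               # coordinates c with x = c @ G
    base = np.floor(c)
    best, best_d = None, np.inf
    for di in range(-1, 3):                   # generous neighbourhood
        for dj in range(-1, 3):
            cand = (base + np.array([di, dj])) @ G
            d = np.linalg.norm(cand - x)
            if d < best_d:
                best, best_d = cand, d
    return best
```

The hexagonal lattice's worst-case quantization error is delta/sqrt(3) ≈ 0.577·delta, versus delta/sqrt(2) ≈ 0.707·delta for the Cartesian product of two scalar quantizers with the same minimum distance; that sphere-packing advantage is exactly what the paper shows need not survive once attacks are modeled.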
About the Performance of Practical Dirty Paper Coding Schemes in Gaussian MIMO Broadcast Channels
Abstract: This paper describes a way of implementing DPC in a Gaussian MIMO broadcast channel. The outer encoder is based on a vector TCQ designed to possess certain "good properties". Simpler schemes such as the Tomlinson-Harashima or Scalar Costa scheme are also considered by way of comparison. The inner encoder is implemented through a vector version of the ZF-DPC and the MMSE-DPC. The BER performance of the DPC schemes is evaluated and compared to that of conventional interference cancellers (pre-ZF, pre-MMSE). From simulation results, the choices of the inner encoder, the outer encoder (THS/SCS/TCQ), and the interference cancellation technique (conventional or DPC) are discussed.
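The simplest of the schemes compared here, Tomlinson-Harashima precoding, cancels interference known at the transmitter and uses a modulo to keep transmit power bounded. A minimal scalar sketch with illustrative values (the modulo interval `M`, symbols, and interference are assumptions, and the channel is noiseless):

```python
import numpy as np

def th_precode(symbols, interference, M=4.0):
    """Tomlinson-Harashima-style precoding: subtract the interference
    known at the transmitter, then fold into [-M/2, M/2) with a modulo
    so the transmit power stays bounded."""
    x = np.asarray(symbols, dtype=float) - np.asarray(interference, dtype=float)
    return np.mod(x + M / 2.0, M) - M / 2.0

def th_decode(received, M=4.0):
    """The receiver applies the same modulo; the folding offsets and the
    channel's interference cancel, leaving the original symbols."""
    return np.mod(np.asarray(received, dtype=float) + M / 2.0, M) - M / 2.0

symbols = np.array([1.0, -1.0, 1.0, -1.0])       # e.g. BPSK, inside [-M/2, M/2)
interference = np.array([3.7, -12.2, 0.4, 5.0])  # known at the transmitter only
tx = th_precode(symbols, interference)
rx = tx + interference                            # channel adds the interference
```

Unlike naive pre-subtraction, `tx` stays in [-M/2, M/2) no matter how large the interference is; the modulo loss this incurs is one reason the paper considers the stronger SCS and TCQ outer encoders.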