Results 1 - 10 of 138
Anti-Collusion Fingerprinting for Multimedia
- IEEE Transactions on Signal Processing
, 2003
"... Digital fingerprinting is a technique for identifying users who might try to use multimedia content for unintended purposes, such as redistribution. These fingerprints are typically embedded into the content using watermarking techniques that are designed to be robust to a variety of attacks. A cost ..."
Abstract - Cited by 106 (28 self)
Digital fingerprinting is a technique for identifying users who might try to use multimedia content for unintended purposes, such as redistribution. These fingerprints are typically embedded into the content using watermarking techniques that are designed to be robust to a variety of attacks. A cost-effective attack against such digital fingerprints is collusion, where several differently marked copies of the same content are combined to disrupt the underlying fingerprints. In this paper, we investigate the problem of designing fingerprints that can withstand collusion and allow for the identification of colluders. We begin by introducing the collusion problem for additive embedding. We then study the effect that averaging collusion has upon orthogonal modulation. We introduce an efficient detection algorithm for identifying the fingerprints associated with K colluders that requires O(K log(n/K)) correlations for a group of n users. We next develop a fingerprinting scheme based upon code modulation that does not require as many basis signals as orthogonal modulation. We propose a new class of codes, called anti-collusion codes (ACC), which have the property that the composition of any subset of K or fewer codevectors is unique. Using this property, we can therefore identify groups of K or fewer colluders. We present a construction of binary-valued ACC under the logical AND operation that uses the theory of combinatorial designs and is suitable for both the on-off keying and antipodal form of binary code modulation. In order to accommodate n users, our code construction requires only O(√n) orthogonal signals for a given number of colluders. We introduce four different detection strategies that can be used with our ACC for identifying a suspect set of colluders. We demonstrate th...
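As a rough illustration of the averaging-collusion setting described above (not the paper's ACC construction), the sketch below embeds near-orthogonal fingerprints additively, averages a few marked copies, and identifies the colluders by correlation; all names and parameters are illustrative assumptions.

```python
# Hedged sketch: averaging collusion against (near-)orthogonal fingerprints and
# correlation-based colluder identification. Names and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_users, dim, K = 8, 1024, 3                    # users, signal length, colluders

host = rng.normal(0.0, 10.0, dim)               # host signal (e.g., image features)
W = rng.normal(0.0, 1.0, (n_users, dim))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # unit-norm, nearly orthogonal fingerprints

copies = host + W                               # one additively fingerprinted copy per user
colluders = [1, 4, 6]
forgery = copies[colluders].mean(axis=0)        # averaging collusion

# Detection (host known at the detector): correlate the residual with every
# fingerprint; colluders score near 1/K, innocent users near 0.
scores = W @ (forgery - host)
suspects = sorted(np.argsort(scores)[-K:].tolist())
print(suspects)                                 # ideally recovers [1, 4, 6]
```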
Performance Analysis of Existing and New Methods for Data Hiding with Known-Host Information in Additive Channels
- IEEE TRANSACTIONS ON SIGNAL PROCESSING, SPECIAL ISSUE ON SIGNAL PROCESSING FOR DATA HIDING IN DIGITAL MEDIA AND SECURE CONTENT DELIVERY
, 2002
"... A considerable amount of attention has been lately payed to a number of data hiding methods based in quantization, seeking to achieve in practice the results predicted by Costa for a channel with side information at the encoder. With the objective of filling a gap in the literature, this paper suppl ..."
Abstract - Cited by 63 (15 self)
A considerable amount of attention has lately been paid to a number of data hiding methods based on quantization, seeking to achieve in practice the results predicted by Costa for a channel with side information at the encoder. With the objective of filling a gap in the literature, this paper supplies a fair comparison between significant representatives of both this family of methods and the former spread-spectrum approaches that make use of near-optimal ML decoding; the comparison is based on measuring their probabilities of decoding error in the presence of channel distortions. Accurate analytical expressions and tight bounds for the probability of decoding error are given and validated by means of Monte Carlo simulations. For Dithered Modulation (DM), a novel technique that allows tighter bounds on the probability of error to be obtained is presented. Within the new framework, the strong points and weaknesses of both methods are distinctly displayed. This comparative study allows us to propose a new technique named "Quantized Projection" (QP), which, by adequately combining elements of those previous approaches, produces gains in performance.
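For readers unfamiliar with the quantization-based family compared above, here is a minimal sketch of scalar dither modulation (DM/QIM) embedding with minimum-distance decoding under additive noise; the step size, dither handling, and noise level are illustrative assumptions rather than the paper's exact setup.

```python
# Hedged sketch of scalar dither modulation (DM/QIM): embed one bit per host
# sample by quantizing to a bit-dependent shifted lattice, decode by minimum
# distance. Step size, dither, and noise level are illustrative assumptions.
import numpy as np

def dm_embed(x, bits, delta, dither):
    # Quantize each sample to the lattice shifted by bit*delta/2 + dither.
    offsets = bits * (delta / 2.0) + dither
    return np.round((x - offsets) / delta) * delta + offsets

def dm_decode(y, delta, dither):
    # Choose the bit whose shifted lattice has a point closest to the sample.
    d0 = np.abs(y - dm_embed(y, np.zeros(y.size, dtype=int), delta, dither))
    d1 = np.abs(y - dm_embed(y, np.ones(y.size, dtype=int), delta, dither))
    return (d1 < d0).astype(int)

rng = np.random.default_rng(1)
x = rng.normal(0.0, 5.0, 1000)                   # host samples
bits = rng.integers(0, 2, x.size)                # one message bit per sample
delta = 1.0
dither = rng.uniform(-delta / 2, delta / 2, x.size)

y = dm_embed(x, bits, delta, dither)             # watermarked signal
y_noisy = y + rng.normal(0.0, 0.1, x.size)       # additive channel distortion
print("bit error rate:", np.mean(dm_decode(y_noisy, delta, dither) != bits))
```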
Hiding digital watermarks using multiresolution wavelet transform
- IEEE Trans. Signal Proc
, 2001
"... Abstract—In this paper, an image accreditation technique by embedding digital watermarks in images is proposed. The proposed method for the digital watermarking is based on the wavelet transform. This is unlike most previous work, which used a random number of a sequence of bits as a watermark and w ..."
Abstract - Cited by 52 (3 self)
In this paper, an image accreditation technique that embeds digital watermarks in images is proposed. The proposed method for digital watermarking is based on the wavelet transform. This is unlike most previous work, which used a random sequence of bits as a watermark that can only be detected by comparison against an experimental threshold to decide whether a sequence of random signals is the watermark. The proposed approach embeds a watermark with visually recognizable patterns, such as a binary, gray, or color image, into images by modifying their frequency content. In the proposed approach, an original image is decomposed into wavelet coefficients. Then, a multi-energy watermarking scheme based on the qualified significant wavelet tree (QSWT) is used to achieve robustness of the watermarking. Unlike other watermarking techniques that use a single casting energy, QSWT adopts adaptive casting energies in different resolutions. The performance of the proposed watermarking is robust to a variety of signal distortions, such as JPEG compression, image cropping, sharpening, median filtering, and incorporating attacks. Index Terms—Digital watermark, discrete wavelet transform, image processing, JPEG compression, qualified significant wavelet tree.
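A minimal sketch of the general DWT-domain embedding idea follows, using PyWavelets; it adds a binary pattern to one detail subband with a fixed casting energy and does not reproduce the paper's QSWT tree selection or adaptive energies.

```python
# Hedged sketch: additive embedding of a binary pattern into a wavelet detail
# subband with PyWavelets. Fixed casting energy; the paper's QSWT selection
# and adaptive energies are not reproduced. All parameters are illustrative.
import numpy as np
import pywt

rng = np.random.default_rng(2)
image = rng.uniform(0, 255, (256, 256))           # stand-in for a grayscale image
mark = rng.integers(0, 2, (128, 128)) * 2 - 1     # +/-1 pattern (a logo in practice)

cA, (cH, cV, cD) = pywt.dwt2(image, "haar")       # one-level decomposition
alpha = 2.0                                       # casting energy
watermarked = pywt.idwt2((cA, (cH + alpha * mark, cV, cD)), "haar")

# Non-blind extraction: compare the horizontal subbands of the two images.
cH_rx = pywt.dwt2(watermarked, "haar")[1][0]
recovered = np.sign(cH_rx - cH)
print("bit agreement:", np.mean(recovered == mark))
```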
Attack Modelling: Towards a Second Generation Watermarking Benchmark
- Signal Processing, Special Issue on Information Theoretic Issues in Digital Watermarking
, 2001
"... Digital image watermarking techniques for copyright protection have become increasingly robust. The best algorithms perform well against the now standard benchmark tests included in the Stirmark package. However the stirmark tests are limited since in general they do not properly model the watermark ..."
Abstract - Cited by 47 (4 self)
Digital image watermarking techniques for copyright protection have become increasingly robust. The best algorithms perform well against the now-standard benchmark tests included in the Stirmark package. However, the Stirmark tests are limited since, in general, they do not properly model the watermarking process and are consequently limited in their potential to remove the best watermarks. Here we propose a stochastic formulation of watermarking attacks based on an estimation concept. The proposed attacks consist of two main stages: (a) estimation of the watermark or the cover data; (b) modification of the stego data aimed at disrupting watermark detection and copyright resolution, taking into account the statistics of the embedded watermark and exploiting features of the human visual system.
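The two-stage attack structure described above can be sketched as follows: the cover is estimated by denoising the stego image, the residual is taken as a watermark estimate, and a scaled copy of it is subtracted (remodulation). The filter choice, strengths, and toy data are assumptions, not the benchmark's actual attack set.

```python
# Hedged sketch of an estimation-based removal attack: denoise the stego image
# to estimate the cover, take the residual as the watermark estimate, subtract
# a scaled copy of it (remodulation). Filter and strengths are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def estimation_attack(stego, sigma=1.5, gamma=1.0):
    cover_est = gaussian_filter(stego, sigma)     # crude cover (denoising) estimate
    wm_est = stego - cover_est                    # watermark-plus-texture estimate
    return stego - gamma * wm_est                 # remove the estimated watermark

rng = np.random.default_rng(3)
cover = gaussian_filter(rng.uniform(0, 255, (256, 256)), 3.0)   # smooth toy "image"
stego = cover + rng.normal(0.0, 2.0, cover.shape)               # additive watermark

attacked = estimation_attack(stego)
print("residual watermark energy before/after:",
      np.mean((stego - cover) ** 2), np.mean((attacked - cover) ** 2))
```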
A Novel Blind Multiple Watermarking Technique for Images
- IEEE Transactions on Circuits and Systems for Video Technology: Special Issue on Authentication, Copyright Protection and Information Hiding
, 2003
"... Three novel blind watermarking techniques are proposed to embed watermarks into digital images for different purposes. The watermarks are designed to be decoded or detected without the original images. The first one, called single watermark embedding (SWE), is used to embed a watermark bit sequence ..."
Abstract - Cited by 31 (0 self)
Three novel blind watermarking techniques are proposed to embed watermarks into digital images for different purposes. The watermarks are designed to be decoded or detected without the original images. The first one, called single watermark embedding (SWE), is used to embed a watermark bit sequence into digital images using two secret keys. The second technique, called multiple watermark embedding (MWE), extends SWE to embed multiple watermarks simultaneously in the same watermark space while minimizing the watermark (distortion) energy. The third technique, called iterative watermark embedding (IWE), embeds watermarks into JPEG-compressed images. The iterative approach of IWE can prevent the potential removal of a watermark in the JPEG recompression process. Experimental results show that watermarks embedded using the proposed techniques preserve good image quality and are robust in varying degrees to JPEG compression, low-pass filtering, noise contamination, and print-and-scan.
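As a loose illustration of blind (original-free) detection with a key-seeded pseudo-random sequence, a generic spread-spectrum sketch is given below; it is not the paper's SWE/MWE/IWE construction, and all parameters are assumptions.

```python
# Hedged sketch of blind correlation detection with a key-seeded pseudo-random
# sequence; a generic spread-spectrum illustration, not the paper's SWE/MWE/IWE.
import numpy as np

def pn_sequence(key, length):
    return np.random.default_rng(key).choice([-1.0, 1.0], size=length)

def embed(host, key, strength=1.0):
    return host + strength * pn_sequence(key, host.size)

def detect(received, key, threshold=0.5):
    stat = received @ pn_sequence(key, received.size) / received.size
    return stat > threshold                       # blind: no original needed

rng = np.random.default_rng(4)
host = rng.normal(0.0, 10.0, 4096)                # e.g., mid-frequency coefficients
marked = embed(host, key=1234)

print(detect(marked, key=1234))                   # True: correct key
print(detect(marked, key=9999))                   # False: wrong key
print(detect(host, key=1234))                     # False: unmarked content
```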
Communication and information theory in watermarking: A survey
- SPIE Multimedia Systems and Applications IV
, 2001
"... This paper presents a review of some influential work in the area of digital watermarking using communications and information-theoretic analysis. After a brief introduction, some popular approaches are classified into different groups and an overview of various algorithms and analysis is provided. ..."
Abstract - Cited by 23 (0 self)
This paper presents a review of some influential work in the area of digital watermarking using communications and information-theoretic analysis. After a brief introduction, some popular approaches are classified into different groups and an overview of various algorithms and analysis is provided. Insights and potential future trends in the area of watermarking theory are discussed.
Optimum decoding and detection of multiplicative watermarks
- IEEE Transactions on Signal Processing
"... Abstract—This work addresses the problem of optimum decoding and detection of a multibit, multiplicative watermark hosted by Weibull-dis-tributed features: a situation which is classically encountered for image watermarking in the magnitude-of-DFT domain. As such, this work can be seen as an extensi ..."
Abstract - Cited by 21 (0 self)
This work addresses the problem of optimum decoding and detection of a multibit, multiplicative watermark hosted by Weibull-distributed features: a situation which is classically encountered for image watermarking in the magnitude-of-DFT domain. As such, this work can be seen as an extension of the system described in a previous paper, where the same problem is addressed for the case of 1-bit watermarking. The theoretical analysis is validated through Monte Carlo simulations. Although the structure of the optimum decoder/detector is derived in the absence of attacks, some experimental results are also presented, giving a measure of the overall robustness of the watermark when attacks are present. Index Terms—Multibit watermarking, multiplicative watermarking, optimum decoding, watermark presence assessment.
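A minimal sketch of maximum-likelihood decoding of one bit of a multiplicative watermark hosted by Weibull-distributed features follows; the shape/scale parameters, embedding strength, and chip sequence are illustrative assumptions, not the paper's exact model.

```python
# Hedged sketch: ML decoding of one bit of a multiplicative watermark hosted by
# Weibull-distributed features (e.g., DFT magnitudes). Shape, scale, strength,
# and chip sequence are illustrative assumptions.
import numpy as np

beta, alpha, gamma = 1.5, 1.0, 0.1                # Weibull shape/scale, strength

def log_weibull(x):
    # log pdf of a Weibull(beta, alpha) variable, valid for x > 0
    return np.log(beta / alpha) + (beta - 1) * np.log(x / alpha) - (x / alpha) ** beta

def loglik(y, b):
    # y = x * (1 + gamma*b*w)  =>  x = y / scale, with a Jacobian term log(scale)
    scale = 1.0 + gamma * b * w
    return np.sum(log_weibull(y / scale) - np.log(scale))

rng = np.random.default_rng(5)
x = alpha * rng.weibull(beta, 2000)               # host features
w = rng.choice([-1.0, 1.0], 2000)                 # known pseudo-random chips
b_true = -1
y = x * (1.0 + gamma * b_true * w)                # multiplicative embedding

b_hat = 1 if loglik(y, +1) > loglik(y, -1) else -1
print("true bit:", b_true, "decoded bit:", b_hat)
```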
Blind Newton Sensitivity Attack
- IEE Proceedings on Information Security 153
, 2006
"... Until now, the sensitivity attack was considered as a serious threat to the robustness and security of spread-spectrum-based schemes, since it provides a practical method of removing watermarks with minimum attacking distortion. Nevertheless, it had not been used to remove the watermark from oth ..."
Abstract - Cited by 20 (2 self)
Until now, the sensitivity attack has been considered a serious threat to the robustness and security of spread-spectrum-based schemes, since it provides a practical method of removing watermarks with minimum attacking distortion. Nevertheless, it had not been used to remove the watermark from other watermarking algorithms, such as those which use side information. Furthermore, the sensitivity attack had never been used to obtain falsely watermarked contents, also known as forgeries. In this paper, a new version of the sensitivity attack based on a general formulation is proposed; this method does not require any knowledge of the detection function or of any other system parameter, but only the binary output of the detector, making it suitable for attacking most known watermarking methods. The new approach is validated with experiments.
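To make the binary-output idea concrete, the sketch below bisects between a marked signal and a heavily distorted one, querying only a toy detector's yes/no answer until it lands just outside the detection region; the Newton-type boundary-following refinements of the paper are not reproduced.

```python
# Hedged sketch of a sensitivity-style attack that uses only the detector's
# binary output: bisect between the marked signal and a heavily distorted one
# until just outside the detection region. The detector here is a toy, and the
# paper's Newton-type boundary-following refinements are not reproduced.
import numpy as np

rng = np.random.default_rng(6)
dim = 1024
secret = rng.choice([-1.0, 1.0], dim)

def detector(y):                                  # oracle: yes/no answer only
    return (y @ secret) / dim > 0.05

marked = rng.normal(0.0, 1.0, dim) + 0.2 * secret # detected as watermarked
target = np.zeros(dim)                            # crude "erase everything" point

lo, hi = 0.0, 1.0                                 # fraction moved toward target
for _ in range(40):                               # bisection on binary output
    mid = 0.5 * (lo + hi)
    detected = detector((1 - mid) * marked + mid * target)
    lo, hi = (mid, hi) if detected else (lo, mid)

attacked = (1 - hi) * marked + hi * target        # just past the boundary
print(detector(attacked), np.mean((attacked - marked) ** 2))
```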
Hidden messages in heavy-tails: DCTdomain watermark detection using alpha-stable models
- IEEE Transactions on Multimedia
"... Abstract—This paper addresses issues that arise in copyright protection systems of digital images, which employ blind watermark verification structures in the discrete cosine transform (DCT) domain. First, we observe that statistical distributions with heavy algebraic tails, such as the alpha-stable ..."
Abstract - Cited by 18 (1 self)
This paper addresses issues that arise in copyright protection systems of digital images, which employ blind watermark verification structures in the discrete cosine transform (DCT) domain. First, we observe that statistical distributions with heavy algebraic tails, such as the alpha-stable family, are in many cases more accurate modeling tools for the DCT coefficients of JPEG-analyzed images than families with exponential tails such as the generalized Gaussian. Motivated by our modeling results, we then design a new processor for blind watermark detection using the Cauchy member of the alpha-stable family. The Cauchy distribution is chosen because it is the only non-Gaussian symmetric alpha-stable distribution that exists in closed form and also because it leads to the design of a nearly optimum detector with robust detection performance. We analyze the performance of the new detector in terms of the associated probabilities of detection and false alarm, and we compare it to the performance of the generalized Gaussian detector by performing experiments with various test images. Index Terms—Alpha-stable distributions, discrete cosine transform, image watermarking, Neyman–Pearson detector, statistical modeling.
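A minimal sketch of a Cauchy-model log-likelihood-ratio test for an additive watermark in heavy-tailed coefficients follows; the dispersion, embedding strength, and decision rule are illustrative assumptions and may differ from the paper's detector.

```python
# Hedged sketch of a Cauchy-model log-likelihood-ratio detector for an additive
# watermark in heavy-tailed DCT-like coefficients; dispersion, strength, and
# the decision rule are illustrative assumptions.
import numpy as np

disp, strength = 1.0, 0.5                         # Cauchy dispersion, embedding strength

def llr(y, w):
    # log f(y - strength*w) - log f(y) under a zero-location Cauchy model
    return np.sum(np.log(disp**2 + y**2) - np.log(disp**2 + (y - strength * w)**2))

rng = np.random.default_rng(7)
coeffs = rng.standard_cauchy(4096)                # heavy-tailed host coefficients
w = rng.choice([-1.0, 1.0], 4096)                 # pseudo-random watermark pattern

print("marked   LLR:", llr(coeffs + strength * w, w))   # large positive -> present
print("unmarked LLR:", llr(coeffs, w))                   # near/below zero -> absent
```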