Results 1 – 10 of 19
Perfectly Secure Steganography: Capacity, Error Exponents, and Code Constructions
, 2007
"... An analysis of steganographic systems subject to the following perfect undetectability condition is presented in this paper. Following embedding of the message into the covertext, the resulting stegotext is required to have exactly the same probability distribution as the covertext. Then no statisti ..."
Abstract

Cited by 16 (0 self)
 Add to MetaCart
An analysis of steganographic systems subject to the following perfect undetectability condition is presented in this paper. Following embedding of the message into the covertext, the resulting stegotext is required to have exactly the same probability distribution as the covertext. Then no statistical test can reliably detect the presence of the hidden message. We refer to such steganographic schemes as perfectly secure. A few such schemes have been proposed in recent literature, but they have vanishing rate. We prove that communication performance can potentially be vastly improved; specifically, our basic setup assumes independently and identically distributed (i.i.d.) covertext, and we construct perfectly secure steganographic codes from public watermarking codes using binning methods and randomized permutations of the code. The permutation is a secret key shared between encoder and decoder. We derive (positive) capacity and random-coding exponents for perfectly secure steganographic systems. The error exponents provide estimates of the code length required to achieve a target low error probability. In some applications, steganographic communication may be disrupted by an active warden, modelled here by a compound discrete memoryless channel. The transmitter and warden are subject to distortion constraints. We address the potential loss in communication performance due to the perfect-security requirement. This loss is the same as the loss obtained under a weaker order-1 steganographic requirement that would just require matching of first-order ...
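The perfect-undetectability condition above requires the stegotext distribution to equal the covertext distribution exactly. As a toy illustration of that condition only (not the paper's binning construction), assume a uniform i.i.d. binary covertext and a shared uniform key: XOR-ing message bits with the key produces a stegotext that is uniform i.i.d. for every message, hence distributed exactly like the covertext.

```python
import random

def embed(message_bits, key_bits):
    # Stegotext bit = message XOR key. If the key is uniform i.i.d.,
    # the stegotext is uniform i.i.d. regardless of the message, so it
    # matches a uniform i.i.d. covertext distribution exactly.
    return [m ^ k for m, k in zip(message_bits, key_bits)]

def extract(stego_bits, key_bits):
    # XOR is its own inverse, so the same key recovers the message.
    return [s ^ k for s, k in zip(stego_bits, key_bits)]

key = [random.randrange(2) for _ in range(8)]
msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert extract(embed(msg, key), key) == msg
```

This toy scheme has rate 1 bit per cover symbol but consumes one key bit per message bit; the paper's contribution is achieving positive rate with far weaker key requirements for general i.i.d. covertexts.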
Minimizing Additive Distortion in Steganography using Syndrome-Trellis Codes
"... This paper proposes a complete practical methodology for minimizing additive distortion in steganography with general (nonbinary) embedding operation. Let every possible value of every stego element be assigned a scalar expressing the distortion of an embedding change done by replacing the cover el ..."
Abstract

Cited by 13 (11 self)
 Add to MetaCart
This paper proposes a complete practical methodology for minimizing additive distortion in steganography with general (non-binary) embedding operation. Let every possible value of every stego element be assigned a scalar expressing the distortion of an embedding change done by replacing the cover element by this value. The total distortion is assumed to be a sum of per-element distortions. Both the payload-limited sender (minimizing the total distortion while embedding a fixed payload) and the distortion-limited sender (maximizing the payload while introducing a fixed total distortion) are considered. Without any loss of performance, the non-binary case is decomposed into several binary cases by replacing individual bits in cover elements. The binary case is approached using a novel syndrome-coding scheme based on dual convolutional codes equipped with the Viterbi algorithm. This fast and very versatile solution achieves state-of-the-art results in steganographic applications while having linear time and space complexity w.r.t. the number of cover elements. We report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel. Practical merit of this approach is validated by constructing and testing adaptive embedding schemes for digital images in raster and transform domains. Most current coding schemes used in steganography (matrix embedding, wet paper codes, etc.) and many new ones can be implemented using this framework.
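Syndrome-trellis codes use convolutional codes decoded with the Viterbi algorithm; the simplest member of the same syndrome-coding family is matrix embedding with a Hamming parity-check matrix. The sketch below illustrates syndrome coding in general (it is not an STC): 3 message bits are embedded into 7 cover bits by forcing the syndrome to equal the message, changing at most one cover bit.

```python
# Parity-check matrix H of the [7,4] Hamming code.
# Column j (1-based) is the binary representation of j, MSB in row 0.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def syndrome(x):
    # Syndrome s = H x over GF(2).
    return [sum(h * xi for h, xi in zip(row, x)) % 2 for row in H]

def embed(cover, msg):
    # Flip at most one cover bit so that syndrome(stego) == msg.
    diff = [si ^ mi for si, mi in zip(syndrome(cover), msg)]
    x = list(cover)
    if any(diff):
        # diff, read as a binary number, is the 1-based column to flip,
        # because flipping column j adds that column to the syndrome.
        idx = diff[0] * 4 + diff[1] * 2 + diff[2]
        x[idx - 1] ^= 1
    return x

def extract(stego):
    # The receiver recovers the message as the syndrome of the stego bits.
    return syndrome(stego)

cover = [1, 0, 1, 1, 0, 0, 1]
msg = [0, 1, 1]
stego = embed(cover, msg)
assert extract(stego) == msg
assert sum(c != s for c, s in zip(cover, stego)) <= 1
```

STCs generalize this idea: the parity-check matrix comes from a convolutional code, and Viterbi decoding finds the minimum-distortion (not just minimum-count) modification.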
Predictive-Coding-Based Steganography and Modification for Enhanced Security
 IJCSNS International Journal of Computer Science and Network Security, vol. 6, no. 3B
, 2006
"... The predictivecodingbased (PCB) steganography can embed a large amount of bits into the code stream of lossless compression with high imperceptibility. However, based on two elaborately chosen statistical features, the proposed steganalytic method can easily find the presence of a secret message w ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
Predictive-coding-based (PCB) steganography can embed a large number of bits into the code stream of lossless compression with high imperceptibility. However, based on two elaborately chosen statistical features, the proposed steganalytic method can easily detect the presence of a secret message with small error probability. To enhance the scheme's security, a modified scheme is proposed, which preserves the distribution of the prediction errors by choosing the optimum adjustment parameter. Experimental results show that the modified scheme can provide near-perfect security in Cachin's definition and defeat our own steganalytic method.
Joint fixed-rate universal lossy coding and identification of continuous-alphabet memoryless sources
 IEEE Trans. Inform. Theory
"... The problem of joint universal source coding and identification is considered in the setting of fixedrate lossy coding of continuousalphabet memoryless sources. For a wide class of bounded distortion measures, it is shown that any compactly parametrized family of R dvalued i.i.d. sources with abs ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
The problem of joint universal source coding and identification is considered in the setting of fixed-rate lossy coding of continuous-alphabet memoryless sources. For a wide class of bounded distortion measures, it is shown that any compactly parametrized family of R^d-valued i.i.d. sources with absolutely continuous distributions satisfying appropriate smoothness and Vapnik–Chervonenkis learnability conditions admits a joint scheme for universal lossy block coding and parameter estimation, such that when the block length n tends to infinity, the overhead per-letter rate and the distortion redundancies converge to zero as O(n⁻¹ log n) and O(√(n⁻¹ log n)), respectively. Moreover, the active source can be determined at the decoder up to a ball of radius O(√(n⁻¹ log n)) in variational distance, asymptotically almost surely. The system has finite memory length equal to the block length, and can be thought of as blockwise application of a time-invariant nonlinear filter with initial conditions determined from the previous block. Comparisons are presented with several existing schemes for universal vector quantization, which do not include parameter estimation explicitly, and an extension to unbounded distortion measures is outlined. Finally, finite mixture classes and exponential families are given as explicit examples of parametric sources admitting joint universal compression and modeling schemes of the kind studied here. Keywords: Learning, minimum-distance density estimation, two-stage codes, universal vector quantization, Vapnik–Chervonenkis dimension.
Spread-Spectrum Watermark by Synthesizing Texture
 In Pacific-Rim Conf. on Multimedia, 2007
"... Abstract. Image watermarking is a mapping from watermark message to a set of image counterparts, where every version conveys the same meaning with the original image. Since textures that present single perceptual meaning have certain diversity, an intuitive idea of watermarking is to replace the tex ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
Image watermarking is a mapping from a watermark message to a set of image counterparts, where every version conveys the same meaning as the original image. Since textures that present a single perceptual meaning have a certain diversity, an intuitive idea for watermarking is to replace the texture region of an image with a similar-looking synthetic texture containing the watermark. We propose a spread-spectrum watermarking scheme by integrating existing work on texture extraction, segmentation, and synthesis, and obtain promising results, including: (1) the synthetic watermarks can resist the adaptive Wiener filtering attack because their power spectrum is similar to that of the original image; (2) if the spread-spectrum carrier is designed carefully according to the subspace spanned by the textures, hiding capacity can be improved by 20%, while the effective hiding capacity under the Wiener filtering attack can be doubled; (3) the proposed watermarking scheme also suggests a sophisticated strategy for watermark attacks.
Block QIM watermarking games
 IEEE Transactions on Information Forensics and Security
, 2006
"... While binning is a fundamental approach to blind data embedding and watermarking, an attacker may devise various strategies to reduce the effectiveness of practical binning schemes. The problem analyzed in this paper is design of worstcase noise distributions against Ldimensional lattice Quantizat ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
While binning is a fundamental approach to blind data embedding and watermarking, an attacker may devise various strategies to reduce the effectiveness of practical binning schemes. The problem analyzed in this paper is the design of worst-case noise distributions against L-dimensional lattice Quantization Index Modulation (QIM) watermarking codes. The cost functions considered are (1) the probability of error of the maximum-likelihood decoder, and (2) the more tractable Bhattacharyya upper bound on error probability, which is tight at low embedding rates. Both problems are addressed under the following constraints on the attacker's strategy: the noise is independent of the marked signal, blockwise memoryless with block length L, and may not exceed a specified quadratic-distortion level. The embedder's quadratic distortion is limited as well. Three strategies are considered for the embedder: optimization of the lattice inflation parameter (aka the Costa parameter), dithering, and randomized lattice rotation. Critical in this analysis are the symmetry properties of QIM nested lattices and the convexity properties of the probability of error and related functionals of the noise distribution. We derive the min-max optimal embedding and attack strategies and obtain explicit solutions as well as numerical solutions for the worst-case noise. The role of the attacker's memory is investigated; in particular, we demonstrate the remarkable effectiveness of impulsive-noise attacks as L increases. The formulation proposed in this paper is also used to evaluate the capacity of lattice QIM under worst-case noise conditions.
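The scalar (L = 1) special case makes the roles of the quantization step and of impulsive noise concrete. In this sketch (the step Δ is an arbitrary illustrative value, and the lattice inflation parameter is omitted), bit 0 is embedded on the lattice ΔZ and bit 1 on ΔZ + Δ/2; noise of magnitude below Δ/4 can never flip the minimum-distance decision, while a single impulse of Δ/2 lands exactly on the wrong coset.

```python
DELTA = 8.0  # quantization step (assumed, illustrative)

def qim_embed(x, bit):
    # Quantize x to the coset lattice for `bit`:
    # bit 0 -> multiples of DELTA, bit 1 -> multiples of DELTA + DELTA/2.
    offset = bit * DELTA / 2
    return round((x - offset) / DELTA) * DELTA + offset

def qim_decode(y):
    # Minimum-distance decoding: pick the coset whose lattice point is nearer.
    d0 = abs(y - round(y / DELTA) * DELTA)
    d1 = abs(y - (round((y - DELTA / 2) / DELTA) * DELTA + DELTA / 2))
    return 0 if d0 <= d1 else 1

y = qim_embed(13.7, 1)
assert qim_decode(y) == 1
# Noise of magnitude below DELTA/4 never flips the decision...
assert qim_decode(y + 1.9) == 1
# ...but an impulse of DELTA/2 moves y onto the other coset.
assert qim_decode(y + DELTA / 2) == 0
```

This is the intuition behind the paper's finding on impulsive-noise attacks: concentrating the distortion budget in rare large impulses can be far more damaging to lattice QIM than spreading it as continuous noise of the same power.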
DETECTION AND INFORMATION-THEORETIC ANALYSIS OF STEGANOGRAPHY AND FINGERPRINTING
, 2006
"... The proliferation of multimedia and the advent of the Internet and other public networks have created many new applications of information hiding in multimedia security and forensics. This dissertation focuses on two of these application scenarios: steganography (and its counter problem, steganalysi ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
The proliferation of multimedia and the advent of the Internet and other public networks have created many new applications of information hiding in multimedia security and forensics. This dissertation focuses on two of these application scenarios: steganography (and its counter problem, steganalysis), and fingerprinting. First, from a detection-theoretic perspective, we quantify the detectability of commonly used information-hiding techniques such as spread spectrum and distortion-compensated quantization index modulation, and also the detectability of block-based steganography. We devise a practical steganalysis method that exploits the peculiar block structure of block-DCT image steganography. To cope with the twin difficulties of unknown image statistics and unknown steganographic codes, we explore image steganalysis based on supervised learning and build an optimized classifier that outperforms previously proposed image steganalysis methods. Then, from an information-theoretic perspective, we derive the capacity and random-coding error exponent of perfectly secure steganography and public fingerprinting. For both games, a randomized stacked-binning scheme and a matched maximum penalized mutual information decoder are used to achieve capacity and to realize a random-coding error exponent that is strictly positive at all rates below capacity.
Exploring QIM-based Anti-Collusion Fingerprinting for Multimedia
"... Digital fingerprinting is an emerging technology to protect multimedia from unauthorized use by embedding a unique fingerprint signal into each user’s copy. A robust embedding algorithm is an important building block in order to make the fingerprint resilient to various distortions and collusion att ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
Digital fingerprinting is an emerging technology to protect multimedia from unauthorized use by embedding a unique fingerprint signal into each user's copy. A robust embedding algorithm is an important building block in order to make the fingerprint resilient to various distortions and collusion attacks. Spread-spectrum embedding has been widely used for multimedia fingerprinting. In this paper, we explore another class of embedding methods, Quantization Index Modulation (QIM), for fingerprinting applications. We first employ the Dither Modulation (DM) technique and extend it for embedding multiple symbols through a basic dither sequence design. We then develop a theoretical model and propose a new algorithm to improve the collusion resistance of the basic scheme. Simulation results show that the improved algorithm enhances collusion resistance, although a performance gap remains with existing spread-spectrum-based fingerprinting. We then explore coded fingerprinting based on spread transform dither modulation (STDM) embedding. Simulation results show that this coded STDM-based fingerprinting has significant advantages over spread-spectrum-based fingerprinting under blind detection.
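A minimal sketch of the dither-modulation idea for multiple symbols (the parameters and the dither design are illustrative assumptions, not the paper's fingerprinting scheme): each of M symbols shifts the base quantizer by its own dither offset, and the decoder picks the offset whose shifted lattice lies closest to the received value.

```python
DELTA = 8.0   # quantization step (illustrative)
M = 4         # alphabet size; symbol m uses dither m * DELTA / M

def dm_embed(x, symbol):
    # Quantize x onto the lattice shifted by the symbol's dither.
    d = symbol * DELTA / M
    return round((x - d) / DELTA) * DELTA + d

def dm_decode(y):
    # Choose the symbol whose dithered lattice point is closest to y.
    best, best_dist = 0, float("inf")
    for m in range(M):
        d = m * DELTA / M
        dist = abs(y - (round((y - d) / DELTA) * DELTA + d))
        if dist < best_dist:
            best, best_dist = m, dist
    return best

for m in range(M):
    assert dm_decode(dm_embed(25.3, m)) == m
```

In a fingerprinting application, each user would receive a distinct dither sequence rather than a single offset, which is what gives averaging-type collusion attacks their characteristic effect on QIM lattices.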
Controlling leakage of biometric information using dithering
 in Proc. EUSIPCO
"... Fuzzy extractors allow cryptographic keys to be generated from noisy, nonuniform biometric data. Fuzzy extractors can be used to authenticate a user to a server without storing her biometric data directly. However, in the Information Theoretic sense fuzzy extractors will leak information about the ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
Fuzzy extractors allow cryptographic keys to be generated from noisy, non-uniform biometric data. Fuzzy extractors can be used to authenticate a user to a server without storing her biometric data directly. However, in the information-theoretic sense, fuzzy extractors will leak information about the biometric data. We propose as an alternative a fuzzy embedder, which fuses an independently generated cryptographic key with biometric data. Like fuzzy extractors, a fuzzy embedder can be used to authenticate a user without storing her biometric information or the cryptographic key on a server. A fuzzy embedder will leak, in the information-theoretic sense, information about both the biometrics and the cryptographic key. While both types of leakage are important, leakage of the biometric data is critical, since the cryptographic key, unlike the biometric data, can be renewed. We show that constructing fuzzy embedders that leak no information about the biometrics is theoretically possible. We present a construction that allows controlling the leakage of biometric information, but which requires a weak secret at the decoder called dither. If this secret is compromised, the security of the construction degrades gracefully.
A Novel Steganography Algorithm for Hiding Text in Image using Five Modulus Method
"... The needs for steganographic techniques for hiding secret message inside images have been arise. This paper is to create a practical steganographic implementation to hide text inside grey scale images. The secret message is hidden inside the cover image using Five Modulus Method. The novel algorithm ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
The need for steganographic techniques to hide secret messages inside images has arisen. This paper presents a practical steganographic implementation to hide text inside grey-scale images. The secret message is hidden inside the cover image using the Five Modulus Method. The novel algorithm is called ST-FMM, which consists of transforming all the pixels within the 5×5 window into their corresponding multiples of 5. After that, the secret message is hidden inside the 5×5 window as non-multiples of 5. Since the remainders of non-multiples of 5 are 1, 2, 3, and 4, if the remainder is one of these, then the pixel represents a secret character. The secret key that has to be sent is the window size. The main advantage of this novel algorithm is that the size of the cover image stays constant while the secret message increases in size. The peak signal-to-noise ratio (PSNR) is computed for each of the tested images; the stego images have high PSNR values. Hence this new steganography algorithm is efficient at hiding data inside the image.
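The Five Modulus Method as described above can be sketched in a few lines. This is a toy illustration under stated assumptions: a small flat window stands in for the 5×5 window, a single remainder value in 1..4 stands in for a secret character, and the mapping from characters to remainders is our own hypothetical choice, not the paper's.

```python
def fmm_prepare(window):
    # Step 1: force every pixel in the window to its nearest multiple of 5,
    # clamped to the valid grey-scale range [0, 255].
    return [min(255, max(0, 5 * round(p / 5))) for p in window]

def fmm_embed(window, position, value):
    # Step 2: hide a secret value in 1..4 by making one pixel a
    # non-multiple of 5, so that pixel % 5 carries the secret.
    assert 1 <= value <= 4
    out = fmm_prepare(window)
    base = min(out[position], 250)  # leave room so base + value <= 254
    out[position] = base + value
    return out

def fmm_extract(window):
    # Any pixel whose remainder mod 5 is 1..4 carries a secret value.
    return [(i, p % 5) for i, p in enumerate(window) if p % 5 != 0]

window = [12, 37, 101, 250, 88, 64, 199, 3, 140]  # toy 3x3 window
stego = fmm_embed(window, 4, 3)
assert fmm_extract(stego) == [(4, 3)]
```

Note how the cover size never changes: embedding only perturbs existing pixel values, which is the property the abstract highlights.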