Results 1–10 of 111
Quantization
IEEE Trans. Inform. Theory, 1998
Abstract

Cited by 700 (12 self)
The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analog-to-digital conversion was first recognized during the early development of pulse-code modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate-distortion theory, which would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.
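Bennett's high-resolution result mentioned above can be checked numerically: a uniform scalar quantizer with step size Δ behaves like an additive noise source of power Δ²/12. A minimal sketch (all names and parameters here are illustrative, not code from the survey):

```python
import numpy as np

# High-resolution approximation: uniform quantization noise power ~ delta**2 / 12.
rng = np.random.default_rng(0)
delta = 0.25
x = rng.uniform(-4.0, 4.0, size=200_000)   # source samples
xq = delta * np.round(x / delta)           # uniform midtread quantizer
mse = float(np.mean((x - xq) ** 2))        # empirical quantization noise power
predicted = delta ** 2 / 12                # Bennett's high-resolution prediction
```

With a smooth source density and small Δ, the empirical `mse` lands within a fraction of a percent of `predicted`.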
Space-Frequency Quantization for Wavelet Image Coding
1997
Abstract

Cited by 160 (15 self)
Recently, a new class of image coding algorithms coupling standard scalar quantization of frequency coefficients with tree-structured quantization (related to spatial structures) has attracted wide attention because its good performance appears to confirm the promised efficiencies of hierarchical representation [1, 2]. This paper addresses the problem of how spatial quantization modes and standard scalar quantization can be applied in a jointly optimal fashion in an image coder. We consider zerotree quantization (zeroing out tree-structured sets of wavelet coefficients) and the simplest form of scalar quantization (a single common uniform scalar quantizer applied to all non-zeroed coefficients), formalize the problem of optimizing their joint application, and develop an image coding algorithm for solving the resulting optimization problem. Despite the basic form of the two quantizers considered, the resulting algorithm demonstrates coding performance that is competitive (often...
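The "zerotree" spatial mode amounts to zeroing a coarse-scale coefficient together with its spatial descendants at finer scales. A toy two-level sketch (array names and the quad-tree layout are assumptions for illustration, not the authors' coder):

```python
import numpy as np

# Two same-orientation subbands at adjacent scales of a dyadic decomposition.
rng = np.random.default_rng(1)
coarse = rng.normal(size=(4, 4))   # coarse-scale subband
fine = rng.normal(size=(8, 8))     # next finer scale, same orientation

def zero_tree(coarse, fine, i, j):
    """Zero coefficient (i, j) at the coarse scale and its 2x2 children."""
    coarse[i, j] = 0.0
    fine[2 * i:2 * i + 2, 2 * j:2 * j + 2] = 0.0

zero_tree(coarse, fine, 1, 2)
zeroed = int((fine[2:4, 4:6] == 0).sum())   # the four descendants
```

A full coder would choose which trees to zero by trading rate against distortion; here only the structural zeroing itself is shown.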
Manipulation and Compositing of MC-DCT Compressed Video
1994
Abstract

Cited by 102 (16 self)
Many advanced video applications require manipulations of compressed video signals. Popular video manipulation functions include overlap (opaque or semi-transparent), translation, scaling, linear filtering, rotation, and pixel multiplication. In this paper, we propose algorithms to manipulate compressed video in the compressed domain. Specifically, we focus on compression algorithms using the Discrete Cosine Transform (DCT) with or without Motion Compensation (MC). Compression systems of such kind include JPEG, Motion JPEG, MPEG, and H.261. We derive a complete set of algorithms for all aforementioned manipulation functions in the transform domain, in which video signals are represented by quantized transform coefficients. Due to a much lower data rate and the elimination of decompression/compression conversion, the transform-domain approach has great potential in reducing the computational complexity. The actual computational speedup depends on the specific manipulation functions and ...
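What makes linear manipulations expressible in the transform domain is simply that the DCT is linear: blending two blocks of coefficients equals the DCT of the blended pixels. A minimal sketch with an 8×8 orthonormal DCT-II matrix (not the paper's full algorithm set):

```python
import numpy as np

# Build the 8x8 orthonormal DCT-II matrix; the 2-D transform is C @ X @ C.T.
N = 8
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

rng = np.random.default_rng(2)
a, b = rng.normal(size=(N, N)), rng.normal(size=(N, N))   # two pixel blocks
alpha = 0.3                                               # blending factor

# Blend in the pixel domain and then transform ...
pix = C @ (alpha * a + (1 - alpha) * b) @ C.T
# ... versus blending the DCT coefficients directly.
dct = alpha * (C @ a @ C.T) + (1 - alpha) * (C @ b @ C.T)
max_diff = float(np.max(np.abs(pix - dct)))
```

The two results agree to floating-point precision, so semi-transparent overlap never needs a round trip through the pixel domain.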
DCT-Domain Watermarking Techniques for Still Images: Detector Performance Analysis and a New Structure
IEEE Trans. on Image Processing, 2000
Abstract

Cited by 96 (3 self)
In this paper, a spread-spectrum-like discrete cosine transform domain (DCT-domain) watermarking technique for copyright protection of still digital images is analyzed. The DCT is applied in blocks of 8×8 pixels, as in the JPEG algorithm. The watermark can encode information to track illegal misuses. For flexibility purposes, the original image is not required during the ownership verification process, so it must be modeled as noise. Two tests are involved in the ownership verification stage: watermark decoding, in which the message carried by the watermark is extracted, and watermark detection, which decides whether a given image contains a watermark generated with a certain key. We apply generalized Gaussian distributions to statistically model the DCT coefficients of the original image and show how the resulting detector structures lead to considerable improvements in performance with respect to the correlation receiver, which has been widely considered in the literature and makes use of the Gaussian noise assumption. As a result of our work, analytical expressions for performance measures such as the probability of error in watermark decoding and the probabilities of false alarm and detection in watermark detection are derived and contrasted with experimental results.
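The detector structure that arises from a generalized Gaussian host model can be sketched for the Laplacian special case (shape c = 1), where the log-likelihood statistic replaces the plain correlation receiver. All parameters below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

# Host DCT coefficients modeled as Laplacian (generalized Gaussian, c = 1).
rng = np.random.default_rng(3)
n, alpha = 5_000, 0.5
x = rng.laplace(scale=1.0, size=n)        # host coefficients
w = rng.choice([-1.0, 1.0], size=n)       # key-derived pseudo-random watermark
y = x + alpha * w                         # additively watermarked coefficients

def ggd_stat(y, w, alpha):
    """Log-likelihood detection statistic for a Laplacian (c = 1) host."""
    return float(np.sum(np.abs(y) - np.abs(y - alpha * w)))

stat_marked = ggd_stat(y, w, alpha)       # concentrates around a positive mean
stat_clean = ggd_stat(x, w, alpha)        # concentrates around a negative mean
```

Comparing the statistic against a threshold gives the watermark-detection test; for a Gaussian host (c = 2) the same construction reduces to the familiar correlation receiver.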
Statistical Analysis of Watermarking Schemes for Copyright Protection of Images
Proceedings of the IEEE, 1999
Abstract

Cited by 61 (4 self)
In this paper, we address the problem of the performance analysis of image watermarking systems that do not require the availability of the original image during ownership verification. We focus on a statistical approach to obtain models that can serve as a basis for the application of decision theory to the design of efficient detector structures. Special attention is paid to the possible nonexistence of a statistical description of the original image. Different modeling approaches are proposed for the cases when such a statistical characterization is known and when it is not. Watermarks may encode a message, and the performance of the watermarking system is evaluated in terms of the probability of false alarm, the probability of detection when the presence of the watermark is tested, and the probability of error when the information that it carries is extracted. Finally, the modeling techniques studied are applied to the analysis of two watermarking schemes, one of them defined in the spatial domain and the other in the discrete cosine transform (DCT) domain. The theoretical results are contrasted with empirical data obtained through experimentation covering several cases of interest. We show how choosing an appropriate statistical model for the original image can lead to considerable improvements in performance.
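The performance measures named above can be estimated empirically for even the simplest detector. A Monte Carlo sketch for a correlation detector under a Gaussian host model (host, threshold, and strength are illustrative assumptions, not the paper's models):

```python
import numpy as np

# Estimate P_FA (detect on unmarked data) and P_D (detect on marked data)
# for a correlation detector with a fixed threshold.
rng = np.random.default_rng(4)
n, alpha, trials = 1_000, 0.2, 2_000
w = rng.choice([-1.0, 1.0], size=n)        # fixed pseudo-random watermark

def correlate(img, w):
    return float(np.dot(img, w)) / len(w)

thr = 0.1                                  # detection threshold
false_alarms = detections = 0
for _ in range(trials):
    host = rng.normal(size=n)              # unmarked image (hypothesis H0)
    false_alarms += correlate(host, w) > thr
    detections += correlate(host + alpha * w, w) > thr

p_fa = false_alarms / trials
p_d = detections / trials
```

Under H0 the statistic is zero-mean with standard deviation 1/√n, and under H1 its mean shifts to α, so with these numbers `p_fa` is near zero while `p_d` is near one; the paper's contribution is deriving such probabilities analytically.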
Phase Watermarking Of Digital Images
1996
Abstract

Cited by 55 (1 self)
A watermark is an invisible mark placed on an image that can be detected when the image is compared with the original. This mark is designed to identify both the source of an image and its intended recipient. The mark should be tolerant to reasonable-quality lossy compression of the image using transform coding or vector quantization. Standard image processing operations such as low-pass filtering, cropping, translation, and rescaling should not remove the mark. Spread spectrum communication techniques and matrix transformations can be used together to design watermarks that are robust to tampering and are visually imperceptible. This paper discusses techniques for embedding such marks in grey-scale digital images. It also proposes a novel phase-based method of conveying the watermark information. In addition, the use of optimal detectors for watermark identification is also proposed.
1. WATERMARKING
Zhao and Koch [1] investigated an approach to watermarking images based on the...
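Conveying information in the Fourier phase can be sketched in a few lines: rotate the phase of a chosen frequency and of its conjugate-symmetric partner, so the marked image stays real-valued. The frequency, offset, and image below are illustrative choices, not the paper's method:

```python
import numpy as np

# Embed a phase offset at frequency (u, v) of a grey-scale image.
rng = np.random.default_rng(5)
N = 64
img = rng.uniform(size=(N, N))              # host image
F = np.fft.fft2(img)

u, v, delta = 3, 5, 0.5                     # mark location and phase shift (rad)
F[u, v] *= np.exp(1j * delta)               # rotate phase at (u, v)
F[-u % N, -v % N] *= np.exp(-1j * delta)    # mirror bin keeps the image real

marked = np.fft.ifft2(F).real
# Detection: compare the phase at (u, v) against the original.
recovered = float(np.angle(np.fft.fft2(marked)[u, v] /
                           np.fft.fft2(img)[u, v]))
```

The recovered phase difference equals the embedded offset to floating-point precision; a robust scheme would spread many such offsets across mid-frequency bins.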
de Queiroz, "Identification of bitmap compression history: JPEG detection and quantizer estimation"
IEEE Trans. Image Process., 2003
Abstract

Cited by 40 (1 self)
Abstract—Sometimes image processing units inherit images in raster bitmap format only, so that processing must be carried out without knowledge of past operations that may have compromised image quality (e.g., compression). Before further processing, it is useful to know not only whether the image has been previously JPEG compressed, but also which quantization table was used. This is the case, for example, if one wants to remove JPEG artifacts or to recompress the image as JPEG. In this paper, a fast and efficient method is provided to determine whether an image has been previously JPEG compressed. After detecting a compression signature, we estimate the compression parameters. Specifically, we develop a method for the maximum likelihood estimation of JPEG quantization steps. The quantizer estimation method is very robust: only sporadically is an estimated quantizer step size off, and when it is, it is off by one value. Index Terms—Artifact removal, image history, JPEG compression, quantizer estimation.
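The signal being exploited is that, after JPEG compression, each dequantized DCT coefficient is an integer multiple of its quantizer step. A toy sketch of recovering the step from such coefficients; the gcd shortcut below is a deliberate simplification standing in for the paper's maximum-likelihood estimator:

```python
import numpy as np

# Simulate dequantized DCT coefficients for one frequency bin: every value
# is an integer multiple of the (unknown) quantizer step q_true.
rng = np.random.default_rng(6)
q_true = 7
coeffs = q_true * np.round(rng.normal(scale=5.0, size=10_000)).astype(int)

# With enough samples the gcd of the nonzero magnitudes recovers the step.
nonzero = np.abs(coeffs[coeffs != 0])
q_est = int(np.gcd.reduce(nonzero))
```

On real images, rounding and clipping noise break the exact-multiple structure, which is why the paper fits the coefficient histogram statistically instead of taking a gcd.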
Supporting Image and Video Applications in a Multihop Radio Environment Using Path Diversity and Multiple Description Coding
IEEE Transactions on Circuits and Systems for Video Technology, 2002
Image Coding By Block Prediction Of Multiresolution Subimages
 IEEE Transactions on Image Processing
Abstract

Cited by 31 (2 self)
The redundancy of the multiresolution representation has been clearly demonstrated in the case of fractal images, but it has not been fully recognized and exploited for general images. Recently, fractal block coders have exploited the self-similarity among blocks in images. In this work we devise an image coder in which the causal similarity among blocks of different subbands in a multiresolution decomposition of the image is exploited. In a pyramid subband decomposition, the image is decomposed into a set of subbands which are localized in scale, orientation, and space. The proposed coding scheme consists of predicting blocks in one subimage from blocks in lower-resolution subbands with the same orientation. Although our prediction maps are of the same kind as those used in fractal block coders, which are based on an iterative mapping scheme, our coding technique does not impose any contractivity constraint on the block maps. This makes the decoding procedure very simple and...
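The core idea of predicting from a lower resolution can be sketched with a two-level pyramid: the coarse level is upsampled to form the predictor, and only the residual would be coded. This toy stand-in illustrates cross-scale prediction without the paper's fractal maps or subband orientation:

```python
import numpy as np

# A smooth 16x16 test image and its 2x-downsampled version.
x = np.linspace(0.0, 1.0, 16)
img = np.outer(x, x)
coarse = img.reshape(8, 2, 8, 2).mean(axis=(1, 3))     # 2x2-mean lower resolution

predictor = np.kron(coarse, np.ones((2, 2)))           # nearest-neighbour upsample
residual = img - predictor                             # what an encoder would code

reconstructed = predictor + residual                   # decoder side
max_err = float(np.max(np.abs(reconstructed - img)))
res_energy = float(np.sum(residual ** 2))
img_energy = float(np.sum(img ** 2))
```

For smooth content the residual carries a small fraction of the image energy, which is exactly the redundancy the coder exploits; note there is no iteration and hence no contractivity requirement on the prediction map.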
Lossless image compression based on optimal prediction, adaptive lifting, and conditional arithmetic coding
IEEE Transactions on Image Processing, 2001
Abstract

Cited by 20 (1 self)
Abstract—The optimal predictors of a lifting scheme in the general-dimensional case are obtained and applied to the lossless compression of still images, using first quincunx sampling and then simple row–column sampling. In each case, the efficiency of the linear predictors is enhanced nonlinearly. Directional postprocessing is used in the quincunx case, and adaptive-length postprocessing in the row–column case. Both methods are seen to perform well. The resulting nonlinear interpolation schemes achieve extremely efficient image decorrelation. We further investigate context modeling and adaptive arithmetic coding of wavelet coefficients in a lossless compression framework. Special attention is given to the modeling contexts and the adaptation of the arithmetic coder to the actual data. Experimental evaluation shows that the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding. Index Terms—Arithmetic codes, image coding, wavelet transforms.
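The predict-then-update structure behind such lossless coders can be shown with the simplest possible lifting step, the 1-D integer Haar (S-transform). This is only a minimal sketch; the paper's optimal quincunx and row–column predictors are far richer:

```python
import numpy as np

# One row of 8-bit samples.
rng = np.random.default_rng(7)
x = rng.integers(0, 256, size=64)

# Forward lifting: split, predict odds from evens, update evens.
even, odd = x[0::2].copy(), x[1::2].copy()
d = odd - even                    # predict step: detail signal
s = even + (d // 2)               # update step: integer running average

# Inverse lifting undoes each step exactly, so the transform is lossless.
even_rec = s - (d // 2)
odd_rec = even_rec + d
x_rec = np.empty_like(x)
x_rec[0::2], x_rec[1::2] = even_rec, odd_rec
lossless = bool(np.array_equal(x_rec, x))
```

Because every lifting step is inverted by the same integer operation with the sign flipped, perfect reconstruction holds for any predictor, which is what lets the paper optimize the predictors freely.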