Results 1–10 of 78
Rate-distortion methods for image and video compression
IEEE Signal Process. Mag., 1998
Abstract

Cited by 222 (7 self)
In this paper we provide an overview of rate-distortion (RD) based optimization techniques and their practical application to image and video coding. We begin with a short discussion of classical rate-distortion theory, and then we show how, in many practical coding scenarios, such as in standards-compliant coding environments, resource allocation can be put in an RD framework. We then introduce two popular techniques for resource allocation, namely, Lagrangian optimization and dynamic programming. After a discussion of these two techniques as well as some of their extensions, we conclude with a quick review of recent literature in these areas, citing a number of applications related to image and video compression and transmission.
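The Lagrangian optimization that this survey covers reduces, per coding unit, to picking the option that minimizes J = D + λR. A minimal sketch, assuming made-up candidate modes and rate/distortion numbers purely for illustration:

```python
def best_mode(modes, lam):
    """Return the mode with minimal Lagrangian cost J = D + lam * R.

    `modes` maps a mode name to a (rate_bits, distortion) pair.
    """
    return min(modes, key=lambda m: modes[m][1] + lam * modes[m][0])

# Hypothetical candidates for one block: (rate in bits, distortion in MSE).
candidates = {
    "skip":  (2, 40.0),
    "inter": (30, 12.0),
    "intra": (80, 5.0),
}

# A small lambda weights distortion heavily; a large lambda weights rate.
low_rate_mode = best_mode(candidates, lam=5.0)
high_quality_mode = best_mode(candidates, lam=0.1)
```

Sweeping λ traces out the operational RD curve, which is how the Lagrangian method connects to the constrained bit-allocation problems discussed in the paper.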
Long-Term Memory Motion-Compensated Prediction for Robust Video Transmission
, 2000
Abstract

Cited by 118 (28 self)
Long-term memory prediction extends the spatial displacement vector utilized in hybrid video coding by a variable time delay, permitting the use of more than one reference frame for motion compensation. This extension provides improved rate-distortion performance. However, motion compensation in combination with transmission errors leads to temporal error propagation, which occurs when the reference frames at encoder and decoder differ. In this paper, we present a framework that incorporates an error estimate into rate-constrained motion estimation and mode decision. Experimental results with a Rayleigh fading channel show that long-term memory motion compensation significantly outperforms single-frame prediction.

1. INTRODUCTION. The efficiency of long-term memory motion-compensated prediction (MCP) as an approach to improve coding performance has been demonstrated in [1]. The ITU-T has decided to adopt this feature as Annex U to version 3 of the H.263 standard. In this paper, we show that t...
Partial Encryption of Compressed Images and Videos
, 2000
Abstract

Cited by 106 (1 self)
The increased popularity of multimedia applications places a great demand on efficient data storage and transmission techniques. Network communication, especially over a wireless network, can easily be intercepted and must be protected from eavesdroppers. Unfortunately, encryption and decryption are slow, and it is often difficult, if not impossible, to carry out real-time secure image and video communication and processing. Methods have been proposed to combine compression and encryption together to reduce the overall processing time [3, 4, 12, 18, 20], but they are either insecure or too computationally intensive. We propose a novel solution, called partial encryption, in which a secure encryption algorithm is used to encrypt only part of the compressed data. Partial encryption is applied to several image and video compression algorithms in this paper. Only 13%–27% of the output from quadtree compression algorithms [13, 17, 29, 30, 31, 32] is encrypted for typical images, and less than 2% is encrypted for 512 × 512 images compressed by the SPIHT algorithm [26]. The results are similar for video compression, resulting in a significant reduction in encryption and decryption time. The proposed partial encryption schemes are fast, secure, and do not reduce the compression performance of the underlying compression algorithm.
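The partial-encryption idea can be sketched in a few lines. This is an illustration, not the authors' scheme: a toy XOR keystream stands in for a real cipher, and the "important" bytes are assumed to sit at the front of the compressed stream (as with the structural bits of quadtree or SPIHT coders):

```python
def partial_encrypt(compressed: bytes, frac: float, key: bytes) -> bytes:
    """Encrypt only the first `frac` fraction of the compressed bitstream.

    A toy XOR keystream stands in for a real cipher here; the point is that
    only n = frac * len(compressed) bytes pay the encryption cost.
    """
    n = int(len(compressed) * frac)
    head = bytes(b ^ key[i % len(key)] for i, b in enumerate(compressed[:n]))
    return head + compressed[n:]   # the remaining bytes stay in the clear

# XOR is its own inverse, so the same call decrypts.
data = b"quadtree-compressed-bitstream-example"
enc = partial_encrypt(data, 0.25, key=b"\x5a\xc3")
dec = partial_encrypt(enc, 0.25, key=b"\x5a\xc3")
```

The scheme is secure only to the extent that the unencrypted tail is useless without the encrypted head, which is exactly the property the paper argues for in quadtree and SPIHT bitstreams.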
Rate-distortion optimized tree-structured compression algorithms for piecewise smooth images
 IEEE Trans. Image Processing
, 2005
Abstract

Cited by 76 (16 self)
This paper presents novel coding algorithms based on tree-structured segmentation, which achieve the correct asymptotic rate-distortion (RD) behavior for a simple class of signals, known as piecewise polynomials, by using an RD based prune and join scheme. For the one-dimensional (1D) case, our scheme is based on binary tree segmentation of the signal. This scheme approximates the signal segments using polynomial models and utilizes an RD optimal bit allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly to achieve the correct exponentially decaying RD behavior D(R) ∼ c0 2^(−c1 R), thus improving over classic wavelet schemes. We also prove that the computational complexity of the scheme is O(N log N). We then show the extension of this scheme to the two-dimensional (2D) case using a quadtree. This quadtree coding scheme also achieves an exponentially decaying RD behavior, for the polygonal image model composed of a white polygon-shaped object against a uniform black background, with low computational cost of O(N log N). Again, the key is an RD optimized prune and join strategy. Finally, we conclude with numerical results, which show that the proposed quadtree coding scheme outperforms JPEG 2000 by about 1 dB for real images, like Cameraman, at low rates of around 0.15 bpp.
Trends and Perspectives in Image and Video Coding
Proceedings of the IEEE, 2005
A video compression scheme with optimal bit allocation between displacement vector field and displaced frame difference
 in Proc. IEEE International Conference on Image Processing
, 1997
Abstract

Cited by 38 (12 self)
In object-based video, the encoding of the video data is decoupled into the encoding of shape, motion, and texture information, which enables certain functionalities like content-based interactivity and scalability. However, the problem of how to jointly encode these separate signals to reach the best coding efficiency has never been solved thoroughly. In this paper, we present an operational rate-distortion optimal bit allocation scheme that provides a solution to this problem. Our approach is based on Lagrangian relaxation and dynamic programming. Experimental results indicate that the proposed optimal encoding approach has considerable gains over an ad-hoc method without optimization. Furthermore, the proposed algorithm is much more efficient than exhaustive search.
Joint space-frequency segmentation using balanced wavelet packet tree for least-cost image representation
 IEEE Trans. Image Processing
, 1997
Abstract

Cited by 29 (5 self)
We examine the question of how to choose a space-varying filter-bank tree representation that minimizes some additive cost function for an image. The idea is that for a particular cost function, e.g., energy compaction or quantization distortion, some tree structures perform better than others. While the wavelet tree represents a good choice for many signals, it is generally outperformed by the best tree from the library of wavelet packet frequency-selective trees. The recently introduced double-tree library of bases performs better still, by allowing different wavelet packet trees over all binary spatial segments of the image. We build on this foundation and present efficient new pruning algorithms for both one- and two-dimensional (1D and 2D) trees that will find the best basis from a library that is many times larger than the library of the single-tree or double-tree algorithms. The augmentation of the library of bases overcomes the constrained nature of the spatial variation in the double-tree bases, and is a significant enhancement in practice. Use of these algorithms to select the least-cost expansion for images with a rate-distortion cost function gives a very effective signal-adaptive compression scheme. This scheme is universal in the sense that, without assuming a model for the signal or making use of training data, it performs very well over a large class of signal types. In experiments it achieves compression rates that are competitive with the best training-based schemes.
Image Compression with Anisotropic Diffusion
, 2008
Abstract

Cited by 24 (15 self)
Compression is an important field of digital image processing where well-engineered methods with high performance exist. Partial differential equations (PDEs), however, have not been much explored in this context so far. In our paper we introduce a novel framework for image compression that makes use of the interpolation qualities of edge-enhancing diffusion. Although this anisotropic diffusion equation with a diffusion tensor was originally proposed for image denoising, we show that it outperforms many other PDEs when sparse scattered data must be interpolated. To exploit this property for image compression, we consider an adaptive triangulation method for removing less significant pixels from the image. The remaining points serve as scattered interpolation data for the diffusion process. They can be coded in ...
Forward-adaptive quantization with optimal overhead cost for image and video coding with applications to MPEG video coders
, 1995
Abstract

Cited by 23 (7 self)
We address the problem of optimal forward-adaptive quantization in the video and image coding framework. In this framework, as is consistent with that of most practical coders like MPEG, the encoder has the capability of changing the quantizer periodically (e.g., at a macroblock interval in MPEG). In this paper, we formulate an optimal strategy, based on dynamic programming, for updating the quantizer choice for coding an image or video signal. While in some coding environments the overhead needed to specify the quantizer used by each block is equal for every choice of quantizer, in other situations (e.g., MPEG) the overhead cost is higher if the quantizer changes from one block to the next. We concentrate on the latter case, which will be more likely encountered in situations where the overhead represents a significant fraction of the overall rate, as can be the case if a low bit rate is used (e.g., error frames in a typical motion-compensated video coder). We provide empirical evidence ...
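A dynamic program of this kind can be sketched as a Viterbi-style recursion over blocks, where the state is the quantizer used for the previous block. The cost values and the flat per-switch overhead below are simplifications of mine, not the paper's model:

```python
def dp_quantizers(costs, switch_bits, lam):
    """Minimal total Lagrangian cost over all quantizer sequences.

    costs[b][q] = D + lam * R of coding block b with quantizer q;
    changing quantizer between adjacent blocks costs lam * switch_bits extra.
    """
    Q = len(costs[0])
    best = list(costs[0])            # best[q]: min cost ending with quantizer q
    for block in costs[1:]:
        prev = best
        best = []
        for q in range(Q):
            # Cheapest way to arrive at quantizer q, paying for a switch if needed.
            trans = min(prev[p] + (0.0 if p == q else lam * switch_bits)
                        for p in range(Q))
            best.append(trans + block[q])
    return min(best)
```

With B blocks and Q quantizer choices this runs in O(B·Q²), versus Q^B for exhaustive search over sequences, which is the efficiency argument such DP formulations rely on.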
HEVC Complexity and Implementation Analysis
Abstract

Cited by 17 (1 self)
Advances in video compression technology have been driven by ever-increasing processing power available in software and hardware. The emerging High Efficiency Video Coding (HEVC) standard aims to provide a doubling in coding efficiency with respect to the H.264/AVC high profile, delivering the same video quality at half the bit rate. In this paper, complexity-related aspects that were considered in the standardization process are described. Furthermore, profiling of reference software and optimized software gives an indication of where HEVC may be more complex than its predecessors and where it may be simpler. Overall, the complexity of HEVC decoders does not appear to be significantly different from that of H.264/AVC decoders; this makes HEVC decoding in software very practical on current hardware. HEVC encoders are expected to be several times more complex than H.264/AVC encoders and will be a subject of research in years to come.

Index Terms: High Efficiency Video Coding (HEVC), video coding.