Results 1 - 10 of 259
Factoring wavelet transforms into lifting steps
 J. Fourier Anal. Appl.
, 1998
Abstract
Cited by 434 (7 self)
ABSTRACT. This paper is essentially tutorial in nature. We show how any discrete wavelet transform or two-band subband filtering with finite filters can be decomposed into a finite sequence of simple filtering steps, which we call lifting steps but which are also known as ladder structures. This decomposition corresponds to a factorization of the polyphase matrix of the wavelet or subband filters into elementary matrices. That such a factorization is possible is well known to algebraists (and expressed by the formula SL(n; R[z, z^-1]) = E(n; R[z, z^-1])); it is also used in linear systems theory in the electrical engineering community. We present here a self-contained derivation, building the decomposition from basic principles such as the Euclidean algorithm, with a focus on applying it to wavelet filtering. This factorization provides an alternative to the lattice factorization, with the advantage that it can also be used in the biorthogonal, i.e., non-unitary case. Like the lattice factorization, the decomposition presented here asymptotically reduces the computational complexity of the transform by a factor of two. It has other applications, such as the possibility of defining a wavelet-like transform that maps integers to integers.
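The predict/update structure the abstract describes can be sketched in a few lines. This is a minimal illustration using the Haar filter pair (the function names and the choice of Haar are ours, not the paper's): each lifting step is trivially invertible by running it backwards with opposite signs, which is also what makes the integer-to-integer variant possible.

```python
# Sketch of the lifting idea: a wavelet transform built from simple
# "predict" and "update" steps, each trivially invertible. Haar is the
# minimal example; the split/predict/update pattern is the general one.

def haar_lifting_forward(x):
    """Forward Haar transform via lifting (len(x) must be even)."""
    even = x[0::2]
    odd = x[1::2]
    # Predict step: each odd sample is predicted by its even neighbour.
    d = [o - e for o, e in zip(odd, even)]
    # Update step: evens are updated so they carry the local average.
    s = [e + di / 2 for e, di in zip(even, d)]
    return s, d

def haar_lifting_inverse(s, d):
    """Invert by undoing the steps in reverse order with opposite signs."""
    even = [si - di / 2 for si, di in zip(s, d)]
    odd = [di + e for di, e in zip(d, even)]
    x = [0.0] * (2 * len(s))
    x[0::2] = even
    x[1::2] = odd
    return x

x = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0]
s, d = haar_lifting_forward(x)
assert haar_lifting_inverse(s, d) == x  # perfect reconstruction
```

Because every step is a unit-diagonal triangular operation on the polyphase components, inversion never divides by a filter, which is why the factorization extends to the non-unitary case.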
Overview of the scalable video coding extension of the H.264/AVC standard
 IEEE Transactions on Circuits and Systems for Video Technology
, 2007
Abstract
Cited by 187 (4 self)
With the introduction of the H.264/AVC video coding standard, significant improvements have recently been demonstrated in video compression capability. The Joint Video Team of the ITU-T VCEG and the ISO/IEC MPEG has now also standardized a Scalable Video Coding (SVC) extension of the H.264/AVC standard. SVC enables the transmission and decoding of partial bit streams to provide video services with lower temporal or spatial resolutions or reduced fidelity while retaining a reconstruction quality that is high relative to the rate of the partial bit streams. Hence, SVC provides functionalities such as graceful degradation in lossy transmission environments as well as bit rate, format, and power adaptation. These functionalities provide enhancements to transmission and storage applications. SVC has achieved significant improvements in coding efficiency with an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. This paper provides an overview of the basic concepts for extending H.264/AVC towards SVC. Moreover, the basic tools for providing temporal, spatial, and quality scalability are described in detail and experimentally analyzed regarding their efficiency and complexity.
Acoustical and Environmental Robustness in Automatic Speech Recognition
, 1990
Abstract
Cited by 169 (9 self)
This dissertation describes a number of algorithms developed to increase the robustness of automatic speech recognition systems with respect to changes in the environment. These algorithms attempt to improve the recognition accuracy of speech recognition systems when they are trained and tested in different acoustical environments, and when a desktop microphone (rather than a close-talking microphone) is used for speech input. Without such processing, mismatches between training and testing conditions produce an unacceptable degradation in recognition accuracy. Two kinds of ...
Voice communication across the Internet: a network voice terminal
, 1992
Abstract
Cited by 120 (17 self)
Voice conferencing has attracted interest as a useful and viable first real-time application on the Internet. This report describes Nevot, a network voice terminal meant to support multiple concurrent conferences, both two-party and multiparty, on top of a variety of transport protocols and using audio encodings ranging from vocoder to multichannel CD quality. As it is to be used as an experimental tool, it offers extensive configuration, trace, and statistics options. The design is kept modular so that additional audio encodings, transport and real-time protocols, as well as user interfaces can be added readily. In the first part, the report describes the X-based graphical user interface, the configuration, and operation. The second part describes the individual components of Nevot and compares alternate implementations. An appendix covers the installation of Nevot.
Architectural Power Analysis: The Dual Bit Type Method
, 1995
Abstract
Cited by 112 (4 self)
This paper describes a novel strategy for generating accurate black-box models of datapath power consumption at the architecture level. This is achieved by recognizing that power consumption in digital circuits is affected by activity as well as physical capacitance. Since existing strategies characterize modules for purely random inputs, they fail to account for the effect of signal statistics on switching activity. The Dual Bit Type (DBT) model, however, accounts not only for the random activity of the least significant bits (LSB's), but also for the correlated activity of the most significant bits (MSB's), which contain two's-complement sign information. The resulting model is parameterizable in terms of complexity factors such as word length and can be applied to a wide variety of modules ranging from adders, shifters, and multipliers to register files and memories. Since the model operates at the register transfer level (RTL), it is orders of magnitude faster than gate- or circuit-level tools; while other architecture-level techniques often err by 50-100% or more, the DBT method offers error rates on the order of 10-15%.
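The LSB/MSB split the abstract describes is easy to observe empirically. The sketch below is an illustrative experiment, not the paper's characterization procedure: the slow random-walk signal is hypothetical test data, and the bit width and thresholds are our choices. For correlated two's-complement data, the low-order bits toggle like noise while the high-order sign bits toggle far less often, which is exactly the two-region behavior the DBT model captures.

```python
# Measure per-bit switching activity of a two's-complement stream.
# Correlated data: LSBs behave randomly, MSB sign bits toggle rarely.
import random

WIDTH = 16

def to_bits(v, width=WIDTH):
    """Two's-complement bit vector, LSB first (Python's >> is arithmetic)."""
    return [(v >> i) & 1 for i in range(width)]

random.seed(0)
# Strongly correlated signal: a slow, clamped random walk.
x, samples = 0, []
for _ in range(20000):
    x += random.randint(-50, 50)
    x = max(-30000, min(30000, x))
    samples.append(x)

toggles = [0] * WIDTH
prev = to_bits(samples[0])
for v in samples[1:]:
    cur = to_bits(v)
    for i in range(WIDTH):
        toggles[i] += prev[i] ^ cur[i]
    prev = cur

rates = [t / (len(samples) - 1) for t in toggles]
print("LSB toggle rate:", rates[0])       # noise-like region
print("sign-bit toggle rate:", rates[15])  # correlated sign region
```

A purely random characterization would put every bit near the LSB rate, which is why it mispredicts power for real, correlated datapath signals.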
Nonlinear wavelet transforms for image coding via lifting
, 2003
Abstract
Cited by 91 (3 self)
We investigate central issues such as invertibility, stability, synchronization, and frequency characteristics for nonlinear wavelet transforms built using the lifting framework. The nonlinearity comes from adaptively choosing between a class of linear predictors within the lifting framework. We also describe how earlier families of nonlinear filter banks can be extended through the use of prediction functions operating on a causal neighborhood of pixels. Preliminary compression results for model and real-world images demonstrate the promise of our techniques.
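The invertibility question the abstract raises has a simple core, sketched below with hypothetical predictors and an arbitrary edge threshold (neither is the paper's actual filter class): if the predict step chooses its predictor using only the even samples, which the lifting structure leaves unchanged, the decoder can repeat the same choice and the nonlinear transform stays perfectly invertible with no side information.

```python
# Adaptive-prediction lifting: predictor choice depends only on data
# the synthesis side also has, so perfect reconstruction is preserved.

def choose_predictor(even, i):
    """Pick a predictor from even samples only (decoder repeats this)."""
    left = even[i]
    right = even[i + 1] if i + 1 < len(even) else even[i]
    if abs(right - left) > 10:        # apparent edge: don't average across it
        return lambda l, r: l
    return lambda l, r: (l + r) / 2   # smooth region: average neighbours

def forward(x):
    even, odd = x[0::2], x[1::2]
    d = []
    for i, o in enumerate(odd):
        right = even[i + 1] if i + 1 < len(even) else even[i]
        p = choose_predictor(even, i)
        d.append(o - p(even[i], right))
    return even, d

def inverse(even, d):
    odd = []
    for i, di in enumerate(d):
        right = even[i + 1] if i + 1 < len(even) else even[i]
        p = choose_predictor(even, i)  # same choice: evens are unchanged
        odd.append(di + p(even[i], right))
    x = [0.0] * (len(even) + len(odd))
    x[0::2], x[1::2] = even, odd
    return x

x = [10.0, 11.0, 12.0, 50.0, 52.0, 53.0]  # smooth run, then an edge
e, d = forward(x)
assert inverse(e, d) == x
```

Note that the edge-adjacent detail (the jump from 12 to 50) is not smeared by averaging across the discontinuity, which is the compression motivation for adapting the predictor.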
Feedback-Based Error Control for Mobile Video Transmission
 Proceedings of the IEEE
, 1999
Abstract
Cited by 85 (10 self)
In this paper, we discuss such last-line-of-defense techniques that can be used to make low bit-rate video coders error resilient. We concentrate on techniques that use acknowledgment information provided by a feedback channel ...
Wavelet transforms versus Fourier transforms
 Department of Mathematics, MIT, Cambridge MA
, 1993
Abstract
Cited by 71 (2 self)
Abstract. This note is a very basic introduction to wavelets. It starts with an orthogonal basis of piecewise constant functions, constructed by dilation and translation. The "wavelet transform" maps each f(x) to its coefficients with respect to this basis. The mathematics is simple and the transform is fast (faster than the Fast Fourier Transform, which we briefly explain), but approximation by piecewise constants is poor. To improve this first wavelet, we are led to dilation equations and their unusual solutions. Higher-order wavelets are constructed, and it is surprisingly quick to compute with them, always indirectly and recursively. We comment informally on the contest between these transforms in signal processing, especially for video and image compression (including high-definition television). So far the Fourier Transform, or its 8 by 8 windowed version, the Discrete Cosine Transform, is often chosen. But wavelets are already competitive, and they are ahead for fingerprints. We present a sample of this developing theory.
1. The Haar wavelet. To explain wavelets we start with an example. It has every property we hope for, except one. If that one defect is accepted, the construction is simple and the computations are fast. By trying to remove the defect, we are led to dilation equations and recursively defined functions and a small world of fascinating new problems, many still unsolved. A sensible person would stop after the first wavelet, but fortunately mathematics goes on. The basic example is easier to draw than to describe: W(x) ...
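The "indirect and recursive" computation the note alludes to can be sketched concretely for the Haar case (function name and unnormalized averaging/differencing convention are our choices). Each pass averages and differences pairs, then recurses on half the data, so the total work is n + n/2 + n/4 + ... < 2n operations: linear time, which is the sense in which the wavelet transform beats the O(n log n) FFT.

```python
# Multi-level Haar transform as a cascade of averaging/differencing
# passes, each on half the data of the previous one (O(n) total work).

def haar_transform(x):
    """Haar wavelet transform of a length-2^k signal (unnormalized)."""
    coeffs = []
    s = list(x)
    while len(s) > 1:
        avg = [(s[2*i] + s[2*i+1]) / 2 for i in range(len(s) // 2)]
        diff = [(s[2*i] - s[2*i+1]) / 2 for i in range(len(s) // 2)]
        coeffs.append(diff)      # detail coefficients at this scale
        s = avg                  # recurse on the coarse averages
    return s[0], coeffs          # overall mean + details, finest first

mean, details = haar_transform([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
print(mean)     # 7.0, the overall average
print(details)  # [[-1.0, -1.0, 1.0, 0.0], [-3.0, 1.0], [1.0]]
```

The poor approximation the note mentions shows up here too: reconstructing from only the coarse averages replaces the signal by piecewise constants.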
Lossy Source Coding
 IEEE Trans. Inform. Theory
, 1998
Abstract
Cited by 71 (1 self)
Lossy coding of speech, high-quality audio, still images, and video is commonplace today. However, in 1948, few lossy compression systems were in service. Shannon introduced and developed the theory of source coding with a fidelity criterion, also called rate-distortion theory. For the first 25 years of its existence, rate-distortion theory had relatively little impact on the methods and systems actually used to compress real sources. Today, however, rate-distortion theoretic concepts are an important component of many lossy compression techniques and standards. We chronicle the development of rate-distortion theory and provide an overview of its influence on the practice of lossy source coding.
Index Terms: Data compression, image coding, speech coding, rate-distortion theory, signal coding, source coding with a fidelity criterion, video coding.
Source Model for Transform Video Coder and Its Application, Part I: Fundamental Theory
 IEEE Trans. on CSVT
, 1997
Abstract
Cited by 66 (0 self)
Abstract: In the first part of this paper, we derive a source model describing the relationship between bits, distortion, and quantization step size for transform coders. Based on this source model, a variable frame rate coding algorithm is developed. The basic idea is to select a proper picture frame rate to ensure a minimum picture quality for every frame. Because our source model can predict approximately the number of coded bits when a certain quantization step size is used, we could predict the quality and bits of coded images without going through the entire real coding process. Therefore, we could skip the right number of picture frames to accomplish the goal of constant image quality. Our proposed variable frame rate coding schemes are simple but quite effective, as demonstrated by simulation results. The results of using another variable frame rate scheme, Test Model for H.263 (TMN5), and the results of using a fixed frame rate coding scheme, Reference Model 8 for H.261 (RM8), are also provided for comparison.
Index Terms: Image coding, rate-distortion theory, source coding.
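The control idea in the abstract, predicting bits from the quantizer step and skipping frames to keep per-frame quality constant, can be sketched as below. The rate model R(Q) = a/Q + b and the parameter values are hypothetical stand-ins, not the paper's derived source model; only the decision loop is the point.

```python
# Illustrative variable-frame-rate control: skip frames until the bit
# budget per coded frame covers the bits the rate model predicts are
# needed at the quality-guaranteeing quantizer step.

def predicted_bits(q, a=40000.0, b=500.0):
    """Hypothetical rate model: coded bits for quantization step q."""
    return a / q + b

def frames_to_skip(budget_bits_per_frame, q_for_min_quality):
    """Number of frames to skip so the next coded frame's accumulated
    budget covers the predicted bit cost at the target quantizer."""
    need = predicted_bits(q_for_min_quality)
    skip = 0
    while budget_bits_per_frame * (skip + 1) < need:
        skip += 1
    return skip

# At 8000 bits/frame, coding at q=4 needs 10500 predicted bits, so one
# frame is skipped to give the next coded frame a 16000-bit budget.
print(frames_to_skip(8000.0, 4.0))
```

Because the model predicts the bit cost before coding, the skip decision is made without running the full coding process, which is the efficiency claim of the abstract.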