Factoring wavelet transforms into lifting steps
 J. Fourier Anal. Appl
, 1998
Abstract

Cited by 434 (7 self)
ABSTRACT. This paper is essentially tutorial in nature. We show how any discrete wavelet transform or two-band subband filtering with finite filters can be decomposed into a finite sequence of simple filtering steps, which we call lifting steps but which are also known as ladder structures. This decomposition corresponds to a factorization of the polyphase matrix of the wavelet or subband filters into elementary matrices. That such a factorization is possible is well known to algebraists (and expressed by the formula); it is also used in linear systems theory in the electrical engineering community. We present here a self-contained derivation, building the decomposition from basic principles such as the Euclidean algorithm, with a focus on applying it to wavelet filtering. This factorization provides an alternative to the lattice factorization, with the advantage that it can also be used in the biorthogonal, i.e., non-unitary case. Like the lattice factorization, the decomposition presented here asymptotically reduces the computational complexity of the transform by a factor of two. It has other applications, such as the possibility of defining a wavelet-like transform that maps integers to integers.
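As a concrete illustration of a lifting step (a toy Python sketch, not the paper's general factorization machinery), the Haar transform can be written as a predict step followed by an update step, each of which is trivially invertible:

```python
def haar_lifting(x):
    """Forward Haar transform of an even-length sequence via two lifting steps.
    Toy illustration only."""
    evens, odds = x[0::2], x[1::2]
    # Predict step: detail coefficients are the prediction errors odd - even
    detail = [o - e for e, o in zip(evens, odds)]
    # Update step: approximation is even + detail/2, i.e. (even + odd)/2
    approx = [e + d / 2 for e, d in zip(evens, detail)]
    return approx, detail

def haar_inverse(approx, detail):
    """Invert by undoing the lifting steps in reverse order with flipped signs."""
    evens = [a - d / 2 for a, d in zip(approx, detail)]
    odds = [e + d for e, d in zip(evens, detail)]
    out = []
    for e, o in zip(evens, odds):
        out += [e, o]
    return out
```

Because each lifting step is undone simply by flipping a sign, perfect reconstruction holds no matter what predict and update filters are chosen, which is what makes integer-to-integer variants possible.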
Effective Erasure Codes for Reliable Computer Communication Protocols
, 1997
Abstract

Cited by 412 (14 self)
Reliable communication protocols require that all the intended recipients of a message receive the message intact. Automatic Repeat reQuest (ARQ) techniques are used in unicast protocols, but they do not scale well to multicast protocols with large groups of receivers, since segment losses tend to become uncorrelated, greatly reducing the effectiveness of retransmissions. In such cases, Forward Error Correction (FEC) techniques can be used, consisting of the transmission of redundant packets (based on error-correcting codes) that allow the receivers to recover from independent packet losses. Despite the widespread use of error-correcting codes in many fields of information processing, and a general consensus on the usefulness of FEC techniques within some of the Internet protocols, very few actual implementations of the latter exist. This probably derives from the different types of applications, and from concerns related to the complexity of implementing such codes in software. To f...
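The simplest instance of the idea (a toy sketch, not the erasure codes the paper actually develops) is a single XOR parity packet, which lets a receiver rebuild any one lost data packet without a retransmission:

```python
def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets):
    """Parity packet: XOR of all data packets (assumed equal length)."""
    parity = packets[0]
    for pkt in packets[1:]:
        parity = xor_bytes(parity, pkt)
    return parity

def recover_lost(survivors, parity):
    """Reconstruct the single missing packet by XORing the parity
    with every surviving data packet (toy single-erasure case only)."""
    missing = parity
    for pkt in survivors:
        missing = xor_bytes(missing, pkt)
    return missing
```

Practical erasure codes such as Reed-Solomon generalize this, recovering as many lost packets as there are redundant ones.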
Wavelet transforms versus Fourier transforms
 Department of Mathematics, MIT, Cambridge MA
, 1993
Abstract

Cited by 71 (2 self)
Abstract. This note is a very basic introduction to wavelets. It starts with an orthogonal basis of piecewise constant functions, constructed by dilation and translation. The "wavelet transform" maps each f(x) to its coefficients with respect to this basis. The mathematics is simple and the transform is fast (faster than the Fast Fourier Transform, which we briefly explain), but approximation by piecewise constants is poor. To improve this first wavelet, we are led to dilation equations and their unusual solutions. Higher-order wavelets are constructed, and it is surprisingly quick to compute with them — always indirectly and recursively. We comment informally on the contest between these transforms in signal processing, especially for video and image compression (including high-definition television). So far the Fourier Transform — or its 8 by 8 windowed version, the Discrete Cosine Transform — is often chosen. But wavelets are already competitive, and they are ahead for fingerprints. We present a sample of this developing theory.

1. The Haar wavelet. To explain wavelets we start with an example. It has every property we hope for, except one. If that one defect is accepted, the construction is simple and the computations are fast. By trying to remove the defect, we are led to dilation equations and recursively defined functions and a small world of fascinating new problems — many still unsolved. A sensible person would stop after the first wavelet, but fortunately mathematics goes on. The basic example is easier to draw than to describe: W(x)
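The averaging-and-differencing construction the note begins with can be sketched in a few lines (an illustration, not code from the note):

```python
def haar_decompose(x):
    """Multilevel Haar analysis of a length-2^k signal: at each level keep
    pairwise averages and record pairwise half-differences as detail."""
    levels = []
    while len(x) > 1:
        avg = [(x[2*i] + x[2*i + 1]) / 2 for i in range(len(x) // 2)]
        diff = [(x[2*i] - x[2*i + 1]) / 2 for i in range(len(x) // 2)]
        levels.append(diff)
        x = avg
    levels.append(x)          # overall average, the coarsest level
    return levels[::-1]       # [overall avg, coarse details, ..., finest details]
```

The signal is recovered exactly from the overall average plus the details at every scale, and each level costs work proportional to its length, so the whole transform is O(n), as opposed to the FFT's O(n log n).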
Superfast solution of real positive definite Toeplitz systems
 SIAM J. Matrix Anal. Appl
, 1988
Abstract

Cited by 54 (1 self)
Abstract. We describe an implementation of the generalized Schur algorithm for the superfast solution of real positive definite Toeplitz systems of order n + 1, where n = 2^ν. Our implementation uses the split-radix fast Fourier transform algorithms for real data of Duhamel. We are able to obtain the nth Szegő polynomial using fewer than 8n(log_2 n)^2 real arithmetic operations without explicit use of the bit-reversal permutation. Since Levinson’s algorithm requires slightly more than 2n^2 operations to obtain this polynomial, we achieve crossover with Levinson’s algorithm at n = 256. Key words. Toeplitz matrix, Schur’s algorithm, split-radix Fast Fourier Transform
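A quick sanity check of the stated crossover, using the two operation-count bounds exactly as quoted: they coincide at n = 256.

```python
import math

def superfast_bound(n):
    """Bound from the abstract: fewer than 8 n (log2 n)^2 real operations."""
    return 8 * n * math.log2(n) ** 2

def levinson_bound(n):
    """Levinson's algorithm: slightly more than 2 n^2 operations."""
    return 2 * n ** 2

# At n = 256: 8 * 256 * 8^2 = 2 * 256^2 = 131072, so the bounds meet exactly
```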
NONSUBSAMPLED CONTOURLET TRANSFORM: FILTER DESIGN AND APPLICATIONS IN DENOISING
Abstract

Cited by 53 (4 self)
In this paper we study the nonsubsampled contourlet transform. We address the corresponding filter design problem using the McClellan transformation. We show how zeroes can be imposed in the filters so that the iterated structure produces regular basis functions. The proposed design framework yields filters that can be implemented efficiently through a lifting factorization. We apply the constructed transform to image noise removal, where the results obtained are comparable to the state of the art, being superior in some cases.
A generalized method for constructing subquadratic complexity GF(2^k) multipliers
 IEEE Transactions on Computers
, 2004
Abstract

Cited by 22 (0 self)
We introduce a generalized method for constructing subquadratic complexity multipliers for even characteristic field extensions. The construction is obtained by recursively extending short convolution algorithms and nesting them. To obtain the short convolution algorithms, the Winograd short convolution algorithm is reintroduced and analyzed in the context of polynomial multiplication. We present a recursive construction technique that extends any d-point multiplier into an n = d^k point multiplier with area that is subquadratic and delay that is logarithmic in the bit-length n. We present a thorough analysis that establishes the exact space and time complexities of these multipliers. Using the recursive construction method we obtain six new constructions, among which one turns out to be identical to the Karatsuba multiplier. All six algorithms have subquadratic space complexities, and two of the algorithms have significantly better time complexities than the Karatsuba algorithm. Keywords: Bit-parallel multipliers, finite fields, Winograd convolution
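For context, the Karatsuba construction that one of the six algorithms recovers replaces four half-size multiplications with three; over GF(2), addition and subtraction are both XOR. A bit-packed software sketch (illustrative only, not the paper's hardware construction):

```python
def clmul(a, b):
    """Schoolbook carry-less multiplication of bit-packed GF(2)[x] polynomials."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def karatsuba_gf2(a, b, w):
    """One Karatsuba level over GF(2)[x] for w-bit operands:
    three half-size products instead of four; XOR plays the role of +/-."""
    if w <= 8:
        return clmul(a, b)
    h = w // 2
    mask = (1 << h) - 1
    a0, a1 = a & mask, a >> h
    b0, b1 = b & mask, b >> h
    low = karatsuba_gf2(a0, b0, h)
    high = karatsuba_gf2(a1, b1, h)
    mid = karatsuba_gf2(a0 ^ a1, b0 ^ b1, h)
    # (a0+a1)(b0+b1) = a1b0 + a0b1 + low + high, and XOR cancels the extras
    return (high << (2 * h)) ^ ((low ^ high ^ mid) << h) ^ low
```

Each recursion level turns a 2h-bit multiplication into three h-bit ones, giving the familiar O(n^log2(3)) subquadratic cost when fully unrolled.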
Low Complexity Bit Parallel Architectures for Polynomial Basis Multiplication over GF(2^m)
 IEEE Transactions on Computers
, 2004
Abstract

Cited by 17 (2 self)
Abstract—Representing the field elements with respect to the polynomial (or standard) basis, we consider bit-parallel architectures for multiplication over the finite field GF(2^m). To this end, we first derive a new formulation for polynomial basis multiplication in terms of the reduction matrix Q. The main advantage of this new formulation is that it can be used with any field-defining irreducible polynomial. Using this formulation, we then develop a generalized architecture for the multiplier and analyze the time and gate complexities of the proposed multiplier as a function of degree m and the reduction matrix Q. To the best of our knowledge, this is the first time that these complexities are given in terms of Q. Unlike most other articles on bit-parallel finite field multipliers, here we also consider the number of signals to be routed in hardware implementation, and we show that, compared to the well-known Mastrovito multiplier, the proposed architecture has fewer routed signals. In this article, the proposed generalized architecture is further optimized for three special types of polynomials, namely, equally spaced polynomials, trinomials, and pentanomials. We have obtained explicit formulas and complexities of the multipliers for these three special irreducible polynomials. This makes it very easy for a designer to implement the proposed multipliers using hardware description languages like VHDL and Verilog with minimum knowledge of finite field arithmetic. Index Terms—Finite or Galois field, Mastrovito multiplier, all-one polynomial, polynomial basis, trinomial, pentanomial, equally spaced polynomial.
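The software analogue of polynomial-basis multiplication, multiply in GF(2)[x] and then reduce modulo the irreducible polynomial, can be sketched as follows (an illustration only; the paper's formulation works with the reduction matrix Q, which this sketch does not construct):

```python
def gf2m_mul(a, b, m, poly):
    """Polynomial-basis multiplication in GF(2^m): carry-less product,
    then reduction modulo the irreducible polynomial (bit-packed, degree m).
    Toy software sketch, not the paper's bit-parallel architecture."""
    # Carry-less multiplication in GF(2)[x]
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    # Reduction: cancel each set bit of degree >= m, highest first
    for i in range(2 * m - 2, m - 1, -1):
        if r & (1 << i):
            r ^= poly << (i - m)
    return r
```

The test value uses the AES field GF(2^8) with m(x) = x^8 + x^4 + x^3 + x + 1 (bit pattern 0x11B), in which 0x53 and 0xCA are a standard pair of multiplicative inverses.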
Automatic Generation of Prime Length FFT Programs
 IEEE Transactions on Signal Processing
, 1996
Abstract

Cited by 14 (0 self)
We describe a set of programs for circular convolution and prime length FFTs that are relatively short, possess great structure, share many computational procedures, and cover a large variety of lengths. The programs make clear the structure of the algorithms and clearly enumerate independent computational branches that can be performed in parallel. Moreover, each of these independent operations is made up of a sequence of suboperations which can be implemented as vector/parallel operations. This is in contrast with previously existing programs for prime length FFTs: they consist of straight-line code, no code is shared between them, and they cannot be easily adapted for vector/parallel implementations. We have also developed a program that automatically generates these programs for prime length FFTs. This code generating program requires information only about a set of modules for computing cyclotomic convolutions. Contact Address: Ivan W. Selesnick, Electrical and Computer Engineer...
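The shared building block is cyclic convolution (Rader's theorem turns a prime-length-p DFT into a length p-1 cyclic convolution). Computed directly it looks like this, a naive O(n^2) sketch for illustration only:

```python
def circular_convolve(x, h):
    """Direct cyclic convolution of equal-length sequences:
    y[i] = sum over j of x[j] * h[(i - j) mod n]."""
    n = len(x)
    return [sum(x[j] * h[(i - j) % n] for j in range(n)) for i in range(n)]
```

Convolving with a shifted unit impulse cyclically rotates the input, which makes the wrap-around easy to check by hand.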
Probabilistic Arithmetic
, 1989
Abstract

Cited by 13 (0 self)
This thesis develops the idea of probabilistic arithmetic. The aim is to replace arithmetic operations on numbers with arithmetic operations on random variables. Specifically, we are interested in numerical methods of calculating convolutions of probability distributions. The long-term goal is to be able to handle random problems (such as the determination of the distribution of the roots of random algebraic equations) using algorithms which have been developed for the deterministic case. To this end, in this thesis we survey a number of previously proposed methods for calculating convolutions and representing probability distributions and examine their defects. We develop some new results for some of these methods (the Laguerre transform and the histogram method), but ultimately find them unsuitable. We find that the details of how the ordinary convolution equations are calculated are
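For discrete random variables, the convolution in question is easy to state; a toy sketch of the distribution of a sum of independent variables (illustrative only, since the thesis is concerned with numerical methods for continuous distributions):

```python
def convolve_pmf(p, q):
    """Distribution of X + Y for independent X ~ p, Y ~ q,
    with PMFs given as {value: probability} dicts."""
    out = {}
    for x, px in p.items():
        for y, qy in q.items():
            out[x + y] = out.get(x + y, 0.0) + px * qy
    return out

# Example: sum of two fair dice has the triangular PMF peaked at 7
die = {k: 1 / 6 for k in range(1, 7)}
two_dice = convolve_pmf(die, die)
```

The hard part the thesis addresses is doing the same for continuous distributions numerically, where the choice of representation (Laguerre expansions, histograms, and so on) determines both accuracy and cost.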