Results 1-10 of 18
An Optimal Algorithm for Approximate Nearest Neighbor Searching in Fixed Dimensions
 ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS
, 1994
Abstract

Cited by 786 (31 self)
Consider a set S of n data points in real d-dimensional space, R^d, where distances are measured using any Minkowski metric. In nearest neighbor searching we preprocess S into a data structure, so that given any query point q ∈ R^d, the closest point of S to q can be reported quickly. Given any positive real ε, a data point p is a (1 + ε)-approximate nearest neighbor of q if its distance from q is within a factor of (1 + ε) of the distance to the true nearest neighbor. We show that it is possible to preprocess a set of n points in R^d in O(dn log n) time and O(dn) space, so that given a query point q ∈ R^d and ε > 0, a (1 + ε)-approximate nearest neighbor of q can be computed in O(c_{d,ε} log n) time, where c_{d,ε} ≤ d⌈1 + 6d/ε⌉^d is a factor depending only on dimension and ε. In general, we show that given an integer k ≥ 1, (1 + ε)-approximations to the k nearest neighbors of q can be computed in additional O(kd log n) time.
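The (1 + ε) guarantee is easy to state operationally. Below is a minimal brute-force sketch (Euclidean metric assumed; the names `nearest` and `is_approx_nn` are illustrative, not from the paper, which builds a tree-based search structure rather than scanning linearly):

```python
import math

def nearest(points, q):
    # Exact nearest neighbor of q by linear scan (Euclidean metric).
    return min(points, key=lambda p: math.dist(p, q))

def is_approx_nn(points, q, p, eps):
    # p is a (1 + eps)-approximate NN of q if dist(p, q) is within
    # a factor (1 + eps) of the distance to the true nearest neighbor.
    d_true = math.dist(nearest(points, q), q)
    return math.dist(p, q) <= (1 + eps) * d_true

pts = [(0.0, 0.0), (3.0, 4.0), (1.0, 1.0)]
print(is_approx_nn(pts, (0.9, 0.9), (1.0, 1.0), 0.1))  # prints True: the exact NN always qualifies
```

Any point within the (1 + ε) ball qualifies, which is what lets the paper's data structure stop its search early.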
Quantization
 IEEE TRANS. INFORM. THEORY
, 1998
Abstract

Cited by 639 (11 self)
The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analog-to-digital conversion was first recognized during the early development of pulse-code modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate distortion theory, which would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.
Fractal Image Compression via Nearest Neighbor Search
 Conf. Proc. NATO ASI Fractal Image Encoding and Analysis
, 1996
Abstract

Cited by 19 (6 self)
In fractal image compression the encoding step is computationally expensive. A large number of sequential searches through a list of domains (portions of the image) are carried out while trying to find best matches for other image portions called ranges. Our theory developed here shows that this basic procedure of fractal image compression is equivalent to multidimensional nearest neighbor search in a space of feature vectors. This result is useful for accelerating the encoding procedure in fractal image compression. The traditional sequential search takes linear time whereas the nearest neighbor search can be organized to require only logarithmic time. The fast search has been integrated into an existing state-of-the-art classification method thereby accelerating the searches carried out in the individual domain classes. In this case we record acceleration factors up to about 50 depending on image and domain pool size with negligible or minor degradation in both image quality and com...
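The equivalence the abstract claims can be sketched concretely: map each block to a mean-removed, unit-norm feature vector, so that finding a good affine (scale plus offset) match for a range reduces to finding the nearest domain feature. A hypothetical sketch (`feature` and `best_domain` are illustrative names; the linear scan would be replaced by a logarithmic-time nearest-neighbor structure such as a k-d tree):

```python
import math

def feature(block):
    # Mean-removed, unit-norm feature vector: nearness of features
    # corresponds to a small least-squares error of the optimal
    # scaling-plus-offset match between blocks.
    m = sum(block) / len(block)
    v = [x - m for x in block]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n > 0 else v

def best_domain(range_block, domains):
    # Nearest-neighbor search over domain features (linear scan here).
    f = feature(range_block)
    def d2(g):
        return sum((a - b) ** 2 for a, b in zip(f, g))
    return min(range(len(domains)), key=lambda i: d2(feature(domains[i])))
```

Note that the feature map deliberately forgets mean and contrast, since those are absorbed by the affine transform fitted during encoding.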
Codebook Clustering by Self-Organizing Maps for Fractal Image Compression
, 1995
Abstract

Cited by 14 (2 self)
A fast encoding scheme for fractal image compression is presented. The method uses a clustering algorithm based on Kohonen's self-organizing maps. Domain blocks are clustered, yielding a classification with a notion of distance which is not given in traditional classification schemes.

1. INTRODUCTION

The time complexity of the encoding is one of the major drawbacks of fractal image compression. Each of a large number of image subsets, called ranges, has to be compared sequentially to a large number of other image subsets called domains. In his original approach, Jacquin [1] used a classification scheme to reduce the number of comparisons. Blocks were classified according to their perceptual geometric features. For a given range block, only domain blocks within the same class were considered. But since only 3 classes were differentiated, the encoding was still very slow. A more elaborate classification technique based on the intensity and the variance of the blocks was used by Fisher e...
Rotated partial distance search for faster vector quantization encoding
 IEEE Signal Processing Letters
, 2000
Abstract

Cited by 10 (0 self)
Abstract — Partial Distance Search (PDS) is a method of reducing the amount of computation required for vector quantization encoding. The method is simple and general enough to be incorporated into many fast encoding algorithms. This paper describes a simple improvement to PDS, based on principal components analysis, that rotates the codebook without altering the interpoint distances. Like PDS, this new method can be used to improve many fast encoding algorithms. The algorithm decreases the decoding time of PDS by as much as 44% and decreases the decoding time of k-d trees by as much as 66% on common vector quantization benchmarks.
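Plain PDS fits in a few lines; the paper's improvement is to rotate vectors onto their principal components first, so that the coordinates inspected earliest carry the most energy and the early abort below triggers sooner. A sketch of the baseline PDS (squared Euclidean distance; `pds_encode` is an illustrative name):

```python
def pds_encode(x, codebook):
    # Partial Distance Search: accumulate the squared distance one
    # coordinate at a time and abandon a codeword as soon as the
    # running sum exceeds the best full distance found so far.
    best_i, best_d = 0, float("inf")
    for i, c in enumerate(codebook):
        d = 0.0
        for xj, cj in zip(x, c):
            d += (xj - cj) ** 2
            if d >= best_d:      # early abort: this codeword cannot win
                break
        else:                    # loop finished without aborting
            best_i, best_d = i, d
    return best_i
```

The result is always identical to a full search; only the amount of arithmetic changes.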
Fast Codevector Search Algorithm for 3D Vector Quantized Codebook
 WASET International Journal of Electrical Computer and Systems Engineering (IJCISE)
, 2008
Abstract

Cited by 5 (3 self)
Abstract—This paper presents a very simple and efficient algorithm for codebook search, which reduces a great deal of computation compared to a full codebook search. The algorithm is based on sorting and a centroid technique for the search. The results table shows the effectiveness of the proposed algorithm in terms of computational complexity. In this paper we also introduce a new performance parameter, the average fractional change in pixel value, which we feel gives a better indication of the closeness of the image since it is related to perception. This parameter takes into consideration the average fractional change in each pixel value.
Pairwise Nearest Neighbor Method Revisited
, 2004
Abstract

Cited by 4 (0 self)
The pairwise nearest neighbor (PNN) method, also known as Ward's method, belongs to the class of agglomerative clustering methods. The PNN method generates a hierarchical clustering using a sequence of merge operations until the desired number of clusters is obtained. The method selects the cluster pair to be merged so that merging increases the given objective function value the least. The main drawback of the PNN method is its slowness: the time complexity of the fastest known exact implementation is lower bounded by Ω(N²), where N is the number of data objects. We consider several speed-up methods for the PNN method in the first publication. These methods maintain the precision of the method. Another method for speeding up the PNN method is investigated in the second publication, where we utilize a k-neighborhood graph to reduce distance calculations and operations. A remarkable speedup is achieved at the cost of a slight increase in distortion. The PNN method can also be adapted for multilevel thresholding, which can be seen as
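The merge selection the abstract describes can be made concrete with the Ward cost: merging clusters of sizes n1, n2 with centroids c1, c2 increases total squared error by n1*n2/(n1+n2) * ||c1 - c2||^2. A minimal sketch of one exact PNN step, with clusters held as (size, centroid) pairs (illustrative names; the quadratic scan over all pairs is exactly the bottleneck the cited speed-up methods attack):

```python
def merge_cost(n1, c1, n2, c2):
    # Increase in total squared error (Ward criterion) when merging
    # clusters of sizes n1, n2 with centroids c1, c2.
    d2 = sum((a - b) ** 2 for a, b in zip(c1, c2))
    return n1 * n2 / (n1 + n2) * d2

def pnn_step(clusters):
    # One PNN merge: scan all pairs, merge the pair whose merge
    # increases the objective the least. O(N^2) pairs per step.
    pairs = [(merge_cost(*clusters[i], *clusters[j]), i, j)
             for i in range(len(clusters))
             for j in range(i + 1, len(clusters))]
    _, i, j = min(pairs)
    (n1, c1), (n2, c2) = clusters[i], clusters[j]
    n = n1 + n2
    c = tuple((n1 * a + n2 * b) / n for a, b in zip(c1, c2))
    rest = [cl for k, cl in enumerate(clusters) if k not in (i, j)]
    return rest + [(n, c)]
```

Repeating `pnn_step` until the desired number of clusters remains yields the full agglomerative hierarchy.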
Color Image Segmentation using Kekre’s Algorithm for Vector Quantization
 International Journal of Computer Science (IJCS)
, 2008
Abstract

Cited by 2 (1 self)
Abstract—In this paper we propose a segmentation approach based on vector quantization. We use Kekre’s fast codebook generation algorithm for segmenting low-altitude aerial images. This is used as a preprocessing step to form segmented homogeneous regions. Adjacent regions are then merged using color similarity and volume difference criteria. Experiments performed with real aerial images of varied nature demonstrate that this approach results in neither over-segmentation nor under-segmentation. Vector quantization seems to give far better results than the conventional on-the-fly watershed algorithm.
A Fast Full Search Equivalent For Mean-Shape-Gain Vector Quantizers
 In 20th Symposium on Information Theory in the Benelux
, 1999
Abstract

Cited by 1 (0 self)
In this paper, we address the problem of finding a faster way to find the best shape codeword. In SGVQ, the mean-removal step is skipped, and the shapes are just the normalized input vectors. We will concentrate on MSGVQ, but most of the following is also applicable to SGVQ.
Fractal Compression Using the Discrete Karhunen-Loève Transform
, 1998
Abstract

Cited by 1 (1 self)
Fractal coding of images is a fairly recent and efficient method whose major drawback is the very slow compression phase, due to a time-consuming similarity search between image blocks. A general acceleration method based on feature vectors is described, of which many instances can be found in the literature. This general method is then optimized using the well-known Karhunen-Loève expansion, allowing optimal dimensionality reduction of the search space. Finally a simple search algorithm is designed, based on orthogonal range searching and avoiding the "curse of dimensionality" problem of classical best-match searching methods. The application of the technique to vector quantization is also discussed.
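The Karhunen-Loève optimization described above amounts to projecting the feature vectors onto the leading eigenvectors of their covariance matrix. A sketch using NumPy (assumed available; `kl_basis` and `reduce_dim` are illustrative names, and the orthogonal range-search stage is omitted):

```python
import numpy as np

def kl_basis(features, k):
    # Karhunen-Loeve (PCA) basis: eigenvectors of the covariance of
    # the feature vectors, ordered by decreasing eigenvalue.
    X = np.asarray(features, dtype=float)
    X = X - X.mean(axis=0)
    cov = X.T @ X / len(X)
    w, v = np.linalg.eigh(cov)      # eigenvalues in ascending order
    return v[:, ::-1][:, :k]        # keep the top-k directions

def reduce_dim(features, k):
    # Project centered features onto the top-k KL directions: the
    # optimal linear dimensionality reduction in the MSE sense.
    X = np.asarray(features, dtype=float)
    return (X - X.mean(axis=0)) @ kl_basis(features, k)
```

Searching in the reduced space keeps most of the inter-block distance information while sidestepping the dimensionality problems the abstract mentions.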