Results 1–10 of 16
An Optimal Algorithm for Approximate Nearest Neighbor Searching in Fixed Dimensions
ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS, 1994
Abstract

Cited by 786 (31 self)
Consider a set S of n data points in real d-dimensional space, R^d, where distances are measured using any Minkowski metric. In nearest neighbor searching we preprocess S into a data structure, so that given any query point q ∈ R^d, the closest point of S to q can be reported quickly. Given any positive real ε, a data point p is a (1 + ε)-approximate nearest neighbor of q if its distance from q is within a factor of (1 + ε) of the distance to the true nearest neighbor. We show that it is possible to preprocess a set of n points in R^d in O(dn log n) time and O(dn) space, so that given a query point q ∈ R^d and ε > 0, a (1 + ε)-approximate nearest neighbor of q can be computed in O(c_{d,ε} log n) time, where c_{d,ε} ≤ d⌈1 + 6d/ε⌉^d is a factor depending only on dimension and ε. In general, we show that given an integer k ≥ 1, (1 + ε)-approximations to the k nearest neighbors of q can be computed in additional O(kd log n) time.
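The definition above admits a direct check: a candidate qualifies as a (1 + ε)-approximate nearest neighbor exactly when its distance to q is within a (1 + ε) factor of the true minimum. A minimal sketch of that acceptance test (illustrative only; the paper's O(c_{d,ε} log n) query algorithm uses a preprocessed search structure, not a linear scan):

```python
import math

def dist(p, q):
    # Euclidean distance (the Minkowski metric with p = 2)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def is_approx_nn(candidate, q, points, eps):
    # candidate is a (1 + eps)-approximate nearest neighbor of q
    # iff dist(candidate, q) <= (1 + eps) * dist(true NN, q)
    best = min(dist(p, q) for p in points)
    return dist(candidate, q) <= (1 + eps) * best
```

With eps = 0 this reduces to the exact nearest-neighbor condition; larger eps admits proportionally more distant answers.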
Quantization
IEEE TRANS. INFORM. THEORY, 1998
Abstract

Cited by 639 (11 self)
The history of the theory and practice of quantization dates to 1948, although similar ideas had appeared in the literature as long ago as 1898. The fundamental role of quantization in modulation and analog-to-digital conversion was first recognized during the early development of pulse-code modulation systems, especially in the 1948 paper of Oliver, Pierce, and Shannon. Also in 1948, Bennett published the first high-resolution analysis of quantization and an exact analysis of quantization noise for Gaussian processes, and Shannon published the beginnings of rate-distortion theory, which would provide a theory for quantization as analog-to-digital conversion and as data compression. Beginning with these three papers of fifty years ago, we trace the history of quantization from its origins through this decade, and we survey the fundamentals of the theory and many of the popular and promising techniques for quantization.
Fast Nearest-Neighbor Search Algorithms Based on Approximation-Elimination Search
In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, 2000
Abstract

Cited by 6 (0 self)
In this paper, we provide an overview of fast nearest-neighbor search algorithms based on an 'approximation-elimination' framework under a class of elimination rules, namely, partial distance elimination, hypercube elimination and absolute-error-inequality elimination derived from approximations of Euclidean distance. Previous algorithms based on these elimination rules are reviewed in the context of approximation-elimination search. The main emphasis in this paper is a comparative study of these elimination constraints with reference to their approximation-elimination efficiency set within different approximation schemes. © 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.
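Of the elimination rules listed, partial distance elimination is the simplest to illustrate: the running sum of squared coordinate differences is abandoned as soon as it reaches the best full distance found so far. A hedged sketch (the surveyed algorithms additionally use approximation schemes to pick a good starting codevector, which this omits):

```python
def partial_distance_nn(query, codebook):
    # Partial distance elimination: accumulate the squared distance
    # coordinate by coordinate, and abandon a codevector as soon as
    # the partial sum can no longer beat the current best.
    best_idx, best_d2 = -1, float("inf")
    for i, c in enumerate(codebook):
        d2 = 0.0
        for a, b in zip(query, c):
            d2 += (a - b) ** 2
            if d2 >= best_d2:   # early exit: this codevector cannot win
                break
        else:
            best_idx, best_d2 = i, d2  # loop completed, so d2 < best_d2
    return best_idx, best_d2
```

The result is full-search equivalent: only the arithmetic count changes, not the answer.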
Fast Codevector Search Algorithm for 3D Vector Quantized Codebook
WASET International Journal of Electrical, Computer and Systems Engineering (IJCISE), 2008
Abstract

Cited by 5 (3 self)
This paper presents a very simple and efficient algorithm for codebook search, which reduces a great deal of computation as compared to the full codebook search. The algorithm is based on sorting and a centroid technique for the search. The results table shows the effectiveness of the proposed algorithm in terms of computational complexity. In this paper we also introduce a new performance parameter, named Average Fractional Change in Pixel Value, as we feel it gives a better understanding of the closeness of the image, since it is related to perception. This new performance parameter takes into consideration the average fractional change in each pixel value.
Pairwise Nearest Neighbor Method Revisited, 2004
Abstract

Cited by 4 (0 self)
The pairwise nearest neighbor (PNN) method, also known as Ward's method, belongs to the class of agglomerative clustering methods. The PNN method generates a hierarchical clustering using a sequence of merge operations until the desired number of clusters is obtained. This method selects the cluster pair to be merged so that it increases the given objective function value least. The main drawback of the PNN method is its slowness, because the time complexity of the fastest known exact implementation of the PNN method is lower bounded by Ω(N²), where N is the number of data objects. We consider several speed-up methods for the PNN method in the first publication. These methods maintain the precision of the method. Another method for speeding up the PNN method is investigated in the second publication, where we utilize a k-neighborhood graph for reducing distance calculations and operations. A remarkable speedup is achieved at the cost of a slight increase in distortion. The PNN method can also be adapted for multilevel thresholding, which can be seen as
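The merge rule described above can be sketched directly: at each step, merge the pair of clusters whose union increases the squared-error objective least (the Ward merge cost). This naive O(N³) sketch is for illustration only; the publications discussed here are precisely about avoiding its cost:

```python
def pnn_clustering(points, k):
    # Pairwise nearest neighbor (Ward-style) agglomeration:
    # repeatedly merge the cluster pair with the smallest merge cost
    # until k clusters remain.
    clusters = [[p] for p in points]

    def centroid(c):
        n = len(c)
        return tuple(sum(x[i] for x in c) / n for i in range(len(c[0])))

    def merge_cost(a, b):
        # Increase in total squared error when merging a and b:
        # (|a|*|b|)/(|a|+|b|) * ||centroid(a) - centroid(b)||^2
        ca, cb = centroid(a), centroid(b)
        d2 = sum((x - y) ** 2 for x, y in zip(ca, cb))
        return len(a) * len(b) / (len(a) + len(b)) * d2

    while len(clusters) > k:
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: merge_cost(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

Each iteration scans all pairs; the speed-up methods surveyed above avoid recomputing most of these costs.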
Fast Full-Search Equivalent Nearest-Neighbour Search Algorithms, 1999
Abstract

Cited by 3 (2 self)
A fundamental activity common to many image processing, pattern classification, and clustering algorithms involves searching a set of n k-dimensional data items for the one which is nearest to a given target item with respect to a distance function. Our goal is to find fast search algorithms which are full-search equivalent, that is, the resulting match is as good as what we could obtain if we were to search the set exhaustively. We propose a framework made up of three components, namely (i) a technique for obtaining a good initial match, (ii) an inexpensive method for determining whether the current match is a full-search equivalent match, and (iii) an effective technique for improving the current match. Our approach is to consider good solutions for each component in order to find an algorithm which balances the overall complexity of the search. We also propose a technique for hierarchical ordering and cluster elimination using a minimal-cost spanning tree. Our experiments on vector quantisation coding of images show that the framework and techniques we proposed can be used to construct suitable algorithms for most of our data sets which require full-search equivalent matches at an average arithmetic cost of less than O(k log n) while using only O(n) space.
Vector Quantized Codebook Optimization using K-Means
Abstract

Cited by 3 (1 self)
In this paper we propose the K-means algorithm for optimization of a codebook. In general K-means is an optimization algorithm, but it takes a very long time to converge. We use an existing codebook so that the convergence time for K-means is reduced considerably. For demonstration we have used codebooks obtained from the Linde-Buzo-Gray (LBG) and Kekre's Fast Codebook Generation (KFCG) algorithms. It is observed that the optimal error obtained from both LBG and KFCG is almost the same, indicating that there is a unique minimum. From the results it is obvious that the KFCG codebook takes a smaller number of iterations as compared to the LBG codebook. This indicates that the KFCG codebook is close to the optimum. This is also indicated by its lower Mean Squared Error (MSE).
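The warm-start idea above amounts to running plain K-means (Lloyd) iterations seeded with an existing codebook rather than random codevectors. A hedged illustration (not the authors' implementation; `kmeans_refine` and its parameters are names chosen here):

```python
def kmeans_refine(codebook, vectors, iters=10):
    # K-means refinement of an existing codebook (e.g. from LBG or
    # KFCG): assign each training vector to its nearest codevector,
    # then move each codevector to the centroid of its partition.
    book = [list(c) for c in codebook]
    for _ in range(iters):
        parts = [[] for _ in book]
        for v in vectors:
            i = min(range(len(book)),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(v, book[i])))
            parts[i].append(v)
        for i, cell in enumerate(parts):
            if cell:  # an empty cell keeps its old codevector
                book[i] = [sum(x[d] for x in cell) / len(cell)
                           for d in range(len(cell[0]))]
    return book
```

A codebook that is already near the optimum changes little per iteration, which is why the warm start converges in fewer iterations.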
Dynamic Memory Model Based Optimization of Scalar and Vector Quantizer for Fast Image Encoding
IN PROC. IEEE INT. CONF. IMAGE PROCESSING, 2000
Abstract

Cited by 2 (0 self)
The rapid progress of computers and today's heterogeneous computing environment mean that computation-intensive signal processing algorithms must be optimized for performance in a machine-dependent fashion. In this paper, we present formal machine-dependent optimizations of scalar and vector quantizer encoders. Using a dynamic memory model, the optimal computation-memory tradeoff is exploited to minimize the encoding time. Experiments show marked improvements over existing techniques.
Color Image Segmentation using Kekre's Algorithm for Vector Quantization
International Journal of Computer Science (IJCS), 2008
Abstract

Cited by 2 (1 self)
In this paper we propose a segmentation approach based on the Vector Quantization technique. Here we have used Kekre's fast codebook generation algorithm for segmenting a low-altitude aerial image. This is used as a preprocessing step to form segmented homogeneous regions. Further, to merge adjacent regions, color similarity and volume difference criteria are used. Experiments performed with real aerial images of varied nature demonstrate that this approach does not result in over-segmentation or under-segmentation. The vector quantization seems to give far better results as compared to the conventional on-the-fly watershed algorithm.
THREE IMPROVED CODEBOOK SEARCHING ALGORITHMS FOR IMAGE COMPRESSION USING VECTOR QUANTIZER
Abstract

Cited by 1 (0 self)
In this paper, we propose three improved codebook searching algorithms for vector quantization (VQ). Our improved schemes are based on three fast searching methods proposed by Huang et al.,