Results 1–10 of 1,077,456
K-Means VQ Algorithm Using a Low-Cost Parallel Cluster Computing
"... It is well-known that the time and memory necessary to create a codebook from large training databases have hindered vector-quantization-based systems for real applications. To overcome this problem, we present a parallel approach for the K-means Vector Quantization (VQ) algorithm based on ..."
A Low-Cost Parallel K-Means VQ Algorithm Using Cluster Computing
"... In this paper we propose a parallel approach for the K-means Vector Quantization (VQ) algorithm used in a two-stage Hidden Markov Model (HMM)-based system for recognizing handwritten numeral strings. With this parallel algorithm, based on the master/slave paradigm, we overcome two drawbacks ..."
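The master/slave decomposition this snippet describes can be sketched roughly as follows: the master partitions the training data among slaves, each slave computes per-centroid partial sums and counts for its shard, and the master reduces those into new centroids. This is a self-contained toy in plain Python, not the paper's implementation; in a real cluster the "slaves" would be remote processes (e.g. over MPI), and all names here are illustrative.

```python
# Toy sketch of master/slave parallel k-means for 2-D points.
# Each "slave" is just a function call here; a real deployment would
# scatter shards to worker nodes and gather their partial results.
def slave_step(shard, centroids):
    """One slave's work: per-centroid coordinate sums and counts for its shard."""
    k = len(centroids)
    sums = [[0.0, 0.0] for _ in range(k)]
    counts = [0] * k
    for x, y in shard:
        # Assign the point to its nearest centroid (squared Euclidean distance).
        j = min(range(k),
                key=lambda c: (x - centroids[c][0])**2 + (y - centroids[c][1])**2)
        sums[j][0] += x
        sums[j][1] += y
        counts[j] += 1
    return sums, counts

def master_step(shards, centroids):
    """Master's work: reduce all slaves' partial sums into updated centroids."""
    k = len(centroids)
    total = [[0.0, 0.0] for _ in range(k)]
    counts = [0] * k
    for shard in shards:  # in a real cluster: gather results from each slave
        s, c = slave_step(shard, centroids)
        for j in range(k):
            total[j][0] += s[j][0]
            total[j][1] += s[j][1]
            counts[j] += c[j]
    # New centroid = mean of assigned points; keep an empty cluster's centroid.
    return [(total[j][0] / counts[j], total[j][1] / counts[j]) if counts[j]
            else centroids[j]
            for j in range(k)]
```

The key property that makes the parallelization work: the update step only needs per-cluster sums and counts, which combine additively across shards, so slaves never need each other's data.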
X-means: Extending K-means with Efficient Estimation of the Number of Clusters
In Proceedings of the 17th International Conference on Machine Learning, 2000
"... Despite its popularity for general clustering, K-means suffers three major shortcomings: it scales poorly computationally, the number of clusters K has to be supplied by the user, and the search is prone to local minima. We propose solutions for the first two problems, and a partial remedy for the third ..."
Cited by 410 (5 self)
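The two shortcomings this snippet names — K supplied by the user, and convergence to a local minimum that depends on initialization — are both visible in the standard Lloyd's k-means iteration, sketched minimally below (pure Python, 2-D toy data; this is the baseline algorithm, not X-means itself):

```python
# Minimal Lloyd's k-means sketch. Note: k is supplied by the caller, and
# the result depends on the random initialization (hence local minima).
import random

def kmeans(points, k, iters=100, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # random initial centers from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0])**2 + (p[1] - centers[c][1])**2)
            clusters[j].append(p)
        # Update step: each center moves to the mean of its cluster.
        new_centers = []
        for j, cl in enumerate(clusters):
            if cl:
                new_centers.append((sum(p[0] for p in cl) / len(cl),
                                    sum(p[1] for p in cl) / len(cl)))
            else:
                new_centers.append(centers[j])  # leave an empty cluster's center
        if new_centers == centers:  # converged, possibly to a local minimum
            break
        centers = new_centers
    return centers, clusters
```

X-means attacks the first two problems by wrapping this loop in a search over K scored with BIC; the scaling fix uses cached sufficient statistics rather than changing the iteration itself.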
Refining Initial Points for K-Means Clustering
1998
"... Practical approaches to clustering use an iterative procedure (e.g. K-Means, EM) which converges to one of numerous local minima. It is known that these iterative techniques are especially sensitive to initial starting conditions. We present a procedure for computing a refined starting condition ..."
Cited by 311 (5 self)
Cluster Ensembles – A Knowledge Reuse Framework for Combining Multiple Partitions
Journal of Machine Learning Research, 2002
"... This paper introduces the problem of combining multiple partitionings of a set of objects into a single consolidated clustering without accessing the features or algorithms that determined these partitionings. We first identify several application scenarios for the resultant 'knowledge reuse' ..."
"... clustering. Due to the low computational costs of our techniques, it is quite feasible to use a supra-consensus function that evaluates all three approaches against the objective function and picks the best solution for a given situation. We evaluate the effectiveness of cluster ensembles in three ..."
Cited by 588 (21 self)
PVFS: A Parallel File System for Linux Clusters
In Proceedings of the 4th Annual Linux Showcase and Conference, 2000
"... As Linux clusters have matured as platforms for low-cost, high-performance parallel computing, software packages to provide many key services have emerged, especially in areas such as message passing and networking. One area devoid of support, however, has been parallel file systems, which are critical ..."
Cited by 416 (33 self)
Fast approximate nearest neighbors with automatic algorithm configuration
In VISAPP International Conference on Computer Vision Theory and Applications, 2009
"... Nearest-neighbors search, randomized kd-trees, hierarchical k-means tree, clustering. For many computer vision problems, the most time-consuming component consists of nearest neighbor matching in high-dimensional spaces. There are no known exact algorithms for solving these high-dimensional problems ..."
Cited by 445 (2 self)
Self-tuning spectral clustering
Advances in Neural Information Processing Systems 17, 2004
"... We study a number of open issues in spectral clustering: (i) Selecting the appropriate scale of analysis, (ii) Handling multi-scale data, (iii) Clustering with irregular background clutter, and (iv) Finding automatically the number of groups. We first propose that a 'local' scale should be used to ..."
"... the number of groups. This leads to a new algorithm in which the final randomly initialized k-means stage is eliminated. ..."
Cited by 357 (2 self)
Learning the k in k-means
In Proc. 17th NIPS, 2003
"... When clustering a dataset, the right number k of clusters to use is often not obvious, and choosing k automatically is a hard algorithmic problem. In this paper we present an improved algorithm for learning k while clustering. The G-means algorithm is based on a statistical test for the hypothesis that ..."
Cited by 133 (5 self)
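The idea the snippet describes is a grow-and-test loop: start with a small k and split any cluster whose points do not look Gaussian along some direction. As a rough sketch, the test below uses a crude excess-kurtosis check on one dimension as a stand-in for the paper's actual statistical test (G-means uses an Anderson-Darling test on a projection; the threshold and test here are assumptions for brevity, not the paper's method):

```python
# Crude stand-in for a Gaussianity test: a single Gaussian blob has excess
# kurtosis near 0, while a well-separated two-mode sample (a cluster that
# should be split into two) has strongly negative excess kurtosis.
def excess_kurtosis(xs):
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean)**2 for x in xs) / n
    if var == 0:
        return 0.0
    m4 = sum((x - mean)**4 for x in xs) / n
    return m4 / var**2 - 3.0

def looks_gaussian(xs, threshold=-1.0):
    # threshold is an illustrative assumption; the real algorithm uses a
    # significance level for an Anderson-Darling test instead.
    return excess_kurtosis(xs) > threshold

# A G-means-style loop would: run k-means, project each cluster's points
# onto the direction between its two tentative child centers, apply the
# test to that 1-D projection, and keep splitting clusters that fail,
# until every cluster passes -- at which point k has been "learned".
```

The value of this structure is that k is decided by per-cluster hypothesis tests rather than by the user or a single global score.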
An efficient k-means clustering algorithm
In Proceedings of IPPS/SPDP Workshop on High Performance Data Mining, 1998
"... In this paper, we present a novel algorithm for performing k-means clustering. It organizes all the patterns in a kd-tree structure such that one can find all the patterns which are closest to a given prototype efficiently. The main intuition behind our approach is as follows. All the prototypes are ..."
Cited by 79 (0 self)
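The kd-tree idea underlying this entry — organize points so the closest one to a query can be found without scanning everything — can be sketched in a few lines for 2-D data. This is a generic textbook kd-tree nearest-neighbor search, not the paper's specific filtering algorithm, and the names are illustrative:

```python
# Tiny 2-D kd-tree: alternate split axes by depth, store the median point
# at each node. Nearest-neighbor search descends toward the query, then
# backtracks into the far subtree only if it could hold a closer point.
def build(points, depth=0):
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build(points[:mid], depth + 1),
            build(points[mid + 1:], depth + 1),
            axis)

def nearest(node, target, best=None):
    """Return (point, squared_distance) of the closest stored point."""
    if node is None:
        return best
    point, left, right, axis = node
    d = (point[0] - target[0])**2 + (point[1] - target[1])**2
    if best is None or d < best[1]:
        best = (point, d)
    diff = target[axis] - point[axis]
    near, far = (left, right) if diff < 0 else (right, left)
    best = nearest(near, target, best)
    if diff * diff < best[1]:  # the split plane is closer than the best hit,
        best = nearest(far, target, best)  # so the far side must be checked
    return best
```

In the k-means setting, pruning whole subtrees this way is what lets each iteration avoid computing every point-to-prototype distance.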