Results 1 - 4 of 4
Parallel Algorithms for Hierarchical Clustering
 Parallel Computing
, 1995
Abstract

Cited by 80 (1 self)
Hierarchical clustering is a common method used to determine clusters of similar data points in multidimensional spaces. O(n²) algorithms are known for this problem [3, 4, 10, 18]. This paper reviews important results for sequential algorithms and describes previous work on parallel algorithms for hierarchical clustering. Parallel algorithms to perform hierarchical clustering using several distance metrics are then described. Optimal PRAM algorithms using n log n processors are given for the average link, complete link, centroid, median, and minimum variance metrics. Optimal butterfly and tree algorithms using n log n processors are given for the centroid, median, and minimum variance metrics. Optimal asymptotic speedups are achieved for the best practical algorithm to perform clustering using the single link metric on an n log n processor PRAM, butterfly, or tree. Keywords: hierarchical clustering, pattern analysis, parallel algorithm, butterfly network, PRAM algorithm.
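To make the single link metric mentioned in the abstract concrete, here is a naive sequential sketch in Python (not one of the paper's O(n²) or parallel algorithms): it repeatedly merges the two closest clusters, using the single-link Lance-Williams update d(a∪b, c) = min(d(a,c), d(b,c)). The function name and sample points are illustrative.

```python
import numpy as np

def single_link_clustering(points, k):
    """Naive agglomerative single-link clustering: merge the two closest
    clusters until k remain. This O(n^3) sketch illustrates the metric
    only; the algorithms cited in the paper achieve O(n^2)."""
    n = len(points)
    # Pairwise Euclidean distance matrix between initial singleton clusters.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    clusters = {i: [i] for i in range(n)}
    active = list(range(n))
    while len(active) > k:
        # Find the closest pair of active clusters.
        _, a, b = min((d[i, j], i, j)
                      for ii, i in enumerate(active)
                      for j in active[ii + 1:])
        clusters[a].extend(clusters.pop(b))
        active.remove(b)
        # Single-link (Lance-Williams) update: d(a∪b, c) = min(d(a,c), d(b,c)).
        for c in active:
            if c != a:
                d[a, c] = d[c, a] = min(d[a, c], d[b, c])
    return [sorted(clusters[i]) for i in active]

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
print(single_link_clustering(pts, 2))  # → [[0, 1], [2, 3]]
```

The other metrics the paper treats (complete link, average link, centroid, median, minimum variance) differ only in the distance-update rule applied after each merge.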
Computer Vision Algorithms on Reconfigurable Logic Arrays
 IEEE TRANS. ON PARALLEL AND DISTRIBUTED SYSTEMS
, 1999
Abstract

Cited by 15 (1 self)
Computer vision algorithms are natural candidates for high-performance computing due to their inherent parallelism and intense computational demands. For example, a simple 3 x 3 convolution on a 512 x 512 gray-scale image at 30 frames per second requires 67.5 million multiplications and 60 million additions to be performed in one second. Computer vision tasks can be classified into three categories based on their computational complexity and communication complexity: low-level, intermediate-level, and high-level. Special-purpose hardware provides better performance than general-purpose hardware for all three levels of vision tasks. With recent advances in very large scale integration (VLSI) technology, an application-specific integrated circuit (ASIC) can provide the best performance in terms of total execution time. However, the long design cycle time, high development cost, and inflexibility of dedicated hardware deter the design of ASICs. In contrast, field programmable gate arrays (FPGAs) support lower design verification time and easier design adaptability at a lower cost. Hence, FPGAs with an array of reconfigurable logic blocks can be very useful compute elements. FPGA-based custom computing machines are ...
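The per-frame arithmetic behind the abstract's example comes from the fact that a direct 3 x 3 convolution costs 9 multiplications and 8 additions per output pixel. A minimal sketch of such a direct "valid" convolution (illustrative code, not from the paper):

```python
import numpy as np

def conv3x3(image, kernel):
    """Direct 3x3 'valid' convolution: each output pixel costs 9
    multiplications and 8 additions, which is the per-pixel operation
    count behind real-time arithmetic like the abstract's example."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
box = np.ones((3, 3)) / 9.0   # illustrative 3x3 box-blur kernel
print(conv3x3(img, box))      # 3x3 grid of local averages
```

Because every output pixel applies the same independent stencil, the loop nest maps directly onto the fine-grained parallelism of FPGA logic arrays discussed in the paper.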
A stochastic connectionist approach for global optimization with application to pattern clustering
 IEEE Transactions on Systems, Man, and Cybernetics, Part B
, 2000
Abstract

Cited by 2 (0 self)
In this paper, a stochastic connectionist approach is proposed for solving function optimization problems with real-valued parameters. With the assumption of increased processing capability of a node in the connectionist network, we show how a broader class of problems can be solved. As the proposed approach is a stochastic search technique, it avoids getting stuck in local optima. The robustness of the approach is demonstrated on several multimodal functions with different numbers of variables. Optimization of a well-known partitional clustering criterion, the squared-error criterion (SEC), is formulated as a function optimization problem and is solved using the proposed approach. This approach is used to cluster selected data sets, and the results obtained are compared with those of the K-means algorithm and a simulated annealing (SA) approach. The amenability of the connectionist approach to parallelization enables effective use of parallel hardware. Index Terms: clustering, connectionist approaches, function optimization, global optimization.
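The squared-error criterion the abstract optimizes is simply the sum, over all points, of the squared distance to the centroid of the point's assigned cluster; K-means, SA, and the proposed connectionist search all seek labelings that minimize it. A minimal sketch (illustrative names and data, not from the paper):

```python
import numpy as np

def squared_error_criterion(points, labels, centroids):
    """SEC: sum of squared distances from each point to its cluster's
    centroid. Lower values indicate a tighter partitional clustering."""
    diffs = points - centroids[labels]
    return float(np.sum(diffs * diffs))

pts = np.array([[0.0, 0.0], [2.0, 0.0], [10.0, 0.0], [12.0, 0.0]])
labels = np.array([0, 0, 1, 1])
# Centroids implied by the labeling (the mean of each cluster).
cents = np.array([pts[labels == c].mean(axis=0) for c in (0, 1)])
print(squared_error_criterion(pts, labels, cents))  # → 4.0
```

Treating the labeling (or the centroid coordinates) as the free variables turns clustering into exactly the kind of real-valued function optimization problem the stochastic connectionist approach targets.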
Vectorization and Parallelization of Clustering Algorithms
 VI Spanish Symposium on Pattern Recognition and Image Analysis
, 1995
Abstract

Cited by 1 (0 self)
In this work we present a study on the parallelization of code segments that are typical of clustering algorithms. In order to approach this problem from a practical point of view, we have considered parallelization on the three types of architectures currently available from parallel system manufacturers: vector computers, shared-memory multiprocessors, and distributed-memory multicomputers. We have selected the FC (Fuzzy Covariance) and AD (Affinity Decompositions) algorithms as representative of the different computational structures found in clustering algorithms. We present a comparative study of the results obtained from running these algorithms on three systems: the VP2400/10, the KSR1, and the AP1000.

1 Introduction

The automatic classification of data is one of the basic tasks in pattern recognition. Given its iterative nature and high computational cost (CPU time), the most adequate solution for its numerical treatment is to use concurrent techniques in order to reduce the execution ...
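The kind of code segment this paper vectorizes can be illustrated with a pairwise-distance kernel, a building block common to clustering algorithms. The sketch below contrasts a scalar loop with a single broadcasted expression of the sort a vector unit executes efficiently; this is NumPy-based illustration only, not the Fortran kernels or vendor systems studied in the paper.

```python
import numpy as np

def pairwise_dists_loop(x):
    """Scalar double loop: the style of code segment clustering
    algorithms spend most of their CPU time in."""
    n = len(x)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d[i, j] = np.sqrt(np.sum((x[i] - x[j]) ** 2))
    return d

def pairwise_dists_vec(x):
    """Vectorized equivalent: one broadcasted expression with no inner
    loop, the form a vector computer (or SIMD unit) exploits."""
    diff = x[:, None, :] - x[None, :, :]
    return np.sqrt(np.sum(diff * diff, axis=2))

x = np.random.default_rng(0).normal(size=(50, 3))
print(np.allclose(pairwise_dists_loop(x), pairwise_dists_vec(x)))  # → True
```

On shared- or distributed-memory machines the same kernel is instead split by rows of the distance matrix across processors, which is the partitioning strategy such comparative studies typically evaluate.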