## A Fast Clustering Algorithm to Cluster Very Large Categorical Data Sets in Data Mining (1997)

### Download Links

- [www.cit.gu.edu.au]
- [www.cs.gsu.edu]
- [www.ece.northwestern.edu]
- DBLP

### Other Repositories/Bibliography

Venue: Research Issues on Data Mining and Knowledge Discovery

Citations: 92 (2 self)

### BibTeX

```bibtex
@INPROCEEDINGS{Huang97afast,
  author    = {Zhexue Huang},
  title     = {A Fast Clustering Algorithm to Cluster Very Large Categorical Data Sets in Data Mining},
  booktitle = {Research Issues on Data Mining and Knowledge Discovery},
  year      = {1997},
  pages     = {1--8}
}
```

### Abstract

Partitioning a large set of objects into homogeneous clusters is a fundamental operation in data mining. The k-means algorithm is well suited for this operation because of its efficiency in clustering large data sets. However, working only on numeric values limits its use in data mining, where data sets often contain categorical values. In this paper we present an algorithm, called k-modes, to extend the k-means paradigm to categorical domains. We introduce new dissimilarity measures to deal with categorical objects, replace means of clusters with modes, and use a frequency-based method to update modes in the clustering process to minimise the clustering cost function. Tested with the well-known soybean disease data set, the algorithm has demonstrated very good classification performance. Experiments on a very large health insurance data set consisting of half a million records and 34 categorical attributes show that the algorithm is scalable in terms of ...
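
As a rough illustration of the procedure the abstract describes (simple-matching dissimilarity, cluster modes in place of means, and frequency-based mode updates), here is a minimal Python sketch. The function names, the random initialization, and the stopping rule are illustrative assumptions, not taken from the paper:

```python
import random
from collections import Counter

def matching_dissim(a, b):
    """Simple matching dissimilarity: count of attributes whose categories differ."""
    return sum(x != y for x, y in zip(a, b))

def mode_of(cluster):
    """Frequency-based mode: the most frequent category of each attribute."""
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*cluster))

def k_modes(records, k, max_iter=100, seed=0):
    """Cluster categorical records (tuples of categories) around k modes."""
    rng = random.Random(seed)
    modes = rng.sample(records, k)  # illustrative init: k records chosen at random
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]
        for rec in records:
            nearest = min(range(k), key=lambda l: matching_dissim(rec, modes[l]))
            clusters[nearest].append(rec)
        # update each mode to the most frequent categories within its cluster
        new_modes = [mode_of(c) if c else modes[l] for l, c in enumerate(clusters)]
        if new_modes == modes:  # stop when no mode changes
            break
        modes = new_modes
    return modes, clusters
```

This preserves the k-means iteration structure (assign, then update centres), which is why the extension keeps k-means-like efficiency on large data sets.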

### Citations

8308 | Genetic Algorithms - Goldberg - 1989

Citation Context: ...a local optimum (MacQueen 1967, Selim and Ismail 1984). To find out the global optimum, techniques such as deterministic annealing (Kirkpatrick et al. 1983, Rose et al. 1990) and genetic algorithms (Goldberg 1989, Murthy and Chowdhury 1996) can be incorporated with the k-means algorithm. 3. It works only on numeric values because it minimises a cost function by calculating the means of clusters. 4. The cluste...

3931 | Optimization by simulated annealing - Kirkpatrick, Gelatt, et al. - 1983

Citation Context: ...nal complexity is O(n²) (Murtagh 1992). 2. It often terminates at a local optimum (MacQueen 1967, Selim and Ismail 1984). To find out the global optimum, techniques such as deterministic annealing (Kirkpatrick et al. 1983, Rose et al. 1990) and genetic algorithms (Goldberg 1989, Murthy and Chowdhury 1996) can be incorporated with the k-means algorithm. 3. It works only on numeric values because it minimises a cost fun...

2329 | Algorithms for Clustering Data - Jain, Dubes - 1988

Citation Context: ...to clusters such that objects in the same cluster are more similar to each other than objects in different clusters according to some defined criteria. Statistical clustering methods (Anderberg 1973, Jain and Dubes 1988) use similarity measures to partition objects whereas conceptual clustering methods cluster objects according to the concepts objects carry (Michalski and Stepp 1983, Fisher 1987). The most distinct ...

2069 | Some Methods for Classification and Analysis of Multivariate Observations - MacQueen - 1967

Citation Context: ...h focus (Shafer et al. 1996). In this paper we present a fast clustering algorithm used to cluster categorical data. The algorithm, called k-modes, is an extension to the well known k-means algorithm (MacQueen 1967). Compared to other clustering methods the k-means algorithm and its variants (Anderberg 1973) are efficient in clustering large data sets, thus very suitable for data mining. However, their use is o...

797 | Cluster Analysis for Applications - Anderberg - 1973

Citation Context: ...et of objects into clusters such that objects in the same cluster are more similar to each other than objects in different clusters according to some defined criteria. Statistical clustering methods (Anderberg 1973, Jain and Dubes 1988) use similarity measures to partition objects whereas conceptual clustering methods cluster objects according to the concepts objects carry (Michalski and Stepp 1983, Fisher 1987...

307 | Theory and Applications of Correspondence Analysis - Greenacre - 1984

Citation Context: ...$d_{\chi^2}(X, Y) = \sum_{j=1}^{m} \frac{n_{x_j} + n_{y_j}}{n_{x_j}\, n_{y_j}}\, \delta(x_j, y_j)$ (4), where $n_{x_j}$, $n_{y_j}$ are the numbers of objects in the data set that have categories $x_j$ and $y_j$ for attribute j. Because $d_{\chi^2}(X, Y)$ is similar to the chi-square distance in (Greenacre 1984), we call it chi-square distance. This dissimilarity measure gives more importance to rare categories than frequent ones. Eq. (4) is useful in discovering under-represented object clusters such as fr...
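
The chi-square-type dissimilarity quoted in this citation context can be sketched as follows; the function name and the `counts` structure are illustrative assumptions, with per-attribute category counts assumed precomputed over the whole data set:

```python
def chi_square_dissim(x, y, counts):
    """Chi-square-type dissimilarity between two categorical records.

    counts[j][c] holds the number of objects in the whole data set whose
    attribute j takes category c (assumed precomputed by the caller).
    A mismatch on rare categories contributes more than one on frequent
    categories, which favours discovering under-represented clusters.
    """
    total = 0.0
    for j, (xj, yj) in enumerate(zip(x, y)):
        if xj != yj:  # only mismatching attributes contribute
            n_x, n_y = counts[j][xj], counts[j][yj]
            total += (n_x + n_y) / (n_x * n_y)
    return total
```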

165 | Discrimination and Classification - Hand - 1981

Citation Context: ...ction $E = \sum_{l=1}^{k} \sum_{i=1}^{n} y_{i,l}\, d(X_i, Q_l)$ (1), where n is the number of objects in a data set X, $X_i \in X$, $Q_l$ is the mean of cluster l, and $y_{i,l}$ is an element of a partition matrix $Y_{n \times k}$ as in (Hand 1981). d is a dissimilarity measure usually defined by the squared Euclidean distance. There exist a few variants of the k-means algorithm which differ in selection of the initial k means, dissimilarity c...

161 | A new approach to clustering - Ruspini - 1969

Citation Context: ...means (Anderberg 1973, Bobrowski and Bezdek 1991). The sophisticated variants of the k-means algorithm include the well-known ISODATA algorithm (Ball and Hall 1967) and the fuzzy k-means algorithms (Ruspini 1969, 1973). Most k-means type algorithms have been proved convergent (MacQueen 1967, Bezdek 1980, Selim and Ismail 1984). The k-means algorithm has the following important properties. 1. It is efficient ...

158 | A general coefficient of similarity and some of its properties - Gower - 1971

Citation Context: ...works on categorical attributes and produces the cluster modes, which describe the clusters, thus very useful to the user in interpreting the clustering results. Using Gower's similarity coefficient (Gower 1971) and other dissimilarity measures (Gowda and Diday 1991) one can use a hierarchical clustering method to cluster categorical or mixed data. However, the hierarchical clustering methods are not effici...

132 | K-means-type algorithms: a generalized convergence theorem and characterization of local optimality - Selim, Ismail - 1984

Citation Context: ...lude the well-known ISODATA algorithm (Ball and Hall 1967) and the fuzzy k-means algorithms (Ruspini 1969, 1973). Most k-means type algorithms have been proved convergent (MacQueen 1967, Bezdek 1980, Selim and Ismail 1984). The k-means algorithm has the following important properties. 1. It is efficient in processing large data sets. The computational complexity of the algorithm is O(tkmn), where m is the number of at...

100 | A clustering technique for summarizing multivariate data - Ball, Hall - 1967

99 | Automated Construction of Classifications: Conceptual Clustering versus Numerical Taxonomy - Michalski, Stepp - 1983

Citation Context: ...al clustering methods (Anderberg 1973, Jain and Dubes 1988) use similarity measures to partition objects whereas conceptual clustering methods cluster objects according to the concepts objects carry (Michalski and Stepp 1983, Fisher 1987). The most distinct characteristic of data mining is that it deals with very large data sets (gigabytes or even terabytes). This requires the algorithms used in data mining to be scalabl...

85 | A convergence theorem for the fuzzy ISODATA clustering algorithms - Bezdek - 1980

Citation Context: ...algorithm include the well-known ISODATA algorithm (Ball and Hall 1967) and the fuzzy k-means algorithms (Ruspini 1969, 1973). Most k-means type algorithms have been proved convergent (MacQueen 1967, Bezdek 1980, Selim and Ismail 1984). The k-means algorithm has the following important properties. 1. It is efficient in processing large data sets. The computational complexity of the algorithm is O(tkmn), wher...

49 | Symbolic clustering using a new dissimilarity measure - Gowda, Diday - 1992

Citation Context: ...he cluster modes, which describe the clusters, thus very useful to the user in interpreting the clustering results. Using Gower's similarity coefficient (Gower 1971) and other dissimilarity measures (Gowda and Diday 1991) one can use a hierarchical clustering method to cluster categorical or mixed data. However, the hierarchical clustering methods are not efficient in processing large data sets. Their use is limited ...

41 | Clustering large datasets with mixed numeric and categorical values - Huang - 1997

Citation Context: ...ins are not ordered. The k-modes algorithm in this paper removes this limitation and extends the k-means paradigm to categorical domains whilst preserving the efficiency of the k-means algorithm. In (Huang 1997) we have proposed an algorithm, called k-prototypes, to cluster large data sets with mixed numeric and categorical values. In the k-prototypes algorithm we define a dissimilarity measure that takes i...

35 | Knowledge Acquisition Via Incremental Conceptual Clustering - Fisher - 1987

Citation Context: ...erberg 1973, Jain and Dubes 1988) use similarity measures to partition objects whereas conceptual clustering methods cluster objects according to the concepts objects carry (Michalski and Stepp 1983, Fisher 1987). The most distinct characteristic of data mining is that it deals with very large data sets (gigabytes or even terabytes). This requires the algorithms used in data mining to be scalable. However, m...

33 | A conceptual version of the K-means algorithm - Ralambondrainy - 1995

6 | New experimental results in fuzzy clustering - Ruspini - 1973

5 | Learning Based on Conceptual Distance - Kodratoff, Tecuci - 1988

Citation Context: ...categorical domains and used to represent missing values. To simplify the dissimilarity measure we do not consider the conceptual inclusion relationships among values in a categorical domain like in (Kodratoff and Tecuci 1988) such that car and vehicle are two categorical values in a domain and conceptually a car is also a vehicle. However, such relationships may exist in real world databases. 2.2 Categorical Objects Like...

4 | c-Means clustering with the l₁ and l∞ norms - Bobrowski, Bezdek - 1991

Citation Context: ...n distance. There exist a few variants of the k-means algorithm which differ in selection of the initial k means, dissimilarity calculations and strategies to calculate cluster means (Anderberg 1973, Bobrowski and Bezdek 1991). The sophisticated variants of the k-means algorithm include the well-known ISODATA algorithm (Ball and Hall 1967) and the fuzzy k-means algorithms (Ruspini 1969, 1973). Most k-means type algorithms...

4 | Comments on "Parallel algorithms for hierarchical clustering and cluster validity" - Murtagh - 1992

Citation Context: ...e whole data set. Usually, k, m, t ≪ n. In clustering large data sets the k-means algorithm is much faster than the hierarchical clustering algorithms whose general computational complexity is O(n²) (Murtagh 1992). 2. It often terminates at a local optimum (MacQueen 1967, Selim and Ismail 1984). To find out the global optimum, techniques such as deterministic annealing (Kirkpatrick et al. 1983, Rose et al. 19...

2 | A Deterministic Annealing Approach to Clustering - Rose, Gurewitz, et al. - 1990

Citation Context: ...(Murtagh 1992). 2. It often terminates at a local optimum (MacQueen 1967, Selim and Ismail 1984). To find out the global optimum, techniques such as deterministic annealing (Kirkpatrick et al. 1983, Rose et al. 1990) and genetic algorithms (Goldberg 1989, Murthy and Chowdhury 1996) can be incorporated with the k-means algorithm. 3. It works only on numeric values because it minimises a cost function by calculati...