Results 1–10 of 451
A PTAS for the minimum consensus clustering problem with a fixed number of clusters
In Proc. 11th ICTCS, 2009
"... The Consensus Clustering problem has been introduced as an effective way to analyze the results of different microarray experiments [5, 6]. The problem consists of looking for a partition that best summarizes a set of input partitions (each corresponding to a different microarray experiment) under a ..."
Cited by 3 (0 self)
CURE: An Efficient Clustering Algorithm for Large Data sets
Published in the Proceedings of the ACM SIGMOD Conference, 1998
"... Clustering, in data mining, is useful for discovering groups and identifying interesting distributions in the underlying data. Traditional clustering algorithms either favor clusters with spherical shapes and similar sizes, or are very fragile in the presence of outliers. We propose a new clustering algorithm called CURE that is more robust to outliers, and identifies clusters having non-spherical shapes and wide variances in size. CURE achieves this by representing each cluster by a certain fixed number of points that are generated by selecting well scattered points from the cluster ..."
Cited by 722 (5 self)
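The representative-point idea described in the CURE abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `cure_representatives` and the defaults `c=4`, `alpha=0.3` are assumptions chosen for the example.

```python
import numpy as np

def cure_representatives(points, c=4, alpha=0.3):
    """Pick c well-scattered representative points for one cluster,
    then shrink them toward the cluster centroid by a factor alpha
    to damp the influence of outliers (values here are illustrative)."""
    centroid = points.mean(axis=0)
    # First representative: the point farthest from the centroid.
    reps = [points[np.argmax(np.linalg.norm(points - centroid, axis=1))]]
    # Greedily add the point farthest from all representatives chosen so far.
    while len(reps) < min(c, len(points)):
        dists = np.min([np.linalg.norm(points - r, axis=1) for r in reps],
                       axis=0)
        reps.append(points[np.argmax(dists)])
    # Shrink each representative toward the centroid.
    return np.array([r + alpha * (centroid - r) for r in reps])
```

Because the representatives trace the cluster's boundary rather than a single center, distance tests against them can capture non-spherical shapes, which is the behavior the abstract claims for CURE.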
Aggregating inconsistent information: ranking and clustering
2005
"... We address optimization problems in which we are given contradictory pieces of input information and the goal is to find a globally consistent solution that minimizes the number of disagreements with the respective inputs. Specifically, the problems we address are rank aggregation, the feedback arc set problem on tournaments, and correlation and consensus clustering. We show that for all these problems (and various weighted versions of them), we can obtain improved approximation factors using essentially the same remarkably simple algorithm. Additionally, we almost settle a long ..."
Cited by 226 (17 self)
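The "remarkably simple algorithm" the abstract alludes to is a random-pivot scheme; a minimal sketch of the correlation-clustering case follows, under the assumption that the input is given as a similarity predicate over node pairs (the names `pivot_cluster` and `similar` are illustrative, not from the paper).

```python
import random

def pivot_cluster(nodes, similar, seed=0):
    """Pivot-style correlation clustering: repeatedly pick a random
    pivot, place it in a new cluster together with every remaining
    node the input labels as similar to it, and recurse on the rest.
    similar(u, v) -> True when the input says u and v belong together."""
    rng = random.Random(seed)
    remaining = list(nodes)
    clusters = []
    while remaining:
        pivot = remaining.pop(rng.randrange(len(remaining)))
        cluster = [pivot] + [v for v in remaining if similar(pivot, v)]
        remaining = [v for v in remaining if not similar(pivot, v)]
        clusters.append(cluster)
    return clusters
```

When the similarity labels are fully consistent, any pivot order recovers the underlying partition exactly; the approximation guarantees concern how few pairwise disagreements the output incurs on inconsistent inputs.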
Incremental Clustering and Dynamic Information Retrieval
1997
"... Motivated by applications such as document and image classification in information retrieval, we consider the problem of clustering dynamic point sets in a metric space. We propose a model called incremental clustering which is based on a careful analysis of the requirements of the information retrieval ... incremental clustering algorithms which have a provably good performance. We complement our positive results with lower bounds on the performance of incremental algorithms. Finally, we consider the dual clustering problem where the clusters are of fixed diameter, and the goal is to minimize the number ..."
Cited by 191 (4 self)
Determining the Number of Clusters via Iterative Consensus Clustering
2013
"... We use a cluster ensemble to determine the number of clusters, k, in a group of data. A consensus similarity matrix is formed from the ensemble using multiple algorithms and several values for k. A random walk is induced on the graph defined by the consensus matrix and the eigenvalues of the associated ..."
Cited by 2 (2 self)
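The eigenvalue procedure sketched in this abstract can be illustrated as follows, assuming (as is common for this family of methods) that the number of clusters is read off the largest gap in the sorted eigenvalue magnitudes of the random walk's transition matrix. The helper `estimate_k` is hypothetical, not the authors' code.

```python
import numpy as np

def estimate_k(consensus, max_k=10):
    """Estimate the number of clusters from a consensus similarity
    matrix: row-normalize it into the transition matrix of a random
    walk on the consensus graph, then place k at the largest gap in
    the eigenvalue magnitudes sorted in decreasing order."""
    P = consensus / consensus.sum(axis=1, keepdims=True)
    eig = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    gaps = eig[:max_k - 1] - eig[1:max_k]
    return int(np.argmax(gaps)) + 1
```

For a consensus matrix with k well-separated blocks, the walk has k eigenvalues near 1 followed by a sharp drop, so the gap position recovers k.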
Clustering ensembles: Models of consensus and weak partitions
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2005
"... Clustering ensembles have emerged as a powerful method for improving both the robustness as well as the stability of unsupervised classification solutions. However, finding a consensus clustering from multiple partitions is a difficult problem that can be approached from graph-based, combinatorial ... Combination accuracy is analyzed as a function of several parameters that control the power and resolution of component partitions as well as the number of partitions. We also analyze clustering ensembles with incomplete information and the effect of missing cluster labels on the quality of overall consensus ..."
Cited by 85 (3 self)
Combining Multiple Weak Clusterings
2003
"... A data set can be clustered in many ways depending on the clustering algorithm employed, parameter settings used and other factors. Can multiple clusterings be combined so that the final partitioning of data provides better clustering? The answer depends on the quality of clusterings to be combined ... the combination accuracy as a function of parameters controlling the power and resolution of component partitions as well as the learning dynamics vs. the number of clusterings involved. Finally, some empirical studies compare the effectiveness of several consensus functions."
Cited by 83 (5 self)
A Polynomial Time Approximation Scheme for k-Consensus Clustering
2010
"... This paper introduces a polynomial time approximation scheme for the metric Correlation Clustering problem, when the number of clusters returned is bounded (by k). Consensus Clustering is a fundamental aggregation problem, with considerable application, and it is analysed here as a metric variant of the Correlation Clustering problem. The PTAS exploits a connection between Correlation Clustering and the k-cut problems. This requires the introduction of a new rebalancing technique, based on minimum cost perfect matchings, to provide clusters of the required sizes. Both Consensus Clustering and Correlation ..."
Cited by 2 (0 self)
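Several of the abstracts above measure the quality of a consensus partition by the number of pairwise disagreements with the input partitions (the symmetric-difference, or Mirkin, distance). A minimal sketch of that distance, with illustrative names and partitions represented as dicts from element to cluster label:

```python
from itertools import combinations

def mirkin_distance(p1, p2):
    """Count the pairs on which two partitions disagree: co-clustered
    in one partition but separated in the other. Consensus clustering
    seeks one partition minimizing the sum of these distances to all
    input partitions."""
    disagreements = 0
    for u, v in combinations(sorted(p1), 2):
        same1 = p1[u] == p1[v]
        same2 = p2[u] == p2[v]
        if same1 != same2:
            disagreements += 1
    return disagreements
```

Viewing each input partition as a "+/-" labeling of element pairs is what lets Consensus Clustering be analysed as a (metric) variant of Correlation Clustering.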
A Flexible Framework for Consensus Clustering
2013
"... A novel framework for consensus clustering is presented which has the ability to determine both the number of clusters and a final solution using multiple algorithms. A consensus similarity matrix is formed from an ensemble using multiple algorithms and several values for k. A variety of dimension ... to determine the number, k, of clusters in a dataset by considering a random walk on the graph defined by the consensus matrix. The eigenvalues of the associated transition probability matrix are used to determine the number of clusters. This method succeeds at determining the number of clusters in many ..."
Cluster-based Distributed Consensus
"... In this paper, we incorporate clustering techniques into distributed consensus algorithms for faster convergence and better energy efficiency. Together with a simple distributed clustering algorithm, we design cluster-based distributed consensus algorithms in forms of both fixed linear iteration and ..."
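The "fixed linear iteration" form of distributed consensus mentioned here is typically the update x(t+1) = W x(t). A minimal sketch, assuming a doubly stochastic weight matrix W whose sparsity pattern matches the network topology (function name illustrative):

```python
import numpy as np

def average_consensus(x0, W, iters=100):
    """Fixed linear iteration x(t+1) = W x(t). With W doubly
    stochastic and the network connected, every node's value
    converges to the average of the initial values x0."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x = W @ x  # each node replaces its value by a weighted
                   # average over itself and its neighbors
    return x
```

The convergence rate is governed by the second-largest eigenvalue magnitude of W, which is what cluster-based schemes aim to shrink.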