Results 1 - 3 of 3
Iterate: A conceptual clustering algorithm for data mining
 IEEE TRANSACTIONS ON SYSTEMS, MAN AND CYBERNETICS, 1998
Abstract

Cited by 19 (0 self)
The data exploration task can be divided into three interrelated subtasks: (i) feature selection, (ii) discovery, and (iii) interpretation. This paper describes an unsupervised discovery method with biases geared toward partitioning objects into clusters that improve interpretability. The algorithm, ITERATE, employs: (i) a data ordering scheme and (ii) an iterative redistribution operator to produce maximally cohesive and distinct clusters. Cohesion, or intraclass similarity, is measured in terms of the match between individual objects and their assigned cluster prototype. Distinctness, or interclass dissimilarity, is measured by an average of the variance of the distribution match between clusters. We demonstrate that interpretability, from a problem-solving viewpoint, is addressed by the intra- and interclass measures. Empirical results demonstrate the properties of the discovery algorithm and its applications to problem solving.
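The iterative redistribution idea in the abstract above can be sketched as a simple reassignment loop: objects are repeatedly moved to the cluster whose prototype they best match until the assignment stabilizes. This is a minimal illustrative sketch, not ITERATE itself: the per-feature modal prototype, the feature-agreement match score, and the fixed number of clusters are assumptions made here for brevity, and the paper's data ordering scheme is omitted.

```python
# Illustrative sketch of prototype-based iterative redistribution over
# categorical data. The prototype and match definitions below are
# assumptions for the sketch, not ITERATE's actual measures.
from collections import Counter

def prototype(cluster):
    # Modal value of each categorical feature (illustrative choice).
    return [Counter(col).most_common(1)[0][0] for col in zip(*cluster)]

def match(obj, proto):
    # Cohesion proxy: fraction of features agreeing with the prototype.
    return sum(a == b for a, b in zip(obj, proto)) / len(obj)

def redistribute(objects, assignments, k, max_iter=100):
    for _ in range(max_iter):
        clusters = [[o for o, a in zip(objects, assignments) if a == c]
                    for c in range(k)]
        protos = [prototype(cl) if cl else None for cl in clusters]
        # Reassign each object to the non-empty cluster it matches best.
        new = [max((c for c in range(k) if protos[c] is not None),
                   key=lambda c: match(o, protos[c]))
               for o in objects]
        if new == assignments:  # assignments stable: stop iterating
            break
        assignments = new
    return assignments
```

Starting from a slightly wrong assignment, the loop pulls each object toward the prototype it agrees with, e.g. `redistribute([("a","x"), ("a","x"), ("b","y"), ("b","y")], [0, 1, 1, 1], 2)` converges to `[0, 0, 1, 1]`.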
Iterate: A conceptual clustering method for knowledge discovery in databases
 In Braunschweig, B., & Day, R. (Eds.), Innovative Applications of Artificial Intelligence in the Oil and Gas Industry, 1995
Learning Finite Automata Using Local Distinguishing Experiments
 Proc. IJCAI, 1993
Abstract

Cited by 3 (0 self)
One of the open problems listed in [Rivest and Schapire, 1989] is whether and how the copies of L in their algorithm can be combined into one for better performance. This paper describes an algorithm called D that performs that combination. The idea is to represent the states of the learned model using observable symbols as well as hidden symbols that are constructed during learning. These hidden symbols are created to reflect the distinct behaviors of the model states. The distinct behaviors are represented as local distinguishing experiments (LDEs) (not to be confused with global distinguishing sequences), and these LDEs are created when the learner's prediction mismatches the actual observation from the unknown machine. To synchronize the model with the environment, these LDEs can also be concatenated to form a homing sequence. It can be shown that D can learn, with probability 1 − δ, a model that is an ε-approximation of the unknown machine, in a number of action...