Results 1–10 of 29
From data mining to knowledge discovery in databases
 AI Magazine
, 1996
Abstract
Cited by 432 (0 self)
Data mining and knowledge discovery in databases have been attracting a significant amount of research, industry, and media attention of late. What is all the excitement about? This article provides an overview of this emerging field, clarifying how data mining and knowledge discovery in databases are related both to each other and to related fields, such as machine learning, statistics, and databases. The article mentions particular real-world applications, specific data-mining techniques, challenges involved in real-world applications of knowledge discovery, and current and future research directions in the field. Across a wide variety of fields, data are
A Guide to the Literature on Learning Probabilistic Networks From Data
, 1996
Abstract
Cited by 191 (0 self)
This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks. Connections are drawn between the statistical, neural network, and uncertainty communities, and between the different methodological communities, such as Bayesian, description length, and classical statistics. Basic concepts for learning and Bayesian networks are introduced and methods are then reviewed. Methods are discussed for learning the parameters of a probabilistic network, for learning the structure, and for learning hidden variables. The presentation avoids formal definitions and theorems, as these are plentiful in the literature, and instead illustrates key concepts with simplified examples.
Keywords: Bayesian networks, graphical models, hidden variables, learning, learning structure, probabilistic networks, knowledge discovery.
I. Introduction. Probabilistic networks or probabilistic gra...
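For the parameter-learning task the review covers, a minimal sketch of maximum-likelihood estimation of one conditional probability table from complete data might look like this (the function name and toy data are illustrative, not from the paper):

```python
from collections import Counter

def learn_cpt(data, child, parents):
    """Maximum-likelihood estimate of P(child | parents) from complete data.

    data: list of dicts mapping variable name -> observed value.
    Returns {parent_value_tuple: {child_value: probability}}.
    """
    joint = Counter()     # counts of (parent values, child value)
    marginal = Counter()  # counts of parent values alone
    for row in data:
        pv = tuple(row[p] for p in parents)
        joint[(pv, row[child])] += 1
        marginal[pv] += 1
    cpt = {}
    for (pv, cv), n in joint.items():
        cpt.setdefault(pv, {})[cv] = n / marginal[pv]
    return cpt

# Toy two-variable network: rain -> wet grass
data = [
    {"rain": 1, "wet": 1},
    {"rain": 1, "wet": 1},
    {"rain": 1, "wet": 0},
    {"rain": 0, "wet": 0},
]
cpt = learn_cpt(data, "wet", ["rain"])
# P(wet=1 | rain=1) = 2/3 from the counts above
```

Structure learning and hidden-variable methods build on exactly these counts, scoring candidate graphs by how well such tables fit the data.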
Knowledge Discovery and Data Mining: Towards a Unifying Framework
, 1996
Abstract
Cited by 167 (1 self)
This paper presents a first step towards a unifying framework for Knowledge Discovery in Databases. We describe links between data mining, knowledge discovery, and other related fields. We then define the KDD process and basic data mining algorithms, discuss application issues, and conclude with an analysis of challenges facing practitioners in the field.
Keywords: Knowledge Discovery in Databases (KDD), data mining, overview article, large databases, automated analysis, issues and challenges in data mining.
To appear: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96), Portland, Oregon, August 2-4, 1996, AAAI Press. http://www-aig.jpl.nasa.gov/kdd96
Usama Fayyad, Microsoft Research, One Microsoft Way, Redmond, WA 98052, USA, fayyad@microsoft.com; Gregory Piatetsky-Shapiro, GTE Laboratories, MS 44, Waltham, MA 02154, USA, gps@gte.com; Padhraic Smyth, Information and Computer S...
Dependency Networks for Relational Data
 In Proceedings of the 4th IEEE International Conference on Data Mining
, 2004
Abstract
Cited by 67 (10 self)
Instance independence is a critical assumption of traditional machine learning methods contradicted by many relational datasets. For example, in scientific literature datasets there are dependencies among the references of a paper. Recent work on graphical models for relational data has demonstrated significant performance gains for models that exploit the dependencies among instances. In this paper, we present relational dependency networks (RDNs), a new form of graphical model capable of reasoning with such dependencies in a relational setting. We describe the details of RDN models and outline their strengths, most notably the ability to learn and reason with cyclic relational dependencies. We present RDN models learned on a number of real-world datasets, and evaluate the models in a classification context, showing significant performance improvements. In addition, we use synthetic data to evaluate the quality of model learning and inference procedures.
Discovering Bayesian Networks in Incomplete Databases
, 1997
Abstract
Cited by 8 (0 self)
Bayesian Belief Networks (BBNs) are becoming increasingly popular in the Knowledge Discovery and Data Mining community. A BBN is defined by a graphical structure of conditional dependencies among the domain variables and a set of probability distributions defining these dependencies. In this way, BBNs provide a compact formalism, grounded in the well-developed mathematics of probability theory, able to predict variable values, explain observations, and visualize dependencies among variables. During the past few years, several efforts have been made to develop methods able to extract both the graphical structure and the conditional probabilities of a BBN from a database. All these methods share the assumption that the database at hand is complete, that is, it does not report any entry as unknown. When this assumption fails, these methods have to resort to expensive iterative procedures which are infeasible for large databases. This paper describes a new Knowledge Discovery sys...
Discovery Of Multiple-Level Rules From Large Databases
, 1996
Abstract
Cited by 7 (0 self)
With the widespread computerization in business, government, and science, the efficient and effective discovery of interesting information from large databases becomes essential. Data mining, or Knowledge Discovery in Databases (KDD), emerges as a solution to the data analysis problems faced by many organizations. Previous studies on data mining have focused on the discovery of knowledge at a single conceptual level, either at the primitive level or at a rather high conceptual level. However, it is often desirable to discover knowledge at multiple conceptual levels, which will provide a spectrum of understanding, from general to specific, for the underlying data. In this thesis, we first introduce the conceptual hierarchy, a hierarchical organization of the data in the databases. Two algorithms for dynamic adjustment of conceptual hierarchies are developed, as well as another algorithm for automatic generation of conceptual hierarchies for numerical attributes. In addition, a set of ...
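The core idea of rolling data up a conceptual hierarchy can be sketched as follows (the hierarchy, values, and function are hypothetical examples, not the thesis's algorithms):

```python
# Hypothetical concept hierarchy: each value maps to its parent concept.
hierarchy = {
    "espresso": "coffee", "latte": "coffee",
    "cola": "soft_drink", "lemonade": "soft_drink",
    "coffee": "beverage", "soft_drink": "beverage",
}

def roll_up(value, levels=1):
    """Climb `levels` steps up the concept hierarchy, stopping at the root.

    Mining rules over rolled-up values yields general knowledge;
    mining over the raw leaves yields specific knowledge.
    """
    for _ in range(levels):
        if value not in hierarchy:
            break  # reached the root concept
        value = hierarchy[value]
    return value

roll_up("latte")            # one level up: "coffee"
roll_up("latte", levels=2)  # two levels up: "beverage"
```

Mining at several values of `levels` is what produces the "spectrum of understanding, from general to specific" described in the abstract.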
Mathematical Programming Approaches To Machine Learning And Data Mining
, 1998
Abstract
Cited by 6 (0 self)
Machine learning problems of supervised classification, unsupervised clustering and parsimonious approximation are formulated as mathematical programs. The feature selection problem arising in the supervised classification task is effectively addressed by calculating a separating plane by minimizing separation error and the number of problem features utilized. The support vector machine approach is formulated using various norms to measure the margin of separation. The clustering problem of assigning m points in n-dimensional real space to k clusters is formulated as minimizing a piecewise-linear concave function over a polyhedral set. This problem is also formulated in a novel fashion by minimizing the sum of squared distances of data points to nearest cluster planes characterizing the k clusters. The problem of obtaining a parsimonious solution to a linear system where the right hand side vector may be corrupted by noise is formulated as minimizing the system residual plus either the number of nonzero elements in the solution vector or the norm of the solution vector. The feature selection problem, the clustering problem and the parsimonious approximation problem can all be stated as the minimization of a concave function over a polyhedral region and are solved by a theoretically justifiable, fast and finite successive linearization algorithm. Numerical tests indicate the utility and efficiency of these formulations on real-world databases. In particular, the feature selection approach via concave minimization computes a separating-plane-based classifier that improves upon the generalization ability of a separating plane computed without feature suppression. This approach produces classifiers utilizing fewer original problem features than the support vector machin...
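As a minimal, point-based (rather than plane-based) illustration of the sum-of-squared-distances clustering objective, here is a Lloyd-style k-means sketch; it is not the paper's concave-minimization algorithm, just the simplest relative of that formulation:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's algorithm: alternately assign each point to its nearest
    center and recompute each center as its cluster's mean, locally
    minimizing the sum of squared point-to-center distances."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # index of the nearest center under squared Euclidean distance
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster empties out
                centers[j] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centers

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 4.9)]
centers = kmeans(pts, 2)  # two centers, one near each point pair
```

The paper's variant replaces "nearest center" with "nearest cluster plane", which turns each update into a small linear-algebra problem but keeps the same alternating structure.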
Towards Automated Synthesis of Data Mining Programs
 Proc. 5th Intl. Conf. Knowledge Discovery and Data Mining
, 1999
Abstract
Cited by 6 (4 self)
Code synthesis is routinely used in industry to generate GUIs, form-filling applications, and database support code, and is even used with COBOL. In this paper we consider the question of whether code synthesis could also be applied to the data mining phase of knowledge discovery. We view this as a rapid prototyping method. Rapid prototyping of statistical data analysis algorithms would allow experienced analysts to experiment with different statistical models before choosing one, but without requiring prohibitively expensive programming efforts. It would also smooth the steep learning curve often faced by novice users of data mining tools and libraries. Finally, it would accelerate dissemination of essential research results and the development of applications. In this paper, we present a framework and the basic software for the automated synthesis of data analysis programs. We use a specification language that generalizes Bayesian networks, a popular notation used in many communities. Using decomposition methods and algorithm templates, our system transforms the network through several levels of representation and then finally into pseudocode which can be translated into the implementation language of choice. Here, we explain the framework on a mixture-of-Gaussians model, a core data mining algorithm at the heart of many commercial clustering tools. We demonstrate the effectiveness of our framework by generating pseudocode for some more sophisticated algorithms from recent literature.
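The mixture-of-Gaussians model used as the running example is conventionally fit with expectation-maximization; a hand-written one-dimensional sketch (illustrative only, not the code the paper's system synthesizes) might look like this:

```python
import math

def em_gmm_1d(xs, k=2, iters=50):
    """EM for a 1-D mixture of k Gaussians; returns (weights, means, variances)."""
    lo, hi = min(xs), max(xs)
    # crude initialization: spread means across the data range
    mu = [lo + (i + 0.5) * (hi - lo) / k for i in range(k)]
    var = [((hi - lo) / k) ** 2 + 1e-6] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility r[i][j] = P(component j | x_i)
        r = []
        for x in xs:
            p = [w[j] * math.exp(-(x - mu[j]) ** 2 / (2 * var[j]))
                 / math.sqrt(2 * math.pi * var[j]) for j in range(k)]
            s = sum(p)
            r.append([pj / s for pj in p])
        # M-step: re-estimate weight, mean, variance of each component
        for j in range(k):
            nj = sum(ri[j] for ri in r)
            w[j] = nj / len(xs)
            mu[j] = sum(ri[j] * x for ri, x in zip(r, xs)) / nj
            var[j] = sum(ri[j] * (x - mu[j]) ** 2
                         for ri, x in zip(r, xs)) / nj + 1e-6
    return w, mu, var

# Two well-separated groups; the fitted means should land near 0 and 5.
w, mu, var = em_gmm_1d([-0.1, 0.0, 0.1, 4.9, 5.0, 5.1])
```

A synthesis system like the one described would derive these E- and M-step updates mechanically from the model specification rather than having them written by hand.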
Taxonomy for characterizing ensemble methods in classification tasks: A review and annotated bibliography
 Computational Statistics & Data Analysis, In Press, Corrected Proof
, 2009
Abstract
Cited by 5 (1 self)
Ensemble methodology, which builds a classification model by integrating multiple classifiers, can be used for improving prediction performance. Researchers from various disciplines such as statistics, pattern recognition, and machine learning have seriously explored the use of ensemble methodology. This paper presents an updated survey of ensemble methods in classification tasks, while introducing a new taxonomy for characterizing them. The new taxonomy, presented from the algorithm designer's point of view, is based on five dimensions: inducer, combiner, diversity, size, and members dependency. We also propose several selection criteria, presented from the practitioner's point of view, for choosing the most suitable ensemble method.
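The "combiner" dimension of the taxonomy can be illustrated with its simplest instance, plurality voting over member predictions (the stump classifiers below are hypothetical stand-ins for trained members):

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Combine member predictions by plurality vote, the simplest
    'combiner' in an ensemble taxonomy; ties break by first-seen label."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Three hypothetical decision stumps over a single numeric feature.
stumps = [
    lambda x: int(x > 1),
    lambda x: int(x > 2),
    lambda x: int(x > 3),
]

majority_vote(stumps, 2.5)  # two of three members vote 1
```

Richer combiners (weighted voting, stacking) and the other four dimensions (how members are induced, how diverse they are, how many there are, and whether they depend on one another, as in boosting) vary independently of this voting step.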