Results 1 – 8 of 8
Latent classification models
 Machine Learning
, 2005
Cited by 8 (2 self)
Abstract. One of the simplest, and yet most consistently well-performing, families of classifiers is the Naïve Bayes model. These models rely on two assumptions: (i) all the attributes used to describe an instance are conditionally independent given the class of that instance, and (ii) all attributes follow a specific parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the Naïve Bayes model with a mixture of factor analyzers, thereby relaxing the assumptions of the Naïve Bayes classifier. In the proposed model the continuous attributes are described by a mixture of multivariate Gaussians, where the conditional dependencies among the attributes are encoded using latent variables. We present algorithms for learning both the parameters and the structure of a latent classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers.
Keywords: classification, probabilistic graphical models, Naïve Bayes, correlation
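The two Naïve Bayes assumptions the abstract names can be made concrete in a short sketch (plain NumPy; this is an illustrative Gaussian Naïve Bayes, not the paper's latent classification model, and all function names here are hypothetical):

```python
import numpy as np

def fit_gaussian_nb(X, y):
    """Fit per-class, per-attribute Gaussians."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        # Assumption (i): attributes are independent given the class, so we
        # store only per-attribute means/variances, never a full covariance.
        # Assumption (ii): each attribute follows a univariate Gaussian.
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return params

def predict(params, x):
    """Pick the class maximizing log p(c) + sum_j log N(x_j; mu_cj, var_cj)."""
    best, best_score = None, -np.inf
    for c, (mu, var, prior) in params.items():
        score = np.log(prior) - 0.5 * np.sum(
            np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        if score > best_score:
            best, best_score = c, score
    return best
```

The latent classification models proposed in the paper relax exactly the diagonal-covariance restriction visible in `fit_gaussian_nb`, by letting latent variables carry the dependencies among attributes.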
Mixnets: Factored Mixtures of Gaussians in Bayesian Networks with Mixed Continuous And Discrete Variables
, 2000
Cited by 7 (2 self)
Recently developed techniques have made it possible to quickly learn accurate probability density functions from data in low-dimensional continuous spaces. In particular, mixtures of Gaussians can be fitted to data very quickly using an accelerated EM algorithm that employs multiresolution kd-trees (Moore, 1999). In this paper, we propose a kind of Bayesian network in which low-dimensional mixtures of Gaussians over different subsets of the domain’s variables are combined into a coherent joint probability model over the entire domain. The network is also capable of modeling complex dependencies between discrete variables and continuous variables without requiring discretization of the continuous variables. We present efficient heuristic algorithms for automatically learning these networks from data, and perform comparative experiments illustrating how well these networks model real scientific data and synthetic data. We also briefly discuss some possible improvements to the networks, as well as possible applications.
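The EM fitting step the abstract refers to can be sketched in plain NumPy for the 1-D case (the kd-tree trick in Moore, 1999 accelerates the E-step without changing the model; this deterministic quantile initialization is an assumption of the sketch, not from the paper):

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """Plain EM for a 1-D, k-component Gaussian mixture."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))  # spread-out initial means
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility r[n, j] of component j for data point n.
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood parameter updates.
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-9
        w = n / len(x)
    return w, mu, var
```

The mixnets of the paper combine many such low-dimensional mixtures, each over a small subset of variables, into one joint model.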
Fuzzy Bayesian Networks – A General Formalism for Representation, Inference and Learning with Hybrid Bayesian Networks
Cited by 5 (0 self)
This paper proposes a general formalism for representation, inference and learning with general hybrid Bayesian networks in which continuous and discrete variables may appear anywhere in a directed acyclic graph. The formalism fuzzifies a hybrid Bayesian network into two alternative forms: the first form replaces each continuous variable in the given directed acyclic graph (DAG) by a partner discrete variable and adds a directed link from the partner discrete variable to the continuous one. The mapping between the two variables is not a crisp quantization but is approximated (fuzzified) by a conditional Gaussian (CG) distribution. The CG model is equivalent to a fuzzy set, but no fuzzy logic formalism is employed. The conditional distribution of a discrete variable given its discrete parents is still assumed to be multinomial, as in discrete Bayesian networks. The second form only replaces each continuous variable whose descendants include discrete variables by a partner discrete variable a...
Interpolating conditional density trees
 A. Darwiche, N. Friedman (Eds.), Uncertainty in Artificial Intelligence
, 2002
Cited by 4 (0 self)
Joint distributions over many variables are frequently modeled by decomposing them into products of simpler, lower-dimensional conditional distributions, such as in sparsely connected Bayesian networks. However, automatically learning such models can be very computationally expensive when there are many datapoints and many continuous variables with complex nonlinear relationships, particularly when no good ways of decomposing the joint distribution are known a priori. In such situations, previous research has generally focused on the use of discretization techniques in which each continuous variable has a single discretization that is used throughout the entire network. In this paper, we present and compare a wide variety of tree-based algorithms for learning and evaluating conditional density estimates over continuous variables. These trees can be thought of as discretizations that vary according to the particular interactions being modeled; however, the density within a given leaf of the tree need not be assumed constant, and we show that such non-uniform leaf densities lead to more accurate density estimation. We have developed Bayesian network structure-learning algorithms that employ these tree-based conditional density representations, and we show that they can be used to practically learn complex joint probability models over dozens of continuous variables from thousands of datapoints. We focus on finding models that are simultaneously accurate, fast to learn, and fast to evaluate once they are learned.
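The non-uniform leaf densities the abstract describes can be illustrated with a minimal one-split "tree" for p(y | x) that fits a Gaussian, rather than a constant, inside each leaf (a toy sketch in NumPy; the split rule and names are assumptions of this example, not the paper's algorithm):

```python
import numpy as np

def fit_cond_density_stump(x, y):
    """One-split conditional density tree for p(y | x): partition on the
    median of x and fit a Gaussian to y in each leaf, instead of assuming
    a constant (uniform) density within the leaf."""
    thresh = np.median(x)
    leaves = {}
    for side, mask in (("left", x <= thresh), ("right", x > thresh)):
        leaves[side] = (y[mask].mean(), y[mask].var() + 1e-9)
    return thresh, leaves

def cond_density(model, x0, y0):
    """Evaluate the estimated density p(y0 | x0)."""
    thresh, leaves = model
    mu, var = leaves["left"] if x0 <= thresh else leaves["right"]
    return np.exp(-0.5 * (y0 - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
```

A deeper tree would recurse on both x and y, choosing splits that improve held-out likelihood, which is the flavor of learning the paper compares variants of.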
Fast Factored Density Estimation and Compression with Bayesian Networks
, 2002
Cited by 3 (1 self)
Many important data analysis tasks can be addressed by formulating them as probability estimation problems. For example, a popular general approach to automatic classification problems is to learn a probabilistic model of each class from data in which the classes are known, and then use Bayes's rule with these models to predict the correct classes of other data for which they are not known. Anomaly detection and scientific discovery tasks can often be addressed by learning probability models over possible events and then looking for events to which these models assign low probabilities. Many data compression algorithms such as Huffman coding and arithmetic coding rely on probabilistic models of the data stream in order to achieve high compression rates.
Web Usage Mining as a Tool for Personalization: A Survey
 International Journal of User Modeling and User-Adapted Interaction
, 2003
Cited by 2 (0 self)
This paper is a survey of recent work in the field of Web Usage Mining for the benefit of research on the personalization of Web-based information services. The essence of personalization is the adaptability of information systems to the needs of their users. This issue is becoming increasingly important on the Web, as non-expert users are overwhelmed by the quantity of information available online, while commercial Web sites strive to add value to their services in order to create loyal relationships with their visitors-customers. This article views Web Personalization through the prism of personalization policies adopted by Web sites, implementing a variety of functions. In this context, the area of Web usage mining is a valuable source of ideas and methods for the implementation of personalization functionality. We therefore present a survey of the most recent work in the field of Web usage mining, focusing on the problems that have been identified and the solutions that have been proposed.