Results 1–10 of 25
Mixnets: Factored Mixtures of Gaussians in Bayesian Networks with Mixed Continuous and Discrete Variables
, 2000
Abstract

Cited by 12 (2 self)
Recently developed techniques have made it possible to quickly learn accurate probability density functions from data in low-dimensional continuous spaces. In particular, mixtures of Gaussians can be fitted to data very quickly using an accelerated EM algorithm that employs multiresolution kd-trees (Moore, 1999). In this paper, we propose a kind of Bayesian network in which low-dimensional mixtures of Gaussians over different subsets of the domain's variables are combined into a coherent joint probability model over the entire domain. The network is also capable of modeling complex dependencies between discrete variables and continuous variables without requiring discretization of the continuous variables. We present efficient heuristic algorithms for automatically learning these networks from data, and perform comparative experiments illustrating how well these networks model real scientific data and synthetic data. We also briefly discuss some possible improvements to the networks, as well as possible applications.
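The kd-tree-accelerated EM of Moore (1999) is beyond a short example, but the underlying fitting procedure the abstract relies on, plain EM for a mixture of Gaussians, can be sketched in a few lines. This is a minimal 1-D illustration with quantile-based initialization; the function name and setup are invented here, not the paper's factored, accelerated method.

```python
import numpy as np

def fit_gmm_1d(x, k=2, iters=50):
    """Fit a 1-D mixture of k Gaussians with plain (unaccelerated) EM."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread initial means over the data
    var = np.full(k, np.var(x))                     # shared initial variance
    pi = np.full(k, 1.0 / k)                        # uniform mixing weights
    for _ in range(iters):
        # E-step: responsibilities r[n, j] = P(component j | x_n)
        d = x[:, None] - mu[None, :]
        log_p = -0.5 * d**2 / var - 0.5 * np.log(2 * np.pi * var) + np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)   # stabilize before exponentiating
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances from responsibilities
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu)**2).sum(axis=0) / nk
    return pi, mu, var

# Two well-separated clusters; EM should recover means near 0 and 5.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(5.0, 1.0, 500)])
pi, mu, var = fit_gmm_1d(data, k=2)
```

The kd-tree acceleration in the cited work speeds up exactly the E-step responsibilities computed above by grouping nearby datapoints.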
Continuous Sigmoidal Belief Networks Trained Using Slice Sampling
 Advances in Neural Information Processing Systems 9
Abstract

Cited by 10 (3 self)
Real-valued random hidden variables can be useful for modelling latent structure that explains correlations among observed variables. I propose a simple unit that adds zero-mean Gaussian noise to its input before passing it through a sigmoidal squashing function. Such units can produce a variety of useful behaviors, ranging from deterministic to binary stochastic to continuous stochastic. I show how "slice sampling" can be used for inference and learning in top-down networks of these units and demonstrate learning on two simple problems.

1 Introduction

A variety of unsupervised connectionist models containing discrete-valued hidden units have been developed. These include Boltzmann machines (Hinton and Sejnowski 1986), binary sigmoidal belief networks (Neal 1992) and Helmholtz machines (Hinton et al. 1995; Dayan et al. 1995). However, some hidden variables, such as translation or scaling in images of shapes, are best represented using continuous values. Continuous-valued Bolt...
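The proposed unit is easy to simulate: zero-mean Gaussian noise is added to the input before the sigmoid, and a slope parameter moves the unit between regimes. A minimal sketch (the `gain` parameter is an illustrative stand-in for the paper's slope parameter; slice sampling itself is not shown):

```python
import numpy as np

def sigmoidal_unit(x, noise_std, gain=1.0, rng=None):
    """Add zero-mean Gaussian noise to the input, then squash it through a
    sigmoid. Small noise_std gives near-deterministic behaviour; a large
    gain saturates the sigmoid, giving near-binary stochastic outputs."""
    rng = rng or np.random.default_rng()
    z = x + rng.normal(0.0, noise_std, size=np.shape(x))
    return 1.0 / (1.0 + np.exp(-gain * z))

rng = np.random.default_rng(0)
# Nearly deterministic regime: tiny noise, outputs hug sigmoid(x) = 0.5.
y_det = sigmoidal_unit(np.zeros(10000), noise_std=1e-3, rng=rng)
# Nearly binary stochastic regime: unit noise plus a steep sigmoid.
y_bin = sigmoidal_unit(np.zeros(10000), noise_std=1.0, gain=50.0, rng=rng)
```

With zero input, the binary regime lands close to 0 or 1 with roughly equal probability, matching the "binary stochastic" behaviour the abstract describes.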
Gene expression data analysis and modelling
 Session on Gene Expression and Genetic Networks, Pacific Symposium on Biocomputing
Representing Probabilistic Rules with Networks of Gaussian Basis Functions
 MACHINE LEARNING
, 1995
Abstract

Cited by 8 (0 self)
There is great interest in understanding the intrinsic knowledge neural networks have acquired during training. Most work in this direction is focussed on the multilayer perceptron architecture. The topic of this paper is networks of Gaussian basis functions which are used extensively as learning systems in neural computation. We show that networks of Gaussian basis functions can be generated from simple probabilistic rules. Also, if appropriate learning rules are used, probabilistic rules can be extracted from trained networks. We present methods for the reduction of network complexity with the goal of obtaining concise and meaningful rules. We show how prior knowledge can be refined or supplemented using data by employing either a Bayesian approach, by a weighted combination of knowledge bases, or by generating artificial training data representing the prior knowledge. We validate our approach using a standard statistical data set.
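The rule/network correspondence the abstract describes can be illustrated with normalized Gaussian basis activations: each basis function acts as one probabilistic rule, and class posteriors are sums of the activations of the rules associated with that class. All names below are invented for illustration; this is not the paper's rule-extraction method.

```python
import numpy as np

def gbf_posterior(x, centers, widths, priors, class_of):
    """Posterior class probabilities from a network of Gaussian basis
    functions. Each basis function j is one 'rule': a Gaussian at
    centers[j] with width widths[j], prior weight priors[j], and an
    associated class label class_of[j]."""
    d2 = (x[:, None] - centers[None, :])**2
    act = priors * np.exp(-0.5 * d2 / widths**2) / (widths * np.sqrt(2 * np.pi))
    act /= act.sum(axis=1, keepdims=True)          # normalized basis activations
    n_classes = max(class_of) + 1
    post = np.zeros((len(x), n_classes))
    for j, c in enumerate(class_of):
        post[:, c] += act[:, j]                    # sum the rules belonging to each class
    return post

# Two rules: "class 0 around -2" and "class 1 around +2".
x = np.array([-2.0, 0.0, 2.0])
post = gbf_posterior(x, centers=np.array([-2.0, 2.0]),
                     widths=np.array([1.0, 1.0]),
                     priors=np.array([0.5, 0.5]),
                     class_of=[0, 1])
```

Reading the network backwards, each (center, width, prior, class) tuple is a concise probabilistic rule, which is the direction of extraction the paper pursues.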
Learning Bayesian belief networks with neural network estimators
 In Neural Information Processing Systems 9
, 1997
Abstract

Cited by 6 (2 self)
In this paper we propose a method for learning Bayesian belief networks from data. The method uses artificial neural networks as probability estimators, thus avoiding the need for making prior assumptions on the nature of the probability distributions governing the relationships among the participating variables. This new method has the potential for being applied to domains containing both discrete and continuous variables arbitrarily distributed. We compare the learning performance of this new method with the performance of the method proposed by Cooper and Herskovits in [10]. The experimental results show that, although the learning scheme based on the use of ANN estimators is slower, the learning accuracy of the two methods is comparable. To appear in Advances in Neural Information Processing Systems, 1996.

1 Introduction

Bayesian belief networks (BBN), often referred to as probabilistic networks, are a powerful formalism for representing and reasoning under uncertainty. This...
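The role the ANN estimators play, learning P(child | parents) directly from data with no fixed distributional form, can be sketched with a minimal one-layer logistic estimator for a binary child. This is an illustrative stand-in, not the authors' network architecture or the Cooper–Herskovits scoring method they compare against.

```python
import numpy as np

def train_cpd_net(parents, child, lr=1.0, epochs=500):
    """Tiny one-layer 'neural' estimator of P(child = 1 | parents),
    trained by full-batch gradient descent on the logistic loss."""
    X = np.hstack([parents, np.ones((len(parents), 1))])   # add a bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - child) / len(child)           # logistic-loss gradient step
    # Return the learned conditional distribution as a callable CPD.
    return lambda pa: 1.0 / (1.0 + np.exp(-(np.append(pa, 1.0) @ w)))

# Child is a noisy copy of a single binary parent (copied 90% of the time).
rng = np.random.default_rng(0)
pa = rng.integers(0, 2, size=(2000, 1)).astype(float)
ch = np.where(rng.random(2000) < 0.9, pa[:, 0], 1 - pa[:, 0])
cpd = train_cpd_net(pa, ch)
```

A structure-learning loop would train one such estimator per candidate parent set and score the resulting conditional likelihoods.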
Interpolating conditional density trees
 A. Darwiche, N. Friedman (Eds.), Uncertainty in Artificial Intelligence
, 2002
Abstract

Cited by 6 (0 self)
Joint distributions over many variables are frequently modeled by decomposing them into products of simpler, lower-dimensional conditional distributions, such as in sparsely connected Bayesian networks. However, automatically learning such models can be very computationally expensive when there are many datapoints and many continuous variables with complex nonlinear relationships, particularly when no good ways of decomposing the joint distribution are known a priori. In such situations, previous research has generally focused on the use of discretization techniques in which each continuous variable has a single discretization that is used throughout the entire network. In this paper, we present and compare a wide variety of tree-based algorithms for learning and evaluating conditional density estimates over continuous variables. These trees can be thought of as discretizations that vary according to the particular interactions being modeled; however, the density within a given leaf of the tree need not be assumed constant, and we show that such nonuniform leaf densities lead to more accurate density estimation. We have developed Bayesian network structure-learning algorithms that employ these tree-based conditional density representations, and we show that they can be used to practically learn complex joint probability models over dozens of continuous variables from thousands of datapoints. We focus on finding models that are simultaneously accurate, fast to learn, and fast to evaluate once they are learned.
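The key idea, leaves holding non-uniform densities rather than constants, can be sketched with a one-split "tree" whose leaves are Gaussians over the child variable. This is a toy illustration under invented names, not the paper's learning or interpolation algorithms.

```python
import numpy as np

class CondDensityStump:
    """Minimal conditional density 'tree': a single split on the parent
    variable, with each leaf holding a Gaussian over the child instead of
    a constant (uniform) density."""
    def fit(self, parent, child):
        self.thresh = np.median(parent)             # split at the parent's median
        self.leaves = []
        for mask in (parent <= self.thresh, parent > self.thresh):
            # Each leaf stores (mean, std) of the child values it contains.
            self.leaves.append((child[mask].mean(), child[mask].std() + 1e-9))
        return self
    def logpdf(self, parent_val, child_val):
        mu, sd = self.leaves[0] if parent_val <= self.thresh else self.leaves[1]
        return -0.5 * ((child_val - mu) / sd)**2 - np.log(sd * np.sqrt(2 * np.pi))

# The child's distribution depends on which side of 0 the parent falls.
rng = np.random.default_rng(0)
par = rng.uniform(-1, 1, 4000)
chi = np.where(par <= 0, rng.normal(-3, 0.5, 4000), rng.normal(3, 0.5, 4000))
tree = CondDensityStump().fit(par, chi)
```

A uniform-leaf version would assign the same density to every child value inside a leaf's range; the Gaussian leaves above concentrate mass where the data actually lie, which is the accuracy gain the abstract claims.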
Multi-View 3D Object Description with Uncertain Reasoning and Machine Learning
, 2001
Short-Term Load Forecasting in Air-Conditioned Non-Residential Buildings
 In Proceedings of the 20th IEEE International Symposium on Industrial Electronics (ISIE)
, 2011
Abstract

Cited by 5 (4 self)
Abstract—Short-term load forecasting (STLF) has become an essential tool in the electricity sector. It has classically been the object of extensive research, since energy load prediction is known to be nonlinear. In a previous work, we focused on non-residential building STLF, a special case of STLF in which weather has negligible influence on the load. Here we tackle more modern buildings whose energy consumption is affected by temperature, that is, fully HVAC (Heating, Ventilating, and Air Conditioning) buildings. Still, in this problem domain, the forecasting method selected must be simple, work without tedious trial-and-error configuration or parametrisation, cope with scarce (or no) training data and be able to predict an evolving demand curve. Following our preceding research, we have avoided the inherent nonlinearity by using the work-day schedule as a day-type classifier. We have evaluated the most popular STLF systems in the literature, namely ARIMA (autoregressive integrated moving average) time series and neural networks (NN), together with an autoregressive (AR) time series model and a Bayesian network (BN), concluding that the autoregressive time series outperforms its counterparts and suffices to fulfil the stated requirements, even over a 6-day-ahead horizon.
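The autoregressive forecaster this abstract favours can be sketched as an AR(p) model fitted by least squares and iterated forward; on a purely periodic toy load curve, an AR(24) reproduces the daily cycle exactly. A minimal sketch under invented names, not the paper's experimental setup.

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares fit of an AR(p) model:
    x[t] = c + a1*x[t-1] + ... + ap*x[t-p]."""
    X = np.column_stack([series[p - i - 1 : len(series) - i - 1] for i in range(p)])
    X = np.hstack([X, np.ones((len(X), 1))])        # intercept column
    y = series[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def forecast_ar(series, coef, steps):
    """Iterate the fitted model forward, feeding predictions back in."""
    p = len(coef) - 1
    hist = list(series)
    out = []
    for _ in range(steps):
        x = np.append(hist[-1 : -p - 1 : -1], 1.0)  # last p values, newest first
        nxt = float(x @ coef)
        out.append(nxt)
        hist.append(nxt)
    return out

# Toy hourly load with a clean 24-hour cycle over 30 days.
t = np.arange(24 * 30, dtype=float)
load = 100 + 20 * np.sin(2 * np.pi * t / 24)
coef = fit_ar(load, p=24)
pred = forecast_ar(load, coef, steps=24)
```

Real building loads are noisier, which is why the paper first separates day types by work schedule before fitting the autoregressive model.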
Variational inference for continuous sigmoidal Bayesian networks
 In Sixth International Workshop on Artificial Intelligence and Statistics
, 1996
Abstract

Cited by 5 (3 self)
Latent random variables can be useful for modelling covariance relationships between observed variables. The choice of whether specific latent variables ought to be continuous or discrete is often an arbitrary one. In a previous paper, I presented a "unit" that could adapt to be continuous or binary, as appropriate for the current problem, and showed how a Markov chain Monte Carlo method could be used for inference and parameter estimation in Bayesian networks of these units. In this paper, I develop a variational inference technique in the hope that it will prove to be more computationally efficient than Monte Carlo methods. After presenting promising inference results on a toy problem, I discuss why the variational technique does not work well for parameter estimation as compared to Monte Carlo.
Fast Factored Density Estimation and Compression with Bayesian Networks
Abstract

Cited by 3 (1 self)
Keywords: Gaussian mixture models, compression, interpolating density trees, conditional density estimation

Many important data analysis tasks can be addressed by formulating them as probability estimation problems. For example, a popular general approach to automatic classification problems is to learn a probabilistic model of each class from data in which the classes are known, and then use Bayes's rule with these models to predict the correct classes of other data for which they are not known. Anomaly detection and scientific discovery tasks can often be addressed by learning probability models over possible events and then looking for events to which these models assign low probabilities. Many data compression algorithms such as Huffman coding and arithmetic coding rely on probabilistic models of the data
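The Bayes's-rule classification recipe described here can be sketched directly: fit one density model per class, then assign new points to the class with the highest prior-weighted likelihood. A minimal sketch using 1-D Gaussians as the per-class models; all names are invented for illustration.

```python
import numpy as np

def gaussian_logpdf(x, mu, var):
    """Log density of a 1-D Gaussian."""
    return -0.5 * (x - mu)**2 / var - 0.5 * np.log(2 * np.pi * var)

def bayes_classify(x, class_models, priors):
    """Pick the class whose learned density model assigns the highest
    prior-weighted likelihood to x (Bayes's rule; the normalizing
    constant is the same for every class and can be dropped)."""
    scores = [np.log(pi) + gaussian_logpdf(x, mu, var)
              for (mu, var), pi in zip(class_models, priors)]
    return int(np.argmax(scores))

# Learn one Gaussian per class from labelled data, then classify.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 1000)      # class 0 samples
b = rng.normal(4.0, 1.0, 1000)      # class 1 samples
models = [(a.mean(), a.var()), (b.mean(), b.var())]
label = bayes_classify(3.5, models, priors=[0.5, 0.5])
```

Swapping richer density estimators (such as the mixtures and density trees listed in the keywords) into `class_models` is exactly the generalization the thesis pursues.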