Results 21 - 30 of 155
A Bayesian Approach to Causal Discovery, 1997
"... We examine the Bayesian approach to the discovery of directed acyclic causal models and compare it to the constraintbased approach. Both approaches rely on the Causal Markov assumption, but the two differ significantly in theory and practice. An important difference between the approaches is that t ..."
Abstract

Cited by 80 (1 self)
 Add to MetaCart
We examine the Bayesian approach to the discovery of directed acyclic causal models and compare it to the constraint-based approach. Both approaches rely on the Causal Markov assumption, but the two differ significantly in theory and practice. An important difference between the approaches is that the constraint-based approach uses categorical information about conditional-independence constraints in the domain, whereas the Bayesian approach weighs the degree to which such constraints hold. As a result, the Bayesian approach has three distinct advantages over its constraint-based counterpart. One, conclusions derived from the Bayesian approach are not susceptible to incorrect categorical decisions about independence facts that can occur with data sets of finite size. Two, using the Bayesian approach, finer distinctions among model structures, both quantitative and qualitative, can be made. Three, information from several models can be combined to make better inferences and to better ...
Parameter adjustment in Bayes networks. The generalized noisy OR-gate
 In Proceedings of the 9th Conference on Uncertainty in Artificial Intelligence, 1993
"... Spiegelhalter and Lauritzen [15] studied sequential learning in Bayesian networks and proposed three models for the representation of conditional probabilities. A forth model, shown here, assumes that the parameter distribution is given by a product of Gaussian functions and updates them from ..."
Abstract

Cited by 69 (12 self)
 Add to MetaCart
Spiegelhalter and Lauritzen [15] studied sequential learning in Bayesian networks and proposed three models for the representation of conditional probabilities. A fourth model, shown here, assumes that the parameter distribution is given by a product of Gaussian functions and updates them from the λ and π messages of evidence propagation. We also generalize the noisy OR-gate for multivalued variables, develop an algorithm to compute probability in time proportional to the number of parents (even in networks with loops), and apply the learning model to this gate.
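The binary special case of the noisy OR-gate that this paper generalizes can be sketched in a few lines (a minimal illustration; the function and argument names are ours, not the paper's):

```python
def noisy_or(active, inhibitions):
    """P(effect = true) under a binary noisy OR-gate.

    active:      which causes are present (booleans).
    inhibitions: q_i = P(cause i fails to produce the effect on its own).

    The effect is absent only if every active cause is independently
    inhibited, so the cost is linear in the number of parents.
    """
    p_absent = 1.0
    for on, q in zip(active, inhibitions):
        if on:
            p_absent *= q  # this active cause was inhibited
    return 1.0 - p_absent
```

For example, two active causes with inhibition probabilities 0.2 and 0.5 give `noisy_or([True, True], [0.2, 0.5])` = 1 - 0.2 × 0.5 = 0.9. The linear-in-parents cost is what makes the gate attractive for nodes with many causes.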
Bayesian indoor positioning systems
 In Infocom, 2005
"... Abstract — In this paper, we introduce a new approach to location estimation where, instead of locating a single client, we simultaneously locate a set of wireless clients. We present a Bayesian hierarchical model for indoor location estimation in wireless networks. We demonstrate that our model ach ..."
Abstract

Cited by 67 (13 self)
 Add to MetaCart
In this paper, we introduce a new approach to location estimation where, instead of locating a single client, we simultaneously locate a set of wireless clients. We present a Bayesian hierarchical model for indoor location estimation in wireless networks. We demonstrate that our model achieves accuracy that is similar to other published models and algorithms. By harnessing prior knowledge, our model eliminates the requirement for training data as compared with existing approaches, thereby introducing the notion of a fully adaptive zero profiling approach to location estimation. Index Terms — Experimentation with real networks/Testbed, Statistics, WLAN, localization,
Causal independence for probability assessment and inference using Bayesian networks
 IEEE Trans. on Systems, Man and Cybernetics, 1994
"... ABayesian network is a probabilistic representation for uncertain relationships, which has proven to be useful for modeling realworld problems. When there are many potential causes of a given e ect, however, both probability assessment and inference using a Bayesian network can be di cult. In this ..."
Abstract

Cited by 65 (2 self)
 Add to MetaCart
A Bayesian network is a probabilistic representation for uncertain relationships, which has proven to be useful for modeling real-world problems. When there are many potential causes of a given effect, however, both probability assessment and inference using a Bayesian network can be difficult. In this paper, we describe causal independence, a collection of conditional independence assertions and functional relationships that are often appropriate to apply to the representation of the uncertain interactions between causes and effect. We show how the use of causal independence in a Bayesian network can greatly simplify probability assessment as well as probabilistic inference.
Bayesian and Regularization Methods for Hyperparameter Estimation in Image Restoration
 IEEE Trans. Image Processing, 1999
"... In this paper, we propose the application of the hierarchical Bayesian paradigm to the image restoration problem. We derive expressions for the iterative evaluation of the two hyperparameters applying the evidence and maximum a posteriori (MAP) analysis within the hierarchical Bayesian paradigm. We ..."
Abstract

Cited by 65 (26 self)
 Add to MetaCart
In this paper, we propose the application of the hierarchical Bayesian paradigm to the image restoration problem. We derive expressions for the iterative evaluation of the two hyperparameters by applying the evidence and maximum a posteriori (MAP) analysis within the hierarchical Bayesian paradigm. We show analytically that the analysis provided by the evidence approach is more realistic and appropriate than the MAP approach for the image restoration problem. We furthermore study the relationship between the evidence and an iterative approach resulting from the set theoretic regularization approach for estimating the two hyperparameters, or their ratio, defined as the regularization parameter. Finally, the proposed algorithms are tested experimentally.
A Bayesian approach to learning causal networks
 In Uncertainty in AI: Proceedings of the Eleventh Conference, 1995
"... Whereas acausal Bayesian networks represent probabilistic independence, causal Bayesian networks represent causal relationships. In this paper, we examine Bayesian methods for learning both types of networks. Bayesian methods for learning acausal networks are fairly well developed. These methods oft ..."
Abstract

Cited by 57 (11 self)
 Add to MetaCart
Whereas acausal Bayesian networks represent probabilistic independence, causal Bayesian networks represent causal relationships. In this paper, we examine Bayesian methods for learning both types of networks. Bayesian methods for learning acausal networks are fairly well developed. These methods often employ assumptions to facilitate the construction of priors, including the assumptions of parameter independence, parameter modularity, and likelihood equivalence. We show that although these assumptions also can be appropriate for learning causal networks, we need additional assumptions in order to learn causal networks. We introduce two sufficient assumptions, called mechanism independence and component independence. We show that these new assumptions, when combined with parameter independence, parameter modularity, and likelihood equivalence, allow us to apply methods for learning acausal networks to learn causal networks.
Progress-Based Regulation of Low-Importance Processes
 In Proceedings of the Seventeenth ACM Symposium on Operating Systems Principles, 1999
"... MS Manners is a mechanism that employs progressbased regulation to prevent resource contention with lowimportance processes from degrading the performance of highimportance processes. The mechanism assumes that resource contention that degrades the performance of a highimportance process will als ..."
Abstract

Cited by 50 (1 self)
 Add to MetaCart
MS Manners is a mechanism that employs progress-based regulation to prevent resource contention with low-importance processes from degrading the performance of high-importance processes. The mechanism assumes that resource contention that degrades the performance of a high-importance process will also retard the progress of the low-importance process. MS Manners detects this contention by monitoring the progress of the low-importance process and inferring resource contention from a drop in the progress rate. This technique recognizes contention over any system resource, as long as the performance impact on contending processes is roughly symmetric. MS Manners employs statistical mechanisms to deal with stochastic progress measurements; it automatically calibrates a target progress rate, so no manual tuning is required; it supports multiple progress metrics from applications that perform several distinct tasks; and it orchestrates multiple low-importance processes to prevent measurement i...
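The core feedback loop described in the abstract can be sketched as follows (a hypothetical illustration in the spirit of the mechanism; the class, method names, and threshold are ours, and the real system uses statistical tests over stochastic measurements rather than a fixed cutoff):

```python
class ProgressRegulator:
    """Sketch of progress-based regulation: infer resource contention
    from a drop in a low-importance task's progress rate."""

    def __init__(self, threshold=0.5):
        # Fraction of the target rate below which we infer contention.
        self.threshold = threshold
        self.target_rate = None

    def calibrate(self, solo_rates):
        # The target progress rate is calibrated automatically from
        # measurements taken while the task runs uncontended,
        # so no manual tuning is required.
        self.target_rate = sum(solo_rates) / len(solo_rates)

    def contended(self, current_rate):
        # A rate well below target suggests a high-importance process
        # is competing for some resource; the task should back off.
        return current_rate < self.threshold * self.target_rate
```

A caller would calibrate once while the low-importance task runs alone, then periodically sample its progress rate and suspend the task whenever `contended(...)` returns true.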
Robust Learning with Missing Data, 1996
"... Bayesian methods are becoming increasingly popular in the development of intelligent machines. Bayesian Belief Networks (bbns) are nowaday a prominent reasoning method and, during the past few years, several efforts have been addressed to develop methods able to learn bbns directly from databases. H ..."
Abstract

Cited by 48 (5 self)
 Add to MetaCart
Bayesian methods are becoming increasingly popular in the development of intelligent machines. Bayesian Belief Networks (bbns) are nowadays a prominent reasoning method and, during the past few years, several efforts have been directed at developing methods able to learn bbns directly from databases. However, all these methods assume that the database is complete or, at least, that unreported data are missing at random. Unfortunately, real-world databases are rarely complete and the "Missing at Random" assumption is often unrealistic. This paper shows that this assumption can dramatically affect the reliability of the learned bbn and introduces a robust method to learn conditional probabilities in a bbn which does not rely on this assumption. In order to drop this assumption, we have to change the overall learning strategy used by traditional Bayesian methods: our method bounds the set of all posterior probabilities consistent with the database and proceeds by refining this set as more i...
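The bounding idea can be illustrated with a minimal interval estimate (a sketch of the general principle, not the paper's algorithm; the function name is ours): without a missing-at-random assumption, each missing case could have been either a success or a failure, which brackets the conditional probability.

```python
def prob_bounds(successes, failures, missing):
    """Interval for P(success) when `missing` cases are unreported.

    The lower bound treats every missing case as a failure; the upper
    bound treats every missing case as a success. No assumption about
    the missing-data mechanism is needed.
    """
    total = successes + failures + missing
    return successes / total, (successes + missing) / total
```

With 3 observed successes, 5 observed failures, and 2 missing cases, `prob_bounds(3, 5, 2)` brackets the probability in [0.3, 0.5]; more data refines the interval, mirroring the progressive refinement the abstract describes.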
Asymptotic model selection for directed networks with hidden variables, 1996
"... We extend the Bayesian Information Criterion (BIC), an asymptotic approximation for the marginal likelihood, to Bayesian networks with hidden variables. This approximation can be used to select models given large samples of data. The standard BIC as well as our extension punishes the complexity of a ..."
Abstract

Cited by 46 (13 self)
 Add to MetaCart
We extend the Bayesian Information Criterion (BIC), an asymptotic approximation for the marginal likelihood, to Bayesian networks with hidden variables. This approximation can be used to select models given large samples of data. The standard BIC as well as our extension punishes the complexity of a model according to the dimension of its parameters. We argue that the dimension of a Bayesian network with hidden variables is the rank of the Jacobian matrix of the transformation between the parameters of the network and the parameters of the observable variables. We compute the dimensions of several networks including the naive Bayes model with a hidden root node.
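The scoring idea can be sketched as follows (a minimal illustration; function names are ours, and a numerical rank estimate stands in for the analytic Jacobian rank the paper computes):

```python
import numpy as np

def bic(loglik, dim, n):
    """BIC approximation to the log marginal likelihood:
    log P(D | M) ≈ loglik - (dim / 2) * log(n)."""
    return loglik - 0.5 * dim * np.log(n)

def jacobian_rank(f, theta, eps=1e-6):
    """Effective dimension: numerical rank of the Jacobian of the map f
    from network parameters theta to observable-distribution parameters,
    estimated by forward finite differences."""
    theta = np.asarray(theta, float)
    f0 = np.asarray(f(theta), float)
    J = np.empty((f0.size, theta.size))
    for j in range(theta.size):
        t = theta.copy()
        t[j] += eps
        J[:, j] = (np.asarray(f(t), float) - f0) / eps
    return int(np.linalg.matrix_rank(J, tol=1e-4))
```

For instance, a map in which two parameters enter only through their sum has Jacobian rank 1, not 2: `jacobian_rank(lambda t: [t[0] + t[1], 2 * (t[0] + t[1])], [0.3, 0.4])` returns 1, so such redundant parameters are not penalized by the extended BIC.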
A variational approach to Bayesian logistic regression models and their extensions, 1996
"... We consider a logistic regression model with a Gaussian prior distribution over the parameters. We show that accurate variational techniques can be used to obtain a closed form posterior distribution over the parameters given the data thereby yielding a posterior predictive model. The results are st ..."
Abstract

Cited by 46 (2 self)
 Add to MetaCart
We consider a logistic regression model with a Gaussian prior distribution over the parameters. We show that accurate variational techniques can be used to obtain a closed form posterior distribution over the parameters given the data, thereby yielding a posterior predictive model. The results are straightforwardly extended to (binary) belief networks. For the belief networks we also derive closed form parameter posteriors in the presence of missing values. Finally, we show that the dual of the regression problem gives a latent variable density model, the variational formulation of which leads to exactly solvable EM updates.
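The closed-form Gaussian posterior can be illustrated with the standard variational bound on the logistic sigmoid (a sketch under a Gaussian prior N(m0, S0); variable names and the fixed iteration count are ours):

```python
import numpy as np

def lam(xi):
    # λ(ξ) = tanh(ξ/2) / (4ξ), with the ξ → 0 limit equal to 1/8.
    xi = np.asarray(xi, float)
    out = np.full_like(xi, 0.125)
    nz = np.abs(xi) > 1e-8
    out[nz] = np.tanh(xi[nz] / 2.0) / (4.0 * xi[nz])
    return out

def variational_logistic(X, y, m0, S0, iters=20):
    """Variational Bayes for logistic regression with labels y in {0, 1}.
    Returns the Gaussian posterior mean m and covariance S over weights."""
    S0_inv = np.linalg.inv(S0)
    xi = np.ones(len(X))                       # variational parameters
    for _ in range(iters):
        L = lam(xi)
        S_inv = S0_inv + 2.0 * (X.T * L) @ X   # posterior precision
        S = np.linalg.inv(S_inv)
        m = S @ (S0_inv @ m0 + X.T @ (y - 0.5))
        # Re-optimize each ξ_n given the current posterior moments.
        A = S + np.outer(m, m)
        xi = np.sqrt(np.einsum('nd,de,ne->n', X, A, X))
    return m, S
```

Because each ξ update tightens a quadratic lower bound on the log likelihood, every iteration is a closed-form Gaussian update, in line with the abstract's claim.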