## Combining Multiple Perspectives (2000)

Venue: Proceedings of the Seventeenth International Conference on Machine Learning

Citations: 4 (3 self)

### BibTeX

@INPROCEEDINGS{Banerjee00combiningmultiple,
  author = {Bikramjit Banerjee and Sandip Debnath and Sandip Sen},
  title = {Combining Multiple Perspectives},
  booktitle = {Proceedings of the Seventeenth International Conference on Machine Learning},
  year = {2000},
  pages = {33--40},
  publisher = {Morgan Kaufmann}
}

### Abstract

We consider a group of Bayesian learners whose interactions with the environment and with other agents allow them to improve their model of the dependencies among the various factors that influence those interactions. Effective collaboration can improve the performance of isolated individual learners. We present a mechanism for pooling the knowledge of many modelers in the domain, each of whom may have only partial access to the environment. The application domain used in this study is a multiagent negotiation problem. We present results comparing the performance of such knowledge composition against isolated learners, as well as against a learner with complete access to the environment.

### Citations

1140 | A Bayesian Method for the Induction of Probabilistic Networks from Data - Cooper, Herskovits - 1992 |

981 | An Introduction to Bayesian Networks - Jensen - 1996 |

Citation Context: ...l falls short of the model that can be built if a single agent had full observation, i.e., could see all the factors influencing B and observe all its actions. 2. Bayesian Networks A Bayesian network (Jensen, 1996; Charniak, 1991) is a graphical method of representing dependencies and independencies among different variables that together define a model of a real-world situation. Technically, it is a directed ac...

953 | Learning Bayesian networks: The combination of knowledge and statistical data - Heckerman, Geiger, et al. - 1995 |

685 | Approximating discrete probability distribution with dependence trees - Chow, Liu - 1968 |

562 | An efficient boosting algorithm for combining preferences - Freund, Iyer, et al. |

255 | Bayesian networks without tears - Charniak - 1991 |

Citation Context: ...of the model that can be built if a single agent had full observation, i.e., could see all the factors influencing B and observe all its actions. 2. Bayesian Networks A Bayesian network (Jensen, 1996; Charniak, 1991) is a graphical method of representing dependencies and independencies among different variables that together define a model of a real-world situation. Technically, it is a directed acyclic graph with...
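The definition quoted in the contexts above (a Bayesian network as a directed acyclic graph whose nodes carry conditional probability tables) can be sketched in a few lines. This is a toy illustration with made-up variable names and probabilities, not a structure from the paper:

```python
# Toy Bayesian network: a DAG plus a conditional probability table (CPT)
# per node. Variables, structure, and numbers are illustrative only.

# Each variable maps to the tuple of its parents in the DAG.
parents = {"Rain": (), "WetGrass": ("Rain",)}

# CPT: (own value, tuple of parent values) -> probability.
cpt = {
    "Rain": {(True, ()): 0.2, (False, ()): 0.8},
    "WetGrass": {
        (True, (True,)): 0.9, (False, (True,)): 0.1,
        (True, (False,)): 0.1, (False, (False,)): 0.9,
    },
}

def joint(assignment):
    """P(full assignment) via the chain rule over the DAG:
    product of P(X_i | parents(X_i))."""
    p = 1.0
    for var, pars in parents.items():
        par_vals = tuple(assignment[q] for q in pars)
        p *= cpt[var][(assignment[var], par_vals)]
    return p

# P(Rain=T, WetGrass=T) = P(Rain=T) * P(WetGrass=T | Rain=T)
print(joint({"Rain": True, "WetGrass": True}))
```

The dependency structure lets the joint distribution factor into small local tables, which is what makes the representation compact.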

253 | Operations for learning with graphical models - Buntine - 1994 |

Citation Context: ...he probability of occurrence of the particular event for the given causes. 2.1 Learning Bayesian Networks The problem of learning Bayesian networks from data has been addressed by several researchers (Buntine, 1994; Chow & Liu, 1968; Heckerman et al., 1995; Provan & Singh, 1996; Spirtes et al., 1993). There are broadly a couple of trends in learning Bayesian networks from data. One is based on non-Bayesian appr...

223 | The Bayesian Structural EM Algorithm - Friedman - 1998 |

214 | Minimum complexity density estimation - Barron, Cover - 1991 |

199 | Learning Bayesian belief networks: An approach based on the MDL principle - Lam, Bacchus - 1994 |

61 | Bayesian Belief Networks: from construction to inference - Bouckaert - 1995 |

Citation Context: ...an Scoring Metrics (Cooper & Herskovits, 1992; Heckerman et al., 1995) that optimize a log-likelihood or an entropy score over various possible structures. The minimum description length principle (Bouckaert, 1995; Lam & Bacchus, 1994). These techniques are asymptotically optimal, given complete data. Extended likelihood approaches, which use a different scoring criterion to be maximized rather than the maxi...
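The context above mentions scoring metrics that trade off data fit (log-likelihood) against model complexity (minimum description length). A toy sketch of that idea, not the exact metric from any of the cited papers: score a binary variable with or without a candidate parent, using maximum-likelihood counts plus an MDL penalty of (log n)/2 per free parameter.

```python
import math
from collections import Counter

def mdl_score(data, child, parent=None):
    """MDL-style score (lower is better) for binary `child`, optionally
    conditioned on binary `parent`. Illustrative sketch: negative
    max-likelihood log-likelihood plus a (log n)/2 penalty per parameter."""
    n = len(data)
    if parent is None:
        counts = Counter(row[child] for row in data)
        ll = sum(c * math.log(c / n) for c in counts.values())
        k = 1  # one free parameter for an unconditioned binary variable
    else:
        pair = Counter((row[parent], row[child]) for row in data)
        par = Counter(row[parent] for row in data)
        ll = sum(c * math.log(c / par[p]) for (p, _), c in pair.items())
        k = 2  # one free parameter per parent value
    return -ll + 0.5 * k * math.log(n)

# When B mostly tracks A, the structure A -> B should score better
# (lower) than modeling B as independent, despite the extra parameter.
data = ([{"A": 1, "B": 1}] * 40 + [{"A": 0, "B": 0}] * 40
        + [{"A": 1, "B": 0}] * 10 + [{"A": 0, "B": 1}] * 10)
print(mdl_score(data, "B"), mdl_score(data, "B", "A"))
```

The same search-and-score pattern extends to full networks by summing the per-node scores over candidate parent sets.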

25 | Learning Bayesian networks in the presence of missing values and hidden variables - Friedman - 1997 |

19 | A Fast Model Selection Procedure for Large Families of Models - Edwards, Havranek - 1987 |

13 | Communication and Belief Changes in a Society of Agents: Towards a Formal Model of an Autonomous Agent - Gaspar - 1991 |

Citation Context: ...bserved by both agents which skews the combined distribution). 6. Related Work There has been some study of multiagent belief revision in the context of revising an agent's model of other agents' models (Gaspar, 1991; van der Meyden, 1994). More recently Kfir-Dahav and Tennenholtz (1996) consider multi-agent belief revision in heterogeneous systems where each agent has some private and some shared knowledge about ...

6 | Deriving a Minimal I-Map of a Belief Network Relative to a Target Ordering of its Nodes - Matzkevich, Abramson - 1993 |

4 | Three approaches to probability model selection - Poland, Shachter - 1994 |

3 | Using Bayesian networks to aid negotiations among agents - Banerjee, Debnath, et al. - 1999 |

2 | A tutorial on learning Bayesian networks (Microsoft Research Technical Report MSR-TR-95-06) - Heckerman - 1995 |

Citation Context: ...rather than batch learning from incomplete data. Though maintaining the instance vectors in memory for batch learning is a possibility, it is not a viable solution. The concept of sufficient statistics (Heckerman, 1995) enables one to compute Pr[X_i | Pa(X_i)] by keeping counts of each distinct instantiation of X_i and its parents, or in general, the counts of distinct instantiations of the vector X, given by N(x). T...
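The sufficient-statistics idea quoted above — keeping counts of each instantiation of a variable and its parents instead of storing past instances — can be sketched as a small incremental counter. The class and method names are hypothetical, not from the paper or the technical report:

```python
from collections import Counter

class SufficientStats:
    """Incrementally maintain N(x_i, pa_i) and N(pa_i) so that
    Pr[X_i | parents(X_i)] can be estimated online, without keeping
    the instance vectors in memory (sketch of the idea, not the
    paper's implementation)."""

    def __init__(self):
        self.joint = Counter()   # counts of (x_i, parent-value tuple)
        self.parent = Counter()  # counts of parent-value tuples

    def update(self, x_i, parent_vals):
        """Absorb one observation; parent_vals must be a hashable tuple."""
        self.joint[(x_i, parent_vals)] += 1
        self.parent[parent_vals] += 1

    def prob(self, x_i, parent_vals):
        """ML estimate of Pr[X_i = x_i | parents = parent_vals]."""
        if self.parent[parent_vals] == 0:
            return 0.0
        return self.joint[(x_i, parent_vals)] / self.parent[parent_vals]

stats = SufficientStats()
for x in (1, 1, 1, 0):
    stats.update(x, (0,))
print(stats.prob(1, (0,)))  # 3 of 4 observations with parent value (0,)
```

Because only counts are stored, memory grows with the number of distinct instantiations rather than with the number of observations, which is what makes online learning viable here.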

1 | Learning from incomplete data (Technical Report) - Ghahramani - 1994 |

1 | Kutato: An entropy driven system for construction of probabilistic expert systems from databases - Herskovits - 1990 |

1 | Multiagent belief revision - Kfir-Dahav, Tennenholtz - 1996 |