## Parameter learning of logic programs for symbolic-statistical modeling (2001)

### Download Links

- [www.cs.cmu.edu]
- [www.cs.umd.edu]
- [www.jair.org]
- [sato-www.cs.titech.ac.jp]
- [www.cs.washington.edu]
- DBLP

### Other Repositories/Bibliography

Venue: Journal of Artificial Intelligence Research

Citations: 91 (19 self)

### BibTeX

@ARTICLE{Sato01parameterlearning,
  author  = {Taisuke Sato and Yoshitaka Kameya},
  title   = {Parameter learning of logic programs for symbolic-statistical modeling},
  journal = {Journal of Artificial Intelligence Research},
  volume  = {15},
  year    = {2001},
  pages   = {391--454}
}

### Abstract

We propose a logical/mathematical framework for statistical parameter learning of parameterized logic programs, i.e. definite clause programs containing probabilistic facts with a parameterized distribution. It extends the traditional least Herbrand model semantics in logic programming to distribution semantics, a possible world semantics with a probability distribution which is unconditionally applicable to arbitrary logic programs, including ones for HMMs, PCFGs and Bayesian networks. We also propose a new EM algorithm, the graphical EM algorithm, that runs for a class of parameterized logic programs representing sequential decision processes where each decision is exclusive and independent. It runs on a new data structure called support graphs, which describes the logical relationship between observations and their explanations, and learns parameters by computing inside and outside probabilities generalized for logic programs. The complexity analysis shows that, when combined with OLDT search for all explanations for observations, the graphical EM algorithm, despite its generality, has the same time complexity as existing EM algorithms, i.e. the Baum-Welch algorithm for HMMs, the Inside-Outside algorithm for PCFGs, and the one for singly connected Bayesian networks, which have been developed independently in each research field. Learning experiments with PCFGs using two corpora of moderate size indicate that the graphical EM algorithm can significantly outperform the Inside-Outside algorithm.
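The core EM idea sketched in the abstract — expected counts of probabilistic facts accumulated over exclusive, independent explanations for each observation — can be illustrated with a minimal toy example. This is a hypothetical sketch, not the paper's graphical EM algorithm or its support-graph data structure: the model (two independent tosses of one parameterized coin switch, observed only as the number of heads) and all names are invented for illustration.

```python
from collections import defaultdict

# Toy parameterized program: two independent tosses of one coin switch.
# Each observation is the number of heads (0, 1, or 2).
# Explanations for each observation are exclusive and independent:
#   0 heads -> (T,T); 1 head -> (H,T) or (T,H); 2 heads -> (H,H)
EXPLANATIONS = {
    0: [('T', 'T')],
    1: [('H', 'T'), ('T', 'H')],
    2: [('H', 'H')],
}

def em(observations, iters=100):
    """observations: {num_heads: count}. Returns learned switch parameters."""
    theta = {'H': 0.5, 'T': 0.5}              # initial parameters
    for _ in range(iters):
        counts = defaultdict(float)            # E-step: expected fact counts
        for obs, n in observations.items():
            expls = EXPLANATIONS[obs]
            # probability of an explanation = product of its fact probabilities
            probs = [theta[a] * theta[b] for a, b in expls]
            z = sum(probs)                     # P(obs) = sum over exclusive expls
            for (a, b), p in zip(expls, probs):
                w = n * p / z                  # expected count of this explanation
                counts[a] += w
                counts[b] += w
        total = counts['H'] + counts['T']      # M-step: renormalize
        theta = {'H': counts['H'] / total, 'T': counts['T'] / total}
    return theta
```

For instance, `em({0: 10, 1: 40, 2: 50})` converges to the maximum-likelihood estimate θ(H) = 140/200 = 0.7, the overall fraction of heads among the 200 tosses. The graphical EM algorithm of the paper performs this same E-step far more efficiently by sharing subcomputations across explanations on the support graph, rather than enumerating explanations explicitly as above.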