Learning in First-Order Probabilistic Representations
Ph.D. General Examination; Department of Computer Science and Engineering; University of Washington
Seattle, WA 98195
Learning probabilistic models has been an important direction of research in the machine learning community, as has learning first-order logic models. Ideally, we would like to combine the two, i.e., to learn first-order probabilistic models. Because of their ability to handle uncertainty and to compactly model complex domains, these models are the object of growing research interest. This research comprises three main directions: knowledge-based model construction (KBMC), stochastic logic programs (SLPs), and probabilistic relational models (PRMs). This paper surveys these approaches and suggests opportunities for further research and improvement, particularly with regard to modifying them so that they scale to large amounts of training data.