Results 1-10 of 42
A framework for incorporating general domain knowledge into latent Dirichlet allocation using first-order logic
 In Proceedings of the 22nd International Joint Conference on Artificial Intelligence, 2011
Abstract

Cited by 20 (2 self)
Topic models have been used successfully for a variety of problems, often in the form of application-specific extensions of the basic Latent Dirichlet Allocation (LDA) model. Because deriving these new models in order to encode domain knowledge can be difficult and time-consuming, we propose the Fold·all model, which allows the user to specify general domain knowledge in First-Order Logic (FOL). However, combining topic modeling with FOL can result in inference problems beyond the capabilities of existing techniques. We have therefore developed a scalable inference technique using stochastic gradient descent which may also be useful to the Markov Logic Network (MLN) research community. Experiments demonstrate the expressive power of Fold·all, as well as the scalability of our proposed inference method.
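The abstract names stochastic gradient descent as the workhorse of Fold·all's inference but gives no details. As a reminder of the bare technique only (the objective below is a toy quadratic, not the Fold·all model), a minimal sketch:

```python
import random

def sgd(grad, x0, lr=0.1, steps=1000, seed=0):
    """Minimise an objective given only a stochastic gradient oracle.

    grad(x, rng) must return a noisy estimate of the true gradient at x.
    Uses a 1/sqrt(t) decaying step size.
    """
    rng = random.Random(seed)
    x = x0
    for t in range(1, steps + 1):
        x = x - (lr / t ** 0.5) * grad(x, rng)
    return x

# Toy objective E[(x - 3)^2] with noisy gradients; the minimiser is x = 3.
def noisy_grad(x, rng):
    return 2.0 * (x - 3.0) + rng.gauss(0.0, 0.5)
```

With enough steps, `sgd(noisy_grad, 0.0)` lands near 3 despite the gradient noise; the decaying step size is what lets the iterates settle.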
Lifted inference for relational continuous models
 In Proc. of the 26th Conference on Uncertainty in Artificial Intelligence (UAI-10), 2010
Abstract

Cited by 16 (3 self)
Relational Continuous Models (RCMs) represent joint probability densities over attributes of objects when the attributes have continuous domains. With relational representations, they can compactly model joint probability distributions over large numbers of variables in a natural way. This paper presents a new exact lifted inference algorithm for RCMs, which allows it to scale up to large models of real-world applications. The algorithm applies to Relational Pairwise Models, which are (relational) products of potentials of arity 2. Our algorithm is unique in two ways. First, it substantially improves the efficiency of lifted inference with variables of continuous domains: when a relational model has Gaussian potentials, it takes only linear time, compared to the cubic time of previous methods. Second, it is the first exact inference algorithm which handles RCMs in a lifted way. The algorithm is illustrated over an example from econometrics. Experimental results show that our algorithm outperforms both a ground-level inference algorithm and an algorithm built with previously known lifted methods.
Collective Graph Identification
Abstract

Cited by 15 (4 self)
Data describing networks (communication networks, transaction networks, disease transmission networks, collaboration networks, etc.) is becoming increasingly ubiquitous. While this observational data is useful, it often only hints at the actual underlying social or technological structures which give rise to the interactions. For example, an email communication network provides useful insight but is not the same as the "real" social network among individuals. In this paper, we introduce the problem of graph identification, i.e., the discovery of the true graph structure underlying an observed network. We cast the problem as a probabilistic inference task, in which we must infer the nodes, edges, and node labels of a hidden graph, based on evidence provided by the observed network. This in turn corresponds to the problems of performing entity resolution, link prediction, and node labeling to infer the hidden graph. While each of these problems has been studied separately, they have never been considered together as a coherent task. We present a simple yet novel approach to address all three problems simultaneously. Our approach, called C3, consists of Coupled Collective Classifiers that are iteratively applied to propagate information among solutions to the problems. We empirically demonstrate that C3 is superior, in terms of both predictive accuracy and runtime, to state-of-the-art probabilistic approaches on three real-world problems.
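The abstract describes C3 only as classifiers "iteratively applied to propagate information among solutions." A sketch of that coupling pattern, with entirely hypothetical predictor functions (the paper's actual features and classifiers are not given here):

```python
def coupled_iteration(predictors, state, rounds=5):
    """Skeleton of coupled collective classification: each task's predictor
    is re-run against the other tasks' latest predictions, so information
    propagates among the solutions over successive rounds.

    predictors: dict mapping task name -> function(state) -> new predictions
    state:      dict mapping task name -> current predictions
    """
    for _ in range(rounds):
        for task, predict in predictors.items():
            state[task] = predict(state)  # sees the freshest other outputs
    return state
```

With mutually dependent toy predictors (e.g. one task reading the other's current answer), the loop reaches a joint fixed point after a few rounds; the real system plugs in entity resolution, link prediction, and node labeling here.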
Location-based reasoning about complex multi-agent behavior
 In Journal of Artificial Intelligence Research, AI Access Foundation, 2011
Abstract

Cited by 12 (3 self)
Recent research has shown that surprisingly rich models of human activity can be learned from GPS (positional) data. However, most effort to date has concentrated on modeling single individuals or statistical properties of groups of people. Moreover, prior work focused solely on modeling actual successful executions (and not failed or attempted executions) of the activities of interest. We, in contrast, take on the task of understanding human interactions, attempted interactions, and intentions from noisy sensor data in a fully relational multi-agent setting. We use a real-world game of capture the flag to illustrate our approach in a well-defined domain that involves many distinct cooperative and competitive joint activities. We model the domain using Markov logic, a statistical-relational language, and learn a theory that jointly denoises the data and infers occurrences of high-level activities, such as a player capturing an enemy. Our unified model combines constraints imposed by the geometry of the game area, the motion model of the players, and the rules and dynamics of the game in a probabilistically and logically sound fashion. We show that while it may be impossible to directly detect a multi-agent activity due to sensor noise or malfunction, the occurrence of the activity can still be inferred by considering both its impact on the ...
Growing a Tree in the Forest: Constructing Folksonomies by Integrating Structured Metadata
Abstract

Cited by 8 (4 self)
Many social Web sites allow users to annotate the content with descriptive metadata, such as tags, and more recently to organize content hierarchically. These types of structured metadata provide valuable evidence for learning how a community organizes knowledge. For instance, we can aggregate many personal hierarchies into a common taxonomy, also known as a folksonomy, that will aid users in visualizing and browsing social content and help them organize their own content. However, learning from social metadata presents several challenges, since it is sparse, shallow, ambiguous, noisy, and inconsistent. We describe an approach to folksonomy learning based on relational clustering, which exploits structured metadata contained in personal hierarchies. Our approach clusters similar hierarchies using their structure and tag statistics, then incrementally weaves them into a deeper, bushier tree. We study folksonomy learning using social metadata extracted from the photo-sharing site Flickr, and demonstrate that the proposed approach addresses the challenges. Moreover, compared to previous work, the approach produces larger, more accurate folksonomies, and in addition, scales better.
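The abstract says hierarchies are clustered "using their structure and tag statistics" but does not define the similarity. One plausible ingredient (an assumption of this sketch, not the paper's actual measure) is to blend root-tag agreement with Jaccard overlap of the child tag sets:

```python
def jaccard(a, b):
    """Overlap of two tag sets; 0.0 when both are empty."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def hierarchy_similarity(h1, h2, root_weight=0.5):
    """Blend exact root-tag agreement with child-tag overlap.

    Each shallow hierarchy is (root_tag, [child_tags...]); root_weight
    trades off the two signals and is an arbitrary choice here.
    """
    root_sim = 1.0 if h1[0] == h2[0] else 0.0
    return root_weight * root_sim + (1.0 - root_weight) * jaccard(h1[1], h2[1])
```

For example, two "animals" hierarchies sharing one of three distinct child tags score 0.5 + 0.5 · 1/3 ≈ 0.67, while hierarchies with different roots and disjoint children score 0.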
Just Add Weights: Markov Logic for the Semantic Web
Abstract

Cited by 8 (0 self)
In recent years, it has become increasingly clear that the vision of the Semantic Web requires uncertain reasoning over rich, first-order representations. Markov logic brings the power of probabilistic modeling to first-order logic by attaching weights to logical formulas and viewing them as templates for features of Markov networks. This gives natural probabilistic semantics to uncertain or even inconsistent knowledge bases with minimal engineering effort. Inference algorithms for Markov logic draw on ideas from satisfiability, Markov chain Monte Carlo, and knowledge-based model construction. Learning algorithms are based on the conjugate gradient algorithm, pseudo-likelihood, and inductive logic programming. Markov logic has been successfully applied to problems in entity resolution, link prediction, information extraction, and others, and is the basis of the open-source Alchemy system.
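The phrase "attaching weights to logical formulas and viewing them as templates for features of Markov networks" corresponds to the standard Markov logic distribution P(x) ∝ exp(Σᵢ wᵢ nᵢ(x)), where nᵢ(x) counts satisfied groundings of formula i. A brute-force sketch over a tiny world (the Smokes/Cancer rule is a common illustration, not taken from this paper):

```python
import math
from itertools import product

def world_probabilities(formulas, atoms):
    """Weight every truth assignment ('world') by exp(sum of weights of the
    formulas it satisfies), then normalise -- the Markov logic distribution,
    computed by brute force (exponential in the number of atoms).

    formulas: list of (weight, check) where check(world) -> bool
    atoms:    list of ground atom names
    """
    scored = []
    for bits in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, bits))
        score = math.exp(sum(w for w, check in formulas if check(world)))
        scored.append((world, score))
    z = sum(s for _, s in scored)  # partition function
    return [(w, s / z) for w, s in scored]

# One soft rule with weight 1.5: Smokes => Cancer.
RULES = [(1.5, lambda w: (not w["Smokes"]) or w["Cancer"])]
```

Worlds violating the rule are not ruled out, merely exponentially less likely; that is what makes inconsistent knowledge bases tolerable.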
Event Processing Under Uncertainty
Abstract

Cited by 6 (0 self)
Big data is recognized as one of the three technology trends at the leading edge that a CEO cannot afford to overlook in 2012. Big data is characterized by volume, velocity, variety, and veracity ("data in doubt"). Like other big data applications, many of the emerging event processing applications must process events that arrive from sources such as sensors and social media, which have inherent uncertainties associated with them. Consider, for example, the possibility of incomplete data streams and streams including inaccurate data. In this tutorial we classify the different types of uncertainty found in event processing applications and discuss the implications for event representation and reasoning. One area of research in which uncertainty has been studied extensively is Artificial Intelligence. We therefore discuss the main Artificial Intelligence-based event processing systems that support probabilistic reasoning. The presented approaches are illustrated using an example concerning crime detection.
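One concrete consequence of "data in doubt" is that a composite event's probability must be derived from its constituents'. A minimal sketch under an independence assumption that real event streams often violate:

```python
def and_prob(ps):
    """Probability that all constituent events occurred, assuming independence."""
    prob = 1.0
    for p in ps:
        prob *= p
    return prob

def or_prob(ps):
    """Probability that at least one constituent event occurred, assuming independence."""
    none = 1.0
    for p in ps:
        none *= (1.0 - p)
    return 1.0 - none
```

For instance, a hypothetical "suspected fraud" composite event requiring both a flagged transaction (probability 0.9) and a location mismatch (0.8) would fire with probability 0.72 under this assumption; correlated evidence would require a richer model.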
Probabilistic Similarity Logic
Abstract

Cited by 6 (0 self)
Many machine learning applications require the ability to learn from and reason about noisy multi-relational data. To address this, several effective representations have been developed that provide both a language for expressing the structural regularities of a domain, and principled support for probabilistic inference. In addition to these two aspects, however, many applications also involve a third aspect, the need to reason about similarities, which has not been directly supported in existing frameworks. This paper introduces probabilistic similarity logic (PSL), a general-purpose framework for joint reasoning about similarity in relational domains that incorporates probabilistic reasoning about similarities and relational structure in a principled way. PSL can integrate any existing domain-specific similarity measures and also supports reasoning about similarities between sets of entities. We provide efficient inference and learning techniques for PSL and demonstrate its effectiveness both in common relational tasks and in settings that require reasoning about similarity.
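PSL reasons over soft truth values in [0, 1] using the Łukasiewicz relaxations of the logical connectives, and a rule body → head is scored by its distance to satisfaction. A minimal sketch of those operators (the abstract itself does not spell them out):

```python
def luk_and(a, b):
    """Lukasiewicz t-norm: soft conjunction over truth values in [0, 1]."""
    return max(0.0, a + b - 1.0)

def luk_or(a, b):
    """Lukasiewicz t-co-norm: soft disjunction."""
    return min(1.0, a + b)

def luk_not(a):
    """Soft negation."""
    return 1.0 - a

def distance_to_satisfaction(body, head):
    """How far the rule body -> head is from holding; 0.0 means satisfied."""
    return max(0.0, body - head)
```

Inference then amounts to choosing truth values that minimise the (weighted) total distance to satisfaction across all ground rules.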
A Probabilistic Approach for Learning Folksonomies from Structured Data
Abstract

Cited by 6 (3 self)
Learning structured representations has emerged as an important problem in many domains, including document and Web data mining, bioinformatics, and image analysis. One approach to learning complex structures is to integrate many smaller, incomplete and noisy structure fragments. In this work, we present an unsupervised probabilistic approach that extends affinity propagation [7] to combine the small ontological fragments into a collection of integrated, consistent, and larger folksonomies. This is a challenging task because the method must aggregate similar structures while avoiding structural inconsistencies and handling noise. We validate the approach on a real-world social media dataset, comprised of shallow personal hierarchies specified by many individual users, collected from the photo-sharing website Flickr. Our empirical results show that our proposed approach is able to construct deeper and denser structures compared to an approach using only the standard affinity propagation algorithm. Additionally, the approach yields better overall integration quality than a state-of-the-art approach based on incremental relational clustering.
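Affinity propagation, which the paper extends, picks exemplars by exchanging "responsibility" and "availability" messages over a similarity matrix. A compact pure-Python version of the standard (unextended) algorithm, for orientation only:

```python
def affinity_propagation(S, damping=0.5, iters=200):
    """Standard affinity propagation over an n x n similarity matrix S
    (lists of lists). S[k][k] is the 'preference' for point k to become
    an exemplar. Returns each point's chosen exemplar index.
    """
    n = len(S)
    R = [[0.0] * n for _ in range(n)]  # responsibilities
    A = [[0.0] * n for _ in range(n)]  # availabilities
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} (a(i,k') + s(i,k'))
        for i in range(n):
            vals = [A[i][k] + S[i][k] for k in range(n)]
            for k in range(n):
                best = max(v for kk, v in enumerate(vals) if kk != k)
                R[i][k] = damping * R[i][k] + (1 - damping) * (S[i][k] - best)
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        for k in range(n):
            pos = [max(0.0, R[i][k]) for i in range(n)]
            total = sum(pos)
            for i in range(n):
                if i == k:
                    new = total - pos[k]  # a(k,k) has its own update
                else:
                    new = min(0.0, R[k][k] + total - pos[i] - pos[k])
                A[i][k] = damping * A[i][k] + (1 - damping) * new
    # each point follows the exemplar maximising a(i,k) + r(i,k)
    return [max(range(n), key=lambda k: A[i][k] + R[i][k]) for i in range(n)]
```

With negative squared distances as similarities and the median similarity as the shared preference, well-separated groups of points resolve into one exemplar per group; the paper's contribution is in what happens on top of this, where fragments carry tree structure rather than flat points.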
Unification Neural Networks: Unification by Error-Correction Learning
Abstract

Cited by 4 (4 self)
We show that the conventional first-order algorithm of unification can be simulated by finite artificial neural networks with one layer of neurons. In these unification neural networks, the unification algorithm is performed by error-correction learning. Each time-step of adaptation of the network corresponds to a single iteration of the unification algorithm. We present this result together with the library of learning functions and examples fully formalised in the MATLAB Neural Network Toolbox.
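For reference, the conventional symbolic algorithm the networks simulate is Robinson-style first-order unification with an occurs check. A compact version (the term encoding here, strings prefixed with `?` for variables and tuples for compound terms, is an assumption of this sketch):

```python
def is_var(t):
    """Variables are strings starting with '?'."""
    return isinstance(t, str) and t.startswith("?")

def walk(t, s):
    """Follow variable bindings in substitution s until a non-bound term."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    """Occurs check: does variable v appear inside term t under s?"""
    t = walk(t, s)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, s) for arg in t[1:])
    return False

def unify(x, y, s=None):
    """Return a most general unifier of x and y as a dict, or None.
    Compound terms are tuples (functor, arg1, ...); other strings are
    constants."""
    if s is None:
        s = {}
    x, y = walk(x, s), walk(y, s)
    if x == y:
        return s
    if is_var(x):
        return None if occurs(x, y, s) else {**s, x: y}
    if is_var(y):
        return None if occurs(y, x, s) else {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) \
            and len(x) == len(y) and x[0] == y[0]:
        for a, b in zip(x[1:], y[1:]):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None
```

For example, unifying f(?x, b) with f(a, ?y) yields the substitution {?x: a, ?y: b}, while ?x against f(?x) fails the occurs check; the paper's claim is that each iteration of this loop maps onto one adaptation step of the network.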