Results 1 – 7 of 7
Modelling Relational Data using Bayesian Clustered Tensor Factorization
Abstract

Cited by 17 (2 self)
We consider the problem of learning probabilistic models for complex relational structures between various types of objects. A model can help us “understand” a dataset of relational facts in at least two ways: by finding interpretable structure in the data, and by supporting predictions, or inferences, about whether particular unobserved relations are likely to be true. Often there is a tradeoff between these two aims: cluster-based models yield more easily interpretable representations, while factorization-based approaches have given better predictive performance on large data sets. We introduce the Bayesian Clustered Tensor Factorization (BCTF) model, which embeds a factorized representation of relations in a nonparametric Bayesian clustering framework. Inference is fully Bayesian but scales well to large data sets. The model simultaneously discovers interpretable clusters and yields predictive performance that matches or beats previous probabilistic models for relational data.
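As a rough illustration of the kind of model the abstract describes, the sketch below combines a bilinear factorization score for each relation with entity vectors shrunk toward cluster means. Everything here (the sizes, the logistic link, and fixed cluster assignments standing in for the paper's nonparametric Bayesian prior and full posterior inference) is a simplifying assumption, not the BCTF model itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 6 entities, 2 relation types, 3-dim factors, 2 clusters
n_ent, n_rel, d, n_clust = 6, 2, 3, 2

# Cluster assignments and cluster means (in the full model these come from
# a nonparametric Bayesian prior and are inferred; fixed here for brevity)
z = rng.integers(0, n_clust, size=n_ent)   # entity -> cluster
mu = rng.normal(size=(n_clust, d))         # cluster mean vectors

# Entity factors drawn around their cluster mean (shrinkage toward cluster)
A = mu[z] + 0.1 * rng.normal(size=(n_ent, d))

# One factor matrix per relation type
R = rng.normal(size=(n_rel, d, d))

def p_true(i, k, j):
    """Probability that relation k holds between entities i and j."""
    score = A[i] @ R[k] @ A[j]
    return 1.0 / (1.0 + np.exp(-score))    # logistic link

probs = np.array([[p_true(i, 0, j) for j in range(n_ent)]
                  for i in range(n_ent)])
```

In the actual model, cluster assignments and all parameters are sampled jointly; this sketch only shows how clustering and factorization can share one set of entity factors.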
Learning a Theory of Causality
Abstract

Cited by 6 (5 self)
The very early appearance of abstract knowledge is often taken as evidence for innateness. We explore the relative learning speeds of abstract and specific knowledge within a Bayesian framework, and the role for innate structure. We focus on knowledge about causality, seen as a domain-general intuitive theory, and ask whether this knowledge can be learned from co-occurrence of events. We begin by phrasing the causal Bayes nets theory of causality, and a range of alternatives, in a logical language for relational theories. This allows us to explore simultaneous inductive learning of an abstract theory of causality and a causal model for each of several causal systems. We find that the correct theory of causality can be learned relatively quickly, often becoming available before specific causal theories have been learned, an effect we term the blessing of abstraction. We then explore the effect of providing a variety of auxiliary evidence, and find that a collection of simple “perceptual input analyzers” can help to bootstrap abstract knowledge. Together these results suggest that the most efficient route to causal knowledge may be to build in not an abstract notion of causality, but a powerful inductive learning mechanism and a variety of perceptual supports. While these results are purely computational, they have implications for cognitive development, which we explore in the conclusion. Preprint, June 2010; to appear in Psychological Review.
Abstraction and relational learning
Abstract

Cited by 4 (3 self)
Most models of categorization learn categories defined by characteristic features, but some categories are described more naturally in terms of relations. We present a generative model that helps to explain how relational categories are learned and used. Our model learns abstract schemata that specify the relational similarities shared by instances of a category, and our emphasis on abstraction departs from previous theoretical proposals that focus instead on comparison of concrete instances. Our first experiment suggests that abstraction can help to explain some of the findings that have previously been used to support comparison-based approaches. Our second experiment focuses on one-shot schema learning, a problem that raises challenges for comparison-based approaches but is handled naturally by our abstraction-based account. Categories such as family, sonnet, above, betray, and imitate differ in many respects, but all of them depend critically on relational information. Members of a family are typically related by blood or marriage, and the lines that make up a sonnet must rhyme with each other according to a certain
Theory Acquisition as Stochastic Search
In Proceedings of …, 2010
Abstract

Cited by 4 (2 self)
We present an algorithmic model for the development of children’s intuitive theories within a hierarchical Bayesian framework, where theories are described as sets of logical laws generated by a probabilistic context-free grammar. Our algorithm performs stochastic search at two levels of abstraction, an outer loop in the space of theories and an inner loop in the space of explanations or models generated by each theory given a particular dataset, in order to discover the theory that best explains the observed data. We show that this model is capable of learning correct theories in several everyday domains, and we discuss the dynamics of learning in the context of children’s cognitive development.
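The two-level search described in this abstract can be caricatured in a few lines. The toy "laws", the scoring function, and the acceptance rule below are all invented stand-ins: the paper generates laws from a probabilistic context-free grammar and scores theories by their Bayesian posterior, neither of which is reproduced here.

```python
import random

random.seed(0)

# A "theory" is a set of candidate laws; the data are facts that the
# true laws would generate. All names here are illustrative.
ALL_LAWS = ["L1", "L2", "L3", "L4"]
DATA = {"f_L1", "f_L2"}                 # facts explained by L1 and L2

def explain(theory):
    # Inner-loop stand-in: the facts a theory can derive for this dataset.
    return {f"f_{law}" for law in theory}

def score(theory):
    derived = explain(theory)
    fit = len(DATA & derived) - len(derived - DATA)  # reward coverage,
    return fit - 0.5 * len(theory)                   # penalize theory size

def propose(theory):
    # Outer loop: randomly add or remove one law (grammar-guided in the paper).
    law = random.choice(ALL_LAWS)
    return theory ^ {law}               # symmetric difference

theory = set()
for _ in range(200):                    # Metropolis-style accept/reject
    cand = propose(theory)
    if score(cand) >= score(theory) or random.random() < 0.1:
        theory = cand
```

The outer loop moves through theory space while `score` summarizes the inner loop; in the actual model the inner loop is itself a stochastic search over explanations.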
Modeling Semantic Cognition as Logical Dimensionality Reduction
In Proceedings of the Thirtieth Annual Meeting of the Cognitive Science Society, 2008
Abstract

Cited by 3 (3 self)
Semantic knowledge is often expressed in the form of intuitive theories, which organize, predict, and explain our observations of the world. How are these powerful knowledge structures represented and acquired? We present a framework, logical dimensionality reduction, that treats theories as compressive probabilistic models, attempting to express observed data as a sample from the logical consequences of the theory’s underlying laws and a small number of core facts. By performing Bayesian learning and inference on these models, we combine important features of more familiar connectionist and symbolic approaches to semantic cognition: an ability to handle graded, uncertain inferences, together with the systematicity and compositionality that support appropriate inferences from sparse observations in novel contexts.
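A minimal way to make the "logical consequences of laws plus core facts" idea concrete is forward chaining: start from a few core facts and close them under the laws. The toy transitivity law and tuple encoding of facts below are illustrative assumptions, not the framework's actual representation or its probabilistic machinery:

```python
def closure(core_facts, laws, max_iter=10):
    """Forward-chain: derive all consequences of core facts under laws.
    Here a law maps a pair of facts to a new fact, or None."""
    facts = set(core_facts)
    for _ in range(max_iter):
        new = {law(f, g) for f in facts for g in facts
               for law in laws if law(f, g) is not None} - facts
        if not new:
            break
        facts |= new
    return facts

# Toy law: transitivity of a binary relation encoded as ("rel", x, y)
def trans(f, g):
    if f[0] == g[0] == "rel" and f[2] == g[1] and f[1] != g[2]:
        return ("rel", f[1], g[2])
    return None

# Three core facts compress the six facts in their logical closure
core = {("rel", "A", "B"), ("rel", "B", "C"), ("rel", "C", "D")}
derived = closure(core, [trans])
```

The compression in the framework runs in the other direction: given observed facts, infer the small set of laws and core facts whose closure best explains them.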
Exploring the Conceptual Universe
Abstract

Cited by 2 (1 self)
Humans can learn to organize many kinds of domains into categories, including real-world domains such as kinsfolk and synthetic domains such as sets of geometric figures that vary along several dimensions. Psychologists have studied many individual domains in detail, but there have been few attempts to characterize or explore the full space of possibilities. This article provides a formal characterization that takes objects, features, and relations as primitives and specifies conceptual domains by combining these primitives in different ways. Explaining how humans are able to learn concepts within all of these domains is a challenge for computational models, but I argue that this challenge can be met by models that rely on a compositional representation language such as predicate logic. The article presents such a model and demonstrates that it accounts well for human concept learning across 11 different domains.
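The primitives-plus-combination idea can be illustrated with a toy domain: a handful of objects, feature extensions, and relation extensions, with a concept defined as a predicate-logic rule over them. All domain content below is invented for illustration and is not drawn from the article:

```python
# Toy conceptual domain built from the three primitives the article names:
# objects, features (unary predicates), and relations (binary predicates).
objects = ["a", "b", "c", "d"]
features = {"large": {"a", "c"}, "dark": {"a", "b"}}
relations = {"above": {("a", "b"), ("c", "d")}}

def concept(x):
    """x belongs iff x is large AND x is above something dark."""
    return (x in features["large"] and
            any((x, y) in relations["above"] and y in features["dark"]
                for y in objects))

members = [x for x in objects if concept(x)]
```

Varying which primitives a rule may combine (features only, relations only, or both) carves out different conceptual domains in the sense the abstract describes.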
Deep Transfer: A Markov Logic Approach
Abstract
Currently the largest gap between human and machine learning is learning algorithms’ inability to perform deep transfer, that is, to generalize from one domain to another domain containing different objects, classes, properties, and relations. We argue that second-order Markov logic is ideally suited for this purpose and propose an approach based on it. Our algorithm discovers structural regularities in the source domain in the form of Markov logic formulas with predicate variables and instantiates these formulas with predicates from the target domain. Our approach has successfully transferred learned knowledge among molecular biology, web, and social network domains.

People are able to take knowledge learned in one domain and apply it to an entirely different one. For example, Wall Street firms often hire physicists to solve finance problems. Even though these two domains have superficially nothing in common, training as a physicist provides knowledge and skills that are highly applicable in finance (for example, solving differential equations and performing Monte Carlo simulations). Yet standard machine-learning approaches are unable to do this. For example, a model learned on physics data would not be applicable to finance data, because the variables in the two domains are different. Despite the recent interest in transfer learning, most approaches do not address this problem, focusing instead on modeling either a change of distributions over the same variables or minor variations of the same domain (for example, different numbers of objects). We call this shallow transfer. Our goal is to perform deep transfer, which involves generalizing across different domains, that is, between domains with different objects, classes, properties, and relations. Performing deep transfer requires discovering structural regularities that apply to many different domains, irrespective of their superficial descriptions. For example, two domains may be modeled by the same type of equation, and solution techniques learned in one can be applied in the other. The inability to do this is arguably the biggest gap between current learning systems and humans.
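A bare-bones sketch of the instantiation step the abstract describes: a second-order template with predicate variables is ground out with concrete predicates from each domain. The template syntax and predicate names below are illustrative assumptions, not Markov logic syntax or the authors' actual representation, and the learned weights that accompany real formulas are omitted:

```python
from itertools import permutations

# Illustrative second-order template: predicate variables r, s over a
# shared structural pattern (a transitivity-like clique).
TEMPLATE = "{r}(x, y) ^ {s}(y, z) => {r}(x, z)"

SOURCE_PREDS = ["Interacts", "Regulates"]   # e.g. a molecular biology domain
TARGET_PREDS = ["Links", "Cites"]           # e.g. a web domain

def instantiate(template, preds):
    """Ground predicate variables r, s with concrete predicates."""
    return [template.format(r=r, s=s)
            for r, s in permutations(preds, 2)]

# The regularity found in the source domain transfers by re-instantiating
# the same template with target-domain predicates.
source_formulas = instantiate(TEMPLATE, SOURCE_PREDS)
target_formulas = instantiate(TEMPLATE, TARGET_PREDS)
```

The point of the second-order formulation is exactly this decoupling: the structural regularity lives in the template, while domain-specific predicates are swappable arguments.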