CiteSeerX

Results 1 - 10 of 483,710

Learning Stochastic Logic Programs

by Stephen Muggleton , 2000
"... Stochastic Logic Programs (SLPs) have been shown to be a generalisation of Hidden Markov Models (HMMs), stochastic context-free grammars, and directed Bayes' nets. A stochastic logic program consists of a set of labelled clauses p:C where p is in the interval [0,1] and C is a first-order r ..."
Abstract - Cited by 1181 (79 self)
-order range-restricted definite clause. This paper summarises the syntax, distributional semantics and proof techniques for SLPs and then discusses how a standard Inductive Logic Programming (ILP) system, Progol, has been modified to support learning of SLPs. The resulting system 1) finds an SLP with uniform
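
The snippet above gives the core representation: an SLP is a set of labelled clauses p:C with p in [0,1] and C a range-restricted definite clause. The Python below is a minimal sketch of that representation only (not Muggleton's Progol-based learner): it stores labelled clauses for one predicate, invented here for illustration, and samples a clause in proportion to its label, the basic step behind an SLP's distribution over derivations.

    import random

    # Hypothetical labelled clauses p:C for one predicate; each clause string is
    # made up, and each label p lies in [0, 1] as in the definition above.
    labelled_clauses = [
        (0.4, "s(X) :- a(X)"),
        (0.6, "s(X) :- b(X), s(Y)"),
    ]

    def sample_clause(clauses):
        """Pick a clause with probability proportional to its label."""
        r = random.random() * sum(p for p, _ in clauses)
        acc = 0.0
        for p, clause in clauses:
            acc += p
            if r <= acc:
                return clause
        return clauses[-1][1]

    print(sample_clause(labelled_clauses))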

Abduction in Logic Programming

by Marc Denecker, Antonis Kakas
"... Abduction in Logic Programming started in the late 80s, early 90s, in an attempt to extend logic programming into a framework suitable for a variety of problems in Artificial Intelligence and other areas of Computer Science. This paper aims to chart out the main developments of the field over th ..."
Abstract - Cited by 616 (76 self)
Abduction in Logic Programming started in the late 80s, early 90s, in an attempt to extend logic programming into a framework suitable for a variety of problems in Artificial Intelligence and other areas of Computer Science. This paper aims to chart out the main developments of the field over

Inductive Logic Programming: Theory and Methods

by Stephen Muggleton, Luc De Raedt - JOURNAL OF LOGIC PROGRAMMING , 1994
"... ..."
Abstract - Cited by 530 (46 self)
Abstract not found

Markov Logic Networks

by Matthew Richardson, Pedro Domingos - MACHINE LEARNING , 2006
"... We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the ..."
Abstract - Cited by 811 (39 self)
learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach.
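
The entry above describes an MLN as a set of first-order formulas, each with a weight, grounded over a set of constants to define a Markov network. The sketch below illustrates only that scoring idea under invented rules and constants (it is not the authors' system): it grounds two hypothetical weighted formulas over two constants and returns the weighted count of satisfied groundings, the unnormalised log-probability an MLN assigns to a possible world.

    from itertools import product

    # Hypothetical weighted formulas over made-up predicates:
    #   1.5 : Smokes(x) => Cancer(x)
    #   1.1 : Friends(x, y) & Smokes(x) => Smokes(y)
    constants = ["Anna", "Bob"]

    def f1(world, x):
        return (not world[("Smokes", x)]) or world[("Cancer", x)]

    def f2(world, x, y):
        return (not (world[("Friends", x, y)] and world[("Smokes", x)])) or world[("Smokes", y)]

    formulas = [
        (1.5, 1, f1),   # (weight, number of variables to ground, grounding test)
        (1.1, 2, f2),
    ]

    def log_score(world):
        """Unnormalised log-probability: sum of weight * (satisfied groundings)."""
        total = 0.0
        for weight, arity, test in formulas:
            for args in product(constants, repeat=arity):
                if test(world, *args):
                    total += weight
        return total

    # One possible world, i.e. a truth assignment to all ground atoms.
    world = {("Smokes", "Anna"): True, ("Smokes", "Bob"): False,
             ("Cancer", "Anna"): True, ("Cancer", "Bob"): False,
             ("Friends", "Anna", "Anna"): False, ("Friends", "Anna", "Bob"): True,
             ("Friends", "Bob", "Anna"): True, ("Friends", "Bob", "Bob"): False}
    print(log_score(world))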

The Temporal Logic of Actions

by Leslie Lamport , 1993
"... ..."
Abstract - Cited by 943 (27 self)
Abstract not found

Learning the Kernel Matrix with Semi-Definite Programming

by Gert R. G. Lanckriet, Nello Cristianini, Laurent El Ghaoui, Peter Bartlett, Michael I. Jordan , 2002
"... Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information ..."
Abstract - Cited by 780 (22 self)
problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semi-definite programming (SDP) techniques. When applied
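
The abstract above treats the kernel matrix itself as the object to be optimised. As a hedged sketch of one simple instance of that idea, rather than the paper's full semi-definite programming formulation, the code below parametrises the kernel as a convex combination of two fixed base kernels (which keeps it positive semidefinite) and grid-searches the mixing weight that best aligns the kernel with the labels; the data and kernel choices are made up for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 3))                      # toy data, made up for illustration
    y = np.sign(X[:, 0] + 0.1 * rng.normal(size=20))  # toy labels

    # Two fixed base kernels; K(mu) = mu*K1 + (1-mu)*K2 stays PSD for mu in [0, 1].
    K1 = X @ X.T                                          # linear kernel
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K2 = np.exp(-sq / 2.0)                                # Gaussian (RBF) kernel

    def alignment(K, y):
        """Kernel-target alignment: <K, y y^T> / (||K||_F ||y y^T||_F)."""
        yyT = np.outer(y, y)
        return (K * yyT).sum() / (np.linalg.norm(K) * np.linalg.norm(yyT))

    # Coarse search over the mixing weight instead of solving an SDP exactly.
    best_mu = max(np.linspace(0, 1, 101),
                  key=lambda mu: alignment(mu * K1 + (1 - mu) * K2, y))
    print("best mixing weight:", best_mu)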

Literate programming

by Donald E. Knuth - THE COMPUTER JOURNAL , 1984
"... The author and his associates have been experimenting for the past several years with a programming language and documentation system called WEB. This paper presents WEB by example, and discusses why the new system appears to be an improvement over previous ones. ..."
Abstract - Cited by 549 (3 self)
The author and his associates have been experimenting for the past several years with a programming language and documentation system called WEB. This paper presents WEB by example, and discusses why the new system appears to be an improvement over previous ones.

Nonmonotonic Reasoning, Preferential Models and Cumulative Logics

by Sarit Kraus, Daniel Lehmann, Menachem Magidor , 1990
"... Many systems that exhibit nonmonotonic behavior have been described and studied already in the literature. The general notion of nonmonotonic reasoning, though, has almost always been described only negatively, by the property it does not enjoy, i.e. monotonicity. We study here general patterns of ..."
Abstract - Cited by 624 (14 self)
Many systems that exhibit nonmonotonic behavior have been described and studied already in the literature. The general notion of nonmonotonic reasoning, though, has almost always been described only negatively, by the property it does not enjoy, i.e. monotonicity. We study here general patterns of nonmonotonic reasoning and try to isolate properties that could help us map the field of nonmonotonic reasoning by reference to positive properties. We concentrate on a number of families of nonmonotonic consequence relations, defined in the style of Gentzen [13]. Both proof-theoretic and semantic points of view are developed in parallel. The former point of view was pioneered by D. Gabbay in [10], while the latter has been advocated by Y. Shoham in [38]. Five such families are defined and characterized by representation theorems, relating the two points of view. One of the families of interest, that of preferential relations, turns out to have been studied by E. Adams in [2]. The pr...
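
The preferential semantics mentioned in the abstract can be made concrete with a toy model: states ordered by normality, where alpha |~ beta holds iff beta is true in all most-preferred states satisfying alpha. The sketch below uses an invented bird/penguin example to show the resulting nonmonotonic behaviour; it illustrates the semantics only, not the representation theorems.

    # States are possible worlds (sets of true atoms); "more_normal" is a strict
    # preference between state indices. All atoms are made up for illustration.
    states = [
        frozenset({"bird", "flies"}),        # a normal bird
        frozenset({"bird", "penguin"}),      # a penguin: a bird that does not fly
    ]
    more_normal = {(0, 1)}                   # state 0 is strictly more normal than 1

    def minimal(satisfying):
        """Indices of satisfying states not beaten by another satisfying state."""
        return [i for i in satisfying
                if not any((j, i) in more_normal for j in satisfying)]

    def pref_entails(antecedent, consequent):
        sat = [i for i, s in enumerate(states) if antecedent(s)]
        return all(consequent(states[i]) for i in minimal(sat))

    # "Birds normally fly" holds, but adding penguin defeats it (nonmonotonicity).
    print(pref_entails(lambda s: "bird" in s, lambda s: "flies" in s))                     # True
    print(pref_entails(lambda s: "bird" in s and "penguin" in s, lambda s: "flies" in s))  # False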

Parallel Networks that Learn to Pronounce English Text

by Terrence J. Sejnowski, Charles R. Rosenberg - COMPLEX SYSTEMS , 1987
"... This paper describes NETtalk, a class of massively-parallel network systems that learn to convert English text to speech. The memory representations for pronunciations are learned by practice and are shared among many processing units. The performance of NETtalk has some similarities with observed h ..."
Abstract - Cited by 548 (5 self)
This paper describes NETtalk, a class of massively-parallel network systems that learn to convert English text to speech. The memory representations for pronunciations are learned by practice and are shared among many processing units. The performance of NETtalk has some similarities with observed
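
To make the text-to-pronunciation setup concrete, the sketch below classifies the middle character of a sliding window of letters into a phoneme class with a small feedforward network. The window size, alphabet, phoneme labels and (random, untrained) weights are all placeholders; the real NETtalk learns its connection weights from aligned text and speech, and its architecture is not reproduced here.

    import numpy as np

    # Placeholder alphabet and phoneme labels; the weights are random, i.e. untrained.
    alphabet = "abcdefghijklmnopqrstuvwxyz _"
    phonemes = ["/k/", "/ae/", "/t/", "/-/"]
    window, hidden = 7, 16

    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.1, size=(window * len(alphabet), hidden))
    W2 = rng.normal(scale=0.1, size=(hidden, len(phonemes)))

    def encode(chars):
        """One-hot encode a window of characters into a single input vector."""
        x = np.zeros((window, len(alphabet)))
        for i, c in enumerate(chars):
            x[i, alphabet.index(c)] = 1.0
        return x.ravel()

    def predict(chars):
        """Forward pass: hidden tanh layer, then a phoneme-class readout."""
        h = np.tanh(encode(chars) @ W1)
        return phonemes[int(np.argmax(h @ W2))]

    print(predict("  cat  "))   # 7-character window centred on 'a'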

Locally weighted learning

by Christopher G. Atkeson, Andrew W. Moore , Stefan Schaal - ARTIFICIAL INTELLIGENCE REVIEW , 1997
"... This paper surveys locally weighted learning, a form of lazy learning and memorybased learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, ass ..."
Abstract - Cited by 594 (53 self)
This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias
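
Locally weighted linear regression, the survey's focus, admits a compact sketch: weight each training example by a kernel on its distance to the query point, fit a weighted least-squares line, and evaluate it at the query. The bandwidth, toy data and Gaussian weighting function below are illustrative choices, not the survey's recommendations.

    import numpy as np

    def lwlr_predict(x_query, X, y, bandwidth=0.3):
        """Locally weighted linear regression at one query point: weight each
        training example by a Gaussian kernel on its distance to the query,
        then fit and evaluate a weighted least-squares line."""
        d = np.abs(X - x_query)
        w = np.exp(-(d ** 2) / (2 * bandwidth ** 2))      # weighting function
        A = np.column_stack([np.ones_like(X), X])         # local linear model
        WA = A * w[:, None]
        beta, *_ = np.linalg.lstsq(WA.T @ A, WA.T @ y, rcond=None)
        return beta[0] + beta[1] * x_query

    # Toy 1-D data, made up for illustration.
    rng = np.random.default_rng(0)
    X = np.linspace(0, 3, 60)
    y = np.sin(X) + 0.1 * rng.normal(size=X.size)
    print(lwlr_predict(1.5, X, y))    # close to sin(1.5) ≈ 0.997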