Results 1–10 of 230
Probabilistic Logic Programming
, 1992
Abstract

Cited by 154 (9 self)
Of all scientific investigations into reasoning with uncertainty and chance, probability theory is perhaps the best understood paradigm. Nevertheless, all studies conducted thus far into the semantics of quantitative logic programming (cf. van Emden [51], Fitting [18, 19, 20], Blair and Subrahmanian [5, 6, 49, 50], Kifer et al. [29, 30, 31]) have restricted themselves to nonprobabilistic semantical characterizations. In this paper, we take a few steps towards rectifying this situation. We define a logic programming language that is syntactically similar to the annotated logics of [5, 6], but in which the truth values are interpreted probabilistically. A probabilistic model theory and a fixpoint theory are developed for such programs. This probabilistic model theory satisfies the requirements proposed by Fenstad [16] for a function to be called probabilistic. The logical treatment of probabilities is complicated by two facts: first, that the connectives cannot be interpreted truth function...
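The flavor of a fixpoint operator over probabilistically annotated atoms can be sketched in a few lines. Everything below is an illustrative assumption, not the paper's formulation: atoms carry probability intervals, conjunction is combined with Fréchet bounds (valid with no independence assumption), and a consequence operator is iterated to a fixpoint.

```python
# Hedged sketch: probability-interval annotations on atoms, in the spirit of
# annotated logic programs with probabilistic truth values. The atom names,
# the rule, and the combination law (Frechet bounds for conjunction under
# unknown dependence) are all illustrative assumptions.

def conj(i1, i2):
    """Frechet bounds for P(A and B) given P(A) in i1 and P(B) in i2."""
    (l1, u1), (l2, u2) = i1, i2
    return (max(0.0, l1 + l2 - 1.0), min(u1, u2))

# Facts: atom -> probability interval; rules: head <- conjunction of body atoms.
facts = {"bird(tweety)": (0.9, 1.0), "small(tweety)": (0.7, 0.9)}
rules = [("flies(tweety)", ["bird(tweety)", "small(tweety)"])]

def tp(interp):
    """One application of a fixpoint-style consequence operator."""
    out = dict(interp)
    for head, body in rules:
        if all(b in interp for b in body):       # body atoms must have intervals
            iv = interp[body[0]]
            for b in body[1:]:
                iv = conj(iv, interp[b])
            if head in out:                      # tighten against prior bounds
                lo, uo = out[head]
                iv = (max(iv[0], lo), min(iv[1], uo))
            out[head] = iv
    return out

interp = dict(facts)
while True:                                      # iterate to the least fixpoint
    nxt = tp(interp)
    if nxt == interp:
        break
    interp = nxt
print(interp["flies(tweety)"])                   # derived interval, roughly (0.6, 0.9)
```

The iteration terminates here after one productive step, since reapplying the operator reproduces the same intervals.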
Feature Centrality and Conceptual Coherence
 Cognitive Science
, 1998
Abstract

Cited by 91 (7 self)
This paper has two objectives. First, we will argue that the mutability of conceptual features can be represented as a single, multiple-valued dimension. We will show that the features of a concept can be reliably ordered with respect to the degree to which people are willing to transform the feature while retaining the integrity of a representation; i.e., that a number of conceptual tasks, all of which require people to transform conceptual features, produce similar orderings. Following Medin and Shoben (1988), these tasks have in common that they ask people to consider an object that is missing a feature but is otherwise intact (e.g., a real chair without a seat)
Reconciling simplicity and likelihood principles in perceptual organization
 Psychological Review
, 1996
Abstract

Cited by 86 (17 self)
Two principles of perceptual organization have been proposed. The likelihood principle, following H. L. F. von Helmholtz (1910/1962), proposes that perceptual organization is chosen to correspond to the most likely distal layout. The simplicity principle, following Gestalt psychology, suggests that perceptual organization is chosen to be as simple as possible. The debate between these two views has been a central topic in the study of perceptual organization. Drawing on mathematical results in A. N. Kolmogorov's (1965) complexity theory, the author argues that simplicity and likelihood are not in competition, but are identical. Various implications for the theory of perceptual organization and psychology more generally are outlined. How does the perceptual system derive a complex and structured description of the perceptual world from patterns of activity at the sensory receptors? Two apparently competing theories of perceptual organization have been influential. The first, initiated by Helmholtz (1910/1962), advocates the likelihood principle: Sensory input will be organized into the most probable distal object or event consistent with that input. The second, initiated by Wertheimer and developed by other Gestalt psychologists, advocates what Pomerantz and Kubovy (1986) called the simplicity principle: The perceptual system is viewed as finding the simplest, rather than the most likely, perceptual organization consistent with the sensory input. There has been considerable theoretical and empirical controversy concerning whether likelihood or simplicity is the governing principle of perceptual organization (e.g., Hatfield &
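The claimed identity can be sketched with a standard coding argument (a reconstruction using the algorithmic-probability form of the idea, not necessarily the paper's own notation). Under a universal prior, a hypothesis H about the distal layout is weighted exponentially by its Kolmogorov complexity K(H), so for sensory data D, maximizing likelihood and minimizing description length pick out the same organization:

```latex
P(H) \propto 2^{-K(H)}
\quad\Longrightarrow\quad
\arg\max_{H} P(H \mid D)
  = \arg\max_{H} P(D \mid H)\,P(H)
  \approx \arg\min_{H} \bigl[\, K(D \mid H) + K(H) \,\bigr],
```

where the last step takes $-\log_2$ of the product and uses $-\log_2 P(D \mid H) \approx K(D \mid H)$, i.e. the code length of the data given the hypothesis.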
Decidability of Model Checking for Infinite-State Concurrent Systems
 Acta Informatica
Abstract

Cited by 66 (1 self)
We study the decidability of the model checking problem for linear and branching time logics, and two models of concurrent computation, namely Petri nets and Basic Parallel Processes.

1 Introduction

Most techniques for the verification of concurrent systems proceed by an exhaustive traversal of the state space. Therefore, they are inherently incapable of considering systems with infinitely many states. Recently, some new methods have been developed in order to at least palliate this problem. Using them, several verification problems for some restricted infinite-state models have been shown to be decidable. These results can be classified into those showing the decidability of equivalence relations [8, 9, 24, 26], and those showing the decidability of model checking for different modal and temporal logics. In this paper, we contribute to this second group. The model checking problem has been studied so far for three infinite-state models: context-free processes, pushdown processes, and...
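The "exhaustive traversal of the state space" that the abstract says underlies most verification techniques can be illustrated on a finite example: a breadth-first reachability check, which is the CTL property EF p. The transition system and labels below are invented for the sketch; the point is that this method enumerates states one by one, which is exactly what fails verbatim on infinite-state models such as Petri nets.

```python
# Explicit-state reachability (EF p) on a small finite transition system.
# The states, edges, and the "error" proposition are illustrative only.
from collections import deque

transitions = {          # state -> list of successor states
    "s0": ["s1", "s2"],
    "s1": ["s1"],
    "s2": ["s3"],
    "s3": [],
}
labels = {"s3": {"error"}}   # atomic propositions holding in each state

def ef(init, prop):
    """True iff some state satisfying `prop` is reachable from `init`."""
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        if prop in labels.get(s, set()):
            return True
        for t in transitions[s]:
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False

print(ef("s0", "error"))   # True: s0 -> s2 -> s3
print(ef("s1", "error"))   # False: s1 only loops on itself
```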
Is the Brain a Digital Computer?
, 2004
Abstract

Cited by 46 (0 self)
This paper is about Cognitivism, and I had better say at the beginning what motivates it. If you read books about the brain (say Shepherd (1983) or Kuffler and Nicholls (1976)) you get a certain picture of what is going on in the brain. If you then turn to books about computation (say Boolos and Jeffrey, 1989) you get a picture of the logical structure of the theory of computation. If you then turn to books about cognitive science, (say Pylyshyn, 1985) they tell you that what the brain books describe is really the same as what the computability books were describing. Philosophically speaking, this does not smell right to me and I have learned, at least at the beginning of an investigation, to follow my sense of smell. II. The Primal Story I want to begin the discussion by trying to state as strongly as I can why cognitivism has seemed intuitively appealing. There is a story about the relation of human intelligence to computation that goes back at least to Turing's classic paper (1950), and I believe it is the foundation of the Cognitivist view. I will call it the Primal Story: We begin with two results in mathematical logic, the Church-Turing thesis (or equivalently, Church's thesis) and Turing's theorem. For our purposes, the Church-Turing thesis states that for any algorithm there is some Turing machine that can implement that algorithm. Turing's theorem says that there is a Universal Turing Machine which can simulate any Turing Machine. Now if we put these two together we have the result that a Universal Turing Machine can implement any algorithm whatever. But now, what made this result so exciting? What made it send shivers up and down the spines of a whole generation of young workers in artificial intelligence is the following thought: Suppose the brain is a Un...
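The universality claim in the Primal Story can be made concrete in miniature: a single interpreter simulates any program handed to it as data, the way a Universal Turing Machine simulates an arbitrary Turing Machine. The instruction set below (INC/DEC/JNZ/HALT on named registers, a small counter machine) is an illustrative stand-in and has nothing to do with Searle's paper itself.

```python
# One fixed interpreter, arbitrarily many programs-as-data: a toy analogue of
# the Universal Turing Machine. The counter-machine instruction set is an
# illustrative assumption for this sketch.

def run(program, regs):
    """Interpret a counter-machine program: a list of (op, reg, target) tuples."""
    pc = 0
    while pc < len(program):
        op, r, tgt = program[pc]
        if op == "INC":
            regs[r] += 1
            pc += 1
        elif op == "DEC":
            regs[r] = max(0, regs[r] - 1)
            pc += 1
        elif op == "JNZ":            # jump to tgt if register r is nonzero
            pc = tgt if regs[r] else pc + 1
        elif op == "HALT":
            break
    return regs

# A program computing a += b (consuming b), run by the same interpreter that
# could run any other program in this instruction set.
add = [
    ("JNZ", "b", 2),
    ("HALT", "", 0),
    ("DEC", "b", 0),
    ("INC", "a", 0),
    ("JNZ", "a", 0),   # "a" is nonzero after INC, so loop back to the top
]
print(run(add, {"a": 3, "b": 4}))   # {'a': 7, 'b': 0}
```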
Non-Turing Computers and Non-Turing Computability
, 1994
Abstract

Cited by 43 (2 self)
possible to perform computational supertasks — that is, an infinite number of computational steps in a finite span of time — in a kind of relativistic spacetime that Earman and Norton (1993) have dubbed a Malament-Hogarth spacetime.
A universal logic approach to adaptive logics. Logica universalis
, 2007
Abstract

Cited by 42 (11 self)
In this paper, adaptive logics are studied from the viewpoint of universal logic (in the sense of the study of common structures of logics). The common structure of a large set of adaptive logics is described. It is shown that this structure determines the proof theory as well as the semantics of the adaptive logics, and moreover that most properties of the logics can be proved by relying solely on the structure, viz. without invoking any specific properties of the logics themselves.

1 Aim and Preliminaries

In this paper the common features of a wide variety of logics are studied. The logics, viz. adaptive logics, are very different both in nature and in application context. Of the adaptive logics studied until now, some are close to CL (Classical Logic), others are many-valued, still others modal, and there clearly are adaptive logics of a still very different nature. The application contexts too are very varied: handling inconsistency, inductive generalization, abduction, handling plausible inferences, interpreting a person's changing position in an ongoing discussion, compatibility, etc. I shall show that all these logics have a common structure, which determines their proofs as well as their semantics, and moreover their metatheory. Specific adaptive logics will not even be mentioned, except as illustrative examples. Adaptive logics adapt themselves to specific premise sets. To be more precise, they interpret a premise set "as normally as possible" with respect to some standard of normality. They explicate reasoning processes that display an internal and possibly an external dynamics. The external dynamics derives from the nonmonotonicity of the inference relation: if premises are added, some consequences may not be derivable any more—formally: there are Γ, ∆ and A such that Γ ⊢ A and Γ ∪ ∆ ⊬ A.
The internal dynamics plays at the level of ...

∗ Research for this paper was supported by subventions from Ghent University and from the Fund for Scientific Research – Flanders. I am indebted to Peter Verdée for comments on a previous version.
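The external dynamics described above (Γ ⊢ A while Γ ∪ ∆ ⊬ A) can be shown with a deliberately crude toy. The default rule below, deriving flies(x) from bird(x) unless abnormal(x) is among the premises, is an invented stand-in for "interpreting a premise set as normally as possible", not a real adaptive logic.

```python
# Toy non-monotonic consequence relation: adding premises can retract
# conclusions. The bird/abnormal/flies predicates are illustrative only.

def consequences(premises):
    """Premises plus default conclusions: flies(x) from bird(x),
    blocked whenever abnormal(x) is itself a premise."""
    derived = set(premises)
    for p in premises:
        if p.startswith("bird("):
            x = p[len("bird("):-1]
            if f"abnormal({x})" not in premises:
                derived.add(f"flies({x})")
    return derived

gamma = {"bird(tweety)"}
print("flies(tweety)" in consequences(gamma))                         # True
print("flies(tweety)" in consequences(gamma | {"abnormal(tweety)"}))  # False
```

The second call is exactly the formal condition from the abstract: enlarging the premise set removes a previously derivable consequence.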
Algernon - A Tractable System for Knowledge Representation
 SIGART BULLETIN
, 1991
Abstract

Cited by 37 (10 self)
Access-Limited Logic (ALL) is a theory of knowledge representation which formalizes the access limitations inherent in a network-structured knowledge base. Where a deductive method such as resolution would retrieve all assertions that satisfy a given pattern, an access-limited logic retrieves only those assertions reachable by following an available access path. The time complexity of inference in ALL is a polynomial function of the size of the accessible portion of the knowledge base, rather than an exponential function of the size of the entire knowledge base (as in much past work). Access-Limited Logic, though incomplete, still has a well-defined semantics and a weakened form of completeness, Socratic Completeness, which guarantees that for any fact which is a logical consequence of the knowledge base, there is a series of preliminary queries and assumptions after which a query of the fact will succeed. Algernon implements Access-Limited Logic. Algernon is impo...
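The retrieval restriction the abstract describes can be sketched as graph reachability: a query starting from a frame sees only assertions on access paths from that frame, never the whole knowledge base. The frames and slots below are invented for the illustration; they are not Algernon's actual representation.

```python
# Hedged sketch of access-limited retrieval: assertions are edges in a
# network-structured knowledge base, and a query from `start` can only
# collect assertions reachable along slots from there. Contents invented.

kb = {  # frame -> {slot: [filler frames]}
    "Clyde":    {"isa": ["Elephant"], "likes": ["Peanuts"]},
    "Elephant": {"isa": ["Mammal"], "color": ["Gray"]},
    "Mammal":   {"isa": ["Animal"]},
    "Rex":      {"isa": ["Dog"]},   # present in the KB, but on no path from Clyde
}

def accessible(start):
    """All (frame, slot, filler) assertions on access paths from `start`."""
    seen, stack, found = {start}, [start], []
    while stack:
        frame = stack.pop()
        for slot, fillers in kb.get(frame, {}).items():
            for f in fillers:
                found.append((frame, slot, f))
                if f not in seen:
                    seen.add(f)
                    stack.append(f)
    return found

reachable = accessible("Clyde")
print(("Elephant", "color", "Gray") in reachable)          # True: reachable via isa
print(any(frame == "Rex" for frame, _, _ in reachable))    # False: no access path
```

The traversal touches only the accessible portion of the network, which is the source of the polynomial bound the abstract claims: cost scales with what the access paths reach, not with the total size of the knowledge base.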