Topological Queries in Spatial Databases
Journal of Computer and System Sciences, 1996
Cited by 45 (2 self)
We study topological queries over two-dimensional spatial databases. First, we show that the topological properties of semialgebraic spatial regions can be completely specified using a classical finite structure, essentially the embedded planar graph of the region boundaries. This provides an invariant characterizing semialgebraic regions up to homeomorphism. All topological queries on semialgebraic regions can be answered by queries on the invariant whose complexity is polynomially related to the original. Also, we show that for the purpose of answering topological queries, semialgebraic regions can always be represented simply as polygonal regions. We then study query languages for topological properties of two-dimensional spatial databases, starting from the topological relationships between pairs of planar regions introduced by Egenhofer. We show that the closure of these relationships under appropriate logical operators yields languages which are complete for topological prope...
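The Egenhofer relationships mentioned in the abstract can be illustrated with the 4-intersection model, which classifies the relation between two planar regions by which of the four interior/boundary intersections are nonempty. The sketch below uses finite point sets as stand-ins for interiors and boundaries; this encoding and the function names are illustrative assumptions, not the paper's construction.

```python
def four_intersection(interior_a, boundary_a, interior_b, boundary_b):
    """Return the 4-intersection matrix as a tuple of nonemptiness flags."""
    return (
        bool(interior_a & interior_b),  # interior / interior
        bool(interior_a & boundary_b),  # interior / boundary
        bool(boundary_a & interior_b),  # boundary / interior
        bool(boundary_a & boundary_b),  # boundary / boundary
    )

# A subset of the named Egenhofer relations the matrix distinguishes.
RELATIONS = {
    (False, False, False, False): "disjoint",
    (False, False, False, True):  "meet",
    (True,  True,  True,  True):  "overlap",
    (True,  False, False, True):  "equal",
    (True,  True,  False, False): "contains",
    (True,  False, True,  False): "inside",
}
```

For example, two regions whose boundaries share a point but whose interiors are separate fall into the "meet" class.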
Information-theoretic Limitations of Formal Systems
Journal of the ACM, 1974
Cited by 45 (7 self)
An attempt is made to apply information-theoretic computational complexity to metamathematics. The paper studies the number of bits of instructions that must be given to a computer for it to perform finite and infinite tasks, and also the amount of time that it takes the computer to perform these tasks. This is applied to measuring the difficulty of proving a given set of theorems, in terms of the number of bits of axioms that are assumed, and the size of the proofs needed to deduce the theorems from the axioms.
A Domain-Theoretic Approach to Computability on the Real Line
1997
Cited by 43 (8 self)
In recent years, there has been a considerable amount of work on using continuous domains in real analysis. Most notable are the development of the generalized Riemann integral with applications in fractal geometry, several extensions of the programming language PCF with a real number data type, and a framework and an implementation of a package for exact real number arithmetic. Based on recursion theory we present here a precise and direct formulation of effective representation of real numbers by continuous domains, which is equivalent to the representation of real numbers by algebraic domains as in the work of Stoltenberg-Hansen and Tucker. We use basic ingredients of an effective theory of continuous domains to spell out notions of computability for the reals and for functions on the real line. We prove directly that our approach is equivalent to the established Turing-machine based approach which dates back to Grzegorczyk and Lacombe, is used by Pour-El & Richards in their found...
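The interval idea behind the domain-theoretic view of real computation can be rendered concretely: a real is presented as nested rational intervals of shrinking width, and operations map approximations of the inputs to approximations of the result. The names and the accuracy convention (width at most 2^-n at stage n) below are illustrative assumptions, not the paper's effective-domain construction.

```python
from fractions import Fraction

def sqrt2_interval(n):
    """Rational interval of width <= 2**-n containing sqrt(2), by bisection."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2**n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid   # mid is still below sqrt(2)
        else:
            hi = mid   # mid is above sqrt(2)
    return lo, hi

def add(x, y):
    """Interval addition: stage-n output needs stage-(n+1) inputs."""
    def z(n):
        (xl, xh), (yl, yh) = x(n + 1), y(n + 1)
        return xl + yl, xh + yh   # widths 2**-(n+1) each sum to 2**-n
    return z
```

Asking `add(sqrt2_interval, sqrt2_interval)` for stage n yields an interval of width at most 2^-n around 2·sqrt(2).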
The Isomorphism Conjecture Fails Relative to a Random Oracle
Journal of the ACM, 1996
Cited by 40 (4 self)
Berman and Hartmanis [BH77] conjectured that there is a polynomial-time computable isomorphism between any two languages complete for NP with respect to polynomial-time computable many-one (Karp) reductions. Joseph and Young [JY85] gave a structural definition of a class of NP-complete sets, the k-creative sets, and defined a class of sets (the K^f_k's) that are necessarily k-creative. They went on to conjecture that certain of these K^f_k's are not isomorphic to the standard NP-complete sets. Clearly, the Berman-Hartmanis and Joseph-Young conjectures cannot both be correct. We introduce a family of strong one-way functions, the scrambling functions. If f is a scrambling function, then K^f_k is not isomorphic to the standard NP-complete sets, as Joseph and Young conjectured, and the Berman-Hartmanis conjecture fails. Indeed, if scrambling functions exist, then the isomorphism conjecture also fails at higher complexity classes such as EXP and NEXP. As evidence for the existence of scramb...
Decidability and Expressiveness for First-Order Logics of Probability
Information and Computation, 1989
Cited by 40 (6 self)
We consider decidability and expressiveness issues for two first-order logics of probability. In one, the probability is on possible worlds, while in the other, it is on the domain. It turns out that in both cases it takes very little to make reasoning about probability highly undecidable. We show that when the probability is on the domain, if the language contains only unary predicates then the validity problem is decidable. However, if the language contains even one binary predicate, the validity problem is \Pi^2_1-complete, as hard as elementary analysis with free predicate and function symbols. With equality in the language, even with no other symbol, the validity problem is at least as hard as that for elementary analysis, \Pi^1_1-hard. Thus, the logic cannot be axiomatized in either case. When we put the probability on the set of possible worlds, the validity problem is \Pi^2_1-complete with as little as one unary predicate in the language, even without equality. With equalit...
Incremental concept learning for bounded data mining
Information and Computation, 1999
Cited by 39 (29 self)
Important refinements of concept learning in the limit from positive data considerably restricting the accessibility of input data are studied. Let c be any concept; every infinite sequence of elements exhausting c is called a positive presentation of c. In all learning models considered the learning machine computes a sequence of hypotheses about the target concept from a positive presentation of it. With iterative learning, the learning machine, in making a conjecture, has access to its previous conjecture and the latest data item coming in. In k-bounded example-memory inference (k is a priori fixed) the learner is allowed to access, in making a conjecture, its previous hypothesis, its memory of up to k data items it has already seen, and the next element coming in. In the case of k-feedback identification, the learning machine, in making a conjecture, has access to its previous conjecture, the latest data item coming in, and, on the basis of this information, it can compute k items and query the database of previous data to find out, for each of the k items, whether or not it is in the database (k is again a priori fixed). In all cases, the sequence of conjectures has to converge to a hypothesis ...
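The iterative-learning restriction described in the abstract (new conjecture computed only from the previous conjecture and the latest data item) can be sketched on a toy concept class. The class here, initial segments {0, ..., k} of the naturals, is an illustrative assumption chosen for brevity, not an example from the paper.

```python
def iterative_learner(prev_hypothesis, item):
    """New conjecture from previous conjecture and latest item only:
    conjecture the initial segment {0, ..., max seen so far}."""
    return max(prev_hypothesis, item)

def run(presentation):
    """Feed a positive presentation; return the sequence of conjectures."""
    hypothesis, conjectures = 0, []
    for item in presentation:
        hypothesis = iterative_learner(hypothesis, item)
        conjectures.append(hypothesis)
    return conjectures
```

On any positive presentation of {0, ..., k}, the conjectures stabilize on k once the maximum element has appeared, so the learner identifies the class in the limit despite keeping no memory beyond its last conjecture.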
The Stable Models of a Predicate Logic Program
Journal of Logic Programming, 1992
Cited by 35 (13 self)
In this paper we investigate and solve the problem of classifying the Turing complexity of stable models of finite and recursive predicate logic programs. Gelfond-Lifschitz [7] introduced the concept of a stable model M of a Predicate Logic Program P. Here we show that, up to a recursive 1-1 coding, the set of all stable models of finite Predicate Logic Programs and the ...
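The stable-model concept cited above can be made concrete for the ground (propositional) case via the Gelfond-Lifschitz reduct: delete every rule whose negative body meets the candidate set M, strip negation from the remaining rules, and check that M is the least model of the result. The rule encoding as (head, positive_body, negative_body) triples is an assumption for illustration.

```python
def reduct(program, m):
    """Gelfond-Lifschitz reduct of a ground program w.r.t. candidate set m."""
    return [(head, pos) for (head, pos, neg) in program
            if not (set(neg) & m)]          # drop rules blocked by m

def minimal_model(positive_program):
    """Least model of a negation-free program by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in positive_program:
            if set(body) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(program, m):
    """m is stable iff it is exactly the least model of its own reduct."""
    return minimal_model(reduct(program, m)) == m
```

For the classic program {p :- not q; q :- not p}, both {p} and {q} are stable models, while {p, q} is not.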
Information-theoretic computational complexity
IEEE Transactions on Information Theory, 1974
Cited by 35 (10 self)
This paper attempts to describe, in nontechnical language, some of the concepts and methods of one school of thought regarding computational complexity. It applies the viewpoint of information theory to computers. This will first lead us to a definition of the degree of randomness of individual binary strings, and then to an information-theoretic version of Gödel's theorem on the limitations of the axiomatic method. Finally, we will examine in the light of these ideas the scientific method and von Neumann's views on the basic conceptual problems of biology. This field's fundamental concept is the complexity of a binary string, that is, a string of bits, of zeros and ones. The complexity of a binary string is the minimum quantity of information needed to define the string. For example, the string of length n consisting entirely of ones is of complexity approximately log_2 n, because only log_2 n bits of information are required to specify n in binary notation. However, this is rather vague. Exactly what is meant by the definition of a string? To make this idea precise a computer is used. One says that a string defines another when the first string gives instructions for constructing the second string. In other words, one string defines another when it is a ...
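The abstract's worked example (the string 1^n has complexity about log_2 n) can be checked directly: the string itself is n bits long, but its description needs only a short fixed program plus the binary numeral for n. The helper name below is illustrative.

```python
def description_length_of_ones(n):
    """Bits needed to write n in binary: the variable part of a
    description 'print n ones', ignoring the fixed program overhead."""
    return n.bit_length()

n = 1_000_000
string_of_ones = "1" * n
assert len(string_of_ones) == n              # the string occupies n bits
assert description_length_of_ones(n) == 20   # yet n fits in 20 bits
```

So the gap between n and log_2 n (here 1,000,000 versus 20) is exactly what makes 1^n highly non-random in the information-theoretic sense.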
The pitfalls of verifying floating-point computations
ACM Transactions on Programming Languages and Systems
Cited by 34 (2 self)
Current critical systems often use a lot of floating-point computations, and thus the testing or static analysis of programs containing floating-point operators has become a priority. However, correctly defining the semantics of common implementations of floating-point is tricky, because semantics may change according to many factors beyond source-code level, such as choices made by compilers. We here give concrete examples of problems that can appear and solutions that can be implemented in analysis software.
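Generic instances of the pitfalls alluded to above can be reproduced in IEEE 754 binary64 arithmetic (Python's float); these are standard illustrations, not the paper's own examples.

```python
# Decimal literals are not exact binary fractions:
assert 0.1 + 0.2 != 0.3

# Addition is not associative, so an analyzer (or compiler) must not
# freely reassociate floating-point expressions:
a, b, c = 1e16, 1.0, 1.0
assert (a + b) + c != a + (b + c)   # left absorbs both 1.0s, right keeps 2.0

# Near 1e16 the spacing between doubles is 2, and the round-to-nearest,
# ties-to-even rule absorbs an added 1.0 entirely:
assert 1e16 + 1.0 == 1e16
```

An x87-style pitfall (intermediate results held in 80-bit registers) does not show up in Python, which is precisely the kind of compiler-dependent behavior the paper warns about.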