Results 1–10 of 10
Computing With First-Order Logic
, 1995
Abstract

Cited by 53 (13 self)
We study two important extensions of first-order logic (FO) with iteration, the fixpoint and while queries. The main result of the paper concerns the open problem of the relationship between fixpoint and while: they are the same iff PTIME = PSPACE. These and other expressibility results are obtained using a powerful normal form for while which shows that each while computation over an unordered domain can be reduced to a while computation over an ordered domain via a fixpoint query. The fixpoint query computes an equivalence relation on tuples which is a congruence with respect to the rest of the computation. The same technique is used to show that equivalence of tuples and structures with respect to FO formulas with a bounded number of variables is definable in fixpoint. Generalizing fixpoint and while, we consider more powerful languages which model arbitrary computation interacting with a database using a finite set of FO queries. Such computation is modeled by a relational machine...
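As a minimal illustration of a fixpoint query (an inflationary iteration of an FO step; the example is illustrative, not taken from the paper), transitive closure over an edge relation can be sketched as:

```python
def fixpoint_transitive_closure(edges):
    """Inflationary fixpoint: repeatedly apply the FO step
    T(x, y) <- E(x, y) OR EXISTS z (T(x, z) AND E(z, y))
    until no new tuples appear."""
    t = set()
    while True:
        new = set(edges)
        # join the current approximation T with E
        new |= {(x, w) for (x, z) in t for (u, w) in edges if z == u}
        if new <= t:          # nothing new: fixpoint reached
            return t
        t |= new

tc = fixpoint_transitive_closure({(1, 2), (2, 3)})  # adds (1, 3)
```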
Analysis and application of adaptive sampling
 in Proc. of the 19th ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems (PODS'99), ACM
, 1999
Abstract

Cited by 7 (0 self)
An estimation algorithm for a query is a probabilistic algorithm that computes an approximation for the size (number of tuples) of the query. The main question that is studied is which classes of logically definable queries have fast estimation algorithms. Evidence from descriptive complexity theory is provided that indicates not all such queries have fast estimation algorithms. However, it is shown that on classes of structures of bounded degree, all first-order queries have fast estimation algorithms. These estimation algorithms use a form of statistical sampling known as adaptive sampling. Several versions of adaptive sampling have been developed by other researchers. The original version has been surpassed in some ways by a newer version and a more specialized Monte Carlo algorithm.
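A sketch of the adaptive-sampling idea, assuming the classic Lipton–Naughton setup in which the answer is split into parts whose individual sizes can be sampled (function and parameter names are illustrative, not from the paper):

```python
import random

def adaptive_sample_estimate(part_size, n_parts, threshold, rng=random):
    """Sample random parts of the query answer, summing their sizes,
    until the running total passes a threshold; the estimate scales
    the mean sampled part size by the number of parts."""
    total, samples = 0, 0
    while total < threshold:
        total += part_size(rng.randrange(n_parts))
        samples += 1
    return n_parts * total / samples
```

The adaptivity is in the stopping rule: skewed answers trigger more samples before the threshold is crossed, which is what yields the fast expected running times the abstract refers to.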
A Probabilistic View of Datalog Parallelization
 in Proc. Intl. Conf. on Database Theory
, 1993
Abstract

Cited by 4 (2 self)
We explore an approach to developing Datalog parallelization strategies that aims at good expected rather than worst-case performance. To illustrate, we consider a very simple parallelization strategy that applies to all Datalog programs. We prove that this has very good expected performance under equal distribution of inputs. This is done using an extension of 0-1 laws adapted to this context. The analysis is confirmed by experimental results on randomly generated data. 1 Introduction The performance requirements of databases for advanced applications, and the increased availability of cheap parallel processing, have naturally lent great importance to the development of parallel processing techniques for databases. Much of the existing research in this direction has focused on parallelization of Datalog queries. In this paper we investigate parallel processing of Datalog from a probabilistic viewpoint. In contrast to existing work, we propose to guide the design and evaluation of para...
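The snippet does not spell out the paper's specific strategy; a common baseline for parallelizing Datalog, hash-partitioning tuples on a join attribute so that each processor evaluates its own fragment, can be sketched as (illustrative only):

```python
def hash_partition(relation, p):
    """Distribute tuples across p processors by hashing a join
    attribute (here: the first column); each processor then runs
    ordinary (e.g. semi-naive) evaluation on its own fragment."""
    buckets = [set() for _ in range(p)]
    for t in relation:
        buckets[hash(t[0]) % p].add(t)
    return buckets
```

Under uniformly distributed inputs the fragments are balanced with high probability, which is the kind of expected-case behavior a 0-1-law analysis can make precise.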
Optimizing Active Databases using the Split Technique
 Proceedings 4th Intl. Conference on Database Theory (ICDT '92), LNCS 646
, 1992
Abstract

Cited by 3 (2 self)
A method to perform nonmonotonic relational rule computations is presented, called the split technique. The goal is to avoid redundant computations with rules that can insert and delete sets of tuples specified by the rule body. The method is independent of the control strategy that governs rule firing. Updatable relations are partitioned, as the computation progresses, into blocks of tuples such that tuples within a block are indiscernible from each other based on the computation so far. Results of previous rule firings are remembered as "relational equations" so that a new rule firing does not recompute parts of the result that can be determined from the existing equations. Semi-naive evaluation falls out as a special case when all rules specify inserts. The method is amenable to parallelization.
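A sketch of the partition refinement underlying such a technique, assuming blocks of indiscernible tuples are split whenever a rule firing touches only part of a block (illustrative, not the paper's exact algorithm):

```python
def split(blocks, affected):
    """Refine each block of mutually indiscernible tuples into the
    part touched by a rule firing and the untouched part; tuples
    left in the same block remain indiscernible so far."""
    refined = []
    for block in blocks:
        for part in (block & affected, block - affected):
            if part:  # drop empty parts so blocks stay a partition
                refined.append(part)
    return refined
```

Whole blocks can then be inserted or deleted by reference, which is how redundant per-tuple recomputation is avoided.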
The Kolmogorov Expressive Power of Boolean Query Languages
 in Proc. ICDT'95, Springer-Verlag Lecture Notes in Computer Science #893
, 1996
Abstract

Cited by 3 (2 self)
We address the question "How much of the information stored in a given database can be retrieved by all Boolean queries in a given query language?". In order to answer it we develop a Kolmogorov complexity based measure of the expressive power of Boolean query languages over finite structures. This turns the above informal question into a precisely defined mathematical one. This notion gives a meaningful definition of the expressive power of a Boolean query language in a single finite database. The notion of Kolmogorov expressive power of a Boolean query language L in a finite database A is defined by considering two values: the Kolmogorov complexity of the isomorphism type of A, equal to the length of the shortest description of this type, and the number of bits of this description that can be reconstructed from the truth values of all queries from L in A. The closer the second value is to the first, the more expressive the query language. After giving the definitions and provin...
Efficient Approximations of Conjunctive Queries
Abstract

Cited by 3 (2 self)
When finding exact answers to a query over a large database is infeasible, it is natural to approximate the query by a more efficient one that comes from a class with good bounds on the complexity of query evaluation. In this paper we study such approximations for conjunctive queries. These queries are of special importance in databases, and we have a very good understanding of the classes that admit fast query evaluation, such as acyclic, or bounded (hyper)treewidth queries. We define approximations of a given query Q as queries from one of those classes that disagree with Q as little as possible. We mostly concentrate on approximations that are guaranteed to return correct answers. We prove that for the above classes of tractable conjunctive queries, approximations always exist, and are at most polynomial in the size of the original query. This follows from general results we establish that relate closure properties of classes of conjunctive queries to the existence of approximations. We also show that in many cases, the size of approximations is bounded by the size of the query they approximate. We establish a number of results showing how combinatorial properties of queries affect properties of their approximations, study bounds on the number of approximations, as well as the complexity of finding and identifying approximations. We also look at approximations that return all correct answers and study their properties.
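Acyclicity, one of the tractable classes mentioned above, can be tested by GYO reduction: repeatedly remove "ears" until the query's hypergraph is exhausted. A compact sketch, with the hypergraph given as a list of vertex sets (names illustrative):

```python
def is_acyclic(hyperedges):
    """GYO reduction: an ear is a hyperedge whose vertices shared
    with the rest are all covered by one other hyperedge. The query
    is (alpha-)acyclic iff repeatedly removing ears empties it."""
    edges = [set(e) for e in hyperedges]
    changed = True
    while changed and len(edges) > 1:
        changed = False
        for i, e in enumerate(edges):
            others = edges[:i] + edges[i + 1:]
            shared = {v for v in e if any(v in o for o in others)}
            if any(shared <= o for o in others):
                edges.pop(i)        # remove the ear, repeat
                changed = True
                break
    return len(edges) <= 1
```

For acyclic (and, more generally, bounded-hypertree-width) queries, evaluation is tractable, which is what makes them suitable targets for the approximations studied here.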
On the Indiscernibility of Individuals in Logic Programming
 Journal of Logic and Computation
, 1997
Abstract

Cited by 2 (2 self)
According to Leibniz' principle, two individuals a and b are indiscernible if they share the same properties. Indiscernibility of objects provides a potential for optimization in deductive systems, and has been exploited, e.g., in the area of active database systems. In this paper, we address the issue of indiscernibility in logic programs and outline possible benefits for computation. After a formal definition of the notion of indiscernibility, we investigate some basic properties. The main contribution is then an analysis of the computational cost of checking indiscernibility of individuals (i.e. constants) in logic programs without function symbols, which we pursue in detail for ground logic programs. Regarding query optimization, the results show that online computation of indiscernibility is expensive, and thus suggest adopting an offline strategy, which may pay off for certain computational tasks.
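A sufficient syntactic condition for indiscernibility of two constants in a set of ground facts is that swapping them everywhere is an automorphism, i.e. leaves the set unchanged. A sketch (illustrative, not the paper's exact definition), with facts encoded as (predicate, argument-tuple) pairs:

```python
def indiscernible(a, b, facts):
    """Check whether transposing constants a and b maps the set of
    ground facts to itself; if so, a and b satisfy the same
    properties derivable from these facts."""
    swap = lambda c: b if c == a else a if c == b else c
    swapped = {(pred, tuple(swap(c) for c in args))
               for pred, args in facts}
    return swapped == set(facts)
```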
Computational Model Theory: An Overview
 Logic Journal of the IGPL
, 1998
Abstract

Cited by 2 (0 self)
The computational complexity of a problem is the amount of resources, such as time or space, required by a machine that solves the problem. The descriptive complexity of problems is the complexity of describing problems in some logical formalism over finite structures. One of the exciting developments in complexity theory is the discovery of a very intimate connection between computational and descriptive complexity. It is this connection between complexity theory and finite-model theory that we term computational model theory. In this overview paper we offer one perspective on computational model theory. Two important observations underlie our perspective: (1) while computational devices work on encodings of problems, logic is applied directly to the underlying mathematical structures, and this "mismatch" complicates the relationship between logic and complexity significantly, and (2) first-order logic has severely limited expressive power on finite structures, and one way to increase the...
Highly Expressive Query Languages for Unordered Data Trees
 in ICDT (2012), 46–60
, 2012
Abstract
We study highly expressive query languages for unordered data trees, using as formal vehicles Active XML and extensions of languages in the while family. All languages may be seen as adding some form of control on top of a set of basic pattern queries. The results highlight the impact and interplay of different factors: the expressive power of basic queries, the embedding of computation into data (as in Active XML), and the use of deterministic vs. nondeterministic control. All languages are Turing complete, but not necessarily query complete in the sense of Chandra and Harel. Indeed, we show that some combinations of features yield serious limitations, analogous to FO^k definability in the relational context. On the other hand, the limitations come with benefits such as the existence of powerful normal forms. Other languages are "almost" complete, but fall short because of subtle limitations reminiscent of the copy elimination problem in object databases.