Results 1–10 of 10
Feature Logics
 In Handbook of Logic and Language, edited by van Benthem & ter Meulen
, 1994
Abstract

Cited by 33 (0 self)
Feature logics form a class of specialized logics which have proven especially useful in classifying and constraining the linguistic objects known as feature structures. Linguistically, these structures have their origin in the work of the Prague school of linguistics, followed by the work of Chomsky and Halle in The Sound Pattern of English [16]. Feature structures have been reinvented several times by computer scientists: in the theory of data structures, where they are known as record structures; in artificial intelligence, where they are known as frame or slot-value structures; in the theory of data bases, where they are called "complex objects"; and in computati...
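The attribute-value view sketched in this abstract can be made concrete with a minimal example: feature structures as nested Python dicts, with unification merging compatible structures and failing on atomic clashes. The feature names and values below are invented for illustration, and re-entrancy (structure sharing) is deliberately omitted.

```python
def unify(f, g):
    """Unify two feature structures represented as nested dicts.

    Atomic values must match exactly; dict-valued features are merged
    recursively.  Returns the unified structure, or None on a clash.
    """
    if not isinstance(f, dict) or not isinstance(g, dict):
        return f if f == g else None
    result = dict(f)
    for feat, val in g.items():
        if feat in result:
            sub = unify(result[feat], val)
            if sub is None:
                return None          # atomic or recursive clash
            result[feat] = sub
        else:
            result[feat] = val
    return result

np = {"cat": "NP", "agr": {"num": "sg", "per": 3}}
vp = {"cat": "VP", "agr": {"num": "sg"}}

# Compatible agreement features unify, accumulating information:
print(unify(np["agr"], vp["agr"]))          # {'num': 'sg', 'per': 3}
# An atomic clash (sg vs. pl) makes unification fail:
print(unify({"num": "sg"}, {"num": "pl"}))  # None
```

Unification here is purely information-combining: the result carries every feature of both inputs, which is what makes such structures useful for stating linguistic constraints.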
Probabilistic data exchange
 In Proc. ICDT
, 2010
Abstract

Cited by 28 (5 self)
The work reported here lays the foundations of data exchange in the presence of probabilistic data. This requires rethinking the very basic concepts of traditional data exchange, such as solution, universal solution, and the certain answers of target queries. We develop a framework for data exchange over probabilistic databases, and make a case for its coherence and robustness. This framework applies to arbitrary schema mappings, and finite or countably infinite probability spaces on the source and target instances. After establishing this framework and formulating the key concepts, we study the application of the framework to a concrete and practical setting where probabilistic databases are compactly encoded by means of annotations formulated over random Boolean variables. In this setting, we study the problems of testing for the existence of solutions and universal solutions, materializing such solutions, and evaluating target queries (for unions of conjunctive queries) in both the exact sense and the approximate sense. For each of the problems, we carry out a complexity analysis based on properties of the annotation, in various classes of dependencies. Finally, we show that the framework and results easily and completely generalize to allow not only the data, but also the schema mapping itself to be probabilistic.
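The annotation-based encoding mentioned in the abstract can be illustrated with a toy sketch (the relation, variable names, and probabilities below are invented, and the brute-force enumeration is nothing like a practical evaluator): each tuple carries a Boolean formula over independent random variables, and a tuple's marginal probability is the total weight of the possible worlds whose variable assignment satisfies its annotation.

```python
from itertools import product

# Independent random Boolean variables with their marginal probabilities
# (illustrative assumption):
prob = {"x": 0.7, "y": 0.4}

# tuple -> annotation, given as a predicate over a variable assignment
annotated = {
    ("alice", "paris"):  lambda w: w["x"],
    ("bob",   "london"): lambda w: w["x"] and not w["y"],
}

def tuple_probability(t):
    """Sum the weights of the possible worlds (variable assignments)
    that satisfy t's annotation, enumerating all 2^n assignments."""
    total = 0.0
    vars_ = list(prob)
    for bits in product([True, False], repeat=len(vars_)):
        w = dict(zip(vars_, bits))
        if annotated[t](w):
            weight = 1.0
            for v in vars_:
                weight *= prob[v] if w[v] else 1 - prob[v]
            total += weight
    return total

print(tuple_probability(("alice", "paris")))  # ≈ 0.7
print(tuple_probability(("bob", "london")))   # ≈ 0.7 * 0.6 = 0.42
```

The exponential enumeration is only there to make the semantics explicit; the complexity analysis in the paper concerns exactly how far one can avoid it.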
Consistency Checking in Complex Object Database Schemata with Integrity Constraints
, 1998
Abstract

Cited by 20 (13 self)
Integrity constraints are rules which should guarantee the integrity of a database. Provided that an adequate mechanism to express them is available, the following question arises: is there any way to populate a database that satisfies the constraints supplied by the database designer? That is, does the database schema, including its constraints, admit at least one nonempty model? This work answers that question in a complex object database environment, providing a theoretical framework with the following ingredients: two alternative formalisms, able to express a relevant set of state integrity constraints in a declarative style, and two specialized reasoners, based on the tableaux calculus, able to check the consistency of complex object database schemata expressed in the two formalisms. The proposed formalisms share a common kernel, which supports complex objects and object identifiers, and allow the expression of acyclic descriptions of: classes, nested relati...
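The consistency question (does the schema admit a nonempty model?) can be caricatured by brute force over a tiny class hierarchy; this is only an illustrative sketch, nothing like the tableaux reasoners the paper develops, and the class names and constraints are invented. Constraints are predicates over candidate populations, and we search small finite populations for one that satisfies them all.

```python
from itertools import product

def consistent(constraints, domain_size=3):
    """Search all assignments of the classes {E, P} to a small finite
    population for one satisfying every constraint."""
    options = [frozenset(), frozenset({"E"}), frozenset({"P"}),
               frozenset({"E", "P"})]
    for assignment in product(options, repeat=domain_size):
        pop = {i: set(cs) for i, cs in enumerate(assignment)}
        if all(c(pop) for c in constraints):
            return True
    return False

# Employee ISA Person:
isa      = lambda pop: all("P" in cs for cs in pop.values() if "E" in cs)
# Employee and Person are disjoint:
disjoint = lambda pop: all(not ({"E", "P"} <= cs) for cs in pop.values())
# Some Employee exists (the "nonempty" part of the question):
nonempty = lambda pop: any("E" in cs for cs in pop.values())

print(consistent([isa, nonempty]))            # True: a populated model exists
print(consistent([isa, disjoint, nonempty]))  # False: Employee is forced empty
```

The second call shows the phenomenon the paper targets: individually reasonable constraints can jointly force a class to be unpopulatable, and detecting this before deployment is exactly the schema-consistency problem.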
The inference problem for template dependencies
, 1982
Abstract

Cited by 13 (1 self)
A template dependency is a formalized integrity constraint on a relational database, stating that whenever tuples exist in the database that agree on certain attributes, an additional tuple must also be present that agrees with the others in a specified way. It is shown that the inference problem for template dependencies is undecidable, that is, there can be no algorithm for determining whether a given dependency is a logical consequence of a given finite set of dependencies. The undecidability result holds whether or not databases are considered to be necessarily finite.

INTRODUCTION

The goal of dependency theory is to formalize constraints on the data comprising a relational database. In general, a dependency is a statement to the effect that when certain tuples are present in the database, so are certain others. Such statements can be used, for example, to capture the idea that attributes are functionally related or independent in some way. Many varieties of dependencies have been proposed in the literature; see the discussions in Fagin (1980) and Yannakakis and Papadimitriou (1980), for example. The proliferation of varieties is due in part to the desire to balance two opposing forces: on the one hand, dependencies should be of a form general enough to express interesting properties, but on the other hand, the form should not be so general that natural questions about dependencies become undecidable or computationally intractable. A significant question about any class of dependencies is its inference problem: given a finite set D of dependencies and a single dependency D0, to determine whether D0 is true in every database in which each member of D is true. A solution to the inference problem carries with it the ability to determine whether two sets of ...
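The definition above translates directly into a brute-force satisfaction check for a single template dependency against a finite relation (which is decidable; the undecidability result concerns inference between dependencies, not this check). Hypothesis rows are tuples of variable names, and every consistent match of all hypotheses against the relation must make the conclusion row present. The transitivity example below is an illustrative choice.

```python
from itertools import product

def satisfies(relation, hypotheses, conclusion):
    """True iff `relation` (a set of tuples) satisfies the template
    dependency: for every valuation of the variables under which all
    hypothesis rows match tuples of the relation, the conclusion row
    (instantiated by that valuation) is also in the relation."""
    for rows in product(relation, repeat=len(hypotheses)):
        valuation, consistent = {}, True
        for hyp, row in zip(hypotheses, rows):
            for var, value in zip(hyp, row):
                if valuation.setdefault(var, value) != value:
                    consistent = False   # same variable, two values
                    break
            if not consistent:
                break
        if consistent and tuple(valuation[v] for v in conclusion) not in relation:
            return False
    return True

# Transitivity as a template dependency: rows (x,y) and (y,z) force (x,z).
td_hyps, td_concl = [("x", "y"), ("y", "z")], ("x", "z")
r1 = {(1, 2), (2, 3), (1, 3)}
r2 = {(1, 2), (2, 3)}
print(satisfies(r1, td_hyps, td_concl))  # True
print(satisfies(r2, td_hyps, td_concl))  # False: (1, 3) is missing
```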
Databases and Finite-Model Theory
 In Descriptive Complexity and Finite Models
, 1997
Abstract

Cited by 6 (0 self)
Databases provide one of the main concrete scenarios for finite-model theory within computer science. This paper presents an informal overview of database theory aimed at finite-model theorists, emphasizing the specificity of the database area. It is argued that the area of databases is a rich source of questions and vitality for finite-model theory.
Type inference for recursive definitions
 In Proc. 14th Ann. IEEE Symp. Logic in Comput. Sci
, 1999
Abstract

Cited by 2 (1 self)
We consider type systems that combine universal types, recursive types, and object types. We study type inference in these systems under a rank restriction, following Leivant's notion of rank. To motivate our work, we present several examples showing how our systems can be used to type programs encountered in practice. We show that type inference in the rank-k system is decidable for k ≤ 2 and undecidable for k ≥ 3. (Similar results based on different techniques are known to hold for System F, without recursive types and object types.) Our undecidability result is obtained by a reduction from a particular adaptation (which we call "regular") of the semi-unification problem, whose undecidability is, interestingly, obtained by methods totally different from those used in the case of standard (or finite) semi-unification.
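The flavor of the "regular" setting can be sketched by unification without the occurs-check over term graphs: a variable may be bound to a term containing itself, yielding a cyclic graph that denotes a recursive (regular) type. This is an illustrative sketch of unification over rational trees, not the paper's construction; the class names and the single `Arrow` constructor are invented.

```python
class Var:
    """A type variable; `ref` points at its binding once unified."""
    def __init__(self, name):
        self.name, self.ref = name, None

class Arrow:
    """A function type dom -> cod."""
    def __init__(self, dom, cod):
        self.dom, self.cod = dom, cod

def find(t):
    """Chase variable bindings to the representative term."""
    while isinstance(t, Var) and t.ref is not None:
        t = t.ref
    return t

def unify(a, b, seen=None):
    """Unification WITHOUT the occurs-check: cycles are permitted, and
    `seen` guards against re-visiting a pair on a cyclic term graph."""
    seen = seen if seen is not None else set()
    a, b = find(a), find(b)
    if a is b or (id(a), id(b)) in seen:
        return True
    seen.add((id(a), id(b)))
    if isinstance(a, Var):
        a.ref = b
        return True
    if isinstance(b, Var):
        b.ref = a
        return True
    return unify(a.dom, b.dom, seen) and unify(a.cod, b.cod, seen)

# Solve  a = a -> b.  Finite unification fails the occurs-check here;
# without it, `a` becomes a cyclic arrow type whose domain is itself.
a, b = Var("a"), Var("b")
ok = unify(a, Arrow(a, b))
t = find(a)
print(ok, find(t.dom) is t)  # True True
```

Dropping the occurs-check is exactly the "simple modification" that makes unification produce regular solutions instead of rejecting them.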
Type Inference with Recursive Types at Different Ranks
, 1999
Abstract
We consider a polymorphic type system (System F) with recursive types for the lambda calculus with constants. We use Leivant’s notion of rank to delimit the boundaries for decidable and undecidable type inference in our system. More precisely, we show that type inference in our system is undecidable at rank k ≥ 3. Similar results are known to hold for System F without recursive types. Our undecidability result is obtained by a reduction from a particular adaptation of the semi-unification problem (so-called “regular semi-unification”) whose undecidability is, interestingly, obtained by methods totally different from those used in the case of standard (or finite) semi-unification. Type inference for a rank-1 restriction of our system is known to be decidable. A simple modification to the way unification is performed (so-called “regular unification”) is all that is required. We also conjecture that type inference at rank 2 is decidable and equivalent to finding regular solutions for instances of a particular form of semi-unification known as “acyclic semi-unification”.
Type Inference with Recursive Types at Different Ranks
, 1994
Abstract
We consider a polymorphic type system (System F) with recursive types for the lambda calculus with constants. We use Leivant's notion of rank to delimit the boundaries for decidable and undecidable type inference in our system. More precisely, we show that type inference in our system is undecidable at rank k ≥ 3. Similar results are known to hold for System F without recursive types. Our undecidability result is obtained by a reduction from a particular adaptation of the semi-unification problem (so-called "regular semi-unification") whose undecidability is, interestingly, obtained by methods totally different from those used in the case of standard (or finite) semi-unification. Type inference for a rank-1 restriction of our system is known to be decidable. A simple modification to the way unification is performed (so-called "regular unification") is all that is required. We also conjecture that type inference at rank 2 is decidable and equivalent to finding regular solutions for instances of a particular form of semi-unification known as "acyclic semi-unification".
RELATIONAL LATTICES: AN INTRODUCTION
Abstract
We study an interpretation of lattice connectives as natural join and inner union between database relations with non-uniform headers. To the best of our knowledge, this interpretation was first proposed by database researchers in [Tropashko, 2005, Spight and Tropashko, 2006]. It does not seem to have attracted the at...
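A small sketch of the two connectives on relations with non-uniform headers, with natural join taken as the lattice meet and inner union as the join, following the interpretation described above; the encoding and the example instance are invented. The demo checks the two lattice absorption laws on a toy pair of relations.

```python
def natural_join(r, s):
    """Natural join (the meet here).  A relation is a pair
    (frozenset of attributes, set of frozenset((attr, value)) tuples)."""
    (hr, tr), (hs, ts) = r, s
    common = hr & hs
    tuples = set()
    for t1 in tr:
        for t2 in ts:
            d1, d2 = dict(t1), dict(t2)
            if all(d1[a] == d2[a] for a in common):   # agree on shared attrs
                tuples.add(frozenset({**d1, **d2}.items()))
    return (hr | hs, tuples)

def inner_union(r, s):
    """Inner union (the join here): project both relations onto the
    shared attributes, then take the ordinary union."""
    (hr, tr), (hs, ts) = r, s
    header = hr & hs
    proj = lambda ts_: {frozenset((a, v) for a, v in t if a in header)
                        for t in ts_}
    return (header, proj(tr) | proj(ts))

# Relations with different headers {A,B} and {B,C}:
R = (frozenset({"A", "B"}), {frozenset({("A", 1), ("B", 2)})})
S = (frozenset({"B", "C"}), {frozenset({("B", 2), ("C", 3)})})

# Absorption, which holds in every lattice, checked on this instance:
print(natural_join(R, inner_union(R, S)) == R)  # True
print(inner_union(R, natural_join(R, S)) == R)  # True
```

That both laws come out true on relations whose headers differ is the point of the non-uniform-header setting: the header itself participates in the lattice order.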