A vision for management of complex models
SIGMOD Record, 2000. Cited by 172 (23 self).
"... Many problems encountered when building applications of database systems involve the manipulation of models. By “model, ” we mean a complex structure that represents a design artifact, such as a relational schema, object-oriented interface, UML model, XML DTD, web-site schema, semantic network, comp ..."
Abstract
-
Cited by 172 (23 self)
- Add to MetaCart
(Show Context)
Many problems encountered when building applications of database systems involve the manipulation of models. By “model,” we mean a complex structure that represents a design artifact, such as a relational schema, object-oriented interface, UML model, XML DTD, web-site schema, semantic network, complex document, or software configuration. Many uses of models involve managing changes in models and transformations of data from one model into another. These uses require an explicit representation of “mappings” between models. We propose to make database systems easier to use for these applications by making “model” and “model mapping” first-class objects with special operations that simplify their use. We call this capability model management. In addition to making the case for model management, our main contribution is a sketch of a proposed data model. The data model consists of formal, object-oriented structures for representing models and model mappings, and of high-level algebraic operations on those structures, such as matching, differencing, merging, function application, selection, inversion and instantiation. We focus on structure and semantics, not implementation.
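
The flavor of the proposed algebra can be shown on a toy representation. The sketch below assumes models are flat name-to-type dictionaries and a mapping is just a set of shared names; the paper's structures are richer object-oriented graphs, so this only illustrates match, diff, and merge, not the proposed data model itself.

```python
# Toy illustration of model-management operators (match, diff, merge).
# Models here are flat {element_name: type} dicts; real model matchers
# use heuristics and semantics, not simple name equality.

def match(m1, m2):
    """Return a mapping: elements judged to correspond (here, by equal name)."""
    return {name for name in m1 if name in m2}

def diff(m, mapping):
    """Elements of model m not covered by the mapping."""
    return {name: t for name, t in m.items() if name not in mapping}

def merge(m1, m2, mapping):
    """Union of both models; matched elements coincide by name here,
    so a dict union suffices (m1 wins on conflicting types)."""
    merged = dict(m2)
    merged.update(m1)
    return merged

emp_v1 = {"id": "int", "name": "str", "dept": "str"}
emp_v2 = {"id": "int", "name": "str", "salary": "float"}

corr = match(emp_v1, emp_v2)        # {'id', 'name'}
print(diff(emp_v2, corr))           # {'salary': 'float'}, i.e. new in v2
print(merge(emp_v1, emp_v2, corr))  # all four attributes
```
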
Efficient Incremental Validation of XML Documents
In ICDE, 2004. Cited by 39 (2 self).
"... We discuss incremental validation of XML documents with respect to DTDs and XML Schema definitions. We consider insertions and deletions of subtrees, as opposed to leaf nodes only, and we also consider the validation of ID and IDREF attributes. For arbitrary schemas, we give a worstcase time an ..."
Abstract
-
Cited by 39 (2 self)
- Add to MetaCart
(Show Context)
We discuss incremental validation of XML documents with respect to DTDs and XML Schema definitions. We consider insertions and deletions of subtrees, as opposed to leaf nodes only, and we also consider the validation of ID and IDREF attributes. For arbitrary schemas, we give a worst-case time and linear-space algorithm, and show that it is often far superior to revalidation from scratch. We present two classes of schemas, which capture most real-life DTDs, and show that they admit a logarithmic-time incremental validation algorithm that, in many cases, requires only constant auxiliary space. We then discuss an implementation of these algorithms that is independent of, and can be customized for, different storage mechanisms for XML. Finally, we present extensive experimental results showing that our approach is highly efficient and scalable.
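
The locality that makes incremental validation cheap can be seen in a toy validator. The sketch below assumes a hypothetical DTD-style content model written as a regular expression over child labels; each node caches its local validity, so replacing a subtree revalidates only the parent. This is only the caching idea; the paper's logarithmic-time algorithms use more sophisticated auxiliary structures.

```python
import re

# A DTD content model is a regular expression over child-element names.
# Hypothetical toy schema: <book> needs a title, 1+ authors, optional year.
CONTENT_MODEL = {
    "book": re.compile(r"title (author )+(year )?$"),
    "title": re.compile(r"$"), "author": re.compile(r"$"),
    "year": re.compile(r"$"),
}

class Node:
    def __init__(self, label, children=()):
        self.label, self.children = label, list(children)
        self.locally_valid = self._check()  # cached validity of this node's child sequence

    def _check(self):
        seq = "".join(c.label + " " for c in self.children)
        return bool(CONTENT_MODEL[self.label].match(seq))

def replace_subtree(parent, index, new_subtree):
    """Incremental step: swap a whole subtree, then revalidate only the parent.
    Cached validity of untouched siblings is reused, so the work is local."""
    parent.children[index] = new_subtree
    parent.locally_valid = parent._check()
    return parent.locally_valid and all(c.locally_valid for c in parent.children)

book = Node("book", [Node("title"), Node("author")])
print(book.locally_valid)                      # True
print(replace_subtree(book, 1, Node("year")))  # False: an author is required
```
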
Finite differencing of logical formulas for static analysis
In Proc. 12th ESOP, 2003. Cited by 37 (17 self).
"... This paper concerns mechanisms for maintaining the value of an instrumentationpredicate (a.k.a. derived predicate or view), defined via a logical formula over core predicates, in response to changes in the values of the core predicates. It presents an algorithm fortransforming the instrumentation p ..."
Abstract
-
Cited by 37 (17 self)
- Add to MetaCart
This paper concerns mechanisms for maintaining the value of an instrumentation predicate (a.k.a. derived predicate or view), defined via a logical formula over core predicates, in response to changes in the values of the core predicates. It presents an algorithm for transforming the instrumentation predicate's defining formula into a predicate-maintenance formula that captures what the instrumentation predicate's new value should be. This technique applies to program-analysis problems in which the semantics of statements is expressed using logical formulas that describe changes to core-predicate values, and provides a way to reflect those changes in the values of the instrumentation predicates.
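
A one-quantifier instance shows the idea. The sketch below uses hypothetical predicates, not the paper's algorithm (which derives maintenance formulas automatically from the defining formula): it maintains the derived predicate has_succ(v) := exists w. e(v, w) under edge updates by applying a delta instead of re-evaluating the definition.

```python
# Maintaining a derived (instrumentation) predicate under core-predicate
# updates, in the spirit of finite differencing: compute the *change* to
# the view rather than re-evaluating its defining formula from scratch.
#
# Core predicate:    e(v, w)                 (edge relation, dict of sets)
# Derived predicate: has_succ(v) := exists w. e(v, w)

succ = {}         # v -> set of successors (the core predicate e)
has_succ = set()  # materialized derived predicate

def insert_edge(v, w):
    succ.setdefault(v, set()).add(w)
    has_succ.add(v)      # delta formula: has_succ'(x) = has_succ(x) or x = v

def delete_edge(v, w):
    succ[v].discard(w)
    if not succ[v]:      # delta: drop v only if no other witness remains
        has_succ.discard(v)

insert_edge(1, 2); insert_edge(1, 3); insert_edge(2, 3)
delete_edge(1, 2)
print(sorted(has_succ))  # [1, 2]; vertex 1 still has witness 3
```
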
Incremental XPath Evaluation
"... We study the problem of incrementally maintaining the result of an XPath query on an XML database under updates. In its most general form, this problem asks to maintain a materialized XPath view over an XML database. It assumes an underlying XML database D and a query Q. One is given a sequence of u ..."
Abstract
-
Cited by 15 (4 self)
- Add to MetaCart
(Show Context)
We study the problem of incrementally maintaining the result of an XPath query on an XML database under updates. In its most general form, this problem asks to maintain a materialized XPath view over an XML database. It assumes an underlying XML database D and a query Q. One is given a sequence of updates U to D, and the problem is to compute the result of Q(U(D)), i.e., the result of evaluating the query Q on the database D after having applied the updates U. In order to answer this question quickly, we are allowed to maintain an auxiliary data structure, and the complexity of the maintenance algorithms is measured in (i) the size of the auxiliary data structure, (ii) the worst-case time per update needed to compute Q(U(D)), and (iii) the worst-case time per update needed to bring the auxiliary data structure up to date. We allow three kinds of updates: node insertion, node deletion, and node relabeling. Our main results are that downward XPath queries can be incrementally maintained in time O(depth(D) · poly(|Q|)) per update and conjunctive forward XPath queries in time O(depth(D) · log(width(D)) · poly(|Q|)) per update, where |Q| is the size of the query, and depth(D) and width(D) are the nesting depth and maximum number of siblings in the database D, respectively. The auxiliary data structures for maintenance are linear in |D| and polynomial in |Q| in all these cases.
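
For a single-step downward query the locality argument is easy to see. The sketch below, assuming a toy query //a/b and relabeling updates only, maintains the materialized answer by rechecking just the relabeled node and its children; the paper's algorithms handle full downward and conjunctive forward XPath, and all three update kinds, with the auxiliary structures described above.

```python
# Maintaining a materialized view of the toy downward query //a/b
# (b-labeled nodes whose parent is labeled a) under node relabeling.
# Relabeling x can only change the status of x and of x's children,
# so each update touches a local neighborhood instead of rerunning Q.

class Node:
    def __init__(self, label, parent=None):
        self.label, self.parent, self.children = label, parent, []
        if parent:
            parent.children.append(self)

view = set()  # materialized answer of //a/b

def in_answer(n):
    return n.label == "b" and n.parent is not None and n.parent.label == "a"

def init(n):  # initial evaluation of the query
    if in_answer(n):
        view.add(n)
    for c in n.children:
        init(c)

def relabel(n, new_label):
    n.label = new_label
    for m in [n] + n.children:  # the only nodes whose status can change
        (view.add if in_answer(m) else view.discard)(m)

root = Node("r"); x = Node("a", root); y = Node("b", x)
init(root)                   # view == {y}
relabel(x, "b")              # y loses its a-parent; x's parent is still "r"
relabel(root, "a")           # now root(a)/x(b) matches
print(x in view, y in view)  # True False
```
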
Effectively-Propositional Reasoning about Reachability in Linked Data Structures
"... Abstract. This paper proposes a novel method of harnessing existing SAT solvers to verify reachability properties of programs that manipulate linked-list data structures. Such properties are essential for proving program termination, correctness of data structure invariants, and other safety propert ..."
Abstract
-
Cited by 9 (2 self)
- Add to MetaCart
(Show Context)
This paper proposes a novel method of harnessing existing SAT solvers to verify reachability properties of programs that manipulate linked-list data structures. Such properties are essential for proving program termination, correctness of data structure invariants, and other safety properties. Our solution is complete, i.e., a SAT solver produces a counterexample whenever a program does not satisfy its specification. This result is surprising since even first-order theorem provers usually cannot deal with reachability in a complete way, because doing so requires reasoning about transitive closure. Our result is based on the following ideas: (1) programmers must write assertions in a restricted logic without quantifier alternation or function symbols; (2) the correctness of many programs can be expressed in such restricted logics, although we explain the tradeoffs; (3) recent results in descriptive complexity can be utilized to show that every program that manipulates potentially cyclic, singly- and doubly-linked lists, and that is annotated with assertions written in this restricted logic, can be verified with a SAT solver. We implemented a tool atop Z3 and used it to show the correctness of several linked list programs.
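
The restricted-logic style can be demonstrated with Z3's Python bindings (package z3-solver). The toy below is not the paper's verification pipeline: it only shows the effectively-propositional shape, relations only, no function symbols, alternation-free universal axioms, under which the solver decides a small reachability entailment and would otherwise return a concrete counterexample.

```python
from z3 import *  # pip install z3-solver

# Effectively-propositional (EPR) flavor: only relation symbols and
# quantifier-alternation-free axioms, so a SAT-style decision procedure
# is complete. A toy entailment check, not the paper's full tool.

Node = DeclareSort("Node")
nxt   = Function("nxt",   Node, Node, BoolSort())  # next-pointer as a relation
reach = Function("reach", Node, Node, BoolSort())  # intended transitive closure

u, v, w = Consts("u v w", Node)
axioms = [
    ForAll([u, v], Implies(nxt(u, v), reach(u, v))),                        # step
    ForAll([u, v, w], Implies(And(reach(u, v), reach(v, w)), reach(u, w))), # transitivity
]

a, b, c = Consts("a b c", Node)
s = Solver()
s.add(axioms)
s.add(nxt(a, b), nxt(b, c))
s.add(Not(reach(a, c)))  # negation of the property to verify
print(s.check())         # unsat: reach(a, c) follows; sat would mean a counterexample
```
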
Optimizing recursive queries in SQL
In SIGMOD Conference, 2005. Cited by 8 (2 self).
"... Recursion represents an important addition to the SQL lan-guage. This work focuses on the optimization of linear re-cursive queries in SQL. To provide an abstract framework for discussion, we focus on computing the transitive closure of a graph. Three optimizations are studied: (1) Early eval-uation ..."
Abstract
-
Cited by 8 (2 self)
- Add to MetaCart
(Show Context)
Recursion represents an important addition to the SQL language. This work focuses on the optimization of linear recursive queries in SQL. To provide an abstract framework for discussion, we focus on computing the transitive closure of a graph. Three optimizations are studied: (1) early evaluation of row selection conditions; (2) eliminating duplicate rows in intermediate tables; (3) defining an enhanced index to accelerate join computation. Optimizations are evaluated on two types of graphs: binary trees and sparse graphs. Binary trees represent an ideal graph with no cycles and a linear number of edges. Sparse graphs represent an average case with some cycles and a linear number of edges. In general, the proposed optimizations produce a significant reduction in the evaluation time of recursive queries.
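
Two of the three optimizations are visible in a plain recursive SQL query. The sketch below runs one through Python's sqlite3 on a hypothetical edge table: the selection on the source vertex is pushed into the base case (optimization 1), and UNION rather than UNION ALL removes duplicate rows in the intermediate table (optimization 2), which also guarantees termination on the cyclic input.

```python
import sqlite3

# Transitive closure as a linear recursive SQL query, on a small cyclic graph.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE edge(src INTEGER, dst INTEGER)")
db.executemany("INSERT INTO edge VALUES (?, ?)",
               [(1, 2), (2, 3), (3, 1), (3, 4)])

rows = db.execute("""
    WITH RECURSIVE tc(src, dst) AS (
        SELECT src, dst FROM edge WHERE src = 1  -- early selection: seed only vertex 1
        UNION                                    -- UNION drops duplicates, which also
        SELECT tc.src, edge.dst                  -- terminates the recursion on cycles
        FROM tc JOIN edge ON tc.dst = edge.src
    )
    SELECT src, dst FROM tc ORDER BY dst
""").fetchall()
print(rows)  # [(1, 1), (1, 2), (1, 3), (1, 4)]
```
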
Information preservation in XML-to-relational mappings
In Proc. of the Second International XML Database Symposium (XSym), 2004. Cited by 7 (1 self).
"... We study the problem of storing XML documents using relational mappings. We propose a formalization of classes of mapping schemes based on the languages used for defining functions that assign relational databases to XML documents and vice-versa. We also discuss notions of information preservation f ..."
Abstract
-
Cited by 7 (1 self)
- Add to MetaCart
(Show Context)
We study the problem of storing XML documents using relational mappings. We propose a formalization of classes of mapping schemes based on the languages used for defining functions that assign relational databases to XML documents and vice versa. We also discuss notions of information preservation for mapping schemes; we define lossless mapping schemes as those that preserve the structure and content of the documents, and validating mapping schemes as those in which valid documents can be mapped into legal databases, and all legal databases are (equivalent to) mappings of valid documents. We define one natural class of mapping schemes that captures all mappings in the literature, and show negative results for testing whether such mappings are lossless or validating. Finally, we propose a lossless and validating mapping scheme, and show that it performs well in the presence of updates.
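
A simple member of the class of mapping schemes in question is the classic edge-table mapping, sketched below: shredding and publishing are mutual inverses on the structure that is kept, which is what losslessness means in this setting (attributes and tail text are omitted for brevity here).

```python
import xml.etree.ElementTree as ET

# A lossless "edge table" mapping: every element becomes one row
# (id, parent_id, position, tag, text), and the document is rebuilt
# exactly from the rows. Attributes and tail text omitted for brevity.

def shred(elem, rows, parent=None, pos=0):
    nid = len(rows)
    rows.append((nid, parent, pos, elem.tag, elem.text))
    for i, child in enumerate(elem):
        shred(child, rows, nid, i)
    return rows

def publish(rows):
    nodes = {nid: ET.Element(tag) for nid, _, _, tag, _ in rows}
    for nid, parent, pos, _, text in rows:
        nodes[nid].text = text
        if parent is not None:
            nodes[parent].insert(pos, nodes[nid])
    return nodes[0]  # the root was shredded first, so it has id 0

doc = ET.fromstring("<book><title>XML</title><author>Ada</author></book>")
rows = shred(doc, [])
print(rows[1])                              # (1, 0, 0, 'title', 'XML')
print(ET.tostring(publish(rows)).decode())  # round-trips the document
```
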
Incremental maintenance of shortest distance and transitive closure in first-order logic and SQL
ACM Trans. Database Syst. Cited by 6 (2 self).
"... Given a database, the view maintenance problem is concerned with the efficient computation of the new contents of a given view when updates to the database happen. We consider the view maintenance problem for the situation when the database contains a (weighted) graph and the view is either the tran ..."
Abstract
-
Cited by 6 (2 self)
- Add to MetaCart
(Show Context)
Given a database, the view maintenance problem is concerned with the efficient computation of the new contents of a given view when updates to the database happen. We consider the view maintenance problem for the situation when the database contains a (weighted) graph and the view is either the transitive closure or the answer to the all-pairs shortest-distance problem (APSD). We give incremental algorithms for APSD, which support both edge insertions and deletions. For transitive closure, the algorithm is applicable to a more general class of graphs than those previously explored. Our algorithms use first-order queries, along with addition (+) and less-than (<) operations (FO(+, <)); they store O(n^2) tuples, where n is the number of vertices, and have AC^0 data complexity for integer weights. Since FO(+, <) is a sublanguage of SQL and is supported by almost all current database systems, our maintenance algorithms are better suited to database applications than maintenance algorithms that cannot be expressed as database queries.
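
For insertions, the first-order maintenance step is short enough to state directly: the pairs added to the closure by a new edge (u, v) are exactly those (x, y) with x reaching u and v reaching y. The sketch below, with plain Python sets standing in for the stored relation, implements that step as a nested loop (one SQL join in practice); deletions need the more careful FO(+, <) machinery the paper develops.

```python
# Incremental maintenance of transitive closure under edge insertion.
# The update is first-order: a single join over the stored closure,
# which is the point of expressing maintenance in FO(+, <) / SQL.

tc = set()  # transitive closure; O(n^2) pairs, matching the paper's space bound

def reaches(x, y):
    return x == y or (x, y) in tc

def insert_edge(u, v, vertices):
    for x in vertices:
        if reaches(x, u):
            for y in vertices:
                if reaches(v, y):
                    tc.add((x, y))

V = [1, 2, 3, 4]
insert_edge(1, 2, V); insert_edge(2, 3, V); insert_edge(3, 4, V)
print(sorted(tc))  # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```
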
A Relaxed Approach to Integrity and Inconsistency in Databases
"... Abstract. We demonstrate that many, though not all integrity checking methods are able to tolerate inconsistency, without having been aware of it. We show that it is possible to use them to beneficial effect and without further ado, not only for preserving integrity in consistent databases, but also ..."
Abstract
-
Cited by 5 (5 self)
- Add to MetaCart
(Show Context)
We demonstrate that many, though not all, integrity checking methods are able to tolerate inconsistency without having been designed to do so. We show that it is possible to use them to beneficial effect and without further ado, not only for preserving integrity in consistent databases, but also in databases that violate their constraints. This apparently relaxed attitude toward integrity and inconsistency stands in contrast to approaches that are much more cautious with respect to the prevention, identification, removal, repair and tolerance of inconsistent data that violate integrity. We assess several well-known methods in terms of inconsistency tolerance and give examples and counter-examples thereof.
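
The effect described here is easy to reproduce with a simplification-based check: on an update, only the constraint instances the update can affect are evaluated, so violations already present elsewhere are never re-examined, yet no new violation is admitted. The sketch below uses a hypothetical at-most-one-manager constraint.

```python
# Simplification-based integrity checking: evaluate only the instances of
# the constraint that the update can affect. Such methods are
# inconsistency-tolerant "for free": an old violation elsewhere in the
# table is never touched, but the update cannot introduce a new one.

# Hypothetical constraint: an employee has at most one manager.
manages = {("ann", "pat"), ("ann", "sam")}  # pre-existing violation: ann has two

def insert_ok(emp, mgr):
    """Check only the simplified constraint instance for `emp`."""
    return not any(e == emp and m != mgr for e, m in manages)

print(insert_ok("bob", "pat"))  # True:  bob is unaffected by ann's inconsistency
print(insert_ok("ann", "lee"))  # False: would add a third manager for ann
manages.add(("bob", "pat"))     # accepted update; the old violation stays contained
```
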
Refinement-Based Verification for Possibly-Cyclic Lists
"... In earlier work, we presented an abstraction-refinement mechanism that was successful in verifying automatically the partial correctness of in-situ list reversal when applied to an acyclic linked list [10]. This paper reports on the automatic verification of the total correctness (partial correctne ..."
Abstract
-
Cited by 4 (2 self)
- Add to MetaCart
(Show Context)
In earlier work, we presented an abstraction-refinement mechanism that was successful in verifying automatically the partial correctness of in-situ list reversal when applied to an acyclic linked list [10]. This paper reports on the automatic verification of the total correctness (partial correctness and termination) of the same list-reversal algorithm, when applied to a possibly-cyclic linked list. A key contribution that made this result possible is an extension of the finite-differencing technique [14] to enable the maintenance of reachability information for a restricted class of possibly-cyclic data structures, which includes possibly-cyclic linked lists.