Results 1–10 of 45
Query optimization in database systems
ACM Computing Surveys, 1984
Abstract

Cited by 207 (0 self)
Efficient methods of processing unanticipated queries are a crucial prerequisite for the success of generalized database management systems. A wide variety of approaches to improve the performance of query evaluation algorithms have been proposed: logic-based and semantic transformations, fast implementations of basic operations, and combinatorial or heuristic algorithms for generating alternative access plans and choosing among them. These methods are presented in the framework of a general query evaluation procedure using the relational calculus representation of queries. In addition, nonstandard query optimization issues such as higher-level query evaluation, query optimization in distributed databases, and use of database machines are addressed. The focus, however, is on query optimization in centralized database systems.
Multiple-Query Optimization
ACM Transactions on Database Systems, 1988
Abstract

Cited by 203 (3 self)
Some recently proposed extensions to relational database systems, as well as to deductive database systems, require support for multiple-query processing. For example, in a database system enhanced with inference capabilities, a simple query involving a rule with multiple definitions may expand to more than one actual query that has to be run over the database. It is an interesting problem then to come up with algorithms that process these queries together instead of one query at a time. The main motivation for performing such an inter-query optimization lies in the fact that queries may share common data. We examine the problem of multiple-query optimization in this paper. The first major contribution of the paper is a systematic look at the problem, along with the presentation and analysis of algorithms that can be used for multiple-query optimization. The second contribution lies in the presentation of experimental results. Our results show that using multiple-query processing algorithms may reduce execution cost considerably.
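The core idea of the abstract, that batched queries often share common data, can be illustrated with a minimal sketch (not the paper's algorithms): represent each query as a set of selection predicates and find the predicates shared by more than one query, which could then be evaluated once. The query names and predicate strings are hypothetical.

```python
from collections import Counter

def shared_predicates(queries):
    """Return predicates appearing in more than one query.

    queries: dict mapping a query name to a list of predicate strings.
    A shared predicate is a candidate common subexpression whose
    result can be computed once and reused.
    """
    counts = Counter(p for preds in queries.values() for p in set(preds))
    return {p for p, c in counts.items() if c > 1}

queries = {
    "Q1": ["dept = 'sales'", "salary > 50000"],
    "Q2": ["dept = 'sales'", "age < 30"],
}
print(shared_predicates(queries))  # {"dept = 'sales'"}
```

Real multiple-query optimizers work on access plans rather than raw predicate strings, but the cost saving comes from exactly this kind of sharing.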
Updating Derived Relations: Detecting Irrelevant and Autonomously Computable Updates
ACM Transactions on Database Systems, 1989
Abstract

Cited by 160 (2 self)
Consider a database containing not only base relations but also stored derived relations (also called materialized or concrete views). When a base relation is updated, it may also be necessary to update some of the derived relations. This paper gives sufficient and necessary conditions for detecting when an update of a base relation cannot affect a derived relation (an irrelevant update), and for detecting when a derived relation can be correctly updated using no data other than the derived relation itself and the given update operation (an autonomously computable update). The class of derived relations considered is restricted to those defined by PSJ-expressions, that is, any relational algebra expression constructed from an arbitrary number of project, select and join operations (but containing no self-joins). The class of update operations consists of insertions, deletions, and modifications, where the set of tuples to be deleted or modified is specified by a selection condition on ...
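The simplest instance of an irrelevant update described above can be sketched as follows (an illustration, not the paper's general conditions for PSJ-expressions): an insertion into a base relation cannot affect a pure selection view if the inserted tuple fails the view's selection predicate. The view definition and tuples here are hypothetical.

```python
def insert_is_irrelevant(view_predicate, new_tuple):
    """An insertion is irrelevant to a selection view when the new
    tuple does not satisfy the view's selection condition, so the
    materialized view need not be touched."""
    return not view_predicate(new_tuple)

# Hypothetical view: SELECT * FROM emp WHERE dept = 'sales'
view = lambda t: t["dept"] == "sales"

print(insert_is_irrelevant(view, {"dept": "hr", "salary": 40000}))     # True
print(insert_is_irrelevant(view, {"dept": "sales", "salary": 60000}))  # False
```

The paper's contribution is deciding this statically from the update's own selection condition, for views with projections and joins as well, rather than testing individual tuples.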
The implementation and performance evaluation of the ADMS query optimizer: Integrating query result caching and matching
In Proceedings of the International Conference on Extending Database Technology, 1994
Abstract

Cited by 74 (8 self)
In this paper, we describe the design and implementation of the ADMS query optimizer. This optimizer integrates query matching into optimization and generates more efficient query plans using cached results. It features data caching and pointer caching, alternative cache replacement strategies, and different cache update methods. A comprehensive set of experiments was conducted using a benchmark database and synthetic queries. The results showed that pointer caching and dynamic cache update strategies substantially saved query execution time and thus increased query throughput under situations with fair query correlation and update load. The disk cache space requirement is relatively small, and the extra optimization overhead introduced is more than offset by the time saved in query evaluation.
DBProxy: A dynamic data cache for Web applications
In Proc. ICDE, 2003
Abstract

Cited by 72 (0 self)
The majority of web pages served today are generated dynamically, usually by an application server querying a back-end database. To enhance the scalability of dynamic content serving in large sites, application servers are offloaded to front-end nodes, called edge servers. The improvement from such application offloading is marginal, however, if data is still fetched from the origin database system. To further improve scalability and cut response times, data must be effectively cached on such edge servers. The scale of deployment of edge servers and the rising costs of their administration demand that such caches be self-managing and adaptive. In this paper, we describe DBProxy, an edge-of-network semantic data cache for web applications. DBProxy is designed to adapt to changes in the workload in a transparent and graceful fashion by caching a large number of overlapping and dynamically changing "materialized views". New "views" are added automatically while others may be discarded to save space. In this paper, we discuss the challenges of designing and implementing such a dynamic edge data cache, and describe our proposed solutions.
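The hit test in a semantic cache like the one described reduces to query containment: a new query can be answered at the edge if its predicate is subsumed by a cached "view". A minimal sketch for one-dimensional range predicates (the function names and range representation are illustrative assumptions, not DBProxy's design):

```python
def answerable_from_cache(cached, query):
    """Containment test for range predicates over the same column.

    cached, query: (low, high) tuples, e.g. price BETWEEN low AND high.
    The query is a cache hit iff its range lies inside the cached range.
    """
    return cached[0] <= query[0] and query[1] <= cached[1]

cache = (0, 100)                                # cached: price BETWEEN 0 AND 100
print(answerable_from_cache(cache, (10, 50)))   # True  -> serve from edge cache
print(answerable_from_cache(cache, (50, 200)))  # False -> go to origin database
```

A production semantic cache must handle multi-attribute predicates, partial overlaps (answering part locally and fetching the remainder), and consistency under updates, which is where the design challenges discussed in the paper arise.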
Constraint-Generating Dependencies
Journal of Computer and System Sciences, 1995
Abstract

Cited by 48 (6 self)
Traditionally, dependency theory has been developed for uninterpreted data. Specifically, the only assumption that is made about the data domains is that data values can be compared for equality. However, data is often interpreted and there can be advantages in considering it as such, for instance obtaining more compact representations as done in constraint databases. This paper considers dependency theory in the context of interpreted data. Specifically, it studies constraint-generating dependencies. These are a generalization of equality-generating dependencies where equality requirements are replaced by constraints on an interpreted domain. The main technical results in the paper are a general decision procedure for the implication and consistency problems for constraint-generating dependencies, and complexity results for specific classes of such dependencies over given domains. The decision procedure proceeds by reducing the dependency problem to a decision problem for the constraint theory of interest, and is applicable as soon as the underlying constraint theory is decidable. The complexity results are, in some cases, directly lifted from the constraint theory; in other cases, optimal complexity bounds are obtained by taking into account the specific form of the constraint decision problem obtained by reducing the dependency implication problem.
The Complexity of Querying Indefinite Data about Linearly Ordered Domains
In Proceedings of the Eleventh ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, 1992
Abstract

Cited by 40 (2 self)
In applications dealing with ordered domains, the available data is frequently indefinite. While the domain is actually linearly ordered, only some of the order relations holding between points in the data are known. Thus, the data provides only a partial order, and query answering involves determining what holds under all the compatible linear orders. In this paper we study the complexity of evaluating queries in logical databases containing such indefinite information. We show that in this context queries are intractable even under the data complexity measure, but identify a number of PTIME subproblems. Data complexity in the case of monadic predicates is one of these PTIME cases, but for disjunctive queries the proof is nonconstructive, using well-quasi-order techniques. We also show that the query problem we study is equivalent to the problem of containment of conjunctive relational database queries containing inequalities. One of our results implies that the latter is Π₂ᵖ ...
Solving Satisfiability and Implication Problems in Database Systems
ACM Transactions on Database Systems, 1996
Abstract

Cited by 39 (0 self)
Satisfiability, implication, and equivalence problems involving conjunctive inequalities are important and widely encountered database problems that need to be efficiently and effectively processed. In this article we consider two popular types of arithmetic inequalities, (X op Y) and (X op C), where X and Y are attributes, C is a constant of the domain of X, and op ∈ {<, ≤, =, ≠, ≥, >}. These inequalities are most frequently used in a database system, inasmuch as the former type of inequality represents a θ-join, and the latter is a selection. We study the satisfiability and implication problems under the integer domain and the real domain, as well as under two different operator sets ({<, ≤, =, ≥, >} and {<, ≤, =, ≠, ≥, >}). Our results show that solutions under different domains and/or different operator sets are quite different. Out of these eight cases, excluding two cases that had been shown to be NP-hard, we either report the first necessary and sufficient conditions for these problems as well as their efficient algorithms with complexity analysis (for four cases), or provide an improved algorithm (for two cases). These iff conditions and algorithms are essential to database designers, practitioners, and researchers. These algorithms have been implemented and an experimental study comparing the proposed algorithms and those previously known is conducted. Our experiments show that the proposed algorithms are more efficient than previously known algorithms even for small input.
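A flavor of the satisfiability problem studied above can be sketched for the easiest fragment: conjunctions of (X op Y) over the reals with op restricted to < and ≤. Under that restriction a conjunction is unsatisfiable exactly when the constraint graph contains a cycle with at least one strict edge; this sketch is not the article's algorithm (which also covers =, ≠, constants, and the integer domain).

```python
from collections import defaultdict

def satisfiable(constraints):
    """Satisfiability over the reals of a conjunction of (x, op, y)
    constraints, op in {'<', '<='}.  Build the graph with an edge
    x -> y for each constraint; the conjunction is unsatisfiable iff
    some strict edge x < y lies on a cycle (i.e. y can reach x)."""
    graph = defaultdict(set)
    for x, op, y in constraints:
        graph[x].add(y)

    def reaches(src, dst):
        seen, stack = set(), [src]
        while stack:
            node = stack.pop()
            if node == dst:
                return True
            if node not in seen:
                seen.add(node)
                stack.extend(graph[node])
        return False

    # A strict edge inside a cycle forces x < y and y <= ... <= x.
    return not any(op == '<' and reaches(y, x)
                   for x, op, y in constraints)

print(satisfiable([('A', '<', 'B'), ('B', '<=', 'C')]))  # True
print(satisfiable([('A', '<', 'B'), ('B', '<=', 'A')]))  # False
```

Over the integers the same question is subtler (e.g. A < B, B < A + 2 forces B = A + 1), which is one reason the article treats the two domains separately.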
Beyond Finite Domains
1994
Abstract

Cited by 37 (3 self)
A finite domain constraint system can be viewed as a linear integer constraint system in which each variable has an upper and lower bound. Finite domains have been used successfully in Constraint Logic Programming (CLP) languages, for example CHIP [4], to attack combinatorial problems such as resource allocation, digital circuit verification, etc. In these problems, finite domains allow a natural expression of the problem constraints because bounds on the problem variables are explicit in the problem. In other problems, however, for example in temporal reasoning and some scheduling problems, there may not be natural bounds. For these problems, a standard approach has been to use ad hoc bounds, giving rise to a twofold problem. If a bound is too tight, then important solutions could be lost. If a bound is too loose, then significant inefficiency may result. This is because the algorithms used in finite domains work by propagating bounds on variables.
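The bounds propagation that the abstract refers to can be shown on a single constraint. This is a generic illustration of interval reasoning on X + Y = C, not code from CHIP or the paper:

```python
def propagate_sum(x, y, c):
    """Bounds propagation for X + Y = c.

    x, y are (lo, hi) integer bounds.  Each variable's bounds are
    tightened using the other's: X >= c - max(Y), X <= c - min(Y),
    and symmetrically for Y using X's new bounds.
    """
    xlo, xhi = max(x[0], c - y[1]), min(x[1], c - y[0])
    ylo, yhi = max(y[0], c - xhi), min(y[1], c - xlo)
    return (xlo, xhi), (ylo, yhi)

# X, Y in 0..10 with X + Y = 12 tightens both to 2..10.
print(propagate_sum((0, 10), (0, 10), 12))  # ((2, 10), (2, 10))
```

The efficiency concern raised in the abstract is visible here: with an ad hoc loose bound such as 0..1000000, each propagation step shrinks intervals only a little, so many more steps may be needed before a fixpoint is reached.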
An optimizing PROLOG front-end to a relational query system
1984
Abstract

Cited by 34 (1 self)
An optimizing translation mechanism for the dynamic interaction between a logic-based expert system written in PROLOG and a relational database accessible through SQL is presented. The mechanism makes use of an intermediate language that decomposes the optimization problem and makes the proposed approach target-language independent. It can either facilitate interaction between expert systems and databases, e.g., when integrating expert systems into business systems, or augment existing databases with (external) deductive capabilities.