Results 1 - 10 of 180
The design and implementation of hierarchical software systems with reusable components
- ACM Transactions on Software Engineering and Methodology, 1992
"... We present a domain-independent model of hierarchical software system design and construction that is based on interchangeable software components and largescale reuse. The model unifies the conceptualizations of two independent projects, Genesis and Avoca, that are successful examples of software c ..."
Abstract
-
Cited by 412 (72 self)
- Add to MetaCart
(Show Context)
We present a domain-independent model of hierarchical software system design and construction that is based on interchangeable software components and large-scale reuse. The model unifies the conceptualizations of two independent projects, Genesis and Avoca, that are successful examples of software component/building-block technologies and domain modeling. Building-block technologies exploit large-scale reuse, rely on open-architecture software, and elevate the granularity of programming to the subsystem level. Domain modeling formalizes the similarities and differences among systems of a domain. We believe our model is a blueprint for achieving software component technologies in many domains.
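To make the building-block idea concrete, here is a minimal sketch (not from the paper; the Store/CachingLayer/LoggingLayer names are invented) of interchangeable components that share one interface and can be stacked in different orders to assemble different systems:

class Store:
    def get(self, key): raise NotImplementedError
    def put(self, key, value): raise NotImplementedError

class MemoryStore(Store):
    def __init__(self): self.data = {}
    def get(self, key): return self.data[key]
    def put(self, key, value): self.data[key] = value

class CachingLayer(Store):
    # A layer wraps any Store, so layers can be stacked in different orders.
    def __init__(self, inner): self.inner, self.cache = inner, {}
    def get(self, key):
        if key not in self.cache:
            self.cache[key] = self.inner.get(key)
        return self.cache[key]
    def put(self, key, value):
        self.cache[key] = value
        self.inner.put(key, value)

class LoggingLayer(Store):
    def __init__(self, inner): self.inner = inner
    def get(self, key):
        print("get", key)
        return self.inner.get(key)
    def put(self, key, value):
        print("put", key)
        self.inner.put(key, value)

# Two systems assembled from the same subsystem-level parts, composed differently.
system_a = LoggingLayer(CachingLayer(MemoryStore()))
system_b = CachingLayer(MemoryStore())
system_a.put("x", 1)
print(system_a.get("x"))

Because every layer both implements and expects the same interface, subsystem-level parts can be swapped or reordered without touching the rest of the stack, which is the granularity of reuse the model argues for.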
Eddies: Continuously Adaptive Query Processing
- In SIGMOD, 2000
"... In large federated and shared-nothing databases, resources can exhibit widely fluctuating characteristics. Assumptions made at the time a query is submitted will rarely hold throughout the duration of query processing. As a result, traditional static query optimization and execution techniques are i ..."
Abstract
-
Cited by 411 (21 self)
- Add to MetaCart
In large federated and shared-nothing databases, resources can exhibit widely fluctuating characteristics. Assumptions made at the time a query is submitted will rarely hold throughout the duration of query processing. As a result, traditional static query optimization and execution techniques are ineffective in these environments. In this paper we introduce a query processing mechanism called an eddy, which continuously reorders operators in a query plan as it runs. We characterize the moments of symmetry during which pipelined joins can be easily reordered, and the synchronization barriers that require inputs from different sources to be coordinated. By combining eddies with appropriate join algorithms, we merge the optimization and execution phases of query processing, allowing each tuple to have a flexible ordering of the query operators. This flexibility is controlled by a combination of fluid dynamics and a simple learning algorithm. Our initial implementation demonstrates prom...
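As a rough illustration only (this is not the authors' implementation; the operators and the ticket-style weights are invented), an eddy can be pictured as a per-tuple router that learns which operator to try first:

import random

# Toy eddy: route each tuple through selection operators in a learned order.
operators = {
    "op_a": lambda t: t % 2 == 0,
    "op_b": lambda t: t < 50,
    "op_c": lambda t: t % 7 != 0,
}
tickets = {name: 1.0 for name in operators}  # learned routing weights

def route(tuple_value):
    remaining = set(operators)
    while remaining:
        # Pick the next operator with probability proportional to its tickets.
        names = list(remaining)
        weights = [tickets[n] for n in names]
        name = random.choices(names, weights)[0]
        remaining.discard(name)
        if operators[name](tuple_value):
            tickets[name] *= 0.99     # tuple passed: slightly lower priority
        else:
            tickets[name] *= 1.10     # tuple filtered: reward selectivity
            return False
    return True

survivors = [t for t in range(100) if route(t)]
print(len(survivors), "tuples passed all operators")

Operators that reject many tuples accumulate weight and tend to be visited earlier, so the effective operator ordering adapts as data characteristics change.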
Extending the Database Relational Model to Capture More Meaning
- ACM Transactions on Database Systems, 1979
"... During the last three or four years several investigators have been exploring “semantic models ” for formatted databases. The intent is to capture (in a more or less formal way) more of the meaning of the data so that database design can become more systematic and the database system itself can beha ..."
Abstract
-
Cited by 338 (2 self)
- Add to MetaCart
(Show Context)
During the last three or four years several investigators have been exploring “semantic models” for formatted databases. The intent is to capture (in a more or less formal way) more of the meaning of the data so that database design can become more systematic and the database system itself can behave more intelligently. Two major thrusts are clear: (1) the search for meaningful units that are as small as possible (atomic semantics); (2) the search for meaningful units that are larger than the usual n-ary relation (molecular semantics). In this paper we propose extensions to the relational model to support certain atomic and molecular semantics. These extensions represent a synthesis of many ideas from the published work in semantic modeling plus the introduction of new rules for insertion, update, and deletion, as well as new algebraic operators.
Fjording the Stream: An Architecture for Queries over Streaming Sensor Data
2002
"... If industry visionaries are correct, our lives will soon be full of sensors, connected together in loose conglomerations via wireless networks, each monitoring and collecting data about the environment at large. These sensors behave very differently from traditional database sources: they have inter ..."
Abstract
-
Cited by 281 (8 self)
- Add to MetaCart
(Show Context)
If industry visionaries are correct, our lives will soon be full of sensors, connected together in loose conglomerations via wireless networks, each monitoring and collecting data about the environment at large. These sensors behave very differently from traditional database sources: they have intermittent connectivity, are limited by severe power constraints, and typically sample periodically and push immediately, keeping no record of historical information. These limitations make traditional database systems inappropriate for queries over sensors. We present the Fjords architecture for managing multiple queries over many sensors, and show how it can be used to limit sensor resource demands while maintaining high query throughput. We evaluate our architecture using traces from a network of traffic sensors deployed on Interstate 80 near Berkeley and present performance results that show how query throughput, communication costs, and power consumption are necessarily coupled in sensor environments.
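A toy sketch of the push-based half of this design, under assumptions of my own (the SensorProxy and avg_speed_query names are invented): readings are pushed into a small bounded queue and the query polls it without blocking, so a silent or slow sensor cannot stall the plan:

from collections import deque

# A sensor proxy pushes readings; the query operator polls without blocking.
class SensorProxy:
    def __init__(self, maxlen=8):
        self.queue = deque(maxlen=maxlen)   # old readings are dropped, not stored
    def push(self, reading):
        self.queue.append(reading)
    def poll(self):
        return self.queue.popleft() if self.queue else None

def avg_speed_query(proxy, window=4):
    readings = []
    while (r := proxy.poll()) is not None:
        readings.append(r)
    recent = readings[-window:]
    return sum(recent) / len(recent) if recent else None

proxy = SensorProxy()
for speed in [61, 58, 64, 59, 70]:
    proxy.push(speed)
print(avg_speed_query(proxy))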
Multiple-Query Optimization
- ACM Transactions on Database Systems, 1988
"... Some recently proposed extensions to relational database systems, as well as to deductive database systems, require support for multiple-query processing. For example, in a database system enhanced with inference capabilities, a simple query involving a rule with multiple definitions may expand to m ..."
Abstract
-
Cited by 263 (4 self)
- Add to MetaCart
(Show Context)
Some recently proposed extensions to relational database systems, as well as to deductive database systems, require support for multiple-query processing. For example, in a database system enhanced with inference capabilities, a simple query involving a rule with multiple definitions may expand to more than one actual query that has to be run over the database. It is an interesting problem then to come up with algorithms that process these queries together instead of one query at a time. The main motivation for performing such an interquery optimization lies in the fact that queries may share common data. We examine the problem of multiple-query optimization in this paper. The first major contribution of the paper is a systematic look at the problem, along with the presentation and analysis of algorithms that can be used for multiple-query optimization. The second contribution lies in the presentation of experimental results. Our results show that using multiple-query processing algorithms may reduce execution cost considerably.
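A minimal sketch of the sharing idea (the step names and cache layout are invented, not the paper's algorithms): queries are pipelines of named steps, and any prefix already computed for one query is reused by the others:

# Toy multiple-query optimization: evaluate shared subexpressions once.
data = list(range(1000))

steps = {
    "scan": lambda rows: rows,
    "even": lambda rows: [r for r in rows if r % 2 == 0],
    "small": lambda rows: [r for r in rows if r < 100],
    "big": lambda rows: [r for r in rows if r >= 900],
}

queries = {
    "q1": ["scan", "even", "small"],
    "q2": ["scan", "even", "big"],   # shares the scan+even prefix with q1
}

cache = {}  # maps a step prefix to its already-computed result

def run(plan):
    rows = data
    for i in range(len(plan)):
        prefix = tuple(plan[:i + 1])
        if prefix not in cache:
            cache[prefix] = steps[plan[i]](rows)
        rows = cache[prefix]
    return rows

results = {name: run(plan) for name, plan in queries.items()}
print(len(results["q1"]), len(results["q2"]), "cached prefixes:", len(cache))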
Semantic database modeling: Survey, applications, and research issues
- ACM Computing Surveys, 1987
"... Most common database management systems represent information in a simple record-based format. Semantic modeling provides richer data structuring capabilities for database applications. In particular, research in this area has articulated a number of constructs that provide mechanisms for representi ..."
Abstract
-
Cited by 262 (3 self)
- Add to MetaCart
Most common database management systems represent information in a simple record-based format. Semantic modeling provides richer data structuring capabilities for database applications. In particular, research in this area has articulated a number of constructs that provide mechanisms for representing structurally complex interrelations among data typically arising in commercial applications. In general terms, semantic modeling complements work on knowledge representation (in artificial intelligence) and on the new generation of database models based on the object-oriented paradigm of programming languages. This paper presents an in-depth discussion of semantic data modeling. It reviews the philosophical motivations of semantic models, including the need for high-level modeling abstractions and the reduction of semantic overloading of data type constructors. It then provides a tutorial introduction to the primary components of semantic models, which are the explicit representation of objects, attributes of and relationships among objects, type constructors for building complex types, ISA relationships, and derived schema components. Next, a survey of the prominent semantic models in the literature is presented. Further, since a broad area of research has developed around semantic modeling, a number of related topics based on these models are discussed, including data languages, graphical interfaces, theoretical investigations, and physical implementation strategies.
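For orientation only, a few of these constructs (explicit objects with attributes, an ISA relationship, an interobject relationship, and a derived component) can be sketched in ordinary classes; the Person/Employee/Department example is invented:

from dataclasses import dataclass, field

@dataclass
class Person:
    name: str

@dataclass
class Department:
    name: str
    members: list = field(default_factory=list)
    @property
    def head_count(self):          # derived schema component
        return len(self.members)

@dataclass
class Employee(Person):            # ISA relationship via subtyping
    salary: float = 0.0
    works_in: Department = None    # relationship to another object

d = Department("Sensors")
e = Employee("Ada", salary=90_000, works_in=d)
d.members.append(e)
print(e.name, e.works_in.name, d.head_count)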
Update semantics of relational views
- ACM Transactions on Database Systems, 1981
"... A database view is a portion of the data structured in a way suitable to a specific application. Updates on views must be translated into updates on the underlying database. This paper studies the translation process in the relational model. The procedure is as follows: first, a “complete ” set of u ..."
Abstract
-
Cited by 247 (4 self)
- Add to MetaCart
(Show Context)
A database view is a portion of the data structured in a way suitable to a specific application. Updates on views must be translated into updates on the underlying database. This paper studies the translation process in the relational model. The procedure is as follows: first, a “complete” set of updates is defined such that (i) together with every update the set contains a “return” update, that is, one that brings the view back to the original state; (ii) given two updates in the set, their composition is also in the set. To translate a complete set, we define a mapping called a “translator,” that associates with each view update a unique database update called a “translation.” The constraint on a translation is to take the database to a state mapping onto the updated view. The constraint on the translator is to be a morphism. We propose a method for defining translators. Together with the user-defined view, we define a “complementary” view such that the database could be computed from the view and its complement. We show that a view can have many different complements and that the choice of a complement determines an update policy. Thus, we fix a view complement and we define the translation of a given view update in such a way that the complement remains invariant (“translation under constant complement”). The main result of the paper states that, given a complete set U of view updates, U has a translator if and only if U is translatable under constant complement.
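A toy illustration of translation under constant complement, with an invented schema: the base maps each employee to a department and a salary, the view exposes only the department, and the salary projection is chosen as the complement, so a view update is translated into the unique base update that leaves the complement unchanged:

# base table: emp -> (dept, salary); view: emp -> dept; complement: emp -> salary
base = {"ada": ("eng", 90), "bob": ("ops", 70)}

view = lambda db: {e: d for e, (d, _) in db.items()}
complement = lambda db: {e: s for e, (_, s) in db.items()}

def translate(db, emp, new_dept):
    # Keep the complement (salary) fixed; only the viewed attribute changes.
    dept, salary = db[emp]
    new_db = dict(db)
    new_db[emp] = (new_dept, salary)
    assert complement(new_db) == complement(db)   # complement stays constant
    return new_db

before = complement(base)
base = translate(base, "ada", "research")
print(view(base), complement(base) == before)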
Query optimization in database systems
- ACM Computing Surveys, 1984
"... Efficient methods of processing unanticipated queries are a crucial prerequisite for the success of generalized database management systems. A wide variety of approaches to improve the performance of query evaluation algorithms have been proposed: logic-based and semantic transformations, fast imple ..."
Abstract
-
Cited by 231 (0 self)
- Add to MetaCart
Efficient methods of processing unanticipated queries are a crucial prerequisite for the success of generalized database management systems. A wide variety of approaches to improve the performance of query evaluation algorithms have been proposed: logic-based and semantic transformations, fast implementations of basic operations, and combinatorial or heuristic algorithms for generating alternative access plans and choosing among them. These methods are presented in the framework of a general query evaluation procedure using the relational calculus representation of queries. In addition, nonstandard query optimization issues such as higher level query evaluation, query optimization in distributed databases, and use of database machines are addressed. The focus, however, is on query optimization in centralized database systems.
Complex queries in DHT-based peer-to-peer networks
- In Proceedings of IPTPS ’02, 2002
"... Recently a new generation of P2P systems, offering distributed hash table (DHT) functionality, have been proposed. These systems greatly improve the scalability and exact-match accuracy of P2P systems, but offer only the exact-match query facility. This paper outlines a research agenda for building ..."
Abstract
-
Cited by 198 (4 self)
- Add to MetaCart
(Show Context)
Recently a new generation of P2P systems, offering distributed hash table (DHT) functionality, has been proposed. These systems greatly improve the scalability and exact-match accuracy of P2P systems, but offer only the exact-match query facility. This paper outlines a research agenda for building complex query facilities on top of these DHT-based P2P systems. We describe the issues involved and outline our research plan and
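One way to picture the gap being addressed, as a sketch under my own assumptions (the put/get interface and key prefixes are invented): a keyword query can be layered over an exact-match DHT by also publishing each item under per-keyword index keys and intersecting the lookups:

# Stand-in for exact-match put/get over the network.
dht = {}

def put(key, value):
    dht.setdefault(key, set()).add(value)

def get(key):
    return dht.get(key, set())

def publish(doc_id, text):
    put("doc:" + doc_id, text)
    for word in set(text.lower().split()):
        put("kw:" + word, doc_id)          # inverted-index entry per keyword

def keyword_query(words):
    results = [get("kw:" + w.lower()) for w in words]
    return set.intersection(*results) if results else set()

publish("p1", "adaptive query processing")
publish("p2", "query processing over sensor streams")
print(keyword_query(["query", "processing"]))   # {'p1', 'p2'}
print(keyword_query(["sensor"]))                # {'p2'}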
The EXODUS optimizer generator
- In Proceedings of the 1987 ACM International Conference on Management of Data (SIGMOD ’87), 1987
"... This paper presents the design and an mmal perfor-mance evaluation of the query ophrmzer generator designed for the EXODUS extensible database system. Algetic transformaaon rules are translated mto an executable query optmuzer. which transforms query trees and selects methods for executmg operattons ..."
Abstract
-
Cited by 177 (7 self)
- Add to MetaCart
This paper presents the design and an initial performance evaluation of the query optimizer generator designed for the EXODUS extensible database system. Algebraic transformation rules are translated into an executable query optimizer, which transforms query trees and selects methods for executing operations according to cost functions associated with the methods. The search strategy avoids exhaustive search and modifies itself to take advantage of past experience. Computational results show that an optimizer generated for a relational system produces access plans almost as good as those produced by exhaustive search, with the search time cut to a small fraction.
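A toy sketch of the rule-driven idea (the cost model, the 10% selectivity assumption, and the single swap rule are invented for illustration): rewrite rules are applied to a join order and the cheapest plan reached is kept:

# Cost of a linear join order: running intermediate size with 10% selectivity.
def cost(order, cards):
    total, acc = 0, None
    for t in order:
        acc = cards[t] if acc is None else acc * 0.1 * cards[t]
        total += acc
    return total

def swap_adjacent(order):
    # One algebraic transformation rule: exchange two adjacent join inputs.
    for i in range(len(order) - 1):
        new = list(order)
        new[i], new[i + 1] = new[i + 1], new[i]
        yield tuple(new)

def optimize(initial, cards):
    seen, frontier, best = {initial}, [initial], initial
    while frontier:
        plan = frontier.pop()
        if cost(plan, cards) < cost(best, cards):
            best = plan
        for new in swap_adjacent(plan):
            if new not in seen:
                seen.add(new)
                frontier.append(new)
    return best

cards = {"orders": 10_000, "customers": 1_000, "regions": 10}
best = optimize(("orders", "customers", "regions"), cards)
print("chosen join order:", best, "cost:", round(cost(best, cards)))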