Results 1–10 of 21
A Load Balancing Strategy For Prioritized Execution of Tasks
, 1993
Abstract

Cited by 37 (7 self)
Load balancing is a critical factor in achieving optimal performance in parallel applications where tasks are created in a dynamic fashion. In many computations, such as state space search problems, tasks have priorities, and solutions to the computation may be achieved more efficiently if these priorities are adhered to in the parallel execution of the tasks. For such tasks, a load balancing scheme that only seeks to balance load, without spreading high-priority tasks over the entire system, may concentrate high-priority tasks on a few processors (even in a balanced-load environment), leaving other processors to perform only low-priority work. In such situations a load balancing scheme is desired which balances both load and high-priority tasks over the system. In this paper, we describe the development of a more efficient prioritized load balancing strategy.
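A minimal sketch of the idea (our illustration, not the paper's strategy): dealing tasks round-robin in priority order gives every processor a similar mix of priorities, so no processor is left holding only low-priority work.

```python
import heapq
from collections import defaultdict

def prioritized_balance(tasks, num_procs):
    """Deal (priority, task) pairs round-robin in priority order so each
    processor receives a similar share of the high-priority work; each
    local queue is a heap, so a processor pops its best task first.
    (Illustrative sketch only, not the paper's algorithm.)"""
    queues = defaultdict(list)
    for i, entry in enumerate(sorted(tasks)):  # lower number = higher priority
        heapq.heappush(queues[i % num_procs], entry)
    return queues

qs = prioritized_balance(
    [(0, "expand"), (0, "probe"), (1, "search"), (1, "scan"), (2, "gc"), (2, "log")],
    2,
)
# Both processors end up holding one task of each priority level.
```

A plain load balancer would accept any 3/3 split here, even one giving both priority-0 tasks to the same processor; the priority-aware deal avoids that concentration.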
Physical Database Design Decision Algorithms and Concurrent Reorganization for Parallel Database Systems
, 1997
Abstract

Cited by 10 (1 self)
Stringent performance requirements in DB applications have led to the use of parallelism for database processing. To allow the database system to take advantage of the performance of parallel shared-nothing systems, the physical DB design must be appropriate for the DB structure and the workload. We develop decision algorithms that select a good physical DB design both when the DB is first loaded into the system (static decision) and while the DB is being used by the workload (dynamic decision). Our decision algorithms take the database structure, workload, and system characteristics as inputs. The static (or initial) physical DB design decision algorithm involves:
• selecting a partitioning attribute for each relation that determines how the relation is fragmented across the nodes (allowing for high I/O bandwidth);
• selecting indexes on the relation attributes to allow faster accesses compared to sequential file scans;
• selecting the attributes by which to cluster a relation in order to take advantage of the prefetching and caching involved in I/O access;
• grouping relations to allow DB operations (joins) on relation pairs to be executed locally.
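The first step above can be caricatured in a few lines (the frequency heuristic is ours, not the paper's decision algorithm): score each attribute by how often the workload filters or joins on it, and partition on the top scorer so the most frequent operations can run on a single node or locally.

```python
from collections import Counter

def pick_partitioning_attribute(relation_attrs, workload):
    """Toy stand-in for the static partitioning decision (hypothetical
    heuristic, not the paper's): count workload references per attribute
    and return the most referenced one."""
    counts = Counter(a for predicates in workload for a in predicates
                     if a in relation_attrs)
    return counts.most_common(1)[0][0]

# Hypothetical workload: each query listed by the attributes it filters/joins on.
attr = pick_partitioning_attribute(
    {"cust_id", "region", "order_date"},
    [["cust_id"], ["cust_id", "order_date"], ["region"], ["cust_id"]],
)
# attr == "cust_id"
```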
Polynomial Solvability of Cost-Based Abduction
 Artificial Intelligence
, 1996
Abstract

Cited by 8 (0 self)
Recent empirical studies [9, 2] have shown that many interesting cost-based abduction problems can be solved efficiently by considering the linear program relaxation of their integer program formulation. We tie this to the concept of total unimodularity from network flow analysis, a fundamental result in polynomial solvability. From this, we can determine the polynomial solvability of abduction problems and, in addition, present a new heuristic for branch and bound in the non-polynomial cases.
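The LP-relaxation idea can be seen on a toy instance (the covering program below is ours, not the paper's construction): when the constraint matrix is totally unimodular, the relaxed linear program already attains its optimum at an integral vertex, so the 0/1 problem is solved in polynomial time.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 0/1 covering program: minimize hypothesis costs c.x subject to A x >= 1.
# A has the consecutive-ones property, hence is totally unimodular, so the
# LP relaxation (0 <= x <= 1) has an integral optimal vertex.
c = [2.0, 1.0, 3.0]                 # costs of hypotheses h1, h2, h3
A = np.array([[1, 1, 0],            # each row: an obligation that must be
              [0, 1, 1]])           # covered by at least one hypothesis
res = linprog(c, A_ub=-A, b_ub=[-1.0, -1.0], bounds=[(0, 1)] * 3)
# The relaxation picks the single hypothesis h2: x = [0, 1, 0], cost 1.
```

For matrices that are not totally unimodular, the relaxation can return fractional values, which is where the paper's branch-and-bound heuristic enters.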
A Linear Constraint Satisfaction Approach to Cyclicity
 Artificial Intelligence
, 1992
Abstract

Cited by 8 (1 self)
Abductive reasoning (or explanation) is basically a backward-chaining process on a collection of causal rules. When given an observed event, we attempt to determine the set of causes which brought about this event. Cost-based abduction is a model for abductive reasoning which provides a concrete formulation of the explanation process. However, it restricts itself to acyclic causal knowledge bases. The existence of cyclicity results in anomalous behavior by the model. For example, assume our knowledge base states that A can cause B and B can cause A. When faced with having to explain the occurrence of A, we could postulate B. Now, since A already exists, we can use it to explain B. Our backward-chaining process to find an explanation can certainly fall into this trap. In this paper, we present a new model called generalized cost-based abduction for general causal knowledge bases. Furthermore, we provide an approach for solving this model by using linear constraint satisfaction.
Information Sharing Mechanisms in Parallel Programs
 Proceedings of the 8th International Parallel Processing Symposium
, 1994
Abstract

Cited by 8 (3 self)
Most parallel programming models provide a single generic mode in which processes can exchange information with each other. However, empirical observation of parallel programs suggests that processes share data in a few distinct and specific modes. We argue that such modes should be identified and explicitly supported in parallel languages and their associated models. The paper describes a set of information sharing abstractions that have been identified and implemented in the parallel programming language Charm. Using these abstractions leads to improved clarity and expressiveness of user programs. In addition, the specificity provided by these abstractions can be exploited at compile-time and at run-time to provide the user with highly refined performance feedback and intelligent debugging tools.
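One such specific sharing mode can be sketched as follows (class and method names are ours, not Charm's API): an accumulator that processes may only contribute to, whose combined value becomes readable once contributions are finished. Because the abstraction is narrower than generic shared memory, a compiler or runtime can verify and optimize it.

```python
import threading

class Accumulator:
    """Sketch of a restricted sharing mode (hypothetical names, not Charm's
    API): contributors may only add commutative-associative updates; the
    combined value is read once contributions are sealed."""
    def __init__(self, initial=0):
        self._value = initial
        self._lock = threading.Lock()
        self._sealed = False

    def add(self, delta):
        with self._lock:
            assert not self._sealed, "no contributions after sealing"
            self._value += delta

    def collect(self):
        with self._lock:
            self._sealed = True        # further add() calls are rejected
            return self._value

acc = Accumulator()
for d in (1, 2, 3):
    acc.add(d)
total = acc.collect()                  # total == 6
```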
A Hamilton Path Heuristic with Applications to the Middle Two Levels Problem
 In: Proceedings of the Thirtieth Southeastern International Conference on Combinatorics, Graph Theory, and Computing (Boca
, 1999
Abstract

Cited by 8 (2 self)
The notorious middle two levels problem is to find a Hamilton cycle in the middle two levels, M2k+1, of the Hasse diagram of B2k+1 (the partially ordered set of subsets of a (2k+1)-element set ordered by inclusion). Previously, the best known result, due to Moews and Reid [11] in 1990, was that M2k+1 is Hamiltonian for all positive k through k = 11. We show that if a Hamilton path between two distinguished vertices exists in a reduced graph, then a Hamilton cycle can be constructed in the middle two levels. We describe a heuristic for finding Hamilton paths and apply it to the reduced graph to extend the previous best known results. This also improves the best lower bound on the length of a longest cycle in M2k+1 for any k.
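The graph in question is easy to construct for small k (a sketch, not the paper's reduced graph): vertices are the k- and (k+1)-subsets of a (2k+1)-set, with edges given by inclusion. For k = 1 the result is the 6-cycle, which is trivially Hamiltonian.

```python
from itertools import combinations

def middle_two_levels(k):
    """Build M2k+1: vertices are the k- and (k+1)-subsets of a (2k+1)-set,
    with an edge whenever the smaller set is contained in the larger."""
    n = 2 * k + 1
    lower = [frozenset(c) for c in combinations(range(n), k)]
    upper = [frozenset(c) for c in combinations(range(n), k + 1)]
    edges = [(a, b) for a in lower for b in upper if a < b]  # proper subset
    return lower + upper, edges

verts, edges = middle_two_levels(1)
# k = 1: 3 singletons + 3 doubletons, 6 edges, every vertex of degree 2,
# so the graph itself is a Hamilton cycle.
```

The problem is hard precisely because the number of vertices grows as 2·C(2k+1, k), so explicit search is hopeless for large k; hence the reduction and heuristic in the paper.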
The Crossover Landscape for the Onemax Problem
, 1996
Abstract

Cited by 7 (0 self)
In seeking to understand how and why genetic algorithms (GAs) work, attention has been focussed on the landscapes on which they search. While it is relatively simple to analyse the landscapes induced by traditional neighbourhood search operators, the position is considerably complicated for the operators normally used by a GA. Problems which have a single global optimum on a standard 'Hamming' landscape, such as the familiar Onemax function, actually possess an exponentially increasing number of local optima on the landscape associated with crossover. Nevertheless, GAs can solve such problems: the obvious question is how they succeed. In this paper, an attempt is made to answer this question for the case of the well-studied Onemax problem.
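For concreteness, a minimal GA on Onemax can be sketched as follows (parameter values and operator choices are ours, illustrative only): despite the crossover landscape's many local optima, the combination of selection, crossover, and mutation reliably climbs toward the all-ones string.

```python
import random

def onemax(bits):
    """Onemax fitness: the number of 1-bits; the all-ones string is the
    unique global optimum on the Hamming landscape."""
    return sum(bits)

def ga_onemax(n=20, pop_size=30, generations=80, seed=1):
    """Tiny generational GA (illustrative sketch, not the paper's setup):
    truncation selection, one-point crossover, single bit-flip mutation.
    Returns the best Onemax value seen."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = max(map(onemax, pop))
    for _ in range(generations):
        parents = sorted(pop, key=onemax, reverse=True)[: pop_size // 2]
        pop = []
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)        # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1     # bit-flip mutation
            pop.append(child)
        best = max(best, max(map(onemax, pop)))
        if best == n:
            break
    return best
```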
A Spectral-Factorization Combinatorial-Search Algorithm Unifying the Systematized Collection of Daubechies Wavelets
 In Proceedings of the IAAMSAD International Conference on Systems, Signals, Control, and Computers
, 1998
Abstract

Cited by 6 (6 self)
A spectral-factorization combinatorial-search algorithm has been developed for unifying a systematized collection of Daubechies minimum-length maximum-flatness wavelet filters optimized for a diverse assortment of criteria. This systematized collection comprises real and complex orthogonal and biorthogonal wavelets in families within which the filters are indexed by the number of roots at z = -1. The main algorithm incorporates spectral factorization of the Daubechies polynomial with a combinatorial search of spectral factor root sets indexed by binary codes. The selected spectral factors are found by optimizing the desired criterion characterizing either the filter roots or coefficients. Daubechies wavelet filter families have been systematized to include those optimized for time-domain regularity, frequency-domain selectivity, time-frequency uncertainty, and phase nonlinearity. The latter criterion permits construction of the orthogonal least and most asymmetric real and least and ...
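The root-selection step can be illustrated in isolation (the polynomial below is a toy with reciprocal root pairs, not an actual Daubechies product filter): spectral factorization pairs each root with its reciprocal, and a binary code over the pairs picks one root from each; the extremal-phase choice keeps the member of each pair inside the unit circle.

```python
import numpy as np

def select_min_phase(roots, tol=1e-9):
    """Keep one root from each reciprocal pair: the one inside the unit
    circle (the minimum-phase choice; other binary codes over the pairs
    yield other members of the filter family)."""
    return sorted((r for r in roots if abs(r) < 1 - tol), key=abs)

# Toy polynomial with reciprocal root pairs {4, 1/4} and {2, 1/2}:
poly = np.poly([4.0, 0.25, 2.0, 0.5])
inside = select_min_phase(np.roots(poly))
# inside is approximately [0.25, 0.5]
```

The paper's optimized families then replace this fixed "inside the circle" rule with a criterion-driven search over all admissible codes.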
The Systematized Collection of Daubechies Wavelets
 Tech. Rep. CT199806, Computational Toolsmiths
, 1998
Abstract

Cited by 6 (4 self)
A single unifying algorithm has been developed to systematize the collection of compact Daubechies wavelets. This collection comprises all classes of real and complex orthogonal and biorthogonal wavelets with the maximal number K of vanishing moments for their finite length. Named and indexed families of wavelet filters were generated by spectral factorization of a product filter, in which the optimal subset of roots was selected by a defining criterion within a combinatorial search of subsets meeting required constraints. Several new families have been defined, some of which were demonstrated to be equivalent to families with roots selected solely by geometric criteria that do not require an optimizing search. Extensive experimental results are tabulated for 1 ≤ K ≤ 24 for each of the families and most of the filter characteristics defined in both time and frequency domains. For those families requiring optimization, a conjecture for K > 24 is provided for a search pattern t...
A Generalized Approach for Image Indexing and Retrieval Based on 2D Strings
 In Shi-Kuo Chang, Erland Jungert, and Genoveffa Tortora, editors, Visual Reasoning. Plenum Publishing Co
, 1993
Abstract

Cited by 5 (4 self)
2D strings is one of the few representation structures originally designed for use in an IDB environment. In this chapter, a generalized approach for 2D string based indexing, which avoids the exhaustive search through the entire database required by previous 2D string based techniques, is proposed. The classical framework of representation of 2D strings is also specialized to the cases of scaled and unscaled images. Index structures for supporting retrieval by content, utilizing the 2D string representation framework, are also discussed. The performance of the proposed method is evaluated using a database of simulated images and compared with the performance of existing techniques of 2D string indexing and retrieval. The results demonstrate a very significant improvement in retrieval performance.
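The underlying encoding is simple enough to sketch (a simplified version of Chang's 2D string, using only '<' for ordered coordinates and '=' for ties): project the symbols of a picture onto the x- and y-axes and record the resulting orderings as two strings, which can then serve as index keys.

```python
def two_d_string(objects):
    """Build a simplified (u, v) 2D string for a symbolic picture.
    `objects` is a list of (name, x, y); u orders symbols left-to-right,
    v bottom-to-top, with '<' separating distinct coordinates and '='
    joining symbols that share one. (Sketch of the classical encoding,
    omitting the full operator set.)"""
    def project(axis):
        coords = sorted({o[axis] for o in objects})
        groups = []
        for c in coords:
            names = sorted(name for name, x, y in objects if (x, y)[axis - 1] == c)
            groups.append("=".join(names))
        return "<".join(groups)
    return project(1), project(2)

# A picture with 'a' left of 'b', and 'c' above and aligned with 'b':
u, v = two_d_string([("a", 0, 0), ("b", 1, 0), ("c", 1, 1)])
# u == "a<b=c" (x-order), v == "a=b<c" (y-order)
```

Matching on such strings reduces image retrieval to (sub)string matching, and the index structures discussed in the chapter organize these strings so that candidate images are found without scanning the whole database.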