Results 1–10 of 109
Algorithms for the Satisfiability (SAT) Problem: A Survey
 DIMACS Series in Discrete Mathematics and Theoretical Computer Science
, 1996
Abstract

Cited by 125 (3 self)
The satisfiability (SAT) problem is a core problem in mathematical logic and computing theory. In practice, SAT is fundamental in solving many problems in automated reasoning, computer-aided design, computer-aided manufacturing, machine vision, databases, robotics, integrated circuit design, computer architecture design, and computer network design. Traditional methods treat SAT as a discrete, constrained decision problem. In recent years, many optimization methods, parallel algorithms, and practical techniques have been developed for solving SAT. In this survey, we present a general framework (an algorithm space) that integrates existing SAT algorithms into a unified perspective. We describe sequential and parallel SAT algorithms, including variable splitting, resolution, local search, global optimization, mathematical programming, and practical SAT algorithms. We give a performance evaluation of some existing SAT algorithms. Finally, we provide a set of practical applications of the SAT...
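Of the algorithm families this survey lists, local search is the easiest to illustrate compactly. Below is a minimal GSAT-style greedy local-search sketch, not the survey's own code; the clause encoding (signed integers for literals), function name, and parameters are assumptions for illustration:

```python
import random

def gsat(clauses, n_vars, max_flips=1000, seed=0):
    """Minimal GSAT-style local search for SAT (illustrative sketch).

    clauses: list of clauses; each clause is a list of non-zero ints,
    where literal v means variable |v| is true and -v means it is false.
    Returns a satisfying assignment (dict: var -> bool) or None.
    """
    rng = random.Random(seed)
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}

    def satisfied(clause):
        return any((lit > 0) == assign[abs(lit)] for lit in clause)

    def unsat_count():
        return sum(1 for c in clauses if not satisfied(c))

    for _ in range(max_flips):
        if unsat_count() == 0:
            return assign
        # Greedily flip the variable that minimizes unsatisfied clauses.
        best_var, best_score = None, None
        for v in range(1, n_vars + 1):
            assign[v] = not assign[v]
            score = unsat_count()
            assign[v] = not assign[v]
            if best_score is None or score < best_score:
                best_var, best_score = v, score
        assign[best_var] = not assign[best_var]
    return None

# (x1 or x2) and (not x1 or x2) and (x1 or not x2): satisfied by x1 = x2 = True
model = gsat([[1, 2], [-1, 2], [1, -2]], n_vars=2)
```

Real local-search solvers add random restarts and noise ("random walk" moves) to escape local minima, which this sketch omits.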
Parallel simulation today
 Annals of Operations Research
, 1994
Cited by 78 (16 self)
Distributed hierarchical control for parallel processing
 Computer
, 1990
Abstract

Cited by 73 (12 self)
The development of operating systems for parallel computers has closely followed that for serial computers. At first, the most advanced parallel computers ran in batch mode or single-user mode. At best, they allowed a static partitioning among a number of users. They were typically designed with a specific computational task in mind or for a certain class of computations. Like serial computers, they are currently evolving toward general-purpose, interactive, multi-user parallel systems. To explain the underlying motivation for our work, we note that a general-purpose, interactive, multi-user, multiprogramming parallel environment has the following advantages (in addition to the traditional advantages in uniprocessor environments, such as cost effectiveness):
- This environment provides users with a spectrum of computational powers, covering the range from personal computers to supercomputers. A user requiring more computational power can simply use more processors. Thus, a short response time for both simple and computationally intensive tasks is possible.
- The spectrum of powers also aids program development and evaluation. Initially, only one processor is needed. Additional processors can be added later with
Customized Dynamic Load Balancing for a Network of Workstations
, 1997
Abstract

Cited by 70 (0 self)
In this paper we show that different load balancing schemes are best for different applications under varying program and system parameters. Therefore, application-driven customized dynamic load balancing becomes essential for good performance. We present a hybrid compile-time and runtime modeling and decision process which selects (customizes) the best scheme, along with automatic generation of parallel code with calls to a runtime library for load balancing.
A Practical Approach to Dynamic Load Balancing
, 1995
Abstract

Cited by 69 (7 self)
algorithm for load balancing. The following sections elaborate on each step in the above algorithm, presenting various design decisions that one encounters.

2.1 Load Evaluation

The efficacy of any load balancing scheme is directly dependent on the quality of load evaluation. Good load measurement is necessary both to determine that a load imbalance exists and to calculate how much work should be transferred to alleviate that imbalance. One can determine the load associated with a given task analytically, empirically, or by a combination of those two methods.

2.1.1 Analytic Load Evaluation

The load for a task is estimated based on knowledge of the time complexity of the algorithm(s) that task is executing, along with the data structures on which it is operating. For example, if one knew that a task involved merge sorting a list of 64 elements, one might estimate the load to be 384, since merge sort is an O(N log₂ N) sorting algorithm and 64 log₂(64) = 384.
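The analytic load evaluation described above amounts to plugging the input size into a cost model derived from the algorithm's complexity. A minimal sketch of the excerpt's merge sort example (the function name and the exact cost model load(n) = n·log₂(n) are assumptions for illustration):

```python
import math

def merge_sort_load(n):
    """Analytic load estimate for merge-sorting an n-element list,
    using the cost model load(n) = n * log2(n) implied by the excerpt.
    (Illustrative; real estimates would calibrate the constant factor.)"""
    return n * math.log2(n)

print(merge_sort_load(64))  # the excerpt's figure: 64 * log2(64) = 384
```

In practice such analytic estimates are often combined with empirical timing measurements, as the excerpt notes, since constant factors and cache effects are hard to model analytically.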
ACDS: Adapting Computational Data Streams for High Performance
 IN PROCEEDINGS OF INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM (IPDPS
, 2000
Abstract

Cited by 55 (27 self)
Data-intensive, interactive applications are an important class of metacomputing (Grid) applications. They are characterized by large dataflows between data providers and consumers, such as scientific simulations and remote visualization clients of simulation output. Such dataflows vary at runtime, due to changes in consumers' data needs, changes in the nature of the data being transmitted, or changes in the availability of the computing resources used by flows. The topic
TIGHT ANALYSES OF TWO LOCAL LOAD BALANCING ALGORITHMS
 SIAM J. COMPUT.
, 1999
Abstract

Cited by 51 (5 self)
This paper presents an analysis of the following load balancing algorithm. At each step, each node in a network examines the number of tokens at each of its neighbors and sends a token to each neighbor with at least 2d + 1 fewer tokens, where d is the maximum degree of any node in the network. We show that within O(∆/α) steps, the algorithm reduces the maximum difference in tokens between any two nodes to at most O((d² log n)/α), where ∆ is the global imbalance in tokens (i.e., the maximum difference between the number of tokens at any node initially and the average number of tokens), n is the number of nodes in the network, and α is the edge expansion of the network. The time bound is tight in the sense that for any graph with edge expansion α, and for any value ∆, there exists an initial distribution of tokens with imbalance ∆ for which the time to reduce the imbalance to even ∆/2 is at least Ω(∆/α). The bound on the final imbalance is tight in the sense that there exists a class of networks that can be locally balanced everywhere (i.e., the maximum difference in tokens between any two neighbors is at most 2d), while the global imbalance remains Ω((d² log n)/α). Furthermore, we show that upon reaching a state with a global imbalance of O((d² log n)/α), the time for this algorithm to locally balance the network can be as large as Ω(√n). We extend our analysis to a variant of this algorithm for dynamic and asynchronous
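The balancing rule analyzed in this abstract can be sketched as a synchronous simulation. This is an illustrative toy, not the paper's full model: the graph, token counts, and step count are made up, and ordering details are simplified:

```python
def balance_step(tokens, adj, d):
    """One synchronous step of the local balancing rule: each node sends
    one token to every neighbor holding at least 2d + 1 fewer tokens
    than itself, where d is the maximum degree in the network.

    tokens: dict node -> token count; adj: dict node -> neighbor list.
    """
    delta = {u: 0 for u in tokens}
    for u in tokens:
        for v in adj[u]:
            if tokens[u] - tokens[v] >= 2 * d + 1:
                delta[u] -= 1   # u sends one token to v
                delta[v] += 1
    return {u: tokens[u] + delta[u] for u in tokens}

# Path graph 0 - 1 - 2, so d = 2; node 0 starts with all 30 tokens.
adj = {0: [1], 1: [0, 2], 2: [1]}
tokens = {0: 30, 1: 0, 2: 0}
for _ in range(40):
    tokens = balance_step(tokens, adj, 2)
# Tokens are conserved, and the run converges to a locally balanced
# state: every neighboring pair differs by at most 2d = 4 tokens.
```

Note the 2d + 1 threshold prevents overshoot: a node sends at most d tokens per step (one per neighbor), so a sender never drops below a receiver, which matches the final-imbalance guarantee of "at most 2d between neighbors" in the abstract.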
A Dynamic Distributed Load Balancing Algorithm with Provable Good Performance
 In Proceedings of the 5th Annual ACM Symposium on Parallel Algorithms and Architectures
, 1993
Abstract

Cited by 44 (5 self)
The overall efficiency of parallel algorithms is most decisively affected by the strategy applied for the mapping of workload. Strategies for balancing dynamically generated workload on a processor network that are also useful for practical applications have been intensively investigated, both by simulation and by direct application. This paper presents the complete theoretical analysis of a dynamic distributed load balancing strategy. The algorithm is adaptive by nature and is therefore useful for a broad range of applications. A similar algorithmic principle has already been implemented for a number of applications in the areas of combinatorial optimization, parallel programming languages, and graphical animation. The algorithm performed convincingly for all these applications. In our analysis we prove that the expected number of packets on each processor varies only by a constant factor compared with that on any other processor, independent of the generation and consumption of ...
Analysis of The Generalized Dimension Exchange Method for Dynamic Load Balancing
 Journal of Parallel and Distributed Computing
, 1992
Abstract

Cited by 42 (7 self)
The dimension exchange method is a distributed load balancing method for point-to-point networks. We add a parameter, called the exchange parameter, to the method to control the splitting of load between a pair of directly connected processors, and call this parameterized version the generalized dimension exchange (GDE) method. The rationale for introducing this parameter is that splitting the workload into equal halves does not necessarily lead to an optimal result (in terms of the convergence rate) for certain structures. We carry out an analysis of this new method, emphasizing its termination aspects and potential efficiency. Given a specific structure, one needs to determine a value of the exchange parameter that leads to an optimal result. To this end, we first derive a necessary and sufficient condition for the termination of the method. We then show that equal splitting, proposed originally by others as a heuristic strategy, indeed yields optimal efficie...
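A minimal sketch of one GDE sweep on a hypercube may clarify the role of the exchange parameter. This is illustrative code, not the paper's own; lam = 0.5 recovers the original equal-splitting dimension exchange, and the sweep order over dimensions is an assumption:

```python
def gde_sweep(load, lam):
    """One sweep of the generalized dimension exchange (GDE) method on a
    hypercube of 2^k processors (illustrative sketch).

    load: list of length 2^k, indexed by processor id.
    lam:  the exchange parameter; each processor keeps (1 - lam) of its
          own load and takes lam of its partner's load.
    """
    n = len(load)
    k = n.bit_length() - 1
    load = list(load)
    for dim in range(k):           # one pairwise exchange per dimension
        for u in range(n):
            v = u ^ (1 << dim)     # neighbor across this dimension
            if u < v:              # handle each pair once
                wu, wv = load[u], load[v]
                load[u] = (1 - lam) * wu + lam * wv
                load[v] = lam * wu + (1 - lam) * wv
    return load

# 2-cube (4 processors); with lam = 0.5 (equal splitting), one full
# sweep balances the load exactly: [8, 0, 4, 4] -> [4, 4, 4, 4].
balanced = gde_sweep([8.0, 0.0, 4.0, 4.0], 0.5)
```

The exchange is load-conserving for any lam; the paper's contribution is characterizing, per network structure, which lam gives the fastest convergence.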
Load Balancing in Large Networks: A Comparative Study (Extended Abstract)
 In Proceedings of the 3rd IEEE Symposium on Parallel and Distributed Processing
, 1991
Abstract

Cited by 40 (7 self)
R. Lüling, B. Monien, F. Ramme, Department of Mathematics and Computer Science, University of Paderborn, Germany. Email: rl@uni-paderborn.de, bm@uni-paderborn.de, ram@uni-paderborn.de

Abstract: In this paper we compare six well-known and two new load balancing strategies on torus and ring topologies of different sizes and workload characteristics. Through simulations on a large transputer network, we show that all strategies behave differently under the workload of process and data migration. The two new algorithms, based on the gradient model method, are shown to be robust to both kinds of workload. Thus, these new algorithms are good candidates for distributed operating systems running on large networks, where the workload characteristics cannot be determined in advance.

1 Introduction

We study load balancing algorithms on large MIMD multiprocessor systems. The systems we consider are homogeneous and consist of autonomous processing elements (324 transputers in our case), which...