Results 1–10 of 220
Nearest Neighbor Queries
, 1995
Abstract

Cited by 520 (1 self)
A frequently encountered type of query in Geographic Information Systems is to find the k nearest neighbor objects to a given point in space. Processing such queries requires substantially different search algorithms than those for location or range queries. In this paper we present an efficient branch-and-bound R-tree traversal algorithm to find the nearest neighbor object to a point, and then generalize it to finding the k nearest neighbors. We also discuss metrics for an optimistic and a pessimistic search ordering strategy as well as for pruning. Finally, we present the results of several experiments obtained using the implementation of our algorithm and examine the behavior of the metrics and the scalability of the algorithm.
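The paper's algorithm is a depth-first branch-and-bound R-tree traversal with MINDIST and MINMAXDIST metrics. As a hedged illustration of the same pruning idea, here is a minimal best-first k-NN sketch over a toy R-tree-like structure; the dict-based node layout and the `mindist` and `k_nearest` names are invented for this sketch, not the paper's API.

```python
import heapq
import itertools
import math

def mindist(q, bbox):
    # Lower bound on the distance from query point q to any point
    # inside the axis-aligned box bbox = (x1, y1, x2, y2).
    dx = max(bbox[0] - q[0], 0.0, q[0] - bbox[2])
    dy = max(bbox[1] - q[1], 0.0, q[1] - bbox[3])
    return math.hypot(dx, dy)

def k_nearest(root, q, k):
    # Best-first traversal: always expand the heap entry with the
    # smallest lower bound, so the first k points popped are the
    # k nearest. Subtrees whose boxes are too far are never expanded.
    tie = itertools.count()  # tie-breaker so heapq never compares dicts
    heap = [(0.0, next(tie), "node", root)]
    out = []
    while heap and len(out) < k:
        d, _, kind, item = heapq.heappop(heap)
        if kind == "point":
            out.append((d, item))
            continue
        for child in item.get("children", []):
            heapq.heappush(heap, (mindist(q, child["bbox"]), next(tie), "node", child))
        for p in item.get("points", []):
            heapq.heappush(heap, (math.dist(q, p), next(tie), "point", p))
    return out

# Toy two-leaf tree: the far leaf is never expanded for this query.
leaf1 = {"bbox": (0, 0, 1, 1), "points": [(0.5, 0.5), (0.9, 0.1)]}
leaf2 = {"bbox": (5, 5, 6, 6), "points": [(5.5, 5.5)]}
root = {"bbox": (0, 0, 6, 6), "children": [leaf1, leaf2]}
nearest = k_nearest(root, (0, 0), 2)
```

A real R-tree would bound node fan-out and store the boxes on disk pages; the priority-queue ordering by lower bound is what makes the pruning work.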
Algorithms for the Satisfiability (SAT) Problem: A Survey
 DIMACS Series in Discrete Mathematics and Theoretical Computer Science
, 1996
Abstract

Cited by 131 (3 self)
The satisfiability (SAT) problem is a core problem in mathematical logic and computing theory. In practice, SAT is fundamental in solving many problems in automated reasoning, computer-aided design, computer-aided manufacturing, machine vision, database, robotics, integrated circuit design, computer architecture design, and computer network design. Traditional methods treat SAT as a discrete, constrained decision problem. In recent years, many optimization methods, parallel algorithms, and practical techniques have been developed for solving SAT. In this survey, we present a general framework (an algorithm space) that integrates existing SAT algorithms into a unified perspective. We describe sequential and parallel SAT algorithms including variable splitting, resolution, local search, global optimization, mathematical programming, and practical SAT algorithms. We give performance evaluation of some existing SAT algorithms. Finally, we provide a set of practical applications of the sat...
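Local search is one of the algorithm families the survey covers. As a hedged sketch of that family (not the survey's own code), here is a WalkSAT-style incomplete solver: starting from a random assignment, it repeatedly repairs a random unsatisfied clause, mixing random flips with greedy flips.

```python
import random

def walksat(clauses, n_vars, max_flips=10000, p=0.5, seed=0):
    # WalkSAT-style local search. Clauses are lists of nonzero ints in
    # DIMACS style: 3 means x3, -3 means NOT x3. Returns a satisfying
    # assignment (dict var -> bool) or None if none was found; failure
    # does NOT prove unsatisfiability, since the method is incomplete.
    rng = random.Random(seed)
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign
        clause = rng.choice(unsat)
        if rng.random() < p:
            var = abs(rng.choice(clause))      # random-walk step
        else:                                  # greedy step: flip the variable
            def cost(v):                       # leaving the fewest unsat clauses
                assign[v] = not assign[v]
                bad = sum(not any(sat(l) for l in c) for c in clauses)
                assign[v] = not assign[v]
                return bad
            var = min((abs(l) for l in clause), key=cost)
        assign[var] = not assign[var]
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3) and (x1 or x3)
clauses = [[1, 2], [-1, 3], [-2, -3], [1, 3]]
model = walksat(clauses, 3)
```

The random-walk probability `p` trades greediness against the ability to escape local minima; complete methods such as resolution or variable splitting are needed to certify unsatisfiability.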
Scalable Load Balancing Techniques for Parallel Computers
, 1994
Abstract

Cited by 106 (16 self)
In this paper we analyze the scalability of a number of load balancing algorithms which can be applied to problems that have the following characteristics: the work done by a processor can be partitioned into independent work pieces; the work pieces are of highly variable sizes; and it is not possible (or very difficult) to estimate the size of total work at a given processor. Such problems require a load balancing scheme that distributes the work dynamically among different processors. Our goal here is to determine the most scalable load balancing schemes for different architectures such as hypercube, mesh, and network of workstations. For each of these architectures, we establish lower bounds on the scalability of any possible load balancing scheme. We present the scalability analysis of a number of load balancing schemes that have not been analyzed before. This gives us valuable insights into their relative performance for different problem and architectural characteristi...
Evaluating Evolutionary Algorithms
 Artificial Intelligence
, 1996
Abstract

Cited by 100 (14 self)
Test functions are commonly used to evaluate the effectiveness of different search algorithms. However, the results of evaluation are as dependent on the test problems as they are on the algorithms that are the subject of comparison. Unfortunately, developing a test suite for evaluating competing search algorithms is difficult without clearly defined evaluation goals. In this paper we discuss some basic principles that can be used to develop test suites and we examine the role of test suites as they have been used to evaluate evolutionary search algorithms. Current test suites include functions that are easily solved by simple search methods such as greedy hill-climbers. Some test functions also have undesirable characteristics that are exaggerated as the dimensionality of the search space is increased. New methods are examined for constructing functions with different degrees of nonlinearity, where the interactions and the cost of evaluation scale with respect to the dimensionality of...
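The abstract's point about easy test suites can be illustrated with a small sketch: a plain greedy hill-climber drives the separable sphere function close to its optimum, so such a function tells us little about an evolutionary algorithm's strengths. The function and parameter choices here are illustrative, not taken from the paper.

```python
import random

def sphere(x):
    # Separable benchmark: each coordinate can be optimized
    # independently, so greedy local search solves it easily.
    return sum(v * v for v in x)

def hill_climb(f, dim, steps=20000, step=0.1, seed=1):
    # Plain greedy hill-climbing: perturb one random coordinate
    # and accept the move only if the objective improves.
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = f(x)
    for _ in range(steps):
        i = rng.randrange(dim)
        y = list(x)
        y[i] += rng.gauss(0, step)
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
    return x, fx

x_best, f_best = hill_climb(sphere, dim=10)  # f_best ends up near 0
```

A test suite aimed at evolutionary methods would instead include non-separable functions whose coordinate interactions grow with dimensionality, which simple coordinate-wise climbers handle poorly.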
Domino treewidth
 DISCRETE MATH. THEOR. COMPUT. SCI
, 1994
Abstract

Cited by 78 (3 self)
We consider a special variant of tree-decompositions, called domino tree-decompositions, and the related notion of domino treewidth. In a domino tree-decomposition, each vertex of the graph belongs to at most two nodes of the tree. We prove that for every k, d, there exists a constant c_{k,d} such that a graph with treewidth at most k and maximum degree at most d has domino treewidth at most c_{k,d}. The domino treewidth of a tree can be computed in O(n^2 log n) time. There exist polynomial time algorithms that, for fixed k, decide whether a given graph G has domino treewidth at most k. If k is not fixed, this problem is NP-complete. The domino treewidth problem is hard for the complexity classes W[t] for all t ∈ N, and hence the problem for fixed k is unlikely to be solvable in O(n^c), where c is a constant not depending on k.
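The domino condition itself is easy to state operationally. The sketch below (helper names invented for illustration) checks only that condition and the width of a decomposition given as a list of bags; it does not verify the full tree-decomposition axioms (coverage of edges, connectedness of each vertex's bags).

```python
from collections import Counter

def is_domino(bags):
    # Domino condition from the abstract: every vertex of the graph
    # appears in at most two nodes (bags) of the tree-decomposition.
    counts = Counter(v for bag in bags for v in set(bag))
    return all(c <= 2 for c in counts.values())

def width(bags):
    # Width of a tree-decomposition: largest bag size minus one.
    return max(len(bag) for bag in bags) - 1
```

For the path graph 1-2-3-4, the decomposition with bags {1,2}, {2,3}, {3,4} has width 1 and is domino, since each vertex lies in at most two bags.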
Data allocation in distributed database systems
 ACM Transactions on Database Systems
, 1988
Abstract

Cited by 73 (1 self)
The problem of allocating the data of a database to the sites of a communication network is investigated. This problem deviates from the well-known file allocation problem in several aspects. First, the objects to be allocated are not known a priori; second, these objects are accessed by schedules that contain transmissions between objects to produce the result. A model that makes it possible to compare the cost of allocations is presented; the cost can be computed for different cost functions and for processing schedules produced by arbitrary query processing algorithms. For minimizing the total transmission cost, a method is proposed to determine the fragments to be allocated from the relations in the conceptual schema and the queries and updates executed by the users. For the same cost function, the complexity of the data allocation problem is investigated. Methods for obtaining optimal and heuristic solutions under various ways of computing the cost of an allocation are presented and compared. Two different approaches to the allocation management problem are presented and their merits are discussed.
Approximately-Strategyproof and Tractable Multi-Unit Auctions
, 2004
Abstract

Cited by 56 (11 self)
We present an approximately-efficient and approximately-strategyproof auction mechanism for a single-good multi-unit allocation problem. The bidding language allows marginal-decreasing piecewise constant curves and quantity-based side constraints. We develop a fully polynomial-time approximation scheme for the multi-unit allocation problem, which computes a (1 + ε)-approximation in worst-case time polynomial in the input size and 1/ε, given bids each with a constant number of pieces. We integrate this approximation scheme within a Vickrey-Clarke-Groves mechanism and compute the associated payments. The maximal possible gain from manipulation to a bidder in the combined scheme is bounded in terms of the total surplus in the efficient outcome.
Order statistics in digital image processing
 Proc. IEEE 80
, 1992
Abstract

Cited by 55 (14 self)
In recent years significant advances have been made in the development of nonlinear image processing techniques. Such techniques are used in digital image filtering, image enhancement, and edge detection. One of the most important families of nonlinear image filters is based on order statistics. The widely used median filter is the best known filter of this family. Nonlinear filters based on order statistics have excellent robustness properties in the presence of impulsive noise. They tend to preserve edge information, which is very important to human perception. Their computation is relatively easy and fast compared with some linear filters. All these features make them very popular in the image-processing community. Their theoretical analysis is relatively difficult compared with that of the linear filters. However, several new tools have been developed in recent years that make this analysis easier. In this review paper an analysis of their properties as well as their algorithmic computation will be presented.
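The median filter the abstract highlights is simple to sketch: each output pixel is the median order statistic of its local window, which removes impulsive (salt-and-pepper) noise while keeping edges sharper than averaging would. This minimal pure-Python version (the function name and border handling by clipping are this sketch's choices) works on a list-of-rows image.

```python
def median_filter(img, radius=1):
    # Order-statistic filter: replace each pixel by the median of its
    # (2*radius+1) x (2*radius+1) neighborhood; the window is clipped
    # at the image borders. `img` is a list of rows of numbers.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = [img[a][b]
                      for a in range(max(0, i - radius), min(h, i + radius + 1))
                      for b in range(max(0, j - radius), min(w, j + radius + 1))]
            window.sort()
            out[i][j] = window[len(window) // 2]
    return out

# A single impulsive outlier is removed; the uniform background survives.
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
clean = median_filter(noisy)
```

A linear mean filter on the same image would smear the outlier into every neighboring pixel; the median simply discards it, which is the robustness property the abstract refers to.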
Concurrent access of priority queues
 IEEE Transactions on Computers
, 1988