Results 1–10 of 26
Designing Programs That Check Their Work
, 1989
Abstract

Cited by 307 (17 self)
A program correctness checker is an algorithm for checking the output of a computation. That is, given a program and an instance on which the program is run, the checker certifies whether the output of the program on that instance is correct. This paper defines the concept of a program checker. It designs program checkers for a few specific and carefully chosen problems in the class FP of functions computable in polynomial time. Problems in FP for which checkers are presented in this paper include Sorting, Matrix Rank and GCD. It also applies methods of modern cryptography, especially the idea of a probabilistic interactive proof, to the design of program checkers for group theoretic computations. Two structural theorems are proven here. One is a characterization of problems that can be checked. The other theorem establishes equivalence classes of problems such that whenever one problem in a class is checkable, all problems in the class are checkable.
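The Sorting checker admits a particularly simple illustration of the idea: trust nothing about the program, and certify its output directly against the instance. A minimal sketch in Python (a simplified illustration, not the paper's actual construction, which is more careful about the checker's own complexity):

```python
from collections import Counter

def check_sort(instance, output):
    """Certify a claimed sorting of `instance`: the output must be
    non-decreasing and a permutation (as a multiset) of the input.
    If both conditions hold, the output is correct for this instance."""
    non_decreasing = all(output[i] <= output[i + 1]
                         for i in range(len(output) - 1))
    same_multiset = Counter(instance) == Counter(output)
    return non_decreasing and same_multiset
```

Note that both checks are necessary: a buggy sorter could return a sorted list that drops or duplicates elements, or return the input unchanged.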
The NP-completeness column: an ongoing guide
 Journal of Algorithms
, 1985
Abstract

Cited by 188 (0 self)
This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., New York, 1979 (hereinafter referred to as "[G&J]"; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed, and, when appropriate, cross-references will be given to that book and the list of problems (NP-complete and harder) presented there. Readers who have results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time solvability, etc.) or open problems they would like publicized, should
Optimal expected-time algorithms for closest point problems
 ACM Transactions on Mathematical Software
, 1980
Abstract

Cited by 89 (0 self)
Geometric closest point problems deal with the proximity relationships in k-dimensional point sets. Examples of closest point problems include building minimum spanning trees, nearest neighbor searching, and triangulation construction. Shamos and Hoey [17] have shown how the Voronoi diagram can be used to solve a number of planar closest point problems in optimal worst-case time. In this paper we extend their work by giving optimal expected-time algorithms for solving a number of closest point problems in k-space, including nearest neighbor searching, finding all nearest neighbors, and computing planar minimum spanning trees. In addition to establishing theoretical bounds, the algorithms in this paper can be implemented to solve practical problems very efficiently. Key Words and Phrases: computational geometry, closest point problems, minimum spanning trees, nearest neighbor searching, optimal algorithms, probabilistic analysis of algorithms, Voronoi diagrams. CR Categories: 3.74, 5.25, 5.31, 5.32
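The flavor of these expected-time methods can be conveyed with a bucketing sketch: for points roughly uniform in the unit square, a grid with about n cells lets each point be compared only against its own and neighboring cells, giving linear expected time. This is a simplified illustration, not the paper's algorithms; it assumes the closest-pair distance is below one cell width (which holds with high probability for uniform inputs), with a brute-force fallback for the degenerate case where the grid finds no pair:

```python
import math
from collections import defaultdict

def closest_pair_grid(points):
    """Expected-linear-time closest pair for points roughly uniform in
    the unit square: bucket into an m-by-m grid with about n cells, then
    compare each point only against its own and the 8 neighboring cells."""
    n = len(points)
    m = max(1, int(math.sqrt(n)))
    grid = defaultdict(list)
    for p in points:
        cell = (min(m - 1, int(p[0] * m)), min(m - 1, int(p[1] * m)))
        grid[cell].append(p)
    best = (float("inf"), None, None)
    for (cx, cy), bucket in grid.items():
        for p in bucket:
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for q in grid.get((cx + dx, cy + dy), ()):
                        if p is q:
                            continue
                        d = math.dist(p, q)
                        if d < best[0]:
                            best = (d, p, q)
    if best[1] is None:  # sparse/degenerate input: fall back to brute force
        for i in range(n):
            for j in range(i + 1, n):
                d = math.dist(points[i], points[j])
                if d < best[0]:
                    best = (d, points[i], points[j])
    return best
```

With roughly uniform points, each cell holds O(1) points in expectation, so the neighbor scan does O(n) expected work overall.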
Collision Detection for Deformable Objects
, 2004
Abstract

Cited by 77 (14 self)
Interactive environments for dynamically deforming objects play an important role in surgery simulation and entertainment technology. These environments require fast deformable models and very efficient collision handling techniques. While collision detection for rigid bodies is well-investigated, collision detection for deformable objects introduces additional challenging problems. This paper focuses on these aspects and summarizes recent research in the area of deformable collision detection. Various approaches based on bounding volume hierarchies, distance fields, and spatial partitioning are discussed. Further, image-space techniques and stochastic methods are considered. Applications in cloth modeling and surgical simulation are presented.
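The primitive underlying the bounding-volume-hierarchy approaches the survey discusses is a cheap overlap test between conservative bounds. A minimal sketch using axis-aligned bounding boxes (AABBs):

```python
def aabb_overlap(a, b):
    """Overlap test for axis-aligned bounding boxes of any dimension.
    Each box is a (min_corner, max_corner) pair; two boxes intersect
    iff their intervals overlap on every axis (touching counts)."""
    (amin, amax), (bmin, bmax) = a, b
    return all(alo <= bhi and blo <= ahi
               for alo, ahi, blo, bhi in zip(amin, amax, bmin, bmax))
```

In a BVH, this test prunes whole subtrees: if two nodes' boxes do not overlap, no primitive below one can collide with a primitive below the other, which is what makes the hierarchy pay off even when the boxes must be refit every frame as the object deforms.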
Closest-Point Problems in Computational Geometry
, 1997
Abstract

Cited by 65 (14 self)
This is the preliminary version of a chapter that will appear in the Handbook on Computational Geometry, edited by J.R. Sack and J. Urrutia. A comprehensive overview is given of algorithms and data structures for proximity problems on point sets in R^D. In particular, the closest pair problem, the exact and approximate post-office problem, and the problem of constructing spanners are discussed in detail.
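Among the optimal static closest-pair algorithms this chapter surveys, the plane-sweep method is short enough to sketch: scan points left to right and keep a strip of recent points ordered by y, since only points within the current best distance can improve it. A simplified version (the strip here is a plain sorted list, so deletions are not strictly logarithmic, but the structure of the algorithm is the same):

```python
import bisect
import math

def closest_pair_sweep(points):
    """Plane-sweep closest pair in 2D: scan points by x, keeping points
    whose x-distance to the sweep line is below the current best in a
    strip ordered by y; only strip points with |dy| < best can matter."""
    pts = sorted(points)
    best = float("inf")
    strip = []          # (y, x) pairs of points inside the strip
    left = 0            # index of leftmost point still in the strip
    for x, y in pts:
        # evict points too far to the left to ever beat `best`
        while left < len(pts) and pts[left][0] < x - best:
            strip.remove((pts[left][1], pts[left][0]))
            left += 1
        # examine only strip points with y in [y - best, y + best]
        lo = bisect.bisect_left(strip, (y - best, -math.inf))
        for sy, sx in strip[lo:]:
            if sy > y + best:
                break
            best = min(best, math.dist((x, y), (sx, sy)))
        bisect.insort(strip, (y, x))
    return best
```

A packing argument shows each point is compared against O(1) strip points once `best` is small, which is where the O(n log n) bound comes from in the balanced-tree version.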
Intersection of Convex Objects in Two and Three Dimensions
, 1987
Abstract

Cited by 49 (3 self)
One of the basic geometric operations involves determining whether a pair of convex objects intersect. This problem is well understood in a model of computation in which the objects are given as input and their intersection is returned as output. For many applications, however, it may be assumed that the objects already exist within the computer and that the only output desired is a single piece of data giving a common point if the objects intersect or reporting no intersection if they are disjoint. For this problem, none of the previous lower bounds are valid and algorithms are proposed requiring sublinear time for their solution in two and three dimensions.
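A taste of why sublinear time is plausible in this model: once a convex object already exists in memory with its vertices in order, many queries reduce to binary search over the vertex array. A sketch of an O(log n) point-inclusion test for a convex polygon in counter-clockwise order (a related sublinear primitive, not the paper's intersection algorithm):

```python
def cross(o, a, b):
    """Signed area of triangle o-a-b: positive iff b is left of ray o->a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_convex(poly, p):
    """O(log n) point-in-convex-polygon test; `poly` lists vertices in
    counter-clockwise order. Binary-search for the wedge from poly[0]
    that contains p, then do one orientation test against its far edge."""
    n = len(poly)
    if cross(poly[0], poly[1], p) < 0 or cross(poly[0], poly[n - 1], p) > 0:
        return False                 # outside the fan spanned at poly[0]
    lo, hi = 1, n - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cross(poly[0], poly[mid], p) >= 0:
            lo = mid
        else:
            hi = mid
    return cross(poly[lo], poly[hi], p) >= 0
```

The intersection algorithms in the paper exploit the same principle: the ordered vertex representation is free preprocessing, so a query never needs to read all of the input.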
Some theories of reasoned assumptions: An essay in rational psychology
, 1983
Abstract

Cited by 44 (26 self)
not be interpreted as representing the official policies,
Derivation of Randomized Sorting and Selection Algorithms, in Parallel Algorithm Derivation And Program Transformation, edited by
, 1993
Abstract

Cited by 22 (18 self)
In this paper we systematically derive randomized algorithms (both sequential and parallel) for sorting and selection from basic principles and fundamental techniques like random sampling. We prove several sampling lemmas which will find independent applications. The new algorithms derived here are the most efficient known. Among other results, we obtain an efficient algorithm for sequential sorting. The problem of sorting has attracted so much attention because of its vital importance. Sorting with as few comparisons as possible while keeping the storage size minimum is a long-standing open problem, referred to as 'minimum storage sorting' [10] in the literature. The previously best known minimum storage sorting algorithm is due to Frazer and McKellar [10]. The expected number of comparisons made by this algorithm is n log n + O(n log log n). The algorithm we derive in this paper makes only an expected n log n + O(n ω(n)) number of comparisons, for any function ω(n) that tends to infinity. A variant of this algorithm makes no more than n log n + O(n log log n) comparisons on any input of size n with overwhelming probability. We also prove high probability bounds for several randomized algorithms for which only expected bounds have been proven so far.
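The random-sampling idea behind these derivations can be sketched as a simple sample sort: sort a random sample, use its elements as splitters, bucket the input, and recurse. This is illustrative only; the paper's algorithms and their comparison counts are much more carefully tuned:

```python
import bisect
import random

def sample_sort(a, cutoff=32):
    """Randomized sorting by random sampling: sort a random sample of
    about sqrt(n) elements, use them as splitters to partition the input
    into ordered buckets, and sort each bucket recursively."""
    if len(a) <= cutoff:
        return sorted(a)
    s = int(len(a) ** 0.5)
    splitters = sorted(random.sample(a, s))
    buckets = [[] for _ in range(s + 1)]
    for x in a:
        buckets[bisect.bisect_left(splitters, x)].append(x)
    out = []
    for b in buckets:
        if len(b) == len(a):    # all keys equal: recursing would not shrink
            return sorted(b)
        out.extend(sample_sort(b, cutoff))
    return out
```

The sampling lemmas in the paper are exactly what bounds the bucket sizes here: with a sqrt(n)-size sample, all buckets are close to sqrt(n) elements with high probability, which controls both the recursion depth and the comparison count.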
Tarjan, On a greedy heuristic for complete matching
 SIAM Journal on Computing
, 1981
Abstract

Cited by 19 (1 self)
Finding a minimum weighted complete matching on a set of vertices in which the distances satisfy the triangle inequality is of general interest and of particular importance when drawing graphs on a mechanical plotter. The "greedy" heuristic of repeatedly matching the two closest unmatched points can be implemented in worst-case time O(n^2 log n), a reasonable savings compared to the general minimum weighted matching algorithm, which requires time proportional to n^3 to find the minimum cost matching in a weighted graph. We show that, for an even number n of vertices whose distances satisfy the triangle inequality, the ratio of the cost of the matching produced by this greedy heuristic to the cost of the minimal matching is at most (4/3)n^(lg(3/2)), where lg(3/2) ≈ 0.58496, and there are examples that achieve this bound. We conclude that this greedy heuristic, although desirable because of its simplicity, would be a poor choice for this problem. Key words: graph algorithms, matching, greedy heuristic, analysis of algorithms. Introduction. We begin with some motivation, the connection of which to our central topic will become clear later. The problem of drawing a graph G = (V, E) on a mechanical plotter with prespecified vertex locations arises in numerous applications [7]. For example, in the
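The heuristic itself is only a few lines; a naive sketch that sorts all pairwise distances and matches greedily (the paper's contribution is the analysis of this heuristic, not this straightforward implementation):

```python
import math

def greedy_matching(points):
    """Greedy complete matching: sort all pairwise distances, then
    repeatedly take the closest pair of still-unmatched points.
    Returns index pairs; this naive version runs in O(n^2 log n)."""
    n = len(points)
    pairs = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i in range(n) for j in range(i + 1, n))
    matched = [False] * n
    matching = []
    for _, i, j in pairs:
        if not matched[i] and not matched[j]:
            matched[i] = matched[j] = True
            matching.append((i, j))
    return matching
```

The heuristic's weakness, which the analysis quantifies, is that early cheap matches can strand distant points that then must be matched at great cost.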
Energy Aware Computing Through Probabilistic Switching: A Study of Limits
 IEEE Transactions on Computers
, 2005
Abstract

Cited by 18 (4 self)
The mathematical technique of randomization yielding probabilistic algorithms is shown, for the first time, through a physical interpretation based on statistical thermodynamics, to be a basis for energy savings in computing. Concretely, at the fundamental limit, it is shown that the energy needed to compute a single probabilistic bit or PBIT is proportional to the probability p of computing a PBIT accurately. This result is established through the introduction of an idealized switch, for computing a PBIT, using which a network of switches can be constructed. Interesting examples of such networks, including AND, OR and NOT gates (or as functions, boolean conjunction, disjunction and negation respectively), are constructed and the potential for energy savings through randomization is established. To quantify these savings, novel measures of "technology independent" energy complexity are introduced; these parallel conventional machine-independent measures of computational complexity such as the algorithm's running time. Networks of switches can be shown to be equivalent to Turing machines and to boolean circuits, both of which are widely known and well-understood models of computation. These savings are realized using a novel way of representing a PBIT in the physical domain through a group of classical microstates. A measurement and thus detection of a microstate yields the value of the PBIT. While the eventual goal of this work is to lead to the physical realization of these theoretical constructs through the innovation of randomized (CMOS-based) devices, the current goal is to rigorously establish the potential for energy savings through probabilistic computing at a fundamental physical level, based on the canonical thermodynamic models of idealized monoa...