Results 1–10 of 76
Self-Testing/Correcting with Applications to Numerical Problems
, 1990
Cited by 340 (26 self)
Abstract:
Suppose someone gives us an extremely fast program P that we can call as a black box to compute a function f. Should we trust that P works correctly? A self-testing/correcting pair allows us to: (1) estimate the probability that P(x) ≠ f(x) when x is randomly chosen; (2) on any input x, compute f(x) correctly as long as P is not too faulty on average. Furthermore, both (1) and (2) take time only slightly more than the original running time of P. We present general techniques for constructing simple-to-program self-testing/correcting pairs for a variety of numerical problems, including integer multiplication, modular multiplication, matrix multiplicatio...
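A minimal sketch of the self-testing/correcting idea for a linear function f(x) = C·x mod M. The modulus, constant, and fault pattern below are all illustrative, and the tester shown is a simple linearity check rather than the paper's full construction: the corrector exploits random self-reducibility, f(x) = f(x + r) − f(r) (mod M), with a majority vote over random r.

```python
import random
from collections import Counter

random.seed(0)
M = 1_000_003            # modulus; f(x) = C*x mod M is randomly self-reducible
C = 123_456              # (M, C, and the fault pattern are illustrative)

def f(x):                # the true function
    return (C * x) % M

def P(x):                # "faulty" black-box program: wrong on 5% of inputs
    return (C * x + (1 if x % 20 == 7 else 0)) % M

def self_test(prog, trials=10_000):
    """Estimate prog's unreliability without computing f, via a linearity
    test: a correct program satisfies prog(x + y) = prog(x) + prog(y) mod M."""
    fails = 0
    for _ in range(trials):
        x, y = random.randrange(M), random.randrange(M)
        if prog((x + y) % M) != (prog(x) + prog(y)) % M:
            fails += 1
    return fails / trials

def self_correct(prog, x, votes=21):
    """Compute f(x) from a mildly faulty prog via random self-reducibility:
    f(x) = prog(x + r) - prog(r) (mod M) for random r; majority vote."""
    answers = Counter()
    for _ in range(votes):
        r = random.randrange(M)
        answers[(prog((x + r) % M) - prog(r)) % M] += 1
    return answers.most_common(1)[0][0]

err = self_test(P)               # nonzero: P fails the linearity test
corrected = self_correct(P, 7)   # x = 7 is an input on which P is wrong
```

Each vote queries P at two uniformly random points, so a vote is correct whenever neither query hits a fault; with P wrong on a small fraction of inputs, the majority is correct with high probability.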
Approximating probabilistic inference in Bayesian belief networks is NP-hard
, 1991
Cited by 256 (3 self)
Abstract:
A belief network comprises a graphical representation of dependencies between variables of a domain and a set of conditional probabilities associated with each dependency. Unless P = NP, an efficient, exact algorithm does not exist to compute probabilistic inference in belief networks. Stochastic simulation methods, which often improve run times, provide an alternative to exact inference algorithms. We present such a stochastic simulation algorithm, BN-RAS, that is a randomized approximation scheme. To analyze the run time, we parameterize belief networks by the dependence value P_E, which is a measure of the cumulative strengths of the belief network dependencies given background evidence E. This parameterization defines the class of f-dependence networks. The run time of BN-RAS is polynomial when f is a polynomial function. Thus, the results of this paper prove the existence of a class of belief networks for which inference approximation is polynomial and, hence, provably faster than any exact algorithm.
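A minimal sketch of stochastic simulation for belief-network inference, using plain rejection sampling (not the paper's particular scheme) on a toy two-node network; the node names and conditional probabilities are illustrative.

```python
import random

random.seed(0)

# Toy belief network: Rain -> WetGrass (names and CPTs are illustrative)
P_RAIN = 0.2
P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}

def estimate_pr_rain_given_wet(n=200_000):
    """Stochastic simulation (rejection sampling) of Pr[Rain | Wet]:
    sample the joint distribution in topological order and keep only the
    samples consistent with the evidence WetGrass = True."""
    kept = rainy = 0
    for _ in range(n):
        rain = random.random() < P_RAIN
        wet = random.random() < P_WET_GIVEN_RAIN[rain]
        if wet:
            kept += 1
            rainy += rain
    return rainy / kept

est = estimate_pr_rain_given_wet()
# Exact posterior: 0.2*0.9 / (0.2*0.9 + 0.8*0.1) = 0.18/0.26 ≈ 0.692
```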
Polynomial Time Approximation Schemes for Dense Instances of NP-Hard Problems
, 1995
Cited by 174 (28 self)
Abstract:
We present a unified framework for designing polynomial time approximation schemes (PTASs) for "dense" instances of many NP-hard optimization problems, including maximum cut, graph bisection, graph separation, minimum k-way cut with and without specified terminals, and maximum 3-satisfiability. By dense graphs we mean graphs with minimum degree Ω(n), although our algorithms solve most of these problems so long as the average degree is Ω(n). Denseness for non-graph problems is defined similarly. The unified framework begins with the idea of exhaustive sampling: picking a small random set of vertices, guessing where they go on the optimum solution, and then using their placement to determine the placement of everything else. The approach then develops into a PTAS for approximating certain smooth integer programs where the objective function and the constraints are "dense" polynomials of constant degree.
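The exhaustive-sampling idea can be sketched on MAX CUT: try every placement of a small random vertex sample, place each remaining vertex on the side that cuts more of its edges into the sample, and keep the best cut found. This is a toy version of the idea, not the paper's full PTAS or its analysis.

```python
import itertools
import random

random.seed(1)

def exhaustive_sampling_maxcut(adj, s=4):
    """Sketch of exhaustive sampling for MAX CUT: enumerate all 2^s
    placements of a random s-vertex sample; greedily place the other
    vertices relative to the sample; return the best cut found."""
    n = len(adj)
    sample = random.sample(range(n), min(s, n))
    in_sample = set(sample)
    best_value, best_side = -1, None
    for bits in itertools.product([0, 1], repeat=len(sample)):
        side = [0] * n
        for v, b in zip(sample, bits):
            side[v] = b
        for v in range(n):
            if v in in_sample:
                continue
            # put v on the side that cuts more edges into the sample
            cut_if_0 = sum(1 for u in sample if adj[v][u] and side[u] == 1)
            cut_if_1 = sum(1 for u in sample if adj[v][u] and side[u] == 0)
            side[v] = 0 if cut_if_0 >= cut_if_1 else 1
        value = sum(1 for u in range(n) for w in range(u + 1, n)
                    if adj[u][w] and side[u] != side[w])
        if value > best_value:
            best_value, best_side = value, side
    return best_value, best_side

# Dense example: complete bipartite graph K_{4,4}, whose optimal cut is 16
n = 8
adj = [[1 if (u < 4) != (v < 4) else 0 for v in range(n)] for u in range(n)]
best, side = exhaustive_sampling_maxcut(adj)
```

On K_{4,4} every sample placement consistent with the bipartition leads the greedy step to recover the optimal cut of 16.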
An Optimal Approximation Algorithm For Bayesian Inference
 Artificial Intelligence
, 1997
Cited by 48 (2 self)
Abstract:
Approximating the inference probability Pr[X = x | E = e] in any sense, even for a single evidence node E, is NP-hard. This result holds for belief networks that are allowed to contain extreme conditional probabilities, that is, conditional probabilities arbitrarily close to 0. Nevertheless, all previous approximation algorithms have failed to approximate efficiently many inferences, even for belief networks without extreme conditional probabilities. We prove that we can approximate efficiently probabilistic inference in belief networks without extreme conditional probabilities. We construct a randomized approximation algorithm, the bounded-variance algorithm, that is a variant of the known likelihood-weighting algorithm. The bounded-variance algorithm is the first algorithm with provably fast inference approximation on all belief networks without extreme conditional probabilities. From the bounded-variance algorithm, we construct a deterministic approximation algorithm u...
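The bounded-variance algorithm refines likelihood weighting; here is a sketch of plain likelihood weighting (not the paper's variant) on the same kind of toy two-node network, with illustrative probabilities. Unlike rejection sampling, no sample is ever discarded: each sample is weighted by the likelihood of the evidence.

```python
import random

random.seed(2)

# Toy network: Rain -> WetGrass, evidence WetGrass = True (illustrative CPTs)
P_RAIN = 0.2
P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}

def likelihood_weighting(n=200_000):
    """Likelihood weighting: sample only the unobserved variables and
    weight each sample by the likelihood of the evidence under it."""
    w_total = w_rain = 0.0
    for _ in range(n):
        rain = random.random() < P_RAIN       # sample the non-evidence node
        w = P_WET_GIVEN_RAIN[rain]            # weight = Pr[Wet = True | rain]
        w_total += w
        if rain:
            w_rain += w
    return w_rain / w_total

est = likelihood_weighting()
# Exact posterior Pr[Rain | Wet] = 0.18/0.26 ≈ 0.692
```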
Monte Carlo Model Checking
 In Proc. of Tools and Algorithms for Construction and Analysis of Systems (TACAS 2005), volume 3440 of LNCS
, 2005
Cited by 43 (4 self)
Abstract:
We present MC², what we believe to be the first randomized, Monte Carlo algorithm for temporal-logic model checking, the classical problem of deciding whether or not a property specified in temporal logic holds of a system specification. Given a specification S of a finite-state system, an LTL (Linear Temporal Logic) formula ϕ, and parameters ε and δ, MC² takes N = ln(δ) / ln(1 − ε) random samples (random walks ending in a cycle, i.e., lassos) from the Büchi automaton B = B_S × B_¬ϕ to decide if L(B) = ∅. Should a sample reveal an accepting lasso l, MC² returns false with l as a witness. Otherwise, it returns true and reports that, with probability less than δ, p_Z < ε, where p_Z is the expectation of an accepting lasso in B. It does so in time O(N · D) and space O(D), where D is B's recurrence diameter, using a number of samples N that is optimal to within a constant factor. Our experimental results demonstrate that MC² is fast, memory-efficient, and scales very well.
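A sketch of the two core pieces of this sampling scheme: the sample bound N = ⌈ln(δ) / ln(1 − ε)⌉, and a random-lasso sampler run on a toy three-state automaton. The automaton, its accepting set, and the walk strategy are illustrative stand-ins for the product Büchi automaton.

```python
import math
import random

random.seed(3)

def num_samples(eps, delta):
    """The abstract's sample bound: smallest N with (1 - eps)^N < delta."""
    return math.ceil(math.log(delta) / math.log(1.0 - eps))

# Toy automaton as successor lists; state 2 is accepting (illustrative)
SUCC = {0: [1, 2], 1: [0], 2: [2]}
ACCEPTING = {2}

def random_lasso(start=0):
    """One sample: walk randomly from `start` until a state repeats; the
    closing cycle makes the walk a lasso, which is accepting iff the
    cycle visits an accepting state."""
    path, first_seen = [start], {start: 0}
    while True:
        nxt = random.choice(SUCC[path[-1]])
        if nxt in first_seen:                 # cycle closed
            cycle = path[first_seen[nxt]:]
            return any(q in ACCEPTING for q in cycle)
        first_seen[nxt] = len(path)
        path.append(nxt)

N = num_samples(eps=0.01, delta=0.05)         # number of lasso samples
found_accepting = any(random_lasso() for _ in range(N))
```

Here half of all lassos from state 0 close the accepting self-loop at state 2, so N samples find an accepting lasso with overwhelming probability.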
Conditioning Probabilistic Databases
Cited by 42 (13 self)
Abstract:
Past research on probabilistic databases has studied the problem of answering queries on a static database. Application scenarios of probabilistic databases, however, often involve conditioning a database using additional information in the form of new evidence. The conditioning problem is thus to transform a probabilistic database of priors into a posterior probabilistic database, which is materialized for subsequent query processing or further refinement. It turns out that the conditioning problem is closely related to the problem of computing exact tuple confidence values. It is known that exact confidence computation is an NP-hard problem. This has led researchers to consider approximation techniques for confidence computation. However, neither conditioning nor exact confidence computation can be solved using such techniques. In this paper we present efficient techniques for both problems. We study several problem decomposition methods and heuristics that are based on the most successful search techniques from constraint satisfaction, such as the variable elimination rule of the Davis-Putnam algorithm. We complement this with a thorough experimental evaluation of the proposed algorithms. Our experiments show that our exact algorithms scale well to realistic database sizes and can in some scenarios compete with the most efficient previous approximation algorithms.
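To make the hardness of exact confidence computation concrete, here is a brute-force sketch for a tiny tuple-independent table: enumerate every possible world and sum the probabilities of the worlds where the query holds. The table, probabilities, and query are illustrative, and this exponential enumeration is precisely what the paper's decomposition methods avoid.

```python
from itertools import product

# Tuple-independent table R with per-tuple probabilities (illustrative)
R = {"a": 0.5, "b": 0.4, "c": 0.3}

def confidence(query_holds):
    """Exact tuple-confidence by brute force: enumerate all 2^|R| possible
    worlds and sum the probabilities of the worlds where the Boolean
    query holds. Exponential in |R|."""
    tuples = list(R)
    conf = 0.0
    for mask in product([False, True], repeat=len(tuples)):
        world = {t for t, present in zip(tuples, mask) if present}
        prob = 1.0
        for t in tuples:
            prob *= R[t] if t in world else 1.0 - R[t]
        if query_holds(world):
            conf += prob
    return conf

# Confidence that R contains tuple "a" or tuple "b":
conf_ab = confidence(lambda world: "a" in world or "b" in world)
# By independence this is 1 - (1 - 0.5) * (1 - 0.4) = 0.7
```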
On the Relative Complexity of Approximate Counting Problems
, 2000
Cited by 36 (12 self)
Abstract:
Two natural classes of counting problems that are interreducible under approximation-preserving reductions are: (i) those that admit a particular kind of efficient approximation algorithm known as an "FPRAS," and (ii) those that are complete for #P with respect to approximation-preserving reducibility. We describe and investigate not only these two classes but also a third class, of intermediate complexity, that is not known to be identical to (i) or (ii). The third class can be characterised as the hardest problems in a logically defined subclass of #P. Research Report 370, Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK.
A Bayesian Approach to Relevance in Game Playing
 ARTIFICIAL INTELLIGENCE
, 1997
Cited by 35 (0 self)
Abstract:
The point of game tree search is to insulate oneself from errors in the evaluation function. The standard approach is to grow a full-width tree as deep as time allows, and then value the tree as if the leaf evaluations were exact. This has been effective in many games because of the computational efficiency of the alpha-beta algorithm. Our approach is to form a Bayesian model of our uncertainty. We adopt an evaluation function that returns a probability distribution estimating the probability of various errors in valuing each position. These estimates are obtained by training from data. We thus use additional information at each leaf not available to the standard approach. We utilize this information in three ways: to evaluate which move is best after we are done expanding, to allocate additional thinking time to moves where additional time is most relevant to game outcome, and, perhaps most importantly, to expand the tree along the most relevant lines. Our measure of the relevan...
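One ingredient of this approach can be sketched simply: treat each candidate move's value as a probability distribution rather than a point score, and estimate by Monte Carlo which move is most likely to be best. The move names, Gaussian beliefs, and parameters below are illustrative, standing in for the paper's trained error distributions.

```python
import random

random.seed(4)

# Belief about each candidate move's value: (mean, std) of a Gaussian
# (illustrative numbers, standing in for estimates trained from data)
MOVES = {"e4": (0.30, 0.10), "d4": (0.28, 0.25), "c4": (0.10, 0.05)}

def pr_best(moves, samples=50_000):
    """Monte Carlo estimate of Pr[move has the highest true value] when
    each value is an independent Gaussian belief, not a point score."""
    wins = dict.fromkeys(moves, 0)
    for _ in range(samples):
        draws = {m: random.gauss(mu, sd) for m, (mu, sd) in moves.items()}
        wins[max(draws, key=draws.get)] += 1
    return {m: w / samples for m, w in wins.items()}

probs = pr_best(MOVES)
# The high-variance "d4" stays relevant despite its lower mean: it is
# where extra thinking time is most likely to change the decision.
```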
On Unapproximable Versions of NP-Complete Problems
Cited by 35 (1 self)
Abstract:
We prove that all of Karp's 21 original NP-complete problems have a version that's hard to approximate. These versions are obtained from the original problems by adding essentially the same, simple constraint. We further show that these problems are absurdly hard to approximate. In fact, no polynomial-time algorithm can even approximate log^(k) of the magnitude of these problems to within any constant factor, where log^(k) denotes the logarithm iterated k times, unless NP is recognized by slightly superpolynomial randomized machines. We use the same technique to improve the constant ε such that MAX CLIQUE is hard to approximate to within a factor of n^ε. Finally, we show that it is even harder to approximate two counting problems: counting the number of satisfying assignments to a monotone 2SAT formula and computing the permanent of −1, 0, 1 matrices. Key words. NP-complete, unapproximable, randomized reduction, clique, counting problems, permanent, 2SAT. AMS subject clas...
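The iterated logarithm log^(k) used in the hardness bound above is easy to pin down in code; this small helper is written purely for illustration.

```python
import math

def iterated_log(x, k):
    """log^(k)(x): the natural logarithm applied k times, as in the
    abstract's inapproximability bound (illustrative helper)."""
    for _ in range(k):
        x = math.log(x)
    return x

# log^(1) is the ordinary log; log^(2)(e^e) = log(e) = 1
```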