Results 1–10 of 16
The Structure and Complexity of Nash Equilibria for a Selfish Routing Game
, 2002
Cited by 101 (22 self)
In this work, we study the combinatorial structure and the computational complexity of Nash equilibria for a certain game that models selfish routing over a network consisting of m parallel links. We assume a collection of n users, each employing a mixed strategy, which is a probability distribution over links, to control the routing of its own assigned traffic. In a Nash equilibrium, each user selfishly routes its traffic on those links that minimize its expected latency cost, given the network congestion caused by the other users. The social cost of a Nash equilibrium is the expectation, over all random choices of the users, of the maximum, over all links, of the latency through a link.
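The social cost defined above can be computed directly for small instances by enumerating all pure outcomes of the mixed strategies. The sketch below assumes the simplest version of the model (identical links, latency of a link = total weight routed on it); the function name and the toy numbers are illustrative, not from the paper.

```python
import itertools

def social_cost(strategies, weights):
    """Expected maximum link load over the users' random link choices.

    strategies[i][j] = probability that user i routes on link j
    weights[i]       = traffic weight of user i
    Toy model: identical links; the latency of a link is the total
    weight assigned to it in a pure outcome.
    """
    n, m = len(strategies), len(strategies[0])
    cost = 0.0
    for outcome in itertools.product(range(m), repeat=n):  # all pure profiles
        prob = 1.0
        loads = [0.0] * m
        for user, link in enumerate(outcome):
            prob *= strategies[user][link]
            loads[link] += weights[user]
        cost += prob * max(loads)
    return cost

# Two identical users on two links, both fully mixed: with prob. 1/2 the
# users collide (max load 2), with prob. 1/2 they split (max load 1).
print(social_cost([[0.5, 0.5], [0.5, 0.5]], [1.0, 1.0]))  # → 1.5
```

The exponential enumeration is only for illustration; the paper's point is precisely that computing such quantities at scale is non-trivial.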
Extracting randomness from samplable distributions
 In Proceedings of the 41st Annual IEEE Symposium on Foundations of Computer Science
, 2000
Cited by 55 (8 self)
The standard notion of a randomness extractor is a procedure which converts any weak source of randomness into an almost uniform distribution. The conversion necessarily uses a small amount of pure randomness, which can be eliminated by complete enumeration in some, but not all, applications. Here, we consider the problem of deterministically converting a weak source of randomness into an almost uniform distribution. Previously, deterministic extraction procedures were known only for sources satisfying strong independence requirements. In this paper, we look at sources which are samplable, i.e., can be generated by an efficient sampling algorithm. We seek an efficient deterministic procedure that, given a sample from any samplable distribution of sufficiently large min-entropy, gives an almost uniformly distributed output. We explore the conditions under which such deterministic extractors exist. We observe that no deterministic extractor exists if the sampler is allowed to use more computational resources than the extractor. On the other hand, if the extractor is allowed (polynomially) more resources than the sampler, we show that deterministic extraction becomes possible. This is true unconditionally in the non-uniform setting (i.e., when the extractor can be computed by a small circuit), and (necessarily) relies on complexity assumptions in the uniform setting. One of our uniform constructions is as follows: assuming that there are problems in E = DTIME(2^O(n)) that are not solvable by subexponential-size circuits with Σ_i gates (for some constant i), there is an efficient extractor that transforms any samplable distribution of length n and min-entropy (1 − γ)n into an output distribution of length (1 − O(γ))n, where γ is any sufficiently small constant. The running time of the extractor is polynomial in n and the circuit complexity of the sampler. These extractors are based on a connection be...
An Optimal Algorithm for Monte Carlo Estimation
, 1995
Cited by 53 (4 self)
A typical approach to estimating an unknown quantity μ is to design an experiment that produces a random variable Z distributed in [0, 1] with E[Z] = μ, run this experiment independently a number of times, and use the average of the outcomes as the estimate. In this paper, we consider the case when no a priori information about Z is known except that it is distributed in [0, 1]. We describe an approximation algorithm AA which, given ε and δ, when running independent experiments with respect to any Z, produces an estimate that is within a factor 1 + ε of μ with probability at least 1 − δ. We prove that the expected number of experiments run by AA (which depends on Z) is optimal to within a constant factor for every Z. An announcement of these results appears in P. Dagum, D. Karp, M. Luby, S. Ross, "An optimal algorithm for Monte-Carlo Estimation (extended abstract)", Proceedings of the Thirty-sixth IEEE Symposium on Foundations of Computer Science, 1995, pp. 142–149 [3]. Section ...
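The key idea can be sketched with a stopping rule: instead of fixing the number of experiments in advance (which would require knowing μ), keep sampling until the running sum crosses a fixed threshold. The sketch below uses a threshold in the style of the authors' stopping rule; the paper's full AA algorithm and its optimal constants differ in detail, so treat the constant here as an assumption.

```python
import math
import random

def stopping_rule_estimate(sample, eps, delta):
    """Estimate mu = E[Z] for Z in [0, 1] with no prior knowledge of mu.

    Sketch of the stopping-rule idea: sample until the running sum
    exceeds a threshold chosen from eps and delta alone, then return
    threshold / (number of samples). The threshold below follows the
    classic stopping-rule analysis (4(e-2)ln(2/delta)/eps^2 up to
    low-order terms); the paper's AA refines this.
    """
    upsilon = 4 * (math.e - 2) * math.log(2 / delta) / eps ** 2
    threshold = 1 + (1 + eps) * upsilon
    total, n = 0.0, 0
    while total < threshold:
        total += sample()  # one independent experiment, outcome in [0, 1]
        n += 1
    return threshold / n

# Bernoulli(0.3) experiments; the estimate lands near 0.3.
rng = random.Random(0)
est = stopping_rule_estimate(lambda: rng.random() < 0.3, eps=0.1, delta=0.05)
```

Note that the number of experiments adapts automatically: the smaller μ is, the more samples are needed before the sum crosses the threshold, which is exactly the behaviour the paper proves optimal up to a constant factor.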
On Sparse Approximations To Randomized Strategies And Convex Combinations
, 1994
Cited by 21 (0 self)
A randomized strategy or a convex combination may be represented by a probability vector p = (p_1, ..., p_m). p is called sparse if it has only few positive entries. This paper presents an Approximation Lemma and applies it to matrix games, linear programming, computer chess, and uniform sampling spaces. In all cases arbitrary probability vectors can be substituted by sparse ones (with only logarithmically many positive entries) without losing too much performance.
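The probabilistic argument behind such approximation lemmas can be illustrated by sampling: draw k indices according to p and use their empirical frequencies as the sparse substitute. With k = O(log m / ε²) draws, a Chernoff-plus-union bound keeps the payoff against any of m pure counter-strategies within ε. The function below is an illustrative sketch of this construction, not the lemma's exact statement.

```python
import random
from collections import Counter

def sparsify(p, k, rng=None):
    """Replace probability vector p by a k-sparse empirical version.

    Draw k indices from p and return a dict of empirical frequencies;
    at most k entries are positive. For k = O(log m / eps^2) the sparse
    vector approximately preserves expected payoffs (Chernoff bound).
    """
    rng = rng or random.Random(0)
    draws = rng.choices(range(len(p)), weights=p, k=k)
    return {i: c / k for i, c in Counter(draws).items()}

p = [0.4, 0.3, 0.2, 0.05, 0.03, 0.02]
q = sparsify(p, k=100)  # a valid, sparse probability vector close to p
```

In a matrix game this means a mixed strategy over m pure strategies can be replaced by one supported on only O(log m / ε²) of them while changing the game's value by at most ε.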
Approximate counting and quantum computation
 Combinatorics, Probability and Computing
, 2006
Cited by 7 (0 self)
Motivated by the result that an ‘approximate’ evaluation of the Jones polynomial of a braid at a 5th root of unity can be used to simulate the quantum part of any algorithm in the quantum complexity class BQP, and results relating BQP to the counting class GapP, we introduce a form of additive approximation which can be used to simulate a function in BQP. We show that all functions in the classes #P and GapP have such an approximation scheme under certain natural normalizations. However, we are unable to determine whether the particular functions we are motivated by, such as the above evaluation of the Jones polynomial, can be approximated in this way. We close with some open problems motivated by this work.
Compression of samplable sources
 In IEEE Conference on Computational Complexity
, 2004
Cited by 4 (1 self)
Abstract. We study the compression of polynomially samplable sources. In particular, we give efficient prefix-free compression and decompression algorithms for three classes of such sources (whose support is a subset of {0,1}^n).
1. We show how to compress sources X samplable by logspace machines to expected length H(X) + O(1).
Our next results concern flat sources whose support is in P.
2. If H(X) ≤ k = n − O(log n), we show how to compress to length k + polylog(n − k).
3. If the support of X is the witness set for a self-reducible NP relation, then we show how to compress to expected length H(X) + 5.
Efficient Approximation of Spectral and Autocorrelation Coefficients
 Department of Computer Science at James Cook University, Townsville Australia
, 1996
Cited by 1 (1 self)
In this paper we provide polynomial-time approximation techniques which allow us to calculate, to arbitrary levels of accuracy and with high probability of success, the spectral coefficients and autocorrelation coefficients of Boolean functions, given that those functions are expressed in either Sum-of-Products or Product-of-Sums form.

1 Introduction

The focus of this paper is on the generation of spectral coefficients and autocorrelation coefficients for Boolean functions. Efficient calculation of those coefficients would allow digital logic analysts to draw on the large body of research already effectively employed in the area of signal processing. Utilisation of coefficient-based techniques in areas such as logic testing and synthesis has traditionally been hampered by the computational requirements for coefficient calculation. To reduce the computational demands, we use an approximation technique to estimate the coefficient values. As arbitrary levels of accuracy can still be obta...
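The sampling idea underlying such approximations can be sketched for a single spectral (Walsh) coefficient. A coefficient for an index set w can be written as an expectation over uniform inputs, E_x[(-1)^(f(x) + w·x)], so averaging over random inputs approximates it to any accuracy with high probability by Hoeffding's inequality. This is a hedged sketch of that Monte-Carlo step, not the paper's exact procedure (which works directly from SOP/POS forms).

```python
import random

def estimate_spectral_coeff(f, w, n, samples, rng=None):
    """Monte-Carlo estimate of the Walsh coefficient of f at index set w.

    Averages (-1)^(f(x) + w.x) over uniformly random x in {0,1}^n.
    By Hoeffding, `samples` = O(log(1/delta)/eps^2) draws give additive
    error eps with probability 1 - delta.
    """
    rng = rng or random.Random(1)
    total = 0
    for _ in range(samples):
        x = [rng.randint(0, 1) for _ in range(n)]
        wx = sum(xi for xi, wi in zip(x, w) if wi) % 2  # parity w.x
        total += (-1) ** (f(x) + wx)
    return total / samples

# f = parity of bits 0 and 1; its coefficient at w = (1,1,0) is exactly 1,
# and every sample contributes +1, so the estimate is exact here.
f = lambda x: (x[0] + x[1]) % 2
est = estimate_spectral_coeff(f, (1, 1, 0), n=3, samples=2000)
```

The same estimator applies coefficient by coefficient, which is why a fixed accuracy/confidence target translates into polynomial running time.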
unknown title
10.1 Unbiased Estimators

Suppose we are given two sets R and A in the plane such that R ⊆ A, and we wish to estimate the area of R, given that of A (or compute the size of R given that of A if R and A are finite point sets). One approach is the Monte-Carlo method: pick an element of A uniformly at random and check to see if this element is in R, and repeat this procedure several times (independently) to estimate the size of R relative to that of A. Let p_1, p_2, ..., p_t be t i.i.d. elements sampled uniformly from A. Define X_i = 1 if p_i ∈ R and 0 otherwise, and the sample mean
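The scheme above translates directly into code: draw t uniform points from A, indicate membership in R, and return the sample mean, which is an unbiased estimator of area(R)/area(A). The concrete choice of R and A below (unit disk inside a square) is only an example for the sketch.

```python
import random

def estimate_ratio(in_R, sample_A, t, rng=None):
    """Unbiased Monte-Carlo estimate of area(R)/area(A), for R a subset of A.

    sample_A(rng) draws a uniform point of A; in_R(p) is the indicator
    X_i of membership in R. Returns the sample mean of X_1, ..., X_t.
    """
    rng = rng or random.Random(0)
    hits = sum(in_R(sample_A(rng)) for _ in range(t))
    return hits / t

# Example: R = unit disk, A = [-1, 1]^2, so the true ratio is pi/4 ≈ 0.785.
sample_square = lambda rng: (rng.uniform(-1, 1), rng.uniform(-1, 1))
in_disk = lambda p: p[0] ** 2 + p[1] ** 2 <= 1
est = estimate_ratio(in_disk, sample_square, t=20000)
```

Unbiasedness is immediate (E[X_i] = area(R)/area(A)); the point of the section is how t controls the variance of the sample mean.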
Randomised Techniques to Efficiently Approximate Spectral Coefficients and Autocorrelation Coefficients
In this paper we provide polynomial-time approximation techniques which allow us to calculate, to arbitrary levels of accuracy and with high probability of success, the spectral coefficients and autocorrelation coefficients of Boolean functions, given that those functions are expressed in either Sum-of-Products or Product-of-Sums form.

1 Introduction

The focus of this paper is on the generation of spectral coefficients and autocorrelation coefficients for Boolean functions. Efficient calculation of those coefficients would allow digital logic analysts to draw on the large body of research already effectively employed in the area of signal processing. Utilisation of coefficient-based techniques in areas such as logic testing and synthesis has traditionally been hampered by the computational requirements for coefficient calculation. To reduce the computational demands, we use an approximation technique to estimate the coefficient values. As any specified level of accuracy can still be o...
CS375011 Randomized Algorithms Winter 2003
"... For clarity we prove the theorem for m = n. Before we get on to the proof we introduce some notation and provides some intuition. We need the following notation. 71 Notation 7.3 k (t) := # bins with load k at time t k (t) := # balls with height k at time t B(n; p) := Binomial distribution wi ..."
Abstract
 Add to MetaCart
For clarity we prove the theorem for m = n. Before we get on to the proof we introduce some notation and provide some intuition.

Notation 7.3
ν_k(t) := number of bins with load ≥ k at time t
μ_k(t) := number of balls with height ≥ k at time t
B(n, p) := binomial distribution of n Bernoulli trials, each with success probability p

Intuition: Clearly, ν_k(t) ≤ μ_k(t). The following is also obvious: ν_k(1) ≤ ν_k(2) ≤ ... ≤ ν_k(n). Suppose for some β_k we have ν_k(n) ≤ β_k. Then we have the following bound: Pr[ball i has height ≥ k + 1 | ν_k(n) ≤ β_k] ≤ (β_k / n)^2, so μ_{k+1}(n) is stochastically dominated by B(n, (β_k / n)^2). The following lemma gives a bound on the number of successes of n Bernoulli trials. Roughly speaking, if the success probability of each trial is less than p, the distribution of the total number of successes is bounded above by B(n, p).

Lemma 7.4 (Basic Lemma) Let X_1, ..., X_n be arbitrary random variables. Let Y_1, ..., Y_n be 0-1 random variables, where Y_i =
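The process being analysed can be simulated directly. Assuming the two-choice model (each ball picks the lighter of two uniformly random bins, which is what the squared probability bound above corresponds to), the maximum load stays O(log log n), in contrast to Θ(log n / log log n) for a single uniform choice. The sketch below is illustrative of the model, not part of the proof.

```python
import random

def two_choice_max_load(n, rng=None):
    """Throw n balls into n bins; each ball goes to the lighter of two
    uniformly random bins. Returns the maximum load (height of the
    tallest bin). Expected to be O(log log n) for the two-choice rule.
    """
    rng = rng or random.Random(0)
    loads = [0] * n
    for _ in range(n):
        a, b = rng.randrange(n), rng.randrange(n)
        # The ball's height is the new load of the chosen (lighter) bin.
        if loads[a] <= loads[b]:
            loads[a] += 1
        else:
            loads[b] += 1
    return max(loads)

print(two_choice_max_load(100000))  # tiny max load even for large n
```

Running this for, say, n = 100000 typically gives a maximum load of about 4, which is what the ν_k / μ_k induction above quantifies.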