Results 1–10 of 170
Non-Uniform Random Variate Generation
, 1986
"... Abstract. This is a survey of the main methods in nonuniform random variate generation, and highlights recent research on the subject. Classical paradigms such as inversion, rejection, guide tables, and transformations are reviewed. We provide information on the expected time complexity of various ..."
Abstract

Cited by 620 (21 self)
 Add to MetaCart
Abstract. This is a survey of the main methods in nonuniform random variate generation, and highlights recent research on the subject. Classical paradigms such as inversion, rejection, guide tables, and transformations are reviewed. We provide information on the expected time complexity of various algorithms, before addressing modern topics such as indirectly specified distributions, random processes, and Markov chain methods.
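The first two paradigms named in this abstract are easy to see side by side. Below is a minimal Python sketch (mine, not the survey's; the normal/Cauchy pairing and the envelope constant are illustrative) that samples a standard normal by rejection from a Cauchy proposal, itself generated by inversion of the Cauchy CDF:

import math
import random

def rejection_sample(f, g_sample, g_pdf, M):
    # Draw one variate from density f by rejection: propose x ~ g and
    # accept with probability f(x) / (M * g(x)); requires f <= M * g.
    while True:
        x = g_sample()
        if random.random() * M * g_pdf(x) <= f(x):
            return x

# Target: standard normal density.
f = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
# Proposal: standard Cauchy, sampled by inversion of its CDF.
g_pdf = lambda x: 1.0 / (math.pi * (1.0 + x * x))
g_sample = lambda: math.tan(math.pi * (random.random() - 0.5))
# Envelope constant: sup_x f(x) / g(x) = sqrt(2 * pi / e), about 1.52.
M = math.sqrt(2 * math.pi / math.e)

draw = rejection_sample(f, g_sample, g_pdf, M)

Each accepted draw costs M (about 1.52) proposals on average, which is exactly the kind of expected-time quantity such surveys tabulate.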
Quantum complexity theory
 in Proc. 25th Annual ACM Symposium on Theory of Computing, ACM
, 1993
"... Abstract. In this paper we study quantum computation from a complexity theoretic viewpoint. Our first result is the existence of an efficient universal quantum Turing machine in Deutsch’s model of a quantum Turing machine (QTM) [Proc. Roy. Soc. London Ser. A, 400 (1985), pp. 97–117]. This constructi ..."
Abstract

Cited by 482 (5 self)
 Add to MetaCart
Abstract. In this paper we study quantum computation from a complexity theoretic viewpoint. Our first result is the existence of an efficient universal quantum Turing machine in Deutsch’s model of a quantum Turing machine (QTM) [Proc. Roy. Soc. London Ser. A, 400 (1985), pp. 97–117]. This construction is substantially more complicated than the corresponding construction for classical Turing machines (TMs); in fact, even simple primitives such as looping, branching, and composition are not straightforward in the context of quantum Turing machines. We establish how these familiar primitives can be implemented and introduce some new, purely quantum mechanical primitives, such as changing the computational basis and carrying out an arbitrary unitary transformation of polynomially bounded dimension. We also consider the precision to which the transition amplitudes of a quantum Turing machine need to be specified. We prove that O(log T) bits of precision suffice to support a T step computation. This justifies the claim that the quantum Turing machine model should be regarded as a discrete model of computation and not an analog one. We give the first formal evidence that quantum Turing machines violate the modern (complexity theoretic) formulation of the Church–Turing thesis. We show the existence of a problem, relative to an oracle, that can be solved in polynomial time on a quantum Turing machine, but requires superpolynomial time on a bounded-error probabilistic Turing machine, and thus is not in the class BPP. The class BQP of languages that are efficiently decidable (with small error probability) on a quantum Turing machine satisfies BPP ⊆ BQP ⊆ P^#P. Therefore, there is no possibility of giving a mathematical proof that quantum Turing machines are more powerful than classical probabilistic Turing machines (in the unrelativized setting) unless there is a major breakthrough in complexity theory.
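The O(log T) precision claim rests on the fact that operator-norm errors of norm-preserving maps accumulate at most additively; a back-of-the-envelope version (my paraphrase, not the paper's proof) reads:

\[
\bigl\| \tilde{U}_T \cdots \tilde{U}_1 - U_T \cdots U_1 \bigr\|
\;\le\; \sum_{t=1}^{T} \bigl\| \tilde{U}_t - U_t \bigr\|
\;\le\; T\varepsilon,
\]

so choosing per-step accuracy \(\varepsilon = \delta / T\), i.e. specifying each transition amplitude to \(O(\log(T/\delta)) = O(\log T)\) bits, keeps the final-state error below any fixed \(\delta\).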
Sequential Monte Carlo Methods for Dynamic Systems
 Journal of the American Statistical Association
, 1998
"... A general framework for using Monte Carlo methods in dynamic systems is provided and its wide applications indicated. Under this framework, several currently available techniques are studied and generalized to accommodate more complex features. All of these methods are partial combinations of three ..."
Abstract

Cited by 453 (8 self)
 Add to MetaCart
A general framework for using Monte Carlo methods in dynamic systems is provided and its wide applications indicated. Under this framework, several currently available techniques are studied and generalized to accommodate more complex features. All of these methods are partial combinations of three ingredients: importance sampling and resampling, rejection sampling, and Markov chain iterations. We provide a guideline on how they should be used and under what circumstances each method is most suitable. Through the analysis of differences and connections, we consolidate these methods into a generic algorithm by combining desirable features. In addition, we propose a general use of Rao-Blackwellization to improve performance. Examples from econometrics and engineering are presented to demonstrate the importance of Rao-Blackwellization and to compare different Monte Carlo procedures. Keywords: Blind deconvolution; Bootstrap filter; Gibbs sampling; Hidden Markov model; Kalman filter; Markov...
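For concreteness, here is how the first ingredient works in the simplest member of this family, the bootstrap filter: propagate particles through the state equation, reweight by the observation likelihood, then resample. A minimal Python sketch (mine; the model functions and names are illustrative, not from the paper):

import math
import random

def sir_step(particles, weights, transition_sample, likelihood, y):
    # One sequential importance resampling (bootstrap filter) step:
    # propagate each particle through the state transition (proposal),
    proposed = [transition_sample(x) for x in particles]
    # reweight by the likelihood of the new observation y,
    w = [wi * likelihood(y, x) for wi, x in zip(weights, proposed)]
    total = sum(w)
    w = [wi / total for wi in w]
    # and resample to fight weight degeneracy (multinomial resampling).
    resampled = random.choices(proposed, weights=w, k=len(proposed))
    return resampled, [1.0 / len(resampled)] * len(resampled)

# Toy usage: random-walk state with Gaussian observation noise.
def transition_sample(x): return x + random.gauss(0.0, 1.0)
def likelihood(y, x): return math.exp(-0.5 * (y - x) ** 2)
particles = [random.gauss(0.0, 1.0) for _ in range(100)]
weights = [0.01] * 100
particles, weights = sir_step(particles, weights, transition_sample, likelihood, y=0.3)

In this setting, Rao-Blackwellization would replace part of each particle's state with an exactly computable conditional (e.g. a Kalman filter step), reducing Monte Carlo variance.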
On the Length of Programs for Computing Finite Binary Sequences
 Journal of the ACM
, 1966
"... The use of Turing machines for calculating finite binary sequences is studied from the point of view of information theory and the theory of recursive functions. Various results are obtained concerning the number of instructions in programs. A modified form of Turing machine is studied from the same ..."
Abstract

Cited by 226 (7 self)
 Add to MetaCart
The use of Turing machines for calculating finite binary sequences is studied from the point of view of information theory and the theory of recursive functions. Various results are obtained concerning the number of instructions in programs. A modified form of Turing machine is studied from the same point of view. An application to the problem of defining a patternless sequence is proposed in terms of the concepts developed here. Introduction. In this paper the Turing machine is regarded as a general purpose computer and some practical questions are asked about programming it. Given an arbitrary finite binary sequence, what is the length of the shortest program for calculating it? What are the properties of those binary sequences of a given length which require the longest programs? Do most of the binary sequences of a given length require programs of about the same length? The questions posed above are answered in Part 1. In the course of answering them, the logical ...
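The flavor of the answers to these questions can be glimpsed from a counting argument (a standard observation, stated here in modern notation rather than the paper's):

\[
\#\{\text{binary programs of length} < n\} \;\le\; \sum_{k=0}^{n-1} 2^{k} \;=\; 2^{n} - 1 \;<\; 2^{n},
\]

so at least one sequence of length \(n\) has no program shorter than itself, and fewer than a \(2^{-c}\) fraction of the length-\(n\) sequences can be computed by programs shorter than \(n - c\): most sequences do require programs of roughly their own length.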
Unbiased Bits from Sources of Weak Randomness and Probabilistic Communication Complexity
, 1988
"... , Introduction and References only) Benny Chor Oded Goldreich MIT \Gamma Laboratory for Computer Science Cambridge, Massachusetts 02139 ABSTRACT \Gamma A new model for weak random physical sources is presented. The new model strictly generalizes previous models (e.g. the Santha and Vazirani model [2 ..."
Abstract

Cited by 182 (4 self)
 Add to MetaCart
(Introduction and References only.) Benny Chor, Oded Goldreich, MIT Laboratory for Computer Science, Cambridge, Massachusetts 02139. ABSTRACT: A new model for weak random physical sources is presented. The new model strictly generalizes previous models (e.g. the Santha and Vazirani model [24]). The sources considered output strings according to probability distributions in which no single string is too probable. The new model provides a fruitful viewpoint on problems studied previously, such as:
• Extracting almost perfect bits from sources of weak randomness: the question of possibility as well as the question of efficiency of such extraction schemes are addressed.
• Probabilistic communication complexity: it is shown that most functions have linear communication complexity in a very strong probabilistic sense.
• Robustness of BPP with respect to sources of weak randomness (generalizing a result of Vazirani and Vazirani [27]).
The paper has appeared in SIAM Journal o...
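A representative extraction scheme in this two-source line of work is the GF(2) inner product of samples drawn from two independent weak sources. The Python sketch below shows the flavor of such extractors; it is illustrative, not the paper's exact construction or parameters:

def inner_product_bit(x, y):
    # One extracted bit: the GF(2) inner product of two equal-length
    # bit strings drawn from two *independent* weak sources. With enough
    # min-entropy in each source, the bit is close to unbiased.
    assert len(x) == len(y)
    bit = 0
    for a, b in zip(x, y):
        bit ^= int(a) & int(b)
    return bit

print(inner_product_bit("10110", "01101"))  # -> 1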
Computational Complexity: A Modern Approach
, 2009
"... Not to be reproduced or distributed without the authors ’ permissioniiTo our wives — Silvia and RavitivAbout this book Computational complexity theory has developed rapidly in the past three decades. The list of surprising and fundamental results proved since 1990 alone could fill a book: these incl ..."
Abstract

Cited by 151 (2 self)
 Add to MetaCart
Not to be reproduced or distributed without the authors' permission. To our wives, Silvia and Ravit. About this book: Computational complexity theory has developed rapidly in the past three decades. The list of surprising and fundamental results proved since 1990 alone could fill a book: these include new probabilistic definitions of classical complexity classes (IP = PSPACE and the PCP Theorems) and their implications for the field of approximation algorithms; Shor's algorithm to factor integers using a quantum computer; an understanding of why current approaches to the famous P versus NP question will not be successful; a theory of derandomization and pseudorandomness based upon computational hardness; and beautiful constructions of pseudorandom objects such as extractors and expanders. This book aims to describe such recent achievements of complexity theory in the context of more classical results. It is intended to serve both as a textbook and as a reference for self-study. This means it must simultaneously cater to many audiences, and it is carefully designed with that goal. We assume essentially no computational background and very minimal mathematical background, which we review in Appendix A. We have also provided a web site for this book at
Random number generation
"... Random numbers are the nuts and bolts of simulation. Typically, all the randomness required by the model is simulated by a random number generator whose output is assumed to be a sequence of independent and identically distributed (IID) U(0, 1) random variables (i.e., continuous random variables dis ..."
Abstract

Cited by 136 (30 self)
 Add to MetaCart
Random numbers are the nuts and bolts of simulation. Typically, all the randomness required by the model is simulated by a random number generator whose output is assumed to be a sequence of independent and identically distributed (IID) U(0, 1) random variables (i.e., continuous random variables distributed uniformly over the interval (0, 1)).
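The generators this entry concerns produce such a sequence deterministically. A minimal Python illustration (the constants are those of the classical MINSTD linear congruential generator; the code itself is my sketch, not from the text):

def lcg_uniform(seed, n, a=16807, m=2**31 - 1):
    # Linear congruential generator x_{k+1} = a * x_k mod m, returned
    # as x_k / m in (0, 1). Entirely deterministic given the seed: it
    # only *imitates* an IID U(0, 1) sequence.
    x = seed
    out = []
    for _ in range(n):
        x = (a * x) % m
        out.append(x / m)
    return out

print(lcg_uniform(seed=12345, n=3))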
Simulating BPP Using a General Weak Random Source
 Algorithmica
, 1996
"... We show how to simulate BPP and approximation algorithms in polynomial time using the output from a ffisource. A ffisource is a weak random source that is asked only once for R bits, and must output an Rbit string according to some distribution that places probability no more than 2 \GammaffiR on ..."
Abstract

Cited by 106 (19 self)
 Add to MetaCart
We show how to simulate BPP and approximation algorithms in polynomial time using the output from a δ-source. A δ-source is a weak random source that is asked only once for R bits, and must output an R-bit string according to some distribution that places probability no more than 2^(−δR) on any particular string. We also give an application to the unapproximability of Max Clique.
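In this notation, a distribution D over R-bit strings is a δ-source exactly when max_s D(s) ≤ 2^(−δR). A tiny Python sketch of that condition (names illustrative, not from the paper):

def is_delta_source(dist, R, delta):
    # dist maps R-bit strings to probabilities; a delta-source puts
    # probability at most 2^(-delta * R) on every single string.
    return max(dist.values()) <= 2.0 ** (-delta * R)

# The uniform distribution on {0,1}^R is a delta-source for any delta <= 1.
R = 3
uniform = {format(i, f"0{R}b"): 1 / 2**R for i in range(2**R)}
print(is_delta_source(uniform, R, delta=1.0))  # True: 1/8 <= 2^-3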
Cost-Sensitive Learning by Cost-Proportionate Example Weighting
, 2003
"... We propose and evaluate a family of methods for converting classifier learning algorithms and classification theory into costsensitive algorithms and theory. The proposed conversion is based on costproportionate weighting of the training examples, which can be realized either by feeding the weight ..."
Abstract

Cited by 106 (13 self)
 Add to MetaCart
We propose and evaluate a family of methods for converting classifier learning algorithms and classification theory into cost-sensitive algorithms and theory. The proposed conversion is based on cost-proportionate weighting of the training examples, which can be realized either by feeding the weights to the classification algorithm (as often done in boosting), or by careful subsampling. We give some theoretical performance guarantees on the proposed methods, as well as empirical evidence that they are practical alternatives to existing approaches. In particular, we propose costing, a method based on cost-proportionate rejection sampling and ensemble aggregation, which achieves excellent predictive performance on two publicly available datasets, while drastically reducing the computation required by other methods.
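The rejection-sampling half of the costing idea is short enough to sketch. In the Python version below (a sketch assuming the normalizer is the maximum cost; variable names are mine), each example survives independently with probability proportional to its cost, and an ordinary cost-insensitive learner is then trained on the survivors:

import random

def cost_proportionate_sample(examples, costs):
    # Keep example i with probability costs[i] / max(costs). The kept
    # set is distributed as if each example had been weighted by its
    # cost, so a cost-insensitive learner trained on it minimizes
    # expected cost.
    z = max(costs)
    return [ex for ex, c in zip(examples, costs) if random.random() < c / z]

# One draw; costing repeats this, trains a classifier on each
# subsample, and aggregates the ensemble.
examples = [("x1", 0), ("x2", 1), ("x3", 1)]
costs = [0.1, 5.0, 2.5]
subsample = cost_proportionate_sample(examples, costs)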
Metropolized Independent Sampling with Comparisons to Rejection Sampling and Importance Sampling
, 1996
"... this paper, a special MetropolisHastings type algorithm, Metropolized independent sampling, proposed firstly in Hastings (1970), is studied in full detail. The eigenvalues and eigenvectors of the corresponding Markov chain, as well as a sharp bound for the total variation distance between the nth ..."
Abstract

Cited by 96 (3 self)
 Add to MetaCart
In this paper, a special Metropolis-Hastings type algorithm, Metropolized independent sampling, first proposed in Hastings (1970), is studied in full detail. The eigenvalues and eigenvectors of the corresponding Markov chain, as well as a sharp bound for the total variation distance between the nth updated distribution and the target distribution, are provided. Furthermore, the relationships between this scheme, rejection sampling, and importance sampling are studied with emphasis on their relative efficiencies. It is shown that Metropolized independent sampling is superior to rejection sampling in two aspects: asymptotic efficiency and ease of computation. Key Words: Coupling, Delta method, Eigen analysis, Importance ratio.
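For reference, the scheme under study has a very short generic form: propose y ~ g independently of the current state x and accept with probability min{1, w(y)/w(x)}, where w = π/g is the importance ratio. A minimal Python sketch (generic, not the paper's notation):

import math
import random

def metropolized_independence_sampler(pi, g_sample, g_pdf, x0, n):
    # Independence Metropolis-Hastings: proposals are drawn from g
    # independently of the current state; acceptance uses the ratio of
    # importance ratios w = pi / g, so pi may be unnormalized.
    chain = [x0]
    x, w_x = x0, pi(x0) / g_pdf(x0)
    for _ in range(n):
        y = g_sample()
        w_y = pi(y) / g_pdf(y)
        if random.random() < min(1.0, w_y / w_x):
            x, w_x = y, w_y
        chain.append(x)
    return chain

# Toy usage: unnormalized N(0,1) target with a standard Cauchy proposal.
pi = lambda x: math.exp(-0.5 * x * x)
g_pdf = lambda x: 1.0 / (math.pi * (1.0 + x * x))
g_sample = lambda: math.tan(math.pi * (random.random() - 0.5))
chain = metropolized_independence_sampler(pi, g_sample, g_pdf, x0=0.0, n=1000)

Unlike rejection sampling, no envelope constant is needed just to run the chain, though the convergence rate the paper analyzes degrades as sup w grows.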