Results 1–10 of 170
Non-Uniform Random Variate Generation
, 1986
Abstract

Cited by 646 (21 self)
Abstract. This is a survey of the main methods in nonuniform random variate generation, and highlights recent research on the subject. Classical paradigms such as inversion, rejection, guide tables, and transformations are reviewed. We provide information on the expected time complexity of various algorithms, before addressing modern topics such as indirectly specified distributions, random processes, and Markov chain methods.
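The inversion and rejection paradigms surveyed above can be sketched in a few lines (a minimal illustration, not code from the survey; the exponential and semicircle targets and all parameter values are our own choices):

```python
import math
import random

def exponential_inversion(lam, u=None):
    """Inversion: if U ~ Uniform(0,1), then -ln(1-U)/lam ~ Exponential(lam)."""
    if u is None:
        u = random.random()
    return -math.log(1.0 - u) / lam

def rejection_sample(density, bound, lo, hi):
    """Rejection: draw X uniform on [lo, hi], accept with probability
    density(X)/bound, where bound >= max density on [lo, hi]."""
    while True:
        x = random.uniform(lo, hi)
        if random.random() * bound <= density(x):
            return x

# Example target: the semicircle density f(x) = (2/pi) * sqrt(1 - x^2) on [-1, 1]
semicircle = lambda x: (2.0 / math.pi) * math.sqrt(max(0.0, 1.0 - x * x))
sample = rejection_sample(semicircle, 2.0 / math.pi, -1.0, 1.0)
```

Inversion needs the quantile function in closed form; rejection only needs a pointwise bound on the density, which is why the two paradigms complement each other.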
A Critique and Improvement of an Evaluation Metric for Text Segmentation
 Computational Linguistics
, 2002
When are Quasi-Monte Carlo Algorithms Efficient for High Dimensional Integrals?
 J. Complexity
, 1997
Abstract

Cited by 106 (19 self)
Recently quasi-Monte Carlo algorithms have been successfully used for multivariate integration of high dimension d, and were significantly more efficient than Monte Carlo algorithms. The existing theory of the worst case error bounds of quasi-Monte Carlo algorithms does not explain this phenomenon. This paper presents a partial answer to why quasi-Monte Carlo algorithms can work well for arbitrarily large d. It is done by identifying classes of functions for which the effect of the dimension d is negligible. These are weighted classes in which the behavior in the successive dimensions is moderated by a sequence of weights. We prove that the minimal worst case error of quasi-Monte Carlo algorithms does not depend on the dimension d iff the sum of the weights is finite. We also prove that under this assumption the minimal number of function values in the worst case setting needed to reduce the initial error by ε is bounded by C ε^(-p), where the exponent p ∈ [1, 2], and C depends ...
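The weighted-class idea can be illustrated with a Halton low-discrepancy sequence applied to a product integrand whose j-th coordinate enters with weight 1/j² (a toy sketch of our own, not the paper's function classes; the weights are summable, so the effective dimension stays low):

```python
import math

def van_der_corput(n, base):
    """Radical inverse of n in the given base: the 1-D building block of Halton."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        q += (n % base) * bk
        n //= base
        bk /= base
    return q

def halton_point(n, bases):
    """n-th point of a d-dimensional Halton low-discrepancy sequence."""
    return [van_der_corput(n, b) for b in bases]

def qmc_integrate(f, d, n_points):
    """Equal-weight quasi-Monte Carlo rule over [0,1]^d using Halton points."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][:d]
    return sum(f(halton_point(i, primes)) for i in range(1, n_points + 1)) / n_points

# Weighted product integrand: coordinate j contributes with weight 1/j^2,
# and each factor integrates to 1, so the exact integral over [0,1]^d is 1.
def weighted_product(x):
    prod = 1.0
    for j, xj in enumerate(x, start=1):
        prod *= 1.0 + (xj - 0.5) / (j * j)
    return prod
```

Because the later coordinates are heavily damped, the QMC estimate approaches the exact value 1 quickly even as d grows, which is the qualitative behavior the paper explains.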
Group reaction time distributions and an analysis of distribution statistics
 Psychological Bulletin
, 1979
Abstract

Cited by 84 (23 self)
A method of obtaining an average reaction time distribution for a group of subjects is described. The method is particularly useful for cases in which data from many subjects are available but there are only 10–20 reaction time observations per subject cell. Essentially, reaction times for each subject are organized in ascending order, and quantiles are calculated. The quantiles are then averaged over subjects to give group quantiles (cf. Vincent learning curves). From the group quantiles, a group reaction time distribution can be constructed. It is shown that this method of averaging is exact for certain distributions (i.e., the resulting distribution belongs to the same family as the individual distributions). Furthermore, Monte Carlo studies and application of the method to the combined data from three large experiments provide evidence that properties derived from the group reaction time distribution are much the same as average properties derived from the data of individual subjects. This article also examines how to quantitatively describe the shape of reaction time distributions. The use of moments and cumulants as sources of information about distribution shape is evaluated and rejected because of extreme dependence on long, outlier reaction times. As an alternative, the use of explicit distribution functions as approximations to reaction time distributions is considered. Despite the recent popularity of reaction time research, the use of reaction time distributions for both model testing and model development has been largely ignored. This is surprising in view of the fact that properties of distributions can prove decisive in discriminating among models (Sternberg, Note 1) and can falsify models that quite adequately describe the behavior of mean reaction time (Ratcliff & Murdock, 1976). Two methods have been used to obtain distributional or shape information. One
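The quantile-averaging ("Vincentizing") procedure the abstract describes can be sketched directly (a minimal illustration of ours; function names and the linear-interpolation quantile rule are our own choices):

```python
def quantiles_of(sorted_rts, probs):
    """Linear-interpolation quantiles of one subject's sorted reaction times."""
    n = len(sorted_rts)
    out = []
    for p in probs:
        h = p * (n - 1)
        lo = int(h)
        hi = min(lo + 1, n - 1)
        out.append(sorted_rts[lo] + (h - lo) * (sorted_rts[hi] - sorted_rts[lo]))
    return out

def vincentize(rts_per_subject, probs=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Group RT distribution: compute each subject's quantiles, then average
    the quantiles across subjects (cf. Vincent learning curves)."""
    per_subject = [quantiles_of(sorted(rts), probs) for rts in rts_per_subject]
    n_subj = len(per_subject)
    return [sum(q[i] for q in per_subject) / n_subj for i in range(len(probs))]
```

Averaging quantiles rather than raw times is what makes the method robust with only a handful of observations per subject cell.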
Estimating the Largest Eigenvalue by the Power and Lanczos Algorithms with a Random Start
, 1992
Abstract

Cited by 47 (3 self)
Our problem is to compute an approximation to the largest eigenvalue of an n × n large symmetric positive definite matrix with relative error at most ε. We consider only algorithms that use Krylov information [b, Ab, ..., A^k b] consisting of k matrix-vector multiplications for some unit vector b. If the vector b is chosen deterministically then the problem cannot be solved no matter how many matrix-vector multiplications are performed and what algorithm is used. If, however, the vector b is chosen randomly with respect to the uniform distribution over the unit sphere, then the problem can be solved on the average and probabilistically. More precisely, for a randomly chosen vector b we study the power and Lanczos algorithms. For the power algorithm (method) we prove sharp bounds on the average relative error and on the probabilistic relative failure. For the Lanczos algorithm we present only upper bounds. In particular, ln(n)/k characterizes the average relative error of ...
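The power method with a random start can be sketched as follows (a minimal illustration of ours, not the paper's analysis; the diagonal test matrix and iteration count are arbitrary choices):

```python
import math
import random

def power_method(matvec, n, k, seed=None):
    """Estimate the largest eigenvalue of a symmetric positive definite matrix
    using k matrix-vector products, starting from a random unit vector b."""
    rng = random.Random(seed)
    b = [rng.gauss(0.0, 1.0) for _ in range(n)]   # Gaussian vector, normalized:
    norm = math.sqrt(sum(x * x for x in b))       # uniform direction on the sphere
    b = [x / norm for x in b]
    est = 0.0
    for _ in range(k):
        ab = matvec(b)
        est = sum(x * y for x, y in zip(b, ab))   # Rayleigh quotient b^T A b
        norm = math.sqrt(sum(x * x for x in ab))
        b = [x / norm for x in ab]
    return est

# Example: A = diag(1, 2, ..., n), whose largest eigenvalue is n.
n = 50
diag_matvec = lambda v: [(i + 1) * v[i] for i in range(n)]
```

A deterministic b orthogonal to the top eigenvector would never converge, which is exactly why the random start matters in the paper's setting.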
Optimization by learning and simulation of Bayesian and Gaussian networks
, 1999
Abstract

Cited by 43 (6 self)
Estimation of Distribution Algorithms (EDAs) constitute an example of stochastic heuristics based on populations of individuals, each of which encodes a possible solution to the optimization problem. These populations of individuals evolve over successive generations as the search progresses, organized in the same way as most evolutionary computation heuristics. In contrast to most evolutionary computation paradigms, which consider the crossover and mutation operators essential tools for generating new populations, EDAs replace those operators with the estimation and simulation of the joint probability distribution of the selected individuals. In this work, after reviewing the different approaches based on EDAs for problems of combinatorial optimization as well as for problems of optimization in continuous domains, we propose new approaches based on the theory of probabilistic graphical models to solve problems in both domains. More precisely, we propose to adapt algorit...
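The simplest member of this family, the Univariate Marginal Distribution Algorithm (UMDA), shows the estimate-and-sample loop that replaces crossover and mutation (a sketch of ours on the OneMax toy problem, not the graphical-model approaches the thesis proposes; all parameters are arbitrary):

```python
import random

def umda_onemax(n_bits=20, pop=100, select=50, gens=30, seed=0):
    """UMDA: estimate per-bit marginal probabilities from the selected
    individuals, then sample the next population from those marginals."""
    rng = random.Random(seed)
    p = [0.5] * n_bits                        # initial marginal model
    best = None
    for _ in range(gens):
        popn = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
                for _ in range(pop)]
        popn.sort(key=sum, reverse=True)      # fitness = number of ones (OneMax)
        elite = popn[:select]
        if best is None or sum(popn[0]) > sum(best):
            best = popn[0]
        # re-estimate marginals from the elite, clamped away from 0 and 1
        p = [min(0.95, max(0.05, sum(ind[i] for ind in elite) / select))
             for i in range(n_bits)]
    return best
```

Richer EDAs differ only in the model fitted to the elite: UMDA assumes independent bits, while Bayesian- or Gaussian-network variants capture dependencies between variables.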
Ant colony optimization for continuous domains
, 2008
Abstract

Cited by 37 (5 self)
In this paper we present an extension of ant colony optimization (ACO) to continuous domains. We show how ACO, which was initially developed to be a metaheuristic for combinatorial optimization, can be adapted to continuous optimization without any major conceptual change to its structure. We present the general idea, implementation, and results obtained. We compare the results with those reported in the literature for other continuous optimization methods: other ant-related approaches and other metaheuristics initially developed for combinatorial optimization and later adapted to handle the continuous case. We discuss how our extended ACO compares to those algorithms, and we present some analysis of its efficiency and robustness.
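The core idea — replacing discrete pheromone tables with Gaussian kernels centred on an archive of good solutions — can be sketched as follows (our own simplified reading, not the authors' implementation; parameter names q and xi and all values are assumptions):

```python
import math
import random

def aco_continuous(f, dim, bounds, archive_size=10, ants=10, iters=150,
                   q=0.2, xi=0.85, seed=0):
    """Sketch of continuous ACO: keep an archive of good solutions; each ant
    picks an archive member with rank-based probability and samples each
    coordinate from a Gaussian centred there, with spread taken from the
    archive's dispersion in that coordinate."""
    rng = random.Random(seed)
    lo, hi = bounds
    archive = [[rng.uniform(lo, hi) for _ in range(dim)]
               for _ in range(archive_size)]
    archive.sort(key=f)
    # rank-based weights: better-ranked solutions are chosen more often
    w = [math.exp(-(r ** 2) / (2 * (q * archive_size) ** 2))
         for r in range(archive_size)]
    for _ in range(iters):
        new = []
        for _ in range(ants):
            j = rng.choices(range(archive_size), weights=w)[0]
            x = []
            for d in range(dim):
                spread = xi * sum(abs(s[d] - archive[j][d])
                                  for s in archive) / (archive_size - 1)
                x.append(min(hi, max(lo, rng.gauss(archive[j][d], spread + 1e-12))))
            new.append(x)
        archive = sorted(archive + new, key=f)[:archive_size]
    return archive[0]
```

Because the Gaussian spread is derived from the archive itself, the sampling contracts automatically as the archive converges, mirroring pheromone reinforcement in discrete ACO.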
Lévy Processes in Finance: Theory, Numerics, and Empirical Facts
, 2000
Abstract

Cited by 35 (2 self)
Lévy processes are an excellent tool for modelling price processes in mathematical finance. On the one hand, they are very flexible, since for any time increment ∆t any infinitely divisible distribution can be chosen as the increment distribution over periods of time ∆t. On the other hand, they have a simple structure in comparison with general semimartingales. Thus stochastic models based on Lévy processes often allow for analytically or numerically tractable formulas. This is a key factor for practical applications. This thesis is divided into two parts. The first, consisting of Chapters 1, 2, and 3, is devoted to the study of stock price models involving exponential Lévy processes. In the second part, we study term structure models driven by Lévy processes. This part is a continuation of the research that started with the author's diploma thesis Raible (1996) and the article Eberlein and Raible (1999). The content of the chapters is as follows. In Chapter 1, we study a general stock price model where the price of a single stock follows an exponential Lévy process. Chapter 2 is devoted to the study of the Lévy measure of infinitely divisible distributions, in particular of generalized hyperbolic distributions. This yields information about what changes in the distribution of a generalized hyperbolic Lévy motion can be achieved by a locally equivalent change of the underlying probability measure. Implications for
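The defining property — independent, identically distributed, infinitely divisible increments — is visible even in the simplest jump-diffusion, a Brownian motion with drift plus compound Poisson jumps (an illustration of ours, not one of the thesis's generalized hyperbolic models; all parameter values are arbitrary):

```python
import math
import random

def poisson(rng, lam):
    """Poisson variate via Knuth's product method (adequate for small lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def jump_diffusion_path(T=1.0, n_steps=250, mu=0.05, sigma=0.2,
                        jump_rate=3.0, jump_scale=0.1, seed=0):
    """Simulate X_t = mu*t + sigma*W_t + compound Poisson jumps on a time grid.
    Each increment over a step dt is independent with the same (infinitely
    divisible) distribution -- the defining property of a Lévy process."""
    rng = random.Random(seed)
    dt = T / n_steps
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        for _ in range(poisson(rng, jump_rate * dt)):
            x += rng.gauss(0.0, jump_scale)
        path.append(x)
    return path
```

In exponential Lévy models the stock price is then S_t = S_0 · exp(X_t); richer increment laws such as the generalized hyperbolic family slot into the same simulation scheme.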
Particle Swarm Optimizer In Noisy And Continuously Changing Environments
 M.H. Hamza (Ed.), Artificial Intelligence and Soft Computing, IASTED/ACTA
, 2001
Abstract

Cited by 30 (6 self)
In this paper we study the performance of the recently proposed Particle Swarm Optimization method in the presence of noisy and continuously changing environments. Experimental results for well-known and widely used optimization test functions are given and discussed. Conclusions regarding its ability to cope with such environments, as well as real-life applications, are also derived.
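A minimal particle swarm optimizer for a noisy objective can be sketched as follows (our own illustration, not the paper's setup; averaging each evaluation over several samples is one common remedy for noise, and all parameter names and values here are assumptions):

```python
import random

def pso_noisy(f_noisy, dim, bounds, n_particles=20, iters=100, seed=0,
              w=0.7, c1=1.5, c2=1.5, reeval=5):
    """Minimal PSO for a noisy objective: each fitness evaluation is averaged
    over `reeval` samples to damp the noise before updating the bests."""
    rng = random.Random(seed)
    lo, hi = bounds
    def fitness(x):
        return sum(f_noisy(x) for _ in range(reeval)) / reeval
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            fi = fitness(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest
```

For continuously changing environments one would additionally re-evaluate the stored personal and global bests each iteration, since their remembered fitness values go stale as the landscape moves.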