Results 1-10 of 510
Designing Efficient And Accurate Parallel Genetic Algorithms
, 1999
"... Parallel implementations of genetic algorithms (GAs) are common, and, in most cases, they succeed to reduce the time required to find acceptable solutions. However, the effect of the parameters of parallel GAs on the quality of their search and on their efficiency are not well understood. This insuf ..."
Abstract

Cited by 222 (5 self)
Parallel implementations of genetic algorithms (GAs) are common, and, in most cases, they succeed in reducing the time required to find acceptable solutions. However, the effects of the parameters of parallel GAs on the quality of their search and on their efficiency are not well understood. This insufficient knowledge limits our ability to design fast and accurate parallel GAs that reach the desired solutions in the shortest time possible. The goal of this dissertation is to advance the understanding of parallel GAs and to provide rational guidelines for their design. The research reported here considered three major types of parallel GAs: simple master-slave algorithms with one population, more sophisticated algorithms with multiple populations, and a hierarchical combination of the first two types. The investigation formulated simple models that accurately predict the quality of the solutions with different parameter settings. The quality predictors were transformed into population-sizing equations, which in turn were used to estimate the execution time of the algorithms.
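The master-slave variant described in this abstract parallelizes only the fitness evaluations while selection and recombination stay sequential. A minimal sketch of that structure; the OneMax fitness, tournament selection, uniform crossover, and all parameter values are illustrative assumptions, not taken from the dissertation:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def onemax(bits):
    # Toy fitness: count of 1-bits (stands in for an expensive evaluation).
    return sum(bits)

def master_slave_generation(pop, fitness, workers=4, rng=None):
    # Master: farm out fitness evaluations to workers (threads here for
    # simplicity; a real master-slave GA would use processes or MPI ranks),
    # then apply 2-tournament selection and uniform crossover sequentially.
    rng = rng or random.Random(0)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(fitness, pop))
    def tourney():
        i, j = rng.randrange(len(pop)), rng.randrange(len(pop))
        return pop[i] if scores[i] >= scores[j] else pop[j]
    nxt = []
    for _ in range(len(pop)):
        a, b = tourney(), tourney()
        nxt.append([x if rng.random() < 0.5 else y for x, y in zip(a, b)])
    return nxt

rng = random.Random(1)
pop = [[rng.randint(0, 1) for _ in range(20)] for _ in range(30)]
avg0 = sum(map(onemax, pop)) / len(pop)
for _ in range(10):
    pop = master_slave_generation(pop, onemax, rng=rng)
avg1 = sum(map(onemax, pop)) / len(pop)
best = max(map(onemax, pop))
```

Because the population is unchanged apart from where fitness is computed, this variant searches exactly like a sequential GA; the speedup question reduces to balancing evaluation time against communication cost.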
The Gambler's Ruin Problem, Genetic Algorithms, and the Sizing of Populations
, 1997
"... This paper presents a model for predicting the convergence quality of genetic algorithms. The model incorporates previous knowledge about decision making in genetic algorithms and the initial supply of building blocks in a novel way. The result is an equation that accurately predicts the quality of ..."
Abstract

Cited by 210 (88 self)
This paper presents a model for predicting the convergence quality of genetic algorithms. The model incorporates previous knowledge about decision making in genetic algorithms and the initial supply of building blocks in a novel way. The result is an equation that accurately predicts the quality of the solution found by a GA using a given population size. Adjustments for different selection intensities are considered, and computational experiments demonstrate the effectiveness of the model. I. Introduction The size of the population in a genetic algorithm (GA) is a major factor in determining the quality of convergence. The question of how to choose an adequate population size for a particular domain is difficult and has puzzled GA practitioners for a long time. Hard questions are better approached using a divide-and-conquer strategy, and the population-sizing issue is no exception. In this case, we can identify two factors that influence convergence quality: the initial supply of build...
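The model's name comes from the classical gambler's ruin problem: a gambler holding x units plays unit-stake bets, winning each with probability p, until reaching either 0 or n units. A short sketch of the standard absorption probability that this style of population-sizing argument builds on (the function name and parameters are illustrative, not the paper's notation):

```python
def gamblers_ruin_success(x, n, p):
    # Probability that a gambler starting with x units reaches n before
    # going broke, winning each unit bet independently with probability p.
    if p == 0.5:
        return x / n          # fair-game limit of the formula below
    r = (1 - p) / p           # ratio of loss to win probability
    return (1 - r ** x) / (1 - r ** n)
```

In the GA analogy, the "capital" is the number of copies of a building block in the population, so a larger initial supply (a larger population) raises the chance the building block takes over rather than drifts to extinction.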
Sum Rules For Jacobi Matrices And Their Applications To Spectral Theory
 Ann. of Math
"... We discuss the proof of and systematic application of Case's sum rules for Jacobi matrices. Of special interest is a linear combination of two of his sum rules which has strictly positive terms. Among our results are a complete classification of the spectral measures of all Jacobi matrices J for whi ..."
Abstract

Cited by 99 (38 self)
We discuss the proof of and systematic application of Case's sum rules for Jacobi matrices. Of special interest is a linear combination of two of his sum rules which has strictly positive terms. Among our results are a complete classification of the spectral measures of all Jacobi matrices J for which J − J0 is Hilbert–Schmidt, and a proof of Nevai's conjecture that the Szegő condition holds if J − J0 is trace class.
Fast multiple-precision evaluation of elementary functions
 Journal of the ACM
, 1976
"... XI3STnXC'r. Let f(x) be one of the usual elementary functions (exp, log, artan, sin, cosh, etc.), and let M(n) be the number of singleprecision operations reqmred to multiply nbit integers. It is shown that f(x) can be evaluated, with relative error 0(2'), m O(M(n)log (n)) operations as n ~ ~, ..."
Abstract

Cited by 87 (5 self)
ABSTRACT. Let f(x) be one of the usual elementary functions (exp, log, arctan, sin, cosh, etc.), and let M(n) be the number of single-precision operations required to multiply n-bit integers. It is shown that f(x) can be evaluated, with relative error O(2^-n), in O(M(n) log(n)) operations as n → ∞, for any floating-point number x (with an n-bit fraction) in a suitable finite interval. From the Schönhage–Strassen bound on M(n), it follows that an n-bit approximation to f(x) may be evaluated in O(n log^2(n) log log(n)) operations. Special cases include the evaluation of constants such as π, e, and e^π. The algorithms depend on the theory of elliptic integrals, using the arithmetic-geometric mean iteration and ascending Landen transformations. KEY WORDS AND PHRASES: multiple-precision arithmetic, analytic complexity, arithmetic-geometric mean, computational complexity, elementary function, elliptic integral, evaluation of π, exponential, Landen transformation, logarithm, trigonometric function. CR CATEGORIES: 5.12, 5.15, 5.25
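The arithmetic-geometric mean iteration mentioned in the abstract is the engine behind the well-known Gauss–Legendre (Brent–Salamin) method for π, one of the constants listed as a special case. A sketch using Python's decimal module; the iteration count and guard digits are pragmatic choices, not taken from the paper:

```python
from decimal import Decimal, getcontext

def agm_pi(digits=30):
    # Gauss-Legendre / Brent-Salamin iteration: quadratic convergence,
    # roughly doubling the number of correct digits of pi per step.
    getcontext().prec = digits + 10   # extra guard digits during iteration
    a = Decimal(1)
    b = Decimal(1) / Decimal(2).sqrt()
    t = Decimal(1) / 4
    p = Decimal(1)
    for _ in range(digits.bit_length() + 2):  # enough doublings for `digits`
        a_next = (a + b) / 2
        b = (a * b).sqrt()
        t -= p * (a - a_next) ** 2
        a = a_next
        p *= 2
    pi = (a + b) ** 2 / (4 * t)
    getcontext().prec = digits
    return +pi  # unary plus rounds to the requested precision
```

Each step costs a constant number of full-precision multiplications and one square root, which is how the O(M(n) log n) bound for AGM-based evaluation arises.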
Wrapper Maintenance: A Machine Learning Approach
 Journal of Artificial Intelligence Research
, 2003
"... The proliferation of online information sources has led to an increased use of wrappers for extracting data from Web sources. While most of the previous research has focused on quick and e#cient generation of wrappers, the development of tools for wrapper maintenance has received less attention. ..."
Abstract

Cited by 62 (15 self)
The proliferation of online information sources has led to an increased use of wrappers for extracting data from Web sources. While most of the previous research has focused on quick and efficient generation of wrappers, the development of tools for wrapper maintenance has received less attention. This is an important research problem because Web sources often change in ways that prevent the wrappers from extracting data correctly. We present an efficient algorithm that learns structural information about data from positive examples alone. We describe how this information can be used for two wrapper maintenance applications: wrapper verification and reinduction. The wrapper verification system detects when a wrapper is not extracting correct data, usually because the Web source has changed its format. The reinduction algorithm automatically recovers from changes in the Web source by identifying data on Web pages so that a new wrapper may be generated for this source. To validate our approach, we monitored 27 wrappers over a period of a year.
Evaluation-relaxation schemes for genetic and evolutionary algorithms
, 2002
"... Genetic and evolutionary algorithms have been increasingly applied to solve complex, large scale search problems with mixed success. Competent genetic algorithms have been proposed to solve hard problems quickly, reliably and accurately. They have rendered problems that were difficult to solve by th ..."
Abstract

Cited by 60 (28 self)
Genetic and evolutionary algorithms have been increasingly applied to solve complex, large-scale search problems with mixed success. Competent genetic algorithms have been proposed to solve hard problems quickly, reliably, and accurately. They have rendered problems that were difficult for earlier GAs solvable, requiring only a subquadratic number of function evaluations. To facilitate solving large-scale complex problems, and to further enhance the performance of competent GAs, various efficiency-enhancement techniques have been developed. This study investigates one such class of efficiency-enhancement technique called evaluation relaxation. Evaluation-relaxation schemes replace a high-cost, low-error fitness function with a low-cost, high-error fitness function. The error in fitness functions comes in two flavors: bias and variance. The presence of bias and variance in fitness functions is considered in isolation, and strategies for increasing efficiency in both cases are developed. Specifically, approaches for choosing between two fitness functions with either differing variance or differing bias values have been developed. This thesis also investigates fitness inheritance as an evaluation ...
Structural Digital Signature for Image Authentication: An Incidental Distortion Resistant Scheme
 IEEE Trans. on Multimedia
, 2000
"... The existing digital data verification methods are able to detect regions that have been tampered with, but are too fragile to resist incidental manipulations. This paper proposes a new digital signature scheme which makes use of an image's contents (in the wavelet transform domain) to construct a s ..."
Abstract

Cited by 52 (5 self)
The existing digital data verification methods are able to detect regions that have been tampered with, but are too fragile to resist incidental manipulations. This paper proposes a new digital signature scheme which makes use of an image's contents (in the wavelet transform domain) to construct a structural digital signature (SDS) for image authentication. The characteristic of the SDS is that it can tolerate content-preserving modifications while detecting content-changing modifications. Many incidental manipulations, which were detected as malicious modifications in the previous digital signature verification or fragile watermarking schemes, can be bypassed in the proposed scheme. Performance analysis is conducted, and experimental results show that the new scheme is indeed superior for image authentication. Keywords: Digital signature, Wavelet transform, Authentication, Fragility, Robustness. The preliminary version of this paper will be published in [14] (http://smart.iis.sinica.ed...
Nonuniform fast Fourier transform
 Geophysics
, 1999
"... The nonuniform discrete Fourier transform (NDFT) can be computed with a fast algorithm, referred to as the nonuniform fast Fourier transform (NFFT). In L dimensions, the NFFT requires O(N(ln #) L + ( Q L #=1 M # ) P L #=1 log M # ) operations, where M # is the number of Fourier components ..."
Abstract

Cited by 44 (1 self)
The nonuniform discrete Fourier transform (NDFT) can be computed with a fast algorithm, referred to as the nonuniform fast Fourier transform (NFFT). In L dimensions, the NFFT requires O(N(ln 1/ε)^L + (∏_{ν=1}^{L} M_ν) ∑_{ν=1}^{L} log M_ν) operations, where M_ν is the number of Fourier components along dimension ν, N is the number of irregularly spaced samples, and ε is the required accuracy. This is a dramatic improvement over the O(N ∏_{ν=1}^{L} M_ν) operations required for the direct evaluation (NDFT). The performance of the NFFT depends on the low-pass filter used in the algorithm. A truncated Gauss pulse, proposed in the literature, is optimized. A newly proposed filter, a Gauss pulse tapered with a Hanning window, performs better than the truncated Gauss pulse and the B-spline, also proposed in the literature. For small filter length, a numerically optimized filter shows the best results. Numerical experiments for 1D and 2D implementations confirm the theoretically predicted ...
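The direct evaluation that the NFFT improves on is simply a double loop over samples and frequencies, costing O(N·M) in one dimension. A small 1-D sketch (the function name, normalization, and the assumption that positions lie in [0, 1) are illustrative choices, not the paper's conventions):

```python
import cmath

def ndft(samples, positions, m):
    # Direct nonuniform DFT: for irregularly spaced positions x_j in [0, 1),
    # evaluate m Fourier coefficients c_k = sum_j f_j * exp(-2*pi*i*k*x_j).
    # Cost is O(N*m); the NFFT replaces this with an FFT on an oversampled
    # grid plus a short low-pass filter around each sample.
    return [
        sum(f * cmath.exp(-2j * cmath.pi * k * x)
            for f, x in zip(samples, positions))
        for k in range(m)
    ]
```

When the positions happen to be the uniform grid x_j = j/N with m = N, this reduces to the ordinary DFT, which is a convenient correctness check for any fast implementation.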
Bayesian Treed Gaussian Process Models with an Application to Computer Modeling
 Journal of the American Statistical Association
, 2007
"... This paper explores nonparametric and semiparametric nonstationary modeling methodologies that couple stationary Gaussian processes and (limiting) linear models with treed partitioning. Partitioning is a simple but effective method for dealing with nonstationarity. Mixing between full Gaussian proce ..."
Abstract

Cited by 44 (15 self)
This paper explores nonparametric and semiparametric nonstationary modeling methodologies that couple stationary Gaussian processes and (limiting) linear models with treed partitioning. Partitioning is a simple but effective method for dealing with nonstationarity. Mixing between full Gaussian processes and simple linear models can yield a more parsimonious spatial model while significantly reducing computational effort. The methodological developments and statistical computing details which make this approach efficient are described in detail. Illustrations of our model are given for both synthetic and real datasets. Key words: recursive partitioning, nonstationary spatial model, nonparametric regression, Bayesian model averaging