Results 11 - 20 of 146,676
Table 2: Average mobility and number of basins visited per trial after 25,000 evaluations on the Rana and Schwefel 5D functions.
"... In PAGE 5: ... That is, a higher precision search algorithm will move further down the ridge by taking smaller steps along the ridge direction [7]. Various measurements for both the 5-D Rana and Schwe- fel functions are listed in Table2 . For each function, CHC-10 and CHC-20 visit signi cantly more basins of attraction and have signi cantly higher mobility than either local search (LS-10 and LS-20) or CMA-ES (CMA-200 and CMA-500).... ..."
Table 1: Default parameter setting for the (μ_W, λ)-CMA-ES.
2001
"... In PAGE 18: ...1 Parameter Setting Besides population size a4 and parent number a1 , the strategy parameters a2 a50 a27 a2a23a22a24a22a23a22a25a2 a50 a48 a8 , a20 a11 , a20 a11 a1a0a1a2 , a20 a1 , and a0 a1 , connected to Equations (13), (14), (15), (16), and (17), respectively, have to be chosen.16 The default parameter settings are summarized in Table1 . In general, the selection related parameters a1 , a4 , and a50 a27 a2a23a22a24a22a23a22a23a2 a50 a48 are comparatively uncritical and can be chosen in a wide range without disturbing the adaptation procedure.... In PAGE 18: ... By de nition, all weights are greater than zero. In real world applications, the default settings from Table1 are good rst guesses. Only for a1 a12 a1 a44 does the default value yield a4 a47 a1 .... In PAGE 19: ... The optimal recombination weights depend on the function to be optimized, and it remains an open question whether the a2 a1 a4a7 a2 a4 a8 or the a2 a1 a11 a2 a4 a8 scheme performs better overall (using the default parameters accordingly). If overall simulation time does not substantially exceed, say, a11 a1 generations, a0 a1 should be chosen smaller than in Table1 , e.g.... In PAGE 23: ... 7 Simulation Results Four different evolution strategies are experimentally investigated. a0 a2 a1 a27a7 a2 a4 a8 -CMA-ES, where the default parameter setting from Table1 is used apart from a4 and a1 as given below, and a50 a16 a19 a1 for all a33 a19 a1a39a2a24a22a23a22a23a22a23a2 a1 . To reduce the compu- tational effort, the update of a21 a7a10a9a12a11 and a23 a7a10a9a12a11 from a3 a7a10a9a12a11 for Equations (13), (14), and (16) is done every a41 a1 generations in all simulations.... In PAGE 25: ... This selection scheme is not recommended for the CMA-ES, where a1 a19 a2 (see Section 5.1, in particular Table1 and 2). The recommended a2 a3 a11 a2 a7 a8 -CMA-ES performs roughly ten times faster (compare also Figure 10).... ..."
Cited by 116
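To make the parameter list in the excerpt above concrete, the sketch below computes one commonly published set of default strategy parameters for a (μ_W, λ)-style CMA-ES: population size λ, parent number μ, recombination weights w_1, ..., w_μ, and the rates c_σ, d_σ, c_c, c_cov. The constants used here are illustrative assumptions in the style of standard CMA-ES recommendations, not the exact values of the paper's Table 1.

import math

def cma_default_parameters(n, lam=None):
    """Illustrative default strategy parameters for a (mu_W, lambda)-CMA-ES.

    Follows commonly published recommendations (lambda = 4 + floor(3*ln n),
    mu = floor(lambda/2), positive log-decreasing recombination weights);
    the exact constants in the paper's Table 1 may differ."""
    lam = lam if lam is not None else 4 + int(3 * math.log(n))   # offspring number lambda
    mu = lam // 2                                                # parent number mu
    # Positive, decreasing recombination weights w_1 >= ... >= w_mu > 0, normalized to sum to one.
    raw = [math.log(mu + 0.5) - math.log(i) for i in range(1, mu + 1)]
    total = sum(raw)
    weights = [w / total for w in raw]
    mu_eff = 1.0 / sum(w * w for w in weights)                   # variance-effective selection mass
    c_sigma = (mu_eff + 2) / (n + mu_eff + 3)                    # step-size cumulation constant
    d_sigma = 1 + c_sigma                                        # step-size damping (simplified illustrative choice)
    c_c = 4.0 / (n + 4)                                          # covariance path cumulation constant
    c_cov = 2.0 / ((n + 1.3) ** 2 + mu_eff)                      # covariance learning rate (rank-one style)
    return dict(lam=lam, mu=mu, weights=weights, mu_eff=mu_eff,
                c_sigma=c_sigma, d_sigma=d_sigma, c_c=c_c, c_cov=c_cov)

print(cma_default_parameters(10))

As the excerpt notes, the selection-related parameters (μ, λ, and the weights) tolerate a wide range of values, which is why a simple closed-form recipe like the one above is usually adequate as a first guess.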
Table 4. Regression performed on variables from the work modularity graph.
2003
Cited by 13
Table 1. Default parameters for the (1+λ)-CMA Evolution Strategy.
2005
"... In PAGE 6: ... Strategy Parameters The (external) strategy parameters are o spring number , target success probability ptarget succ , step size damping d, success rate averaging parameter cp, cu- mulation time horizon parameter cc, and covariance matrix learning rate ccov. Default values are given in Table1 . Most default values are derived from the precursor algorithms and validated by sketchy simulations on simple test functions: the target success rate is close to the well-known 1=5 and depends on , because the optimal success rate in the (1+ )-ES certainly decreases with increasing .... In PAGE 6: ... The parameters for the covariance matrix adaptation are similar to those for the (1, )-CMA-ES. Initialization The elements of the initial individual, a(0) parent are set to psucc = ptarget succ , pc = 0, and C = I, where ptarget succ is given in Table1... ..."
Table 2. Single-objective test functions to be minimized, where y = Ox and O is an orthogonal matrix, implementing an angle-preserving linear transformation
2005
"... In PAGE 7: ... 2.2 Simulation of the (1+ )-CMA-ES Test functions To validate essential properties of the search algorithm we use the single- objective test problems summarized in Table2 . The linear function flinear tests the ability and the speed to increase the step size .... In PAGE 7: ... Methods We conducted 51 runs for each function and each dimension. The initial candidate solution x is chosen uniformly randomly in the initial region from Table2 , and the initial = 3 is half of the width of the initial interval. Excepting flinear, the simulation is stopped when function value di erences do not exceed 10 12 or when the function value becomes smaller than the target function value 10 9.... ..."
Table 6: Performance of modular exponentiation algorithms for |n| = 512 (in msec)
"... In PAGE 10: ... Except for the classical algorithm, all other reduction algorithms require more or less precom- putations based on the modulus. The running times for modular reduction shown in Table 2 Table 5 do not include such precomputation time, while the running times for modular exponenti- ation shown in Table6 Table 9 do include the precomputation time. For exponentiation we used the window algorithm.... ..."
Cited by 1
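The excerpt states that the window algorithm was used for exponentiation and that its precomputation is counted in the exponentiation timings. The sketch below shows a generic fixed-window (2^w-ary) modular exponentiation with its table precomputation; it illustrates the technique in general, not the authors' implementation.

def window_pow_mod(base, exponent, modulus, w=4):
    """Fixed-window (2^w-ary) modular exponentiation: precompute the small
    powers of the base once, then process the exponent w bits at a time."""
    if modulus == 1:
        return 0
    # Precomputation depending on the base: base^0 .. base^(2^w - 1) mod n.
    table = [1] * (1 << w)
    for i in range(1, 1 << w):
        table[i] = (table[i - 1] * base) % modulus
    # Split the exponent into w-bit digits, least significant first.
    digits = []
    e = exponent
    while e:
        digits.append(e & ((1 << w) - 1))
        e >>= w
    result = 1
    for digit in reversed(digits):       # process digits most significant first
        for _ in range(w):                # result <- result^(2^w) mod n
            result = (result * result) % modulus
        result = (result * table[digit]) % modulus
    return result

assert window_pow_mod(7, 123456789, 10**9 + 7) == pow(7, 123456789, 10**9 + 7)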
Table 5: Performance of modular reduction algorithms for |n| = 2048 (in microsec)
"... In PAGE 10: ... Except for the classical algorithm, all other reduction algorithms require more or less precom- putations based on the modulus. The running times for modular reduction shown in Table 2 Table5 do not include such precomputation time, while the running times for modular exponenti- ation shown in Table 6 Table 9 do include the precomputation time. For exponentiation we used the window algorithm.... In PAGE 10: ... The reduction method using the Karatsuba multiplication in part, L2, shows almost the best performance in every case. The advantage of L2 becomes substantial when the size of modulus increases (see Table5 and Table 9). 4This means that for a large n it might be better to implement the Montgomery algorithm using two multiplications as in the original description.... ..."
Cited by 1
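The split between per-modulus precomputation and the reduction step itself is easy to see in Montgomery reduction: computing R and n' = -n^{-1} mod R depends only on the modulus, while the REDC step afterwards needs no division by n. The sketch below is a generic illustration with a made-up odd modulus, not the paper's code.

def montgomery_setup(n, k=None):
    """Per-modulus precomputation for Montgomery reduction of an odd modulus n:
    R = 2^k > n and n' = -n^{-1} mod R (the kind of precomputation the excerpt
    says is excluded from the reduction timings)."""
    assert n % 2 == 1, "Montgomery reduction requires an odd modulus"
    k = k if k is not None else n.bit_length()
    R = 1 << k
    n_prime = (-pow(n, -1, R)) % R        # n * n' == -1 (mod R)
    return R, k, n_prime

def montgomery_reduce(T, n, R, k, n_prime):
    """REDC: compute T * R^{-1} mod n for 0 <= T < n*R using only shifts,
    masks, and multiplications (no trial division by n)."""
    m = ((T & (R - 1)) * n_prime) & (R - 1)   # m = (T mod R) * n' mod R
    t = (T + m * n) >> k                      # exact division by R
    return t - n if t >= n else t

# Example: multiply two residues in Montgomery form and recover the ordinary product.
n = 0xC96F1F3B4D2B6A8F | 1                    # hypothetical odd modulus
R, k, n_prime = montgomery_setup(n)
a, b = 123456789 % n, 987654321 % n
aR, bR = (a * R) % n, (b * R) % n             # convert operands into Montgomery form
abR = montgomery_reduce(aR * bR, n, R, k, n_prime)
assert montgomery_reduce(abR, n, R, k, n_prime) == (a * b) % n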
Table 9: Performance of modular exponentiation algorithms for |n| = 2048 (in msec)
"... In PAGE 9: ... Partial assembly language implementations are also done for PCs. Table 1 Table9 show our implementation results. The following notations are used in the tables: Machines and languages: { S20/60/C: implementation by C on SPARC20/60MHz { US/167/C: implementation by C on ULTRASPARC/167MHz... In PAGE 10: ... Except for the classical algorithm, all other reduction algorithms require more or less precom- putations based on the modulus. The running times for modular reduction shown in Table 2 Table 5 do not include such precomputation time, while the running times for modular exponenti- ation shown in Table 6 Table9 do include the precomputation time. For exponentiation we used the window algorithm.... ..."
Cited by 1
Table 4: Performance of modular reduction algorithms for |n| = 1024 (in microsec)
"... In PAGE 10: ...g., the Montgomery algorithm implemented in C runs almost two times slower than multiplication for jnj = 1024, see Table 1 and Table4 )4. The di erences are bigger and bigger as the size of modulus increases.... ..."
Cited by 1