### Citations

7188 | Convex optimization
- Boyd, Vandenberghe
- 2004
Citation Context: ... iff x_i ∈ selected individuals, x_i^T A x_i + B^T x_i + C ≥ 1 iff x_i ∈ non-selected individuals, where ‖A‖ is the Frobenius norm of matrix A. This problem specification is taken from Boyd and Vandenberghe [3] (page 429, Quadratic discrimination). Their formulation is, however, only a feasibility problem, i.e. the solution is any quadratic discriminant function that correctly classifies the data points. Th...
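The feasibility condition in this snippet is easy to state concretely: a quadratic discriminant (A, B, C) is feasible when x^T A x + B^T x + C is at most −1 on one class and at least 1 on the other. A minimal pure-Python sketch of that check (the 2-D data and the particular A, B, C are illustrative assumptions, not values from the paper):

```python
def quad_value(A, B, C, x):
    """Evaluate x^T A x + B^T x + C for a 2-D point x."""
    quad = sum(A[i][j] * x[i] * x[j] for i in range(2) for j in range(2))
    lin = sum(B[i] * x[i] for i in range(2))
    return quad + lin + C

def separates(A, B, C, selected, non_selected):
    """Feasibility check in the spirit of Boyd & Vandenberghe's quadratic
    discrimination: value <= -1 on one class, >= 1 on the other."""
    return (all(quad_value(A, B, C, x) <= -1 for x in selected) and
            all(quad_value(A, B, C, x) >= 1 for x in non_selected))

# Illustrative data: points inside vs. outside an ellipse around the origin.
A = [[2.0, 0.0], [0.0, 2.0]]   # positive definite quadratic part
B = [0.0, 0.0]
C = -3.0
inside = [(0.0, 0.0), (0.5, 0.5)]
outside = [(2.0, 0.0), (0.0, -2.0)]
print(separates(A, B, C, inside, outside))  # prints True
```

A solver such as SeDuMi (mentioned later in this listing) searches for such an (A, B, C) subject to these constraints; the sketch above only verifies a candidate.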

581 | Evolution and Optimum Seeking
- Schwefel
- 1995
Citation Context: ...th a feedback adaptation of the step size (Rechenberg’s one-fifth rule), and self-adaptation of the step size, coordinate-wise step sizes, and self-adaptation of the whole covariance matrix (see e.g. [10]). However, Rudolph [9] showed that self-adaptive mutations can lead to premature convergence. Other algorithms that use the Gaussian distribution very often fall into the class of estimation of distribut...
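Rechenberg’s one-fifth rule mentioned here adapts the mutation step size from the observed success rate: grow the step when more than 1/5 of recent mutations improve the parent, shrink it otherwise. A pure-Python (1+1)-ES sketch; the constants (factor 0.85, window of 20) are common textbook choices, not values from the cited book:

```python
import random

def one_plus_one_es(f, x, sigma=1.0, iters=2000, seed=0):
    """(1+1)-ES minimizing f, with a one-fifth success rule sketch:
    every 20 mutations, enlarge sigma if more than 1/5 succeeded,
    otherwise shrink it."""
    rng = random.Random(seed)
    fx, successes = f(x), 0
    for t in range(1, iters + 1):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy < fx:                  # accept only improvements
            x, fx = y, fy
            successes += 1
        if t % 20 == 0:              # adapt step size every 20 mutations
            sigma *= 1 / 0.85 if successes > 4 else 0.85
            successes = 0
    return x, fx

sphere = lambda x: sum(xi * xi for xi in x)
x_best, f_best = one_plus_one_es(sphere, [5.0, -5.0])
```

On the sphere function this reliably drives the objective down from its starting value; the premature-convergence issue Rudolph [9] raises concerns self-adaptive variants, not this feedback rule.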

528 | Completely derandomized self-adaptation in evolution strategies
- Hansen, Ostermeier
Citation Context: ...ate the distribution of the selected mutation steps instead of the distribution of selected individuals (cf. Fig. 1(a) and 1(b)). Pošík [7] applied this approach in a co-evolutionary manner. Hansen et al. [4] use similar principles in the evolution strategy with covariance matrix adaptation (CMA-ES), which is considered to be the state-of-the-art in real-valued black-box optimization. It adapts the step ...

426 | YALMIP: a toolbox for modeling and optimization
- Löfberg
- 2004
Citation Context: ... problem so that the solution should be a quadratic function with maximum margin to the points from both classes. 1 The algorithm was implemented in MATLAB 7.3 with the help of the SeDuMi [11] and YALMIP [6] toolboxes. Compared to the previous algorithm presented in [8], this formulation of the problem automatically ensures the positive definiteness of matrix A (which had to be enforced ‘from outside’ in ...

146 | Pattern recognition using generalized portrait method, Autom. Remote Control
- Vapnik, Lerner
- 1963
Citation Context: ...ortion of generated vectors that lie inside the separating ellipsoid varies (drops with increasing dimensionality). 1 It is an equivalent of the maximum-margin separating hyperplane algorithm of Vapnik [12], which is learned by minimization of the weight vector of the linear decision function. The formulation in this paper thus maximizes the margin to points in a quadratically mapped feature space. ...

72 | A review on estimation of distribution algorithms
- Larrañaga
- 2002
Citation Context: ...] showed that self-adaptive mutations can lead to premature convergence. Other algorithms that use the Gaussian distribution very often fall into the class of estimation of distribution algorithms (EDAs) [5]. They select better individuals and fit the Gaussian distribution to them, usually by means of maximum likelihood estimation, which is far from ideal (see Fig. 1(a) for an example of Estimation of ...
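The select-and-refit loop described in this snippet is short enough to sketch in full: sample a population from a Gaussian, keep the best individuals, and refit mean and standard deviation per coordinate by maximum likelihood (the estimator the snippet calls "far from ideal", since the variance can shrink too fast). All parameters and the shifted-sphere objective are illustrative assumptions, not from the paper:

```python
import random
import statistics

def gaussian_eda(f, dim=2, pop=100, top=25, iters=50, seed=1):
    """Minimal univariate-Gaussian EDA sketch minimizing f."""
    rng = random.Random(seed)
    mu = [0.0] * dim
    sd = [10.0] * dim
    for _ in range(iters):
        popn = [[rng.gauss(mu[d], sd[d]) for d in range(dim)]
                for _ in range(pop)]
        popn.sort(key=f)
        best = popn[:top]                        # truncation selection
        for d in range(dim):
            col = [ind[d] for ind in best]
            mu[d] = statistics.fmean(col)        # ML estimate of the mean
            sd[d] = statistics.pstdev(col) + 1e-12  # ML (biased) std estimate
    return mu

shifted_sphere = lambda x: sum((xi - 3.0) ** 2 for xi in x)
mu = gaussian_eda(shifted_sphere)   # mean converges toward (3, 3)
```

On this easy unimodal problem the ML fit works; the premature-convergence criticism applies on slopes and ridges, where the ML variance contracts before the optimum is reached.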

27 | Self-adaptive mutations may lead to premature convergence
- Rudolph
- 2001

14 | The LEM3 system for non-Darwinian evolutionary computation and its application to complex function optimization
- Wojtusiak, Michalski
- 2006
Citation Context: ... probability model from the description of the selected individuals hidden in the learned classifier. A similar principle can be found in the Learnable Evolution Model (LEM) of Wojtusiak and Michalski [13]. LEM offers several alternative ways of creating offspring (including a GA-like pipeline, a reinitialization pipeline, etc.); one of them consists of a classifier distinguishing between good and bad indivi...

11 | Real-parameter optimization using the mutation step co-evolution
- Pošík
- 2005
Citation Context: ...back of maximum likelihood estimation in real-valued EDAs is to estimate the distribution of the selected mutation steps instead of the distribution of selected individuals (cf. Fig. 1(a) and 1(b)). Pošík [7] applied this approach in a co-evolutionary manner. Hansen et al. [4] use similar principles in the evolution strategy with covariance matrix adaptation (CMA-ES), which is considered to be the state...

10 | LS-CMAES: A second-order algorithm for covariance matrix adaptation
- Auger, Schoenauer, et al.
- 2004
Citation Context: ...sitive definite D × D matrix, B is a vector with D elements, and C is a scalar. After finding the quadratic decision function, we need to turn it into the sampling Gaussian distribution. Auger et al. [1] discussed that setting the covariance matrix Σ to Σ = A⁻¹ is a very good (if not optimal) choice. We follow this approach, since the elliptic decision boundary then corresponds to a certain contour line...
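The Σ = A⁻¹ choice has a simple geometric reading: the quadratic part A of the discriminant defines elliptic contours x^T A x = const, and a Gaussian with covariance A⁻¹ has density contours that are those same ellipses. A 2×2 pure-Python sketch (the matrix is an illustrative assumption, not from the paper):

```python
def inv2(A):
    """Inverse of a 2x2 symmetric positive-definite matrix [[a, b], [b, c]]."""
    a, b, c = A[0][0], A[0][1], A[1][1]
    det = a * c - b * b
    assert det > 0, "A must be positive definite"
    return [[c / det, -b / det], [-b / det, a / det]]

A = [[4.0, 1.0], [1.0, 2.0]]   # quadratic part of a learned discriminant
Sigma = inv2(A)                 # sampling covariance, per the Sigma = A^-1 choice

# Sanity check: Sigma really is the inverse, so Sigma @ A = I.
prod = [[sum(Sigma[i][k] * A[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
```

To actually draw samples one would factor Σ (e.g. Cholesky) and transform standard normal vectors; the sketch only shows the covariance choice itself.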

9 | SDR: A better trigger for adaptive variance scaling in normal EDAs
- Bosman, Grahl, et al.
- 2007
Citation Context: ...vious steps that the algorithm made. Another approach to fighting premature convergence when maximum likelihood estimation is used is the Adaptive Variance Scaling (AVS) of Bosman and Grahl [2]. Their method can enlarge the step size when needed. In this paper, it is argued that we can learn better search distributions by estimating the contour line of the fitness function that goes between...
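The core idea of adaptive variance scaling, enlarging the step size "when needed", can be sketched as a multiplier applied on top of the ML-estimated covariance: grow it after a generation that improved the best fitness, shrink it otherwise. The update constants and clamping bounds below are illustrative assumptions, not the values from the cited paper:

```python
def avs_update(scale, improved, c_inc=2.0, c_dec=0.5, s_min=0.1, s_max=10.0):
    """Variance-scaling update in the spirit of AVS: the ML covariance is
    multiplied by `scale`; `scale` grows after an improvement and decays
    otherwise, clamped to [s_min, s_max]."""
    scale = scale * c_inc if improved else scale * c_dec
    return min(max(scale, s_min), s_max)

s = 1.0
s = avs_update(s, improved=True)    # generation improved: s becomes 2.0
s = avs_update(s, improved=False)   # no improvement: s back to 1.0
```

The SDR paper in this entry proposes a better *trigger* for deciding when such enlargement should fire; the multiplier mechanism itself stays the same.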

3 | Estimation of fitness landscape contours in EAs
- Pošík, Franc
Citation Context: ...rules divide the search space with axis-parallel splits, which are not very suitable for continuous spaces; on the other hand, LEM can be applied to mixed continuous-discrete problems. Pošík and Franc [8] proposed a different model: a combination of a quadratic discriminant function as the classifier, and the Gaussian distribution as the search distribution. The basic principle of the algorithm described in this paper is t...