### Table 3. Density estimation techniques in multi-objective evolutionary algorithms and operators used in this study.

2001

"... In PAGE 11: ... Many advanced multi-objective evolutionary algorithms use some form of density dependent selection. Furthermore, nearly all techniques can be expressed in terms of density estimation, a classification is given in Table3 . We will make use of this as a further step towards a common framework of evolutionary multi-objective optimizers, and present the relevant enhancement of the unified model.... ..."

Cited by 20
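
The snippet notes that density-dependent selection can be expressed as density estimation. One widely used estimator of this kind (a common example, not the unified model from the cited paper) is NSGA-II-style crowding distance, sketched here:

```python
# Hedged sketch: NSGA-II-style crowding distance, one common density
# estimator behind density-dependent selection. Function name and the
# plain-list representation are illustrative choices, not from the paper.
def crowding_distance(points):
    """Return one crowding value per point (higher = less crowded)."""
    n = len(points)
    if n == 0:
        return []
    m = len(points[0])                       # number of objectives
    dist = [0.0] * n
    for obj in range(m):
        order = sorted(range(n), key=lambda i: points[i][obj])
        lo, hi = points[order[0]][obj], points[order[-1]][obj]
        dist[order[0]] = dist[order[-1]] = float("inf")   # boundary points
        if hi == lo:
            continue
        for k in range(1, n - 1):
            i = order[k]
            dist[i] += (points[order[k + 1]][obj]
                        - points[order[k - 1]][obj]) / (hi - lo)
    return dist
```

Selection would then prefer individuals with larger distance values, i.e. those in sparser regions of objective space.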

### Table 1: User Defined Constants for the Evolutionary Algorithm.

"... In PAGE 13: ... In the present formulation, all values of gt; are 1.0 as indicated in Table1 . More specifically, the component penalty for the sixth individual in the population is, The rank based fitness for an individual ten segment cantilever column can then be written as follows; In the present study, the scaling multipliers are assigned constant values which reflect the importance of each component.... In PAGE 18: ... It is recognized that the most correct procedure is to report averages of many runs [29]. Furthermore, Table1 should be referenced for information regarding the evolutionary algorithm parameters used for all runs. The design loads were chosen to illustrate one of the complexities of multiple constraint optimization.... ..."

### Table 5: User Preferences

in Abstract

2005

"... In PAGE 11: ... Procedure ScanMC aims at analyzing OR-edges ht; xi: if the ingoing node x is not already visited, the procedure inserts it in the priority queue; otherwise, the penalty of x is updated if and only if edge ht; xi improves the old penalty associated with x. Example 4 De ning default preferences, Mississippi gives a value on data items and delegation steps ( Table5 ). It prefers to deliver books by using a delivery company because this method is more secure and faster.... ..."

### Table 11: User Preferences

in Abstract

2006

"... In PAGE 16: ...e. the privacy penalty, is speci ed by the customer in his preferences (column 1 of Table11 ). The algorithm extracts from the queue PQ the node t with minimum priority ct which is assumed to be the privacy penalty of the minimal decomposition path from ? to t.... In PAGE 16: ... The output of the MinimumCost algorithm (DISCLOSE and NEEDED) allows to build the mini- mum cost decomposition path, including the list of data items required by the corresponding process. Example 5 Table11 reports the value of data items and delegation steps that Mississippi uses to initialize the business process. It prefers to deliver books using a delivery company because this method is safer and faster.... ..."

### Table 2 - User Preferences

2004

"... In PAGE 3: ... positive one. The value 0 indicates an absent of correlation. The correlation coefficient Pearson R is calculated as indicated below, where U and J indicate the values associated to interests, U and J represent the average of the interests U and J. Figure 3 - The Pearson R Algorithm For example, the table ( Table2 ) indicates the preferences of four people concerning the price and the time of departure of shuttles. The values were associated according to a scale that ranged from 1 (indicating low interest) to 10 (indicating high interest).... ..."

Cited by 1
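
The Pearson R computation described in the snippet can be sketched directly from its definition (covariance of the two rating vectors over the product of their standard deviations); the function name and list representation are illustrative:

```python
import math

# Pearson correlation coefficient between two users' interest ratings:
# +1 = perfect positive correlation, -1 = perfect negative, 0 = none.
def pearson_r(u, j):
    n = len(u)
    mu, mj = sum(u) / n, sum(j) / n          # averages of U and J
    cov = sum((a - mu) * (b - mj) for a, b in zip(u, j))
    var_u = sum((a - mu) ** 2 for a in u)
    var_j = sum((b - mj) ** 2 for b in j)
    return cov / math.sqrt(var_u * var_j)
```

Note the sketch assumes neither rating vector is constant (a constant vector makes the denominator zero, leaving R undefined).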

### Table 1: Some samples of results obtained by using preference function and real area cost as the directive metric

"... In PAGE 16: ... real area cost To prove that preference function is a better directive metric than real area cost, we compared results which are synthesized under same conditions except using di erent directive metrics. Some typical examples are shown in Table1 . Statistic results on performance of preference function and real area cost are shown in Table 2.... In PAGE 17: ...687 0.895 Table 2: Average reduction ratio of results obtained by using preference function as the directive metric over results obtained by using real area cost Table1 shows some synthesized results of an El- liptic Filter[24]. The rst and second columns are the algorithms and the directive functions used.... In PAGE 20: ...12% 7.41% Table1 0: Average improvement obtained by FU shar- ing ing multi-function FUs and using mono-function FUs to implement these operations. Using multi-function FUs can reduce the number of FUs required, since the utilization of FUs is increased.... ..."

### Table 1. Comparison of performance of evolved algorithms on training instances, along with the preference expression of the best performing algorithm for each. Boldface denotes the algorithm requiring the fewest backtracks.

2005

"... In PAGE 3: ... 3.1 Empirical Study To determine the best algorithm for each particular training instance (listed in Table1 ), the evolutionary procedure was run 5 times and for 50 generations for each instance. The initially generation of algorithms all include the MOMS heuristic but are otherwise randomly generated.... In PAGE 3: ... The composition of each successive generation was as follows, to give a total of np = 50 in each generation: the previous nc = 3 best algorithms; nb = 36 new algorithms generated by standard GP crossover; and, nm = 11 algorithms generated by mutation. The results of the experiments are tabulated in Table1 , both for linear (weighted-sum combinations) and non-linear (multiplicative) combinations of measures. Every evolved algorithm required fewer backtracks than MOMS on its training instance with algoritms employing linear combinations of heuristics offering mean and median improvements over standard MOMS of 56.... ..."

Cited by 1
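
The generational scheme in the snippet (nc = 3 elites, nb = 36 crossover offspring, nm = 11 mutants, np = 50 total) can be sketched as one generational step. Fitness, crossover, and mutation are placeholder callables here, not the paper's GP operators:

```python
import random

# One generational step of the scheme described in the snippet: carry over
# the nc best algorithms, add nb crossover offspring and nm mutants, for a
# total of np = nc + nb + nm = 50 per generation. Operators are stand-ins.
def next_generation(population, fitness, crossover, mutate,
                    nc=3, nb=36, nm=11):
    ranked = sorted(population, key=fitness)      # fewer backtracks = better
    elites = ranked[:nc]
    offspring = [crossover(*random.sample(ranked, 2)) for _ in range(nb)]
    mutants = [mutate(random.choice(ranked)) for _ in range(nm)]
    return elites + offspring + mutants
```

Parent selection here is uniform over the whole population for simplicity; the paper's actual selection pressure may differ.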

### Table 2. Results of our proposed multi-objective approach after 1-hour runtime

2007

"... In PAGE 13: ...99 and the num ber of iterations within SA to be 1,000,000. Table2 lists the re- sults of using different evaluation functions on the obtained solutions. For the weighted-sum objective function, we use the sam e set of weight values as in formula (29), and list the num - ber of archived non-dom inated solutions (see colum n 2) and the best solution under this evaluation function (see colum n 3).... In PAGE 14: ...Table 2. Results of the our proposed multi-objective approach after 1-hour runtime A ccording to the results in Table2 , we can see that our proposed approach is very prom ising in solving the m ulti-objective nurse scheduling problem . In terms of the solution quality evaluated by the sam e objective function, our approach performs similar to the IP-based VNS, and significantly improve the best results of the hybrid genetic algorithm and the hybrid VNS by 25.... ..."