### TABLE I TRACKING RESULTS FOR meeting1, FOR OUR APPROACH AND A TRADITIONAL MULTI-OBJECT PF. RESULTS ARE SHOWN FOR INDIVIDUAL PEOPLE, AND AVERAGED OVER ALL PEOPLE.

2007

Cited by 6

### Table 2. Results comparison of game calculation and multi-objective optimization (columns: objective function, design variable)

2005

### Table 2: Tracking success rate, and F-measures for location (Fx) and speaking status (Fs), averaged over the four objects in the meeting video sequence (initial 1715 frames). PF denotes a basic PF multi-object tracker. MCMC denotes the approach in [5].

2005

"... In PAGE 4: ... An objective evaluation procedure involves the computation for each participant of the success rate measure mentioned in Section 4, and the F-measure (which combines precision and recall) for location and speaking status, over a number of runs of the trackers. Results for the first 1715 frames are shown in Table 2, comparing the proposed method with a basic multi-object PF over 20 runs. They show that MCMC sampling outperforms the basic PF in both ability to track and estimation of the speaking status.... ..."

Cited by 3
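The excerpts above combine precision and recall through the F-measure. This is the standard harmonic combination; a minimal sketch (the generalized `beta` parameter is included for completeness, the papers above use the balanced F1 case):

```python
def f_measure(precision, recall, beta=1.0):
    """F-measure: harmonic combination of precision and recall.
    beta > 1 weights recall more heavily; beta = 1 gives the balanced F1."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Balanced F1 for a tracker with precision 0.8 and recall 0.6:
print(f_measure(0.8, 0.6))  # 2 * 0.8 * 0.6 / (0.8 + 0.6) ≈ 0.686
```

Because the harmonic mean is dominated by the smaller of the two values, a tracker cannot score well on F by trading recall away for precision or vice versa.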

### Table 2: Tracking success rate, and F-measures for location (Fx) and speaking status (Fs), averaged over the four objects in the meeting video sequence. PF denotes a basic PF multi-object tracker. MCMC denotes the approach in [5].

2005

"... In PAGE 4: ... An objective evaluation procedure involves the computation (for each participant) of the success rate measure mentioned in Section 4, and the F-measures (which combine precision and recall) for location and speaking status, over a number of runs of the trackers. Results for the first 1700 frames are shown in Table 2, comparing the proposed method with a basic multi-object PF over 20 runs. They show that MCMC sampling outperforms the basic PF [5] in both ability to track and estimation of the speaking status.... ..."

Cited by 3

### Table 4: Multi-objective and diversity-based ranking for Figure 15

2002

"... In PAGE 24: ... Note that A and B still receive a higher fitness than the solutions that they dominate (C and E). The associated fine-grained ranks are shown in Table 4, to the right of the original coarse-grained equivalents. ... ..."

Cited by 3
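The excerpt above contrasts coarse-grained and fine-grained dominance ranks. The cited paper's exact scheme is not reproduced here; a common fine-grained variant (Fonseca–Fleming-style ranking) assigns each solution a rank equal to the number of solutions that dominate it, so non-dominated points get rank 0 and heavily dominated points are pushed down. A sketch for minimization, with an illustrative point set:

```python
def dominates(a, b):
    """True if a Pareto-dominates b (minimization): a is no worse in
    every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fine_grained_ranks(points):
    """Rank of each point = number of other points dominating it.
    Rank 0 marks the non-dominated front; lower rank = higher fitness."""
    return [sum(dominates(q, p) for j, q in enumerate(points) if j != i)
            for i, p in enumerate(points)]

# Illustrative two-objective points (not the data from the cited paper):
pts = [(1, 5), (2, 3), (4, 4), (5, 1), (6, 6)]
print(fine_grained_ranks(pts))  # → [0, 0, 1, 0, 4]
```

Unlike a coarse non-dominated/dominated split, this ranking distinguishes between a point dominated by one solution and a point dominated by many.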

### Table 2: Comparing run times of the multi-objective algorithms (in seconds)

"... In PAGE 5: ...orithms on kn500.2, kn750.2, kn750.3 and kn750.4. These plots provide very strong evidence in favour of the HISAM. Finally, Table 2 shows very clearly that HISAM has a very much faster run time than either SEAMO2 or MOGLS. MOGLS is particularly slow because of its frequent need to re-evaluate all the members of CS, the current list of solutions, with respect to their weighted linear scalarizing functions.... ..."
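The weighted linear scalarizing function mentioned in the excerpt is the standard way to collapse an objective vector into a single scalar score; the cost the excerpt describes comes from re-evaluating this score for every stored solution each time the weight vector changes. A minimal sketch with made-up weights and objective values:

```python
def weighted_sum(objectives, weights):
    """Linear scalarizing function: weighted sum of the objective values
    (to be minimized). A fresh weight vector requires re-scoring every
    stored solution, which is the overhead noted for MOGLS."""
    return sum(w * f for f, w in zip(objectives, weights))

# Re-scoring an illustrative solution set under weights (0.7, 0.3):
solutions = [(10.0, 2.0), (6.0, 5.0), (3.0, 9.0)]
scores = [weighted_sum(s, (0.7, 0.3)) for s in solutions]
print(scores)  # ≈ [7.6, 5.7, 4.8]
```

A linear scalarization is cheap per solution, but it can only reach points on the convex hull of the Pareto front, which is one reason dominance-based rankings are used alongside it.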

### Table 3: Multi-objective optimisation algorithms based on simulated annealing. Dominance energy Volume energy

"... In PAGE 92: ...based or volume based) and whether the search is exploratory (computational temperature T > 0) or greedy (T = 0). Table 3 summarises greedy and exploratory algorithms using dominance and volume energies, together with single solution and set states, which are described in this section; their performance on standard test problems is compared in section 4.4.... In PAGE 99: ... Results on MOSA and SAMOSA give a direct comparison of single solution states against set states, while dominance based and volume based energy measures are compared via the SAMOSA and VOLMOSA algorithms. As displayed in Table 3, the temperature zero versions of the algorithms are denoted by MOSA0 and SAMOSA0. Performance is evaluated on well-known test functions from the literature, namely the DTLZ test suite problems 1-6 [Deb et al.... ..."
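The greedy-versus-exploratory distinction in the excerpt is governed by the simulated-annealing acceptance rule. The cited thesis's energy definitions (dominance or volume based) are not reproduced here; the sketch below shows only the standard Metropolis acceptance step on an already-computed energy change, which is where the temperature T enters:

```python
import math
import random

def accept(delta_e, temperature, rng):
    """Metropolis acceptance rule. delta_e is the energy change of the
    proposed move (negative = improvement). At temperature == 0 the
    search is greedy: only improving moves are accepted. At T > 0 a
    worsening move is accepted with probability exp(-delta_e / T),
    making the search exploratory."""
    if delta_e <= 0:
        return True
    if temperature == 0:
        return False
    return rng.random() < math.exp(-delta_e / temperature)

rng = random.Random(0)
print(accept(-0.5, 0.0, rng))  # improvement: always accepted → True
print(accept(0.5, 0.0, rng))   # greedy (T = 0): worse move rejected → False
```

Running the same rule with a high temperature lets worsening moves through often, which is what distinguishes the exploratory MOSA/SAMOSA variants from their temperature-zero counterparts MOSA0 and SAMOSA0.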

### Table 3: Computation effort (CE) metric values (number of evaluations); columns: instance size, single-objective techniques, multi-objective techniques

"... In PAGE 6: ... 5.2 Results We analyze first the results obtained with the CE metric by all the algorithms, which are included in Table 3. At a first glance, it can be observed that the multi-objective algorithms are more efficient than the single-objective ones.... ..."


### Table 3. The average time for obtaining a solution for the multi-objective optimization problems by using TGP. The results are averaged over 30 independent runs.

"... In PAGE 13: ...igure 5. Diversity metric computed at every 10 generations. The results are averaged over 30 independent runs. Numerical values of the convergence and diversity metrics for the last generation are also given in section 9. 8 Running time Table 3 is meant to show the effectiveness and simplicity of the TGP algorithm by giving the time needed for solving these problems using a PIII Celeron computer at 850 MHz. Table 3 shows that TGP without archive is very fast.... In PAGE 13: ... 8 Running time Table 3 is meant to show the effectiveness and simplicity of the TGP algorithm by giving the time needed for solving these problems using a PIII Celeron computer at 850 MHz. Table 3 shows that TGP without archive is very fast. An average of 0.... ..."