Results 11 - 20 of 292,387

Table I: Tracking results for meeting1, for our approach and a traditional multi-object PF. Results are shown for individual people, and averaged over all people.

in Audio-Visual Probabilistic Tracking of Multiple Speakers
by Daniel Gatica-Perez, Guillaume Lathoud, Jean-Marc Odobez, Iain McCowan 2007
Cited by 6

Table 2. Results comparison of game calculation and multi-objective optimization Objective function Design variable

in Analysis and Application of Multi-object Decision Design Based on Game Theory
by Nenggang Xie, Na Shi, Jiahan Bao, Hao Fang 2005

Table 2: Tracking success rate, and F-measures for location (Fx) and speaking status (Fs), averaged over the four objects in the meeting video sequence (initial 1715 frames). PF denotes a basic PF multi-object tracker. MCMC denotes the approach in [5].

in Tracking people in meetings with particles
by Daniel Gatica-Perez, Jean-Marc Odobez, Sileye Ba, Kevin Smith, Guillaume Lathoud 2005
"... In PAGE 4: ... An objective evaluation procedure involves the computation for each participant of the success rate measure mentioned in Section 4, and the F-measure (which com- bines precision and recall) for location and speaking status, over a number of runs of the trackers. Results for the first 1715 frames are shown in Table2 , comparing the proposed method with a ba- sic multi-object PF over 20 runs. They show that MCMC sampling outperforms the basic PF in both ability to track and estimation of the speaking status.... ..."
Cited by 3
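The excerpt above evaluates trackers with the F-measure, the harmonic mean of precision and recall. A minimal sketch of that metric (function name is ours, not from the paper):

```python
def f_measure(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (the F1 score).

    Returns 0.0 when both inputs are 0 to avoid division by zero.
    """
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# e.g. a tracker with precision 0.8 and recall 0.6 scores about 0.686
print(f_measure(0.8, 0.6))
```

The harmonic mean penalizes imbalance: a tracker that locates everyone but never identifies the speaker (recall 0) scores 0 regardless of precision.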

Table 2: Tracking success rate, and F-measures for location (Fx) and speaking status (Fs), averaged over the four objects in the meeting video sequence. PF denotes a basic PF multi-object tracker. MCMC denotes the approach in [5].

in Tracking people in meetings with particles
by Daniel Gatica-Perez, Jean-Marc Odobez, Sileye Ba, Kevin Smith, Guillaume Lathoud 2005
"... In PAGE 4: ... An objective evaluation procedure in- volves the computation (for each participant) of the success rate measure mentioned in Section 4, and the F-measures (which com- bines precision and recall) for location and speaking status, over a number of runs of the trackers. Results for the first 1700 frames are shown in Table2 , comparing the proposed method with a ba- sic multi-object PF over 20 runs. They show that MCMC sampling outperforms the basic PF [5] in both ability to track and estimation of the speaking status.... ..."
Cited by 3

Table 4: Multi-objective and diversity-based ranking for Figure 15

in Elitism, Sharing, And Ranking Choices In Evolutionary Multi-Criterion Optimisation
by R. C. Purshouse, P. J. Fleming 2002
"... In PAGE 24: ... Note that A and B still receive a higher fitness than the solutions that they dominate (C and E). The associated fine-grained ranks are shown in Table4 , to the right of the original coarse-grained equivalents. ... ..."
Cited by 3
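The excerpt above ranks solutions by Pareto dominance. A minimal sketch of the dominance test that such rankings are built on (function name ours; minimisation assumed):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse than b in every objective and strictly better in
    at least one.
    """
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))


# (1, 2) dominates (2, 3); (1, 3) and (2, 2) are mutually non-dominated
print(dominates((1, 2), (2, 3)))
```

A solution's coarse-grained rank is then typically the count (or layer) of solutions dominating it, which is what sharing and fine-grained ranking schemes refine.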

Table 2: Comparing run times of the multi-objective algorithms (in seconds)

in A Hierarchical Solve-and-Merge Framework for Multi-Objective Optimization
by Christine L. Mumford
"... In PAGE 5: ...orithms on kn500.2, kn750.2, kn750.3 and kn750.4. These plots provide very strong evidence in favour of the HISAM. Finally, Table2 shows very clearly that HISAM has a very much faster run time than either SEAMO2 or MOGLS. MOGLS is particularly slow because of its frequent need to re-evaluate all the members of CS, the current list of solutions, with respect to their weighted linear scalarizing functions.... ..."

Table 3: Multi-objective optimisation algorithms based on simulated annealing. Dominance energy Volume energy

in A Study of Simulated Annealing Techniques for Multi-objective Optimisation
"... In PAGE 92: ...based or volume based) and whether the search is exploratory (computational temperature T gt; 0) or greedy (T = 0). Table3 summarises greedy and exploratory algorithms using dominance and volume energies, together with single solution and set states, which are described in this section; their performance on standard test problems is compared in section 4.4.... In PAGE 99: ... Results on MOSA and SAMOSA give a direct comparison of single solution states against set states, while dominance based and volume based energy measures are compared via the SAMOSA and VOLMOSA algorithms. As displayed in Table3 , the temperature zero versions of the algorithms are denoted by MOSA0 and SAMOSA0. Performance is evaluated on well-known test functions from the literature, namely the DTLZ test suite problems 1-6 [Deb et al.... ..."

Table 3: Computation effort (CE) metric values (number of evaluations) Instance size Single-objective techniques Multi-objective techniques

in Optimal Antenna Placement Using a New Multi-Objective CHC Algorithm
by Antonio J. Nebro, Francisco Chicano, Francisco Luna
"... In PAGE 6: ... 5.2 Results We analyze first the results obtained with the CE metric by all the algorithms, which are included in Table3 . At a first glance, it can be observed that the multi-objective al- gorithms are more efficient than the single-objective ones.... ..."

Table 3: Computation effort (CE) metric values (number of evaluations) Instance size Single-objective techniques Multi-objective techniques

in Optimal Antenna Placement Using a New Multi-Objective CHC Algorithm
by Antonio J. Nebro, Francisco Chicano, Francisco Luna, et al.
"... In PAGE 6: ... 5.2 Results We analyze rst the results obtained with the CE metric by all the algorithms, which are included in Table3 . At a rst glance, it can be observed that the multi-objective al- gorithms are more e cient than the single-objective ones.... ..."

Table 3. The average time for obtaining a solution for the multi-objective optimization problems by using TGP. The results are averaged over 30 independent runs.

in Using Traceless Genetic Programming for Solving Multiobjective Optimization Problems
by Mihai Oltean
"... In PAGE 13: ...igure 5. Diversity metric computed at every 10 generations. The results are averaged over 30 independent runs. Numerical values of the convergence and diversity metrics for the last generation are also given in section 9. 8 Running time Table3 is meant to show the efiectiveness and simplicity of the TGP algorithm by giving the time needed for solving these problems using a PIII Celeron computer at 850 MHz. Table 3 shows that TGP without archive is very fast.... In PAGE 13: ... 8 Running time Table 3 is meant to show the efiectiveness and simplicity of the TGP algorithm by giving the time needed for solving these problems using a PIII Celeron computer at 850 MHz. Table3 shows that TGP without archive is very fast. An average of 0.... ..."