Results 1 - 10 of 35,996

Table 2. Optimal views based on model-based reconstruction.

in Finding Optimal Views for 3D Face Shape Modeling
by Jinho Lee, Baback Moghaddam, Hanspeter Pfister, Raghu Machiraju 2004
"... In PAGE 5: ... Based on an average reconstruction time of 30 seconds, this search takes about 45 hours. The results are presented in Table 2, which shows the optimal views for K = {1,2,3,4,5} and the corresponding minimum average reconstruction errors (refer to Table 1 for exact coordinates). [Figure 7. Reconstruction errors for all view configurations with 4 cameras (K = 4), ranked by magnitude of ensemble error.] ... In PAGE 5: ... Figure 7 shows the errors of all combinatorial view configurations for the case K = 4, ranked in ascending order of error. Each error bar represents the subjects' standard deviation for that configuration (the first error bar corresponds to the optimal configuration and is the subject standard deviation listed in Table 2). Other plots for K = 1,2,3 and 5 are quite similar in nature, all showing a well-defined minimum, with the subject variation (error bars) being lowest for the best configuration (left-most) and highest for the worst (right-most). ... In PAGE 5: ... Using the same search strategy, we now evaluate the visual hull constructions obtained from the given subset of silhouette images and compare them to the ground truth. Table 3 shows the optimal views for K = {2,3,4,5} and the corresponding error values (same format as in Table 2, except that the visual hull from a single silhouette (K = 1) has no finite volume and ... In PAGE 6: ... There are a few differences, but these are somewhat misleading. The best view configurations in Table 2 are marked in Figure 8 with arrows. We note that our model-based optimal views have almost the same errors as the best views chosen with the visual hull method and are always in the first plateau or top quartile that includes the key profile view #10. ... ..."
Cited by 7
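The search this excerpt describes (evaluate every K-subset of candidate views and rank configurations by average reconstruction error) can be sketched as a brute-force enumeration. This is only an illustrative sketch, not the authors' code: the candidate view numbering, the callback name `recon_error`, and the toy error function are all invented.

```python
from itertools import combinations

def best_view_subset(views, k, recon_error):
    """Enumerate all k-subsets of candidate views and return the subset
    minimizing the reconstruction error, plus that minimum error.
    `recon_error(subset)` is a hypothetical callback that would run the
    model-based reconstruction and return the ensemble error."""
    ranked = sorted(combinations(views, k), key=recon_error)
    return ranked[0], recon_error(ranked[0])

# Toy stand-in error: pretend view #10 (the profile) is most informative,
# so views near it score best.
toy_error = lambda s: sum(abs(v - 10) for v in s) / len(s)
best, err = best_view_subset(range(1, 12), 4, toy_error)
```

With 11 candidate views and K = 4 this evaluates C(11,4) = 330 configurations; the 45-hour figure in the excerpt comes from running a real 30-second reconstruction per configuration rather than a toy scoring function.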

Table 2. Optimal views based on model-based reconstruction.

in Finding Optimal Views for 3D Face Shape Modeling
by Jinho Lee, Baback Moghaddam, Hanspeter Pfister, Raghu Machiraju 2004
"... In PAGE 7: ... Based on an average reconstruction time of 30 seconds, this search takes about 45 hours. The results are presented in Table 2, which shows the optimal views for K = {1,2,3,4,5} and the corresponding minimum average reconstruction errors (refer to Table 1 for exact coordinates). The standard deviation of the indi- ... In PAGE 7: ... Figure 7 shows the errors of all combinatorial view configurations for the case K = 4, ranked in ascending order of error. Each error bar represents the subjects' standard deviation for that configuration (the first error bar corresponds to the optimal configuration and is the subject standard deviation listed in Table 2). Other plots for K = 1,2,3 and 5 are quite similar in nature, all showing a well-defined minimum, with the subject variation (error bars) being lowest for the best configuration (left-most) and highest for the worst (right-most). ... In PAGE 7: ... Using the same search strategy, we now evaluate the visual hull constructions obtained from the given subset of silhouette images and compare them to the ground truth. Table 3 shows the optimal views for K = {2,3,4,5} and the corresponding error values (same format as in Table 2, except that the visual hull from a single silhouette (K = 1) has no finite volume and is omitted). Note that a visual hull reconstruction (especially one from few images) is not a very ... In PAGE 8: ... Interestingly, the first plateau corresponding to the top group is all the subsets which include the profile view #10 (one of the most salient). We can see marked similarities in the optimal views in Table 2 and Table 3. For example, both methods indicate views #3 and #10 to be the most informative. ... In PAGE 8: ... For example, the two most salient views (#3 and #10) correspond very closely with the established (biometric) standards of 3/4 view (INS photos) and profile view (mugshot photos).

We have not yet searched for K > 5, mainly due to the computational costs, but it appears that reconstructions do not improve significantly beyond 4-5 views (see the best errors listed in Table 2). One can easily incorporate additional physical and operational constraints into our framework. ... ..."
Cited by 7

Table 1: Comparison of models based on their predictive accuracy (%) using each data source separately.

in Gene function classification using Bayesian models with hierarchybased priors
by Babak Shahbaba, Radford M. Neal 2006
"... In PAGE 8: ... Simulating the Markov chain for 10 iterations took about 2 minutes for MNL, 1 minute for treeMNL, and 3 minutes for corMNL, using a MATLAB implementation on an UltraSPARC III machine. 6 Results Table 1 compares the three models with respect to their accuracy of prediction at each level of the hierarchy. In this table, level 1 corresponds to the top level of the hierarchy, while level 3 refers to the most detailed classes (i.... In PAGE 8: ... To provide a baseline for interpreting the results, for each task we present the performance of a model that ignores the covariates and simply assigns genes to the most common category at the given level in the training set. As we can see in Table 1, corMNL outperforms all other models. For the SEQ dataset, MNL performs better than treeMNL. ... ..."
Cited by 3
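The baseline mentioned in this snippet (ignore the covariates and assign every gene to the most common training category at the given hierarchy level) is a standard majority-class predictor. A minimal sketch; the category names and counts below are invented for illustration, not data from the paper:

```python
from collections import Counter

def majority_baseline(train_labels):
    """Return the most common class in the training set; the baseline
    predicts this single class for every test example."""
    return Counter(train_labels).most_common(1)[0][0]

def baseline_accuracy(train_labels, test_labels):
    """Accuracy obtained by always predicting the majority class."""
    pred = majority_baseline(train_labels)
    return sum(y == pred for y in test_labels) / len(test_labels)

# Toy level-1 labels (hypothetical functional categories).
train = ["metabolism"] * 6 + ["transport"] * 3 + ["signaling"]
test = ["metabolism", "transport", "metabolism", "signaling"]
acc = baseline_accuracy(train, test)
```

Any model that uses the covariates should beat this floor; in the excerpt it is the reference point against which MNL, treeMNL, and corMNL are compared.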

Table 1: Comparison of models based on their predictive accuracy (%) using each data source separately.

in unknown title
by unknown authors 2006
"... In PAGE 8: ... Simulating the Markov chain for 10 iterations took about 2 minutes for MNL, 1 minute for treeMNL, and 3 minutes for corMNL, using a MATLAB implementation on an UltraSPARC III machine. 6 Results Table 1 compares the three models with respect to their accuracy of prediction at each level of the hierarchy. In this table, level 1 corresponds to the top level of the hierarchy, while level 3 refers to the most detailed classes (i.... In PAGE 8: ... To provide a baseline for interpreting the results, for each task we present the performance of a model that ignores the covariates and simply assigns genes to the most common category at the given level in the training set. As we can see in Table 1, corMNL outperforms all other models. For the SEQ dataset, MNL performs better than treeMNL. ... ..."
Cited by 3

Table 1: Comparison between 3-D model-based rendering and the image-based rendering techniques

in A Survey of Image-based Rendering Techniques
by Sing Bing Kang 1997
"... In PAGE 9: ... In contrast with 3-D model-based rendering, image-based rendering techniques rely primarily on the original or trained set of images to produce new, virtual views. Comparisons between the 3-D model-based rendering and the image-based rendering techniques are shown in Table 1. In 3-D model-based rendering, 3-D objects and scenes are represented by explicitly constructed 3-D models (from CAD modeler, 3-D digitizer, active range, or stereo techniques). ... ..."

Table 1: Comparison of models based on their predictive accuracy (%) using each data source separately.

in unknown title
by unknown authors 2006
"... In PAGE 3: ...ype R) and of pairs of residues (i.e., the number of residue pairs of types R and S) in a sequence. There are 933 such attributes (see Table 1 in [3]). Information in SIM (see Table 2 in [3]) and STR (see Table 3 in [3]) is derived based on a PSI-BLAST (position-specific iterative BLAST) search with parameters e = 10, h = 0.... In PAGE 4: ... Table 1 compares the three models with respect to their accuracy of prediction at each level of the hierarchy. In this table, level 1 corresponds to the top level of the hierarchy, while level 3 refers to the most detailed classes (i.... In PAGE 4: ... To provide a baseline for interpreting the results, for each task we present the performance of a model that ignores the covariates and simply assigns genes to the most common category at the given level in the training set. As we can see in Table 1, corMNL outperforms all other models. For the SEQ dataset, MNL performs better than treeMNL. ... ..."

Table 1. Distribution model based on InSAR information

in The Double Interpolation and Double Prediction (DIDP) approach for
by Linlin Ge, Shaowei Han, Chris Rizos 2000
"... In PAGE 6: ...nside the fault. The position of the interpolating point relative to the fault can be determined in the same way. The combination of open- and closed-curve models can deal with comparatively complex fault systems. After the classification of GPS stations and the intended interpolating points into different groups, a distribution model for interpolation for each group based on the GPS-corrected InSAR results is proposed, as illustrated in Table 1. In some sites of interest (Sites 1 to 4) there are both GPS and InSAR results for the deformation (the CGPS results may have to span one or more InSAR repeat cycles, depending on the availability of suitable SAR image pairs). ... ..."
Cited by 1
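For the closed-curve fault model this snippet describes, grouping GPS stations and interpolating points by whether they fall inside the fault outline reduces to a point-in-polygon test. A standard ray-casting sketch; the square "fault" outline below is an invented toy example, not geometry from the paper:

```python
def inside_closed_curve(pt, polygon):
    """Ray casting: count crossings of a horizontal ray from `pt` with
    the polygon's edges; an odd crossing count means `pt` is inside."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge spans the ray's height
            # x-coordinate where the edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Toy closed fault outline (a square), vertices in order.
fault = [(0, 0), (4, 0), (4, 4), (0, 4)]
```

Stations on the same side of the fault are then interpolated within their own group, so the displacement discontinuity across the fault is not smoothed away.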

Table 1: Since the results of registration obtained by the observers are highly correlated, the model-based registration technique is observer-independent.

in Model-Based Registration of Intraoral Radiographs
by T. Lehmann, H. G. Gröndahl, W. Schmitt, K. Spitzer
"... In PAGE 3: ... The normalized cross correlation coefficients were computed to indicate the similarity of the image data after registration [13]. Table 1 shows the strong correlation between the registrations obtained by the three observers. 4 Discussion Although subtraction techniques are superior in diagnosing small changes in radiographs [2, 3, 4], automatic registration algorithms are still limited to RST-movements [8] and landmark-based algorithms are strongly dependent on the observer who positions the points [7]. ... ..."
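The normalized cross-correlation coefficient used in this excerpt as a similarity measure is the Pearson correlation of the two images' pixel values: subtract each image's mean, then divide the cross term by the product of the standard deviations. A sketch over flat pixel sequences; the sample data is invented:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length pixel sequences.
    Returns a value in [-1, 1]; 1.0 means the images agree up to a
    positive brightness/contrast change."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return num / den

# A perfectly registered pair, differing only in brightness and contrast.
img = [10, 20, 30, 40]
shifted = [2 * v + 5 for v in img]
```

Because the means are subtracted and the result is normalized, a global exposure change between the two radiographs does not lower the coefficient, which is what makes it a reasonable check that registration (rather than lighting) explains the agreement.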

Table 1: Transformational Procedures (TPs). Note that the Input column lists only image-based arguments. Model-based arguments are not listed for Subgraph Isomorphism, Geometric Matching and Planar Distance TPs.

in Learning Control Strategies for Object Recognition
by Bruce Draper
"... In PAGE 24: ... Of course, since any two points on the object model can serve as compile-time parameters to a scaling TP, many other parameterizations of the scaling TP could be included in the library. Although the visual procedure library shown in Table 1 is sufficient for the purposes of this experiment, it includes just a few of the computer vision algorithms described in the literature. Unfortunately, the current Lisp implementation of SLS has proved an impediment to building a larger library, since source code for most visual procedures is available only in C. ... ..."

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University