
CiteSeerX

Results 1 - 10 of 105,182

Table 1 - Taxonomy for approaches to registration in augmented reality

in Dedication
by James R Vallino, Christopher M Brown 1998
"... In PAGE 11: ...List of Tables Table1... In PAGE 40: ... We classify previous approaches in a taxonomy according to their usage of position sensing and camera or scene calibration. Table1 shows this taxonomy. Methods using position sensing typically measure the location of the viewer and/or camera viewing the scene with a sensor.... In PAGE 40: ... The Polhemus magnetic sensor (Polhemus Corporation 1996) is typical. The other axis in Table1 classifies methods based on whether they require the intrinsic parameters of the camera viewing the scene or knowledge of the metric location of features in the real scene. The intrinsic camera parameters might include lens focal length, and the height and width of the pixel sensor.... In PAGE 42: ... Any errors introduced by incorrect pose sensing or camera calibration propagate through the system and appear as misregistration in the final augmented reality image. One position-based reference in Table1 does not require camera or scene calibration.... ..."

Table II: Sensing range of several typical sensors

in Dynamic clustering for acoustic target tracking in wireless sensor networks
by Wei-peng Chen, Jennifer C. Hou, Lui Sha 2003
Cited by 26

Table 1: Percentage improvement in rates of successful convergence for saliency-based 2D/3D registration with a simple illumination model. Improvement is measured relative to unweighted 2D/3D registration under otherwise identical conditions. Performance degrades at 10% saliency but improves when more pixels are considered salient.

in Visual feature extraction via eye tracking for saliency-driven 2D/3D registration
by Adrian J. Chung, Fani Deligianni, Xiao-peng Hu, Guang-zhong Yang
"... In PAGE 5: ... Although subjective, this classi cation scheme can be applied consistently to every pose in an unbiased manner. In this way the registration success rate was estimated for saliency maps covering a varying percentage of the video image ( Table1 ). Saliency maps with between 30% to 60% non-zero pixels lead to an almost two- fold improvement in the registration success rate over traditional intensity-based correlation.... ..."
Cited by 1

Table 1. Number of companies developing and producing systems for the 3D measurement of the human body.

in unknown title
by unknown authors
"... In PAGE 2: ... ACTUAL STATE OF TECHNOLOGY Technologies used commercially for the digital measurement of the human body can be divided into five different groups: (a) laser scanning, (b) projection of white light patterns, (c) combination modeling and image processing, (d) digital manual measurement, (e) technologies based on other active sensors. Table1 and the Figure 3 show the distribution of all the existing companies developing systems for the measurement of the human body. These are divided into three major groups: systems based on laser scanning, systems employing white light projection and the rest.... ..."

Table 3: Preliminary image sensor characteristics based on prototype measurements and estimations.

in unknown title
by unknown authors
"... In PAGE 9: ... Lastly, the source of dark signal electrons is tackled by means of passive cooling of the sensor head down to -20 degrees centigrade. These measures should result in the performance profile shown in Table3 . Take note of the high charge conversion factor of 8fF and photon response of up to 8.... ..."

Table 3. 3D measurements of the test points

in Gamma/X-Ray Linear Pushbroom Stereo for 3D Cargo Inspection, 2006 SPIE Defense and Security Symposium
by Zhigang Zhu, Yu-chi Hu
"... In PAGE 5: ...y using Eq. (6). Table 2 shows the perspective parameters and the Tx values for all the three settings. Table3 shows the 3D measurements using the image point pairs used for calibration between two views, the ten-degree and the twenty-degree images. The purpose is to examine the accuracy of the pushbroom stereo modeling and calibration results.... In PAGE 5: ... The purpose is to examine the accuracy of the pushbroom stereo modeling and calibration results. The numbers of the points listed in Table3 are labeled in Figure 1 for comparison. For the container with a dimension of 20x8x8 ft3, the average errors in depth z, length x and height y are 0.... In PAGE 5: ...imension of 20x8x8 ft3, the average errors in depth z, length x and height y are 0.064 ft, 0.033 ft and 0.178 ft respectively, indicating that the pushbroom modeling and calibration is accurate enough for 3D measurements. Note that the accuracy of the estimation in Table3 only reflects the errors in sensor modeling and calibration. No image localization errors are included.... In PAGE 6: ... (3) The parallel parameters are more accurate than the perspective ones due to fewer parameters in calibration and no inter-dependency among unknowns in the former, whereas three of the five unknowns in Eq. (8) are not independent, thus creating larger errors in the estimations of the y coordinates than the x coordinates ( Table3 ). In solving Eq.... In PAGE 10: ... 3D measurements and visualization of objects inside the cargo container. The black rectangular frames show the cargo container constructed from the test data in Table3 . The red lines (with stars) show the 3D estimates from automated stereo matches, for the cargo container and three objects inside.... ..."

Table 1 Algorithm for 3D data registration

in Vision Data Registration for Robot Self-localization in 3D
by Pifu Zhang, Evangelos E. Milios

Table 2: Performance of 3D registration

in Non-linear Registration of Pre- and Intraoperative Volume Data Based On Piecewise Linear Transformations
by C. Rezk-Salama, P. Hastreiter, G. Greiner, T. Ertl

Table 1: Average errors (standard deviation) for all sensors. Columns for force measurement errors: Sensor #, Magnitude (N), Magnitude (%), Direction (°).

in A Control Basis for Haptically-Guided Grasping and Manipulation
by Kamal Souccar, Jefferson A. Coelho, Jr., Roderic A. Grupen 1998
"... In PAGE 11: ...xpressed in degrees. The average error for each measure is indicated in the histogram. The histograms shown are typical for the other sensors. Table1 shows the average errors and standard deviation for all measures and all sensors. Notice that the sensors are similar to each other, and the relative errors associated with the moments are higher than the errors associated with the force measurement, probably due to smaller signal to noise margins during measurements.... In PAGE 13: ... Figure 8 shows the distribution of values obtained for sensor #1 and Table 3 reports the average noise and standard deviation for all sensors. The background noise (corresponding to an average force magnitude error of 0:00133N) is relatively low when compared to the load cell calibration error of 0:02792N ( Table1 ). No appreciable long term drift was observed.... ..."
Cited by 1