### Table 1. Results of our monocular head pose estimation system on the Face Pointing04 Database.

2006

"... In PAGE 4: ... 2.3 Results Table1 shows our results on the described dataset. As it can be seen, our imple- mentation performed with 12.... ..."

Cited by 3

### Table 1 The Absolute Category Rating scale, used for the assessment of the zapping times

2006

"... In PAGE 3: ....2 (2x), 0.5 (2x), 1.0, 2.0 (2x), 5.0 s. The motivation for using some zapping times twice is to test the consistency of the user responses. The subject was then asked to switch between the channels as often as needed and subsequently to assess the perceived quality of the switching time according to the ACR ITU-T scale given in Table1 . The same procedure was done for the remaining nine zapping time scenarios.... ..."

Cited by 2
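The ACR scale referenced in this entry is the standard ITU-T five-point quality scale. As a minimal sketch (the exact label wording in the paper may differ, and the ratings below are hypothetical), here is how ACR ratings for one zapping-time scenario would be averaged into a mean opinion score:

```python
# Minimal sketch: the ITU-T ACR five-point scale and a mean opinion score (MOS).
# Label wording follows the common ITU-T P.800 form; the paper may use a variant.
ACR_LABELS = {5: "Excellent", 4: "Good", 3: "Fair", 2: "Poor", 1: "Bad"}

def mean_opinion_score(ratings):
    """Average a list of ACR ratings (integers 1..5) into a MOS."""
    if not all(r in ACR_LABELS for r in ratings):
        raise ValueError("ratings must be integers between 1 and 5")
    return sum(ratings) / len(ratings)

# Hypothetical ratings from five subjects for one zapping-time scenario:
print(mean_opinion_score([5, 4, 4, 3, 4]))  # -> 4.0
```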

### Table 3 is a breakdown of the correlations among those students who were handheld computer users.

"... In PAGE 4: ... The results for all students in the fall 2002 group (n=161) are shown in cross-tabulation Table 2. The results for the handheld computer user component of that population (n=101) are shown in cross-tabulation Table3 . All variables were approximately normally distributed.... In PAGE 4: ... Table3 . Association probabilities (p) resulting from c2 analysis of crosstabulation tables for nine ordinal variables across all handheld students.... ..."

### Table 1: The result of applying the junction localization method to a synthetic T-junction with different amounts of added white Gaussian noise. For each noise level, this table gives the scale at which the normalized residual assumes its minimum over scales, as well as the scale at which the estimate with the minimum absolute error over scales is obtained. Moreover, numerical values of the two error measures are given at these scales. As can be seen, the selected scales increase with the noise level, and the scale at which the normalized residual assumes its minimum over scales serves as a reasonable estimate of a scale at which a near optimal localization estimate over scales is obtained.

1998

"... In PAGE 27: ... As can be seen, the selected scales increase with the noise level, and the scale at which the normalized residual assumes its minimum over scales serves as a reasonable estimate of a scale at which a near optimal localization estimate over scales is obtained. Table1 gives a numerical illustration of basic properties of this scale selection method for junction localization. It shows the result of applying one iteration of the junction localization method to a T junction with 90 degree opening angles, and the results are shown in terms of the following six measures as function of the noise level: the selected scale tdmin obtained by minimizing the normalized residual over scales, the normalized residual at the selected scale, the absolute error in the localization estimate at the selected scale, the scale tabs at which the localization estimate with the minimum absolute error is obtained, the normalized residual at tabs,... ..."

Cited by 232
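The selection rule described in this caption — among candidate scales, pick the one at which the normalized residual attains its minimum — can be sketched generically. The residual values below are hypothetical; in the paper they come from the junction-localization fit:

```python
def select_scale(scales, normalized_residuals):
    """Return the scale at which the (hypothetical) normalized residual is minimal.

    Mirrors the selection rule in the caption: evaluate the residual over a
    discrete set of candidate scales and choose the minimizer.
    """
    best_scale, _ = min(zip(scales, normalized_residuals), key=lambda p: p[1])
    return best_scale

# Hypothetical residual curve over a dyadic range of scales:
scales = [1, 2, 4, 8, 16]
residuals = [0.90, 0.40, 0.25, 0.31, 0.55]
print(select_scale(scales, residuals))  # -> 4
```

The caption's empirical point is that, as noise increases, the residual curve shifts so that this minimizer moves to coarser scales, tracking the scale with near-minimal localization error.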

### Table 3: The Effects of Leverage: Interactive Specification

1999

"... In PAGE 10: ... Everything that follows asks in one way or another whether some or all of the coefficients in this simple model are related to the measures of leverage. The impact of leverage Table3 presents a first test of our central hypothesis. We begin with the three-variable specification, and add a single interaction term, given by dI t *DEBT t-1 , where DEBT t -1 is a once-lagged leverage measure.... In PAGE 11: ... This is perhaps easiest to see by comparing the impulse response of house prices to an income shock for cities with different leverage levels, shown in Figure 2. The figure uses the parameter estimates from column (1) of Table3 , and compares a city with the 10th percentile value of HIGHLTV (which is approximately 5%) to a city with the 90th percentile value of HIGHLTV (which is approximately 25%). The figure depicts a dramatic difference in the implied reaction of the two cities to a 1% income shock.... In PAGE 11: ...29% in the high-leverage city, before turning around. As a slight variation on the specifications in columns (1), (3), and (5) of Table3 , we also try including the lagged measure of leverage DEBT t -1 itself in the regression as an additional control variable. This is done in columns (2), (4) and (6) of the table.... In PAGE 11: ...n the tax code, demographics, etc.). 9.In Table3 and those that follow, our standard errors allow for both heteroskedasticity, as well as for correlation within each city-survey cluster. There are a total of 111 of these clusters in our data set.... In PAGE 12: ... However, for our purposes the important point is that including this extra variable in the regression does not materially change the estimated coefficients on the key dI t *DEBT t-1 interaction term. One concern with the regressions in Table3 is that they are very tightly parameterized. 
First, they allow only the dI t coefficient to vary with leverage, and force the dP t -1 and P t-1 /I t- 1 coefficients to be constant across cities with different leverage.... In PAGE 12: ... For example, the coefficient on dP t -1 is about the same across quartiles when we use HIGHLTV ; is higher in the high- leverage quartile when we use YESLOAN ; and is lower in the high-leverage quartile when we use MEDIAN . Finally, consistent with these first two observations, the regressions in Table 4 yield impulse response functions that look quite similar to those implied by the regressions in Table3 . This is illustrated in Figure 3, which plots the impulse responses for the high and low quartiles according to our HIGHLTV measure of leverage.... In PAGE 13: ... In the interests of brevity, detailed tables are not provided; they can be found in a previous version of this paper (Lamont and Stein, 1997). Moreover, the tests we discuss below represent modifications of our more tightly-parameterized specification from Table3 . We have also examined the analogous modifications of the looser specification in Table 4; as one might expect based on the comparisons above, these yield very similar conclusions .... In PAGE 13: ... We have also examined the analogous modifications of the looser specification in Table 4; as one might expect based on the comparisons above, these yield very similar conclusions . First, we check whether the results in Table3 are due primarily to a few influential outliers. We sort the observations on both dP t and dI t , and discard the top and bottom one percent of the realizations for these two variables.... In PAGE 14: ... The advantage of this approach is that the projected leverage measure at any time t now only contains information available at that time. 12 Next, we re-run the regressions of Table3 , but substitute in our projected leverage measures for the actual stale data . 
As one might have expected based on the idea that we are fixing a measurement error problem, the coefficients on the key dI t *DEBT t-1 term increase in all six specifications.... In PAGE 14: ... For example, in the first specification using the HIGHLTV measure, the coefficient of interest rises from 2.27 in column (1) of Table3 to 3.03, an increase of approximately 33%.... In PAGE 15: ... 15 We implement this approach in Table 5. The specifications are the same as in Table3 , except that we allow each of the 44 cities to have its own coefficient on dI t. Thus if some cities are more emerging than others over the entire sample, and hence have house prices that are more sensitive to income shocks, this will now be 13.... In PAGE 16: ... As it turns out, this specification does not reduce the interaction coefficients . In fact, in five of six cases, the interaction terms increase relative to Table3 , in some cases by quite a bit. Naturally, by removing all the across-city variation in our leverage measures, we reduce the precision of our estimates.... In PAGE 16: ... One natural such candidate variable is population growth. In Table 6, we run a horse race which effectively asks: are our previous interaction results truly due to leverage effects, or merely to the fact that leverage is correlated with population growth? The regressions are similar to those in Table3 , with the following modifications. In columns (1), (3) and (5), we add a second interaction term, dI t *dPOP t-1 , where dPOP t is defined as a city apos;s population growth in the year from t-1 to t.... In PAGE 18: ...20. Given the success of this first-step regression, we next proceed to run an IV version of the specification in Table3 . Everything is exactly as before, except we use dI t *DUMMY as an instrument for dI t *DEBT t-1 .... In PAGE 18: ... For example, in column (2), using our favored HIGHLTV measure, the point estimate goes from 1.784 in Table3 to 1.444 in Table 7.... ..."

Cited by 1
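The "interactive specification" described in this snippet — adding a dI_t * DEBT_{t-1} interaction term to a linear house-price regression — can be sketched with ordinary least squares on synthetic data. All variable names and coefficient values below are illustrative, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
dI = rng.normal(size=n)           # income shock (illustrative)
debt = rng.uniform(0.0, 0.3, n)   # lagged leverage measure DEBT_{t-1} (illustrative)

# Synthetic house-price change whose income sensitivity rises with leverage:
dP = 0.5 + 1.0 * dI + 2.0 * dI * debt

# OLS with an interaction column, as in the interactive specification:
X = np.column_stack([np.ones(n), dI, dI * debt])
coef, *_ = np.linalg.lstsq(X, dP, rcond=None)
# In this noiseless setup, lstsq recovers [0.5, 1.0, 2.0] exactly.
print(np.round(coef, 3))
```

A positive coefficient on the interaction column is exactly the pattern the paper tests for: house prices in more leveraged cities reacting more strongly to the same income shock.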

### Table 3 - Leverage equation: model with asymmetric effect of cash flow; dependent variable: (B/K)_t; sample period: 1980-1990; GMM estimates in first differences

"... In PAGE 19: ...sales growth in capturing greater actual investment and hence a greater need for finance dominates in these cases. In Table3 , the cash flow coefficient is allowed to differ depending whether cash flow increases (D= I) or decreases (D, = 0). The coefficients on cash flow are negative and significant in both regimes for the sample of affiliates to large national groups, although the coefficient is larger in absolute value when cash flow decreases.... ..."

### Table 1: Numerical values of some characteristic entities obtained at the central point of the image in Figure 3 using different amounts of additive Gaussian noise and automatic scale selection. Note the stability of the selected integration scale (proportional to sdet L) with respect to variations in the noise level, and that the selected local scale t_Q increases with the noise level. Observe also the increasing difference between the estimates of the normalized anisotropy ~Q computed at the selected local scale, and at zero local scale (true value 0.600). The last two columns show the error in surface orientation computed by monocular shape-from-texture under a specific assumption about the surface texture (weak isotropy).

1996

"... In PAGE 15: ...3 it is shown that under a certain assumption about the surface texture (weak isotropy), the estimate of surface orientation is directly related to the normalized anisotropy ~ Q, and to the eigenvector of L corresponding to the maximum eigenvalue. Table1 illustrates the accuracy in estimates of ~ Q and surface orientation computed in this 4In these curves there is also a minimum in the signature of ~ Q at coarse scales. The reason why this occurs is that the higher-frequency sine component is suppressed much faster than the lower-frequency sine component.... In PAGE 29: ... The middle column shows the same cylindrical surface image that was used in the rst row in Figure 4. Here, 25% white Gaussian noise has been added; a noise level high enough to ensure that direct computations on unsmoothed data are bound to fail (compare with Table1 ). It is quite obvious that the adaptive multi-scale blob detection technique is able to handle this noise level without much di culty.... ..."

Cited by 44
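In scale-space texture analysis of this kind, the normalized anisotropy ~Q is commonly computed from the eigenvalues λ1 ≥ λ2 of the smoothed second-moment matrix as (λ1 − λ2)/(λ1 + λ2); I assume that definition here, and the matrix entries below are hypothetical:

```python
import math

def normalized_anisotropy(a, b, c):
    """Normalized anisotropy (l1 - l2) / (l1 + l2) of the symmetric
    2x2 second-moment matrix [[a, b], [b, c]].

    Returns 0 for an isotropic matrix and approaches 1 as the local
    image structure becomes increasingly directional.
    """
    mean = (a + c) / 2.0
    # Closed-form eigenvalues of a symmetric 2x2 matrix:
    spread = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    l1, l2 = mean + spread, mean - spread
    return (l1 - l2) / (l1 + l2)

print(normalized_anisotropy(1.0, 0.0, 1.0))             # isotropic -> 0.0
print(round(normalized_anisotropy(2.0, 0.0, 1.0), 3))   # elongated -> 0.333
```

Under the weak-isotropy assumption mentioned in the caption, this scalar (together with the dominant eigenvector direction) is what feeds the monocular shape-from-texture estimate of surface orientation.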

### Table 3: Example 3 Results for Frequency ω = 0.10. Columns: Series, Amplitude, Complex Scale, Real Scale

1999

"... In PAGE 16: ....25 frequencies as being important. We display the results of the frequency 0.10 analysis in Table3 , the results for the 0.25 frequency are similar.... ..."

Cited by 2

### Table 2. Confusion matrix for classifying horizontal head orientation. Absolute counts are given. Correct classification was achieved 52.0% of the time.

in Estimating Head Pose with Neural Networks- Results on the Pointing04 ICPR Workshop Evaluation Data

"... In PAGE 3: ...verage error of 9.5 degrees for pan and 9.7 degrees for tilt on the multi-user test set. Table2 summarizes the pan and tilt estimation results on the Pointing04 ICPR workshop data. It has to be noted that the face orientation data used here consists of faces collected at orientations of 15 degree steps for horizontal rotation, and 30 degree steps for vertical ro- tation.... ..."

### Table 2: Results of estimated motion parameters from stereo images 4.7 Conclusions Several algorithms for motion extraction have been described in this chapter. For pure translation and pure rotation, linear methods are still favourable and can give reasonable results. For general cases, the availability of two pairs of stereo images leads to a linear method which can give much more reliable results and which does not suffer from the ambiguous scale problem existing for any method using a pair of monocular images.

"... In PAGE 65: ... The translation vector T is then calculated using Equation 73 once the rotation para- meters are available. The simulation has run 100 times with random noise, and the mean values of the calculated motion parameters are shown in Table2 together with their true values. From the table it can be seen that the estimated motion parameters are very close to their true values although the 3D point coordinates are corrupted with noise.... ..."