### Table 1. Ten statements for computing the Fisher information (see text).

"... In PAGE 2: ... When considering the important limiting case of a pure Poisson-noise process, the log-likelihood function, the Fisher information and thus the MVB can be stated in closed form (Adorf 1996b). Given the intrinsically rather complex definition of the Fisher information, the equations resulting from pure Poisson-noise assumptions are remarkably simple, as is evident from the fact that ten simple IDL statements (see Table 1) suffice to compute all the independent elements of the symmetric 4×4-element Fisher information matrix for joint photometry and astrometry of an arbitrary source on top of a spatially stationary (non-variable) background. ..."
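
The pure-Poisson result quoted above has a compact form: for pixel means μ_i(θ), the Fisher matrix is J_jk = Σ_i (∂μ_i/∂θ_j)(∂μ_i/∂θ_k)/μ_i. A minimal numerical sketch of the 4×4 matrix over flux, position, and background (the Gaussian PSF, grid size, and parameter values are illustrative assumptions, not Adorf's actual IDL statements):

```python
import math

# Illustrative pixel model: mu_i = f * psf_i(x0, y0) + b on an n x n grid,
# with a Gaussian PSF (an assumption for this sketch, not Adorf's model).
def model(f, x0, y0, b, n=15, sigma=1.5):
    norm = 2.0 * math.pi * sigma * sigma
    return [f * math.exp(-((i - x0) ** 2 + (j - y0) ** 2) / (2 * sigma * sigma)) / norm + b
            for i in range(n) for j in range(n)]

def fisher_poisson(theta, h=1e-5):
    """J_jk = sum_i (dmu_i/dtheta_j)(dmu_i/dtheta_k)/mu_i (pure Poisson noise)."""
    mu = model(*theta)
    grads = []
    for k in range(4):
        tp, tm = list(theta), list(theta)
        tp[k] += h
        tm[k] -= h
        grads.append([(p - m) / (2 * h) for p, m in zip(model(*tp), model(*tm))])
    return [[sum(gj * gk / m for gj, gk, m in zip(grads[j], grads[k], mu))
             for k in range(4)] for j in range(4)]

# theta = (flux, x-position, y-position, background)
J = fisher_poisson([1000.0, 7.0, 7.0, 5.0])
```

The matrix is symmetric by construction, and with the source centered on the grid the x and y diagonal entries coincide.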

### Table 1: The Fisher information (FI) matrices and the inverse of the variational covariance (IVC) matrices corresponding to Figure 1. Each cell contains a 2×2 matrix.

2005

"... In PAGE 6: ... For different values of these parameters, we compute the corresponding Fisher information matrices and the covariance matrices of the variational posteriors. The mixture densities of some typical cases are plotted in Figure 1, and the corresponding Fisher information matrices and the inverses of the variational covariance matrices are described in Table 1. Obviously, if the components in the mixture models are widely separated, these two matrices are very similar, whereas, if the components are nearly identical, they are very different. ..."

Cited by 3
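
The separation effect described in the excerpt is easy to reproduce in one dimension: the Fisher information for a mixture component's mean approaches w/σ² when the components are widely separated and drops as they merge, because overlapping components make the labels ambiguous. A sketch under assumed parameters (unit variances, equal weights; the quadrature grid is illustrative):

```python
import math

def mix_pdf(x, m1, m2, w=0.5, s=1.0):
    # Two-component 1-D Gaussian mixture with equal (assumed) unit variances.
    g1 = math.exp(-(x - m1) ** 2 / (2 * s * s))
    g2 = math.exp(-(x - m2) ** 2 / (2 * s * s))
    return (w * g1 + (1 - w) * g2) / (s * math.sqrt(2 * math.pi))

def fisher_m1(m1, m2, h=1e-5):
    """I(m1) = E[(d log p / d m1)^2], via trapezoidal quadrature."""
    lo, hi, n = min(m1, m2) - 8.0, max(m1, m2) + 8.0, 4000
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * dx
        p = mix_pdf(x, m1, m2)
        score = (math.log(mix_pdf(x, m1 + h, m2))
                 - math.log(mix_pdf(x, m1 - h, m2))) / (2 * h)
        total += (0.5 if i in (0, n) else 1.0) * score * score * p * dx
    return total

far = fisher_m1(0.0, 10.0)   # widely separated: I -> w/sigma^2 = 0.5
near = fisher_m1(0.0, 0.5)   # overlapping: label ambiguity loses information
```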

### Table 3.1: Mean and standard deviation, over 200 replications, of the parameter estimate; the two reported standard deviations of the estimate are computed using the true and the observed Fisher information number, respectively, under gamma frailty.

### Table 2. Performance comparison of the G metric (equation (18)) under different conditions of cut scores and parameter values (inequalities (6) and (7)). Three item-selection techniques are reported for the POKS approach (information gain, Fisher information, and random item selection with a 95% confidence interval), whereas only the Fisher information technique is reported for the IRT framework, as it is the most commonly used.

"... In PAGE 31: ... One was set at 0.15 for all conditions, and another was tailored for each test, corresponding to the graphs of Figure 6. Table 2 summarizes the results of the simulations under these different conditions. The random selection represents the average of 9 simulation runs for each cut score. ..."
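
Fisher-information item selection in the IRT framework, as referenced in the caption, typically means administering the item with maximal item information at the current ability estimate. A sketch using the standard 2PL model (the item parameters below are invented for illustration):

```python
import math

def p_correct(theta, a, b):
    # 2PL item response function; a = discrimination, b = difficulty.
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    # Fisher information of one 2PL item: a^2 * P * (1 - P).
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def select_item(theta, items):
    # Administer the item that is most informative at the current ability.
    return max(items, key=lambda item: item_information(theta, *item))

# Invented (a, b) item parameters for illustration.
items = [(1.0, -2.0), (1.5, 0.0), (0.8, 2.0)]
best = select_item(0.0, items)   # the a=1.5, b=0.0 item peaks at theta = 0
```

An item's information peaks where P = 0.5, i.e. at θ = b, which is why adaptive tests converge on items matched to the examinee's ability.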

### Table 6.1: Estimates of the standard deviation for the ratio estimator, obtained using the bootstrap method, the delta method, an asymptotic normal approximation based on the Fisher information of a Poisson regression model, and Cruz-Orive's formula (2.3).
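
Of the methods listed, the bootstrap is the most mechanical to sketch: resample cases with replacement, recompute the ratio, and take the standard deviation of the replicates. A minimal illustration on synthetic data (the distributions and sample size are assumptions, not the paper's data):

```python
import random
import statistics

random.seed(0)
# Synthetic paired data (an assumption; the paper's data are not reproduced).
xs = [random.gauss(10.0, 2.0) for _ in range(200)]
ys = [random.gauss(5.0, 1.0) for _ in range(200)]
ratio = sum(ys) / sum(xs)   # ratio estimator: ybar / xbar

def bootstrap_se(xs, ys, B=1000):
    # Resample cases with replacement, recompute the ratio each time,
    # and report the standard deviation of the bootstrap replicates.
    n = len(xs)
    reps = []
    for _ in range(B):
        idx = [random.randrange(n) for _ in range(n)]
        reps.append(sum(ys[i] for i in idx) / sum(xs[i] for i in idx))
    return statistics.stdev(reps)

se = bootstrap_se(xs, ys)
```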

### Table 1: The Phase Differencing Algorithm. In [2] we have derived the Cramér-Rao lower bound on the error variance in estimating the phase model parameters when the signal is observed in the presence of additive white Gaussian noise. More specifically, with the observations' log-likelihood function as defined there, let SNR = A²/σ² denote the signal-to-noise ratio, where σ² is the observation noise variance. In [2] we conclude that the elements of the Fisher Information Matrix (FIM) block which corresponds to the phase parameters are given by ...

1996

"... In PAGE 8: ... Ideally, the resulting 2-D signal is constant with amplitude A. The algorithm, which is based on the foregoing results, is summarized in Table 1. In the following we refer to the algorithm as the Phase Differencing Algorithm (PD Algorithm). ..."

Cited by 8
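
The Cramér-Rao bound in this entry comes from inverting the Fisher information matrix block for the phase parameters. A generic numerical sketch for a 1-D constant-amplitude sinusoid in AWGN (a simplification of the paper's 2-D phase model; all parameter values are illustrative):

```python
import math

def crb_phase(A=1.0, sigma2=0.5, omega=0.3, phi=0.7, N=64):
    # FIM for theta = (omega, phi) of s_n = A*cos(omega*n + phi) in AWGN:
    #   J_jk = (1/sigma^2) * sum_n (ds_n/dtheta_j)(ds_n/dtheta_k)
    J = [[0.0, 0.0], [0.0, 0.0]]
    for n in range(N):
        d = -A * math.sin(omega * n + phi)   # common factor of both partials
        g = (d * n, d)                       # (ds/domega, ds/dphi)
        for j in range(2):
            for k in range(2):
                J[j][k] += g[j] * g[k] / sigma2
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    # CRB = diagonal of the inverse FIM (2x2 inversion done by hand).
    return J[1][1] / det, J[0][0] / det

crb_omega, crb_phi = crb_phase()
```

The frequency bound shrinks like 1/N³ while the phase bound shrinks like 1/N, so for any reasonable record length the frequency is the better-determined parameter.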

### Table 4. Summary of logbook information from the 1999 and 2000 seasons. [CPUE = catch per unit effort. A unit of effort is one trip by one fisher, with a trip being a single night of marroning.]


"... In PAGE 11: ... The information from logbook holders for the 2000 marron season, and a comparison with the 1999 information, are provided in Table 4. Overall, catch rates were slightly lower in the 2000 season than in the 1999 season (Table 4). Of interest is the reduction in activity by logbook holders and the reduction in the percentage of trips that failed to return the bag limit of 10 legal-sized marron. ..."

### Table 1: Classification Results for Fisher Iris data

2005

"... In PAGE 6: ... Furthermore, a one-class classifier, when compared to a two-class classifier, has a smaller amount of information available with which to build up the mapping function, and therefore a comparison is difficult as well. As shown in Table 1, the self-detector classification method outperforms real-valued negative selection in all classification tests for all chosen radii for 100% training data. For 50% training data, the self-detector classification with radius rs = 0.1 outperforms real-valued negative selection in the detection rate, but it has a higher false positive rate. ..."

Cited by 11
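
The self-detector scheme compared in the excerpt reduces to a radius test: a point is classified as self if it falls within radius rs of any training sample. A minimal sketch (the 2-D data and radius below are illustrative, not the paper's 4-D Iris setup):

```python
import math

def self_detector(train, x, rs=0.1):
    # One-class "self-detector": classify x as self if it lies within
    # Euclidean radius rs of any training (self) sample. rs is illustrative.
    return any(math.dist(x, s) <= rs for s in train)

# Invented 2-D self samples, not the paper's Iris features.
train = [(0.0, 0.0), (0.05, 0.05), (1.0, 1.0)]
near_self = self_detector(train, (0.02, 0.02))   # within rs of (0, 0)
far_nonself = self_detector(train, (0.5, 0.5))   # outside rs of every sample
```

Shrinking rs lowers the false positive rate at the cost of detection, which matches the trade-off the excerpt reports for rs = 0.1.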

### Table 9: Sampling Distribution of Normalized Estimates for Species No. 38 Estimate Mean S.D. Skewness Kurtosis

1998

"... In PAGE 18: ... Table 9 shows that the normalized MCMC estimates have approximately a standard normal distribution. This suggests that interval estimates based on the estimated Fisher information will be fairly reliable in this situation. ..."

Cited by 10
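
The reliability claim in the last excerpt rests on Wald-type normalization: (θ̂ − θ)·√I(θ̂) should be approximately standard normal when the estimated Fisher information is trustworthy. A sketch for the Poisson-rate MLE (the model and sample sizes are illustrative, not the paper's species data):

```python
import math
import random
import statistics

random.seed(1)
LAM_TRUE = 4.0   # illustrative Poisson rate

def poisson(lam):
    # Knuth's multiplicative method for one Poisson draw.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def normalized_mle(n=200):
    xs = [poisson(LAM_TRUE) for _ in range(n)]
    mle = sum(xs) / n        # MLE of the rate is the sample mean
    info = n / mle           # observed Fisher information at the MLE
    return (mle - LAM_TRUE) * math.sqrt(info)

z = [normalized_mle() for _ in range(300)]
mean_z, sd_z = statistics.mean(z), statistics.stdev(z)
```

When the replicates are close to N(0, 1), Wald intervals of the form θ̂ ± 1.96/√I(θ̂) have close to nominal coverage, which mirrors the excerpt's conclusion.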