### Table 5: Prediction accuracies by the classical LDA in the original space and the generalized LDA algorithms in the nonlinearly transformed feature space. In the Mfeature dataset, the classical LDA was not applicable due to the singularity of the within-class scatter matrix.

2006

"... In PAGE 30: ... Using the training set of the rst pair among ten pairs and the nearest-neighbor classi er, 5 cross-validation was used in order to determine the optimal value for a8 in the Gaussian kernel function a0 a2a44 a2 a15 a0 a14a52 a7 a2a15a4 a6 a10 a3 a6 a12a11 a26 a2 a6 a11 a30 a14a13 a11 a16a15 . After nding the optimal a8 values, mean prediction accuracies from ten pairs of training and test sets were calculated and they are reported in Table5 . In the regularization method, while the regularization parameter was set as 1, the optimal a8 value was searched by the cross-validation.... In PAGE 30: ... In the regularization method, while the regularization parameter was set as 1, the optimal a8 value was searched by the cross-validation. Table5 also reports the prediction accuracies by the classical LDA in the original data space and it demonstrates that nonlinear discriminant analysis can improve prediction accuracies compared with linear discriminant analysis. Figure 3 illustrates the computational complexities using the speci c sizes of the train- ing data used in Table 5.... In PAGE 30: ... Table 5 also reports the prediction accuracies by the classical LDA in the original data space and it demonstrates that nonlinear discriminant analysis can improve prediction accuracies compared with linear discriminant analysis. Figure 3 illustrates the computational complexities using the speci c sizes of the train- ing data used in Table5 . As in the comparison of the generalized LDA algorithms, the method To-a3 a44 a0a4a1 a52 [5] gives the lowest computational complexities among the compared... In PAGE 31: ...5 2 2.5 3x 1013 Data sets Complexity (flops) (a1 ) To-NR(Sw) (+) LDA/GSVD (O) RLDA (X) To-N(Sw) (a8 ) Proposed LDA/GSVD (a0 ) To-R(Sb) Figure 3: The gures compare complexities required for the generalized LDA algorithms in the feature space for speci c problem sizes of training data used in Table5 . 
From the left on x-axis, the data sets, Musk, Isolet, Car, Mfeature, Bcancer and Bscale are corresponded.... In PAGE 31: ... methods. However, combining To-a3 a44 a0a36a1 a52 with kernel methods does not make effective nonlinear dimension reduction method as shown in Table5 . In the generalized eigenvalue problem, a22 a28a1 a22 a6 a1 a21 a7 a40 a22 a3 a22 a6 a3 a21 where a22 a60a1a24a22 a6 a1 a15 a22 a3 a22 a6 a3 a35a56a37 a22 a41 a22 a15 the data dimension is equal to the number of data and the rank of a22 a49a3 a22 a6 a3 is not severely smaller than the data dimension.... ..."

### Table 1: Rational Krylov Algorithm (Approximate General Version)

1997

Cited by 1

### Table 2: Summary of the results obtained for the initial point and the optimal solution for the General Aviation aircraft design problem.

"... In PAGE 9: ... Similarly, the fuselage diameter does not increase from its lower bound due to both weight and drag considerations. From Table2 it can also be seen that the cruise ve- locity was increased; however, this is a design variable that is not visualized on the CAD model. While the optimization progress presents valuable information to the designer, in order to fully understand the tradeoffs, one must also consider the impact of the constraints as discussed next.... ..."

### Table 1: Perplexity results using various configurations on general, on-topic and off-topic word lists.

1998

"... In PAGE 3: ... The word lists were used to interpolate the general and topic-specific models for each of the 57 articles. Table1 shows the perplexity values obtained on the reference transcripts of the test set, using the general language model only, the topic-specific language models only, linear interpolation of the general and topic-specific language model for each story, and the interpolated language models for various selection configura- tions of the general, on-topic and off-topic word lists. MI indi- cates that the topic lists were derived using the average mutual information measure.... ..."

Cited by 7
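The interpolation scheme the snippet describes can be sketched with toy unigram models; the vocabularies, probabilities, and interpolation weight below are made-up illustrations, not the paper's models:

```python
import math

def perplexity(model, words):
    # PP = exp(-(1/N) * sum_i log p(w_i))
    return math.exp(-sum(math.log(model[w]) for w in words) / len(words))

def interpolate(general, topic, lam):
    # P(w) = lam * P_general(w) + (1 - lam) * P_topic(w)
    return {w: lam * general[w] + (1 - lam) * topic.get(w, 0.0) for w in general}

# toy models: a flat general model and a topic model peaked on two topic words
general = {w: 0.2 for w in ("the", "cat", "dog", "stock", "market")}
topic = {"the": 0.1, "cat": 0.05, "dog": 0.05, "stock": 0.4, "market": 0.4}
mixed = interpolate(general, topic, lam=0.5)

on_topic_text = ["stock", "market", "stock"]
print(perplexity(general, on_topic_text))  # 5.0 (uniform general model)
print(perplexity(mixed, on_topic_text))    # lower on on-topic text
```

On on-topic text the interpolated model assigns higher probability to each word than the flat general model, so its perplexity is lower, which is the effect the table measures across word-list configurations.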

### Table 2: Accuracy comparison between ZRST and RSG. Experiment 3. Bisection/multisection methods can also be used for the eigenproblem of symmetric tridiagonal pencils. However, we are unable to locate a suitable code in standard software packages specifically designed for our problem in its most general form. So instead, we tested the standard eigenvalue problem (i.e., S = I and T = [1; 2; 1]) as a special case. Table 3 shows the computational results comparing our algorithm ZRST with the bisection algorithm DSTEBZ in LAPACK [1] on this problem. Our algorithm ZRST is about three times faster than DSTEBZ. For the special case S = I, the standard symmetric tridiagonal eigenproblem, more detailed results are reported in [14].
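For the special case S = I tested above, bisection works by counting eigenvalues with a Sturm sequence. A minimal sketch of that standard technique (this is a generic implementation, not the paper's ZRST or LAPACK's DSTEBZ):

```python
import math

def count_below(diag, off, x):
    """Number of eigenvalues of the symmetric tridiagonal matrix with
    diagonal `diag` and off-diagonal `off` that are less than x, by
    counting negative pivots of the LDL^T (Sturm) recurrence
    d_i = diag[i] - x - off[i-1]^2 / d_{i-1}."""
    count, d = 0, 1.0
    for i in range(len(diag)):
        b2 = off[i - 1] ** 2 if i > 0 else 0.0
        d = diag[i] - x - b2 / d
        if d == 0.0:
            d = -1e-300        # nudge an exact-zero pivot; counts as negative
        if d < 0.0:
            count += 1
    return count

def kth_eigenvalue(diag, off, k, tol=1e-12):
    """Bisection for the k-th smallest eigenvalue (k = 1..n),
    starting from Gershgorin interval bounds."""
    n = len(diag)
    rad = [(abs(off[i - 1]) if i > 0 else 0.0) +
           (abs(off[i]) if i < n - 1 else 0.0) for i in range(n)]
    lo = min(diag[i] - rad[i] for i in range(n))
    hi = max(diag[i] + rad[i] for i in range(n))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_below(diag, off, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# T = tridiag(1, 2, 1) of order 4 has eigenvalues 2 + 2*cos(k*pi/5)
lam1 = kth_eigenvalue([2.0] * 4, [1.0] * 3, 1)
print(lam1)  # ~0.381966 = 2 - 2*cos(pi/5)
```

Each bisection step halves the interval, so the cost is O(n) per step and O(n log(range/tol)) per eigenvalue, which is the regime in which the table's timing comparison is made.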

### Table 1: Primitive solids, and the approximating superellipsoid shape parameters. "-" means that in general the primitive cannot be recovered using the superellipsoid model we employ.

1994

Cited by 5

### Table 1. Table of process model performance. Combined model quality is shown with the individual performance of the first 10 principal modes. It can be seen that the more significant modes are generally more predictable.

"... In PAGE 6: ... p4 has almost no visible effect. Table1 shows the quality of performance obtained when using various models to predict the shape (the feature val- ues) based on input parameter values. Using these results, it is possible to determine how predictable certain shape vari- ations are given parameter changes.... ..."

### Table 3. Results on the flat-tire domain, and the easy and hard 8-puzzle problems. Blackbox was run in its default mode, with -solver graphplan (BlackboxGP), and with -solver walksat (BlackboxWS). The planners implemented apply to problems for which the objective is to maximize the probability of satisfying the problem goals within the given time window, or the related goal of minimizing expected completion time. More general MDP problems are often given by specifying rewards for performing certain (noop or non-noop) actions. It appears that some of the kinds of information propagated here should be useful in those more general settings, and it would be interesting to see whether this generalization could be made without a sacrifice in performance. Acknowledgements. This research is sponsored in part by NSF National Young Investigator grant CCR-9357793, NSF grant CCR-9732705 and an AT&T / Lucent Special Purpose Grant in Science and Technology. We would like to thank the anonymous reviewers for their detailed, thoughtful, and helpful comments.

1999

"... In PAGE 11: ... We consider the goal of achieving board state ABCDEFGH (reading left to right, top to bottom) from two di erent initial states: one in which a solution requires 18 steps and one in which a solution requires 30 steps (this is the case of initial board HGFEDCBA ). Results are given in Table3 . Note that PGraphplan is the fastest of all planners tested (even the deterministic ones) on this problem.... ..."

Cited by 59