### Table 5.3 presents the computational cost of the critical steps. The nonlinear system solution time comprises the calculation of the Jacobian (matrix assembly), application of the nonlinear operator, formation of the multilevel preconditioner, and the solution of the linear system. The linear system solution time is dominated by matrix-vector multiplication, application of the multilevel preconditioner (preconditioning), and orthogonalization of the Krylov subspace vectors. The nonlinear solve takes most of the time, and it parallelizes fairly well.

1998

Cited by 6

### Table 1 summarizes the fundamental differences between the two attacks. Following the notation in [7], Templates estimate the data-dependent part h_t itself, whereas the Stochastic Model approximates the linear part of h_t in the chosen vector subspace (e.g., F9) and cannot capture non-linear parts. Templates build a covariance matrix for each key dependency, whereas the Stochastic Model generates only one covariance matrix, thereby neglecting possible multivariate key-dependent noise terms. A further drawback is that terms of the covariance matrix may be distorted by non-linear parts of h_t in F9.

"... In PAGE 5: ... Table 1. Fundamental differences between Templates and the Stochastic Model. (Footnote 5: The Euclidean norm proposed in [7] produces very similar results.) ... ..."
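The per-key versus pooled covariance distinction drawn above can be made concrete. A minimal numpy sketch on synthetic "traces" (the leakage model and all dimensions here are invented for illustration; real template and stochastic attacks estimate these quantities from measured power traces):

```python
import numpy as np

rng = np.random.default_rng(0)
n_keys, n_traces, n_pts = 4, 200, 5   # toy setting, not realistic trace sizes

# Hypothetical leakage: key-dependent mean plus Gaussian noise.
means = rng.normal(size=(n_keys, n_pts))
traces = {k: means[k] + 0.3 * rng.normal(size=(n_traces, n_pts))
          for k in range(n_keys)}

# Template attack: one (mean, covariance) pair per key dependency.
templates = {k: (t.mean(axis=0), np.cov(t, rowvar=False))
             for k, t in traces.items()}

# Stochastic model: a single pooled covariance over all keys, which
# by construction ignores key-dependent structure in the noise.
centred = np.vstack([t - t.mean(axis=0) for t in traces.values()])
pooled_cov = np.cov(centred, rowvar=False)
```

The single `pooled_cov` is the source of the drawback the entry mentions: any noise term that varies with the key is averaged away.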

### Table III. Comparison of ROC classifier performance for two values of P_d. Results are shown for the linear filter versus four different types of nonlinear training. N: white-noise training, G-S: Gram-Schmidt orthogonalization, subN: PCA subspace noise, C-H: convex-hull rejection class.

1998

Cited by 2


### Table 5.7: Parameters used in the comparison experiment. The threshold values used are listed in Table 5.7. The data set for this experiment is 100,000 transactions of synthetic data at different dimensionalities. For the reasons described in Section 5.3.1 and Section 5.4, only simple hyper-rectangular data are generated for this experiment. Only one subspace is chosen, and three-dimensional clusters are embedded in this subspace. All algorithms discover the subspace successfully. Figure 5.7 shows their performance. They all scale non-linearly with the dimensionality. The running time of the 50-dimensional case is 25.0, 18.4 and 31.4 times that of the 10-dimensional case for ENCLUS SIG, ENCLUS INT and CLIQUE respectively. This suggests our algorithms have better scalability than CLIQUE. We suspect that the ENCLUS algorithms run faster than CLIQUE because the candidate sets in ENCLUS are based on subspaces rather than on units as in CLIQUE. Since the potential number of dense units is much larger than the

1999
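The suspected reason ENCLUS outperforms CLIQUE, generating candidates over subspaces rather than over the far more numerous dense units, can be illustrated with an Apriori-style join on subspace index sets. This is a generic sketch of bottom-up candidate generation, not the exact ENCLUS or CLIQUE pruning criteria:

```python
from itertools import combinations

def candidates(prev):
    """Generate (k+1)-dim candidate subspaces from the k-dim dense ones."""
    prev = set(prev)
    out = set()
    for a, b in combinations(prev, 2):
        merged = tuple(sorted(set(a) | set(b)))
        if len(merged) == len(a) + 1:
            # Prune: every k-dim projection must itself be dense.
            if all(s in prev for s in combinations(merged, len(a))):
                out.add(merged)
    return sorted(out)

# Dense 2-D subspaces over dimensions {0..4}; only one 3-D candidate survives.
dense_2d = [(0, 1), (0, 2), (1, 2), (3, 4)]
result = candidates(dense_2d)   # → [(0, 1, 2)]
```

Because the candidate universe here is subspaces (at most one per dimension combination) rather than grid units (exponentially many per subspace), the lattice traversed is far smaller, consistent with the timing ratios quoted above.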

### Table 1. Recognition accuracies (in %) with k = 20 subspace projections using 5-fold Cross-Validation.

2004

"... In PAGE 22: ...31% with the highest rate achieved being 79.62% as shown in Table 1. The full image-vector nearest-neighbor (template matching) i.... In PAGE 23: ...30% with the highest rate being 82.90% as shown in Table 1. We found little difference between the two ICA algorithms and noted that ICA resulted in the largest performance variation in the 5 trials (7.... In PAGE 24: ...able 2. Comparison of various techniques across multiple attributes (k = 20).

|             | PCA     | ICA     | KPCA      | Bayes   |
|-------------|---------|---------|-----------|---------|
| Accuracy    | 77%     | 77%     | 87%       | 95%     |
| Complexity  | O(10^8) | O(10^9) | O(10^9)   | O(10^8) |
| Uniqueness  | yes     | no      | yes       | yes     |
| Projections | linear  | linear  | nonlinear | linear  |

with the highest rate being 92.37% as shown in Table 1. The standard deviation of the KPCA trials was slightly higher (3.... In PAGE 24: ...83% with the highest rate achieved being 97.87% as shown in Table 1. The standard (Footnote 10: In practice, k_I > k_E yields good results.)... In PAGE 26: ... 4.6 Performance of Manifolds The relative performance of the principal manifold techniques and Bayesian matching is summarized in Table 1 and Figure 11. The advantage of probabilistic matching over metric matching on both linear and nonlinear manifolds is quite evident (an 18% increase over PCA and 8% over KPCA).... ..."

Cited by 7
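The experimental protocol in this entry, project into a k = 20 subspace and then classify, can be sketched with a plain PCA projection and a 1-nearest-neighbor match. The synthetic data and the single train/test split below are assumptions for illustration; the paper uses face images and full 5-fold cross-validation:

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

def nn_accuracy(train_X, train_y, test_X, test_y):
    """1-nearest-neighbor accuracy in the projected subspace."""
    d = np.linalg.norm(test_X[:, None] - train_X[None], axis=2)
    pred = train_y[d.argmin(axis=1)]
    return (pred == test_y).mean()

# Toy data: 3 well-separated classes in 50-D as a synthetic stand-in
# for face images; the entry's k = 20 subspace dimension is kept.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 1.0, size=(40, 50)) for m in (0.0, 3.0, 6.0)])
y = np.repeat(np.arange(3), 40)

# One fold of a 5-fold split (full CV would rotate the held-out fifth).
idx = rng.permutation(len(y))
test, train = idx[:24], idx[24:]
Z, Vt_k = pca_project(X[train], k=20)
Z_test = (X[test] - X[train].mean(axis=0)) @ Vt_k.T
acc = nn_accuracy(Z, y[train], Z_test, y[test])
```

Note that the test set is centered with the training mean and projected with the training components, which is what keeps each cross-validation fold honest.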

### Table 3 Discretized subspaces

2004

Cited by 5

### Tables method subspace sparsity

2004

Cited by 4