### Table 1: Categorization of Manifold Learning Methods

2007

"... In PAGE 2: ... Table 1: Categorization of Manifold Learning Methods. 2 Manifold Learning Methods and their connections to Distance Metric Learning. Manifold Learning approaches can be categorized along the following two dimensions: first, whether the learnt embedding is linear or nonlinear; and second, whether the structure to be preserved is global or local (see Table 1). Based on the analysis in Section 1, all the linear methods in Table 1 except Multidimensional Scaling (MDS) learn an explicit linear projective mapping and can be interpreted as the problem of distance metric learning. MDS finds the low-rank projection that best preserves the inter-point distance matrix E.... ..."
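The snippet notes that MDS finds a low-rank embedding that best preserves the inter-point distance matrix. A minimal sketch of classical MDS, assuming NumPy (the function name `classical_mds` is illustrative, not from the cited paper): square the distances, double-center, and keep the top eigenvectors of the resulting Gram matrix.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS: embed n points in R^k from an (n, n) distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]         # pick the k largest
    scale = np.sqrt(np.maximum(w[idx], 0))
    return V[:, idx] * scale              # (n, k) embedding coordinates
```

When the input distances are exactly Euclidean and k matches the intrinsic dimension, the embedding reproduces them; otherwise it is the best low-rank approximation in the Gram-matrix sense.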

### Table 1: Embedding data sets into manifolds

"... In PAGE 3: ... To evaluate the accuracy of the manifolds obtained we used several measures. Table 1 compares the manifold with the best plane embedding in terms of (1) average error: the ratio of the objective function value (I) to the sum of squares of all the effective distances, (2) average expansion: the average expansion factor for pairs whose distance went up compared to the original, (3) average contraction: the average shrinking factor for pairs whose distance went down, and (4) maximum distortion: the product of maximum contraction (max factor by which some edge length was reduced) and maximum expansion (factor by which some edge length was... In PAGE 4: ... To test how well the learned manifold generalizes, we dropped at random 8% of the measured effective distances (edges) from the data sets, computed the manifold on the rest of the observations and made predictions on the 8% not used in computing the manifold. The last column in Table 1 shows that the manifold prediction error is low on all the measures and is comparable to that on the full set of values. From this we conclude that the manifold captures and generalizes wireless connectivity accurately.... ..."
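The evaluation measures described above (average expansion, average contraction, maximum distortion) can be sketched directly from pairs of original and embedded distances. A minimal illustration, assuming NumPy; the helper name `distortion_metrics` and the flat-array interface are assumptions for this sketch, not the paper's code.

```python
import numpy as np

def distortion_metrics(d_orig, d_emb):
    """Compare original vs. embedded pairwise distances (1-D arrays).

    Returns (avg_expansion, avg_contraction, max_distortion), where
    max_distortion = (max expansion factor) * (max contraction factor).
    """
    ratio = np.asarray(d_emb, float) / np.asarray(d_orig, float)
    expanded = ratio[ratio > 1.0]           # pairs whose distance went up
    contracted = ratio[ratio < 1.0]         # pairs whose distance went down
    avg_expansion = expanded.mean() if expanded.size else 1.0
    avg_contraction = (1.0 / contracted).mean() if contracted.size else 1.0
    max_distortion = ratio.max() * (1.0 / ratio.min())
    return avg_expansion, avg_contraction, max_distortion
```

A perfectly distance-preserving embedding yields all three values equal to 1.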

### Table 2: Comparison of recognition rate on one rat on one generalized manifold after linear/nonlinear alignment.

2003

Cited by 17

### Table 1: Unsupervised distance metric learning methods. This group of methods essentially learns a low-dimensional embedding of the original feature space, and can be categorized along two dimensions: preserving global structure vs. preserving local structure; and linear vs. nonlinear

2007

### Table 2.2: Non-Linearly Separable Classification Data

### Table 4: Elementary School Non-Linear Production Function

in Enhancing our Understanding of the Complexities of Education: "Knowledge Extraction from Data" using

### Table 4.13: Stability Margin 1/‖Tzw‖∞

The behaviour of the control system is characterised in terms of the closed-loop norms in Table 4.13 and Table 4.14 on the next page in fixed operating points only. It would be substantially more complicated to analyse a gain-scheduled control scheme as a linear parameter-varying system, and the conditions for robust stability and robust performance would only be sufficient in this framework, so that practically one often resorts to nonlinear simulation studies. Yet, the simple numbers 1/‖Tzw‖∞ and ‖Ted‖∞ already give a good indication of the control system behaviour. The system exhibits good stability properties over a wide range of operating conditions; at low manifold pressure, however, and especially during idle operation, the stability is compromised.

### Table 1: Sampling density of various models. The first dimension in parentheses corresponds to the (rd/rs) dimension, which is only applicable to the embedding calculation, but not the linear prerendering.

"... In PAGE 5: ... The images are rendered at a resolution of 320 by 320 and principal component analysis (PCA) reduces the data size. The sampling density of each model is shown in Table 1. For simplicity, the isotropic single-lobe version of the Lafortune model is employed.... In PAGE 5: ... As a result, in addition to the nonlinear parameters, we also need to sample along the rd/rs parameter. We sample each of the five models with regular grids (Table 1). Next we seek to embed all five models into a single embedding space.... In PAGE 7: ...nt suprathreshold behavior observed in Section 2.1. Fitting a single BRDF to a target model takes about 10 minutes on average on a single PC, but the computation needs to be performed only once. We sample the BRDF models with the same grid as for the embedding space (Table 1) and store all the pairwise conversions. Figure 6: Illustration of the manifolds spanned by two analytical BRDF models in an abstract unified BRDF space.... ..."
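The snippet mentions using PCA to reduce the size of the rendered-image data before embedding. A minimal SVD-based PCA sketch, assuming NumPy; the function name `pca_reduce` and its return signature are assumptions for illustration, not the cited implementation.

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X (n_samples, n_features) onto the top-k principal components.

    Returns (Z, Vk, mu): the (n, k) scores, the (k, n_features) component
    matrix, and the feature mean, so X can be approximated as Z @ Vk + mu.
    """
    mu = X.mean(axis=0)
    Xc = X - mu                                   # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    Vk = Vt[:k]                                   # top-k principal directions
    return Xc @ Vk.T, Vk, mu
```

If the centered data has rank at most k, the rank-k reconstruction `Z @ Vk + mu` recovers the original samples exactly; otherwise it is the best rank-k approximation in the least-squares sense.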