### Table 1 Properties of techniques for dimensionality reduction.

"... In PAGE 11: ...2. General properties In Table 1, the thirteen dimensionality reduction techniques are listed by four general properties: (1) the convexity of the optimization problem, (2) the main free... In PAGE 11: ... We discuss the four general properties below. For property 1, Table 1 shows that most techniques for dimensionality reduction optimize a convex cost function. This is advantageous, because it allows for finding the global optimum of the cost function.... In PAGE 11: ... Because of their nonconvex cost functions, autoencoders, LLC, and manifold charting may suffer from getting stuck in local optima. For property 2, Table 1 shows that most nonlinear techniques for dimensionality reduction have free parameters that need to be optimized. By free parameters, we mean parameters that directly influence the cost function that is optimized.... In PAGE 11: ... The main advantage of the presence of free parameters is that they provide more flexibility to the technique, whereas their main disadvantage is that they need to be tuned to optimize the performance of the dimensionality reduction technique. For properties 3 and 4, Table 1 provides insight into the computational and memory complexities of the computationally most expensive algorithmic components of the techniques. The computational complexity of a dimensionality reduction technique is of importance to its applicability.... In PAGE 12: ...duction technique is determined by data properties such as the number of datapoints n, the original dimensionality D, the target dimensionality d, and by parameters of the techniques, such as the number of nearest neighbors k (for techniques based on neighborhood graphs) and the number of iterations i (for iterative techniques).
In Table 1, p denotes the ratio of nonzero elements in a sparse matrix to the total number of elements, m indicates the number of local models in a mixture of factor analyzers, and w is the number of weights in a neural network. Below, we discuss the computational complexity and the memory complexity of each of the entries in the table.... ..."
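As a toy illustration (our own construction, not taken from the paper) of how the quantities named in the excerpt — n datapoints, original dimensionality D, target dimensionality d — drive the complexity entries such a table records, consider a plain PCA-style reduction:

```python
import numpy as np

# n datapoints, original dimensionality D, target dimensionality d.
rng = np.random.default_rng(0)
n, D, d = 200, 10, 2
X = rng.normal(size=(n, D))

# Centering and forming the D x D covariance costs O(n D^2) time and
# O(D^2) memory -- the kind of per-technique entry such a table tabulates.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / n

# A full eigendecomposition costs O(D^3); keep the top-d eigenvectors.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
Y = Xc @ eigvecs[:, order[:d]]   # the n x d low-dimensional embedding

print(Y.shape)
```

For neighborhood-graph methods the dominant cost would instead scale with k, and for iterative methods with the iteration count i, which is why those parameters appear alongside n, D, and d.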

### TABLE 1 Local Network Positions

2005

Cited by 2

### Table 2: Technology Mapping results

"... In PAGE 8: ... The results show that the Boolean approach reduces the number of matching algorithm calls, finds smaller-area circuits in less CPU time, and reduces the initial network graph because generic 2-input base functions are used. Table 2 presents a comparison between SIS and Land for the library 44-2.genlib, which is distributed with the SIS package.... ..."

### Table 1: Advantages of aggressive dimensionality reduction

2001

"... In PAGE 10: ... The rationale behind these methods is that any change in the nearest neighbor from the full dimensionality leads to loss of information; the rationale behind our approach is to be aggressive in removing the dimensions which have low coherence as noise; thus, on an overall basis the aggressiveness of a dimensionality reduction process which uses the coherence probability of the dimensions may lead to very low precision with respect to the original data but much higher effectiveness and coherence. In order to illustrate our point, we have indicated (in Table 1) the prediction accuracy using a 1%-thresholding technique in which only those eigenvalues which are less than 1% of the largest eigenvalue are discarded. This prediction accuracy is typically very close to the full dimensional accuracy and is significantly lower than the optimal accuracy for all 3 data sets (as illustrated in the accuracy charts of Figures 5, 8, 11).... In PAGE 10: ... Thus, such a drastic reduction in dimensionality does not attempt to mirror the original nearest neighbors in the data; but rather improves their quality by removing the noise effects in high dimensionality. It is also clear from Table 1 that the optimal accuracy dimensionality is significantly lower than the 1%-thresholding method. In fact, the dimensionality for the 1%-thresholding method is quite close to the full dimensionality.... ..."

Cited by 13

### Table 4: Optimized symmetry reduction

"... In PAGE 6: ... Redundant transition removal and symmetry are compatible because if two transitions are independent in the original state graph, they are also independent in the symmetry-reduced state graph. As shown in Table 4, because there is no sharing between Unexpanded and Reached, less run-time reduction is obtained for preserved/guaranteed guards. However, a better run-time reduction is obtained for redundant transition removal, because removing a redundant transition also removes the computation required to find the canonical state.... ..."
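The "canonical state" computation the excerpt refers to can be illustrated with a minimal sketch (our own construction, not the paper's algorithm): under full symmetry among identical processes, every global state maps to the sorted tuple of its local states, so all states in one symmetry orbit collapse to a single representative.

```python
from itertools import product

def canonical(state):
    """Canonical representative of a state's symmetry orbit: under full
    process symmetry, the multiset of local states, stored sorted."""
    return tuple(sorted(state))

# Enumerate all global states of 3 identical processes whose local state
# ranges over {0, 1, 2}, then count the symmetry-reduced state space.
full = set(product(range(3), repeat=3))
reduced = {canonical(s) for s in full}
print(len(full), len(reduced))   # 27 vs 10: orbits are multisets of size 3
```

Skipping this canonicalization for a transition is exactly the saving the excerpt attributes to redundant transition removal.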

### Table 1. Classification accuracy for dimensionality reduction algorithms (ROSIS, KSC, BOTSWANA)

"... In PAGE 7: ... The number of features for each approach, that is, the number of principal components for PCA, the number of features for SFS, and the number of contiguous intervals in PieceConst, were chosen using 10-CV on the training sets. Table 1 shows the overall classification accuracy for the datasets ROSIS, KSC and PAVIA when the optimal number of features is chosen using cross-validation. Overall, PieceConst performs as well as or better than the other dimensionality reduction methods.... ..."
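The selection protocol the excerpt describes for PCA — pick the number of components that maximizes cross-validated accuracy — can be sketched on synthetic data (a toy setup of ours; the paper's datasets, classifier, and the PieceConst method are not reproduced, and for brevity the PCA basis is fit once on all data rather than refit inside each fold):

```python
import numpy as np

rng = np.random.default_rng(2)
n, D = 300, 8
X = rng.normal(size=(n, D))
X[:, 0] *= 3.0                        # one dominant, informative axis
y = (X[:, 0] > 0).astype(int)         # the label depends on that axis

def cv_accuracy(X, y, d, folds=10):
    """Nearest-centroid accuracy after projecting onto the top-d PCs."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:d].T
    fold_id = np.arange(len(y)) % folds
    correct = 0
    for f in range(folds):
        tr, te = fold_id != f, fold_id == f
        centroids = np.stack([Z[tr][y[tr] == c].mean(axis=0) for c in (0, 1)])
        dists = ((Z[te][:, None, :] - centroids) ** 2).sum(axis=-1)
        correct += (dists.argmin(axis=1) == y[te]).sum()
    return correct / len(y)

# Choose the number of components by 10-fold cross-validated accuracy.
best_d = max(range(1, D + 1), key=lambda d: cv_accuracy(X, y, d))
print(best_d, cv_accuracy(X, y, best_d))
```

The same loop applies unchanged to any feature-count parameter, such as the number of SFS features or PieceConst intervals.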

### Table 2. Constrained graph layout.

"... In PAGE 14: ... The constrained layouts of Graph 1 to Graph 9, with the constraints imposed, are given in Figure 8. Table 2 shows the time in seconds for each method to lay out the constrained graph. Again, Model B is significantly faster than Model A, and Model C is usually as fast as Model B.... ..."

### Table 1. Common graph embedding view for the most popular dimensionality reduction algorithms. Note that type D means direct graph embedding, while L and K mean the linearization and kernelization of the graph embedding, respectively.

2005

"... In PAGE 4: ... (2-4). Table 1 lists the similarity and constraint matrices for all of the above-mentioned methods, together with their corresponding graph embedding types.... In PAGE 4: ... From Eqs. (6) and (7), we can easily derive the formulations of the similarity matrix W and constraint matrix B for PCA/KPCA and LDA/KDA listed in Table 1. Figure 2 plots the intrinsic graphs for PCA and LDA, respectively.... ..."

Cited by 8
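The table itself is not reproduced in the excerpt, so the following sketch assumes the standard convention from the graph-embedding literature: PCA's intrinsic graph has similarity W_ij = 1/n for every pair of datapoints. With that W, the graph Laplacian is exactly the centering matrix, and the linearized graph embedding (type L) recovers the ordinary principal direction:

```python
import numpy as np

rng = np.random.default_rng(3)
n, D = 100, 5
X = rng.normal(size=(n, D)) * np.array([3.0, 1.0, 1.0, 0.5, 0.2])

W = np.full((n, n), 1.0 / n)      # assumed intrinsic graph for PCA
L = np.diag(W.sum(axis=1)) - W    # Laplacian = I - (1/n) 11^T, the centering matrix

# Linearization (type L): extremal eigenvector of X^T L X.
w_graph = np.linalg.eigh(X.T @ L @ X)[1][:, -1]

# Ordinary PCA for comparison: top eigenvector of the sample scatter matrix.
Xc = X - X.mean(axis=0)
w_pca = np.linalg.eigh(Xc.T @ Xc)[1][:, -1]

print(abs(w_graph @ w_pca))       # ~1.0: the two views give the same direction
```

Swapping in a class-based W and a constraint matrix B yields LDA in the same framework, which is what makes the unified view in the table possible.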