### Table 3: Weights of the linear model trained on the sparse-21 representation.

2006

"... In PAGE 5: ... We recognize in this picture several well-known features mentioned in previous studies, as well as new ones. The two main differences between previously reported nucleotide features and the features highlighted in Table 3 and Figure 2 are that (i) the features obtained by the LASSO result from a global analysis of the complete dataset, as opposed to statistical analysis of sub- Huesken et al. [33]), and (ii) we provide a precise quantitative assessment of the importance of each feature, the weight of a feature being its contribution to the final predicted silencing efficacy.... In PAGE 7: ... The comparison of the two learned models with each other highlights the conservation of most motifs discussed above, suggesting that they are not just an artifact of the training set but might be related to some biological function. In fact, as observed in Table 3, very few positions seem to be without influence on the efficacy of the siRNA. A question worth investigating is whether all the features appearing in Table 3 and Figure 2 really help predict efficacy, or whether some of them may be discarded.... In PAGE 7: ... The fact that the LASSO model tries to find parsimonious models based on as few features as possible to predict accuracy suggests that all detected motifs indeed play a role.... ..."
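The snippet's point, that each LASSO weight quantifies a feature's contribution to the predicted efficacy, can be illustrated with a minimal coordinate-descent sketch on a hypothetical one-hot ("sparse") design. The dimensions, feature indices, and weights below are invented for illustration and are not taken from the cited paper.

```python
import numpy as np

def soft_threshold(x, t):
    # shrinkage operator used by the LASSO coordinate update
    return np.sign(x) * max(abs(x) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Coordinate-descent LASSO: min_w 0.5*||y - Xw||^2 + lam*||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]       # residual excluding feature j
            w[j] = soft_threshold(float(X[:, j] @ r), lam) / col_sq[j]
    return w

rng = np.random.default_rng(0)
# hypothetical one-hot design, e.g. 30 positions x 4 nucleotides = 120 features
X = rng.integers(0, 2, size=(200, 120)).astype(float)
true_w = np.zeros(120)
true_w[[3, 17, 40]] = [1.5, -2.0, 0.8]           # only three features matter
y = X @ true_w + 0.05 * rng.standard_normal(200)

# center so the l1 penalty is not spent on the intercept
Xc, yc = X - X.mean(axis=0), y - y.mean()
w = lasso_cd(Xc, yc, lam=10.0)
print("selected features:", np.flatnonzero(np.abs(w) > 0.2).tolist())
```

The magnitudes of the surviving weights play the role the snippet describes: they directly measure each feature's contribution to the fitted response.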

### Table 3: ODEs on the Stiefel manifold, k = 4, n = 100; 1000: flop count over one time step for the methods. Second experiment.

2003

"... In PAGE 11: ... Second experiment. In this case the advantage of the commutator-free Lie group methods over the others is much more evident, as we can see in Table 3 and Figure 3 (b). The RKR-C4 performs... ..."

Cited by 3

### Table 1: Categorization of Manifold Learning Methods

2007

"... In PAGE 2: ... Manifold Learning approaches can be categorized along the following two dimensions: first, the learnt embedding is linear or nonlinear; and second, the structure to be preserved is global or local (see Table 1). Based on the analysis in section 1, all the linear methods in Table 1 except Multidimensional Scaling (MDS) learn an explicit linear projective mapping and can be interpreted as the problem of distance metric learning. MDS finds the low-rank projection that best preserves the inter-point distance matrix E.... ..."
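The MDS characterization quoted above, a low-rank embedding that best preserves the inter-point distance matrix, can be sketched as classical MDS: double-center the squared-distance matrix and keep the top eigenpairs. The point cloud below is hypothetical, and when the distances genuinely come from a k-dimensional Euclidean configuration the recovery is exact up to rotation and reflection.

```python
import numpy as np

def classical_mds(D, k):
    """Classical MDS: embed points from a squared-distance matrix D
    by double-centering and taking the top-k eigenpairs."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ D @ J                       # Gram matrix of centered points
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]           # largest eigenvalues first
    L = np.clip(vals[idx], 0, None)            # guard against tiny negatives
    return vecs[:, idx] * np.sqrt(L)

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))               # hypothetical 3-D point cloud
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
Y = classical_mds(D, 3)
# pairwise distances are reproduced up to rotation/reflection
D2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
print(np.allclose(D, D2, atol=1e-6))           # -> True
```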

### Table 1: Our results and their comparison with previous work. The parallel model of computation is the arbitrary CRCW PRAM [8].

The second contribution is based on the following observation: the results in Table 1 require a priori knowledge of the arboricity of the input graph. Since computing the exact value of arboricity seems to be hard [12, 18], we provide algorithms that compute a 2-approximation for arboricity (i.e., an approximation that is at most twice the exact value). Moreover, we show that using the approximate value, we can still obtain an optimal implicit representation of a sparse graph. The k-forest coloring problem is of independent interest, since it is a fundamental problem in the design of fault-tolerant communication networks [7], the analysis of electric networks [6, 15], and the study of rigidity of structures [11]. 2 Preliminaries. We first show that an optimal implicit representation of a graph G can be obtained optimally if a k-forest coloring of G is given.

"... In PAGE 3: ... First, we provide optimal sequential and parallel algorithms for obtaining optimal implicit representations of sparse graphs. Our results and their comparison with previous work are summarized in Table 1. It is worth noting that our results are achieved by simple and rather intuitive techniques compared with those used in [3, 4, 17] and, moreover, our algorithms are easy to implement.... In PAGE 7: ... Hence, the total resource bounds are as those stated in the theorem. 4 Approximating Arboricity. The results listed in Table 1 require a priori knowledge of the arboricity of the input graph in order to obtain its optimal implicit representation. However, the known algorithms for computing the exact value of the arboricity are based on matroid theory: a sequential algorithm [5] and a randomized parallel algorithm [12].... ..."
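One standard route to a 2-approximation of arboricity (a sketch of the general idea, not necessarily the cited paper's algorithm) goes through graph degeneracy: the degeneracy d(G), computable by repeatedly deleting a minimum-degree vertex, satisfies a(G) <= d(G) <= 2a(G) - 1, so reporting d(G) approximates the arboricity a(G) within a factor of 2.

```python
from collections import defaultdict

def degeneracy(n, edges):
    """Degeneracy via min-degree peeling: repeatedly delete a vertex of
    minimum remaining degree; the largest degree seen at deletion time
    is d(G). (Simple O(n^2) version; a bucket queue gives linear time.)"""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(adj[v]) for v in range(n)}
    removed = set()
    d = 0
    for _ in range(n):
        v = min((u for u in range(n) if u not in removed), key=deg.get)
        d = max(d, deg[v])
        removed.add(v)
        for w in adj[v]:
            if w not in removed:
                deg[w] -= 1
    return d

# K4 has arboricity 2; its degeneracy 3 sits within the 2a - 1 bound
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(degeneracy(4, k4))    # -> 3
# a path (a forest) has arboricity 1 and degeneracy 1
path = [(i, i + 1) for i in range(5)]
print(degeneracy(6, path))  # -> 1
```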

### Table 2.1 Representations of subspace manifolds.

1998

Cited by 186

### Table 3b. Solution Statistics for Model 2 (Minimization)

1999

"... In PAGE 4: ... Table 2. Problem Statistics

| Pt | Model 1 Rows | Model 1 Cols | Model 1 0/1 Vars | Model 2 Rows | Model 2 Cols | Model 2 0/1 Vars |
|----|----|----|----|----|----|----|
| 1 | 4398 | 4568 | 4568 | 4398 | 4568 | 170 |
| 2 | 4546 | 4738 | 4738 | 4546 | 4738 | 192 |
| 3 | 3030 | 3128 | 3128 | 3030 | 3128 | 98 |
| 4 | 2774 | 2921 | 2921 | 2774 | 2921 | 147 |
| 5 | 5732 | 5957 | 5957 | 5732 | 5957 | 225 |
| 6 | 5728 | 5978 | 5978 | 5728 | 5978 | 250 |
| 7 | 2538 | 2658 | 2658 | 2538 | 2658 | 120 |
| 8 | 3506 | 3695 | 3695 | 3506 | 3695 | 189 |
| 9 | 2616 | 2777 | 2777 | 2616 | 2777 | 161 |
| 10 | 1680 | 1758 | 1758 | 1680 | 1758 | 78 |
| 11 | 5628 | 5848 | 5848 | 5628 | 5848 | 220 |
| 12 | 3484 | 3644 | 3644 | 3484 | 3644 | 160 |
| 13 | 3700 | 3833 | 3833 | 3700 | 3833 | 133 |
| 14 | 4220 | 4436 | 4436 | 4220 | 4436 | 216 |
| 15 | 2234 | 2330 | 2330 | 2234 | 2330 | 96 |
| 16 | 3823 | 3949 | 3949 | 3823 | 3949 | 126 |
| 17 | 4222 | 4362 | 4362 | 4222 | 4362 | 140 |
| 18 | 2612 | 2747 | 2747 | 2612 | 2747 | 135 |
| 19 | 2400 | 2484 | 2484 | 2400 | 2484 | 84 |
| 20 | 2298 | 2406 | 2406 | 2298 | 2406 | 108 |

Table 3a. Solution Statistics for Model 1 (Maximization). Pt Initial First Heuristic Best Best LP Obj.... In PAGE 5: ...) list the elapsed time when the heuristic procedure is first called and the objective value corresponding to the feasible integer solution returned by the heuristic. For Table 3a, the columns Best LP Obj. and Best IP Obj.... In PAGE 5: ... report, respectively, the LP objective bound corresponding to the best node in the remaining branch-and-bound tree and the incumbent objective value corresponding to the best integer feasible solution upon termination of the solution process (10,000 CPU seconds). In Table 3b, the columns Optimal IP Obj., bb nodes, and Elapsed Time report, respectively, the optimal IP objective value, the total number of branch-and-bound tree nodes solved, and the total elapsed time for the solution process.... ..."

### Table 1. Invariant manifolds in the symmetric representation.

in DYNAMICS OF THREE COUPLED EXCITABLE CELLS WITH D3 SYMMETRY (© World Scientific Publishing Company)

1998

"... In PAGE 5: ...n the following way [Ashwin et al., 1990]: a real variable corresponding to the average phase, $\bar\theta = (\theta_1 + \theta_2 + \theta_3)/3$, and a complex variable $z = \theta_1 + e^{i 2\pi/3}\theta_2 + e^{i 4\pi/3}\theta_3$. The invariant manifolds in this coordinate system are shown in Table 1. The action of the symmetry group corresponds to (Fig.... ..."
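Reading the extraction-damaged snippet as the standard D3 coordinates, an average phase plus a complex coordinate built from cube roots of unity (the symbol names were lost in extraction, so psi and z below are my guesses), the equivariance of these coordinates under the Z3 rotation of the three cells can be checked numerically:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)   # primitive cube root of unity

def sym_coords(theta):
    """Map three cell variables to the average phase and the complex
    D3-equivariant coordinate (names psi, z are assumed, not the paper's)."""
    t1, t2, t3 = theta
    psi = (t1 + t2 + t3) / 3
    z = t1 + w * t2 + w**2 * t3
    return psi, z

theta = np.array([0.3, 1.1, -0.4])
psi, z = sym_coords(theta)
psi_r, z_r = sym_coords(np.roll(theta, -1))   # cyclic rotation (t2, t3, t1)
print(np.isclose(psi, psi_r))                 # average phase is invariant
print(np.isclose(z_r, z * w**2))              # z only picks up a phase factor
```

This is exactly why the coordinates are convenient: the symmetry group acts trivially on the average and by fixed phase rotations (and conjugation, for the reflections) on z, which makes the invariant manifolds easy to list.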

### Table 2: Technology Mapping results

"... In PAGE 8: ... The results show that the Boolean approach reduces the number of matching algorithm calls, finds smaller-area circuits in better CPU time, and reduces the initial network graph because generic 2-input base functions are used. Table 2 presents a comparison between SIS and Land for the library 44-2.genlib, which is distributed with the SIS package.... ..."

### Table 3.2: Space counters for different representations. These counters indicate the number of bytes required to represent the structures and are computed as explained in Appendix C. The Struct. column denotes the total number of structures produced by the analysis. (Columns: Benchmark, Struct., Dense, Sparse, Base, OBDD, Func.)

### Table 1. Statistical and Data Analysis Corresponding to the Initial Recognition, by Natural Image and Representation (Structural and Signal-Based)

2006

"... In PAGE 9: ... There are several criteria to classify watermarking techniques. Table 1 shows some fundamental categories. Visible watermarks can be seen by eyes.... In PAGE 10: ... Type of Document: Image, Video, Audio, Text; Human Perception: Visible, Invisible; Working Domain: Spatial Domain, Frequency Domain; Watermark Type: Pseudo-Random Number (PRN) sequence, Visual Watermark; Information Type: Non-Blind, Semi-Blind, Blind. Table 1: Categories of watermarking techniques. To provide the necessary properties (robustness, invisibility, data capacity, and security), we proposed a new system which makes methodology decisions based on ANN classification [10]. This method provides robustness against common geometric attacks.... In PAGE 14: ... and the threshold for each band are given in Table 1. Table 1. Scaling factor.... In PAGE 22: ...6, 0] P 28.6025 Table 1. The anthropometric data used in the eye model with frontal pose.... In PAGE 25: ... Table 1.... In PAGE 25: ... Three different QP offsets, 0, -6, -12, were selected for the study. Table 1 illustrates the changes in quality by using the different ROI parameters while Figure 1 provides an example. The different parameters resulted in a total of 18 encoding combinations.... In PAGE 25: ... The participants also preferred the -6 QP offset. Table 1 shows that with no offset the face was not clear enough, and at -12 QP offset the distortions in the hands were too much relative to the improvement of the face. Fig.... In PAGE 33: ... The average bitrate mR corresponding to the initial recognition was noted for both the structural and signal-based representations, and the standard deviation sR of the initial recognition bitrates was computed for each representation. Columns two through five of Table 1 summarize these statistics. Larger standard deviations reflected a difficulty in recognizing the content.... In PAGE 34: ... Normalizing mR for signal-based representations by RVL specifies the average recognition bitrate as a percentage of the visually lossless bitrate. The normalized mR and RVL are listed in the sixth and seventh columns, respectively, of Table 1. The average normalized recognition bitrate mR/RVL for all nine images is 0.... In PAGE 34: ...bout 1.15 cpd is preserved. These observations indicate that information valuable for recognition resides in the higher frequency content. The eighth and ninth columns of Table 1 list the percentage of the average recognition rate mR when excluding the LL band and the average recognition rate when excluding the LL band, mRnoLL, respectively. Excluding the images difficult to recognize, indicated by an asterisk, the proportions span over the small interval of 0.... In PAGE 34: ...ortions span over the small interval of 0.79 to 0.85. The values in column eight of Table 1 support the assertion that more of the information used by observers for recognition is available in the higher frequency content. Note further that the proportion of information is nearly constant with an average of 0.... In PAGE 52: ...92s 26047 48.094s Table 1. The comparison of time and computation consumption. From the results, we can see that the gradient search and the exhaustive search can both reach the correct registration result. Because fewer points are required for search, the gradient algorithm needs less searching time compared with the exhaustive search.... ..."