### Table 4:1. Result of the State Duma election in 1995, percent

"... In PAGE 22: ... However, the populist Liberal Democratic Party received significantly lower support in Moscow Oblast than the Russian average, as well as than a forest region like Arkhangelsk (cf. Table 4:1). Table 4:1.... ..."

### Table 2. Preliminary schedule for REDES development

"... In PAGE 4: ...LIST OF TABLES AND FIGURES Table 1. Initial database components 25 Table 2. Preliminary schedule for REDES development 40 Figure 1.... In PAGE 43: ... The emphasis of the first phase is on the support for information and application users, and of the second phase on the support for technical users. A preliminary schedule is presented in Table 2. It depicts the principal activities leading to successful demonstration (end of phase 1) and operational... ..."

### Table 9, we simply redefine V1 to be the unconverged right singular vector approximations from the previous iteration, i.e.,

1992

"... In PAGE 26: ... This hybrid Lanczos approach, which incorporates inner iterations of single-vector Lanczos bidiagonalization within the outer iterations of a block Lanczos SVD recursion, is outlined in Tables 9 and 10. As an alternative to the outer recursion in Table 9, which is derived from the equivalent eigenvalue problem for the 2-cyclic matrix $B$, Table 11 depicts the simplified outer block Lanczos recursion for approximating the eigensystem of $A^T A$. Combining the equations in (34), we obtain $A^T A \hat{V}_k = \hat{V}_k H_k$, where $H_k = J_k^T J_k$ is the symmetric block tridiagonal matrix

$$H_k \equiv \begin{pmatrix} S_1 & R_1^T & & \\ R_1 & S_2 & R_2^T & \\ & \ddots & \ddots & R_{k-1}^T \\ & & R_{k-1} & S_k \end{pmatrix}, \qquad (40)$$

having block size $b$.... In PAGE 26: ... Analogous to the diagonalization of $B_k$ in (36), the computation of eigenpairs of the resulting block tridiagonal matrix can in this case be performed via a Jacobi or QR-based symmetric eigensolver. The conservation of computer memory for our iterative block Lanczos SVD method is ensured by enforcing an upper bound, $c$, on the order ($bk$) of any $J_k$ constructed (see Table 9). This technique was suggested by Golub, Luk, and Overton in [20].... In PAGE 27: ... corresponding to $\sigma_i$ are given by

$$u_i = \hat{U}_k \hat{Q} q_i, \quad v_i = \hat{V}_k \hat{P} p_i, \qquad (41)$$

where $p_i$, $q_i$ are the $i$-th columns of $P$, $Q$, respectively. Suppose that before restarting the outer iteration in Table 9 we have determined that $p_0$ singular triplets are acceptable to a user-supplied tolerance for the residual error defined in (6). Then, we update the values of the block size ($b$), the maximum allowable order for $J_k$ ($c$), the number of diagonal blocks for $J_k$ ($d$), and the number of triplets yet to be found ($p$) as follows:

$$b_{\mathrm{new}} = \begin{cases} b_{\mathrm{old}} - p_0, & \text{if } b \ge p_{\mathrm{old}}, \\ \min\{b_{\mathrm{old}},\ p_{\mathrm{old}} - p_0\}, & \text{otherwise,} \end{cases} \qquad (42)$$

$$c_{\mathrm{new}} = c_{\mathrm{old}} - p_0, \quad p_{\mathrm{new}} = p_{\mathrm{old}} - p_0, \quad d_{\mathrm{new}} = \lfloor c_{\mathrm{new}} / b_{\mathrm{new}} \rfloor.$$

All converged left and right singular vector approximations are respectively stored in matrices $U_0$ and... In PAGE 29: ... Table 11 (Hybrid Lanczos Outer Iteration for the Equivalent Symmetric Eigensystem of $A^T A$). We estimate the residual for some $k$ (see (6)) by $\|y_k\|_2$ of Step (1a) in Table 9 for iteration $l + 1$, where $y_k$ is the $k$-th column of the $n \times b$ matrix $Y_i$. Hence, at the start of iteration $l + 1$ we can determine the accuracy of our approximations from iteration $l$.... In PAGE 29: ... As with the previous iterative SVD methods, we access the sparse matrices $A$ and $A^T$ for this hybrid Lanczos method only through sparse matrix-vector multiplications. Some efficiency, however, is gained in the outer (block) Lanczos iterations by the multiplication of $b$ vectors (Steps (1a), (1b) in Table 9) rather than by a single vector. These dense vectors may be stored in a fast local memory (cache) of a hierarchical memory-based architecture (Alliant FX/80, Cray-2S) and thus yield more effective data reuse.... In PAGE 29: ... The total reorthogonalization strategy and deflation of converged singular vector approximations is accomplished in Steps (1b), (1e) in Table 9 and Steps (2b), (2d) in Table 10. A stable variant of Gram-Schmidt orthogonalization ([37]), which requires efficient dense matrix-vector multiplication (level-2 BLAS) routines ([14]), is used to produce the orthogonal projections of $Y_i$ (i.... In PAGE 33: ... Table 13 also indicates that a significant proportion of time (24% of total CPU time) is spent in the level-2 (matrix-vector) and level-3 (matrix-matrix) BLAS kernels. The outer block Lanczos recursion for $A^T A$ (see Table 11), as with the outer recursion in Table 9, primarily consists of these higher-level BLAS kernels (also supplied by the Alliant FX/Series Scientific Library), which are designed for execution on all 8 processors of the Alliant FX/80.
The modified Gram-Schmidt procedure we employ for re-orthogonalization is also driven by the higher-level BLAS kernels.... In PAGE 37: ...2 from the eigensystem of $A^T A$. The parameters for BLSVD (see Table 9 in Section 3:5) include the initial block size, $b$, and an upper bound on the dimension of the Krylov subspace, $c$. For LASVD, we also include a similar upper bound, $q$, for the order of the symmetric tridiagonal matrix $T_j$ in (31).... In PAGE 38: ... The consequence of doubling $p$ in terms of memory is discussed in Section 4:3. As mentioned in the preceding section, BLSVD requires an initial block size $b$, where $b \le p$, and the bound $c$ on the Krylov subspace generated within the outer block Lanczos recursion given in Table 9. The choice of $b$ can be difficult, and as mentioned in Section 3:5 we have made some gains... ..."

Cited by 4
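The outer block Lanczos recursion on $A^T A$ described in the excerpt above can be sketched in a few lines of NumPy. This is an illustrative reconstruction from the quoted equations only, not the BLSVD implementation itself: the function name, the random starting block, and the dense test matrix are assumptions of this sketch, and the total reorthogonalization shown is the simple dense form rather than the paper's BLAS-driven modified Gram-Schmidt.

```python
import numpy as np

def block_lanczos_ata(A, b=2, k=3, seed=0):
    """Sketch of the outer block Lanczos recursion for the eigensystem of
    A^T A: build the bk x bk symmetric block tridiagonal matrix H_k of (40),
    whose largest eigenvalues approximate the largest squared singular
    values of A. Full reorthogonalization against all previous blocks is
    used for stability, as the excerpt describes."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    V1, _ = np.linalg.qr(rng.standard_normal((n, b)))  # orthonormal start block
    basis, S, R = [V1], [], []
    for j in range(k):
        W = A.T @ (A @ basis[j])              # matrix products on b vectors at once
        if j > 0:
            W -= basis[j - 1] @ R[j - 1].T    # three-term block recurrence
        S.append(basis[j].T @ W)              # diagonal block S_{j+1}
        W -= basis[j] @ S[j]
        Q = np.hstack(basis)                  # total reorthogonalization
        W -= Q @ (Q.T @ W)
        if j < k - 1:
            Vnext, Rj = np.linalg.qr(W)       # off-diagonal block R_{j+1}
            basis.append(Vnext)
            R.append(Rj)
    H = np.zeros((b * k, b * k))              # assemble H_k as in (40)
    for j in range(k):
        H[j*b:(j+1)*b, j*b:(j+1)*b] = S[j]
        if j < k - 1:
            H[(j+1)*b:(j+2)*b, j*b:(j+1)*b] = R[j]
            H[j*b:(j+1)*b, (j+1)*b:(j+2)*b] = R[j].T
    return H
```

When $bk = n$ the Krylov basis spans all of $\mathbb{R}^n$, so the eigenvalues of $H_k$ reproduce the squared singular values of $A$ to rounding error; in practice $bk$ is capped by the memory bound $c$ discussed in the excerpt.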

### Table A.4: Many of the methods defined for derived Statement classes. These methods are defined in the base Statement class to call error routines and are redefined for the derived classes which use them. † indicates that there exist corresponding methods which insert data into these fields.

### Table A.6: Many of the methods defined for derived Expression classes. These methods are defined in the base Expression class to call error routines and are redefined for the derived classes which use them. † indicates that there exist corresponding methods which insert data into these fields.
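The pattern these two captions describe, where every field accessor is defined once on the base class as an error routine and redefined only by the derived classes that carry the field, can be sketched as follows. The class and method names here are illustrative only; the original hierarchy is not shown in the captions.

```python
class Statement:
    """Base class: accessors are defined here but report an error, so that
    calling one on a statement kind that lacks the field fails loudly.
    Derived classes that use the field redefine both methods."""

    def get_condition(self):
        raise TypeError(f"{type(self).__name__} has no condition field")

    # the dagger in the captions marks a paired method that inserts data
    def put_condition(self, cond):
        raise TypeError(f"{type(self).__name__} has no condition field")


class IfStatement(Statement):
    """Uses the condition field, so both methods are redefined."""

    def __init__(self, condition=None):
        self._condition = condition

    def get_condition(self):
        return self._condition

    def put_condition(self, cond):
        self._condition = cond


class BreakStatement(Statement):
    """No condition field: the base-class error methods are inherited."""
```

One advantage of defining the methods on the base class rather than only on the classes that use them is that any Statement pointer can receive any method call, with misuse reported at run time instead of rejected at the call site.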

### Table II. Mean lane deviation and time to crash as a function of speed for the DUMAS model.

2005

### Table 1: Singularity rates before and after the redefining process.

2000

"... In PAGE 10: ...7% singularity off the original images. Table 1 lists the changes in the singularity rates for some of the images employed in the experiments. Thus, this method proves efficient at removing the singularity in the HSI representation of a color image.... ..."

Cited by 19
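The singularity the snippet above refers to is a property of the RGB-to-HSI conversion itself: the standard hue formula divides by $\sqrt{(R-G)^2 + (R-B)(G-B)}$, which vanishes exactly when $R = G = B$. A short sketch of how such a rate could be measured, and of one way of removing the singularity, is below; the perturbation step is purely illustrative, since the paper's actual redefining process is not described in the snippet.

```python
import numpy as np

def hue_singular_mask(rgb):
    """True where the HSI hue is undefined: the hue formula's denominator
    sqrt((R-G)^2 + (R-B)(G-B)) is zero exactly when R = G = B."""
    r, g, b = (rgb[..., i].astype(np.int64) for i in range(3))
    return (r - g) ** 2 + (r - b) * (g - b) == 0

def singularity_rate(rgb):
    """Fraction of pixels at which the hue is singular."""
    return hue_singular_mask(rgb).mean()

def redefine_grays(rgb, eps=1):
    """Illustrative redefining step (NOT the paper's method): nudge the red
    channel of every exactly-gray pixel by eps so the hue denominator
    becomes nonzero, leaving all other pixels untouched."""
    out = rgb.copy()
    m = hue_singular_mask(out)
    r = out[..., 0]
    r[m] = np.where(r[m] >= eps, r[m] - eps, r[m] + eps)
    return out
```

The denominator $(R-G)^2 + (R-B)(G-B)$ equals $x^2 - xy + y^2$ with $x = R-B$, $y = G-B$, which is zero only at $x = y = 0$, so the mask catches precisely the achromatic pixels.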

### Table 1. Peer Categorization in DUMAS

2002

Cited by 13

### Table 1: Methods defined for all objects, i.e., classes and their instances unless elsewhere redefined.

1992

"... In PAGE 7: ... In particular, this ability has proven useful when adding specialized modelling primitives for hypermedia and argumentative networks [8] and for database integration [9] to VML. Table 1 shows all the predefined methods de-... ..."

Cited by 3

### Table 5: Assumptions for the Adaptation Techniques in Table 4. Columns: Redefined View, Adaptation Technique, Assumptions.

1995

"... In PAGE 15: ... The full list of adaptation techniques for aggregate views is given in Appendix A in Table 4. The assumptions used are listed in Table 5. Table 4 can be used in the same ways as Table 2.... ..."

Cited by 65