### Table 9--Elasticity estimates from the different models

"... In PAGE 21: ... Because these four models are non-nested rationalizations of the s ame data, some divergence exists in t he elasticity estimates across the four models, with those from t he log-infrequency of purch ase model the most plausible. The last row of Table9 gives the sample me an of the elasticity of postal services demand with respect to the probability of compute r own ership. For the log-infrequency model, this mean elasticity implies that increases in th e p robability computer owners hip bring about reductions in the demand for postal delivery services at the household-level.... ..."

### Table 1. A block version of explicitly restarted Arnoldi reduction with polynomial acceleration

"... In PAGE 1: ... In this paper, we use the convex hull proposed for the solution of the nonsymmetric linear system to accelerate the convergence of the restarted Arnoldi iteration. The algorithm of the explicitly restarted Arnoldi iteration is summarized in Table1 . The choice of subspace dimension 109 is usually a tradeoff between the length of the reduction that may be tolerated and the rate of convergence.... ..."

### Table 1: The explicit and implicit methods.

"... In PAGE 8: ... The computations were carried out on a PC with an Intel PIII-1GHz CPU, RAM of 256 MB, and Windows 2000 OS. Table1 contains computation times for the two different methods, for a number of small instances. The instances are characterized by that there is a demand between every node-pair and that link capacities are uni- formly distributed over {10, 20, 30, 40, 50}.... In PAGE 8: ... For the first 7 instances there are two paths per demand, and for the 5 last instances there are three paths per demand. The results of Table1 suggest that the implicit method is slightly faster than that of the explicit method. If we sum up the usage of variables and constraints for the MIP corresponding to iteration k in the two different approaches, this is reasonable to expect.... In PAGE 9: ...0770 Table 2: The distribution approach. computation times given in Table1 and Table 2, strongly suggests that the distribution approach should be used whenever its deviation from the true optimum is acceptable. Certainly, such an error tolerance will depend on the details of the application.... ..."

### Table 5: Reductions for

1998

"... In PAGE 3: ... In more detail, ) con- tains rewrite rules for applying explicit substitu- tions to variables, -abstractions, applications and other substitutions. The relations ) and ) are de ned in appendix C in Table5 and ) is the union of ) and ) . The crucial property of sub- ject reduction can be proven for ) by induction over typing judgements.... ..."

Cited by 1

### Table 1: Niching method classification and its applications.

"... In PAGE 3: ... In some cases, the overall environment varies over both time and space. Table1 summarizes the broad categories. The table lists the location of each category of niching method, with respect to the two dimensions of behavior.... In PAGE 18: ...Table1 0: RAGAS.... ..."

### Table 4: Results on Dimension Reduction

"... In PAGE 7: ...(26) for CF, we apply the di- mension reduction technique by expressing the function (I 1 1+ f W) 1 restricted in the rst K principal eigenspace of f W (see [10]). We applied this dimension reduction (DR) ver- sion of CF and the results are shown in Table4 . The rst 2 columns are for the 10% labeled case as in Table 2.... ..."

### Table 1 Properties of techniques for dimensionality reduction.

"... In PAGE 11: ...2. General properties In Table1 , the thirteen dimensionality reduction tech- niques are listed by four general properties: (1) the con- vexity of the optimization problem, (2) the main free... In PAGE 11: ... We discuss the four general properties below. For property 1, Table1 shows that most techniques for dimensionality reduction optimize a convex cost func- tion. This is advantageous, because it allows for find- ing the global optimum of the cost function.... In PAGE 11: ... Because of their nonconvex cost functions, autoencoders, LLC, and manifold charting may suffer from getting stuck in local optima. For property 2, Table1 shows that most nonlinear tech- niques for dimensionality reduction all have free param- eters that need to be optimized. By free parameters, we mean parameters that directly influence the cost func- tion that is optimized.... In PAGE 11: ... The main advantage of the presence of free parameters is that they provide more flexibility to the technique, whereas their main disadvantage is that they need to be tuned to optimize the performance of the di- mensionality reduction technique. For properties 3 and 4, Table1 provides insight into the computational and memory complexities of the com- putationally most expensive algorithmic components of the techniques. The computational complexity of a di- mensionality reduction technique is of importance to its applicability.... In PAGE 12: ...duction technique is determined by data properties such as the number of datapoints n, the original dimension- ality D, the target dimensionality d, and by parameters of the techniques, such as the number of nearest neigh- bors k (for techniques based on neighborhood graphs) and the number of iterations i (for iterative techniques). 
In Table1 , p denotes the ratio of nonzero elements in a sparse matrix to the total number of elements, m indi- cates the number of local models in a mixture of factor analyzers, and w is the number of weights in a neural network. Below, we discuss the computational complex- ity and the memory complexity of each of the entries in the table.... ..."
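To ground the properties discussed above, here is the textbook convex, spectral baseline, PCA: its only free parameter is the target dimensionality d, its optimization step is a single eigendecomposition (so it cannot get stuck in local optima), and its dominant cost is the D x D covariance eigendecomposition, i.e. O(D³) time and O(D²) memory in the notation of the excerpt. This is a generic sketch, not code from the surveyed paper.

```python
import numpy as np

def pca(X, d):
    """Project n x D data onto its d principal components."""
    Xc = X - X.mean(axis=0)                       # center the data
    C = (Xc.T @ Xc) / (len(X) - 1)                # D x D sample covariance matrix
    evals, evecs = np.linalg.eigh(C)              # spectral decomposition (the convex step)
    components = evecs[:, np.argsort(evals)[::-1][:d]]
    return Xc @ components                        # n x d embedding
```

The nonlinear techniques in the table trade this simplicity for flexibility: extra free parameters (neighborhood size k, iteration count i) that must be tuned, and often nonconvex objectives.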

### Table 1: Primal Allocation Heuristic - LU versus Explicit: Time in Seconds

"... In PAGE 1: ... \PDS*lue quot; means that the explicit inverse was used but that an LU-factorization was used to help reinvert it. In Table1 the column labeled \Pc-red quot; gives the percent of solution time reduction obtained when the primal allocation heuristic is used. In the rows labeled \pds*nh quot; the problems were solved not using the primal allocation heuristic.... In PAGE 3: ... Note that when solving PDS-40 with the explicit inverse, the allocation heuristic actually increased solution time. Table1 suggests that the LU-factorization has a greater impact upon solution times than does the primal allocation heuristic. LU-factorization would be preferred over the primal allocation heuristic if a choice were to be made as to which one should be implemented flrst.... ..."

### TABLE 1: Classification of some of the common jammed ordered lattices of equisized spheres in two and three dimensions, in which Z denotes the coordination number and φ is the packing fraction for the infinite lattice

2001

Cited by 3