### Table 1 Block coordinate descent algorithm

"... In PAGE 6: ... The resulting CG scheme is ensured to converge to the unique minimizer of J as a function of k, under constraint (4). See Table 1 for the detailed algorithm. ..."
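The conjugate-gradient step the excerpt describes can be illustrated with a minimal sketch: for a quadratic J(k) = ½kᵀAk − bᵀk with A symmetric positive definite, CG converges to the unique minimizer of J, which solves Ak = b. The matrix, right-hand side, and tolerance below are illustrative assumptions, and the paper's constraint (4) is omitted.

```python
# A minimal conjugate-gradient sketch in the spirit of the excerpt. The CG
# scheme converges to the unique minimizer of a quadratic J(k), which solves
# A k = b when A is symmetric positive definite. A, b, and the stopping
# tolerance are illustrative, not taken from the paper, and the paper's
# constraint (4) is not modeled here.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def conjugate_gradient(A, b, tol=1e-12, max_iter=100):
    n = len(b)
    k = [0.0] * n
    r = [bi - ai for bi, ai in zip(b, matvec(A, k))]  # residual = -grad J(k)
    p = list(r)                                        # initial search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        if rs_old < tol:
            break
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)                    # exact line search along p
        k = [ki + alpha * pi for ki, pi in zip(k, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]  # conjugation
        rs_old = rs_new
    return k
```

On an n-dimensional SPD system, exact-arithmetic CG terminates in at most n iterations, which is why the paper can guarantee convergence of its inner solve.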

### Table 1. Coordinates and peak accelerations of stations that exhibited significant shaking.

"... In PAGE 2: ... This systematic effort on the part of NSMN-ERD, supplemented by strong-motion stations deployed by KOERI and ITU in Istanbul and the Marmara Region, produced very significant and important records that will be useful for studying the earthquake and for rebuilding efforts. The coordinates of 19 significant stations that recorded the main shock, and the peak accelerations at these stations, are summarized in Table 1. Peak accelerations at these stations are plotted on the map in Figure 1. ... In PAGE 5: ... 4 earthquake. For illustration only, the peak values from the 19 stations in Table 1 are superimposed on the attenuation curves in Figures 8a and 8b, plotted for two types of soil (shear-wave velocity Vs = 760 m/s and Vs = 360 m/s). However, this should be interpreted in light of the sparse deployment discussed above (which may have resulted in missing motions with larger peak accelerations) and also the fact that a considerable number of the stations listed in the table recorded in buildings of more than two stories and should not be included in the comparative curves. ... In PAGE 11: ... 11 Figure 1. Map showing peak accelerations summarized in Table 1 plotted at the relative locations of significant strong-motion stations within and in close proximity to the epicentral area (base map courtesy of BKS Surveys Ltd. ... In PAGE 15: ... Attenuation curve for an M = 7.4 earthquake superimposed with peak accelerations in Table 1 (plotted using the method of Boore, Joyner and Fumal, 1997). Figure 9. ..."

### Table 3: Comparison of distributed memory Jacobi methods by blocks: without acceleration (no acc), without acceleration and two subsweeps per block (n a,2 sw), and with semiclassical in each block (sc). On a Paragon.

### Table 2: Random instances: iterated descent. Columns: problem size, time (sec.), and results for the 2-Change, 3-Change, L-K, and Flower transitions.

1995

"... In PAGE 13: ... The results reported are averages for each size, and are given as the normalized tour length c(T)/√n in order to allow a comparison to [Fiechter 1994]. Table 2 shows the iterated descent results for each of the four transitions: 2-Change, 3-Change, L-K, and Flower. This table gives the immediate strength of each transition and, in addition, it allows us to compare the simple iterated descent method to our tabu search approach. ..."

Cited by 4
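The iterated descent the excerpt compares can be sketched with its simplest transition, 2-Change (2-opt): repeatedly descend from a random tour to a 2-opt local optimum and keep the best tour found. The instance format, restart count, and tour representation below are illustrative assumptions, not the paper's setup.

```python
import math
import random

# Illustrative iterated descent with the 2-Change (2-opt) transition:
# restart from random tours, descend each to a 2-opt local optimum,
# and keep the shortest tour seen. Parameters are assumptions for
# illustration only.

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_change_descent(tour, pts):
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # skip j = n-1 when i = 0: that move reverses the whole tour
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                delta = (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                         - math.dist(pts[a], pts[b]) - math.dist(pts[c], pts[d]))
                if delta < -1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])  # 2-Change
                    improved = True
    return tour

def iterated_descent(pts, restarts=10, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(restarts):
        tour = list(range(len(pts)))
        rng.shuffle(tour)
        tour = two_change_descent(tour, pts)
        if best is None or tour_length(tour, pts) < tour_length(best, pts):
            best = tour
    return best
```

The stronger transitions in the table (3-Change, Lin-Kernighan, Flower) replace `two_change_descent` with richer neighborhoods; the restart loop stays the same.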

### Table 1: Heuristics for ESTP. Type classification and performance. A "-" indicates that no or insufficient data is available to give a reliable estimate of the reduction over MST and/or the running-time complexity.

1997

"... In PAGE 16: ... Summary. In Table 1 we present a summary of heuristics for ESTP, noting their local search type, average reduction over MST (when available), and running-time complexity (when available). In the type classification, descent method in general stands for an iterative best-improvement method. ..."

Cited by 4

### Table 30.4 Results of the Descent Method

1998

Cited by 2

### Table 1: Gradient descent learning

"... In PAGE 5: ... We used the backpropagation-through-time algorithm, which employs gradient descent for training. We ran two major experiments; Table 1 shows illustrative results for experiment 1, which uses small random weights in the range of -1 to 1, while Table 2 shows results for the larger weight values used for weight initialization. We terminated training once the network could learn 88% of the training samples, and tested the network's generalization performance with a data set not included in the training set. ..."
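The training protocol the excerpt describes can be sketched with a much simpler model: initialize weights uniformly in a small range (here [-1, 1], as in experiment 1), train by gradient descent, and stop once 88% of the training samples are classified correctly. The model below (logistic regression on a toy set), the learning rate, and the data are illustrative assumptions; the paper trains recurrent networks with backpropagation through time.

```python
import math
import random

# Minimal gradient-descent sketch of the excerpt's protocol: small random
# weight initialization and an 88%-of-training-samples stopping rule.
# Model, learning rate, and data are illustrative assumptions, not the
# paper's recurrent-network setup.

def train(samples, lr=0.5, target_acc=0.88, max_epochs=1000, seed=0):
    rng = random.Random(seed)
    dim = len(samples[0][0])
    w = [rng.uniform(-1.0, 1.0) for _ in range(dim)]  # weights in [-1, 1]
    b = rng.uniform(-1.0, 1.0)
    for epoch in range(max_epochs):
        correct = 0
        for x, y in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))            # sigmoid output
            if (p >= 0.5) == (y == 1):
                correct += 1
            g = p - y                                  # dLoss/dz for log loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
        if correct / len(samples) >= target_acc:       # 88% stopping rule
            return w, b, epoch
    return w, b, max_epochs
```

Experiment 2's larger initialization would only change the `uniform` range; the descent loop and stopping rule stay the same.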

### Table 2: Gradient descent learning

"... In PAGE 5: ... We used the backpropagation-through-time algorithm, which employs gradient descent for training. We ran two major experiments; Table 1 shows illustrative results for experiment 1, which uses small random weights in the range of -1 to 1, while Table 2 shows results for the larger weight values used for weight initialization. We terminated training once the network could learn 88% of the training samples, and tested the network's generalization performance with a data set not included in the training set. ..."

### Table 3: Average number of completed descent iterations performed per centisecond for the two problem sets over 250 centiseconds of running time under each solution method. A completed descent iteration includes the generation of a starting solution (either randomly or using the Lagrangian scheme), the descent from this solution to a local optimum, and, for the Lagrangian scheme, the updating of the multipliers.

1997

"... In PAGE 18: ... Figure 8: Plot showing the average objective function value for 100 test problems in set one (40 blades) with respect to computational time, using both randomly generated starting solutions and those generated by the Lagrangian dual scheme. Table 3 compares the average number of iterations completed per second for each of the different solution methods. (Remember that each of these iterations generates a single locally optimal solution. ..."

Cited by 3
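The caption's definition of a "completed descent iteration" can be sketched as a three-step skeleton: generate a starting solution (randomly or from the Lagrangian multipliers), descend from it to a local optimum, and, under the Lagrangian scheme, update the multipliers. The hooks, the projected-subgradient update, and the step size are illustrative assumptions, not the paper's formulation.

```python
# Schematic of one "completed descent iteration" as the caption defines it.
# All hooks (start_from, descend, subgradient) and the step size are
# hypothetical placeholders for illustration only.

def completed_descent_iteration(start_from, descend, multipliers=None,
                                subgradient=None, step=0.1):
    start = start_from(multipliers)      # random or multiplier-guided start
    local_opt = descend(start)           # local search to a local optimum
    if multipliers is not None and subgradient is not None:
        g = subgradient(local_opt)
        # projected subgradient step keeps the multipliers nonnegative
        multipliers = [max(0.0, m + step * gi)
                       for m, gi in zip(multipliers, g)]
    return local_opt, multipliers
```

In the random-restart variant, `multipliers` stays `None` and only the descent runs; the table's per-centisecond counts measure how many such iterations each variant completes in the time budget.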
