### Table 9: Star-shape - overall RMSE for the GPOF method (used as initialization) and the Direct ML approach.

2004

"... In PAGE 24: ...8 1 Figure 4: The Star shape estimation improvement - the penalty function as a function of each vertex separately. Experiment 4: Table9 summarizes the results of the average error obtained over 20 runs using the star-shape, applying the GPOF method for initialization, and applying 20 iterations of the coordinate descent algorithm. Each such iteration updates every vertex once, and so we have 200 overall updates.... ..."

Cited by 7
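The update scheme described in the excerpt above (each coordinate-descent iteration updates every vertex once, so 20 iterations give 200 single-vertex updates, implying 10 vertices) can be sketched generically. The separable toy penalty, step size, and finite-difference gradient below are illustrative assumptions, not the paper's penalty function:

```python
import numpy as np

def coordinate_descent(f, x0, n_sweeps=20, lr=0.1, h=1e-6):
    """Cyclic coordinate descent: each sweep updates every coordinate
    once, so n_sweeps sweeps over a d-dimensional x perform
    n_sweeps * d single-coordinate updates (20 x 10 = 200 as in the
    excerpt)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_sweeps):
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            # central-difference estimate of the partial derivative
            g_i = (f(x + e) - f(x - e)) / (2 * h)
            x[i] -= lr * g_i
    return x

# Toy separable penalty with its minimum at the origin (illustrative only).
penalty = lambda x: float(np.sum(x ** 2))
x_hat = coordinate_descent(penalty, np.ones(10))
```

Because the toy penalty is separable, each single-coordinate update here is an exact descent step on its own term; the paper's penalty couples the vertices, which is what makes the sweep ordering matter.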

### Table 5: Number of iterations and total time to achieve 100% and 50% of the optimal solution as the element heterogeneity increases.

"... In PAGE 10: ...21 in all cases. In Table5 , we give the iterations and time to reach the 100% and 50% accurate solutions for the cases in which we perturb all of the vertices in the mesh ac- cording to formulas and parameters given above. As with the other test suites, if a highly accurate solu- tion to the optimization problem is sought, the inexact Newton method outperforms the coordinate descent method in every case.... ..."

### Table 4: Number of iterations, total time, and time to achieve 50% optimal solution as the element heterogeneity increases.

"... In PAGE 8: ...5 in the uniform element size test suite. In Table4 , we give the number of iterations and the times to reach the optimal and 50% improved solutions as the element heterogeneity, measured by the ratio of maximum element volume to minimum element vol- ume, increases. As with the uniform element distri- bution test suite, the inexact Newton method is sig- niflcantly faster than the coordinate descent method when the optimal solution is desired.... ..."

### Table 2. Descent Concepts

2001

"... In PAGE 4: ... Our objective was to identify the concepts that pilots needed to know to understand managed descent mode functioning and to develop a computer based training (CBT) module that presented those concepts clearly so pilots could construct a coherent understanding of the relations and dependencies between concepts. The entire set of concepts is too large to present here, however in Table2 we present several descent concepts a pilot must know to understand how the... ..."

Cited by 1

### Table 4: Behavior of the classical D-K iteration approach (A: analysis, S: synthesis)

2001

"... In PAGE 15: ... Compute the new controller and return to Step 2, until convergence. Table4 shows the best behavior that we have obtained so far. In many instances the algorithm is often cycling or the cost increases before reaching a smaller value.... In PAGE 15: ... In many instances the algorithm is often cycling or the cost increases before reaching a smaller value. We observe that this coordinate descent technique fails to achieve an adequate value of , Table4 , as compared to the Lagrangian method in Table 3. This is due to the fact that the method is not garanteed to provide a local minimizer.... ..."

Cited by 9
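The D-K iteration criticized in the excerpt above is a two-block coordinate descent: an analysis step optimizes the scalings D with the controller K fixed, then a synthesis step optimizes K with D fixed. The skeleton below shows only that alternation on an abstract cost; the smooth toy objective with closed-form block minimizers is an illustrative assumption (none of the H-infinity/mu machinery is reproduced), and while it converges, the excerpt's point is that the real synthesis objective carries no such guarantee and the cost can cycle:

```python
def dk_style_alternation(cost, best_D, best_K, D0, K0, n_iters=20):
    """Two-block coordinate descent: alternate an analysis step
    (minimize over D for fixed K) and a synthesis step (minimize over
    K for fixed D), recording the cost after each full A/S cycle."""
    D, K = D0, K0
    history = []
    for _ in range(n_iters):
        D = best_D(K)          # A: analysis step
        K = best_K(D)          # S: synthesis step
        history.append(cost(D, K))
    return D, K, history

# Toy smooth cost f(D, K) = D^2 + D*K + K^2; minimizing over one block
# with the other fixed gives D = -K/2 and K = -D/2 in closed form.
cost = lambda D, K: D * D + D * K + K * K
D, K, history = dk_style_alternation(
    cost, lambda K: -K / 2.0, lambda D: -D / 2.0, 1.0, 1.0)
```

Each step here strictly decreases the cost because both block subproblems are solved exactly on a smooth convex function; on a nonsmooth coupled objective the same alternation can stall at a point that is optimal in each block separately but is not a local minimizer, which matches the behavior reported in the excerpt.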

### Table 1: Gradient descent learning

"... In PAGE 5: ... We used the backpropagation through-time algorithm which employs gradient descent for training. We ran two major experiments; Table1 shows illustrative results for experiment 1 which uses small random weights in the range of -1 to 1 while Table 2 shows larger weight values used for weight initialization. We would terminate training if the network could learn 88% of the training samples and tested the networks generalization performance with data set not included in the training set.... ..."

### Table 2: Gradient descent learning

"... In PAGE 5: ... We used the backpropagation through-time algorithm which employs gradient descent for training. We ran two major experiments; Table 1 shows illustrative results for experiment 1 which uses small random weights in the range of -1 to 1 while Table2 shows larger weight values used for weight initialization. We would terminate training if the network could learn 88% of the training samples and tested the networks generalization performance with data set not included in the training set.... ..."