### Table 1 Summary of Exact and Approximate Quantitative Predictions Made by the Stochastic Optimized-Submovement Model

1988

"... In PAGE 9: ...MEYER, ABRAMS, KORNBLUM, WRIGHT, AND SMITH dictions. Also included in Table1 are illustrative numerical val- ues derived from these predictions for representative target dis- tances and widths. Complete derivations of the model apos;s predic- tions appear in the Appendix.... In PAGE 9: ... Also, because VD/W increases monotonically at a decreasing rate as D/W increases, Equation 4 exhibits a shape similar to log2(2Z gt;/ W), mimicking Pitts apos; law. The degree of sim- ilarity is illustrated in Table1 , where we have fit Equation 1 (Pitts apos; law) to illustrative numerical values derived from the sto- chastic optimized-submovement model. Here it can be seen that the square-root and logarithmic trade-offs come fairly close to each other (r = .... In PAGE 9: ...urvature is greater than that of a linear function (i.e., one where x = 1) but less than that of the corresponding logarithmic function. 14 A square-root function comes closer than a logarithmic function to some of Pitts apos; (1954, Table1 , p. 385) own data.... In PAGE 28: ... Following our general discussion, it would be interesting to conduct studies on the effects of explicit movement-training techniques designed to promote the opti- mality of subjects apos; performance during spatially constrained movement tasks. The model apos;s predictions ( Table1 ) could pro- vide a useful benchmark against which to assess the efficacy of alternative instructional formats and practice protocols. By comparing these predictions with data collected under various real-world conditions, one may eventually achieve significant improvements of people apos;s performance in practical situations requiring skilled movement.... ..."

Cited by 40
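The square-root versus logarithmic comparison in the excerpt can be illustrated numerically. The sketch below is ours, not the authors' fit: it correlates √(D/W) with log₂(2D/W) over a handful of representative D/W ratios, echoing the excerpt's point that the two trade-offs come fairly close to each other.

```python
# Illustrative sketch (not the authors' fit): how closely the square-root
# trade-off of Equation 4 tracks the logarithmic index of difficulty of
# Fitts' law over a range of D/W ratios.
import math

ratios = [1, 2, 4, 8, 16, 32]                     # representative D/W values
sqrt_terms = [math.sqrt(q) for q in ratios]       # square-root model term
log_terms = [math.log2(2 * q) for q in ratios]    # Fitts' law index of difficulty

def pearson(x, y):
    """Pearson correlation coefficient, computed by hand."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

r = pearson(sqrt_terms, log_terms)
print(round(r, 2))  # the two trade-offs track each other closely (r ≈ 0.97 here)
```

With more (or differently spaced) D/W values the exact r changes, but the two curves remain hard to distinguish over typical experimental ranges, which is the point the excerpt is making.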

### Table 1: Accuracy of stochastic approximation procedure.

1998

Cited by 1

### Table 1. The relations among randomness, stochasticity, and approximations

### Table 3. Network loss rate (unconditional loss probability) in simulation step 2.

1999

"... In PAGE 8: ... Bit rate, speech quality, and complexity of some waveform and hybrid coders 39 Table 2. Network loss rate (unconditional loss probability) in simulation step 1 56 Table3 . Network loss rate (unconditional loss probability) in simulation step 2 56 Table 4.... In PAGE 56: ... In the second simulation step, we assign to p and q approximated values reflecting real network loss conditions measured in the Internet [Bolo93]. Table3 and 4 show the network loss rate (unconditional loss probability) and the application loss rate, i.e.... ..."

### Table 3: Approximated processing times in seconds for each of the networks.

"... In PAGE 4: ... However, there is a downside to having a large network. Table3 shows the approximated processing times3 for one pass on... In PAGE 4: ...7 seconds. Training both these network for 500 passes would take well over (because of back- propagation, which was not considered in Table3 ) 25 seconds for the low-detail network, compared to 1 hour and 45 minutes for the high-detail network. Quite the difference! The medium-detail network did not have as good re- sults as the high-detail one, but the latter took a lot longer to process, without spectacular improvements in performance.... ..."

### Table 1: Efficiency improvements. Stochastic algorithm Total moves

2002

"... In PAGE 2: ...ertain. The selling price is sp cents per paper. For a speci c problem, whose weekly demand is shown below, the cost of each paper is c = 20 cents and the selling price is sp = 25 cents. Solve the problem, if the news vendor knows the demand uncertainties but does not know the demand curve for the com- ing week a-priori ( Table1 ). Assume no salvage value s = 0, so that any papers bought in excess of demand are simply discarded with no return.... In PAGE 2: ... Assume no salvage value s = 0, so that any papers bought in excess of demand are simply discarded with no return. Table1 : Weekly demand and its uncertainties. Weekly demand Uncertainty i Day Demand j Demand Probability (di) (dj) (pj) 1 Monday 50 1 50 5/7 2 Tuesday 50 2 100 1/7 3 Wednesday 50 3 140 1/7 4 Thursday 50 5 Friday 50 6 Saturday 100 7 Sunday 140 Solution: In this problem, we want to nd how many papers the vendor must buy (x) to maximize the pro t.... In PAGE 2: ... Our rst instinct to solve this problem is to nd the average demand and nd the optimal sup- ply x corresponding this demand. Since the average demand from the Table1 is 70 papers, x = 70 should be the solution. Let us see if this represents the op- timal solution for the problem.... In PAGE 3: ... Therefore, the value of stochastic solution, VSS, is 1750 ( 50) = 1800 cents per week. Now consider the case where the vendor knows the exact demand ( Table1 ) a-priori. This is the perfect information problem where we want to nd the solution xi for each day i.... In PAGE 12: ... For instance, for k = 1, uk = 0 ( rst discrete value), then problem (20) would become: z = Min v+ 11 + v+ 12 + v 11 + v 12 s: t: v11 + v12 v13 + v14 + v+ 11 v 11 = 2:5 v11 + v12 + v13 v14 + v+ 12 v 12 = 2:25 v11; v12; v13; v14 gt;0 v+ 11; v+ 12; v 11; v 12 gt;0 (21) The solution to (21) is z = 0. The results of all of the problems (k = 1 11) are summarized in Table1 . Variables not shown are equal to zero.... 
In PAGE 13: ...13 Table1 : Determining feasibility of the second stage. k uk zk vk2 vk4 1 0 0 2.... In PAGE 16: ... Table1 shows total number of con gurational moves of di erent stochastic optimization methods. The rst two algorithms are (conventional) stochas- tic optimization algorithms with a xed Nsamp while the last three algorithms are stochastic annealing al- gorithms with a varying Nsamp.... In PAGE 20: ...SIAG/OPT Views-and-News Table1 : Base sample. Sample No.... ..."
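The newsvendor arithmetic quoted above can be checked with a short script. This is an illustrative sketch using only the numbers given in the excerpt (c = 20, sp = 25, the weekly demand pattern); the exhaustive search over supply levels is our own simplification, not the paper's solution method.

```python
# Newsvendor sketch with the excerpt's data: papers cost c = 20 cents,
# sell for sp = 25 cents, unsold copies are discarded (salvage s = 0).
c, sp = 20, 25
week = [50, 50, 50, 50, 50, 100, 140]  # Mon-Fri 50, Sat 100, Sun 140

def weekly_profit(x):
    """Weekly profit in cents when buying x papers every day."""
    return sum(sp * min(x, d) - c * x for d in week)

# Mean-demand heuristic: average demand is 490/7 = 70 papers per day.
mean_x = sum(week) // len(week)
# Stochastic-optimal order, found here by brute-force search over supplies.
best_x = max(range(0, 141), key=weekly_profit)

print(mean_x, weekly_profit(mean_x))   # 70 papers lose 50 cents/week
print(best_x, weekly_profit(best_x))   # 50 papers earn 1750 cents/week
# Value of the stochastic solution: 1750 - (-50) = 1800 cents/week
print(weekly_profit(best_x) - weekly_profit(mean_x))
```

The search reproduces the excerpt's numbers: ordering the average demand (70) actually loses money, while the stochastic-optimal order of 50 earns 1750 cents per week, giving a VSS of 1800 cents.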

### Table 3: Recall Improvement by Approximate

2003

"... In PAGE 6: ...3 Improving Recall by Approximate String Search We also conducted experiments to evaluate how much we can further improve the recognition per- formance by using the approximate string search- ing method described in Section 3. Table3 shows the results. The leftmost columns show the thresh- olds of the normalized costs for approximate string searching.... ..."

Cited by 15
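The thresholded approximate string matching the excerpt describes can be sketched generically. The normalized cost below (Levenshtein distance divided by the longer string's length) is an assumption for illustration; the paper's exact cost function is not given in the snippet.

```python
# Minimal sketch of threshold-based approximate string matching.
# The normalization scheme here is an assumption, not the paper's definition.
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def matches(query, candidate, threshold):
    """Accept candidate if its normalized edit cost is within threshold."""
    cost = edit_distance(query, candidate) / max(len(query), len(candidate))
    return cost <= threshold

print(matches("stochastic", "stohastic", 0.2))   # True: one deletion, cost 0.1
print(matches("stochastic", "stockmarket", 0.2)) # False: far above threshold
```

Sweeping the threshold trades recall against precision, which is exactly the comparison the excerpt's Table 3 reports across different normalized-cost cutoffs.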

"... In PAGE 6: ...3 Improving Recall by Approximate String Search We also conducted experiments to evaluate how much we can further improve the recognition per- formance by using the approximate string search- ing method described in Section 3. Table3 shows the results. The leftmost columns show the thresh- olds of the normalized costs for approximate string searching.... ..."