### Table 2: Non-convex quartically constrained optimization problem for hierarchy and policy discovery in bounded stochastic recursive controllers.

in Abstract

"... In PAGE 5: ... 3.3 Algorithms Since the problem in Table 2 has non-convex (quartic) constraints in Eq. 5 and 6, it is difficult to solve.... In PAGE 5: ... 5 and 6, it is difficult to solve. We consider three approaches inspired by the techniques for non-hierarchical controllers: Non-convex optimization: Use a general non-linear solver, such as SNOPT, to directly tackle the optimization problem in Table 2. This is the most convenient approach; however, a globally optimal solution may not be found due to the non-convex nature of the problem.... In PAGE 7: ... 4 Experiments We report on some preliminary experiments with three toy problems (paint, shuttle and maze) from the POMDP repository3. We used the SNOPT package to directly solve the non-convex optimization problem in Table 2 and bounded hierarchical policy iteration (BHPI) to solve it iteratively. Table 3 reports the running time and the value of the hierarchical policies found.... ..."
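The approach described in the snippet, handing a non-convex quartically constrained program directly to a general nonlinear solver, can be sketched on a toy problem. This is an illustrative assumption, not the formulation from the paper's Table 2, and SciPy's SLSQP stands in for the commercial SNOPT package:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in, NOT the formulation from Table 2 of the cited paper:
# maximize x0 + x1 subject to the quartic constraint x0^4 + x1^4 <= 1.
# A general nonlinear solver (here SLSQP, standing in for SNOPT) can be
# applied directly, but on non-convex problems it may return only a
# locally optimal solution, as the snippet above cautions.
res = minimize(
    lambda x: -(x[0] + x[1]),                     # negate to maximize
    x0=np.array([0.5, 0.5]),
    method="SLSQP",
    constraints=[{"type": "ineq",                 # g(x) >= 0 convention
                  "fun": lambda x: 1.0 - x[0]**4 - x[1]**4}],
)
x_opt = res.x  # analytic optimum: x0 = x1 = (1/2)**0.25 ≈ 0.8409
```

From a different starting point, a non-convex instance could converge to a different (local) solution, which is why the snippet notes that global optimality is not guaranteed.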


### Table 10.1 Variance reduction w.r.t. MC for the stochastic activity network; t = 13 for QMC and t = 8 for QMCc

2003

Cited by 8

### Table 9.1 Variance reduction w.r.t. MC for the stochastic activity network; t = 13 for QMC and t = 8 for QMCc

2003

Cited by 8
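The variance-reduction-versus-MC comparison in the two entries above can be illustrated with randomized quasi-Monte Carlo. This sketch uses a smooth toy integrand and scrambled Sobol' points; the integrand, dimension, and sample sizes are assumptions, not the stochastic activity network or the t-parameters from the cited tables:

```python
import numpy as np
from scipy.stats import qmc

# Illustrative only: compare plain Monte Carlo with randomized quasi-Monte
# Carlo (scrambled Sobol') on a smooth toy integrand over [0,1]^d.
def f(u):
    return np.prod(u, axis=1)       # E[f] over the unit cube is (1/2)**d

d, n, reps = 4, 1024, 30            # n a power of 2, as Sobol' prefers
rng = np.random.default_rng(0)

mc_est = [f(rng.random((n, d))).mean() for _ in range(reps)]
qmc_est = [f(qmc.Sobol(d, scramble=True, seed=s).random(n)).mean()
           for s in range(reps)]

# Empirical variance-reduction factor of randomized QMC w.r.t. plain MC,
# analogous in spirit to the ratios tabulated above.
vr = np.var(mc_est) / np.var(qmc_est)
```

Scrambling makes each Sobol' replication an unbiased estimator, so the variance ratio across independent randomizations is a meaningful analogue of the tabulated reduction factors.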

### Table 11: Impact of non-convexity

2007

"... In PAGE 26: ...Table 11: Impact of non-convexity These cases are analyzed in Table 11, where, in percentages, "robust nominal" is the nominal return attained by the optimal solution to the robust optimization problem and "robust worst case" is the worst-case return it attains under the uncertainty model; "robust positions" is the number of positions taken by the robust portfolio. From an aggregate perspective all six cases are equivalent: the adversary can, in each case, decrease returns by a total "mass" of 100.... In PAGE 26: ... From an aggregate perspective all six cases are equivalent: the adversary can, in each case, decrease returns by a total "mass" of 100. Yet, as we can see from Table 11, the six cases are structurally quite different. It appears, therefore, that a smooth convex model used to replace our histogram structure would likely produce very different results in at least some of the six cases.... ..."
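The "total adversarial mass" idea in the snippet admits a small worked sketch: an adversary who may decrease per-asset returns by amounts summing to 100, subject to a per-asset cap, hurts the portfolio most by concentrating the decrease on its largest positions. The weights, cap, and nominal return below are illustrative assumptions, not data from the cited Table 11:

```python
import numpy as np

# Hedged toy version of the adversarial-mass model (assumed, not from the
# paper): greedily allocate the mass to the largest portfolio weights,
# which maximizes w . hit under sum(hit) = mass, 0 <= hit_i <= cap
# (a fractional-knapsack argument).
def worst_case_return(weights, nominal, mass=100.0, cap=50.0):
    hit = np.zeros_like(weights, dtype=float)
    remaining = mass
    for i in np.argsort(weights)[::-1]:   # attack largest positions first
        hit[i] = min(cap, remaining)
        remaining -= hit[i]
        if remaining <= 0:
            break
    return nominal - float(weights @ hit)

w = np.array([0.5, 0.3, 0.2])
wc = worst_case_return(w, nominal=800.0)  # 800 - (0.5*50 + 0.3*50) = 760.0
```

Two portfolios facing the same total mass can thus suffer very different worst-case losses depending on how concentrated their positions are, which is the structural difference the snippet attributes to the six cases.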

### Table 8: Stochastic volatility - volatility of variance.

2002

Cited by 1

### Table 9: Stochastic volatility - mean reversion in variance.

2002

Cited by 1

### Table 1: Variance Reduction

1997

"... In PAGE 4: ... Results for importance sampling, concomitants, CC.IS, and other variance reduction methods are shown in Table 1, for the Studentized mean example in Figure 1. Numbers in the table are the estimated efficiency for each method, relative to simple Monte Carlo boot-... In PAGE 5: ... Either combination does substantially better than using the component methods in isolation, and both are substantially better than methods such as balanced bootstrap sampling or antithetic variates. Results are not as good for all statistics and datasets as shown in Table 1. The extremely small conditional variance of T given L apparent in Figure 1 does not occur in all problems and is particularly favorable to control variates and (to a lesser extent) to concomitants.... In PAGE 6: ...Smoothing Concomitants Finally, we consider the effect of smoothing concomitants distribution estimates. The empirical results in Table 1 indicate that the method of smoothing we used is effective for estimating quantiles. We conjecture that smoothing distributions with concomitants is much more effective than without; we return to this point below, but first motivate and describe our smoothing method.... In PAGE 6: ... The idea in vertical smoothing is to continue to fix L_b at Ly_b as in (12), but to replace the indicator function with an estimate of the probability based on the distribution of random R_b, the residual for the b'th order statistic L_b. Results in Table 1 are obtained by a simple procedure in which the distribution of R is estimated by the local nearest-neighbor empirical distribution, P̂_b = 7^{-1} Σ_{j=-3}^{3} I(Φ̂^{-1}(Ly_b) + R_{b+j} ≤ a), with adjustments for extreme values of b. We use a local rather than global estimate because the distribution of R may depend on L, e....
In horizontal smoothing, we keep the residual R_b fixed, and replace the observed order statistic L_b not by a single value Ly_b, but rather by an estimate of the distribution for the b'th random order statistic, P̂_b = P̂(Φ̂^{-1}(L_(b)) + R_b ≤ a), where now L_(b) is considered random and R_b fixed. Results (Table 1) are promising, with smoothing improving quantile estimates. This is still work in progress.... ..."
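Among the variance-reduction methods the snippet compares, control variates is the simplest to demonstrate. This is a minimal toy sketch, not the paper's concomitants or importance-sampling scheme: estimate E[exp(U)] for U ~ Uniform(0,1), using U itself as a control variate with known mean 1/2:

```python
import numpy as np

# Control-variates toy (an assumption for illustration, not the cited
# method). The estimator subtracts beta * (U - E[U]) from each sample;
# since E[U] = 1/2 is known exactly, this leaves the mean unbiased while
# cancelling the part of exp(U) that is linear in U.
rng = np.random.default_rng(1)
u = rng.random(100_000)
y = np.exp(u)                           # plain Monte Carlo samples
c = u - 0.5                             # centered control variate
beta = np.cov(y, c)[0, 1] / np.var(c)   # estimated optimal coefficient
cv = y - beta * c                       # control-variate samples
cv_est = cv.mean()                      # true value: e - 1 ≈ 1.71828
```

Because exp(U) is highly correlated with U, the residual variance is a small fraction of the plain Monte Carlo variance, mirroring the kind of efficiency gains tabulated in the entry's Table 1.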

Cited by 1

### Table 2 Axioms and rules for the stochastic reduction relation.

2006