Results 1-10 of 31
Simple Procedures for Selecting the Best Simulated System when the Number of Alternatives Is Large
 Operations Research
, 1999
Cited by 44 (9 self)
In this paper we address the problem of finding the simulated system with the best (maximum or minimum) expected performance when the number of alternatives is finite, but large enough that ranking-and-selection (R&S) procedures may require too much computation to be practical. Our approach is to use the data provided by the first stage of sampling in an R&S procedure to screen out alternatives that are not competitive, and thereby avoid the (typically much larger) second-stage sample for these systems. Our procedures represent a compromise between standard R&S procedures, which are easy to implement but can be computationally inefficient, and fully sequential procedures, which can be statistically efficient but are more difficult to implement and depend on more restrictive assumptions. We present a general theory for constructing combined screening and indifference-zone selection procedures, several specific procedures, and a portion of an extensive empirical evaluation.
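The screen-then-select idea can be sketched as follows. This is a minimal illustration, not the paper's procedures: the screening slack and the stage sizes here are illustrative placeholders, whereas the actual procedures derive them to guarantee a probability of correct selection.

```python
import numpy as np

rng = np.random.default_rng(0)

def screen_and_select(simulate, k, n0=20, n1=200, delta=0.1):
    """Illustrative two-stage screen-then-select (hypothetical constants).

    Stage 1: take n0 pilot replications per system and screen out any system
    whose sample mean trails the best by more than delta plus sampling slack.
    Stage 2: give only the survivors the (much larger) n1-replication sample
    and report the survivor with the best combined mean.
    """
    # Stage 1: pilot samples, shape (k, n0).
    stage1 = np.array([[simulate(i) for _ in range(n0)] for i in range(k)])
    means = stage1.mean(axis=1)
    stds = stage1.std(axis=1, ddof=1)
    slack = stds / np.sqrt(n0)          # crude per-system sampling slack
    best = means.max()
    survivors = [i for i in range(k) if means[i] >= best - delta - slack[i]]

    # Stage 2: second-stage sampling is avoided for screened-out systems.
    totals = {}
    for i in survivors:
        stage2 = np.array([simulate(i) for _ in range(n1)])
        totals[i] = np.concatenate([stage1[i], stage2]).mean()
    return max(totals, key=totals.get), survivors

# Toy example: system i has true mean i/10 with unit-variance noise.
best, kept = screen_and_select(lambda i: i / 10 + rng.normal(), k=10)
```

The payoff is that systems eliminated in stage 1 never receive the expensive second-stage sample, which is where the savings over standard R&S come from.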
Adaptive problem-solving for large-scale scheduling problems: A case study
, 1996
Cited by 25 (3 self)
Although most scheduling problems are NP-hard, domain-specific techniques perform well in practice but are quite expensive to construct. In adaptive problem-solving, domain-specific knowledge is acquired automatically for a general problem solver with a flexible control architecture. In this approach, a learning system explores a space of possible heuristic methods for one well-suited to the eccentricities of the given domain and problem distribution. In this article, we discuss an application of the approach to scheduling satellite communications. Using problem distributions based on actual mission requirements, our approach identifies strategies that not only decrease the amount of CPU time required to produce schedules, but also increase the percentage of problems that are solvable within computational resource limitations.
Dynamic programming approximations for a stochastic inventory routing problem
 Transportation Science
, 2004
Cited by 17 (3 self)
This work is motivated by the need to solve the inventory routing problem when implementing a business practice called vendor managed inventory replenishment (VMI). With VMI, vendors monitor their customers' inventories, and decide when and how much inventory should be replenished at each customer. The inventory routing problem attempts to coordinate inventory replenishment and transportation in such a way that the cost is minimized over the long run. We formulate a Markov decision process model of the stochastic inventory routing problem, and propose approximation methods to find good solutions with reasonable computational effort. We indicate how the proposed approach can be used for other Markov decision processes involving the control of multiple resources. (Supported by the National Science Foundation under grant DMI-9875400.)
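For intuition about the Markov decision process being approximated, exact value iteration on a tiny finite MDP looks like the generic sketch below. This is not the paper's approximation scheme; the point of that paper is precisely that realistic inventory-routing state spaces are far too large for this exact computation.

```python
import numpy as np

def value_iteration(P, r, gamma=0.95, tol=1e-8):
    """Exact value iteration for a small finite MDP (generic sketch only).

    P[a]: |S| x |S| transition matrix under action a.
    r[a]: length-|S| expected reward vector under action a.
    Returns the optimal value vector and a greedy policy.
    """
    V = np.zeros(P[0].shape[0])
    while True:
        # One Bellman backup per action, then take the best action per state.
        Q = np.array([r[a] + gamma * (P[a] @ V) for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Toy 2-state example: action 1 always earns reward 1, action 0 earns 0,
# and neither action changes the state.
I = np.eye(2)
V, policy = value_iteration([I, I], [np.zeros(2), np.ones(2)])
```

The optimal policy always takes action 1, and each state's value converges to 1 / (1 - gamma) = 20; approximation methods replace the exact value vector with a tractable surrogate when the state space is huge.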
Comparisons with a Standard in Simulation Experiments
 Management Science
, 1998
Cited by 14 (9 self)
We consider the problem of comparing a finite number of stochastic systems with respect to a single system (designated as the "standard") via simulation experiments. The comparison is based on expected performance, and the goal is to determine if any system has larger expected performance than the standard, and if so to identify the best of the alternatives. In this paper we provide two-stage experiment design and analysis procedures to solve the problem for a variety of scenarios, including when we encounter unequal variances across systems, and when we use the variance reduction technique of common random numbers and it is appropriate to do so. The qualification matters because in some cases common random numbers can be counterproductive when performing comparisons with a standard. We also provide methods for estimating the critical constants required by our procedures, present a portion of an extensive empirical study, and demonstrate one of the procedures via a numerical example.
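A heavily simplified skeleton of such a two-stage design is sketched below. The critical constant `h` is a placeholder, not one of the paper's estimated constants, and the sketch ignores common random numbers entirely.

```python
import math
import statistics

def two_stage_vs_standard(stage1, standard_mean, delta=0.5, h=2.0):
    """Skeleton of a two-stage comparison with a standard (illustrative only;
    h is a made-up placeholder for a tabulated/estimated critical constant).

    stage1: dict mapping system name -> list of first-stage observations.
    Returns, per system, the Rinott-style total sample size implied by its
    first-stage variance, and whether its first-stage mean already exceeds
    the standard by the indifference amount delta.
    """
    out = {}
    for name, xs in stage1.items():
        s = statistics.stdev(xs)                    # first-stage std. dev.
        # Larger variance -> larger total sample; unequal variances are
        # handled by sizing each system individually.
        n_total = max(len(xs), math.ceil((h * s / delta) ** 2))
        out[name] = (n_total, statistics.mean(xs) > standard_mean + delta)
    return out

report = two_stage_vs_standard(
    {"A": [10.0, 10.2, 9.8], "B": [8.0, 8.1, 7.9]}, standard_mean=8.0)
```

The first stage only sizes the experiment; the actual comparison with the standard would use all observations and the procedure's critical constants.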
Efficient simulation budget allocation for selecting an optimal subset
 INFORMS Journal on Computing
, 2008
Cited by 14 (9 self)
We consider a variation of the subset selection problem in ranking and selection where, motivated by recently developed global optimization approaches applied to simulation optimization, our objective is to identify the top-m out of k designs based on simulated output. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation procedure that is easy to implement. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature, and the relative efficiency increases for larger problems.
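The shape of such an allocation can be sketched as follows. This is a simplified rendering of the asymptotic rule, with the reference level placed at a plain midpoint and all constants and edge cases omitted; it is not the paper's exact procedure.

```python
import numpy as np

def ocba_m_ratios(means, stds, m):
    """Sketch of an OCBA-style budget split for selecting the top-m designs
    (simplified; the reference level c here is a plain midpoint).

    A reference level c sits midway between the m-th and (m+1)-th best
    sample means; each design's share of the budget grows with its noise
    and shrinks with its distance from c, so effort concentrates on designs
    near the top-m boundary.
    """
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    order = np.argsort(means)[::-1]                 # best first (maximize)
    c = 0.5 * (means[order[m - 1]] + means[order[m]])
    share = (stds / np.abs(means - c)) ** 2
    return share / share.sum()

shares = ocba_m_ratios([3.0, 2.0, 1.0, 0.0], [1.0, 1.0, 1.0, 1.0], m=2)
```

In the toy call, designs 2 and 3 (means 2.0 and 1.0) straddle the boundary and receive most of the budget, while the clearly-best and clearly-worst designs receive little.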
Two-Stage Multiple-Comparison Procedures for Steady-State Simulations
 Annals of Statistics
, 1999
Cited by 13 (5 self)
... this paper, the results naturally apply to (asymptotically) stationary time series.
Selecting The Best System: Theory And Methods
, 2003
Cited by 12 (1 self)
This paper provides an advanced tutorial on the construction of ranking-and-selection procedures for selecting the best simulated system. We emphasize procedures that provide a guaranteed probability of correct selection, and the key theoretical results that are used to derive them.
Simulation allocation for determining the best design in the presence of correlated sampling
 INFORMS Journal on Computing
, 2006
doi:10.1287/ijoc.1050.0141
Sequential Inductive Learning
 In Proceedings of the Thirteenth National Conference on Artificial Intelligence
, 1995
Cited by 7 (0 self)
In this paper I advocate a new model for inductive learning. Called sequential induction, this model bridges classical fixed-sample learning techniques (which are efficient but ad hoc) and worst-case approaches (which provide strong statistical guarantees but are too inefficient for practical use). According to the sequential inductive model, learning is a sequence of decisions which are informed by training data. By analyzing induction at the level of these decisions, and by utilizing the minimum data necessary to make each decision, sequential inductive techniques can provide the strong statistical guarantees of worst-case methods, but with substantially less data than those methods require. The sequential inductive model is also useful as a method for determining a sufficient sample size for inductive learning and, as such, is relevant to mega-induction, where the preponderance of data introduces problems of scale. The peepholing and decision-theoretic subsampling approaches of Catlet...
The Knowledge-Gradient Algorithm for Sequencing Experiments in Drug Discovery
 INFORMS J. on Computing
, 2010
Cited by 7 (3 self)
We present a new technique for adaptively choosing the sequence of molecular compounds to test in drug discovery. Beginning with a base compound, we consider the problem of searching for a chemical derivative of the molecule that best treats a given disease. The problem of choosing molecules to test to maximize the expected quality of the best compound discovered may be formulated mathematically as a ranking-and-selection problem in which each molecule is an alternative. We apply a recently developed algorithm, known as the knowledge-gradient algorithm, that uses correlations in our Bayesian prior distribution between the performance of different alternatives (molecules) to dramatically reduce the number of molecular tests required; however, it has heavy computational requirements that limit the number of possible alternatives to a few thousand. We develop computational improvements that allow the knowledge-gradient method to consider much larger sets of alternatives, and we demonstrate the method on a problem with 87,120 alternatives.
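For independent normal beliefs, the knowledge-gradient score of each alternative has a well-known closed form, sketched below. Note this independence assumption is a simplification: the paper's contribution is precisely exploiting correlations between alternatives, which this sketch omits.

```python
import math

def kg_values(mu, sigma, noise_sd):
    """Knowledge-gradient scores under independent normal beliefs (sketch;
    the correlated-beliefs version used in the paper is not shown).

    mu, sigma: prior mean and standard deviation of each alternative's value.
    noise_sd: standard deviation of one noisy measurement.
    Returns the expected one-step improvement from measuring each alternative.
    """
    def pdf(z):  # standard normal density
        return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

    def cdf(z):  # standard normal CDF via erf
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    scores = []
    for i in range(len(mu)):
        # Std. dev. of the change in alternative i's posterior mean after
        # one measurement of i.
        s = sigma[i] ** 2 / math.sqrt(sigma[i] ** 2 + noise_sd ** 2)
        best_other = max(mu[j] for j in range(len(mu)) if j != i)
        z = -abs(mu[i] - best_other) / s
        scores.append(s * (z * cdf(z) + pdf(z)))   # expected improvement
    return scores

# Two alternatives with equal prior means but different uncertainty.
kg = kg_values([0.0, 0.0], [2.0, 1.0], noise_sd=1.0)
```

The more uncertain alternative scores higher here, so it would be measured next; the sequencing policy simply measures the argmax of these scores at every step.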