Results 11-20 of 150
A review of design and modeling in computer experiments
 Handbook of Statistics
, 2003
Cited by 6 (1 self)
Abstract: In this paper, we provide a review of statistical methods that are useful in conducting computer experiments. Our focus is primarily on the task of metamodeling, which is driven by the goal of optimizing a complex system via a deterministic simulation model. However, we also mention the case of a stochastic simulation, and examples of both cases are discussed. The organization of our review separates the two primary tasks for metamodeling: (1) select an experimental design; (2) fit a statistical model. We provide an overview of the general strategy and discuss applications in electrical engineering, chemical engineering, mechanical engineering, and dynamic programming. Then, we dedicate a section to statistical modeling methods, followed by a section on experimental designs. Designs are discussed in two paradigms, model-dependent and model-independent, to emphasize their different objectives. Both classical and modern methods are discussed.
Response Surface Methodology Central-Composite Design Modifications for Human
Abstract

Cited by 5 (0 self)
Selected response surface methodology (RSM) designs that are viable alternatives in human performance research are discussed. Two major RSM designs that are variations of the basic, blocked, central-composite design have been selected for consideration: (1) central-composite designs with multiple observations at only the center point, and (2) central-composite designs with multiple observations at each experimental point. Designs of the latter type are further categorized as: (a) designs which collapse data across all observations at the same experimental point; (b) between-subjects designs in which no subject is observed more than once, and observations at each experimental point may be multiple and unequal or multiple and equal; and (c) within-subject designs in which each subject is observed only once at each experimental point. The ramifications of these designs are discussed in terms of various criteria such as rotatability, orthogonal blocking, and estimates of error.
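The central-composite structure this abstract builds on can be illustrated with a short sketch. It is a generic construction of the coded design points (factorial corners, axial points, center runs), not the specific modified designs the paper proposes; the function name and the rotatable choice of alpha are illustrative conventions, not taken from the paper.

```python
from itertools import product

def central_composite(k, n_center=1, alpha=None):
    """Coded design points of a basic central composite design:
    2^k factorial corners, 2k axial points at +/-alpha, and center runs."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25  # rotatable choice: alpha = F**(1/4), F = 2^k
    corners = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(pt)
    centers = [[0.0] * k for _ in range(n_center)]
    return corners + axial + centers

# A 2-factor rotatable CCD with 3 center runs: 4 corners + 4 axial + 3 centers.
design = central_composite(2, n_center=3)
print(len(design))  # 11
```

Replicating the center point (as in design type (1) above) is what provides a pure-error estimate without repeating the factorial runs.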
Response Surface Methodology Optimization of Fermentation Conditions for Rapid and Efficient Accumulation of Macrolactin A by Marine Bacillus amyloliquefaciens ESB2
, 2012
 Molecules
Computer-Generated Minimal (and Larger) Response-Surface Designs: (II) The Cube
, 1991
Abstract

Cited by 4 (2 self)
Computer-generated designs in the cube are described which have the minimal (or larger) number of runs for a full quadratic response-surface design. Examples of 2-factor designs are included with 6 to 20 runs, 3-factor designs with 10 to 20 runs, 4-factor designs with 15 to 20 runs, 5-factor designs with 21 to 25 runs, 6-factor designs with 28 to 31 runs, and 7-factor designs with 36 and 39 runs. The designs were constructed by minimizing the average prediction variance, and without imposing any prior constraints, such as a central composite structure, on the locations of the points.
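The criterion minimized above, average prediction variance, can be sketched directly: for a full quadratic model it is the mean of f(x)' (X'X)^(-1) f(x) over a set of prediction points. The sketch below is a plain-Python illustration of evaluating that criterion (not the paper's search algorithm), with all function names being illustrative.

```python
def quad_model_row(x):
    """Full quadratic model terms: 1, x_i, x_i^2, x_i*x_j (i < j)."""
    k = len(x)
    row = [1.0] + list(x)
    row += [xi * xi for xi in x]
    row += [x[i] * x[j] for i in range(k) for j in range(i + 1, k)]
    return row

def avg_prediction_variance(design, grid):
    """Average scaled prediction variance f(x)' (X'X)^{-1} f(x) over grid,
    computed by solving (X'X) z = f(x) with Gauss-Jordan elimination."""
    X = [quad_model_row(x) for x in design]
    p = len(X[0])
    # Information matrix X'X
    M = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(p)]
         for i in range(p)]

    def solve(A, b):
        A = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented copy
        n = len(A)
        for c in range(n):
            piv = max(range(c, n), key=lambda r: abs(A[r][c]))
            A[c], A[piv] = A[piv], A[c]
            for r in range(n):
                if r != c and A[c][c]:
                    f = A[r][c] / A[c][c]
                    A[r] = [a - f * ac for a, ac in zip(A[r], A[c])]
        return [A[i][n] / A[i][i] for i in range(n)]

    total = 0.0
    for x in grid:
        f = quad_model_row(x)
        z = solve(M, f)  # z = (X'X)^{-1} f
        total += sum(fi * zi for fi, zi in zip(f, z))
    return total / len(grid)

# 3^2 factorial in the cube [-1, 1]^2; averaging over the design points
# themselves must give p/n = 6/9, since the variances sum to trace(I_p) = p.
pts = [[a, b] for a in (-1, 0, 1) for b in (-1, 0, 1)]
print(round(avg_prediction_variance(pts, pts), 4))  # 0.6667
```

A design-search program would evaluate this average over a dense grid in the cube and move points to reduce it; the sketch stops at the criterion itself.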
Variable-Complexity Response Surface Design of an HSCT Configuration
, 1996
Abstract

Cited by 4 (2 self)
A variable-complexity response surface methodology has been applied to the multidisciplinary design of a High Speed Civil Transport (HSCT). The term variable-complexity refers to a design procedure in which refined, computationally expensive analysis techniques are combined with simple, computationally inexpensive techniques. We have used the simple analysis methods to define a subregion of the design space in which an optimal HSCT design is likely to exist. The refined analysis methods were then used to construct smooth response surface models of various aerodynamic and structural weight quantities. Aerodynamic response surface models were constructed for volumetric wave drag and supersonic drag due to lift based on an example problem involving four HSCT wing design variables. Optimization was then performed for the complete HSCT configuration using the aerodynamic response surface models. Preliminary research on the development of a structural response surface model for the wing ben...
The Impact of Classifier Configuration and Classifier Combination on Bug Localization
Cited by 4 (1 self)
Abstract: Bug localization is the task of determining which source code entities are relevant to a bug report. Manual bug localization is labor intensive, since developers must consider thousands of source code entities. Current research builds bug localization classifiers, based on information retrieval models, to locate entities that are textually similar to the bug report. Current research, however, does not consider the effect of classifier configuration, i.e., all the parameter values that specify the behavior of a classifier. As such, the effect of each parameter, and which parameter values lead to the best performance, is unknown. In this paper, we empirically investigate the effectiveness of a large space of classifier configurations, 3,172 in total. Further, we introduce a framework for combining the results of multiple classifier configurations, since classifier combination has shown promise in other domains. Through a detailed case study on over 8,000 bug reports from three large-scale projects, we make two main contributions. First, we show that the parameters of a classifier have a significant impact on its performance. Second, we show that combining multiple classifiers, whether those classifiers are hand-picked or randomly chosen relative to intelligently-defined subspaces of classifiers, improves the performance of even the best individual classifiers.
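Combining the ranked lists of several classifier configurations, as described above, can be sketched with a generic rank-fusion rule. This uses reciprocal-rank fusion, a standard technique from information retrieval, not necessarily the paper's own combination framework; the file names and rankings are hypothetical.

```python
from collections import defaultdict

def combine_rankings(rankings, k=60):
    """Reciprocal-rank fusion of several classifiers' ranked entity lists.
    Each ranking lists source-code entity IDs, best first; an entity's fused
    score is the sum of 1/(k + rank) over the rankings that mention it."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, entity in enumerate(ranking, start=1):
            scores[entity] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Three hypothetical classifier configurations ranking files for one bug report:
r1 = ["Parser.java", "Lexer.java", "Util.java"]
r2 = ["Lexer.java", "Parser.java", "Main.java"]
r3 = ["Parser.java", "Main.java", "Lexer.java"]
print(combine_rankings([r1, r2, r3])[0])  # Parser.java
```

The appeal of such fusion rules is that they need only ranks, not calibrated scores, so configurations built from very different retrieval models can be combined directly.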
An Experimental Determination of Losses in a Three-Port Wave Rotor
 Journal of Engineering for Gas Turbines and Power
, 1998
New Box-Behnken designs
, 2000
Abstract

Cited by 2 (0 self)
this article to add substantially to their list. The incomplete block design upon which a BB-type (Box-Behnken) design is based must satisfy the following two properties:
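The Box-Behnken construction referred to above pairs a two-level factorial with an incomplete block design. The sketch below uses the simplest such blocking, all pairs of factors, which reproduces the classic small designs; it is an illustration of the construction, not of the new designs this article derives, and the function name is illustrative.

```python
from itertools import combinations, product

def box_behnken(k, n_center=3):
    """Basic Box-Behnken construction: for each pair of factors, run a 2^2
    factorial at +/-1 with all remaining factors held at 0, plus center runs.
    (The general construction substitutes other incomplete block designs for
    the all-pairs blocks used here.)"""
    points = []
    for i, j in combinations(range(k), 2):
        for a, b in product([-1, 1], repeat=2):
            pt = [0] * k
            pt[i], pt[j] = a, b
            points.append(pt)
    points += [[0] * k for _ in range(n_center)]
    return points

# Classic 3-factor Box-Behnken: 12 edge-midpoint runs + 3 center runs = 15.
print(len(box_behnken(3)))  # 15
```

Because every run holds at least one factor at its mid level, the design never visits the corners of the cube, which is useful when extreme factor combinations are expensive or infeasible.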