Results 1–10 of 128
A rigorous framework for optimization of expensive functions by surrogates
Structural Optimization, 1999
Cited by 164 (16 self)
Abstract. The goal of the research reported here is to develop rigorous optimization algorithms to apply to some engineering design problems for which direct application of traditional optimization approaches is not practical. This paper presents and analyzes a framework for generating a sequence of approximations to the objective function and managing the use of these approximations as surrogates for optimization. The result is to obtain convergence to a minimizer of an expensive objective function subject to simple constraints. The approach is widely applicable because it does not require, or even explicitly approximate, derivatives of the objective. Numerical results are presented for a 31-variable helicopter rotor blade design example and for a standard optimization test example. Key words. approximation concepts, surrogate optimization, response surfaces, pattern search methods, derivative-free optimization, design and analysis of computer experiments (DACE), computational engineering. Subject classification. Applied & Numerical Mathematics
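The pattern search methods this framework builds on are simple to sketch. The following is a generic textbook compass-search routine in Python, not the paper's algorithm: it polls the coordinate directions, moves to any improving point, and halves the step when no poll point improves.

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Minimal compass/pattern search: poll the 2d axis directions,
    accept any improving point, otherwise halve the step size.
    Derivative-free, in the spirit of the surveyed methods
    (a generic sketch, not the paper's surrogate-managed algorithm)."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    d = len(x)
    for _ in range(max_iter):
        improved = False
        for i in range(d):
            for s in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += s * step          # poll one axis direction
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                       # refine the mesh
            if step < tol:
                break
    return x, fx

# Example: minimize a shifted quadratic from the origin.
x_min, f_min = pattern_search(lambda v: np.sum((v - 1.5) ** 2), np.zeros(2))
```

Because every poll uses only function values, the same loop works when `f` is an expensive black-box simulation; the surrogate framework of the paper adds a managed approximation on top of such a search.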
Computer Experiments
1996
Cited by 84 (5 self)
Introduction. Deterministic computer simulations of physical phenomena are becoming widely used in science and engineering. Computers are used to describe the flow of air over an airplane wing, combustion of gases in a flame, behavior of a metal structure under stress, safety of a nuclear reactor, and so on. Some of the most widely used computer models, and the ones that lead us to work in this area, arise in the design of the semiconductors used in the computers themselves. A process simulator starts with a data structure representing an unprocessed piece of silicon and simulates the steps such as oxidation, etching and ion injection that produce a semiconductor device such as a transistor. A device simulator takes a description of such a device and simulates the flow of current through it under varying conditions to determine properties of the device such as its switching speed and the critical voltage at which it switches. A circuit simulator takes a list of devices and the ...
Latin Supercube Sampling for Very High Dimensional Simulations
1997
Cited by 73 (7 self)
This paper introduces Latin supercube sampling (LSS) for very high dimensional simulations, such as arise in particle transport, finance and queuing. LSS is developed as a combination of two widely used methods: Latin hypercube sampling (LHS), and quasi-Monte Carlo (QMC). In LSS, the input variables are grouped into subsets, and a lower dimensional QMC method is used within each subset. The QMC points are presented in random order within subsets. QMC methods have been observed to lose effectiveness in high dimensional problems. This paper shows that LSS can extend the benefits of QMC to much higher dimensions, when one can make a good grouping of input variables. Some suggestions for grouping variables are given for the motivating examples. Even a poor grouping can still be expected to do as well as LHS. The paper also extends LHS and LSS to infinite dimensional problems. The paper includes a survey of QMC methods, randomized versions of them (RQMC) and previous methods for extending Q...
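The LHS ingredient of this construction is easy to state concretely. A minimal textbook Latin hypercube sampler in Python (a generic sketch, not the paper's LSS code): in every coordinate, the n points occupy n distinct equal-width strata.

```python
import numpy as np

def latin_hypercube(n, d, rng=None):
    """Draw n points in [0,1)^d so that, in each coordinate, the n
    values land in n distinct equal-width strata (a Latin hypercube)."""
    rng = np.random.default_rng(rng)
    samples = np.empty((n, d))
    for j in range(d):
        perm = rng.permutation(n)                   # which stratum each point uses
        samples[:, j] = (perm + rng.random(n)) / n  # uniform jitter inside the stratum
    return samples

pts = latin_hypercube(8, 3, rng=0)
```

LSS then replaces the independent jitter within each subset of coordinates by a low-dimensional QMC point set, presented in random order, so each subset inherits QMC accuracy while the stratification ties the subsets together.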
Maximum likelihood estimation of a stochastic integrate-and-fire neural model
NIPS, 2003
Cited by 63 (21 self)
We examine a cascade encoding model for neural response in which a linear filtering stage is followed by a noisy, leaky, integrate-and-fire spike generation mechanism. This model provides a biophysically more realistic alternative to models based on Poisson (memoryless) spike generation, and can effectively reproduce a variety of spiking behaviors seen in vivo. We describe the maximum likelihood estimator for the model parameters, given only extracellular spike train responses (not intracellular voltage data). Specifically, we prove that the log likelihood function is concave and thus has an essentially unique global maximum that can be found using gradient ascent techniques. We develop an efficient algorithm for computing the maximum likelihood solution, demonstrate the effectiveness of the resulting estimator with numerical simulations, and discuss a method of testing the model's validity using time-rescaling and density evolution techniques. (Paninski et al., November 30, 2004)
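The key consequence of concavity is that plain gradient ascent finds the global maximum. A minimal generic sketch, run here on a toy concave log-likelihood (a Gaussian mean), not on the paper's integrate-and-fire likelihood:

```python
import numpy as np

def gradient_ascent(grad, theta0, lr=0.1, tol=1e-8, max_iter=10000):
    """Generic fixed-step gradient ascent; for a concave objective the
    iterates converge to the (essentially unique) global maximum."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        step = lr * grad(theta)
        theta_new = theta + step
        if np.linalg.norm(theta_new - theta) < tol:  # converged
            return theta_new
        theta = theta_new
    return theta

# Toy concave log-likelihood: Gaussian with unknown mean mu.
# log L(mu) = -0.5 * sum((x_i - mu)^2) + const, so grad = sum(x_i - mu).
data = np.array([1.0, 2.0, 3.0])
grad_ll = lambda mu: np.sum(data - mu)
mu_hat = gradient_ascent(grad_ll, np.array(0.0))
```

For the toy problem the maximizer is the sample mean, 2.0; the paper's contribution is proving that the same concavity guarantee holds for the much richer spike-train likelihood.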
Recent Advances in Randomized Quasi-Monte Carlo Methods
Cited by 60 (13 self)
We survey some of the recent developments on quasi-Monte Carlo (QMC) methods, which, in their basic form, are a deterministic counterpart to the Monte Carlo (MC) method. Our main focus is the applicability of these methods to practical problems that involve the estimation of a high-dimensional integral. We review several QMC constructions and different randomizations that have been proposed to provide unbiased estimators and for error estimation. Randomizing QMC methods allows us to view them as variance reduction techniques. New and old results on this topic are used to explain how these methods can improve over the MC method in practice. We also discuss how this methodology can be coupled with clever transformations of the integrand in order to reduce the variance further. Additional topics included in this survey are the description of figures of merit used to measure the quality of the constructions underlying these methods, and other related techniques for multidimensional integration.
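The simplest randomization in this literature is the random shift: shift the whole deterministic point set by a uniform vector modulo 1, and repeat with independent shifts to get an unbiased estimate plus an error estimate. A sketch for a rank-1 lattice rule (the generating vector below is an arbitrary illustrative choice, not a recommended one):

```python
import numpy as np

def shifted_lattice_estimate(f, z, n, d, n_shifts=10, rng=None):
    """Estimate the integral of f over [0,1)^d using the rank-1 lattice
    {i*z/n mod 1 : i=0..n-1}, randomized by independent uniform shifts.
    Averaging over shifts gives an unbiased estimator; the spread across
    shifts gives a standard-error estimate."""
    rng = np.random.default_rng(rng)
    i = np.arange(n)
    base = (np.outer(i, z) / n) % 1.0        # deterministic lattice points
    estimates = []
    for _ in range(n_shifts):
        shift = rng.random(d)
        pts = (base + shift) % 1.0           # one randomly shifted copy
        estimates.append(np.mean(f(pts)))
    estimates = np.array(estimates)
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(n_shifts)

# Example: integrate f(x) = x_1 * x_2 over [0,1)^2 (true value 1/4).
f = lambda x: np.prod(x, axis=1)
est, err = shifted_lattice_estimate(f, z=np.array([1, 21]), n=55, d=2, rng=0)
```

Because each shifted copy is an unbiased estimator, the empirical variance across shifts is exactly the "error estimation" device the abstract refers to; the design of good generating vectors `z` is what the figures of merit measure.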
Variance Reduction via Lattice Rules
Management Science, 2000
Cited by 53 (12 self)
This is a review article on lattice methods for multiple integration over the unit hypercube, with a variance-reduction viewpoint. It also contains some new results and ideas. The aim is to examine the basic principles supporting these methods and how they can be used effectively for the simulation models that are typically encountered in the area of Management Science. These models can usually be reformulated as integration problems over the unit hypercube with a large (sometimes infinite) number of dimensions. We examine selection criteria for the lattice rules and suggest criteria which take into account the quality of the projections of the lattices over selected low-dimensional subspaces. The criteria are strongly related to those used for selecting linear congruential and multiple recursive random number generators. Numerical examples illustrate the effectiveness of the approach.
Moment Inequalities for Functions of Independent Random Variables
Cited by 46 (11 self)
The purpose of this paper is to provide such general-purpose inequalities. Our approach is based on a generalization of Ledoux's entropy method (see [26, 28]). Ledoux's method relies on abstract functional inequalities known as logarithmic Sobolev inequalities and provides a powerful tool for deriving exponential inequalities for functions of independent random variables; see Boucheron, Massart, and Lugosi [6, 7], Bousquet [8], Devroye [14], Massart [30, 31], Rio [36] for various applications. To derive moment inequalities for general functions of independent random variables, we elaborate on the pioneering work of Latala and Oleszkiewicz [25] and describe so-called φ-Sobolev inequalities, which interpolate between Poincaré's inequality and logarithmic Sobolev inequalities (see also Beckner [4] and Bobkov's arguments in [26]).
AMS 1991 subject classifications: Primary 60E15, 60C05, 28A35; Secondary 05C80. Key words and phrases: moment inequalities, concentration inequalities, empirical processes, random graphs. Supported by EU Working Group RANDAPX, binational PROCOPE Grant 05923XL; the work of the third author was supported by the Spanish Ministry of Science and Technology and FEDER, grant BMF200303324.
Concentration Inequalities Using the Entropy Method
2002
Cited by 42 (3 self)
We investigate a new methodology... The main purpose of this paper is to point out the simplicity and the generality of the approach. We show how the new method can recover many of Talagrand's revolutionary inequalities and provide new applications in a variety of problems including Rademacher averages, Rademacher chaos, the number of certain small subgraphs in a random graph, and the minimum of the empirical risk in some statistical estimation problems.
Concentration inequalities
Advanced Lectures in Machine Learning, 2004
Cited by 38 (1 self)
Abstract. Concentration inequalities deal with deviations of functions of independent random variables from their expectation. In the last decade new tools have been introduced making it possible to establish simple and powerful inequalities. These inequalities are at the heart of the mathematical analysis of various problems in machine learning and have made it possible to derive new efficient algorithms. This text attempts to summarize some of the basic tools.
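A canonical example of the kind of inequality this survey covers is McDiarmid's bounded-differences bound, stated here for concreteness (standard material, not quoted from the text above): if changing any single coordinate of $f$ moves its value by at most $c_i$, then for independent $X_1,\dots,X_n$,

```latex
% McDiarmid's bounded-differences inequality. Assume
% |f(x_1,\dots,x_i,\dots,x_n) - f(x_1,\dots,x_i',\dots,x_n)| \le c_i
% for all arguments x, all substitutes x_i', and every coordinate i.
\Pr\bigl( f(X_1,\dots,X_n) - \mathbb{E}\,f(X_1,\dots,X_n) \ge t \bigr)
  \le \exp\!\left( \frac{-2\,t^2}{\sum_{i=1}^{n} c_i^2} \right)
```

The entropy-method papers listed above derive this and much sharper, non-uniform refinements (e.g. for suprema of empirical processes) from logarithmic Sobolev-type inequalities.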