Randomized Experiments from Nonrandom Selection in the U.S. House Elections. Journal of Econometrics, 2008.
Abstract

Cited by 302 (15 self)
This paper establishes the relatively weak conditions under which causal inferences from a regression-discontinuity (RD) analysis can be as credible as those from a randomized experiment, and hence under which the validity of the RD design can be tested by examining whether or not there is a discontinuity in any predetermined (or “baseline”) variables at the RD threshold. Specifically, consider a standard treatment evaluation problem in which treatment is assigned to an individual if and only if V > v0, where v0 is a known threshold and V is observable. V can depend on the individual’s characteristics and choices, but there is also a random chance element: for each individual, there exists a well-defined probability distribution for V. The density function, allowed to differ arbitrarily across the population, is assumed to be continuous. It is formally established that treatment status here is as good as randomized in a local neighborhood of V = v0. These ideas are illustrated in an analysis of U.S. House elections, where inherent uncertainty in the final vote count is plausible, which would imply that the party that wins is essentially randomized among elections decided by a narrow margin. The evidence is consistent with this prediction, which is then used to generate “near-experimental” causal estimates of the electoral advantage to incumbency.
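The balance test the abstract describes can be sketched on simulated data: if treatment is as good as randomized near the threshold, a predetermined covariate should show no jump there. A minimal illustration (all variable names and parameters are hypothetical, not taken from the paper):

```python
import random

random.seed(0)

# Simulated data: a predetermined ("baseline") covariate influences the
# running variable V, but V also has a continuous random component.
v0 = 0.5
n = 100_000
data = []
for _ in range(n):
    baseline = random.gauss(0, 1)            # predetermined characteristic
    v = 0.3 * baseline + random.gauss(0, 1)  # running variable with noise
    data.append((v, baseline))

# Balance check: compare the covariate mean just below and just above v0.
h = 0.05  # narrow window around the threshold
left = [b for v, b in data if v0 - h <= v < v0]
right = [b for v, b in data if v0 <= v <= v0 + h]

gap = abs(sum(right) / len(right) - sum(left) / len(left))
print(f"mean baseline just below v0: {sum(left) / len(left):.3f}")
print(f"mean baseline just above v0: {sum(right) / len(right):.3f}")
print(f"gap at the threshold: {gap:.3f}")  # small gap: no discontinuity
```

A large gap at the threshold would cast doubt on the local-randomization interpretation, which is exactly the testable restriction the paper exploits.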
Pricing American options: a duality approach. Operations Research, 2001.
Abstract

Cited by 148 (5 self)
We develop a new method for pricing American options. The main practical contribution of this paper is a general algorithm for constructing upper and lower bounds on the true price of the option using any approximation to the option price. We show that our bounds are tight, so that if the initial approximation is close to the true price of the option, the bounds are also guaranteed to be close. We also explicitly characterize the worst-case performance of the pricing bounds. The computation of the lower bound is straightforward and relies on simulating the suboptimal exercise strategy implied by the approximate option price. The upper bound is also computed using Monte Carlo simulation. This is made feasible by the representation of the American option price as a solution of a properly defined dual minimization problem, which is the main theoretical result of this paper. Our algorithm proves to be accurate on a set of sample problems where we price call options on the maximum and the geometric mean of a collection of stocks. These numerical results suggest that our pricing method can be successfully applied to problems of practical interest. An earlier draft of this paper was titled Pricing High-Dimensional American Options: A Duality ...
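The lower-bound idea is easy to sketch: any fixed exercise rule, however suboptimal, yields a valid lower bound on the American price, because the true price is the value under the optimal rule. A minimal Monte Carlo illustration for a Bermudan put under geometric Brownian motion (the threshold rule and all parameters are illustrative, not taken from the paper):

```python
import math
import random

random.seed(1)

# Illustrative market and contract parameters (not the paper's examples).
S0, K, r, sigma, T, steps, paths = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 20_000
dt = T / steps

def lower_bound(exercise_boundary):
    """Discounted value of a suboptimal rule: exercise the put the first
    time the stock falls below exercise_boundary; else hold to maturity."""
    total = 0.0
    for _ in range(paths):
        s = S0
        for t in range(1, steps + 1):
            s *= math.exp((r - 0.5 * sigma ** 2) * dt
                          + sigma * math.sqrt(dt) * random.gauss(0, 1))
            if s < exercise_boundary:             # suboptimal rule fires
                total += math.exp(-r * t * dt) * (K - s)
                break
        else:                                     # never exercised early
            total += math.exp(-r * T) * max(K - s, 0.0)
    return total / paths

bounds = {}
for b in (80.0, 85.0, 90.0):
    bounds[b] = lower_bound(b)
    print(f"exercise boundary {b:.0f}: lower bound ~ {bounds[b]:.3f}")
```

Each candidate rule gives a valid lower bound; the paper's algorithm obtains the rule from an approximate price function and pairs it with a dual upper bound.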
Running to Keep the Same Place: Consumer Choice as a Game of Status. American Economic Review, 2004.
Abstract

Cited by 82 (7 self)
If individuals care about their status, defined as their rank in the distribution of consumption of one “positional” good, then the consumer’s problem is strategic as her utility depends on the consumption choices of others. In the symmetric Nash equilibrium, each individual spends an inefficiently high amount on the status good. Using techniques from auction theory, we analyze the effects of exogenous changes in the distribution of income. In a richer society, almost all individuals spend more on conspicuous consumption, and individual utility is lower at each income level. In a more equal society, the poor are worse off.
Modeling and Generating Random Vectors with Arbitrary Marginal Distributions and Correlation Matrix, 1997.
Abstract

Cited by 69 (4 self)
We describe a model for representing random vectors whose component random variables have arbitrary marginal distributions and correlation matrix, and describe how to generate data based upon this model for use in a stochastic simulation. The central idea is to transform a multivariate normal random vector into the desired random vector, so we refer to these vectors as having a NORTA (NORmal To Anything) distribution. NORTA vectors are most useful when the marginal distributions of the component random variables are neither identical nor from the same family of distributions, and they are particularly valuable when the dimension of the random vector is greater than two. Several numerical examples are provided. Keywords: simulation, random vector, input modeling, correlation matrix, copulas. 1 Introduction. In many stochastic simulations, simple input models (independent and identically distributed sequences from standard probability distributions) are not faithful representations of the ...
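The NORTA transform can be sketched in two dimensions: generate correlated standard normals, push each through the normal CDF to get uniforms, then invert the target marginal CDFs. The marginals and base correlation below are chosen purely for illustration; matching a prescribed output correlation requires the adjustment of the base normal correlation that the paper develops, which this sketch omits.

```python
import math
import random

random.seed(2)

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norta_pair(rho, lam):
    """One NORTA-style draw: Exponential(lam) and Uniform(0,1) marginals
    induced from a correlated standard normal pair."""
    z1 = random.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
    u1, u2 = norm_cdf(z1), norm_cdf(z2)   # normal -> uniform
    x1 = -math.log(1 - u1) / lam          # inverse CDF of Exponential(lam)
    x2 = u2                               # Uniform(0, 1) marginal
    return x1, x2

sample = [norta_pair(rho=0.7, lam=2.0) for _ in range(50_000)]
mean_x1 = sum(x for x, _ in sample) / len(sample)
print(f"empirical mean of exponential marginal: {mean_x1:.3f}")  # near 1/lam = 0.5
```

The same recipe extends to higher dimensions by replacing the two-variable construction with a Cholesky factor of the base normal correlation matrix.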
CLT for Linear Spectral Statistics of Large Dimensional Sample Covariance Matrices, 2003.
Abstract

Cited by 63 (2 self)
This paper shows their rate of convergence to be 1/n by proving that, after proper scaling, they form a tight sequence. Moreover, if EX11^2 = 0 and E|X11|^4 = 2, or if X11 and Tn are real and EX11^4 = 3, they are shown to have Gaussian limits.
Less hashing, same performance: Building a better Bloom filter. In Proc. of the 14th Annual European Symposium on Algorithms (ESA 2006), 2006.
Abstract

Cited by 58 (7 self)
A standard technique from the hashing literature is to use two hash functions h1(x) and h2(x) to simulate additional hash functions of the form gi(x) = h1(x) + ih2(x). We demonstrate that this technique can be usefully applied to Bloom filters and related data structures. Specifically, only two hash functions are necessary to effectively implement a Bloom filter without any loss in the asymptotic false positive probability. This leads to less computation and potentially less need for ...
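The double-hashing scheme gi(x) = h1(x) + i·h2(x) can be sketched as follows. Deriving h1 and h2 from one SHA-256 digest, and forcing h2 odd so it never degenerates to zero, are implementation choices of this sketch, not prescriptions from the paper.

```python
import hashlib

class BloomFilter:
    """Bloom filter using the two-hash trick: g_i(x) = h1(x) + i * h2(x)."""

    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _h1_h2(self, item):
        # Two 64-bit values carved out of a single SHA-256 digest.
        d = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1  # odd, so h2 != 0
        return h1, h2

    def add(self, item):
        h1, h2 = self._h1_h2(item)
        for i in range(self.k):
            self.bits[(h1 + i * h2) % self.m] = 1

    def __contains__(self, item):
        h1, h2 = self._h1_h2(item)
        return all(self.bits[(h1 + i * h2) % self.m] for i in range(self.k))

bf = BloomFilter(m=10_000, k=5)
for word in ("bloom", "filter", "hash"):
    bf.add(word)
print("bloom" in bf)    # True: inserted items always match
print("missing" in bf)  # almost certainly False at this low load
```

Only two base hash evaluations are needed per operation, however large k is, which is the computational saving the abstract describes.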
Phase Change of Limit Laws in the Quicksort Recurrence Under Varying Toll Functions, 2001.
Abstract

Cited by 54 (18 self)
We characterize all limit laws of the quicksort-type random variables defined recursively by Xn = X_{In} + X*_{n-1-In} + Tn when the "toll function" Tn varies and satisfies general conditions, where (Xn), (X*n), (In, Tn) are independent, Xn is distributed as X*n, and In is uniformly distributed over {0, . . . , n - 1}. When the "toll function" Tn (cost needed to partition the original problem into smaller subproblems) is small (roughly lim sup_{n -> inf} log E(Tn) / log n <= 1/2), Xn is asymptotically normally distributed; non-normal limit laws emerge when Tn becomes larger. We give many new examples ranging from the number of exchanges in quicksort to sorting on the broadcast communication model, from an in-situ permutation algorithm to tree traversal algorithms, etc.
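Taking expectations in a recurrence of this type, with In uniform on {0, . . . , n - 1}, gives E Xn = E Tn + (2/n) * sum_{j &lt; n} E Xj, which can be iterated numerically. A minimal sketch using the classic comparison toll Tn = n - 1 (a standard illustration of the recurrence, not code from the paper):

```python
import math

def mean_cost(n_max, toll):
    """Iterate E X_n = toll(n) + (2/n) * sum_{j < n} E X_j for n up to n_max."""
    mean = [0.0] * (n_max + 1)
    running = 0.0                      # sum of mean[0..n-1]
    for n in range(1, n_max + 1):
        mean[n] = toll(n) + 2.0 * running / n
        running += mean[n]
    return mean

# Toll T_n = n - 1: the expected comparison count of quicksort.
m = mean_cost(10_000, toll=lambda n: n - 1)
n = 10_000
print(f"E X_n for n = {n}: {m[n]:.1f}")
print(f"2 n ln n          : {2 * n * math.log(n):.1f}")  # same leading order
```

Larger tolls shift the balance: once Tn dominates the partial sums, the limit law is no longer normal, which is the phase change the abstract refers to.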
Sojourn Time Asymptotics in the M/G/1 Processor Sharing Queue. Queueing Systems, 1998.
Abstract

Cited by 53 (8 self)
We show for the M/G/1 processor sharing queue that the service time distribution is regularly varying of index -v, v non-integer, iff the sojourn time distribution is regularly varying of index -v. This result is derived from a new expression for the Laplace-Stieltjes transform of the sojourn time distribution. That expression also leads to other new properties for the sojourn time distribution. We show how the moments of the sojourn time can be calculated recursively and prove that the k-th moment of the sojourn time is finite iff the k-th moment of the service time is finite. In addition, we give a short proof of a heavy traffic theorem for the sojourn time distribution, prove a heavy traffic theorem for the moments of the sojourn time, and study the properties of the heavy traffic limiting sojourn time distribution when the service time distribution is regularly varying. Explicit formulas and multi-term expansions are provided for the case that the service time has a Pareto distribution.
Sample size determination in microarray experiments for class comparison and prognostic classification. Biostatistics, 2005; 6: 27.
Abstract

Cited by 38 (8 self)
Determining sample sizes for microarray experiments is important but the complexity of these experiments, and the large amounts of data they produce, can make the sample size issue seem daunting, and tempt researchers to use rules of thumb in place of formal calculations based on the goals of the experiment. Here we present formulas for determining sample sizes to achieve a variety of experimental goals, including class comparison and the development of prognostic markers. Results are derived which describe the impact of pooling, technical replicates and dye-swap arrays on sample size requirements. These results are shown to depend on the relative sizes of different sources of variability. A variety of common types of experimental situations and designs used with single-label and dual-label microarrays are considered. We discuss procedures for controlling the false discovery rate. Our calculations are based on relatively simple yet realistic statistical models for the data, and provide straightforward sample size calculation formulas.
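For intuition, the generic two-sample normal approximation, with a multiplicity-adjusted significance level, shows why per-gene testing in class comparison drives sample sizes up. This is a textbook calculation with illustrative effect-size numbers, not necessarily the paper's exact formulas.

```python
import math
from statistics import NormalDist

def per_group_n(delta, sigma, alpha, power):
    """Standard two-sample formula per group:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2,
    with delta the mean difference and sigma the per-gene SD."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)
    zb = z.inv_cdf(power)
    return math.ceil(2 * (za + zb) ** 2 * sigma ** 2 / delta ** 2)

# Tightening alpha for 10,000 genes (a Bonferroni-style adjustment)
# sharply raises the number of arrays needed per group.
for a in (0.05, 0.05 / 10_000):
    print(f"alpha = {a:g}: {per_group_n(1.0, 0.5, a, 0.9)} arrays per group")
```

Controlling the false discovery rate instead of the family-wise error rate, as the paper discusses, typically sits between these two extremes.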
On asymptotics of eigenvectors of large sample covariance matrix. Annals of Probability.
Abstract

Cited by 34 (10 self)
Let {Xij}, i, j = 1, 2, . . ., be a double array of i.i.d. complex random variables with EX11 = 0, E|X11|^2 = 1 and E|X11|^4 < ∞, and let An = ...