Results 1–10 of 113
Optimal inapproximability results for MAX-CUT and other 2-variable CSPs?
, 2005
Abstract

Cited by 175 (26 self)
In this paper we show a reduction from the Unique Games problem to the problem of approximating MAX-CUT to within a factor of α_GW + ε, for all ε > 0; here α_GW ≈ .878567 denotes the approximation ratio achieved by the Goemans–Williamson algorithm [25]. This implies that if the Unique Games
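The rounding step of the Goemans–Williamson algorithm mentioned above can be sketched briefly: given unit vectors from the MAX-CUT semidefinite relaxation, a random hyperplane through the origin splits the vertices into the two sides of the cut. The sketch below is an illustration only, not the paper's construction: it skips the SDP solve and feeds in hand-picked unit vectors for a 4-cycle.

```python
import numpy as np

def hyperplane_round(vectors, edges, rng):
    """Goemans-Williamson rounding step: a random hyperplane through the origin
    assigns each vertex's unit vector to one side of the cut."""
    r = rng.standard_normal(vectors.shape[1])   # normal of a random hyperplane
    side = vectors @ r >= 0                     # which half-space each vector is in
    cut = sum(1 for u, v in edges if side[u] != side[v])
    return side, cut

# Toy instance: a 4-cycle, whose maximum cut uses all 4 edges. As a stand-in
# for an SDP solution, embed the optimal +1/-1 cut directly as unit vectors.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
vectors = np.array([[1.0, 0.0], [-1.0, 0.0], [1.0, 0.0], [-1.0, 0.0]])
rng = np.random.default_rng(0)
side, cut = hyperplane_round(vectors, edges, rng)
print(cut)   # 4: opposed vectors always land on opposite sides of the hyperplane
```

With a genuine SDP solution the same rounding achieves the α_GW ≈ .878567 guarantee in expectation.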
Mutual information, Fisher information and population coding
 Neural Computation
, 1998
Abstract

Cited by 61 (3 self)
In the context of parameter estimation and model selection, it is only quite recently that a direct link between the Fisher information and information theoretic quantities has been exhibited. We give an interpretation of this link within the standard framework of information theory. We show that in the context of population coding, the mutual information between the activity of a large array of neurons and a stimulus to which the neurons are tuned is naturally related to the Fisher information. In the light of this result we consider the optimization of the tuning curve parameters in the case of neurons responding to a stimulus represented by an angular variable. To appear in Neural Computation Vol. 10, Issue 7, published by the MIT Press.
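As a small numeric illustration of the Fisher information side of this link: for independent Poisson neurons, each tuning curve f_i contributes f_i′(θ)²/f_i(θ) to the population Fisher information. The von Mises tuning curves and all parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def population_fisher(theta, centers, f_max=20.0, kappa=2.0):
    """Total Fisher information about an angle theta from independent Poisson
    neurons with (hypothetical) von Mises tuning curves
        f_i(theta) = f_max * exp(kappa * (cos(theta - theta_i) - 1)).
    With Poisson noise each neuron contributes f_i'(theta)**2 / f_i(theta),
    which simplifies to f_i(theta) * kappa**2 * sin(theta - theta_i)**2."""
    f = f_max * np.exp(kappa * (np.cos(theta - centers) - 1.0))
    return float(np.sum(f * kappa**2 * np.sin(theta - centers) ** 2))

# A dense, evenly spaced array of tuning centers codes all angles equally well:
centers = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
I_a, I_b = population_fisher(0.3, centers), population_fisher(1.7, centers)
print(I_a, I_b)   # nearly identical across stimulus angles
```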
On coupling constructions and rates in the CLT for dependent summands with applications to the antivoter model and weighted
, 1997
Abstract

Cited by 31 (1 self)
This paper deals with rates of convergence in the CLT for certain types of dependency. The main idea is to combine a modification of a theorem of Stein, requiring a coupling construction, with a dynamic setup provided by a Markov structure that suggests natural coupling variables. More specifically, given a stationary Markov chain X^(t), and a function U = U(X^(t)), we propose a way to study the proximity of U to a normal random variable when the state space is large. We apply the general method to the study of two problems. In the first, we consider the antivoter chain X^(t) = (X_i^(t))_{i ∈ V}, t = 0, 1, ..., where V is the vertex set of an n-vertex regular graph, and X_i^(t) = +1 or −1. The chain evolves from time t to t + 1 by choosing a random vertex i, and a random neighbor of it j, and setting X_i^(t+1) = −X_j^(t) and X_k^(t+1) = X_k^(t) for all k ≠ i. For a stationary antivoter chain, we study the normal approximation of U_n = U_n^(t) = Σ_i X_i^(t) for large n and consider some conditions on sequences of graphs such that U_n is asymptotically normal, a problem posed by Aldous and Fill. The same approach may also be applied in situations where a Markov chain does not appear in the original statement of a problem but is constructed as an auxiliary device. This is illustrated by considering weighted U-statistics. In particular we are able to unify and generalize some results on normal convergence for degenerate weighted U-statistics and provide rates.
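The antivoter dynamics described above are easy to simulate; the sketch below runs the chain on an n-cycle (a 2-regular graph) and reports the statistic U_n = Σ_i X_i^(t). The graph choice and run length are arbitrary illustrations.

```python
import random

def antivoter_step(spins, neighbors, rng):
    """One transition of the antivoter chain: pick a random vertex i and a
    random neighbor j of i, then set spin_i to the opposite of spin_j."""
    i = rng.randrange(len(spins))
    j = rng.choice(neighbors[i])
    spins[i] = -spins[j]

# Illustrative graph: an n-cycle, i.e. a 2-regular graph on n vertices.
n = 100
neighbors = [((i - 1) % n, (i + 1) % n) for i in range(n)]
rng = random.Random(0)
spins = [rng.choice((-1, 1)) for _ in range(n)]
for _ in range(20_000):       # run the chain for a while toward stationarity
    antivoter_step(spins, neighbors, rng)
U = sum(spins)                # the statistic U_n = sum_i X_i^(t) studied above
print(U)
```

The paper's question is how close such a U_n is to normal as n grows, and for which graph sequences.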
Stability analysis for stochastic programs
 ANNALS OF OPERATIONS RESEARCH
, 1991
Abstract

Cited by 25 (15 self)
For stochastic programs with recourse and with (several joint) probabilistic constraints, respectively, we derive quantitative continuity properties of the relevant expectation functionals and constraint set mappings. This leads to qualitative and quantitative stability results for optimal values and optimal solutions with respect to perturbations of the underlying probability distributions. Earlier stability results for stochastic programs with recourse and for those with probabilistic constraints are refined and extended, respectively. Emphasis is placed on equipping sets of probability measures with metrics that one can handle in specific situations. To illustrate the general stability results we present possible consequences when estimating the original probability measure via empirical ones.
Confidence intervals for a binomial proportion and asymptotic expansions
 Ann. Statist
, 2002
Abstract

Cited by 20 (1 self)
We address the classic problem of interval estimation of a binomial proportion. The Wald interval p̂ ± z_{α/2} n^(−1/2) (p̂(1 − p̂))^(1/2) is currently in near universal use. We first show that the coverage properties of the Wald interval are persistently poor and defy virtually all conventional wisdom. We then proceed to a theoretical comparison of the standard interval and four additional alternative intervals by asymptotic expansions of their coverage probabilities and expected lengths. The four additional interval methods we study in detail are the score-test interval (Wilson), the likelihood-ratio-test interval, a Jeffreys prior Bayesian interval and an interval suggested by Agresti and Coull. The asymptotic expansions for coverage show that the first three of these alternative methods have coverages that fluctuate about the nominal value, while the Agresti–Coull interval has a somewhat larger and more nearly conservative coverage function. For the five interval methods we also investigate asymptotically their average coverage relative to distributions for p supported within (0, 1). In terms of expected length, asymptotic expansions show that the Agresti–Coull interval is always the longest of these. The remaining three are rather comparable and are shorter than the Wald interval except for p near 0 or 1. These analytical calculations support and complement the findings and the recommendations in Brown, Cai and DasGupta (Statist. Sci. (2001) 16).
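Three of the intervals named above (Wald, Wilson/score, Agresti–Coull) have short closed forms, and the exact coverage probability at a given (n, p) is a finite binomial sum. A minimal sketch:

```python
import math
from statistics import NormalDist

z = NormalDist().inv_cdf(0.975)   # two-sided 95% normal quantile, ~1.96

def wald(x, n):
    p = x / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson(x, n):                  # the score-test interval
    p = x / n
    center = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return center - half, center + half

def agresti_coull(x, n):           # Wald form after adding ~2 successes, ~2 failures
    xt, nt = x + z * z / 2, n + z * z
    p = xt / nt
    half = z * math.sqrt(p * (1 - p) / nt)
    return p - half, p + half

def coverage(interval, n, p):
    """Exact coverage: sum the Binomial(n, p) pmf over x whose interval contains p."""
    return sum(math.comb(n, x) * p**x * (1 - p) ** (n - x)
               for x in range(n + 1)
               if interval(x, n)[0] <= p <= interval(x, n)[1])

n, p = 40, 0.2
cov = {f.__name__: coverage(f, n, p) for f in (wald, wilson, agresti_coull)}
print(cov)   # the Wald interval falls short of the nominal 0.95 here
```

The particular (n, p) = (40, 0.2) is an arbitrary illustration of the coverage computation, not a case analyzed in the paper.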
Tight thresholds for cuckoo hashing via XORSAT
, 2010
Abstract

Cited by 18 (1 self)
We settle the question of tight thresholds for offline cuckoo hashing. The problem can be stated as follows: we have n keys to be hashed into m buckets each capable of holding a single key. Each key has k ≥ 3 (distinct) associated buckets chosen uniformly at random and independently of the choices of other keys. A hash table can be constructed successfully if each key can be placed into one of its buckets. We seek thresholds c_k such that, as n goes to infinity, if n/m ≤ c for some c < c_k then a hash table can be constructed successfully with high probability, and if n/m ≥ c for some c > c_k a hash table cannot be constructed successfully with high probability. Here we are considering the offline version of the problem, where all keys and hash values are given, so the problem is equivalent to previous models of multiple-choice hashing. We find the thresholds for all values of k > 2 by showing that they are in fact the same as the previously known thresholds for the random k-XORSAT problem. We then extend these results to the setting where keys can have differing numbers of choices, and provide evidence in the form of an algorithm for a conjecture extending this result to cuckoo hash tables that store multiple keys in a bucket.
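The offline feasibility question above is exactly a bipartite matching problem: keys on one side, buckets on the other, one edge per choice. A sketch using simple augmenting paths (Kuhn's algorithm); the instance sizes and the load n/m = 0.85 (below the known k = 3 threshold ≈ 0.918) are illustrative choices.

```python
import random

def can_place_all(choices, m):
    """Offline cuckoo hashing feasibility: every key must end up in one of its
    chosen buckets, at most one key per bucket -- a bipartite matching problem,
    solved here with simple augmenting paths (Kuhn's algorithm)."""
    owner = [-1] * m                      # owner[b] = index of key holding bucket b

    def try_place(key, seen):
        for b in choices[key]:
            if b not in seen:
                seen.add(b)
                # take a free bucket, or evict its owner if it can move elsewhere
                if owner[b] == -1 or try_place(owner[b], seen):
                    owner[b] = key
                    return True
        return False

    return all(try_place(key, set()) for key in range(len(choices)))

# k = 3 choices per key at load n/m = 0.85, below the k = 3 threshold (~0.918),
# so a successful placement is likely (though not guaranteed at this small size).
rng = random.Random(1)
m, n = 300, 255
choices = [rng.sample(range(m), 3) for _ in range(n)]
ok = can_place_all(choices, m)
print(ok)
```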
Inference for identifiable parameters in partially identified econometric models
 Journal of Statistical Planning and Inference – Special Issue in Honor of Ted Anderson
, 2008
Abstract

Cited by 16 (6 self)
This paper considers the problem of inference for partially identified econometric models. The class of models studied is defined by a population objective function Q(θ, P) for θ ∈ Θ. The second argument indicates the dependence of the objective function on P, the distribution of the observed data. Unlike the classical extremum estimation framework, it is not assumed that Q(θ, P) has a unique minimizer in the parameter space Θ. The goal may be either to draw inferences about some unknown point in the set of minimizers of the population objective function or to draw inferences about the set of minimizers itself. In this paper, the object of interest is some unknown point θ ∈ Θ0(P), where Θ0(P) = arg min_{θ∈Θ} Q(θ, P), and so we seek random sets that contain each θ ∈ Θ0(P) with at least some prespecified probability asymptotically. We also consider situations where the object of interest is the image of some point θ ∈ Θ0(P) under a known function. Computationally intensive, yet feasible procedures for constructing random sets satisfying the desired coverage property under weak assumptions are provided. We also provide conditions under which the confidence regions are uniformly consistent in level. To do this, we first derive new uniformity results about subsampling that are of independent interest.
On Roots of Random Polynomials
 Trans. American Math. Soc
, 1995
Abstract

Cited by 15 (1 self)
We study the distribution of the complex roots of random polynomials of degree n with i.i.d. coefficients. Using techniques related to Rice's treatment of the real roots question, we derive, under appropriate moment and regularity conditions, an exact formula for the average density of this distribution, which yields appropriate limit average densities. Further, using a different technique, we prove limit distribution results for coefficients in the domain of attraction of the stable law. AMS 1991 Subject classification: Primary 34F05. Secondary 26C10, 30B20. Keywords: random polynomials, complex roots, domain of attraction of the stable law.
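The concentration of such complex roots near the unit circle is easy to observe numerically; a sketch with i.i.d. standard normal coefficients (one particular case satisfying the paper's moment conditions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
coeffs = rng.standard_normal(n + 1)   # i.i.d. N(0, 1) coefficients, degree n
roots = np.roots(coeffs)              # eigenvalues of the companion matrix
radii = np.abs(roots)
print(np.median(radii))               # close to 1: roots cluster at the unit circle
```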
Probabilistic Analysis and Scheduling of Critical Soft Real-Time Systems
, 1999
Abstract

Cited by 13 (0 self)
In addition to correctness requirements, a real-time system must also meet its temporal constraints, often expressed as deadlines. We call safety- or mission-critical real-time systems which may miss some deadlines critical soft real-time systems, to distinguish them from hard real-time systems, where all deadlines must be met, and from soft real-time systems which are not safety- or mission-critical. The performance of a critical soft real-time system is acceptable as long as the deadline miss rate is below an application-specific threshold. Architectural features of computer systems, such as caches and branch prediction hardware, are designed to improve average performance. Deterministic real-time design and analysis approaches require that such features be disabled to increase predictability. Alternatively, allowances must be made for their effects by designing for the worst case. Either approach leads to a decrease in average performance. Since critical soft real-time systems do not require that all deadlines be met, average performance can be improved by adopting a probabilistic approach. In order to allow a tradeoff between deadlines met and average
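The acceptability criterion above (miss rate below an application-specific threshold) can be illustrated with a Monte Carlo sketch; the two-mode execution-time model below is a hypothetical stand-in for cache and branch-prediction variability, not a model from the paper.

```python
import random

def simulate_miss_rate(rng, trials, deadline):
    """Monte Carlo estimate of the deadline miss rate for a job whose execution
    time is usually short but occasionally long (a hypothetical two-mode model
    standing in for cache / branch-prediction effects)."""
    misses = 0
    for _ in range(trials):
        slow = rng.random() < 0.05                  # 5% of runs take the slow path
        c = rng.gauss(14.0 if slow else 8.0, 1.0)   # execution time in ms
        if c > deadline:
            misses += 1
    return misses / trials

rng = random.Random(0)
rate = simulate_miss_rate(rng, 100_000, deadline=12.0)
print(rate)   # roughly 0.05 under this model
```

A deterministic worst-case analysis of the same job would have to budget for the 14 ms mode on every run; the probabilistic view accepts the job whenever the application's miss-rate threshold exceeds the estimate.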
Distribution sensitivity in stochastic programming
, 1991
Abstract

Cited by 12 (6 self)
In this paper, stochastic programming problems are viewed as parametric programs with respect to the probability distributions of the random coefficients. General results on quantitative stability in parametric optimization are used to study distribution sensitivity of stochastic programs. For recourse and chance-constrained models, quantitative continuity results for optimal values and optimal solution sets are proved (with respect to suitable metrics on the space of probability distributions). The results are useful to study the effect of approximations and of incomplete information in stochastic programming.
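A toy illustration of this kind of distribution sensitivity: a newsvendor problem solved under an empirical demand distribution and under a slightly perturbed one, with correspondingly small change in optimal value. The newsvendor model, prices, and distributions are illustrative assumptions, not the paper's setting.

```python
import random
import statistics

def newsvendor_value(demands, price=5.0, cost=3.0):
    """Optimal expected profit of a newsvendor under the empirical distribution
    given by `demands`: the optimal order is the (1 - cost/price) sample quantile."""
    q_level = 1.0 - cost / price
    srt = sorted(demands)
    order = srt[min(int(q_level * len(srt)), len(srt) - 1)]
    return statistics.mean(price * min(d, order) - cost * order for d in srt)

rng = random.Random(0)
base = [rng.gauss(100.0, 15.0) for _ in range(5000)]
perturbed = [d + rng.gauss(0.0, 1.0) for d in base]   # small shift of each sample point
v1, v2 = newsvendor_value(base), newsvendor_value(perturbed)
print(abs(v1 - v2))   # small: the optimal value moves continuously with the distribution
```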