Results 1–10 of 55
Bayesian Model Selection in Finite Mixtures by Marginal Density Decompositions
Journal of the American Statistical Association, 2001
Density and Hazard Rate Estimation for Right Censored Data Using Wavelet Methods
1997
Abstract
Cited by 18 (3 self)
This paper describes a wavelet method for the estimation of density and hazard rate functions from randomly right censored data. We adopt a nonparametric approach in assuming that the density and hazard rate have no specific parametric form. The method is based on dividing the time axis into a dyadic number of intervals and then counting the number of events within each interval. The number of events and the survival function of the observations are then separately smoothed over time via linear wavelet smoothers, and the hazard rate function estimators are obtained by taking the ratio. We prove that the estimators possess pointwise and global mean-square consistency, attain the best possible asymptotic MISE convergence rate, and are asymptotically normally distributed. We also describe simulation experiments that show these estimators are reasonably reliable in practice. The method is illustrated with two real examples. The first uses survival time data for patients with liver...
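As a rough illustration of the counting step this abstract describes (dyadic binning of the time axis, then a ratio of event counts to the at-risk process), here is a minimal unsmoothed sketch. The function name is mine, and the wavelet smoothing of numerator and denominator — the paper's actual contribution — is deliberately omitted:

```python
import numpy as np

def binned_hazard(times, events, t_max, J=4):
    """Crude ratio-type hazard estimate on 2**J dyadic bins.

    times  : observed (possibly censored) follow-up times
    events : 1 if the event was observed, 0 if right-censored
    Only the raw counting step; the paper first smooths the event
    counts and the survival function with linear wavelet smoothers.
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    n_bins = 2 ** J
    edges = np.linspace(0.0, t_max, n_bins + 1)
    width = edges[1] - edges[0]
    hazard = np.zeros(n_bins)
    for k in range(n_bins):
        # events observed inside bin k
        in_bin = (times >= edges[k]) & (times < edges[k + 1])
        d_k = np.sum(events[in_bin])
        # subjects still at risk at the start of bin k
        r_k = np.sum(times >= edges[k])
        hazard[k] = d_k / (r_k * width) if r_k > 0 else 0.0
    return edges, hazard
```

Under independent censoring, each bin's value estimates the hazard averaged over that interval; the wavelet step then trades this piecewise-constant roughness for the MISE-optimal rates the abstract proves.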
Information-theoretically secret key generation for fading wireless channels
IEEE Transactions on Information Forensics and Security, 2010
Abstract
Cited by 17 (1 self)
Abstract—The multipath-rich wireless environment associated with typical wireless usage scenarios is characterized by a fading channel response that is time-varying, location-sensitive, and uniquely shared by a given transmitter–receiver pair. The complexity associated with a richly scattering environment implies that the short-term fading process is inherently hard to predict and best modeled stochastically, with rapid decorrelation properties in space, time, and frequency. In this paper, we demonstrate how the channel state between a wireless transmitter and receiver can be used as the basis for building practical secret key generation protocols between two entities. We begin by presenting a scheme based on level crossings of the fading process, which is well-suited for the Rayleigh and Rician fading models associated with a richly scattering environment. Our level-crossing algorithm is simple, and incorporates a self-authenticating mechanism to prevent adversarial manipulation of message exchanges during the protocol. Since the level-crossing algorithm is best suited for fading processes that exhibit symmetry in their underlying distribution, we present a second and more powerful approach that is suited for more general channel state distributions. This second approach is motivated by observations from quantizing jointly Gaussian processes, but exploits empirical measurements to set quantization boundaries and a heuristic log-likelihood ratio estimate to achieve an improved secret key generation rate. We validate both proposed protocols through experiments using a customized 802.11a platform, and show for the typical WiFi channel that reliable secret key establishment can be accomplished at rates on the order of 10 b/s. Index Terms—Information-theoretic security, key generation, PHY layer security.
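The level-crossing idea can be sketched in a few lines: each party quantizes its own channel samples against guard-band thresholds, publicly exchanges the retained sample indices, and keeps only the positions both retained. This is a simplified caricature (single-sample decisions, no excursion runs, no authentication step), with function names of my own choosing:

```python
import numpy as np

def level_crossing_bits(x, q_minus, q_plus):
    """Quantize channel samples: > q_plus -> 1, < q_minus -> 0,
    drop anything in the guard band. Returns (kept indices, bits)."""
    x = np.asarray(x, float)
    hi = x > q_plus
    lo = x < q_minus
    keep = np.where(hi | lo)[0]
    bits = hi[keep].astype(int)
    return keep, bits

def reconcile(idx_a, bits_a, idx_b, bits_b):
    """Keep only the positions both parties retained.
    The indices are exchanged in the clear; the bits are not."""
    _, ia, ib = np.intersect1d(idx_a, idx_b, return_indices=True)
    return bits_a[ia], bits_b[ib]
```

Because reciprocal channel measurements are highly correlated while the guard band (q_minus, q_plus) absorbs small measurement noise, the two retained bit strings agree with high probability, which is what makes the subsequent key agreement step cheap.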
Asymptotic minimaxity of false discovery rate thresholding for sparse exponential data
Annals of Statistics, 2006
Abstract
Cited by 14 (4 self)
Control of the False Discovery Rate (FDR) is an important development in multiple hypothesis testing, allowing the user to limit the fraction of rejected null hypotheses which correspond to false rejections (i.e. false discoveries). The FDR principle can also be used in multiparameter estimation problems to set thresholds for separating signal from noise when the signal is sparse. Success has been proven when the noise is Gaussian; see [3]. In this paper, we consider the application of FDR thresholding to a non-Gaussian setting, in hopes of learning whether the good asymptotic properties of FDR thresholding as an estimation tool hold more broadly than just at the standard Gaussian model. We consider a vector Xi, i = 1,..., n, whose coordinates are independent exponential with individual means µi. The vector µ is thought to be sparse, with most coordinates 1 and a small fraction significantly larger than 1. This models a situation where most coordinates are simply ‘noise’, but a small fraction of the coordinates contain ‘signal’. We develop an estimation theory working with log(µi) as the estimand, and use the per-coordinate mean-squared error in recovering log(µi) to measure risk. We consider minimax
Nonparametric change-point estimation
Annals of Statistics, 1988
Abstract
Cited by 13 (2 self)
Consider a sequence of independent random variables {X_i : 1 ≤ i ≤ n} having cdf F for i ≤ θn and cdf G otherwise. A strongly consistent estimator of the change-point θ ∈ (0, 1) is proposed. The estimator requires no knowledge of the functional forms or parametric families of F and G. Furthermore, F and G need not differ in their means (or other measure of location). The only requirement is that F and G differ on a set of positive probability. The proof of consistency provides rates of convergence and bounds on the error probability for the estimator. The estimator is applied to two well-known data sets; in both cases it yields results in close agreement with previous parametric analyses. A simulation study
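To make the "no location difference needed" point concrete, here is one common nonparametric variant of this idea — maximizing a weighted Kolmogorov–Smirnov distance between the empirical cdfs of the two candidate segments. This is an illustrative statistic in the same spirit, not necessarily the paper's exact estimator:

```python
import numpy as np

def changepoint_estimate(x):
    """Estimate a distributional change-point theta in (0,1) by
    maximizing a weighted KS distance between the empirical cdfs
    of the two segments. Detects any distributional change, not
    just a shift in mean."""
    x = np.asarray(x, float)
    n = x.size
    grid = np.sort(x)                       # evaluate cdfs at the data
    best_k, best_stat = 1, -1.0
    for k in range(1, n):
        F1 = np.searchsorted(np.sort(x[:k]), grid, side="right") / k
        F2 = np.searchsorted(np.sort(x[k:]), grid, side="right") / (n - k)
        # k(n-k)/n^2 weight discounts unstable extreme splits
        stat = (k * (n - k) / n**2) * np.max(np.abs(F1 - F2))
        if stat > best_stat:
            best_stat, best_k = stat, k
    return best_k / n                       # estimate of theta
```

Because the criterion compares whole empirical distributions, it also picks up pure scale or shape changes where mean-based CUSUM statistics have no power.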
Marginal asymptotics for the “large p, small n” paradigm: with applications to microarray data
Annals of Statistics
Abstract
Cited by 12 (2 self)
The “large p, small n” paradigm arises in microarray studies, where expression levels of thousands of genes are monitored for a small number of subjects. There has been an increasing demand for study of asymptotics for the various statistical models and methodologies using genomic data. In this article, we focus on one-sample and two-sample microarray experiments, where the goal is to identify significantly differentially expressed genes. We establish uniform consistency of certain estimators of marginal distribution functions, sample means and sample medians under the large p, small n assumption. We also establish uniform consistency of marginal p-values based on certain asymptotic approximations which permit inference based on false discovery rate techniques. The effects of the normalization process on these results are also investigated. Simulation studies and data analyses are used to assess finite sample performance.
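The workflow the abstract justifies — one marginal p-value per gene from an asymptotic approximation, with p far larger than n — can be sketched as follows. The two-sample z-statistic and normal approximation here stand in for whichever approximation the paper actually analyzes:

```python
import numpy as np
from math import erf, sqrt

def marginal_pvalues(X, Y):
    """Per-gene two-sample z-tests for a (p genes) x (n subjects) layout.
    Uses a normal approximation of the test statistic; the gene count p
    may vastly exceed the per-group sample size n."""
    mx, my = X.mean(axis=1), Y.mean(axis=1)
    vx, vy = X.var(axis=1, ddof=1), Y.var(axis=1, ddof=1)
    z = (mx - my) / np.sqrt(vx / X.shape[1] + vy / Y.shape[1])
    # standard normal cdf via the error function
    phi = np.vectorize(lambda t: 0.5 * (1 + erf(t / sqrt(2))))
    return 2 * (1 - phi(np.abs(z)))
```

Uniform consistency of these marginal p-values over all p genes is exactly what licenses feeding them into an FDR procedure afterwards.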
Asymptotic distributions for the cost of linear probing hashing
Random Structures and Algorithms
Abstract
Cited by 11 (3 self)
Abstract. We study moments and asymptotic distributions of the construction cost, measured as the total displacement, for hash tables using linear probing. Four different methods are employed for different ranges of the parameters; together they yield a complete description. This extends earlier results by Flajolet, Poblete and Viola. The average cost of unsuccessful searches is also considered.
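The cost functional being analyzed is easy to state operationally: insert keys by linear probing and sum how far each key lands from its hashed slot. A minimal sketch (names mine):

```python
def linear_probe_insert(keys, m, hash_fn):
    """Insert keys into a size-m linear-probing table.
    Returns (table, total displacement), where the total displacement
    is the construction cost studied in the paper: the sum over keys
    of the distance from the hashed slot to the slot actually used."""
    table = [None] * m
    total_disp = 0
    for k in keys:
        pos = hash_fn(k) % m
        d = 0
        while table[pos] is not None:   # probe forward, wrapping around
            pos = (pos + 1) % m
            d += 1
        table[pos] = k
        total_disp += d
    return table, total_disp
```

For example, three keys all hashing to slot 0 in an empty table incur displacements 0, 1 and 2, for a total of 3; the paper characterizes how this total behaves as the table fills.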
A Note on Linear Expected Time Algorithms for Finding Convex Hulls
Computing, 1981
Abstract
Cited by 9 (6 self)
Consider n independent identically distributed random vectors from R^d with common density f, and let E(C) be the average complexity of an algorithm that finds the convex hull of these points. Most well-known algorithms satisfy E(C) = O(n) for certain classes of densities. In this note, we show that E(C) = O(n) for algorithms that use a "throw-away" preprocessing step when f is bounded away from 0 and ∞ on any nondegenerate rectangle of R^2. Let X_1, ..., X_n be independent identically distributed random vectors from R^d with common density f, and let C be the complexity of a given convex hull algorithm for X_1, ..., X_n (thus, C is a random variable). In this note we will discuss several convex hull algorithms and the condition on f that ensures their linear average time behavior: E(C) = O(n) (1). In general, the more sophisticated algorithms satisfy (1) for a larger class of densities than do the simple algorithms. The purpose of this note is ...
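One standard form of the "throw-away" preprocessing the note refers to: take the four axis-extreme points (which are always hull vertices, so their quadrilateral lies inside the hull), discard every point strictly inside that quadrilateral, and run an ordinary hull algorithm on the survivors. A planar sketch, using Andrew's monotone chain as the base algorithm (my choice of base algorithm, not necessarily the note's):

```python
def cross(o, a, b):
    """Cross product of vectors o->a and o->b (positive = left turn)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain, O(n log n), counter-clockwise output."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def throwaway_hull(pts):
    """Throw-away preprocessing: discard points strictly inside the
    quadrilateral of the four axis-extreme points, then run the
    standard hull on the survivors."""
    quad = [min(pts), max(pts),
            min(pts, key=lambda p: p[1]), max(pts, key=lambda p: p[1])]
    quad = convex_hull(quad)
    if len(quad) < 3:                  # degenerate: fall back
        return convex_hull(pts)

    def strictly_inside(p):
        return all(cross(quad[i], quad[(i + 1) % len(quad)], p) > 0
                   for i in range(len(quad)))

    survivors = [p for p in pts if not strictly_inside(p)]
    return convex_hull(survivors)
```

Discarded points are strictly inside the hull, so the result is unchanged; when f is bounded as in the note, the expected number of survivors is small enough that the O(n log n) hull step no longer dominates and the total expected time is linear.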
On Locally Uniformly Linearizable High Breakdown Location and Scale Functionals
1998
Abstract
Cited by 9 (3 self)
this paper and the standard one-model situation of robust statistics. They consider a finite number of models or challenges and look for a procedure which performs well at all of them. The hope is that such a procedure will also perform reasonably well for challenges which lie in between. For a given sample a likelihood-based compromise between the two challenges is made. The use of likelihood means that the method of Morgenthaler and Tukey does not satisfy DP5. In Section 6 we show how it is possible to "coarsen" a large class of distributions by reducing them to a finite sample of m points which themselves satisfy DP5. These points can be used to decide between a finite set of challenges and hence to make the weights of the weighted mean depend on the shape of the sample, but in a differentiable manner.
A SUPPLY AND DEMAND FRAMEWORK FOR TWO-SIDED MATCHING MARKETS
Abstract
Cited by 9 (1 self)
Abstract. We propose a new model of two-sided matching markets, which allows for complex heterogeneous preferences, but is more tractable than the standard model, yielding rich comparative statics and new results on large matching markets. We simplify the standard Gale and Shapley (1962) model in two ways. First, following Aumann (1964) we consider a setting where a finite number of agents on one side (colleges or firms) are matched to a continuum mass of agents on the other side (students or workers). Second, we show that, in both the discrete and continuum models, stable matchings have a very simple structure, with colleges accepting students ranked above a threshold, and students demanding their favorite college that will accept them. Moreover, stable matchings may be found by solving for thresholds that balance supply and demand for colleges. We give general conditions under which the continuum model admits a unique stable matching, in contrast to the standard discrete model. This stable matching varies continuously with the parameters of the model, and comparative statics may be derived as in competitive equilibrium theory, through the market clearing equations. Moreover, given a sequence of large discrete
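The threshold characterization suggests a simple computational recipe: start with all thresholds at zero, let each student demand their favorite college that would admit them, and raise any over-demanded college's threshold to its capacity-th highest demanding score, repeating until demand clears. The sketch below is a hypothetical tâtonnement illustration of that characterization on a finite sample, not the paper's own algorithm; all names are mine:

```python
def stable_thresholds(scores, prefs, capacity, max_iters=1000):
    """Solve for college admission thresholds balancing supply and demand.

    scores[i][c] : student i's score at college c (colleges rank by score)
    prefs[i]     : student i's preference order over college indices
    capacity[c]  : seats at college c
    Thresholds only rise, so the iteration converges; at a fixed point
    each college's demand is within capacity and every student holds
    their favorite college whose threshold they meet (stability).
    """
    C = len(capacity)
    P = [0.0] * C                      # start with everyone admissible
    for _ in range(max_iters):
        # each student demands the favorite college that would admit them
        demand = [[] for _ in range(C)]
        for i, order in enumerate(prefs):
            for c in order:
                if scores[i][c] >= P[c]:
                    demand[c].append(scores[i][c])
                    break
        new_P = list(P)
        for c in range(C):
            if len(demand[c]) > capacity[c]:
                # raise the cutoff to the capacity-th highest score
                new_P[c] = sorted(demand[c], reverse=True)[capacity[c] - 1]
        if new_P == P:                 # market cleared
            break
        P = new_P
    return P
```

With three students who all rank college 0 over 1 over 2 and unit capacities, the thresholds settle so that each college admits exactly the student demanding it, mirroring the market-clearing equations in the continuum model.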