Results 1-10 of 76
Network tomography: recent developments
 Statistical Science
, 2004
"... Today's Int ernet is a massive, dist([/#][ net work which cont inuest o explode in size as ecommerce andrelatH actH]M/# grow. Thehet([H(/#]H( and largelyunregulatS stregula of t/ Int/HH3 renderstnde such as dynamicroutc/[ opt2]3fl/ service provision, service level verificatflH( and det(2][/ of ..."
Abstract

Cited by 137 (4 self)
Today's Internet is a massive, distributed network which continues to explode in size as e-commerce and related activities grow. The heterogeneous and largely unregulated structure of the Internet renders tasks such as dynamic routing, optimized service provision, service level verification and detection of anomalous/malicious behavior extremely challenging. The problem is compounded by the fact that one cannot rely on the cooperation of individual servers and routers to aid in the collection of network traffic measurements vital for these tasks. In many ways, network monitoring and inference problems bear a strong resemblance to other "inverse problems" in which key aspects of a system are not directly observable. Familiar signal processing or statistical problems such as tomographic image reconstruction and phylogenetic tree identification have interesting connections to those arising in networking. This article introduces network tomography, a new field which we believe will benefit greatly from the wealth of statistical tools and algorithms. It focuses especially on recent developments in the field including the application of pseudolikelihood methods and tree estimation formulations. Key words: network tomography, pseudolikelihood, topology identification, tree estimation. 1 Introduction No network is an island, entire of itself; every network is a piece of an internetwork, a part of the main. Although administrators of small-scale networks can monitor local traffic conditions and identify congestion points and performance bottlenecks, very few networks are completely... Rui Castro and Robert Nowak are with the Department of Electrical and Computer Engineering, Rice University, Houston, TX; Mark Coates is with the Department of Electrical and Computer Engineering, McGill University, Montreal, Quebec, Canada; Gang Liang and Bin Yu are with the Department of Statistics,...
The incidental parameter problem since 1948
 JOURNAL OF ECONOMETRICS 95 (2000) 391-413
, 2000
"... This paper was written to mark the 50th anniversary of Neyman and Scott's Econometrica paper defining the incidental parameter problem. It surveys the history both of the paper and of the problem in the statistics and econometrics literature. ..."
Abstract

Cited by 125 (0 self)
 Add to MetaCart
This paper was written to mark the 50th anniversary of Neyman and Scott's Econometrica paper defining the incidental parameter problem. It surveys the history both of the paper and of the problem in the statistics and econometrics literature.
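The problem this paper surveys is easiest to see in Neyman and Scott's original example: each individual contributes a fixed, small number of observations, so the individual-specific means keep accumulating as parameters and contaminate the MLE of the common variance. A minimal simulation sketch of that textbook example (illustrative, not code from the paper): with two observations per individual, the joint MLE of the variance converges to half the truth no matter how many individuals are observed.

```python
# Neyman-Scott illustration of the incidental parameter problem: y_i1, y_i2
# ~ N(mu_i, sigma^2) with a separate mu_i per individual. As N grows, the
# joint MLE of sigma^2 converges to sigma^2 / 2, not sigma^2.
import random

random.seed(0)
N, sigma2 = 50_000, 4.0
sse = 0.0
for _ in range(N):
    mu_i = random.gauss(0.0, 10.0)          # incidental parameter for individual i
    y1 = random.gauss(mu_i, sigma2 ** 0.5)  # two observations per individual
    y2 = random.gauss(mu_i, sigma2 ** 0.5)
    ybar = 0.5 * (y1 + y2)
    sse += (y1 - ybar) ** 2 + (y2 - ybar) ** 2

sigma2_mle = sse / (2 * N)   # joint MLE over (mu_1..mu_N, sigma^2)
print(sigma2_mle)            # close to sigma2 / 2 = 2.0, not 4.0
```

The bias does not vanish as N grows because the number of nuisance parameters grows at the same rate as the sample.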
Network Loss Inference using Unicast End-to-End Measurement
 Proc. ITC Conf. IP Traffic, Modeling and Management
, 2000
"... The fundamental objective of this work is to determine the extent to which unicast, endto end network measurement is capable of determining internal network losses. We show that it is not possible to determine internal losses based solely on unicast, endtoend measurement. ..."
Abstract

Cited by 97 (14 self)
The fundamental objective of this work is to determine the extent to which unicast, end-to-end network measurement is capable of determining internal network losses. We show that it is not possible to determine internal losses based solely on unicast, end-to-end measurement.
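The non-identifiability claim can be seen on the smallest possible topology. In this hypothetical two-receiver tree (alpha is the shared root link's pass rate, beta1 and beta2 the leaf links'), a unicast probe only observes the product of the pass rates along its path, so distinct internal loss assignments yield identical end-to-end measurements:

```python
# Two-receiver tree: unicast end-to-end success probabilities are the
# products alpha * beta_k, so only the products are identifiable -- a toy
# sketch of the abstract's claim, not the paper's analysis.
def end_to_end(alpha, beta1, beta2):
    return (alpha * beta1, alpha * beta2)

scenario_a = end_to_end(0.90, 0.80, 0.60)   # losses mostly on the leaves
scenario_b = end_to_end(0.72, 1.00, 0.75)   # losses mostly on the root link
print(scenario_a, scenario_b)               # identical up to float rounding
```

Both scenarios produce end-to-end success probabilities (0.72, 0.54), so no amount of unicast probing can tell them apart.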
Objective Bayesian analysis of spatially correlated data
 Journal of the American Statistical Association
, 2001
"... Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at ..."
Abstract

Cited by 96 (10 self)
Bayesian Analysis of Molecular Evolution using MrBayes
, 2004
"... Stochastic models of evolution play a prominent role in the field of molecular evolution; they are used in applications as far ranging as phylogeny estimation, uncovering the pattern of DNA substitution, identifying amino acids under directional selection, and in inferring the history of a populatio ..."
Abstract

Cited by 23 (0 self)
Stochastic models of evolution play a prominent role in the field of molecular evolution; they are used in applications as far ranging as phylogeny estimation, uncovering the pattern of DNA substitution, identifying amino acids under directional selection, and in inferring the history of a population using
The Strength of Statistical Evidence for Composite Hypotheses: Inference to the Best Explanation
, 2010
"... A general function to quantify the weight of evidence in a sample of data for one hypothesis over another is derived from the law of likelihood and from a statistical formalization of inference to the best explanation. For a fixed parameter of interest, the resulting weight of evidence that favors o ..."
Abstract

Cited by 19 (12 self)
A general function to quantify the weight of evidence in a sample of data for one hypothesis over another is derived from the law of likelihood and from a statistical formalization of inference to the best explanation. For a fixed parameter of interest, the resulting weight of evidence that favors one composite hypothesis over another is the likelihood ratio using the parameter value consistent with each hypothesis that maximizes the likelihood function over the parameter of interest. Since the weight of evidence is generally only known up to a nuisance parameter, it is approximated by replacing the likelihood function with a reduced likelihood function on the interest parameter space. Unlike the Bayes factor and unlike the p-value under interpretations that extend its scope, the weight of evidence is coherent in the sense that it cannot support a hypothesis over any hypothesis that it entails. Further, when comparing the hypothesis that the parameter lies outside a nontrivial interval to the hypothesis that it lies within the interval, the proposed method of weighing evidence almost always asymptotically favors the correct hypothesis.
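The ratio described above can be sketched in a toy model with no nuisance parameters. Assuming observations y_i ~ N(theta, 1) and the composite hypotheses theta > 0 versus theta <= 0 (the model, sample, and hypotheses are illustrative choices, not from the paper), the weight of evidence is the likelihood ratio evaluated at the maximizer under each hypothesis:

```python
# Weight of evidence for composite hypotheses H1: theta > 0 vs H0: theta <= 0
# under y_i ~ N(theta, 1): the likelihood ratio at the constrained maximizers.
import math

def log_lik(theta, ys):
    return sum(-0.5 * (y - theta) ** 2 - 0.5 * math.log(2 * math.pi) for y in ys)

ys = [0.8, 1.4, 0.1, 1.1, 0.6]           # hypothetical sample, ybar = 0.8
ybar = sum(ys) / len(ys)                 # unconstrained MLE
theta_h1 = max(ybar, 0.0)                # maximizer over theta > 0 (closure)
theta_h0 = min(ybar, 0.0)                # maximizer over theta <= 0
weight = math.exp(log_lik(theta_h1, ys) - log_lik(theta_h0, ys))
print(weight)                            # > 1: evidence favors H1
```

With a positive sample mean, the hypothesis containing the unconstrained MLE attains the full likelihood, so the weight exceeds one.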
Likelihood based hierarchical clustering
 IEEE Trans. on Signal Processing
, 2004
"... This paper develops a new method for hierarchical clustering. Unlike other existing clustering schemes, our method is based on a generative, treestructured model that represents relationships between the objects to be clustered, rather than directly modeling properties of objects themselves. In cer ..."
Abstract

Cited by 18 (5 self)
This paper develops a new method for hierarchical clustering. Unlike other existing clustering schemes, our method is based on a generative, tree-structured model that represents relationships between the objects to be clustered, rather than directly modeling properties of objects themselves. In certain problems, this generative model naturally captures the physical mechanisms responsible for relationships among objects, for example, in certain evolutionary tree problems in genetics and communication network topology identification. The paper examines the networking problem in some detail, to illustrate the new clustering method. More broadly, the generative model may not reflect actual physical mechanisms, but it nonetheless provides a means for dealing with errors in the similarity matrix, simultaneously promoting two desirable features in clustering: intraclass similarity and interclass dissimilarity.
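The agglomerative loop underlying hierarchical clustering can be sketched generically. The skeleton below uses a plain average-linkage merge score on a similarity matrix; the paper's contribution is to replace that heuristic score with a generative tree-model likelihood, which this sketch does not implement:

```python
# Generic bottom-up (agglomerative) clustering on a similarity matrix:
# repeatedly merge the most similar pair of clusters, scoring a merged
# cluster against the rest by the average of the old similarities.
def agglomerate(S):
    """Merge until one cluster remains; return the sequence of merged pairs."""
    clusters = [(i,) for i in range(len(S))]
    sim = {frozenset(((i,), (j,))): S[i][j]
           for i in range(len(S)) for j in range(i + 1, len(S))}
    merges = []
    while len(clusters) > 1:
        a, b = max(((x, y) for i, x in enumerate(clusters)
                    for y in clusters[i + 1:]),
                   key=lambda p: sim[frozenset(p)])
        clusters.remove(a)
        clusters.remove(b)
        merged = a + b
        for c in clusters:  # average linkage: mean of the two old similarities
            sim[frozenset((merged, c))] = 0.5 * (sim[frozenset((a, c))]
                                                 + sim[frozenset((b, c))])
        clusters.append(merged)
        merges.append((a, b))
    return merges

# Objects 0,1 highly similar; 2,3 similar; the two groups dissimilar.
S = [[0, 9, 1, 1],
     [9, 0, 1, 1],
     [1, 1, 0, 8],
     [1, 1, 8, 0]]
print(agglomerate(S))   # merges (0,1) first, then (2,3), then the two groups
```

The recorded merge sequence is exactly the binary tree (dendrogram) that a hierarchical method outputs; a likelihood-based method changes only how each candidate merge is scored.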
ZERUBIA J.: Estimation of blur and noise parameters in remote sensing
 In Proc. of Int. Conf. on Acoustics, Speech and Signal Processing (2002)
"... ..."
(Show Context)
ROBUST PRIORS IN NONLINEAR PANEL DATA MODELS
"... Many approaches to estimation of panel models are based on an average or integrated likelihood that assigns weights to di erent values of the individual e ects. Fixed e ects, random e ects, and Bayesian approaches all fall in this category. We provide a characterization of the class of weights (or p ..."
Abstract

Cited by 16 (0 self)
Many approaches to estimation of panel models are based on an average or integrated likelihood that assigns weights to different values of the individual effects. Fixed effects, random effects, and Bayesian approaches all fall in this category. We provide a characterization of the class of weights (or priors) that produce estimators that are first-order unbiased. We show that such bias reducing weights will depend on the data in general unless an orthogonal reparameterization or an essentially equivalent condition is available. Two intuitively appealing weighting schemes are discussed. We argue that asymptotically valid confidence intervals can be read from the posterior distribution of the common parameters when N and T grow at the same rate. Next, we show that random effects estimators are not bias reducing in general and discuss important exceptions. Moreover, the bias depends on the Kullback-Leibler distance between the population distribution of the effects and its best approximation in the random effects family. Finally, we show that in general standard random effects estimation of marginal effects is inconsistent for large T, whereas the posterior mean of the marginal effect is large-T consistent, and we provide conditions for bias reduction. Some examples and Monte Carlo experiments illustrate the results.
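The integrated likelihood this family of approaches starts from can be checked in a toy model. For a single observation y ~ N(lambda, 1) with a N(0, tau^2) weight on the individual effect lambda (model and numbers are illustrative, not from the paper), integrating lambda out has the closed form N(y; 0, 1 + tau^2), and simple midpoint quadrature reproduces it:

```python
# Toy integrated likelihood: integrate the N(lambda, 1) likelihood against a
# N(0, tau^2) weight on the individual effect and compare with the closed
# form N(y; 0, 1 + tau^2).
import math

def normal_pdf(x, mean, var):
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def integrated_lik(y, tau2, grid=20_000, lo=-50.0, hi=50.0):
    h = (hi - lo) / grid   # midpoint rule over a wide truncation interval
    return sum(normal_pdf(y, lam, 1.0) * normal_pdf(lam, 0.0, tau2) * h
               for lam in (lo + (k + 0.5) * h for k in range(grid)))

y, tau2 = 1.3, 2.0
numeric = integrated_lik(y, tau2)
closed = normal_pdf(y, 0.0, 1.0 + tau2)
print(numeric, closed)   # agree to several decimal places
```

With data-dependent or non-Gaussian weights (the paper's bias-reducing priors) the closed form disappears, but the same integral is what gets computed.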
Optimality and computations for relative surprise inferences
, 2005
"... Relative surprise inferences are based on how beliefs change from a priori to a posteriori. These inferences can be seen to be based on the posterior distribution of the integrated likelihood and, as such, are invariant under relabellings of the parameter of interest. In this paper we demonstrate th ..."
Abstract

Cited by 8 (6 self)
Relative surprise inferences are based on how beliefs change from a priori to a posteriori. These inferences can be seen to be based on the posterior distribution of the integrated likelihood and, as such, are invariant under relabellings of the parameter of interest. In this paper we demonstrate that relative surprise inferences possess an optimality property. Further, computational techniques are developed for implementing these inferences that are applicable whenever we have algorithms to sample from the prior and posterior distributions.
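The last sentence is concrete enough to sketch: given only samplers for the prior and the posterior, the change in belief that the parameter lies in a set is a ratio of two Monte Carlo estimates. A toy conjugate-normal example (illustrative, not the paper's algorithm): theta ~ N(0, 1) a priori, one observation y = 1.5 with y | theta ~ N(theta, 1), so the posterior is N(0.75, 0.5) by conjugacy.

```python
# Relative belief in the interval A, estimated purely by sampling from the
# prior and the posterior -- the kind of computation the abstract says is
# sufficient for relative surprise inferences.
import random

random.seed(1)
n = 200_000
A = (0.5, 1.5)

prior_draws = (random.gauss(0.0, 1.0) for _ in range(n))          # theta ~ N(0, 1)
post_draws = (random.gauss(0.75, 0.5 ** 0.5) for _ in range(n))   # N(0.75, 0.5)

prior_prob = sum(A[0] <= t <= A[1] for t in prior_draws) / n
post_prob = sum(A[0] <= t <= A[1] for t in post_draws) / n
ratio = post_prob / prior_prob
print(ratio)   # > 1: observing y = 1.5 increases belief that theta lies in A
```

Because the ratio compares probabilities of the same set under two distributions, it is unchanged by any relabelling of the parameter, which is the invariance the abstract emphasizes.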