Results 1 - 10 of 19,917

Learning to Estimate Traffic Matrix for M2M Management

by Rossi Kamal, Mi Jung Choi, Choong Seon Hong
"... In this paper, first we have formulated M2M service provider’s business goal as parameter estimation problem. Then, we have derived Bayesian lower bound based on Van Tree Inequality to estimate traffic matrix for M2M management. At last, we have performed numerical analysis to show how devised mecha ..."
Abstract
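As a rough illustration of the kind of Bayesian lower bound the abstract refers to, the sketch below computes the Van Trees (Bayesian Cramér-Rao) bound for a toy scalar Gaussian model with a Gaussian prior; the model and every parameter value are assumptions for illustration, not the paper's M2M traffic setup.

```python
import numpy as np

# Hypothetical illustration of a Van Trees (Bayesian Cramer-Rao) bound on a toy
# scalar Gaussian model with a Gaussian prior -- not the paper's M2M traffic model.
rng = np.random.default_rng(0)
n, sigma, sigma0, mu0 = 20, 2.0, 1.5, 5.0      # assumed toy parameters

# Van Trees bound: 1 / (expected Fisher information + prior information)
bound = 1.0 / (n / sigma**2 + 1.0 / sigma0**2)

# Monte Carlo MSE of the posterior-mean estimator (it attains the bound here)
trials = 50_000
theta = rng.normal(mu0, sigma0, trials)                    # true parameter drawn from the prior
y_bar = theta + rng.normal(0, sigma / np.sqrt(n), trials)  # sample mean of n observations
w = (n / sigma**2) / (n / sigma**2 + 1.0 / sigma0**2)
theta_hat = w * y_bar + (1 - w) * mu0                      # posterior mean
mse = np.mean((theta_hat - theta) ** 2)

print(f"Van Trees bound: {bound:.4f}   posterior-mean MSE: {mse:.4f}")
```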

A Heteroskedasticity-Consistent Covariance Matrix Estimator And A Direct Test For Heteroskedasticity

by Halbert White, 1980
"... This paper presents a parameter covariance matrix estimator which is consistent even when the disturbances of a linear regression model are heteroskedastic. This estimator does not depend on a formal model of the structure of the heteroskedasticity. By comparing the elements of the new estimator ..."
Abstract - Cited by 3211 (5 self)
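A minimal NumPy sketch of the HC0 ("White") form of this estimator, (X'X)⁻¹ X'diag(e²)X (X'X)⁻¹, applied to simulated heteroskedastic data; the data-generating process is an arbitrary assumption.

```python
import numpy as np

# Heteroskedasticity-consistent covariance for OLS, HC0 form:
# (X'X)^-1 X' diag(e_i^2) X (X'X)^-1
def hc0_covariance(X, y):
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y              # OLS coefficients
    resid = y - X @ beta                  # residuals e_i
    meat = X.T @ (resid[:, None] ** 2 * X)  # X' diag(e^2) X
    return beta, XtX_inv @ meat @ XtX_inv

# Toy data whose error variance grows with the regressor (assumed for illustration)
rng = np.random.default_rng(1)
n = 500
x = rng.uniform(0, 5, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(0, 0.5 + 0.5 * x)

beta, V = hc0_covariance(X, y)
print("coefficients:", beta, "robust SEs:", np.sqrt(np.diag(V)))
```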

On kinematic waves: II. A theory of traffic flow on long crowded roads

by M. J. Lighthill, G. B. Whitham - Proc. Royal Society A 229, 1955
"... This paper uses the method of kinematic waves, developed in part I, but may be read independently. A functional relationship between flow and concentration for traffic on crowded arterial roads has been postulated for some time, and has experimental backing (? 2). From this a theory of the propagati ..."
Abstract - Cited by 496 (2 self) - Add to MetaCart
of the propagation of changes in traffic distribution along these roads may be deduced (??2, 3). The theory is applied (?4) to the problem of estimating how a 'hump', or region of increased concentration, will move along a crowded main road. It is suggested that it will move slightly slower than the mean
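The kinematic-wave theory summarized above can be simulated numerically. Below is a minimal sketch of the LWR model with a Greenshields flow-concentration curve, solved by a Godunov / cell-transmission scheme; the flux curve and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# LWR kinematic-wave model with Greenshields flux q(rho) = v_f * rho * (1 - rho/rho_max),
# advanced with a Godunov / cell-transmission scheme. All values are illustrative.
v_f, rho_max = 30.0, 0.2          # free-flow speed (m/s), jam concentration (veh/m)
rho_c = rho_max / 2               # critical concentration where the flow peaks

def q(rho):
    return v_f * rho * (1 - rho / rho_max)

def godunov_flux(rho_L, rho_R):
    demand = np.where(rho_L <= rho_c, q(rho_L), q(rho_c))   # upstream sending flow
    supply = np.where(rho_R <= rho_c, q(rho_c), q(rho_R))   # downstream receiving flow
    return np.minimum(demand, supply)

# A 'hump' of increased concentration on an otherwise lightly loaded road
nx, dx, dt, steps = 600, 10.0, 0.25, 800      # CFL = v_f * dt / dx = 0.75 < 1
rho = np.full(nx, 0.03)
rho[150:200] = 0.08

for _ in range(steps):
    F = godunov_flux(rho[:-1], rho[1:])       # fluxes at cell interfaces
    rho[1:-1] -= dt / dx * (F[1:] - F[:-1])   # conservative update, boundary cells held fixed

print("the hump's peak has drifted downstream to cell", int(np.argmax(rho)))
```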

How much should we trust differences-in-differences estimates?

by Marianne Bertrand, Esther Duflo, Sendhil Mullainathan, 2003
"... Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on femal ..."
Abstract - Cited by 828 (1 self)
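One standard remedy the authors evaluate is clustering standard errors at the state level so within-state serial correlation is not ignored. The sketch below implements a plain cluster-robust (CR0) sandwich covariance on a simulated placebo-law panel; the data-generating process is a stand-in, not the paper's data.

```python
import numpy as np

# Cluster-robust (CR0) sandwich covariance: (X'X)^-1 [sum_g X_g' e_g e_g' X_g] (X'X)^-1
def cluster_robust_cov(X, y, groups):
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    e = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):
        score = X[groups == g].T @ e[groups == g]   # X_g' e_g for cluster g
        meat += np.outer(score, score)
    return beta, XtX_inv @ meat @ XtX_inv

rng = np.random.default_rng(2)
n_states, n_years = 50, 20
state = np.repeat(np.arange(n_states), n_years)
treated = np.repeat(rng.integers(0, 2, n_states), n_years)   # placebo "law" states
post = np.tile(np.arange(n_years) >= 10, n_states)           # post-law years
law = treated * post
# Outcome with a state effect and no true law effect
y = np.repeat(rng.normal(size=n_states), n_years) + rng.normal(size=n_states * n_years)
X = np.column_stack([np.ones_like(y), law])

beta, V = cluster_robust_cov(X, y, state)
print("placebo effect:", round(beta[1], 4), "clustered SE:", round(float(np.sqrt(V[1, 1])), 4))
```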

The Dantzig selector: statistical estimation when p is much larger than n

by Emmanuel Candes, Terence Tao, 2005
"... In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Ax + z, where x ∈ R p is a parameter vector of interest, A is a data matrix with possibly far fewer rows than columns, n ≪ ..."
Abstract - Cited by 879 (14 self)
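The Dantzig selector can be posed as a linear program: minimize ‖x‖₁ subject to ‖Aᵀ(y - Ax)‖∞ ≤ λ. A small sketch of that LP with SciPy on toy sparse-recovery data; the sizes, noise level, and λ are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Dantzig selector as an LP: split |x| into auxiliary variables u >= 0 and minimize sum(u)
# subject to -u <= x <= u and -lam <= A^T (y - A x) <= lam.
rng = np.random.default_rng(3)
n, p, k, lam = 40, 100, 4, 0.2                 # n << p, k-sparse truth (assumed values)
A = rng.normal(size=(n, p)) / np.sqrt(n)
x_true = np.zeros(p); x_true[:k] = [3, -2, 1.5, 2.5]
y = A @ x_true + 0.05 * rng.normal(size=n)

I = np.eye(p); G = A.T @ A; Aty = A.T @ y
c = np.concatenate([np.zeros(p), np.ones(p)])  # objective: sum of u
A_ub = np.block([[ I, -I],                     #  x - u <= 0
                 [-I, -I],                     # -x - u <= 0
                 [ G, np.zeros((p, p))],       #  A^T A x <= lam + A^T y
                 [-G, np.zeros((p, p))]])      # -A^T A x <= lam - A^T y
b_ub = np.concatenate([np.zeros(2 * p), lam + Aty, lam - Aty])
bounds = [(None, None)] * p + [(0, None)] * p  # x free, u >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x_hat = res.x[:p]
print("true support:     ", np.arange(k))
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-3))
```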

Missing value estimation methods for DNA microarrays

by Olga Troyanskaya, Michael Cantor, Gavin Sherlock, Pat Brown, Trevor Hastie, Robert Tibshirani, David Botstein, Russ B. Altman, 2001
"... Motivation: Gene expression microarray experiments can generate data sets with multiple missing expression values. Unfortunately, many algorithms for gene expression analysis require a complete matrix of gene array values as input. For example, methods such as hierarchical clustering and K-means clu ..."
Abstract - Cited by 477 (24 self)
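The paper's KNNimpute method fills a gene's missing entries from its most similar genes. The sketch below illustrates the same idea with scikit-learn's KNNImputer on a synthetic low-rank expression matrix; it follows the same idea but is not the paper's exact weighting scheme.

```python
import numpy as np
from sklearn.impute import KNNImputer

# KNN-style imputation: each gene's (row's) missing entries are filled from the
# K most similar genes. The expression matrix is synthetic (low rank plus noise).
rng = np.random.default_rng(4)
genes, arrays = 200, 12
W, H = rng.normal(size=(genes, 3)), rng.normal(size=(3, arrays))
expr = W @ H + 0.3 * rng.normal(size=(genes, arrays))

mask = rng.random(expr.shape) < 0.05          # knock out ~5% of the entries
expr_missing = np.where(mask, np.nan, expr)

imputer = KNNImputer(n_neighbors=10, weights="distance")
expr_filled = imputer.fit_transform(expr_missing)

rmse = np.sqrt(np.mean((expr_filled[mask] - expr[mask]) ** 2))
print(f"imputation RMSE on the held-out entries: {rmse:.3f}")
```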

Stochastic Perturbation Theory

by G. W. Stewart, 1988
"... . In this paper classical matrix perturbation theory is approached from a probabilistic point of view. The perturbed quantity is approximated by a first-order perturbation expansion, in which the perturbation is assumed to be random. This permits the computation of statistics estimating the variatio ..."
Abstract - Cited by 907 (36 self)
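A small illustration of the idea: treat the perturbation as random, expand to first order, and compare the statistics of the linearized change with those of the exact change. The example below perturbs the largest eigenvalue of a symmetric matrix; the matrix and noise scale are arbitrary assumptions.

```python
import numpy as np

# First-order perturbation of the largest eigenvalue of a symmetric matrix A:
# lambda(A + E) - lambda(A) ~ v' E v, with E a small random symmetric perturbation.
rng = np.random.default_rng(5)
n, sigma, trials = 8, 1e-3, 5000

A = rng.normal(size=(n, n)); A = (A + A.T) / 2
lam, V = np.linalg.eigh(A)
v = V[:, -1]                                  # eigenvector of the largest eigenvalue

exact, first_order = [], []
for _ in range(trials):
    G = rng.normal(scale=sigma, size=(n, n))
    E = (G + G.T) / 2                         # random symmetric perturbation
    exact.append(np.linalg.eigvalsh(A + E)[-1] - lam[-1])
    first_order.append(v @ E @ v)

print("std of the exact eigenvalue change      :", np.std(exact))
print("std of the first-order predicted change :", np.std(first_order))
```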

High dimensional graphs and variable selection with the Lasso

by Nicolai Meinshausen, Peter Bühlmann - ANNALS OF STATISTICS, 2006
"... The pattern of zero entries in the inverse covariance matrix of a multivariate normal distribution corresponds to conditional independence restrictions between variables. Covariance selection aims at estimating those structural zeros from data. We show that neighborhood selection with the Lasso is a ..."
Abstract - Cited by 736 (22 self)
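Neighborhood selection reduces graph estimation to one Lasso regression per node, with nonzero coefficients read off as edges. A rough sketch with scikit-learn on data drawn from a known sparse precision matrix; the cross-validated penalty and the "or" rule for combining the two regressions per edge are simplifications of the paper's procedure.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Neighborhood selection: regress each variable on all others with the Lasso and
# keep an edge whenever either endpoint selects the other (the "or" rule).
rng = np.random.default_rng(6)
p, n = 8, 500

# Sparse precision matrix -> Gaussian sample with a known conditional-independence graph
Theta = np.eye(p)
for i, j in [(0, 1), (1, 2), (3, 4), (5, 7)]:
    Theta[i, j] = Theta[j, i] = 0.4
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Theta), size=n)

edges = set()
for j in range(p):
    others = [k for k in range(p) if k != j]
    coef = LassoCV(cv=5).fit(X[:, others], X[:, j]).coef_
    for k, c in zip(others, coef):
        if abs(c) > 1e-6:
            edges.add(tuple(sorted((j, k))))

print("estimated edges:", sorted(edges))
```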

New results in linear filtering and prediction theory

by R. E. Kalman, R. S. Bucy - TRANS. ASME, SER. D, J. BASIC ENG, 1961
"... A nonlinear differential equation of the Riccati type is derived for the covariance matrix of the optimal filtering error. The solution of this "variance equation " completely specifies the optimal filter for either finite or infinite smoothing intervals and stationary or nonstationary sta ..."
Abstract - Cited by 607 (0 self)
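In the scalar case the "variance equation" is an ordinary Riccati ODE that can be integrated directly. A minimal sketch with arbitrary illustrative parameters, compared against the analytic steady state:

```python
import numpy as np

# Scalar "variance equation" (Riccati ODE) for the error covariance P(t) of a
# continuous-time filter: dP/dt = 2 a P + q - c^2 P^2 / r,
# for dx = a x dt + dw (intensity q) observed through dy = c x dt + dv (intensity r).
a, c, q, r = -0.5, 1.0, 0.2, 0.1              # arbitrary illustrative values
P, dt, T = 1.0, 1e-3, 10.0

for _ in range(int(T / dt)):                  # forward-Euler integration of the Riccati ODE
    P += dt * (2 * a * P + q - (c**2) * P**2 / r)

P_inf = r * (a + np.sqrt(a**2 + q * c**2 / r)) / c**2   # analytic steady state
print(f"P(T) = {P:.6f}   analytic steady state = {P_inf:.6f}")
```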

LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares

by Christopher C. Paige, Michael A. Saunders - ACM Trans. Math. Software, 1982
"... An iterative method is given for solving Ax ~ffi b and minU Ax- b 112, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerica ..."
Abstract - Cited by 653 (21 self)
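SciPy ships an implementation of this algorithm as scipy.sparse.linalg.lsqr. A short usage sketch on a random sparse least-squares problem; the problem sizes, density, and tolerances are arbitrary choices.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

# Solve a random sparse least-squares problem min ||Ax - b||_2 with LSQR.
rng = np.random.default_rng(7)
m, n = 2000, 500
A = sparse_random(m, n, density=0.01, random_state=7, format="csr")
x_true = rng.normal(size=n)
b = A @ x_true + 1e-3 * rng.normal(size=m)

x, istop, itn, r1norm = lsqr(A, b, atol=1e-10, btol=1e-10)[:4]
print(f"stop flag {istop} after {itn} iterations, residual norm {r1norm:.2e}")
print("relative error vs x_true:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```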