Results 1 - 10 of 94,606
Development and Use of a Gold-Standard Data Set for Subjectivity Classifications, 1999
"... and improving intercoder reliability in discourse tagging using statistical techniques. Bias-corrected tags are formulated and successfully used to guide a revision of the coding manual and develop an automatic classifier. ..."
Cited by 126 (9 self)
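The snippet above concerns measuring and improving intercoder reliability. A common chance-corrected agreement statistic for two coders is Cohen's kappa; the sketch below is only an illustration of that statistic (the paper's own bias-corrected tagging procedure is more involved, and the subj/obj labels here are made up):

```python
from collections import Counter

def cohens_kappa(tags_a, tags_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    assert len(tags_a) == len(tags_b)
    n = len(tags_a)
    # Observed agreement: fraction of items both coders tag identically.
    observed = sum(a == b for a, b in zip(tags_a, tags_b)) / n
    # Expected agreement under independence, from each coder's label frequencies.
    freq_a, freq_b = Counter(tags_a), Counter(tags_b)
    expected = sum(freq_a[t] * freq_b.get(t, 0) for t in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical subjective/objective sentence tags from two annotators.
a = ["subj", "subj", "obj", "obj", "subj", "obj"]
b = ["subj", "obj", "obj", "obj", "subj", "obj"]
print(round(cohens_kappa(a, b), 3))  # → 0.667
```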
Evaluation of intensity-based 2D-3D spine image registration using clinical gold-standard data. In Gee, J.C., Maintz, J.B.A., Vannier, M.W., eds.: Proc. Second Int. Workshop on Biomedical Image Registration (WBIR 2003), Lecture Notes in Computer Science 2717, 2003
"... Abstract. In this paper, we evaluate the accuracy and robustness of intensity-based 2D-3D registration for six image similarity measures using clinical gold-standard spine image data from four patients. The gold-standard transformations are obtained using four bone-implanted fiducial markers. The thr ..."
Cited by 2 (1 self)
Estimating standard errors in finance panel data sets: comparing approaches. Review of Financial Studies, 2009
"... Abstract In both corporate finance and asset pricing empirical work, researchers are often confronted with panel data. In these data sets, the residuals may be correlated across firms and across time, and OLS standard errors can be biased. Historically, the two literatures have used different solut ..."
Cited by 890 (7 self)
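The comparison in this paper centers on cluster-robust ("clustered") standard errors for panel data. A minimal numpy sketch of the one-way sandwich estimator, run on synthetic data with a firm-level residual shock; no small-sample correction is applied, and all variable names and parameter values are illustrative:

```python
import numpy as np

def cluster_robust_se(X, y, clusters):
    """OLS with one-way cluster-robust standard errors:
    V = (X'X)^-1 (sum_g s_g s_g') (X'X)^-1, s_g = X_g' u_g per cluster."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(clusters):
        score = X[clusters == g].T @ resid[clusters == g]  # within-cluster score
        meat += np.outer(score, score)
    cov = XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(cov))

# Synthetic panel: 50 firms x 10 years, residuals correlated within firm.
rng = np.random.default_rng(0)
n_firms, T = 50, 10
firms = np.repeat(np.arange(n_firms), T)
firm_shock = rng.normal(size=n_firms)[firms]
x = rng.normal(size=n_firms * T)
y = 1.0 + 2.0 * x + firm_shock + rng.normal(size=n_firms * T)
X = np.column_stack([np.ones_like(x), x])
beta, se = cluster_robust_se(X, y, firms)
print(beta, se)
```

Petersen's point is that with a firm effect in the residual, these clustered standard errors are markedly larger (and correct) compared with plain OLS standard errors.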
Power-law distributions in empirical data. ISSN 0036-1445. doi:10.1137/070710111. URL http://dx.doi.org/10.1137/070710111, 2009
"... Power-law distributions occur in many situations of scientific interest and have significant consequences for our understanding of natural and man-made phenomena. Unfortunately, the empirical detection and characterization of power laws is made difficult by the large fluctuations that occur in the t ..."
Cited by 607 (7 self)
"... in the tail of the distribution. In particular, standard methods such as least-squares fitting are known to produce systematically biased estimates of parameters for power-law distributions and should not be used in most circumstances. Here we describe statistical techniques for making accurate parameter ..."
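The passage warns against least-squares fitting; the estimator the authors advocate instead is the maximum-likelihood estimate, which for continuous data in the tail x >= x_min is alpha = 1 + n / sum(ln(x_i / x_min)). A small sketch on synthetic data drawn by inverse-CDF sampling (the sample size and seed are arbitrary):

```python
import math
import random

def powerlaw_alpha_mle(xs, xmin):
    """Continuous MLE for the power-law exponent (Clauset-Shalizi-Newman):
    alpha = 1 + n / sum(ln(x_i / xmin)) over the tail x_i >= xmin."""
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic power-law sample with alpha = 2.5 via inverse-CDF sampling:
# if u ~ Uniform(0,1), then xmin * (1-u)^(-1/(alpha-1)) is power-law distributed.
random.seed(1)
alpha_true, xmin = 2.5, 1.0
xs = [xmin * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
      for _ in range(100_000)]
print(powerlaw_alpha_mle(xs, xmin))  # close to 2.5 for large n
```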
Calibrating noise to sensitivity in private data analysis. In Proceedings of the 3rd Theory of Cryptography Conference, 2006
"... Abstract. We continue a line of research initiated in [10, 11] on privacy-preserving statistical databases. Consider a trusted server that holds a database of sensitive information. Given a query function f mapping databases to reals, the so-called true answer is the result of applying f to the datab ..."
Cited by 649 (60 self)
"... the ith row of the database and g maps database rows to [0, 1]. We extend the study to general functions f, proving that privacy can be preserved by calibrating the standard deviation of the noise according to the sensitivity of the function f. Roughly speaking, this is the amount that any single ..."
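The concrete mechanism in this paper adds Laplace noise with scale Δf/ε, where Δf is the sensitivity of f (the most any single row can change the answer). A minimal numpy sketch for a counting query, whose sensitivity is 1; the toy database and parameter values are made up:

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng):
    """epsilon-differential privacy via additive Laplace noise
    with scale = sensitivity / epsilon."""
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(42)
db = [0, 1, 1, 0, 1, 1, 1]            # one sensitive bit per database row
true_count = sum(db)                   # f(db) = 5
# A counting query has sensitivity 1: changing one row moves f by at most 1.
noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
print(noisy)
```

Smaller ε means stronger privacy and larger noise; the noisy answer is unbiased, so repeated independent releases would average back toward the true count (which is exactly why the privacy budget must cover all queries).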
Initial Conditions and Moment Restrictions in Dynamic Panel Data Models. Journal of Econometrics, 1998
"... Estimation of the dynamic error components model is considered using two alternative linear estimators that are designed to improve the properties of the standard first-differenced GMM estimator. Both estimators require restrictions on the initial conditions process. Asymptotic efficiency comparisons ..."
Cited by 2393 (16 self)
Analysis of relative gene expression data using real-time quantitative PCR and the 2^−ΔΔCT method. Methods 25, 2001
"... of the target gene relative to some reference group, such as an untreated control or a sample at time zero in a time ... The two most commonly used methods to analyze data from real-time, quantitative PCR experiments are absolute quantification and relative quantification. Absolute quantification deter- ..."
Cited by 2666 (6 self)
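The 2^−ΔΔCT method in the title reduces to simple arithmetic on cycle-threshold (CT) values: normalize the target against a reference gene (ΔCT), difference treated against control (ΔΔCT), and exponentiate. A sketch with made-up CT numbers:

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """2^-ΔΔCT: ΔCT = CT(target) - CT(reference gene),
    ΔΔCT = ΔCT(treated) - ΔCT(control), fold change = 2 ** (-ΔΔCT)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Target crosses threshold 2 cycles earlier (relative to the reference gene)
# in the treated sample: ΔΔCT = (22-18) - (24-18) = -2, i.e. ~4-fold up.
print(fold_change_ddct(22.0, 18.0, 24.0, 18.0))  # → 4.0
```

The method assumes roughly equal (near-100%) amplification efficiencies for the target and reference genes; when that fails, an efficiency-corrected model is needed instead.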
Exploration, normalization, and summaries of high density oligonucleotide array probe level data. Biostatistics, 2003
"... SUMMARY In this paper we report exploratory analyses of high-density oligonucleotide array data from the Affymetrix GeneChip® system with the objective of improving upon currently used measures of gene expression. Our analyses make use of three data sets: a small experimental study consisting of f ..."
Cited by 854 (33 self)
"... and (for MBEI and RMA) model fit. Finally, we evaluate the algorithms in terms of their ability to detect known levels of differential expression using the spike-in data. We conclude that there is no obvious downside to using RMA and attaching a standard error (SE) to this quantity using a linear model ..."
Data cube: A relational aggregation operator generalizing group-by, cross-tab, and sub-totals, 1996
"... Abstract. Data analysis applications typically aggregate data across many dimensions looking for anomalies or unusual patterns. The SQL aggregate functions and the GROUP BY operator produce zero-dimensional or one-dimensional aggregates. Applications need the N-dimensional generalization of these op ..."
Cited by 860 (11 self)
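The N-dimensional generalization the abstract describes, the CUBE operator, aggregates the measure over every subset of the grouping dimensions, using an ALL value for each rolled-up dimension. The paper defines CUBE as a SQL operator; the sketch below is an in-memory Python equivalent, and the car-sales rows are illustrative:

```python
from collections import defaultdict
from itertools import combinations

def data_cube(rows, dims, measure):
    """CUBE over `dims`: sum `measure` for every subset of the dimensions,
    with 'ALL' standing in for each rolled-up dimension."""
    cube = defaultdict(int)
    # Every subset of dims, from () (grand total) to the full GROUP BY.
    for subset in (c for r in range(len(dims) + 1)
                   for c in combinations(dims, r)):
        for row in rows:
            key = tuple(row[d] if d in subset else "ALL" for d in dims)
            cube[key] += row[measure]
    return dict(cube)

sales = [
    {"model": "Chevy", "year": 1994, "units": 10},
    {"model": "Chevy", "year": 1995, "units": 20},
    {"model": "Ford",  "year": 1994, "units": 5},
]
cube = data_cube(sales, dims=("model", "year"), measure="units")
print(cube[("ALL", "ALL")])    # grand total → 35
print(cube[("Chevy", "ALL")])  # per-model sub-total → 30
```

With N dimensions this materializes all 2^N group-bys, which is exactly why the paper also discusses computing and storing cubes efficiently rather than naively, as here.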
Sesame: A Generic Architecture for Storing and Querying RDF and RDF Schema, 2002
"... RDF and RDF Schema are two W3C standards aimed at enriching the Web with machine-processable semantic data. ..."
Cited by 543 (11 self)