Results 1 - 10 of 37,600
Optimal Estimation
"... The first quest for optimal estimation by Fisher, [2], Cramer, Rao and others, [1], dates back to over half a century and has changed remarkably little. The covariance of the estimated parameters was taken as the quality measure of estimators, for which the main result, the Cramer-Rao inequality, se ..."
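For reference, the Cramér-Rao inequality the snippet names can be stated as follows (standard textbook notation, not necessarily this paper's):

```latex
% Cramér-Rao inequality: for an unbiased estimator \hat\theta of \theta,
% the covariance dominates the inverse Fisher information (PSD ordering).
\operatorname{Cov}_\theta\bigl(\hat\theta\bigr) \succeq I(\theta)^{-1},
\qquad
I(\theta) = \mathbb{E}_\theta\!\left[
  \nabla_\theta \log f(X;\theta)\,
  \nabla_\theta \log f(X;\theta)^{\top}
\right]
```

Here \(\succeq\) denotes the positive-semidefinite ordering and \(I(\theta)\) is the Fisher information of the model \(f(x;\theta)\).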
Unrealistic optimism about future life events.
- Journal of Personality and Social Psychology, 1980
"... Two studies investigated the tendency of people to be unrealistically optimistic about future life events. In Study 1, 258 college students estimated how much their own chances of experiencing 42 events differed from the chances of their classmates. Overall, they rated their own chances to be above ..."
Cited by 535 (0 self)
Estimating the Support of a High-Dimensional Distribution
1999
"... Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S is bounded by some a priori specified between 0 and 1. We propo ..."
Cited by 783 (29 self)
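This is the one-class SVM formulation. A minimal sketch using scikit-learn's OneClassSVM, where the parameter nu plays the role of the a priori bound between 0 and 1 (the data below is synthetic, for illustration only):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(500, 2))   # samples from P
X_test = np.vstack([rng.normal(0, 1, (95, 2)),            # mostly inliers
                    rng.uniform(-6, 6, (5, 2))])          # a few outliers

# nu upper-bounds the fraction of training points allowed outside S
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_train)

pred = clf.predict(X_test)   # +1 = inside the estimated region S, -1 = outside
print("points flagged as outside S:", int(np.sum(pred == -1)))
```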
Pegasos: Primal Estimated sub-gradient solver for SVM
"... We describe and analyze a simple and effective stochastic sub-gradient descent algorithm for solving the optimization problem cast by Support Vector Machines (SVM). We prove that the number of iterations required to obtain a solution of accuracy ɛ is Õ(1/ɛ), where each iteration operates on a singl ..."
Cited by 542 (20 self)
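The single-example Pegasos update is simple enough to sketch directly. A minimal NumPy version of the primal sub-gradient step described in the abstract (omitting the optional projection step; the data is synthetic):

```python
import numpy as np

def pegasos(X, y, lam=0.1, n_iters=10_000, seed=0):
    """Minimal Pegasos: stochastic sub-gradient descent on the primal SVM
    objective. X: (n, d) features, y: (n,) labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)            # pick one example uniformly at random
        eta = 1.0 / (lam * t)          # step-size schedule from the paper
        if y[i] * X[i] @ w < 1:        # hinge loss active: margin violation
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                          # only the regularizer contributes
            w = (1 - eta * lam) * w
    return w

# toy usage on linearly separable data
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.sign(X @ np.array([2.0, -1.0]))
w = pegasos(X, y)
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```

Each iteration touches a single example, which is what makes the Õ(1/ɛ) iteration bound quoted in the abstract translate into a fast overall runtime.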
Wattch: A Framework for Architectural-Level Power Analysis and Optimizations
- In Proceedings of the 27th Annual International Symposium on Computer Architecture, 2000
"... Power dissipation and thermal issues are increasingly significant in modern processors. As a result, it is crucial that power/performance tradeoffs be made more visible to chip architects and even compiler writers, in addition to circuit designers. Most existing power analysis tools achieve high ..."
Abstract
-
Cited by 1320 (43 self)
- Add to MetaCart
high accuracy by calculating power estimates for designs only after layout or floorplanning are complete In addition to being available only late in the design process, such tools are often quite slow, which compounds the difficulty of running them for a large space of design possibilities.
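Wattch-style tools estimate power at the architectural level from parameterized activity and capacitance models rather than from post-layout data. A toy sketch of that style of per-unit estimate (every constant below is an invented placeholder, not a value from Wattch's models):

```python
# Toy activity-based dynamic power estimate: P = C * Vdd^2 * f * activity
# per structure, summed over the modeled units.
VDD = 1.0        # supply voltage (V)    -- placeholder value
FREQ = 2.0e9     # clock frequency (Hz)  -- placeholder value

# (effective capacitance in farads, per-cycle activity factor) -- invented
units = {
    "icache":  (1.2e-10, 0.95),
    "dcache":  (1.5e-10, 0.40),
    "alu":     (0.6e-10, 0.70),
    "regfile": (0.8e-10, 0.85),
}

total_power = sum(c * VDD**2 * FREQ * a for c, a in units.values())
print(f"estimated dynamic power: {total_power:.2f} W")
```

Because the inputs are per-cycle activity counts from a simulator rather than layout data, estimates of this kind are available early in the design process, which is the gap the paper targets.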
On the optimality of the simple Bayesian classifier under zero-one loss
- MACHINE LEARNING, 1997
"... The simple Bayesian classifier is known to be optimal when attributes are independent given the class, but the question of whether other sufficient conditions for its optimality exist has so far not been explored. Empirical results showing that it performs surprisingly well in many domains containin ..."
Abstract
-
Cited by 818 (27 self)
- Add to MetaCart
containing clear attribute dependences suggest that the answer to this question may be positive. This article shows that, although the Bayesian classifier’s probability estimates are only optimal under quadratic loss if the independence assumption holds, the classifier itself can be optimal under zero
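The gap the abstract points to, between calibrated probabilities (quadratic loss) and correct decisions (zero-one loss), is easy to see numerically (the probabilities below are invented for illustration):

```python
# Zero-one loss only cares about which class has the larger posterior,
# not about how accurate the posterior estimate itself is.
p_true = 0.60   # true P(class = + | x)             -- invented value
p_nb = 0.98     # naive Bayes estimate of the same  -- invented, badly calibrated

squared_error = (p_nb - p_true) ** 2          # large under quadratic loss
same_decision = (p_nb > 0.5) == (p_true > 0.5)

print(f"quadratic loss on the probability estimate: {squared_error:.3f}")
print(f"decision unchanged, so zero-one loss unaffected: {same_decision}")
```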
Estimating the number of clusters in a dataset via the Gap statistic
2000
"... We propose a method (the \Gap statistic") for estimating the number of clusters (groups) in a set of data. The technique uses the output of any clustering algorithm (e.g. k-means or hierarchical), comparing the change in within cluster dispersion to that expected under an appropriate reference ..."
Abstract
-
Cited by 502 (1 self)
- Add to MetaCart
principal components. 1 Introduction Cluster analysis is an important tool for \unsupervised" learning| the problem of nding groups in data without the help of a response variable. A major challenge in cluster analysis is estimation of the optimal number of \clusters". Figure 1 (top right) shows
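A compact sketch of the procedure, using k-means inertia as the within-cluster dispersion W_k and a uniform reference over the data's bounding box (one of the reference choices discussed in the paper). For brevity it picks k by the largest gap rather than the paper's standard-error rule:

```python
import numpy as np
from sklearn.cluster import KMeans

def within_dispersion(X, k, seed=0):
    """Within-cluster dispersion W_k (here: k-means inertia)."""
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X).inertia_

def gap_statistic(X, k_max=8, n_refs=10, seed=0):
    """Gap(k) = mean over reference sets of log(W_k,ref) - log(W_k),
    with uniform reference data drawn over the bounding box of X."""
    rng = np.random.default_rng(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)
    gaps = []
    for k in range(1, k_max + 1):
        ref_logs = [np.log(within_dispersion(rng.uniform(lo, hi, X.shape), k))
                    for _ in range(n_refs)]
        gaps.append(np.mean(ref_logs) - np.log(within_dispersion(X, k)))
    return np.array(gaps)

# toy data with three well-separated clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, (50, 2)) for c in ((0, 0), (4, 0), (2, 4))])
gaps = gap_statistic(X)
print("estimated number of clusters:", int(np.argmax(gaps)) + 1)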
Fast and robust fixed-point algorithms for independent component analysis
- IEEE TRANS. NEURAL NETW, 1999
"... Independent component analysis (ICA) is a statistical method for transforming an observed multidimensional random vector into components that are statistically as independent from each other as possible. In this paper, we use a combination of two different approaches for linear ICA: Comon’s informat ..."
Abstract
-
Cited by 884 (34 self)
- Add to MetaCart
information-theoretic approach and the projection pursuit approach. Using maximum entropy approximations of differential entropy, we introduce a family of new contrast (objective) functions for ICA. These contrast functions enable both the estimation of the whole decomposition by minimizing mutual information
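A minimal usage sketch via scikit-learn's FastICA, which implements this fixed-point algorithm; "logcosh" corresponds to one contrast function from the maximum-entropy family the abstract describes (the mixed signals below are synthetic):

```python
import numpy as np
from sklearn.decomposition import FastICA

# two independent, non-Gaussian sources, linearly mixed
rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
S = np.column_stack([np.sign(np.sin(3 * t)),     # square wave
                     rng.laplace(size=t.size)])  # heavy-tailed noise
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])                       # mixing matrix
X = S @ A.T                                      # observed mixtures

ica = FastICA(n_components=2, fun="logcosh", random_state=0)
S_est = ica.fit_transform(X)                     # estimated independent sources
print("estimated sources shape:", S_est.shape)
```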
High dimensional graphs and variable selection with the Lasso
- ANNALS OF STATISTICS, 2006
"... The pattern of zero entries in the inverse covariance matrix of a multivariate normal distribution corresponds to conditional independence restrictions between variables. Covariance selection aims at estimating those structural zeros from data. We show that neighborhood selection with the Lasso is a ..."
Abstract
-
Cited by 736 (22 self)
- Add to MetaCart
show that the proposed neighborhood selection scheme is consistent for sparse high-dimensional graphs. Consistency hinges on the choice of the penalty parameter. The oracle value for optimal prediction does not lead to a consistent neighborhood estimate. Controlling instead the probability of falsely
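A small sketch of the neighborhood-selection idea: Lasso-regress each variable on all the others and read edges off the nonzero coefficients. The fixed alpha and the "or" combination rule below are illustrative choices; the paper's consistency result is precisely about how this penalty must be chosen:

```python
import numpy as np
from sklearn.linear_model import Lasso

def neighborhood_selection(X, alpha=0.1):
    """Estimate the edge set of a Gaussian graphical model by Lasso-regressing
    each variable on the rest; an edge (j, k) is kept if either of the two
    regressions selects the other variable (the "or" rule)."""
    n, p = X.shape
    nonzero = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = np.delete(np.arange(p), j)
        coef = Lasso(alpha=alpha).fit(X[:, others], X[:, j]).coef_
        nonzero[j, others] = coef != 0
    return nonzero | nonzero.T   # symmetrize

# toy chain graph: X1 -- X2 -- X3, no direct X1 -- X3 edge
rng = np.random.default_rng(0)
x1 = rng.normal(size=1000)
x2 = 0.8 * x1 + rng.normal(size=1000)
x3 = 0.8 * x2 + rng.normal(size=1000)
edges = neighborhood_selection(np.column_stack([x1, x2, x3]), alpha=0.05)
print(edges.astype(int))
```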
Bundle Adjustment -- A Modern Synthesis
- VISION ALGORITHMS: THEORY AND PRACTICE, LNCS, 2000
"... This paper is a survey of the theory and methods of photogrammetric bundle adjustment, aimed at potential implementors in the computer vision community. Bundle adjustment is the problem of refining a visual reconstruction to produce jointly optimal structure and viewing parameter estimates. Topics c ..."
Cited by 562 (13 self)
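A deliberately simplified sketch of bundle adjustment as nonlinear least squares over reprojection error, using scipy.optimize.least_squares. Rotations and intrinsics are held fixed and gauge freedom is ignored, so this shows the shape of the problem rather than a full implementation:

```python
import numpy as np
from scipy.optimize import least_squares

F = 500.0  # focal length in pixels, assumed known here

def project(points, cam_t):
    """Pinhole projection of 3D points into a camera at translation cam_t
    (identity rotation for simplicity)."""
    p = points + cam_t
    return F * p[:, :2] / p[:, 2:3]

def residuals(params, n_cams, n_pts, observations):
    """Stacked reprojection errors over all cameras and points."""
    cam_ts = params[:n_cams * 3].reshape(n_cams, 3)
    points = params[n_cams * 3:].reshape(n_pts, 3)
    res = [project(points, cam_ts[c]) - obs for c, obs in enumerate(observations)]
    return np.concatenate(res).ravel()

# synthetic scene: 20 points seen by 2 cameras, perturbed initial guess
rng = np.random.default_rng(0)
points_true = rng.uniform([-1, -1, 4], [1, 1, 6], (20, 3))
cam_ts_true = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
observations = [project(points_true, t) for t in cam_ts_true]

x0 = np.concatenate([
    (cam_ts_true + 0.1 * rng.normal(size=cam_ts_true.shape)).ravel(),
    (points_true + 0.1 * rng.normal(size=points_true.shape)).ravel(),
])
sol = least_squares(residuals, x0, args=(2, 20, observations), method="trf")
print("final reprojection RMSE:", np.sqrt(np.mean(sol.fun ** 2)))
```

Jointly refining structure (the points) and viewing parameters (here just camera translations) against the reprojection residuals is exactly the "jointly optimal" estimation problem the abstract defines; real implementations exploit the sparsity of the Jacobian, which this sketch does not.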