Results 11–20 of 4,504,932
Entropy and the law of small numbers
 IEEE Trans. Inform. Theory
, 2005
"... Two new information-theoretic methods are introduced for establishing Poisson approximation inequalities. First, using only elementary information-theoretic techniques it is shown that, when $S_n = \sum_{i=1}^{n} X_i$ is the sum of the (possibly dependent) binary random variables $X_1, X_2, \dots, X_n$, with $E(X_i) = p ..."
Cited by 47 (13 self)
$(X_i) = p_i$ and $E(S_n) = \lambda$, then $D(P_{S_n} \| \mathrm{Po}(\lambda)) \le \sum_{i=1}^{n} p_i^2 + \left[ \sum_{i=1}^{n} H(X_i) - H(X_1, X_2, \dots, X_n) \right]$, where $D(P_{S_n} \| \mathrm{Po}(\lambda))$ is the relative entropy between the distribution of $S_n$ and the Poisson($\lambda$) distribution. The first term in this bound measures the individual smallness of the $X_i$ and the second term measures
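In the special case of independent $X_i$ the entropy term in the bound above vanishes, and it reduces to $D(P_{S_n} \| \mathrm{Po}(\lambda)) \le \sum_{i=1}^{n} p_i^2$. This can be checked numerically; the following sketch is illustrative and not from the paper (all function names are mine):

```python
import math

def bernoulli_sum_pmf(ps):
    """PMF of S_n = X_1 + ... + X_n for independent X_i ~ Bernoulli(p_i)."""
    pmf = [1.0]
    for p in ps:
        nxt = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            nxt[k] += q * (1 - p)    # X_i = 0
            nxt[k + 1] += q * p      # X_i = 1
        pmf = nxt
    return pmf

def kl_to_poisson(ps):
    """Relative entropy D(P_{S_n} || Po(lambda)) in nats, with lambda = sum(ps)."""
    lam = sum(ps)
    return sum(q * math.log(q / (math.exp(-lam) * lam ** k / math.factorial(k)))
               for k, q in enumerate(bernoulli_sum_pmf(ps)) if q > 0)

ps = [0.1, 0.05, 0.2, 0.15]
d = kl_to_poisson(ps)
bound = sum(p * p for p in ps)   # entropy term is zero under independence
print(0.0 <= d <= bound)  # True
```

For this example $\lambda = 0.5$ and the bound $\sum p_i^2 = 0.075$ comfortably dominates the computed divergence.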
Estimating the number of clusters in a dataset via the Gap statistic
, 2000
"... We propose a method (the "Gap statistic") for estimating the number of clusters (groups) in a set of data. The technique uses the output of any clustering algorithm (e.g. k-means or hierarchical), comparing the change in within-cluster dispersion to that expected under an appropriate reference ..."
Cited by 492 (1 self)
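The comparison the abstract describes can be sketched as: compute the within-cluster dispersion $W_k$ for the data, compare $\log W_k$ to its expectation under a uniform reference distribution over the data's bounding box, and prefer the $k$ with the larger gap. The sketch below assumes k-means as the underlying clustering algorithm; all names are illustrative, not the paper's:

```python
import math
import random

def within_dispersion(points, assign, k):
    """Pooled within-cluster sum of squared distances to centroids (W_k)."""
    w = 0.0
    for j in range(k):
        members = [p for p, a in zip(points, assign) if a == j]
        if not members:
            continue
        centroid = tuple(sum(c) / len(members) for c in zip(*members))
        w += sum(sum((a - b) ** 2 for a, b in zip(p, centroid)) for p in members)
    return w

def kmeans_w(points, k, iters=15, restarts=5):
    """Lloyd's algorithm with a few deterministic restarts; returns the best W_k."""
    best = float("inf")
    for seed in range(restarts):
        rng = random.Random(seed)
        centers = rng.sample(points, k)
        assign = [0] * len(points)
        for _ in range(iters):
            assign = [min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centers[j])))
                      for p in points]
            for j in range(k):
                members = [p for p, a in zip(points, assign) if a == j]
                if members:
                    centers[j] = tuple(sum(c) / len(members)
                                       for c in zip(*members))
        best = min(best, within_dispersion(points, assign, k))
    return best

def gap_statistic(points, k, n_ref=10, seed=1):
    """Gap(k): mean log W_k over uniform reference samples minus log W_k of the data."""
    rng = random.Random(seed)
    dims = range(len(points[0]))
    lo = [min(p[d] for p in points) for d in dims]
    hi = [max(p[d] for p in points) for d in dims]
    ref = 0.0
    for _ in range(n_ref):
        sample = [tuple(rng.uniform(lo[d], hi[d]) for d in dims)
                  for _ in range(len(points))]
        ref += math.log(kmeans_w(sample, k) + 1e-12)
    return ref / n_ref - math.log(kmeans_w(points, k) + 1e-12)

# Two well-separated Gaussian blobs: the gap should favour k = 2 over k = 1.
rng = random.Random(42)
data = ([(rng.gauss(0, 0.3), rng.gauss(0, 0.3)) for _ in range(30)]
        + [(rng.gauss(4, 0.3), rng.gauss(4, 0.3)) for _ in range(30)])
print(gap_statistic(data, 1), gap_statistic(data, 2))
```

On clearly clustered data like this, $W_2$ is far smaller than $W_1$ while the uniform reference shrinks only modestly with $k$, so the gap at $k = 2$ dominates.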
Text Classification from Labeled and Unlabeled Documents using EM
 MACHINE LEARNING
, 1999
"... This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large qua ..."
Cited by 1033 (19 self)
Logical foundations of object-oriented and frame-based languages
 JOURNAL OF THE ACM
, 1995
"... We propose a novel formalism, called Frame Logic (abbr., F-logic), that accounts in a clean and declarative fashion for most of the structural aspects of object-oriented and frame-based languages. These features include object identity, complex objects, inheritance, polymorphic types, query methods, ..."
Cited by 880 (64 self)
, encapsulation, and others. In a sense, F-logic stands in the same relationship to the object-oriented paradigm as classical predicate calculus stands to relational programming. F-logic has a model-theoretic semantics and a sound and complete resolution-based proof theory. A small number of fundamental concepts
Improving Direct-Mapped Cache Performance by the Addition of a Small Fully-Associative Cache and Prefetch Buffers
, 1990
Coupled hidden Markov models for complex action recognition
, 1996
"... We present algorithms for coupling and training hidden Markov models (HMMs) to model interacting processes, and demonstrate their superiority to conventional HMMs in a vision task classifying two-handed actions. HMMs are perhaps the most successful framework in perceptual computing for modeling and ..."
Cited by 497 (22 self)
and classifying dynamic behaviors, popular because they offer dynamic time warping, a training algorithm, and a clear Bayesian semantics. However, the Markovian framework makes strong restrictive assumptions about the system generating the signal: that it is a single process having a small number of states
How much should we trust differences-in-differences estimates? Quarterly Journal of Economics 119:249–75
, 2004
"... Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on fema ..."
Cited by 775 (1 self)
... period and explicitly takes into account the effective sample size works well even for small numbers of states.
Software pipelining: An effective scheduling technique for VLIW machines
, 1988
"... This paper shows that software pipelining is an effective and viable scheduling technique for VLIW processors. In software pipelining, iterations of a loop in the source program are continuously initiated at constant intervals, before the preceding iterations complete. The advantage of software pipe ..."
Cited by 579 (3 self)
hierarchical reduction scheme whereby entire control constructs are reduced to an object similar to an operation in a basic block. With this scheme, all innermost loops, including those containing conditional statements, can be software pipelined. It also diminishes the start-up cost of loops with small
Inference by Believers in the Law of Small Numbers
, 2000
"... Many people believe in the "Law of Small Numbers," exaggerating the degree to which a small sample resembles the population from which it is drawn. To model this, I assume that a person exaggerates the likelihood that a short sequence of i.i.d. signals resembles the long-run rate at whi ..."
Cited by 75 (2 self)
Rapid object detection using a boosted cascade of simple features
 ACCEPTED CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION 2001
, 2001
"... This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the " ..."
Cited by 3222 (9 self)
the "Integral Image" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers [6]. The third contribution
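An integral image (summed-area table) stores, at each position, the sum of all pixels above and to the left, so any rectangular sum can then be read off with four array lookups. A minimal sketch, not taken from the paper:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img over rows < y and cols < x.
    An extra row/column of zeros simplifies the rectangle lookup."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum over img[top:top+height][left:left+width] with four lookups."""
    return (ii[top + height][left + width] - ii[top][left + width]
            - ii[top + height][left] + ii[top][left])

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))  # 5 + 6 + 8 + 9 = 28
```

Because every rectangle sum costs O(1) after the single O(hw) pass, the detector can evaluate its Haar-like features at any scale and position without rescaling the image.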