Results 1–10 of 1,353,056
PCA versus LDA
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001
"... In the context of the appearance-based paradigm for object recognition, it is generally believed that algorithms based on LDA (Linear Discriminant Analysis) are superior to those based on PCA (Principal Components Analysis). In this communication we show that this is not always the case. We present ..."
Cited by 465 (16 self)
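The PCA-versus-LDA contrast in this abstract can be illustrated on synthetic data. The two classes, dimensions, and all parameter values below are made up for the sketch and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two classes separated along x, with a large shared nuisance variance along y.
a = rng.normal([0.0, 0.0], [0.3, 3.0], size=(200, 2))
b = rng.normal([2.0, 0.0], [0.3, 3.0], size=(200, 2))
X = np.vstack([a, b])

# PCA: top eigenvector of the pooled covariance. It maximizes total variance,
# ignoring labels, so here it locks onto the noisy y axis.
eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
pca_dir = eigvecs[:, -1]

# Fisher LDA: w proportional to Sw^{-1}(mu_a - mu_b). It maximizes between-class
# separation relative to within-class scatter, recovering the discriminative x axis.
Sw = np.cov(a, rowvar=False) + np.cov(b, rowvar=False)
lda_dir = np.linalg.solve(Sw, a.mean(axis=0) - b.mean(axis=0))
lda_dir /= np.linalg.norm(lda_dir)

print("PCA direction:", np.round(pca_dir, 2))  # dominated by the y axis
print("LDA direction:", np.round(lda_dir, 2))  # dominated by the x axis
```

With enough variance on an uninformative axis, PCA's leading component is nearly orthogonal to the class-separating direction, which is the kind of regime the paper's "not always the case" claim turns on.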
PCA-SIFT: A more distinctive representation for local image descriptors
2004
"... Stable local feature detection and representation is a fundamental component of many image registration and object recognition algorithms. Mikolajczyk and Schmid [14] recently evaluated a variety of approaches and identified the SIFT [11] algorithm as being the most resistant to common image deforma ..."
Cited by 572 (6 self)
Analysis (PCA) to the normalized gradient patch. Our experiments demonstrate that the PCA-based local descriptors are more distinctive, more robust to image deformations, and more compact than the standard SIFT representation. We also present results showing that using these descriptors in an image
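A minimal sketch of the descriptor construction the snippet describes: project unit-normalized gradient patches onto a learned PCA basis. The 3042 dimension matches the paper's flattened gradient patches, but the random "patches", corpus size, and 20-component cut-off are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for gradient patches around keypoints: PCA-SIFT flattens the
# (dx, dy) gradients of a patch into a 3042-dim vector. Real patches come
# from images; random vectors are used here only to exercise the pipeline.
patches = rng.normal(size=(500, 3042))
patches /= np.linalg.norm(patches, axis=1, keepdims=True)  # unit-normalize

# Fit the PCA basis offline on a patch corpus.
mean = patches.mean(axis=0)
_, _, Vt = np.linalg.svd(patches - mean, full_matrices=False)
basis = Vt[:20]                      # keep the top 20 principal directions

def describe(patch):
    """Project one normalized gradient patch to a compact 20-dim descriptor."""
    return basis @ (patch - mean)

desc = describe(patches[0])
print(desc.shape)                    # (20,)
```

The compactness claim in the abstract corresponds directly to the basis truncation: a 3042-dim patch becomes a short vector, and matching is done in the reduced space.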
Panel Cointegration: Asymptotic and Finite Sample Properties of Pooled Time Series Tests, with an Application to the PPP Hypothesis: New Results
Working paper, 1997
"... We examine properties of residual-based tests for the null of no cointegration for dynamic panels in which both the short-run dynamics and the long-run slope coefficients are permitted to be heterogeneous across individual members of the panel. The tests also allow for individual heterogeneous fixed ..."
Cited by 499 (13 self)
fixed effects and trend terms, and we consider both pooled within-dimension tests and group-mean between-dimension tests. We derive limiting distributions for these and show that they are normal and free of nuisance parameters. We also provide Monte Carlo evidence to demonstrate their small sample size
Texture Synthesis by Non-parametric Sampling
In International Conference on Computer Vision, 1999
"... A non-parametric method for texture synthesis is proposed. The texture synthesis process grows a new image outward from an initial seed, one pixel at a time. A Markov random field model is assumed, and the conditional distribution of a pixel given all its neighbors synthesized so far is estimated by ..."
Cited by 1014 (7 self)
by querying the sample image and finding all similar neighborhoods. The degree of randomness is controlled by a single perceptually intuitive parameter. The method aims at preserving as much local structure as possible and produces good results for a wide variety of synthetic and real-world textures.
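The pixel-at-a-time growth described above can be sketched in one dimension. The exemplar "texture", window size, and randomness threshold are toy values, and a real implementation matches 2-D neighborhoods rather than 1-D windows:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D analogue of the algorithm: grow a signal one sample at a time by
# matching the last w samples against every window of the exemplar and
# copying the value that follows a randomly chosen near-best match.
exemplar = np.tile([0, 0, 1, 1, 2, 2], 10).astype(float)  # periodic "texture"
w = 3        # neighbourhood size
eps = 0.1    # randomness threshold: the single knob the abstract mentions

wins = np.lib.stride_tricks.sliding_window_view(exemplar[:-1], w)

out = [float(v) for v in exemplar[:w]]    # seed with the start of the exemplar
for _ in range(40):
    ctx = np.array(out[-w:])
    d = np.abs(wins - ctx).sum(axis=1)          # distance to every window
    cands = np.flatnonzero(d <= d.min() + eps)  # all near-best matches
    j = rng.choice(cands)
    out.append(float(exemplar[j + w]))          # copy the pixel after the match

print(out[:12])  # [0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 0.0, 0.0, 1.0, 1.0, 2.0, 2.0]
```

Because every context here has only exact matches, the toy run reproduces the exemplar's period; with a noisier exemplar, `eps` trades fidelity against variety, which is the "single perceptually intuitive parameter" of the abstract.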
Lag length selection and the construction of unit root tests with good size and power
Econometrica, 2001
"... It is widely known that when there are errors with a moving-average root close to −1, a high order augmented autoregression is necessary for unit root tests to have good size, but that information criteria such as the AIC and the BIC tend to select a truncation lag (k) that is very small. We conside ..."
Cited by 534 (14 self)
consider a class of Modified Information Criteria (MIC) with a penalty factor that is sample dependent. It takes into account the fact that the bias in the sum of the autoregressive coefficients is highly dependent on k and adapts to the type of deterministic components present. We use a local asymptotic
Exact Sampling with Coupled Markov Chains and Applications to Statistical Mechanics
1996
"... For many applications it is useful to sample from a finite set of objects in accordance with some particular distribution. One approach is to run an ergodic (i.e., irreducible aperiodic) Markov chain whose stationary distribution is the desired distribution on this set; after the Markov chain has ..."
Cited by 548 (13 self)
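The exact-sampling idea in this snippet, known as coupling from the past, can be sketched for a toy chain: run every state forward from time −T with shared randomness, and if all copies coalesce by time 0, the common value is an exact stationary draw. The four-state reflecting walk below is an illustrative stand-in, not an example from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

N = 4  # states {0, 1, 2, 3}; reflecting random walk, uniform stationary law

def step(x, u):
    """One update: move down if u < 0.5, else up, reflecting at the ends."""
    return max(x - 1, 0) if u < 0.5 else min(x + 1, N - 1)

def cftp():
    """Coupling from the past: reuse the same random inputs for each time
    step while pushing the start time further back until coalescence."""
    us = []                              # us[t] drives the step into time -t
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())      # new randomness only for older times
        states = list(range(N))          # every state started at time -T
        for t in range(T - 1, -1, -1):   # run forward to time 0
            states = [step(x, us[t]) for x in states]
        if len(set(states)) == 1:        # all chains coalesced
            return states[0]
        T *= 2                           # not coalesced: restart further back

samples = [cftp() for _ in range(2000)]
print(np.bincount(samples, minlength=N) / len(samples))  # each entry near 0.25
```

The crucial detail, as in the paper, is that the randomness for a given time step is fixed once and reused across restarts; regenerating it each iteration would bias the draw.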
End-to-End Routing Behavior in the Internet
1996
"... The large-scale behavior of routing in the Internet has gone virtually without any formal study, the exception being Chinoy's analysis of the dynamics of Internet routing information [Ch93]. We report on an analysis of 40,000 end-to-end route measurements conducted using repeated “traceroutes” ..."
Cited by 660 (13 self)
GPS-less Low Cost Outdoor Localization For Very Small Devices
2000
"... Instrumenting the physical world through large networks of wireless sensor nodes, particularly for applications like environmental monitoring of water and soil, requires that these nodes be very small, light, untethered and unobtrusive. The problem of localization, i.e., determining where a given no ..."
Cited by 994 (29 self)
node is physically located in a network is a challenging one, and yet extremely crucial for many of these applications. Practical considerations such as the small size, form factor, cost and power constraints of nodes preclude the reliance on GPS (Global Positioning System) on all nodes
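One simple GPS-free scheme in the spirit of this snippet is centroid localization: reference beacons broadcast their known positions, and a node estimates its own position as the centroid of the beacons it can hear. The disk radio model, coordinates, and range below are made-up illustrative values:

```python
RANGE = 10.0  # idealized radio range (m); real connectivity is far messier
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]

def localize(true_pos):
    """Estimate a node's position as the centroid of beacons within range.

    `true_pos` is used only to simulate which beacons the node hears;
    the estimator itself never sees it.
    """
    heard = [(bx, by) for bx, by in beacons
             if ((bx - true_pos[0])**2 + (by - true_pos[1])**2) ** 0.5 <= RANGE]
    if not heard:
        return None                 # out of range of every beacon
    xs, ys = zip(*heard)
    return (sum(xs) / len(heard), sum(ys) / len(heard))

print(localize((4.0, 4.0)))   # hears all four beacons -> (5.0, 5.0)
```

The estimate is coarse (here the node at (4, 4) is placed at the grid center), but it needs only beacon broadcasts, matching the size, cost, and power constraints the abstract emphasizes.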
The Cyclical Behavior of Equilibrium Unemployment and Vacancies
American Economic Review, 2005
"... This paper argues that a broad class of search models cannot generate the observed business-cycle-frequency fluctuations in unemployment and job vacancies in response to shocks of a plausible magnitude. In the U.S., the vacancy-unemployment ratio is 20 times as volatile as average labor productivity ..."
Cited by 839 (20 self)
while under weak assumptions, search models predict that the vacancy-unemployment ratio and labor productivity have nearly the same variance. I establish this claim both using analytical comparative statics in a very general deterministic search model and using simulations of a stochastic version of the model. I show that a shock that changes average labor productivity primarily alters the present value of wages, generating only a small movement along a downward-sloping Beveridge curve (unemployment-vacancy locus). A shock to the job destruction rate generates a counterfactually positive correlation between unemployment and vacancies. In both cases, the shock is only slightly amplified and the model exhibits virtually no propagation. I reconcile these findings with an existing literature and argue that the source of the model’s failure is lack of wage rigidity, a consequence of the assumption that wages are determined by Nash bargaining.
Evaluating the Accuracy of Sampling-Based Approaches to the Calculation of Posterior Moments
In Bayesian Statistics, 1992
"... Data augmentation and Gibbs sampling are two closely related, sampling-based approaches to the calculation of posterior moments. The fact that each produces a sample whose constituents are neither independent nor identically distributed complicates the assessment of convergence and numerical accurac ..."
Cited by 583 (14 self)
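Convergence assessment of the kind this abstract discusses is often done with a Geweke-style comparison of early and late segment means. The sketch below substitutes plain sample variances for the paper's spectral-density variance estimates (adequate only for weakly correlated draws), and the "chains" are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

def geweke_z(chain, first=0.1, last=0.5):
    """Simplified Geweke-style diagnostic: z-score comparing the mean of
    the first 10% of the draws against the last 50%. The paper's version
    estimates the two variances from the spectral density at frequency
    zero, because MCMC draws are autocorrelated; plain sample variances
    are used here as a stand-in."""
    a = chain[: int(first * len(chain))]
    b = chain[-int(last * len(chain)):]
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return (a.mean() - b.mean()) / se

# A converged "sampler" output versus one still drifting toward its mode.
good = rng.normal(size=5000)
bad = good + np.concatenate([np.linspace(5.0, 0.0, 1000), np.zeros(4000)])

print(f"|z| converged: {abs(geweke_z(good)):.2f}")  # should be small
print(f"|z| drifting:  {abs(geweke_z(bad)):.2f}")   # large: not converged
```

A large |z| indicates the early and late portions of the chain disagree about the posterior mean, which is exactly the convergence failure the abstract says is hard to assess from dependent draws.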