Results 1–10 of 2,109
Stock return predictability: Is it there?
, 2001
"... We ask whether stock returns in France, Germany, Japan ... by three instruments: the dividend yield, the earnings yield and the short rate. The predictability regression is suggested by a present value model with earnings growth, payout ratios and the short rate as state variables. We find the short ..."
Abstract

Cited by 115 (5 self)
We ask whether stock returns in France, Germany, Japan ... by three instruments: the dividend yield, the earnings yield and the short rate. The predictability regression is suggested by a present value model with earnings growth, payout ratios and the short rate as state variables. We find the short rate to be the only robust short-run predictor of excess returns, and find little evidence of excess return predictability by earnings or dividend yields across all countries. There is no evidence of long-horizon return predictability once we account for finite-sample influence. Cross-country predictability is stronger than predictability using local instruments. Finally, dividend and earnings yields predict future cash-flow growth.
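The predictability regression described above can be sketched on synthetic data. The instrument magnitudes, coefficients, and sample length below are hypothetical, chosen only so that the short rate is the lone predictor, mirroring the paper's finding; this is a minimal OLS sketch, not the authors' present-value or finite-sample methodology.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 240  # hypothetical monthly sample length

# Synthetic instruments (illustrative magnitudes only)
X = np.column_stack([
    np.ones(T),                   # intercept
    rng.normal(0.03, 0.010, T),   # dividend yield
    rng.normal(0.05, 0.020, T),   # earnings yield
    rng.normal(0.02, 0.005, T),   # short rate
])
beta_true = np.array([0.0, 0.0, 0.0, -1.5])   # only the short rate predicts
y = X @ beta_true + rng.normal(0.0, 0.01, T)  # excess returns

# OLS predictability regression: beta_hat minimizes ||y - X b||
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)
```

In the paper the inference additionally corrects for the finite-sample behavior of persistent regressors; plain OLS is only the starting point.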
Incremental algorithms for hierarchical classification
 Journal of Machine Learning Research
, 2004
"... We study the problem of classifying data in a given taxonomy when classifications associated with multiple and/or partial paths are allowed. We introduce a new algorithm that incrementally learns a linearthreshold classifier for each node of the taxonomy. A hierarchical classification is obtained b ..."
Abstract

Cited by 111 (9 self)
We study the problem of classifying data in a given taxonomy when classifications associated with multiple and/or partial paths are allowed. We introduce a new algorithm that incrementally learns a linear-threshold classifier for each node of the taxonomy. A hierarchical classification is obtained by evaluating the trained node classifiers in a top-down fashion. To evaluate classifiers in our multi-path framework, we define a new hierarchical loss function, the H-loss, capturing the intuition that whenever a classification mistake is made on a node of the taxonomy, no loss should be charged for any additional mistake occurring in the subtree of that node. Making no assumptions on the mechanism generating the data instances, and assuming a linear noise model for the labels, we bound the H-loss of our online algorithm in terms of the H-loss of a reference classifier knowing the true parameters of the label-generating process. We show that, in expectation, the excess cumulative H-loss grows at most logarithmically in the length of the data sequence. Furthermore, our analysis reveals the precise dependence of the rate of convergence on the eigenstructure of the data each node observes. Our theoretical results are complemented by a number of experiments on textual corpora. In these experiments we show that, after only one epoch of training, our algorithm performs much better than Perceptron-based hierarchical classifiers, and reasonably close to a hierarchical support vector machine.
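The H-loss intuition (no charge for mistakes below an already-misclassified node) can be illustrated with a small sketch. The four-node taxonomy and the unit cost per node are hypothetical; the paper's definition also admits per-node cost coefficients, which are omitted here.

```python
# Hypothetical taxonomy: node -> parent (None marks a root)
parent = {"A": None, "B": "A", "C": "A", "D": "B"}

def h_loss(y_true, y_pred):
    """Unit-cost H-loss: charge a node only if its label is wrong
    while every ancestor's label is predicted correctly."""
    loss = 0
    for node in parent:
        if y_true[node] == y_pred[node]:
            continue
        anc = parent[node]
        while anc is not None and y_true[anc] == y_pred[anc]:
            anc = parent[anc]
        if anc is None:  # all ancestors correct: this mistake is charged
            loss += 1
    return loss

y_true = {"A": 1, "B": 1, "C": 0, "D": 1}
y_pred = {"A": 1, "B": 0, "C": 0, "D": 0}
print(h_loss(y_true, y_pred))  # B is charged; D's mistake lies in B's subtree
```

Here only node B contributes to the loss: D is also wrong, but its ancestor B is already misclassified, so the extra mistake is free.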
2009), "Misallocation and Manufacturing TFP in China and India
 Quarterly Journal of Economics
"... Resource misallocation can lower aggregate total factor productivity (TFP). We use micro data on manufacturing establishments to quantify the potential extent of misallocation in China and India compared to the U.S. Compared to the U.S., we measure sizable gaps in marginal products of labor and capi ..."
Abstract

Cited by 83 (3 self)
Resource misallocation can lower aggregate total factor productivity (TFP). We use micro data on manufacturing establishments to quantify the potential extent of misallocation in China and India compared to the U.S. Relative to the U.S., we measure sizable gaps in marginal products of labor and capital across plants within narrowly defined industries in China and India. When capital and labor are hypothetically reallocated to equalize marginal products to the extent observed in the U.S., we calculate manufacturing TFP gains of 30–50% in China and 40–60% in India. We are indebted to Ryoji Hiraguchi and Romans Pancs for phenomenal research assistance, and to numerous seminar participants, referees, and the editors for comments. We gratefully acknowledge the financial support of the Kauffman Foundation. Hsieh thanks the Alfred P. Sloan Foundation and Klenow thanks SIEPR for financial support. The research in this paper on U.S. manufacturing was conducted while the authors were Special Sworn Status researchers of the U.S. Census Bureau at the California Census Research Data Center at UC Berkeley. Research results and conclusions expressed are those of the authors and do not necessarily reflect the views of the Census Bureau. This paper has been screened to ensure that no confidential data are revealed.
Comunicación Técnica No I0508/22082005 (PE/CIMAT) Stochastic Frontier Analysis: a matrix representation
, 2005
"... Stochastic Frontier Analysis (SFA) models have been using skewness as an intrinsic characteristic to measure technical ine ¢ ciency. We extend the use of skew normality and elliptical errors in SFA as a exible tool to model, for example, panel data. We consider stochastic frontier analysis in the co ..."
Abstract
Stochastic Frontier Analysis (SFA) models use skewness as an intrinsic characteristic to measure technical inefficiency. We extend the use of skew normality and elliptical errors in SFA as a flexible tool to model, for example, panel data. We consider stochastic frontier analysis in the common Normal + Truncated Normal setting with uncorrelated errors, as well as the case with correlated errors, in a matrix representation. The connection between the SFA model and the Closed Skew-Normal distribution has been discussed in Domínguez-Molina et al. (2004). We provide a matrix representation for the skew-normal and skew-elliptical distributions through a general setting and obtain conditional and marginal representations. We also obtain a useful submodel, through an additive representation, to be used with SFA models. We work out the moment generating function and some quadratic forms of interest, which allow several applications and, in particular, help to understand some properties of the SFA models.
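The role of skewness in the composed SFA error can be seen in a small simulation. The Normal + Truncated Normal (here half-normal) setting with unit variances is an illustrative assumption, not the paper's general elliptical model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

v = rng.normal(0.0, 1.0, n)          # symmetric two-sided noise
u = np.abs(rng.normal(0.0, 1.0, n))  # half-normal inefficiency term
eps = v - u                          # composed production-frontier error

# Sample skewness: one-sided inefficiency pulls the residuals left
c = eps - eps.mean()
skew = (c**3).mean() / (c**2).mean()**1.5
print(skew)  # negative, the signature of technical inefficiency
```

A symmetric error distribution would give skewness near zero; the negative sample skewness is exactly the feature SFA exploits.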
A comprehensive analysis of built environment characteristics on household residential choice and auto ownership levels
, 2006
"... In this report, we identify the research designs and methodologies that may be used to test the presence of “true” causality versus residential sortingbased “spurious” associations in the landuse transportation connection. The report then develops a methodological formulation to control for reside ..."
Abstract

Cited by 73 (13 self)
In this report, we identify the research designs and methodologies that may be used to test the presence of “true” causality versus residential-sorting-based “spurious” associations in the land use–transportation connection. The report then develops a methodological formulation to control for residential sorting effects in the analysis of the effect of built environment attributes on travel-behavior-related choices. The formulation is applied to comprehensively examine the impact of the built environment, transportation network attributes, and demographic characteristics on residential choice and car ownership decisions. The model formulation takes the form of a joint mixed multinomial logit–ordered response structure that (a) accommodates differential sensitivity to the built environment and transportation network variables due to both demographic and unobserved household attributes and (b) controls for the self-selection of individuals into neighborhoods based on car ownership preferences stemming from both demographic ...
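The multinomial logit kernel at the core of the joint structure can be sketched as follows. The three alternatives and their systematic utilities are hypothetical; the full model additionally mixes over random coefficients and links to an ordered car-ownership equation, neither of which is shown here.

```python
import numpy as np

def mnl_probs(V):
    """Multinomial logit choice probabilities: P_j = exp(V_j) / sum_k exp(V_k)."""
    e = np.exp(V - V.max())  # subtract the max for numerical stability
    return e / e.sum()

# Hypothetical systematic utilities of three residential alternatives,
# each folding built-environment, network, and demographic terms into one score
V = np.array([0.2, 1.0, -0.5])
p = mnl_probs(V)
print(p)  # the highest-utility alternative gets the highest probability
```

Mixing (integrating these probabilities over random coefficient draws) is what lets the full model capture unobserved taste heterogeneity.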
Aggregate Nearest Neighbor Queries in Spatial Databases
 TODS
, 2005
"... Given two spatial datasets P (e.g., facilities) and Q (queries), an aggregate nearest neighbor (ANN) query retrieves the point(s) of P with the smallest aggregate distance(s) to points in Q. Assuming, for example, n users at locations q1,... qn,anANN query outputs the facility p ∈ P that minimizes t ..."
Abstract

Cited by 59 (6 self)
Given two spatial datasets P (e.g., facilities) and Q (queries), an aggregate nearest neighbor (ANN) query retrieves the point(s) of P with the smallest aggregate distance(s) to points in Q. Assuming, for example, n users at locations q1, ..., qn, an ANN query outputs the facility p ∈ P that minimizes the sum of distances |p qi| for 1 ≤ i ≤ n that the users have to travel in order to meet there. Similarly, another ANN query may report the point p ∈ P that minimizes the maximum distance that any user has to travel, or the minimum distance from some user to his/her closest facility. Assuming that Q fits in memory and P is indexed by an R-tree, we develop algorithms for aggregate nearest neighbors that capture several versions of the problem, including weighted queries and incremental reporting of results. Then, we analyze their performance and propose cost models for query optimization. Finally, we extend our techniques to disk-resident queries and approximate ANN retrieval. The efficiency of the algorithms and the accuracy of the cost models are evaluated through extensive experiments with real and synthetic datasets.
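Ignoring the R-tree index, the three aggregate variants can be stated as a brute-force sketch. The tiny point sets are hypothetical, and the paper's algorithms exist precisely to avoid this all-pairs computation on large, indexed data.

```python
import numpy as np

def ann_query(P, Q, agg="sum"):
    """Brute-force ANN: index of the point of P minimizing the
    aggregate (sum / max / min) Euclidean distance to all points of Q."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # |P| x |Q|
    scores = {"sum": d.sum, "max": d.max, "min": d.min}[agg](axis=1)
    return int(scores.argmin())

P = np.array([[0.0, 0.0], [5.0, 5.0], [2.0, 2.0]])  # facilities
Q = np.array([[1.0, 1.0], [3.0, 3.0]])              # user locations
print(ann_query(P, Q, "sum"))  # facility minimizing total travel
```

The "sum" variant is the meeting-point query from the abstract; "max" bounds the worst-off user, and "min" serves the closest user.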
Efficient and generalized pairing computation on Abelian varieties
, 2008
"... In this paper, we propose a new method for constructing a bilinear pairing over (hyper)elliptic curves, which we call the Rate pairing. This pairing is a generalization of the Ate and Atei pairing, and also improves efficiency of the pairing computation. Using the Rate pairing, the loop length in ..."
Abstract

Cited by 55 (3 self)
In this paper, we propose a new method for constructing a bilinear pairing over (hyper)elliptic curves, which we call the R-ate pairing. This pairing is a generalization of the Ate and Ate_i pairings, and it also improves the efficiency of the pairing computation. Using the R-ate pairing, the loop length in Miller’s algorithm can be as small as log(r^(1/φ(k))) for some pairing-friendly elliptic curves which have not reached this lower bound. We thereby obtain savings of 29% to 69% in overall cost compared to the Ate_i pairing. On supersingular hyperelliptic curves of genus 2, we show that this approach makes the loop length in Miller’s algorithm shorter than that of the Ate pairing.
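The claimed loop-length bound is plain arithmetic on the bit length of the group order. The 256-bit order and embedding degree k = 12 below are hypothetical example parameters, not figures from the paper; this is not a pairing implementation.

```python
r_bits = 256  # hypothetical bit length of the group order r
k = 12        # hypothetical embedding degree
phi_k = 4     # Euler's totient φ(12)

tate_loop = r_bits            # Tate pairing: ~log2(r) Miller iterations
rate_bound = r_bits // phi_k  # lower bound: log2(r^(1/φ(k))) = log2(r)/φ(k)
print(tate_loop, rate_bound)
```

Shrinking the Miller loop from ~log2(r) toward log2(r)/φ(k) iterations is the source of the reported cost savings.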
How slow is the k-means method?
 in: Proceedings of the 2006 Symposium on Computational Geometry (SoCG
"... The kmeans method is an old but popular clustering algorithm known for its observed speed and its simplicity. Until recently, however, no meaningful theoretical bounds were known on its running time. In this paper, we demonstrate that the worstcase running time of kmeans is superpolynomial by imp ..."
Abstract

Cited by 52 (7 self)
The k-means method is an old but popular clustering algorithm known for its observed speed and its simplicity. Until recently, however, no meaningful theoretical bounds were known on its running time. In this paper, we demonstrate that the worst-case running time of k-means is superpolynomial by improving the best known lower bound from Ω(n) iterations to 2^(Ω(√n)) iterations.
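The method whose worst case is analyzed above is plain Lloyd iteration, sketched below on two hypothetical well-separated blobs. On benign inputs like this it converges in a handful of iterations, in sharp contrast to the 2^(Ω(√n)) worst-case constructions.

```python
import numpy as np

def lloyd_kmeans(X, k, iters=100, seed=0):
    """Plain k-means (Lloyd): assign points to their nearest centers,
    then move each center to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break  # fixed point reached: the iteration has converged
        centers = new
    return centers, labels

# Two well-separated blobs: Lloyd splits them almost immediately
X = np.vstack([np.zeros((20, 2)), np.full((20, 2), 10.0)])
centers, labels = lloyd_kmeans(X, k=2)
print(sorted(centers[:, 0]))  # one center per blob
```

The lower-bound constructions in the paper force this same iteration through superpolynomially many distinct configurations before such a fixed point is reached.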
Subgroup families controlling p-local finite groups
 Proc. London Math. Soc. 91
, 2005
"... A plocal finite group consists of a finite pgroup S, together with a pair of categories which encode “conjugacy ” relations among subgroups of S, and which are modelled on the fusion in a Sylow psubgroup of a finite group. It contains enough information to define a classifying space which has man ..."
Abstract

Cited by 48 (9 self)
A p-local finite group consists of a finite p-group S, together with a pair of categories which encode “conjugacy” relations among subgroups of S, and which are modelled on the fusion in a Sylow p-subgroup of a finite group. It contains enough information to define a classifying space which has many of the same properties as p-completed classifying spaces of finite groups. In this paper, we examine which subgroups control this structure. More precisely, we prove that the question of whether an abstract fusion system F over a finite p-group S is saturated can be determined by looking only at smaller classes of subgroups of S. We also prove that the homotopy type of the classifying space of a given p-local finite group is independent of the family of subgroups used to define it, in the sense that it remains unchanged when that family ranges from the set of F-centric F-radical subgroups (at a minimum) to the set of F-quasicentric subgroups (at a maximum). Finally, we look at constrained fusion systems, analogous to p-constrained finite groups, and prove that they all in fact arise from groups.