Results 1–10 of 2,168
The theory and practice of corporate finance: Evidence from the field
 Journal of Financial Economics
, 2001
Abstract

Cited by 680 (20 self)
We survey 392 CFOs about the cost of capital, capital budgeting, and capital structure. Large firms rely heavily on present value techniques and the capital asset pricing model, while small firms are relatively likely to use the payback criterion. We find that a surprising number of firms use their firm risk rather than project risk in evaluating new investments. Firms are concerned about maintaining financial flexibility and a good credit rating when issuing debt, and about earnings per share dilution and recent stock price appreciation when issuing equity. We find some support for the pecking-order and trade-off capital structure hypotheses but little evidence that executives are concerned about asset substitution, asymmetric information, transactions costs, free cash flows, or personal taxes. Key words: capital structure, cost of capital, cost of equity, capital budgeting, discount rates, project valuation, survey.
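The survey's headline contrast, present-value techniques discounted at a CAPM cost of capital versus the payback criterion, can be sketched numerically. The cash flows, beta, and rates below are hypothetical illustrations, not figures from the paper:

```python
# Hypothetical project: compare the NPV rule (CAPM discount rate)
# with the payback criterion favored by smaller firms in the survey.

def capm_cost_of_equity(rf, beta, market_premium):
    """CAPM: required return = rf + beta * (E[Rm] - rf)."""
    return rf + beta * market_premium

def npv(rate, cash_flows):
    """Present value of cash_flows[0..T]; cash_flows[0] is the outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """Years until cumulative cash flow turns non-negative (None if never)."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return None

flows = [-100.0, 30.0, 40.0, 50.0, 20.0]   # made-up cash flows
rate = capm_cost_of_equity(rf=0.03, beta=1.2, market_premium=0.06)
print(round(rate, 3))                       # 0.102
print(round(npv(rate, flows), 2))           # positive -> accept under NPV
print(payback_period(flows))                # 3 years under payback
```

Payback ignores both the time value of money and cash flows beyond the cutoff, which is why the survey treats heavy reliance on it as notable.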
Using Daily Stock Returns: The Case of Event Studies
 Journal of Financial Economics
, 1985
Abstract

Cited by 763 (2 self)
This paper examines properties of daily stock returns and how the particular characteristics of these data affect event study methodologies. Daily data generally present few difficulties for event studies. Standard procedures are typically well-specified even when special daily data characteristics are ignored. However, recognition of autocorrelation in daily excess returns and changes in their variance conditional on an event can sometimes be advantageous. In addition, tests ignoring cross-sectional dependence can be well-specified and have higher power than tests which account for potential dependence.
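As a rough sketch of the methodology the paper evaluates, the standard event-study recipe is: fit a market model by OLS over an estimation window, then treat the event-window residuals as abnormal returns. Everything below is simulated; the window lengths and parameters are arbitrary choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
T_est, T_event = 250, 11                     # estimation and event windows (days)
market = rng.normal(0.0005, 0.01, T_est + T_event)
stock = 0.0002 + 1.1 * market + rng.normal(0.0, 0.02, T_est + T_event)

# Market model fit over the estimation window: stock = alpha + beta*market + e
X = np.column_stack([np.ones(T_est), market[:T_est]])
alpha_hat, beta_hat = np.linalg.lstsq(X, stock[:T_est], rcond=None)[0]

# Abnormal returns and cumulative abnormal return in the event window
ar = stock[T_est:] - (alpha_hat + beta_hat * market[T_est:])
car = ar.cumsum()
print(round(beta_hat, 2))                    # close to the simulated beta of 1.1
```

The paper's point is about when refinements to this baseline (autocorrelation and variance corrections, cross-sectional dependence adjustments) actually matter.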
Wireless Communications
, 2005
Abstract

Cited by 1129 (32 self)
Copyright © 2005 by Cambridge University Press. This material is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.
Testing for Common Trends
 Journal of the American Statistical Association
, 1988
Abstract

Cited by 455 (7 self)
Cointegrated multiple time series share at least one common trend. Two tests are developed for the number of common stochastic trends (i.e., for the order of cointegration) in a multiple time series with and without drift. Both tests involve the roots of the ordinary least squares coefficient matrix obtained by regressing the series onto its first lag. Critical values for the tests are tabulated, and their power is examined in a Monte Carlo study. Economic time series are often modeled as having a unit root in their autoregressive representation, or (equivalently) as containing a stochastic trend. But both casual observation and economic theory suggest that many series might contain the same stochastic trends, so that they are cointegrated. If each of n series is integrated of order 1 but can be jointly characterized by k < n stochastic trends, then the vector representation of these series has k unit roots and n - k distinct stationary linear combinations. Our proposed tests can be viewed alternatively as tests of the number of common trends, linearly independent cointegrating vectors, or autoregressive unit roots of the vector process. Both of the proposed tests are asymptotically similar. The first test (q_f) is developed under the assumption that certain components of the process have a finite-order vector autoregressive (VAR) representation, and the nuisance parameters are handled by estimating this VAR. The second test (q_c) entails computing the eigenvalues of a corrected sample first-order autocorrelation matrix, where the correction is essentially a sum of the autocovariance matrices. Previous researchers have found that U.S. postwar interest rates, taken individually, appear to be integrated of order 1. In addition, the theory of the term structure implies that yields on similar assets of different maturities will be cointegrated. Applying these tests to postwar U.S. data on the federal funds rate and the three- and twelve-month Treasury bill rates provides support for this prediction: The three interest rates appear to be cointegrated.
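A back-of-the-envelope version of the underlying idea (not the paper's corrected test statistics, and using hypothetical simulated data): regress the vector series on its first lag by OLS and inspect the eigenvalue moduli of the coefficient matrix, counting roots near one.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
trend = np.cumsum(rng.normal(size=T))        # one common stochastic trend
y1 = trend + rng.normal(scale=0.5, size=T)   # two I(1) series sharing it
y2 = 0.8 * trend + rng.normal(scale=0.5, size=T)
Y = np.column_stack([y1, y2])

# OLS regression of the series onto its first lag: Y_t ~ Y_{t-1} @ A
A, *_ = np.linalg.lstsq(Y[:-1], Y[1:], rcond=None)
roots = np.sort(np.abs(np.linalg.eigvals(A)))
print(roots)  # one root near 1 (the shared trend), one well below 1
```

With n = 2 series driven by k = 1 common trend, one root sits near unity and the remaining n - k = 1 root corresponds to the stationary cointegrating combination.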
How Social Security and Medicare Affect Retirement Behavior in a World of Incomplete Markets
 ECONOMETRICA
, 1997
Abstract

Cited by 363 (11 self)
This paper provides an empirical analysis of how the U.S. Social Security and Medicare insurance system affects the labor supply of older males in the presence of incomplete markets for loans, annuities, and health insurance. We estimate a detailed dynamic programming (DP) model of the joint labor supply and Social Security acceptance decision, focusing on a sample of males in the low to middle income brackets whose only pension is Social Security. The DP model delivers a rich set of predictions about the dynamics of retirement behavior, and comparisons of actual vs. predicted behavior show that the DP model is able to account for a wide variety of phenomena observed in the data, including the pronounced peaks in the distribution of retirement ages at 62 and 65 (the ages of early and normal eligibility for Social Security benefits, respectively). We identify a significant fraction of “health insurance constrained” individuals who have no form of retiree health insurance other than Medicare, and who can only obtain fairly priced private health insurance via their employer’s group health plan. The combination of significant individual risk aversion and a long-tailed (Pareto) distribution of health care expenditures implies that there is a significant “security value” for these individuals to remain employed until they are eligible for Medicare coverage at age 65. Overall, our model suggests that a number of heretofore puzzling aspects of retirement behavior can be viewed as artifacts of particular details of the Social Security rules, whose incentive effects are especially strong for lower income individuals and those who do not have access to fairly priced loans, annuities, and health insurance.
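The "security value" mechanism can be caricatured in a few lines. The numbers below are my own stylized assumptions (flat utility values and a fixed pre-65 health-cost risk premium) and bear no relation to the paper's estimated DP model; they only show how Medicare eligibility at 65 can pull the utility-maximizing retirement age to exactly 65:

```python
# Toy illustration, far simpler than the paper's DP model: a health-cost
# risk premium borne only by pre-65 retirees makes retiring at 65 optimal.
WAGE = 1.0                      # per-year utility while working
BENEFIT = 0.6                   # Social Security benefit while retired
LEISURE = 0.5                   # utility value of retirement leisure
HEALTH_RISK_PREMIUM = 0.5       # borne only if retired before Medicare at 65

def lifetime_value(retire_age):
    """Undiscounted utility over ages 60-69 for a given retirement age."""
    total = 0.0
    for age in range(60, 70):
        if age < retire_age:
            total += WAGE
        else:
            premium = HEALTH_RISK_PREMIUM if age < 65 else 0.0
            total += BENEFIT + LEISURE - premium
    return total

best_age = max(range(60, 71), key=lifetime_value)
print(best_age)                 # 65: the Medicare "security value" at work
```

Remove the premium and, with these numbers, retiring as early as possible becomes optimal; the spike at 65 is generated entirely by the eligibility rule.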
Randomized Experiments from Nonrandom Selection in the U.S. House Elections
 Journal of Econometrics
, 2008
Abstract

Cited by 355 (18 self)
This paper establishes the relatively weak conditions under which causal inferences from a regression-discontinuity (RD) analysis can be as credible as those from a randomized experiment, and hence under which the validity of the RD design can be tested by examining whether or not there is a discontinuity in any predetermined (or “baseline”) variables at the RD threshold. Specifically, consider a standard treatment evaluation problem in which treatment is assigned to an individual if and only if V > v0, where v0 is a known threshold and V is observable. V can depend on the individual’s characteristics and choices, but there is also a random chance element: for each individual, there exists a well-defined probability distribution for V. The density function – allowed to differ arbitrarily across the population – is assumed to be continuous. It is formally established that treatment status here is as good as randomized in a local neighborhood of V = v0. These ideas are illustrated in an analysis of U.S. House elections, where inherent uncertainty in the final vote count is plausible, implying that the party that wins is essentially randomized among elections decided by a narrow margin. The evidence is consistent with this prediction, which is then used to generate “near-experimental” causal estimates of the electoral advantage to incumbency.
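The local-randomization logic is easy to simulate. The data-generating process below (vote share V, cutoff v0 = 0.5, a hypothetical incumbency effect of 0.08) is entirely invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
v = rng.uniform(0.3, 0.7, n)                 # running variable: vote share
won = v > 0.5                                # treatment assigned at the cutoff
outcome = 0.2 + 0.5 * v + 0.08 * won + rng.normal(0.0, 0.1, n)

# Near the threshold, winners and losers are as good as randomized,
# so a simple difference in means estimates the treatment effect.
h = 0.01                                     # narrow bandwidth around v0
near = np.abs(v - 0.5) < h
rd_est = outcome[near & won].mean() - outcome[near & ~won].mean()
print(round(rd_est, 2))                      # close to the true effect 0.08
```

The paper's testable implication is that the same comparison applied to any predetermined baseline variable should show no discontinuity at the cutoff.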
The positive false discovery rate: A Bayesian interpretation and the q-value
 Annals of Statistics
, 2003
Abstract

Cited by 328 (8 self)
Multiple hypothesis testing is concerned with controlling the rate of false positives when testing several hypotheses simultaneously. One multiple hypothesis testing error measure is the false discovery rate (FDR), which is loosely defined to be the expected proportion of false positives among all significant hypotheses. The FDR is especially appropriate for exploratory analyses in which one is interested in finding several significant results among many tests. In this work, we introduce a modified version of the FDR called the “positive false discovery rate” (pFDR). We discuss the advantages and disadvantages of the pFDR and investigate its statistical properties. When assuming the test statistics follow a mixture distribution, we show that the pFDR can be written as a Bayesian posterior probability and can be connected to classification theory. These properties remain asymptotically true under fairly general conditions, even under certain forms of dependence. Also, a new quantity called the “q-value” is introduced and investigated, which is a natural “Bayesian posterior p-value,” or rather the pFDR analogue of the p-value.
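As a minimal sketch of the step-up computation behind such procedures (this is the Benjamini-Hochberg form with the null proportion pi0 fixed at 1; Storey's q-value additionally estimates pi0 from the data):

```python
import numpy as np

def q_values(pvals):
    """q[i] = smallest FDR level at which hypothesis i is called significant."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    q = np.empty(m)
    running_min = 1.0
    for rank in range(m, 0, -1):             # step up from the largest p-value
        i = order[rank - 1]
        running_min = min(running_min, p[i] * m / rank)
        q[i] = running_min
    return q

print(q_values([0.001, 0.008, 0.039, 0.041, 0.6]))
# q-values: 0.005, 0.02, 0.05125, 0.05125, 0.6
```

Rejecting every hypothesis whose q-value falls below a chosen threshold then controls the FDR at that level among the rejected set.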