Results 11–20 of 134
Estimating covariation: Epps effect and microstructure noise
 Journal of Econometrics, forthcoming
, 2009
"... This paper is about how to estimate the integrated covariance 〈X, Y 〉T of two assets over a fixed time horizon [0, T], when the observations of X and Y are “contaminated ” and when such noisy observations are at discrete, but not synchronized, times. We show that the usual previoustick covariance e ..."
Abstract

Cited by 27 (3 self)
This paper is about how to estimate the integrated covariance 〈X, Y〉_T of two assets over a fixed time horizon [0, T], when the observations of X and Y are “contaminated” and when such noisy observations are at discrete, but not synchronized, times. We show that the usual previous-tick covariance estimator is biased, and that the size of the bias is more pronounced for less liquid assets. This is an analytic characterization of the Epps effect. We also provide the optimal sampling frequency, which balances the tradeoff between the bias and various sources of stochastic error, including nonsynchronous trading, microstructure noise, and time discretization. Finally, a two-scales covariance estimator is provided which simultaneously cancels (to first order) the Epps effect and the effect of microstructure noise. The gain is demonstrated in data.
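The previous-tick estimator the abstract analyzes can be sketched as follows — a minimal illustration assuming arrays of log-prices and a user-chosen sampling grid; the function name and grid choice are our assumptions, not the paper's notation:

```python
import numpy as np

def previous_tick_covariance(t_x, x, t_y, y, grid):
    """Previous-tick realized covariance: sample each asset at its last
    observation at or before every grid time, then sum the products of the
    resulting synchronized log-returns. Minimal sketch; names and the grid
    are illustrative assumptions, not the paper's notation."""
    ix = np.searchsorted(t_x, grid, side="right") - 1  # last tick <= grid time
    iy = np.searchsorted(t_y, grid, side="right") - 1
    return float(np.sum(np.diff(x[ix]) * np.diff(y[iy])))
```

On synchronous, identical price paths this reduces to the realized variance, which is a quick sanity check of the synchronization step.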
Nuclear and Trace Ideals in Tensored *-Categories
, 1998
"... We generalize the notion of nuclear maps from functional analysis by defining nuclear ideals in tensored categories. The motivation for this study came from attempts to generalize the structure of the category of relations to handle what might be called "probabilistic relations". The compact closed ..."
Abstract

Cited by 26 (9 self)
We generalize the notion of nuclear maps from functional analysis by defining nuclear ideals in tensored categories. The motivation for this study came from attempts to generalize the structure of the category of relations to handle what might be called "probabilistic relations". The compact closed structure associated with the category of relations does not generalize directly; instead, one obtains nuclear ideals. Most tensored categories have a large class of morphisms which behave as if they were part of a compact closed category, i.e. they allow one to transfer variables between the domain and the codomain. We introduce the notion of nuclear ideals to analyze these classes of morphisms. In compact closed tensored categories, all morphisms are nuclear, and in the tensored category of Hilbert spaces, the nuclear morphisms are the Hilbert-Schmidt maps. We also introduce two new examples of tensored categories, in which integration plays the role of composition. In the first, mor...
Image restoration subject to a total variation constraint
 IEEE Transactions on Image Processing
, 2004
"... Abstract—Total variation has proven to be a valuable concept in connection with the recovery of images featuring piecewise smooth components. So far, however, it has been used exclusively as an objective to be minimized under constraints. In this paper, we propose an alternative formulation in which ..."
Abstract

Cited by 26 (2 self)
Abstract—Total variation has proven to be a valuable concept in connection with the recovery of images featuring piecewise smooth components. So far, however, it has been used exclusively as an objective to be minimized under constraints. In this paper, we propose an alternative formulation in which total variation is used as a constraint in a general convex programming framework. This approach places no limitation on the incorporation of additional constraints in the restoration process, and the resulting optimization problem can be solved efficiently via block-iterative methods. Image denoising and deconvolution applications are demonstrated.

I. PROBLEM STATEMENT

The classical linear restoration problem is to find the original form of an image in a real Hilbert space from the observation of a degraded image where
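The constraint quantity in question can be computed as below — a simplified sketch using the anisotropic discrete form; the paper may use the isotropic variant, and the function name is ours:

```python
import numpy as np

def total_variation(img):
    """Discrete anisotropic total variation of a 2-D array: sum of absolute
    horizontal and vertical first differences. A simplified sketch of the
    constraint quantity; the isotropic form differs in how the two
    directional differences are combined."""
    return float(np.abs(np.diff(img, axis=1)).sum() +
                 np.abs(np.diff(img, axis=0)).sum())
```

A constraint formulation would then restrict the feasible set to images with `total_variation(x) <= tau` inside a convex program.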
Metric-Based Methods for Adaptive Model Selection and Regularization
 Machine Learning
, 2001
"... We present a general approach to model selection and regularization that exploits unlabeled data to adaptively control hypothesis complexity in supervised learning tasks. The idea is to impose a metric structure on hypotheses by determining the discrepancy between their predictions across the di ..."
Abstract

Cited by 20 (0 self)
We present a general approach to model selection and regularization that exploits unlabeled data to adaptively control hypothesis complexity in supervised learning tasks. The idea is to impose a metric structure on hypotheses by determining the discrepancy between their predictions across the distribution of unlabeled data. We show how this metric can be used to detect untrustworthy training error estimates, and devise novel model selection strategies that exhibit theoretical guarantees against overfitting (while still avoiding underfitting). We then extend the approach to derive a general training criterion for supervised learning, yielding an adaptive regularization method that uses unlabeled data to automatically set regularization parameters. This new criterion adjusts its regularization level to the specific set of training data received, and performs well on a variety of regression and conditional density estimation tasks. The only proviso for these methods is that s...
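The metric structure on hypotheses can be sketched as an average disagreement on unlabeled inputs — a minimal illustration; the absolute-error loss and all names are our assumptions, not the paper's definitions:

```python
import numpy as np

def hypothesis_distance(f, g, unlabeled, loss=lambda a, b: np.abs(a - b)):
    """Discrepancy between two hypotheses: their average disagreement on a
    sample of unlabeled inputs. A minimal sketch of imposing a metric on
    hypotheses; the loss and names are illustrative assumptions."""
    fa = np.asarray([f(x) for x in unlabeled])
    ga = np.asarray([g(x) for x in unlabeled])
    return float(np.mean(loss(fa, ga)))
```

A large gap between this unlabeled-data distance and the distance implied by training errors is the kind of signal that flags an untrustworthy training error estimate.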
Approximation by Fully Complex Multilayer Perceptrons
, 2003
"... We investigate the approximation ability of a multilayer perceptron (MLP) network when it is extended to the complex domain. The main challenge for processing complex data with neural networks has been the lack of bounded and analytic complex nonlinear activation functions in the complex domain, as ..."
Abstract

Cited by 20 (5 self)
We investigate the approximation ability of a multilayer perceptron (MLP) network when it is extended to the complex domain. The main challenge for processing complex data with neural networks has been the lack of bounded and analytic complex nonlinear activation functions in the complex domain, as stated by Liouville’s theorem. To avoid the conflict between the boundedness and the analyticity of a nonlinear complex function in the complex domain, a number of ad hoc MLPs, including the use of two real-valued MLPs, one processing the real part and the other the imaginary part, have traditionally been employed. However, since non-analytic functions do not satisfy the Cauchy-Riemann conditions, they lead to degenerate backpropagation algorithms that compromise the efficiency of nonlinear approximation and learning in the complex vector field. A number of elementary transcendental functions (ETFs) derivable from the entire exponential function e^z that are analytic are defined as fully complex activation functions and are shown
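A fully complex layer of the kind described can be sketched with tanh, one of the analytic ETFs derivable from e^z — shapes and names here are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def fully_complex_layer(z, W, b):
    """One layer of a fully complex MLP using tanh, which is analytic
    (though unbounded near its poles), applied to a complex affine
    combination of the inputs. Names and shapes are illustrative."""
    return np.tanh(W @ z + b)
```

The point of using an analytic activation is that complex gradients satisfy the Cauchy-Riemann conditions, so backpropagation does not degenerate into separate real and imaginary updates.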
Extracting Oscillations: Neuronal Coincidence Detection with Noisy Periodic Spike Input
, 1998
"... How does a neuron vary its mean output firing rate if the input changes from random to oscillatory coherent but noisy activity? What are the critical parameters of the neuronal dynamics and input statistics? To answer these questions, we investigate the coincidencedetection properties of an integra ..."
Abstract

Cited by 19 (6 self)
How does a neuron vary its mean output firing rate if the input changes from random to oscillatory, coherent but noisy activity? What are the critical parameters of the neuronal dynamics and input statistics? To answer these questions, we investigate the coincidence-detection properties of an integrate-and-fire neuron. We derive an expression indicating how coincidence detection depends on neuronal parameters. Specifically, we show how coincidence detection depends on the shape of the postsynaptic response function, the number of synapses, and the input statistics, and we demonstrate that there is an optimal threshold. Our considerations can be used to predict from neuronal parameters whether and to what extent a neuron can act as a coincidence detector and thus can convert a temporal code into a rate code.
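The coincidence-detection mechanism can be illustrated with a toy leaky integrate-and-fire simulation — all parameter values are our assumptions, and the paper's postsynaptic response shapes and synapse counts are simplified away:

```python
import numpy as np

def lif_spike_count(spike_times, tau=0.01, w=0.3, threshold=1.0, T=1.0, dt=1e-4):
    """Leaky integrate-and-fire neuron: the membrane potential decays with
    time constant tau, jumps by w per input spike, and fires (then resets
    to zero) on crossing the threshold. A toy sketch with illustrative
    parameters, not the paper's model."""
    spikes = np.sort(np.asarray(spike_times, dtype=float))
    v, fired, i = 0.0, 0, 0
    for step in range(int(T / dt)):
        t = step * dt
        v *= np.exp(-dt / tau)           # leak
        while i < len(spikes) and spikes[i] <= t:
            v += w                       # excitatory input spike
            i += 1
        if v >= threshold:               # coincident inputs cross threshold
            fired += 1
            v = 0.0
    return fired
```

With a short membrane time constant, only near-coincident input spikes sum above threshold, which is the qualitative effect the abstract quantifies.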
On Optimal Entropy-Constrained Scalar Quantization
, 2000
"... Optimal scalar quantization subject to an entropyconstraint is studied. First the problem of nding analytically an optimal entropyconstrained scalar quantizer (ECSQ) is considered. For a wide class of dierence distortion measures including rth power distortions with r > 0, it is proved that if th ..."
Abstract

Cited by 19 (6 self)
Optimal scalar quantization subject to an entropy constraint is studied. First, the problem of finding analytically an optimal entropy-constrained scalar quantizer (ECSQ) is considered. For a wide class of difference distortion measures including r-th power distortions with r > 0, it is proved that if the source is uniformly distributed over an interval, then for any entropy constraint R (in bits), an optimal quantizer has N = ⌈2^R⌉ interval cells such that N − 1 cells have equal length d and one cell has length c ≤ d. Based on this result, a parametric representation of the minimum achievable distortion D_h(R) as a function of the entropy constraint R is obtained for a uniform source. Contrary to earlier expectations, the D_h(R) curve turns out to be nonconvex in general. In particular, for the squared error distortion it is shown that D_h(R) is a piecewise concave function. The structural properties of optimal ECSQs for more general source distributions are also investigated. In...
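For a uniform source on [0, 1], the cell structure described in the abstract (N − 1 equal cells plus one shorter cell) gives closed-form entropy and distortion. A numerical sketch, assuming midpoint codepoints (optimal within each cell for squared error); names are ours:

```python
import numpy as np

def ecsq_entropy_distortion(n, c):
    """Entropy (bits) and squared-error distortion of a quantizer for the
    uniform source on [0, 1] with n - 1 cells of equal length d and one
    cell of length c, using midpoint codepoints. A sketch under these
    assumptions, not a reproduction of the paper's derivation."""
    d = (1.0 - c) / (n - 1)
    probs = np.array([d] * (n - 1) + [c])      # cell probabilities = lengths
    entropy = float(-(probs * np.log2(probs)).sum())
    distortion = float((probs ** 3 / 12.0).sum())  # uniform-cell MSE = len^3 / 12
    return entropy, distortion
```

Sweeping c for fixed n traces out the parametric (entropy, distortion) curve whose nonconvexity the abstract highlights.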
A Lagrangian formulation of Zador's entropy-constrained quantization theorem
 IEEE Trans. Inform. Theory
, 2002
"... Zador's classic result for the asymptotic highrate behavior of entropyconstrained vector quantization is recast in a Lagrangian form which better matches the Lloyd algorithm used to optimize such quantizers. The equivalence of the two formulations is shown and the result is proved for source distr ..."
Abstract

Cited by 18 (8 self)
Zador's classic result for the asymptotic high-rate behavior of entropy-constrained vector quantization is recast in a Lagrangian form which better matches the Lloyd algorithm used to optimize such quantizers. The equivalence of the two formulations is shown and the result is proved for source distributions that are absolutely continuous with respect to the Lebesgue measure and satisfy an entropy condition, thereby generalizing the conditions stated by Zador under which the result holds.
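The Lagrangian assignment step that the entropy-constrained Lloyd algorithm uses can be sketched as follows — squared-error distortion and all names are our assumptions:

```python
import numpy as np

def lagrangian_assignment(samples, codebook, lengths, lam):
    """Lagrangian nearest-neighbour step of the entropy-constrained Lloyd
    algorithm: each sample goes to the codeword minimizing squared error
    plus lam times that codeword's code length. A minimal sketch; the
    distortion measure and names are illustrative."""
    cost = (samples[:, None] - codebook[None, :]) ** 2 + lam * lengths[None, :]
    return np.argmin(cost, axis=1)
```

With lam = 0 this is the ordinary nearest-neighbour rule; increasing lam penalizes codewords with long code lengths, which is the trade-off the Lagrangian formulation makes explicit.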
A Random Set Description of a Possibility Measure and Its Natural Extension
 IEEE Transactions on Systems, Man and Cybernetics
, 1997
"...  The relationship is studied between possibility and necessity measures dened on arbitrary spaces, the theory of imprecise probabilities, and elementary random set theory. It is shown how special random sets can be used to generate normal possibility and necessity measures, as well as their natural ..."
Abstract

Cited by 17 (7 self)
The relationship is studied between possibility and necessity measures defined on arbitrary spaces, the theory of imprecise probabilities, and elementary random set theory. It is shown how special random sets can be used to generate normal possibility and necessity measures, as well as their natural extensions. This leads to interesting alternative formulas for the calculation of these natural extensions.

Keywords: Upper probability, upper prevision, coherence, natural extension, possibility measure, random sets.

I. Introduction

Possibility measures were introduced by Zadeh [1] in 1978. In his view, these supremum-preserving set functions are a mathematical representation of the information conveyed by typical affirmative statements in natural language. For recent discussions of this interpretation within the behavioural framework of the theory of imprecise probabilities, we refer to [2], [3], [4]. Supremum-preserving set functions can also be found in the literature under a number o...
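The supremum-preserving set functions discussed here are easy to state concretely — a minimal sketch on a finite space, with a possibility distribution represented as a dict; names are ours:

```python
def possibility(event, pi):
    """Possibility measure of an event induced by a possibility distribution
    pi (a dict from outcomes to degrees in [0, 1]): the supremum of pi over
    the event. Empty events get possibility 0. A minimal finite-space
    sketch; names are illustrative."""
    return max((pi[x] for x in event), default=0.0)
```

The defining supremum-preserving property is that the possibility of a union equals the maximum of the possibilities, in contrast to the additivity of probability measures.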
Spanning and Completeness with Options
 Review of Financial Studies
, 1988
"... The role of ordinary options in facilitating the completion of securities markets is examined in the context of a model of contingent claims sufficiently general to accommodate the continuous distributions of asset pricing theory and option pricing theory. In this context, it is shown that call opti ..."
Abstract

Cited by 16 (0 self)
The role of ordinary options in facilitating the completion of securities markets is examined in the context of a model of contingent claims sufficiently general to accommodate the continuous distributions of asset pricing theory and option pricing theory. In this context, it is shown that call options written on a single security approximately span all contingent claims written on this security and that call options written on portfolios of call options on individual primitive securities approximately span all contingent claims that can be written on these primitive securities. In the case of simple options, explicit formulas are given for the approximating options and portfolios of options. These results are applied to the pricing of contingent claims by arbitrage and to irrelevance propositions in corporate finance.

The role of complete contingent-claims markets in the optimal allocation of risk bearing is well known [Arrow (1964) and Debreu (1959)] and is the cornerstone of the economic theory of financial markets [Mossin (1977)]. As a consequence, it becomes important from a practical as well as a scholarly perspective to determine how complex the securities markets must be in order to achieve the allocational efficiencies of complete markets. The literature on this question has grown to be sizable. Much of this literature has been reviewed in John (1981, 1984) and Amershi (1985). A seminal contribution concerning the complexity of complete securities markets was made by Ross (1976) in analyzing the role of conventional options in com
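The spanning mechanism can be illustrated with a call butterfly, a standard portfolio of three calls whose payoff concentrates near a single strike — a stylized sketch; the strikes and scaling here are our assumptions, not the paper's explicit formulas:

```python
import numpy as np

def call_payoff(s, k):
    """Terminal payoff of a call with strike k at underlying price s."""
    return np.maximum(s - k, 0.0)

def butterfly(s, k, h):
    """Payoff of a call butterfly centred at strike k with width h, scaled
    by 1/h**2; as h shrinks it approximates a state claim paying off only
    near s = k, which is the mechanism behind spanning arbitrary claims
    with calls. A stylized sketch, not the paper's construction."""
    return (call_payoff(s, k - h) - 2.0 * call_payoff(s, k)
            + call_payoff(s, k + h)) / h ** 2
```

Summing such butterflies across a grid of strikes, weighted by the target claim's value at each strike, gives a portfolio of calls approximating that claim.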