Results 1–10 of 209
On the impossibility of informationally efficient markets
 American Economic Review
, 1980
Stochastic Completion Fields: A Neural Model of Illusory Contour Shape and Salience
 Neural Computation
, 1995
Abstract

Cited by 202 (14 self)
We describe an algorithm and representation-level theory of illusory contour shape and salience. Unlike previous theories, our model is derived from a single assumption, namely that the prior probability distribution of boundary-completion shape can be modeled by a random walk in a lattice whose points are positions and orientations in the image plane (i.e., the space which one can reasonably assume is represented by neurons of the mammalian visual cortex). Our model does not employ numerical relaxation or other explicit minimization, but instead relies on the fact that the probability that a particle following a random walk will pass through a given position and orientation on a path joining two boundary fragments can be computed directly as the product of two vector-field convolutions. We show that for the random walk we define, the maximum-likelihood paths are curves of least energy; that is, on average, random walks follow paths commonly assumed to model the shape of illusory co...
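The completion-field construction lends itself to a direct numerical illustration. The sketch below is not the paper's convolution-based algorithm: it estimates the source and sink fields by Monte Carlo simulation of decaying random walks in a position-and-orientation state, then multiplies them pointwise. The grid size, decay rate, and orientation diffusion constant are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = 64        # lattice width/height (assumption; the paper convolves vector fields instead)
SIGMA = 0.15     # orientation diffusion per step (assumption)
DECAY = 0.01     # per-step particle decay probability (assumption)
STEPS = 120
N_WALKS = 4000

def mc_field(start_xy, start_theta):
    """Monte Carlo estimate of how often a decaying random walk launched
    from (start_xy, start_theta) visits each lattice cell."""
    counts = np.zeros((GRID, GRID))
    for _ in range(N_WALKS):
        (x, y), th = start_xy, start_theta
        for _ in range(STEPS):
            if rng.random() < DECAY:           # particle decays
                break
            th += rng.normal(0.0, SIGMA)       # random walk in orientation
            x += np.cos(th)
            y += np.sin(th)                    # unit step along current heading
            ix, iy = int(round(x)), int(round(y))
            if not (0 <= ix < GRID and 0 <= iy < GRID):
                break
            counts[iy, ix] += 1.0
    return counts / N_WALKS

# Source field: walks forward from the left boundary fragment.
# Sink field: walks from the right fragment with orientation flipped by pi.
source = mc_field((10, 32), 0.0)
sink = mc_field((54, 32), np.pi)
completion = source * sink   # stochastic completion field (pointwise product)
iy, ix = np.unravel_index(completion.argmax(), completion.shape)
print("completion-field peak at", (ix, iy))
```

The completion field is supported on the horizontal band joining the two fragments, which is the straight-line completion one would expect for collinear inducers.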
Bisimulation for Labelled Markov Processes
 Information and Computation
, 1997
Abstract

Cited by 186 (25 self)
In this paper we introduce a new class of labelled transition systems, Labelled Markov Processes, and define bisimulation for them. Labelled Markov processes are ...
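For the discrete special case (a finite-state probabilistic transition system; the paper's Labelled Markov Processes generalize this to continuous state spaces), probabilistic bisimulation can be computed by partition refinement: split blocks until every pair of states in a block assigns equal probability, for every label, to every block. The toy system below is our own invention, not from the paper.

```python
from collections import defaultdict

# trans[(state, label)] = {target_state: probability}  (toy example)
trans = {
    ("s0", "a"): {"s1": 0.5, "s2": 0.5},
    ("t0", "a"): {"t1": 1.0},
    ("s1", "b"): {"s1": 1.0},
    ("s2", "b"): {"s2": 1.0},
    ("t1", "b"): {"t1": 1.0},
}
states = {"s0", "s1", "s2", "t0", "t1"}
labels = {"a", "b"}

def prob_to_block(s, label, block):
    d = trans.get((s, label), {})
    return sum(p for tgt, p in d.items() if tgt in block)

def bisimulation_classes(states, labels, trans):
    """Partition refinement: refine until every state in a block gives the
    same probability, for each label, to each block of the partition.
    (Exact float comparison suffices for this toy example.)"""
    partition = [set(states)]
    changed = True
    while changed:
        changed = False
        new_partition = []
        for block in partition:
            sig = defaultdict(set)   # group states by signature
            for s in block:
                key = tuple(
                    (l, i, prob_to_block(s, l, b))
                    for l in sorted(labels)
                    for i, b in enumerate(partition)
                )
                sig[key].add(s)
            if len(sig) > 1:
                changed = True
            new_partition.extend(sig.values())
        partition = new_partition
    return partition

classes = bisimulation_classes(states, labels, trans)
print(sorted(sorted(c) for c in classes))
```

Here s0 and t0 come out bisimilar: under label a both reach, with probability 1, states that loop forever under label b, even though s0 branches and t0 does not.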
Empirics for Growth and Distribution
 Polarization, Stratification and Convergence Clubs", Journal of Economic Growth
, 1997
Abstract

Cited by 121 (3 self)
[[Foreword or abstract goes here. Richard Stone Lectures in London. Various Econometric Society presentations. Who this book is for (graduate student background in mathematics and probability through to current research findings, and extensions and new directions for further work)]]
Learning and Design of Principal Curves
, 2000
Abstract

Cited by 94 (4 self)
Principal curves have been defined as "self-consistent" smooth curves which pass through the "middle" of a $d$-dimensional probability distribution or data cloud. They give a summary of the data and also serve as an efficient feature extraction tool. We take a new approach by defining principal curves as continuous curves of a given length which minimize the expected squared distance between the curve and points of the space randomly chosen according to a given distribution. The new definition makes it possible to theoretically analyze principal curve learning from training data and it also leads to a new practical construction. Our theoretical learning scheme chooses a curve from a class of polygonal lines with $k$ segments and with a given total length, to minimize the average squared distance over $n$ training points drawn independently. Convergence properties of this learning scheme are analyzed and a practical version of this theoretical algorithm is implemented. In each iteration of the algorithm a new vertex is added to the polygonal line and the positions of the vertices are updated so that they minimize a penalized squared distance criterion. Simulation results demonstrate that the new algorithm compares favorably with previous methods both in terms of performance and computational complexity, and is more robust to varying data models.
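The polygonal-line idea can be caricatured in a few lines. The sketch below is not the authors' algorithm (which grows the polygonal line one vertex at a time while minimizing a penalized squared-distance criterion); it is a much-simplified stand-in that initializes k vertices on the first principal component and alternates nearest-vertex assignment with a smoothness-penalized mean update. The data, k, and the smoothing weight are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: noisy samples around a half-circle arc (illustrative only).
t = rng.uniform(0, np.pi, 400)
X = np.c_[np.cos(t), np.sin(t)] + rng.normal(0, 0.05, (400, 2))

def fit_polygonal_curve(X, k=12, iters=50, smooth=0.2):
    """Simplified polygonal principal-curve sketch: vertices start on the
    first principal component and are refined by alternating nearest-vertex
    assignment with a smoothness-penalized mean update."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean)          # vt[0] = first principal direction
    V = mean + np.linspace(-1, 1, k)[:, None] * vt[0] * X.std()
    for _ in range(iters):
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)                 # nearest vertex for each point
        for j in range(k):
            pts = X[idx == j]
            target = pts.mean(axis=0) if len(pts) else V[j]
            if 0 < j < k - 1:                   # penalty: pull toward neighbor midpoint
                target = (1 - smooth) * target + smooth * 0.5 * (V[j - 1] + V[j + 1])
            V[j] = target
    return V

V = fit_polygonal_curve(X)
radii = np.linalg.norm(V, axis=1)   # fitted vertices should hug the unit arc
print(radii.round(2))
```

The vertex radii come out close to 1, i.e. the polygonal line passes through the "middle" of the arc-shaped cloud rather than through its centroid.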
Consistent Specification Testing With Nuisance Parameters Present Only Under The Alternative
, 1995
Abstract

Cited by 83 (13 self)
The nonparametric and the nuisance parameter approaches to consistently testing statistical models are both attempts to estimate topological measures of distance between a parametric and a nonparametric fit, and neither dominates in experiments. This topological unification allows us to greatly extend the nuisance parameter approach. How and why the nuisance parameter approach works and how it can be extended bears closely on recent developments in artificial neural networks. Statistical content is provided by viewing specification tests with nuisance parameters as tests of hypotheses about Banach-valued random elements and applying the Banach Central Limit Theorem and Law of the Iterated Logarithm, leading to simple procedures that can be used as a guide to when computationally more elaborate procedures may be warranted.
1. Introduction
In testing whether or not a parametric statistical model is correctly specified, there are a number of apparently distinct approaches one might take. T...
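To make the nuisance-parameter idea concrete, the sketch below computes a Bierens-style conditional moment statistic: OLS residuals from the parametric model under test are correlated against the weight exp(γx), and the statistic takes the sup over a grid of nuisance parameters γ. The data-generating processes, the exponential weight family, and the grid are illustrative assumptions; no critical values are computed.

```python
import numpy as np

rng = np.random.default_rng(2)

def sup_moment_stat(x, y, gammas):
    """Sup over the nuisance parameter gamma of the scaled sample moment
    |n^{-1/2} sum_i e_i exp(gamma x_i)|, where e are OLS residuals from
    the linear model under test (a Bierens-style weight family)."""
    n = len(x)
    Z = np.c_[np.ones(n), x]
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    e = y - Z @ beta
    return max(abs(e @ np.exp(g * x)) / np.sqrt(n) for g in gammas)

n = 500
x = rng.uniform(-1, 1, n)
gammas = np.linspace(0.1, 3.0, 30)            # nuisance-parameter grid (assumption)

y_null = 1.0 + 2.0 * x + rng.normal(0, 1, n)              # linear model correct
y_alt = 1.0 + 2.0 * x + 2.0 * x**2 + rng.normal(0, 1, n)  # neglected nonlinearity

stat_null = sup_moment_stat(x, y_null, gammas)
stat_alt = sup_moment_stat(x, y_alt, gammas)
print(f"under the null: {stat_null:.2f}, under the alternative: {stat_alt:.2f}")
```

Under the misspecified alternative the residuals retain a quadratic pattern that correlates with the exponential weights, so the statistic is much larger than under the correctly specified null.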
Least Squares Policy Evaluation Algorithms With Linear Function Approximation
 Theory and Applications
, 2002
Abstract

Cited by 82 (10 self)
We consider policy evaluation algorithms within the context of infinite-horizon dynamic programming problems with discounted cost. We focus on discrete-time dynamic systems with a large number of states, and we discuss two methods, which use simulation, temporal differences, and linear cost function approximation. The first method is a new gradient-like algorithm involving least-squares subproblems and a diminishing stepsize, which is based on the λ-policy iteration method of Bertsekas and Ioffe. The second method is the LSTD(λ) algorithm recently proposed by Boyan, which for λ = 0 coincides with the linear least-squares temporal-difference algorithm of Bradtke and Barto. At present, there is only a convergence result by Bradtke and Barto for the LSTD(0) algorithm. Here, we strengthen this result by showing the convergence of LSTD(λ), with probability 1, for every λ ∈ [0, 1].
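The λ = 0 case, the Bradtke-Barto linear least-squares temporal-difference estimator, is easy to sketch: accumulate A = Σ φ(s)(φ(s) − γ φ(s'))ᵀ and b = Σ φ(s) r over simulated transitions and solve Aθ = b. The chain MDP, one-hot features, and episode count below are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# A 5-state chain with a fixed policy: step right w.p. 0.7, left w.p. 0.3;
# states 0 and 4 are terminal, reward 1 on reaching state 4.
# One-hot features make this the tabular special case.
N_STATES, GAMMA = 5, 0.9

def features(s):
    phi = np.zeros(N_STATES)
    phi[s] = 1.0
    return phi

def episode():
    s, out = 2, []
    while True:
        s2 = s + 1 if rng.random() < 0.7 else s - 1
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        done = s2 in (0, N_STATES - 1)
        out.append((s, r, s2, done))
        if done:
            return out
        s = s2

# LSTD(0): A = sum phi(s)(phi(s) - gamma*phi(s'))^T, b = sum phi(s)*r, then
# solve A theta = b. Lambda > 0 would replace phi(s) by an eligibility trace;
# lambda = 0 recovers the Bradtke-Barto estimator.
A = np.zeros((N_STATES, N_STATES))
b = np.zeros(N_STATES)
for _ in range(2000):
    for s, r, s2, done in episode():
        phi = features(s)
        phi2 = np.zeros(N_STATES) if done else features(s2)   # V(terminal) = 0
        A += np.outer(phi, phi - GAMMA * phi2)
        b += phi * r
theta = np.linalg.solve(A + 1e-8 * np.eye(N_STATES), b)  # tiny ridge for unvisited states
print(theta.round(3))   # estimated state values under the fixed policy
```

For this chain the exact values are roughly V(1) ≈ 0.42, V(2) ≈ 0.67, V(3) ≈ 0.88, and the LSTD(0) solve recovers them closely.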
Evolving Aspirations and Cooperation
 Journal of Economic Theory
, 1998
Abstract

Cited by 59 (3 self)
This paper therefore builds on [3], in which a model of consistent aspirations-based learning was introduced ...
Estimating covariation: Epps effect and microstructure noise
 Journal of Econometrics, forthcoming
, 2009
Abstract

Cited by 56 (3 self)
This paper is about how to estimate the integrated covariance 〈X, Y〉T of two assets over a fixed time horizon [0, T], when the observations of X and Y are “contaminated” and when such noisy observations are at discrete, but not synchronized, times. We show that the usual previous-tick covariance estimator is biased, and the size of the bias is more pronounced for less liquid assets. This is an analytic characterization of the Epps effect. We also provide an optimal sampling frequency which balances the trade-off between the bias and various sources of stochastic error terms, including nonsynchronous trading, microstructure noise, and time discretization. Finally, a two-scales covariance estimator is provided which simultaneously cancels (to first order) the Epps effect and the effect of microstructure noise. The gain is demonstrated in data.
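The Epps effect is easy to reproduce in simulation. The sketch below (all parameters are illustrative assumptions) generates two correlated log-price paths observed at nonsynchronous times with additive noise and computes the previous-tick covariance estimator on grids of increasing fineness: at very high sampling frequency the estimate collapses toward zero, while a moderate frequency recovers the integrated covariance reasonably well.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two correlated log-price paths on a fine grid, observed at independent
# nonsynchronous times with additive microstructure noise (all parameters
# are illustrative assumptions).
T, n_fine, rho, sig = 1.0, 100_000, 0.8, 1.0
dt = T / n_fine
dW = rng.normal(0.0, np.sqrt(dt), (2, n_fine))
X = np.cumsum(sig * dW[0])
Y = np.cumsum(sig * (rho * dW[0] + np.sqrt(1 - rho**2) * dW[1]))
true_cov = rho * sig**2 * T                      # integrated covariance <X,Y>_T = 0.8

times_x = np.sort(rng.uniform(0, T, 5_000))      # nonsynchronous observation times
times_y = np.sort(rng.uniform(0, T, 5_000))
obs_x = X[(times_x * n_fine).astype(int)] + rng.normal(0, 0.01, 5_000)
obs_y = Y[(times_y * n_fine).astype(int)] + rng.normal(0, 0.01, 5_000)

def previous_tick_cov(n_grid):
    """Previous-tick estimator: sample each asset at the last observation at
    or before each grid time, then sum products of synchronized returns."""
    grid = np.linspace(0, T, n_grid + 1)[1:]
    sx = obs_x[np.maximum(np.searchsorted(times_x, grid, side="right") - 1, 0)]
    sy = obs_y[np.maximum(np.searchsorted(times_y, grid, side="right") - 1, 0)]
    return float(np.diff(sx) @ np.diff(sy))

for n_grid in (50_000, 2_000, 100):
    print(n_grid, round(previous_tick_cov(n_grid), 3))
# At the highest frequency the estimate collapses toward zero (Epps effect);
# the moderate grid comes much closer to the true value of 0.8.
```

As the grid becomes finer than the observation times, the synchronized returns of the two assets almost never cover the same real-time interval, so their products vanish and the estimator is biased toward zero, which is the mechanism the paper characterizes analytically.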
Convergent SDPRelaxations in Polynomial Optimization with Sparsity
 SIAM Journal on Optimization
Abstract

Cited by 49 (13 self)
We consider a polynomial programming problem P on a compact semialgebraic set K ⊂ R^n, described by m polynomial inequalities gj(X) ≥ 0, and with criterion f ∈ R[X]. We propose a hierarchy of semidefinite relaxations in the spirit of those of Waki et al. [9]. In particular, the SDP relaxation of order r has the following two features: (a) the number of variables is O(κ^{2r}), where κ = max[κ1, κ2], with κ1 (resp. κ2) being the maximum number of variables appearing in the monomials of f (resp. appearing in a single constraint gj(X) ≥ 0); (b) the largest size of the LMIs (Linear Matrix Inequalities) is O(κ^r). This is to be compared with the respective number of variables O(n^{2r}) and LMI size O(n^r) in the original SDP relaxations defined in [11]. Therefore, great computational savings are expected in case of sparsity in the data {gj, f}, i.e., when κ is small, a frequent case in practical applications of interest. The novelty with respect to [9] is that we prove convergence to the global optimum of P when the sparsity pattern satisfies a condition often encountered in large-size problems of practical applications, known as the running intersection property in graph theory. In such cases, and as a byproduct, we also obtain a new representation result for polynomials positive on a basic closed semialgebraic set, a sparse version of Putinar’s Positivstellensatz [16].
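The claimed savings can be quantified from binomial counts alone: the dense order-r relaxation has C(n+2r, 2r) moment variables and a moment matrix of side C(n+r, r), while the sparse version pays those costs per clique of κ variables. The sketch below uses an illustrative clique count and ignores clique overlaps (whose shared moments are counted once in practice), so the sparse count is an upper bound.

```python
from math import comb

def n_moments(n_vars, r):
    """Number of monomials of degree <= 2r in n_vars variables: C(n+2r, 2r).
    This is the moment-variable count of the order-r relaxation."""
    return comb(n_vars + 2 * r, 2 * r)

def lmi_size(n_vars, r):
    """Side length of the order-r moment matrix: C(n+r, r)."""
    return comb(n_vars + r, r)

n, kappa, r = 50, 5, 2      # 50 variables, cliques of kappa = 5 variables, order 2
n_cliques = 20              # illustrative clique count (assumption)

dense_vars = n_moments(n, r)
sparse_vars = n_cliques * n_moments(kappa, r)   # one moment block per clique
print(f"dense : {dense_vars} variables, LMI size {lmi_size(n, r)}")
print(f"sparse: {sparse_vars} variables, LMI size {lmi_size(kappa, r)}")
```

For these (assumed) sizes the dense relaxation needs 316,251 moment variables with 1326 x 1326 LMIs, against 2,520 variables with 21 x 21 LMIs for the sparse one, illustrating the O(n^{2r}) versus O(κ^{2r}) gap the abstract describes.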