Results 1–10 of 240,690
A Simple Model of Capital Market Equilibrium with Incomplete Information
JOURNAL OF FINANCE, 1987
"... The sphere of modern financial economics encompases finance, micro investment theory and much of the economics of uncertainty. As is evident from its influence on other branches of economics including public finance, industrial organization and monetary theory, the boundaries of this sphere are both ..."
Abstract

Cited by 720 (2 self)
 Add to MetaCart
The sphere of modern financial economics encompases finance, micro investment theory and much of the economics of uncertainty. As is evident from its influence on other branches of economics including public finance, industrial organization and monetary theory, the boundaries of this sphere are both permeable and flexible. The complex interactions of time and uncertainty guarantee intellectual challenge and intrinsic excitement to the study of financial economics. Indeed, the mathematics of the subject contain some of the most interesting applications of probability and optimization theory. But for all its mathematical refinement, the research has nevertheless had a direct and significant influence on practice. It was not always thus. Thirty years ago, finance theory was little more than a collection of anecdotes, rules of thumb, and manipulations of accounting data with an almost exclusive focus on corporate financial management. There is no need in this meeting of the guild to recount the subsequent evolution from this conceptual potpourri to a rigorous economic
LogP: Towards a Realistic Model of Parallel Computation
1993
"... A vast body of theoretical research has focused either on overly simplistic models of parallel computation, notably the PRAM, or overly specific models that have few representatives in the real world. Both kinds of models encourage exploitation of formal loopholes, rather than rewarding developme ..."
Abstract

Cited by 562 (15 self)
 Add to MetaCart
development of techniques that yield performance across a range of current and future parallel machines. This paper offers a new parallel machine model, called LogP, that reflects the critical technology trends underlying parallel computers. It is intended to serve as a basis for developing fast, portable
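The LogP parameters themselves are not spelled out in this excerpt: L is the network latency, o the per-message send/receive overhead on a processor, g the minimum gap between consecutive message injections (the reciprocal of per-processor bandwidth), and P the number of processors. A minimal sketch of cost estimation under the model; the parameter values below are hypothetical, chosen only for illustration.

# Back-of-the-envelope communication costs under the LogP model.
# The parameter values are hypothetical, not measurements of any machine.
L = 6   # latency: cycles for one message to cross the network
o = 2   # overhead: cycles a processor is busy sending or receiving
g = 4   # gap: minimum cycles between consecutive injections (1/bandwidth)
P = 8   # number of processor/memory modules (unused in these two estimates)

def one_message_time():
    # Point-to-point cost: send overhead + latency + receive overhead.
    return o + L + o

def k_message_time(k):
    # One sender pipelining k messages: injections are paced by max(g, o);
    # the last message still needs L cycles in flight plus the receiver's o.
    return o + (k - 1) * max(g, o) + L + o

print(one_message_time())  # 10 cycles with the values above
print(k_message_time(4))   # 22 cycles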
New empirical relationships among magnitude, rupture length, rupture width, rupture area, and surface displacement
1994
"... Abstract Source parameters for historical earthquakes worldwide are compiled to develop a series of empirical relationships among moment magnitude (M), surface rupture length, subsurface rupture length, downdip rupture width, rupture area, and maximum and average displacement per event. The resultin ..."
Abstract

Cited by 524 (0 self)
 Add to MetaCart
Abstract Source parameters for historical earthquakes worldwide are compiled to develop a series of empirical relationships among moment magnitude (M), surface rupture length, subsurface rupture length, downdip rupture width, rupture area, and maximum and average displacement per event
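The relationships the abstract describes are log-linear regressions. A minimal sketch of the functional form; the coefficients a and b below are placeholders standing in for the paper's tabulated values, which differ by slip type and are not reproduced in this excerpt.

import math

def magnitude_from_rupture_length(srl_km, a, b):
    # Regressions of this family take the form M = a + b * log10(SRL),
    # with SRL the surface rupture length in km. a and b are placeholders;
    # the paper tabulates fitted values and their standard errors.
    return a + b * math.log10(srl_km)

# Hypothetical coefficients, for illustration only.
print(magnitude_from_rupture_length(50.0, a=5.0, b=1.2))  # about 7.0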
A model of growth through creative destruction
1990
"... This paper develops a model based on Schumpeter's process of creative destruction. It departs from existing models of endogeneous growth in emphasizing obsolescence of old technologies induced by the accumulation of knowledge and the resulting process or industrial innovations. This has both ..."
Abstract

Cited by 1923 (29 self)
 Add to MetaCart
the log of GNP follows a random walk with drift. The size of the drift is the average growth rate of the economy and it is endogeneous to the model; in particular it depends on the size and likilihood of innovations resulting from research and also on the degree of market power available to an innovator.
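A minimal simulation of the process the last sentence describes: innovations arrive at random, each multiplies productivity by a fixed factor, and so log output follows a random walk whose drift equals the arrival rate times the log of the step size. The parameter names and values are illustrative, not the paper's calibration.

import math, random

random.seed(0)

lam = 0.3     # per-period probability an innovation arrives (illustrative)
gamma = 1.15  # each innovation scales productivity by gamma (illustrative)
T = 100_000

log_y = 0.0
for _ in range(T):
    if random.random() < lam:    # an innovation arrives this period
        log_y += math.log(gamma)

print(log_y / T)                 # realized average growth rate
print(lam * math.log(gamma))     # theoretical drift: rate times log step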
Abduction in Logic Programming
"... Abduction in Logic Programming started in the late 80s, early 90s, in an attempt to extend logic programming into a framework suitable for a variety of problems in Artificial Intelligence and other areas of Computer Science. This paper aims to chart out the main developments of the field over th ..."
Abstract

Cited by 616 (76 self)
 Add to MetaCart
Abduction in Logic Programming started in the late 80s, early 90s, in an attempt to extend logic programming into a framework suitable for a variety of problems in Artificial Intelligence and other areas of Computer Science. This paper aims to chart out the main developments of the field over the last ten years and to take a critical view of these developments from several perspectives: logical, epistemological, computational and suitability to application. The paper attempts to expose some of the challenges and prospects for the further development of the field.
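For readers meeting the idea for the first time: abduction runs a logic program backwards, searching for assumable facts (abducibles) that, together with the program, entail an observation. A minimal propositional sketch, not taken from the paper; the rules and atom names are invented for illustration.

from itertools import chain, combinations

# Background theory as propositional Horn rules: (head, body).
rules = [
    ("wet_grass", {"rained"}),
    ("wet_grass", {"sprinkler_on"}),
    ("puddles",   {"rained"}),
]
abducibles = {"rained", "sprinkler_on"}
observation = "wet_grass"

def entails(facts, goal):
    # Forward-chain the Horn rules from `facts`, then test the goal.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return goal in derived

# Enumerate subsets of abducibles smallest-first, keeping only minimal
# explanations (no proper subset already explains the observation).
explanations = []
for hyp in chain.from_iterable(
        combinations(sorted(abducibles), r)
        for r in range(len(abducibles) + 1)):
    h = set(hyp)
    if any(e <= h for e in explanations):
        continue
    if entails(h, observation):
        explanations.append(h)

print(explanations)  # [{'rained'}, {'sprinkler_on'}]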
A Maximum Entropy approach to Natural Language Processing
COMPUTATIONAL LINGUISTICS, 1996
"... The concept of maximum entropy can be traced back along multiple threads to Biblical times. Only recently, however, have computers become powerful enough to permit the widescale application of this concept to real world problems in statistical estimation and pattern recognition. In this paper we des ..."
Abstract

Cited by 1341 (5 self)
 Add to MetaCart
The concept of maximum entropy can be traced back along multiple threads to Biblical times. Only recently, however, have computers become powerful enough to permit the widescale application of this concept to real world problems in statistical estimation and pattern recognition. In this paper we describe a method for statistical modeling based on maximum entropy. We present a maximumlikelihood approach for automatically constructing maximum entropy models and describe how to implement this approach efficiently, using as examples several problems in natural language processing.
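The models in question have the exponential form p(y | x) proportional to exp(sum_i lambda_i * f_i(x, y)), with the weights lambda fit by maximum likelihood. A minimal sketch of that form trained by plain gradient ascent; the paper itself develops improved iterative scaling, and the data and features here are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: N examples, F binary features, C classes (all hypothetical).
N, F, C = 200, 5, 3
X = rng.integers(0, 2, size=(N, F)).astype(float)
y = rng.integers(0, C, size=N)

W = np.zeros((C, F))  # one weight vector lambda_c per class

def probs(X, W):
    # p(y | x) proportional to exp(sum_i lambda_{y,i} f_i(x)), normalized.
    scores = X @ W.T
    scores -= scores.max(axis=1, keepdims=True)  # for numerical stability
    e = np.exp(scores)
    return e / e.sum(axis=1, keepdims=True)

# Maximum-likelihood gradient: observed feature counts minus the counts
# expected under the current model.
for _ in range(500):
    p = probs(X, W)
    Y = np.eye(C)[y]             # one-hot labels, shape (N, C)
    W += 1.0 * ((Y - p).T @ X) / N

print(probs(X[:3], W))           # class distributions for three examples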
Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
2004
"... Suppose we are given a vector f in RN. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects— discrete digital signals, images, etc; how many linear m ..."
Abstract

Cited by 1513 (20 self)
 Add to MetaCart
Suppose we are given a vector f in RN. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects— discrete digital signals, images, etc; how many linear measurements do we need to recover objects from this class to within accuracy ɛ? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal f ∈ F decay like a powerlaw (or if the coefficient sequence of f in a fixed basis decays like a powerlaw), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude f  (1) ≥ f  (2) ≥... ≥ f  (N), and define the weakℓp ball as the class F of those elements whose entries obey the power decay law f  (n) ≤ C · n −1/p. We take measurements 〈f, Xk〉, k = 1,..., K, where the Xk are Ndimensional Gaussian
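A minimal numerical sketch of the setup: a sparse f in R^N, K random Gaussian measurements y_k = ⟨f, X_k⟩, and reconstruction by ℓ1-penalized least squares solved with iterative soft-thresholding (ISTA). The solver and all constants are illustrative; the paper analyzes exact ℓ1 minimization, not this particular algorithm.

import numpy as np

rng = np.random.default_rng(1)

N, K, S = 256, 80, 8                 # signal length, measurements, sparsity
f = np.zeros(N)
f[rng.choice(N, S, replace=False)] = rng.standard_normal(S)

X = rng.standard_normal((K, N)) / np.sqrt(K)  # Gaussian measurement matrix
y = X @ f                                     # the K measurements <f, X_k>

# LASSO relaxation of l1 recovery, solved by iterative soft-thresholding.
lam = 0.01
step = 1.0 / np.linalg.norm(X, 2) ** 2        # 1 / Lipschitz constant of grad
g = np.zeros(N)
for _ in range(3000):
    r = g - step * X.T @ (X @ g - y)          # gradient step on the data fit
    g = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # shrink

print(np.linalg.norm(g - f) / np.linalg.norm(f))  # small relative error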
Bayesian Network Classifiers
1997
"... Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with stateoftheart classifiers such as C4.5. This fact raises the question of whether a classifier with less restr ..."
Abstract

Cited by 788 (23 self)
 Add to MetaCart
Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with stateoftheart classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness that characterize naive Bayes. We experimentally tested these approaches, using problems from the University of California at Irvine repository, and compared them to C4.5, naive Bayes, and wrapper methods for feature selection.
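For reference, the independence assumption at issue: naive Bayes scores a class as p(c) * product_i p(x_i | c), treating features as independent given the class; TAN additionally lets each feature condition on one tree-structured parent. A minimal sketch of the naive Bayes factorization on hypothetical binary data (TAN's tree construction from conditional mutual information is omitted for brevity).

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical binary data: N examples, F features, two classes.
N, F = 300, 4
y = rng.integers(0, 2, size=N)
X = (rng.random((N, F)) < np.where(y[:, None] == 1, 0.7, 0.3)).astype(int)

# Estimate p(c) and p(x_i = 1 | c) with Laplace smoothing.
prior = np.array([(y == c).mean() for c in (0, 1)])
cond = np.array([(X[y == c].sum(axis=0) + 1.0) / ((y == c).sum() + 2.0)
                 for c in (0, 1)])            # shape (2, F)

def predict(x):
    # Naive Bayes: argmax_c  log p(c) + sum_i log p(x_i | c).
    log_post = np.log(prior) + (
        x * np.log(cond) + (1 - x) * np.log(1 - cond)).sum(axis=1)
    return int(np.argmax(log_post))

acc = np.mean([predict(x) == c for x, c in zip(X, y)])
print(f"training accuracy: {acc:.2f}")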
Theory of the Firm: Managerial Behavior, Agency Costs and Ownership Structure
1976
"... This paper integrates elements from the theory of agency, the theory of property rights and the theory of finance to develop a theory of the ownership structure of the firm. We define the concept of agency costs, show its relationship to the ‘separation and control’ issue, investigate the nature of ..."
Abstract

Cited by 2780 (12 self)
 Add to MetaCart
This paper integrates elements from the theory of agency, the theory of property rights and the theory of finance to develop a theory of the ownership structure of the firm. We define the concept of agency costs, show its relationship to the ‘separation and control’ issue, investigate the nature of the agency costs generated by the existence of debt and outside equity, demonstrate who bears costs and why, and investigate the Pareto optimality of their existence. We also provide a new definition of the firm, and show how our analysis of the factors influencing the creation and issuance of debt and equity claims is a special case of the supply side of the completeness of markets problem.