Results 1–10 of 5,248
An algorithm for finding best matches in logarithmic expected time. ACM Transactions on Mathematical Software, 1977. Cited by 764 (2 self).
"... An algorithm and data structure are presented for searching a file containing N records, each described by k real-valued keys, for the m closest matches or nearest neighbors to a given query record. The computation required to organize the file is proportional to kN log N. The expected number of recor ..."
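The nearest-neighbor search this entry describes is the classic k-d tree technique. A minimal illustrative sketch follows (not the paper's optimized algorithm; all function names here are hypothetical): build a tree by cycling through the k coordinates as splitting axes, then search it, pruning any subtree whose splitting plane lies farther away than the best match found so far.

```python
def dist2(a, b):
    """Squared Euclidean distance between two k-dimensional points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def build_kdtree(points, depth=0):
    """Recursively build a k-d tree, splitting on one coordinate per level."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, query, depth=0, best=None):
    """Return the stored point closest to `query` (Euclidean distance)."""
    if node is None:
        return best
    axis = depth % len(query)
    point = node["point"]
    if best is None or dist2(point, query) < dist2(best, query):
        best = point
    # Descend first into the side of the splitting plane containing the query.
    if query[axis] < point[axis]:
        near, far = node["left"], node["right"]
    else:
        near, far = node["right"], node["left"]
    best = nearest(near, query, depth + 1, best)
    # Search the far side only if the plane is closer than the current best.
    if (query[axis] - point[axis]) ** 2 < dist2(best, query):
        best = nearest(far, query, depth + 1, best)
    return best

pts = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
tree = build_kdtree(pts)
print(nearest(tree, (9, 2)))  # (8, 1)
```

The pruning test is what yields the logarithmic expected query time the title refers to: on well-distributed data, most subtrees are skipped.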
Thresholding of statistical maps in functional neuroimaging using the false discovery rate. NeuroImage, 2002. Cited by 521 (9 self).
"... Finding objective and effective thresholds for voxelwise statistics derived from neuroimaging data has been a longstanding problem. With at least one test performed for every voxel in an image, some correction of the thresholds is needed to control the error rates, but standard procedures for mult ..."
"... controlling procedures will be effective for the analysis of neuroimaging data. These procedures operate simultaneously on all voxelwise test statistics to determine which tests should be considered statistically significant. The innovation of the procedures is that they control the expected proportion of the rejected ..."
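The FDR-controlling procedure this entry applies to voxelwise statistics is, in its standard form, the Benjamini-Hochberg step-up procedure. A minimal sketch (illustrative p-values, not neuroimaging data):

```python
def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure: return the (sorted) indices of
    tests declared significant while controlling the FDR at level q."""
    m = len(p_values)
    # Rank the p-values in ascending order, remembering original indices.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k such that p_(k) <= (k / m) * q.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * q:
            k_max = rank
    # Reject all hypotheses up to and including rank k_max.
    return sorted(order[:k_max])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.5, 0.99]
print(benjamini_hochberg(pvals, q=0.05))  # [0, 1]
```

Note the step-up structure: a p-value above its own threshold can still be rejected if some larger-ranked p-value passes, which is what makes the procedure less conservative than Bonferroni while still controlling the expected proportion of false rejections.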
Bid, ask and transaction prices in a specialist market with heterogeneously informed traders. Journal of Financial Economics, 1985. Cited by 1273 (5 self).
"... The presence of traders with superior information leads to a positive bid-ask spread even when the specialist is risk-neutral and makes zero expected profits. The resulting transaction prices convey information, and the expectation of the average spread squared times volume is bounded by a number th ..."
Systems Competition and Network Effects. Journal of Economic Perspectives, Volume 8, Number 2, Spring 1994, pages 93–115. Cited by 544 (6 self).
"... Many products have little or no value in isolation, but generate value when combined with others. Examples include: nuts and bolts, which together provide fastening services; home audio or video components and programming, which together provide entertainment services; automobiles, repair parts and ..."
"... photographic services. These are all examples of products that are strongly complementary, although they need not be consumed in fixed proportions. We describe them as forming systems, which refers to collections of two or more components together with an interface that allows the components to work together ..."
An equilibrium characterization of the term structure. Journal of Financial Economics, 1977. Cited by 1041 (0 self).
"... The paper derives a general form of the term structure of interest rates. The following assumptions are made: (A.1) the instantaneous (spot) interest rate follows a diffusion process; (A.2) the price of a discount bond depends only on the spot rate over its term; and (A.3) the market is efficient. ..."
"... Under these assumptions, it is shown by means of an arbitrage argument that the expected rate of return on any bond in excess of the spot rate is proportional to its standard deviation. This property is then used to derive a partial differential equation for bond prices. The solution to that equation ..."
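The arbitrage argument summarized above yields the standard term-structure equation. A hedged reconstruction in conventional textbook notation (not necessarily the paper's own symbols): with the spot rate following $dr = \mu(r,t)\,dt + \sigma(r,t)\,dW$ and $\lambda(r,t)$ denoting the market price of risk, the price $P(r,t)$ of a discount bond maturing at $T$ satisfies

```latex
\frac{\partial P}{\partial t}
  + \bigl(\mu(r,t) - \lambda(r,t)\,\sigma(r,t)\bigr)\,\frac{\partial P}{\partial r}
  + \tfrac{1}{2}\,\sigma^{2}(r,t)\,\frac{\partial^{2} P}{\partial r^{2}}
  - r\,P = 0,
\qquad P(r,T) = 1.
```

Specializing the drift and volatility to particular functional forms gives closed-form bond prices, which is the route the snippet's "solution to that equation" refers to.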
Policy gradient methods for reinforcement learning with function approximation. In NIPS, 1999. Cited by 439 (20 self).
"... Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly repres ..."
"... represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. Our main new result is to show ..."
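The REINFORCE update the snippet mentions can be sketched on the smallest possible problem: a two-armed bandit with a softmax policy (an illustrative toy, not the paper's function-approximation setting; all names and parameters here are hypothetical). The parameter update is the score-function gradient, reward times the gradient of the log-probability of the action taken.

```python
import math
import random

def reinforce_bandit(true_means, steps=5000, lr=0.1, seed=0):
    """REINFORCE on a two-armed Gaussian bandit with a softmax policy.

    theta holds one action preference per arm; each step ascends the
    sampled gradient of expected reward: reward * grad log pi(action).
    Returns the final action probabilities.
    """
    rng = random.Random(seed)
    theta = [0.0, 0.0]
    for _ in range(steps):
        # Softmax policy over the two arms.
        z = [math.exp(t) for t in theta]
        total = sum(z)
        probs = [v / total for v in z]
        # Sample an action, then a noisy reward for that arm.
        action = 0 if rng.random() < probs[0] else 1
        reward = rng.gauss(true_means[action], 1.0)
        # Softmax score function: grad_theta[i] log pi(action) = 1{i==action} - pi[i]
        for i in range(2):
            grad = (1.0 if i == action else 0.0) - probs[i]
            theta[i] += lr * reward * grad
    z = [math.exp(t) for t in theta]
    total = sum(z)
    return [v / total for v in z]

probs = reinforce_bandit([1.0, 2.0])
print(probs)  # the policy shifts probability toward the better arm (mean 2.0)
```

Because both rewards are positive, individual updates push both preferences, but in expectation the difference drifts toward the higher-mean arm, so the policy concentrates there; adding a baseline (as in actor-critic methods) reduces the variance of this update.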
The positive false discovery rate: A Bayesian interpretation and the q-value. Annals of Statistics, 2003. Cited by 337 (8 self).
"... Multiple hypothesis testing is concerned with controlling the rate of false positives when testing several hypotheses simultaneously. One multiple hypothesis testing error measure is the false discovery rate (FDR), which is loosely defined to be the expected proportion of false positives among all s ..."
Biodiversity – global biodiversity scenarios for the year 2100. 2000. Cited by 362 (5 self).
"... Scenarios of changes in biodiversity for the year 2100 can now be developed based on scenarios of changes in atmospheric carbon dioxide, climate, vegetation, and land use and the known sensitivity of biodiversity to these changes. This study identified a ranking of the importance of drivers of chang ..."
"... of change, a ranking of the biomes with respect to expected changes, and the major sources of uncertainties. For terrestrial ecosystems, land-use change probably will have the largest effect, followed by climate change, nitrogen deposition, biotic exchange, and elevated carbon dioxide concentration ..."
The Average Distance in a Random Graph with Given Expected Degrees. Cited by 289 (13 self).
"... Random graph theory is used to examine the "small-world phenomenon" – any two strangers are connected through a short chain of mutual acquaintances. We will show that for certain families of random graphs with given expected degrees, the average distance is almost surely of order log n / log d̃ whe ..."
"... where d̃ is the weighted average of the sum of squares of the expected degrees. Of particular interest are power law random graphs in which the number of vertices of degree k is proportional to 1/k^β for some fixed exponent β. For the case of β > 3, we prove that the average distance of the power law ..."
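The order estimate in this entry is directly computable from a degree sequence. A small sketch (the function name is hypothetical), taking d̃ as the second-order average degree, i.e. the sum of squared expected degrees divided by their sum, as the snippet describes:

```python
import math

def avg_distance_estimate(expected_degrees):
    """Order-of-magnitude estimate log n / log d~ for the average distance
    in a random graph with the given expected degrees, where d~ is the
    second-order average degree: sum(d_i^2) / sum(d_i)."""
    n = len(expected_degrees)
    d_tilde = sum(d * d for d in expected_degrees) / sum(expected_degrees)
    return math.log(n) / math.log(d_tilde)

# 10,000 vertices, each with expected degree 4: log(10000) / log(4) ≈ 6.64
print(round(avg_distance_estimate([4.0] * 10000), 2))
```

For a regular degree sequence d̃ equals the common degree, recovering the familiar log n / log d diameter scaling for sparse random graphs.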
Expectation-based syntactic comprehension. 2006. Cited by 231 (18 self).
"... This paper investigates the role of resource allocation as a source of processing difficulty in human sentence comprehension. The paper proposes a simple information-theoretic characterization of processing difficulty as the work incurred by resource reallocation during parallel, incremental, probabi ..."
"... probabilistic disambiguation in sentence comprehension, and demonstrates its equivalence to the theory of Hale (2001), in which the difficulty of a word is proportional to its surprisal (its negative log-probability) in the context within which it appears. This proposal subsumes and clarifies findings that high ..."
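The surprisal measure named in this entry is a one-line computation. A sketch with a made-up next-word distribution (the probabilities below are hypothetical, for illustration only):

```python
import math

def surprisal(prob):
    """Surprisal of a word: its negative log2-probability in context, in bits."""
    return -math.log2(prob)

# Hypothetical next-word probabilities in some sentence context.
context_probs = {"mat": 0.5, "floor": 0.25, "moon": 0.001}
for word, p in sorted(context_probs.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{word}: {surprisal(p):.2f} bits")
```

Highly predictable words (p = 0.5 gives 1 bit) carry little processing cost under this theory, while a very unexpected word (p = 0.001, roughly 10 bits) is predicted to be correspondingly harder to process.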