Results 1–10 of 8,202,586
The Nature of Statistical Learning Theory
, 1999
"... Statistical learning theory was introduced in the late 1960’s. Until the 1990’s it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990’s new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical learning theory not only a tool for the theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory including both ..."
Cited by 12976 (32 self)
Maximum likelihood from incomplete data via the EM algorithm
 JOURNAL OF THE ROYAL STATISTICAL SOCIETY, SERIES B
, 1977
"... A broadly applicable algorithm for computing maximum likelihood estimates from incomplete data is presented at various levels of generality. Theory showing the monotone behaviour of the likelihood and convergence of the algorithm is derived. Many examples are sketched, including missing value situations ..."
Cited by 11807 (17 self)
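The monotone behaviour of the likelihood that this entry refers to is easy to observe numerically. The following is a minimal illustrative sketch (mine, not the paper's code): EM fitted to a two-component 1-D Gaussian mixture, recording the log-likelihood at each iteration, which never decreases.

```python
import math
import random

def em_gmm_1d(data, iters=50):
    """Fit a two-component 1-D Gaussian mixture with EM.

    Returns ((weights, means, variances), log_likelihoods). The recorded
    log-likelihood sequence is non-decreasing, which is the monotone
    behaviour the paper proves in general.
    """
    data = sorted(data)
    n = len(data)
    means = [data[n // 4], data[3 * n // 4]]   # crude quartile initialisation
    variances = [1.0, 1.0]
    weights = [0.5, 0.5]
    log_liks = []

    def normal_pdf(x, m, v):
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point,
        # plus the log-likelihood under the current parameters.
        resp, ll = [], 0.0
        for x in data:
            p = [weights[k] * normal_pdf(x, means[k], variances[k]) for k in (0, 1)]
            total = p[0] + p[1]
            ll += math.log(total)
            resp.append([p[0] / total, p[1] / total])
        log_liks.append(ll)
        # M-step: re-estimate weights, means and variances from responsibilities.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            weights[k] = nk / n
            means[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            variances[k] = max(
                1e-6, sum(r[k] * (x - means[k]) ** 2 for r, x in zip(resp, data)) / nk
            )
    return (weights, means, variances), log_liks

random.seed(0)
sample = ([random.gauss(-2.0, 1.0) for _ in range(200)]
          + [random.gauss(3.0, 1.0) for _ in range(200)])
params, log_liks = em_gmm_1d(sample)
```

On this synthetic two-cluster sample the estimated means separate toward the true centres, and the log-likelihood trace rises monotonically until convergence.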
The information bottleneck method
 University of Illinois
, 1999
"... We define the relevant information in a signal x ∈ X as being the information that this signal provides about another signal y ∈ Y. Examples include the information that face images provide about the names of the people portrayed, or the information that speech sounds provide about the words spoken. Understanding the signal x requires more than just predicting y; it also requires specifying which features of X play a role in the prediction. We formalize this problem as that of finding a short code for X that preserves the maximum information about Y. That is, we squeeze the information that X provides ..."
Cited by 545 (38 self)
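The "relevant information" this entry defines is the mutual information I(X;Y). A tiny illustrative computation (my sketch, not the paper's code) of that quantity from a discrete joint distribution:

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits from a joint distribution given as {(x, y): p(x, y)}.

    This is the quantity the information bottleneck treats as the relevant
    information a signal X carries about Y.
    """
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# X determines Y completely: one full bit of relevant information.
deterministic = {(0, 0): 0.5, (1, 1): 0.5}
# X independent of Y: zero relevant information.
independent = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}
```

The bottleneck itself then searches for a compressed code of X that keeps as much of this quantity about Y as possible; computing I(X;Y) is the building block.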
Predictive reward signal of dopamine neurons
 Journal of Neurophysiology
, 1998
"... Schultz, Wolfram. Predictive reward signal of dopamine neurons. J. Neurophysiol. 80: 1–27, 1998. The effects of lesions, receptor blocking, electrical self-stimulation, and drugs ... is called rewards, which elicit and reinforce approach behavior. The functions of rewards were developed further during ... of rewards, and the availability of rewards determines some of the basic parameters of the subject’s life. ... neurons show phasic activations after primary liquid and food rewards and conditioned, reward-predicting visual and auditory stimuli. They show biphasic, activation–depression responses after stimuli ..."
Cited by 717 (12 self)
Predictive regressions
 Journal of Financial Economics
, 1999
"... When a rate of return is regressed on a lagged stochastic regressor, such as a dividend yield, the regression disturbance is correlated with the regressor's innovation. The OLS estimator's finite-sample properties, derived here, can depart substantially from the standard regression setting ..."
Cited by 452 (19 self)
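The finite-sample distortion this entry describes is easy to reproduce by simulation. Below is a minimal Monte Carlo sketch (illustrative, with made-up parameter values, not the paper's derivation): a persistent AR(1) regressor whose innovation is negatively correlated with the return disturbance, as with a dividend yield. Even with a true slope of zero, the average OLS slope comes out positive.

```python
import math
import random

def ols_slope_one_path(T=60, rho=0.95, beta=0.0, corr=-0.9, rng=random):
    """Simulate one predictive-regression sample path and return the OLS
    slope of the return on the lagged regressor.

    x follows an AR(1) with innovation v; the return disturbance u is
    correlated (corr) with v, mimicking a dividend-yield regressor.
    """
    x = 0.0
    xs, rs = [], []
    for _ in range(T):
        v = rng.gauss(0.0, 1.0)
        u = corr * v + math.sqrt(1.0 - corr ** 2) * rng.gauss(0.0, 1.0)
        xs.append(x)               # lagged regressor
        rs.append(beta * x + u)    # return; disturbance correlated with v
        x = rho * x + v
    mx, mr = sum(xs) / T, sum(rs) / T
    sxx = sum((a - mx) ** 2 for a in xs)
    sxr = sum((a - mx) * (b - mr) for a, b in zip(xs, rs))
    return sxr / sxx

rng = random.Random(3)
slopes = [ols_slope_one_path(rng=rng) for _ in range(2000)]
mean_slope = sum(slopes) / len(slopes)   # true slope is 0; estimate is biased up
```

The upward bias comes through the well-known downward finite-sample bias of the estimated AR(1) coefficient combined with the negative disturbance-innovation correlation; with longer samples (larger T) the bias shrinks.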
Where the REALLY Hard Problems Are
 IN J. MYLOPOULOS AND R. REITER (EDS.), PROCEEDINGS OF 12TH INTERNATIONAL JOINT CONFERENCE ON AI (IJCAI-91), VOLUME 1
, 1991
"... It is well known that for many NP-complete problems, such as K-Sat, etc., typical cases are easy to solve, so that computationally hard cases must be rare (assuming P != NP). This paper shows that NP-complete problems can be summarized by at least one "order parameter", and that the hard problems occur at a critical value of such a parameter. This critical value separates two regions of characteristically different properties. For example, for K-colorability, the critical value separates overconstrained from underconstrained random graphs, and it marks the value at which the probability ..."
Cited by 681 (1 self)
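The order-parameter picture can be reproduced on toy instances. A minimal sketch (mine, not the paper's experiments) using random 3-SAT, whose order parameter is the clause-to-variable ratio with a critical value near 4.3: far below it almost every instance is satisfiable, far above it almost none are.

```python
import itertools
import random

def random_3sat(n_vars, n_clauses, rng):
    """Random 3-SAT: each clause picks 3 distinct variables, each negated
    with probability 1/2. Literal +v means variable v is true, -v false."""
    return [[v if rng.random() < 0.5 else -v
             for v in rng.sample(range(1, n_vars + 1), 3)]
            for _ in range(n_clauses)]

def satisfiable(n_vars, clauses):
    """Brute-force search over all 2^n assignments (fine for tiny n)."""
    for bits in itertools.product((False, True), repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

def sat_fraction(ratio, n_vars=10, trials=25, seed=1):
    """Fraction of random instances at the given clause/variable ratio
    that turn out to be satisfiable."""
    rng = random.Random(seed)
    return sum(satisfiable(n_vars, random_3sat(n_vars, int(ratio * n_vars), rng))
               for _ in range(trials)) / trials

# Order parameter: clauses per variable. Critical value for 3-SAT ~ 4.3.
low_ratio, high_ratio = sat_fraction(2.0), sat_fraction(7.0)
```

Sweeping the ratio through the critical region (rather than sampling only the two extremes, as here) shows the satisfiability probability dropping sharply, and search cost peaking, right around the transition.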
Predicting Internet Network Distance with Coordinates-Based Approaches
 In INFOCOM
, 2001
"... In this paper, we propose to use coordinates-based mechanisms in a peer-to-peer architecture to predict Internet network distance (i.e. round-trip propagation and transmission delay). We study two mechanisms. The first is a previously proposed scheme, called the triangulated heuristic, which is based ..."
Cited by 633 (5 self)
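The coordinates idea can be sketched in a few lines: measure delays from each host to a handful of landmarks, solve for coordinates that reproduce those delays, then predict an unmeasured host-to-host delay as the distance between the two coordinate points. A minimal illustrative sketch (mine, not the paper's algorithms), using plain gradient descent in the plane and synthetic noiseless delays:

```python
import math

def solve_coords(landmarks, delays, steps=2000, lr=0.05):
    """Estimate a host's 2-D coordinates by gradient descent on the squared
    error between measured delays to landmarks and Euclidean distances in
    coordinate space."""
    x, y = 0.0, 0.0
    for _ in range(steps):
        gx = gy = 0.0
        for (lx, ly), d in zip(landmarks, delays):
            dist = math.hypot(x - lx, y - ly) or 1e-9  # avoid divide-by-zero
            err = dist - d
            gx += err * (x - lx) / dist
            gy += err * (y - ly) / dist
        x -= lr * gx
        y -= lr * gy
    return x, y

landmarks = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
# Two hosts with hidden "true" positions; measured delay = true distance.
true_a, true_b = (30.0, 40.0), (80.0, 20.0)
delays_a = [math.dist(true_a, l) for l in landmarks]
delays_b = [math.dist(true_b, l) for l in landmarks]
coord_a = solve_coords(landmarks, delays_a)
coord_b = solve_coords(landmarks, delays_b)
predicted = math.dist(coord_a, coord_b)   # never measured directly
actual = math.dist(true_a, true_b)
```

Only delays to landmarks are ever "measured"; the host-to-host delay is predicted purely from the fitted coordinates. Real Internet delays violate the triangle inequality, so in practice the embedding is only approximate.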
Investing for the long run when returns are predictable
 Journal of Finance
, 2000
"... We examine how the evidence of predictability in asset returns affects optimal portfolio choice for investors with long horizons. Particular attention is paid to estimation risk, or uncertainty about the true values of model parameters. We find that even after incorporating parameter uncertainty, there is enough predictability in returns to make investors allocate substantially more to stocks, the longer their horizon. Moreover, the weak statistical significance of the evidence for predictability makes it important to take estimation risk into account; a long-horizon investor who ignores it may ..."
Cited by 438 (0 self)
Protein Secondary Structure Prediction . . .
 J. Mol. Biol. (1999) 292, 195–202
, 1999
"... ..."