Results 1–10 of 4,357
Locally weighted learning
Artificial Intelligence Review, 1997
"... This paper surveys locally weighted learning, a form of lazy learning and memorybased learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, ass ..."
Cited by 599 (51 self)
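The survey's core technique lends itself to a compact sketch. Below is a minimal locally weighted linear regression in Python/NumPy, assuming a Gaussian weighting kernel and a small ridge term for stability; the names and bandwidth are illustrative, not the survey's notation.

```python
import numpy as np

def lwr_predict(X, y, x_query, bandwidth=0.5):
    """Fit a linear model at x_query, weighting training points by a
    Gaussian kernel on their distance to the query (a sketch of locally
    weighted linear regression; the kernel choice is an assumption)."""
    X1 = np.hstack([np.ones((len(X), 1)), X])            # add intercept column
    q1 = np.concatenate([[1.0], np.atleast_1d(x_query)])
    d2 = np.sum((X - x_query) ** 2, axis=1)              # squared distances
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))             # Gaussian weights
    A = X1.T @ (w[:, None] * X1) + 1e-8 * np.eye(X1.shape[1])  # ridge-stabilized
    beta = np.linalg.solve(A, X1.T @ (w * y))
    return q1 @ beta

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
print(lwr_predict(X, y, np.array([1.0])))                # close to sin(1) ~ 0.84
```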
Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions
In ICML, 2003
"... An approach to semisupervised learning is proposed that is based on a Gaussian random field model. Labeled and unlabeled data are represented as vertices in a weighted graph, with edge weights encoding the similarity between instances. The learning ..."
Cited by 752 (14 self)
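The harmonic solution this paper proposes reduces to a single linear solve on the graph Laplacian. A minimal sketch, assuming labels in [0, 1] and a dense weight matrix (names illustrative):

```python
import numpy as np

def harmonic_labels(W, y_labeled, labeled_idx):
    """Propagate labels over a weighted graph via the harmonic solution:
    on unlabeled vertices, f solves L_uu f_u = W_ul f_l, where L = D - W
    is the graph Laplacian."""
    n = W.shape[0]
    labeled = np.zeros(n, dtype=bool)
    labeled[labeled_idx] = True
    u = ~labeled
    L = np.diag(W.sum(axis=1)) - W                       # graph Laplacian
    f = np.empty(n)
    f[labeled] = y_labeled
    f[u] = np.linalg.solve(L[np.ix_(u, u)], W[np.ix_(u, labeled)] @ y_labeled)
    return f

# Tiny chain graph: label the endpoints, infer the interior vertices.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(harmonic_labels(W, np.array([0.0, 1.0]), [0, 3]))  # ~ [0, 1/3, 2/3, 1]
```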
An approach to correlate tandem mass spectral data of peptides with amino acid sequences in a protein database
J. Am. Soc. Mass Spectrom., 1994
"... A method to correlate the uninterpreted tandem mass spectra of peptides produced under low energy (lo50 eV) collision conditions with amino acid sequences in the Genpept database has been developed. In this method the protein database is searched to identify linear amino acid sequences within a mas ..."
Abstract

Cited by 944 (19 self)
 Add to MetaCart
mass tolerance of * 1 u of the precursor ion molecular weight. A crosscorrelation function is then used to provide a measurement of similarity between the masstocharge ratios for the fragment ions predicted from amino acid sequences obtained from the database and the fragment ions observed
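The cross-correlation scoring idea can be sketched on binned spectra. This is an illustrative reading of the approach, not the published implementation; the lag window and binning are assumptions:

```python
import numpy as np

def xcorr_score(observed, predicted, max_shift=75):
    """Similarity between binned observed and predicted fragment spectra:
    correlation at zero lag minus the mean correlation over nearby lags,
    so that genuine m/z alignment scores above background overlap.
    Assumes both vectors are longer than max_shift."""
    corr = np.correlate(observed, predicted, mode="full")
    zero = len(corr) // 2                                # zero-lag index
    window = corr[zero - max_shift: zero + max_shift + 1]
    return corr[zero] - window.mean()

rng = np.random.default_rng(0)
predicted = (rng.random(500) < 0.02).astype(float)       # sparse predicted ions
observed = predicted + 0.05 * rng.random(500)            # noisy observation
print(xcorr_score(observed, predicted))
```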
Fast Bilateral Filtering for the Display of High-Dynamic-Range Images
2002
"... We present a new technique for the display of highdynamicrange images, which reduces the contrast while preserving detail. It is based on a twoscale decomposition of the image into a base layer, encoding largescale variations, and a detail layer. Only the base layer has its contrast reduced, the ..."
Abstract

Cited by 453 (10 self)
 Add to MetaCart
, thereby preserving detail. The base layer is obtained using an edgepreserving filter called the bilateral filter. This is a nonlinear filter, where the weight of each pixel is computed using a Gaussian in the spatial domain multiplied by an influence function in the intensity domain that decreases
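The filter itself is easy to state directly. A brute-force sketch for a grayscale image, assuming a Gaussian for the intensity-domain influence function (the paper allows other choices); parameter values are illustrative:

```python
import numpy as np

def bilateral_filter(img, sigma_s=3.0, sigma_r=0.1, radius=6):
    """Edge-preserving smoothing: each pixel becomes a normalized average of
    its neighbourhood, weighted by a spatial Gaussian times an intensity
    ("range") Gaussian, so averaging stops across strong edges. img in [0, 1]."""
    H, W = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w = spatial * np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

In the display pipeline the abstract describes, this filter would be applied to log intensities to obtain the base layer, whose contrast is compressed before the detail layer is added back.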
The Determinants of Credit Spread Changes.
Journal of Finance, 2001
"... ABSTRACT Using dealer's quotes and transactions prices on straight industrial bonds, we investigate the determinants of credit spread changes. Variables that should in theory determine credit spread changes have rather limited explanatory power. Further, the residuals from this regression are ..."
Abstract

Cited by 422 (2 self)
 Add to MetaCart
, and maturity groups. Note that this result by itself is not surprising, since theory predicts that all credit spreads should be affected by aggregate variables such as changes in the interest rate, changes in business climate, changes in market volatility, etc. The particularly surprising aspect of our results
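The empirical design can be mimicked on synthetic data: regress spread changes on candidate factors, then check how much of the residual variance a single principal component captures. Everything below is simulated and illustrative; the factor names are stand-ins, not the paper's variables.

```python
import numpy as np

rng = np.random.default_rng(1)
n_months, n_bonds = 120, 50
factors = rng.standard_normal((n_months, 3))         # stand-ins for rate, slope, vol changes
common = rng.standard_normal(n_months)               # latent shock the factors miss
spreads = (factors @ rng.standard_normal((3, n_bonds))) * 0.2 \
          + np.outer(common, rng.uniform(0.5, 1.5, n_bonds)) \
          + 0.3 * rng.standard_normal((n_months, n_bonds))

X = np.hstack([np.ones((n_months, 1)), factors])
beta, *_ = np.linalg.lstsq(X, spreads, rcond=None)   # per-bond time-series regressions
resid = spreads - X @ beta

# A dominant first principal component of the residuals points to a
# single common factor beyond the regressors.
_, s, _ = np.linalg.svd(resid - resid.mean(0), full_matrices=False)
print("share of residual variance on PC1:", round(s[0]**2 / (s**2).sum(), 2))
```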
Greedy layer-wise training of deep networks
2006
"... Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multilayer neural networks have many levels of nonlinearities allow ..."
Cited by 394 (48 self)
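The greedy recipe, stated abstractly: train one layer to model its input, freeze it, and feed its codes to the next layer. A sketch using plain autoencoder layers as a stand-in for the paper's layer modules (the architecture, learning rate, and sizes here are assumptions):

```python
import numpy as np

def train_autoencoder(X, hidden, epochs=200, lr=0.1, seed=0):
    """One-hidden-layer autoencoder trained by gradient descent on squared
    reconstruction error; returns the encoder weights."""
    rng = np.random.default_rng(seed)
    W1 = 0.1 * rng.standard_normal((X.shape[1], hidden))
    W2 = 0.1 * rng.standard_normal((hidden, X.shape[1]))
    for _ in range(epochs):
        H = np.tanh(X @ W1)                              # encode
        err = H @ W2 - X                                 # linear decode, residual
        gW2 = H.T @ err / len(X)
        gH = (err @ W2.T) * (1 - H**2)                   # backprop through tanh
        W1 -= lr * (X.T @ gH / len(X))
        W2 -= lr * gW2
    return W1

# Greedy stacking: each new layer is trained on the previous layer's codes.
X = np.random.default_rng(1).standard_normal((256, 20))
codes, encoders = X, []
for h in (16, 8):
    W = train_autoencoder(codes, h)
    encoders.append(W)
    codes = np.tanh(codes @ W)
print([W.shape for W in encoders])                       # [(20, 16), (16, 8)]
```

In the full method this unsupervised stage is followed by supervised fine-tuning of the whole stack.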
Improving predictive inference under covariate shift by weighting the log-likelihood function
Journal of Statistical Planning and Inference, 2000
"... ..."
Gaussian Processes for Regression
Advances in Neural Information Processing Systems 8, 1996
"... The Bayesian analysis of neural networks is difficult because a simple prior over weights implies a complex prior distribution over functions. In this paper we investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis for fixed values of hyperparame ..."
Cited by 268 (21 self)
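For fixed hyperparameters, the predictive analysis reduces to linear algebra on kernel matrices. A minimal sketch with a squared-exponential kernel (a common choice; the paper considers a class of covariance functions):

```python
import numpy as np

def gp_predict(X, y, X_star, ell=1.0, sf=1.0, noise=0.1):
    """GP regression predictive mean and variance at X_star, for fixed
    hyperparameters: length-scale ell, signal scale sf, noise level."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sf**2 * np.exp(-d2 / (2 * ell**2))

    K = k(X, X) + noise**2 * np.eye(len(X))              # train covariance
    Ks = k(X_star, X)                                    # test/train covariance
    mean = Ks @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks.T)
    var = sf**2 - np.sum(Ks * v.T, axis=1) + noise**2
    return mean, var

X = np.linspace(0, 5, 20)[:, None]
y = np.sin(X[:, 0])
mean, var = gp_predict(X, y, np.array([[2.5]]))
print(mean, var)                                         # mean close to sin(2.5)
```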
The robust beauty of improper linear models in decision making
American Psychologist, 1979
"... ABSTRACT: Proper linear models are those in which predictor variables are given weights in such a way that the resulting linear composite optimally predicts some criterion of interest; examples of proper linear models are standard regression analysis, discriminant function analysis, and ridge regres ..."
Cited by 267 (1 self)
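The paper's claim is easy to reproduce in simulation: with small samples and correlated predictors, a unit-weight ("improper") composite can predict held-out criteria nearly as well as fitted OLS weights. A sketch on synthetic data (sample sizes and coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(n):
    X = rng.standard_normal((n, 4)) + rng.standard_normal((n, 1))  # correlated predictors
    y = X @ np.array([0.6, 0.5, 0.4, 0.3]) + rng.standard_normal(n)
    return X, y

Xtr, ytr = simulate(20)                                  # small training sample
Xte, yte = simulate(10000)                               # large held-out sample

Z = (Xtr - Xtr.mean(0)) / Xtr.std(0)                     # standardize on training moments
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(20), Z]), ytr, rcond=None)
Zte = (Xte - Xtr.mean(0)) / Xtr.std(0)

pred_proper = beta[0] + Zte @ beta[1:]                   # fitted ("proper") weights
pred_unit = Zte.sum(axis=1)                              # improper: all weights = 1

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(corr(pred_proper, yte), corr(pred_unit, yte))      # often nearly equal
```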
A Technique for Drawing Directed Graphs
IEEE Transactions on Software Engineering, 1993
"... We describe a fourpass algorithm for drawing directed graphs. The first pass finds an optimal rank assignment using a network simplex algorithm. The second pass sets the vertex order within ranks by an iterative heuristic incorporating a novel weight function and local transpositions to reduce cros ..."
Cited by 252 (18 self)
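The first pass assigns each vertex an integer rank. The paper's network simplex method is involved; the sketch below uses longest-path layering as a simple stand-in, which also respects every edge but does not minimize total weighted edge length:

```python
from functools import lru_cache

def ranks(graph):
    """Longest-path layering for a DAG given as {vertex: [successors]}:
    rank(v) = length of the longest path ending at v, so every edge
    points from a lower rank to a strictly higher one."""
    @lru_cache(maxsize=None)
    def depth(v):
        preds = [u for u, succs in graph.items() if v in succs]
        return 0 if not preds else 1 + max(depth(u) for u in preds)
    return {v: depth(v) for v in graph}

# Hypothetical diamond-shaped DAG.
print(ranks({"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}))
# {'a': 0, 'b': 1, 'c': 1, 'd': 2}
```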