Results 1–10 of 26,916
On Spectral Clustering: Analysis and an algorithm
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS, 2001
"... Despite many empirical successes of spectral clustering methods  algorithms that cluster points using eigenvectors of matrices derived from the distances between the points  there are several unresolved issues. First, there is a wide variety of algorithms that use the eigenvectors in slightly ..."
Cited by 1713 (13 self)
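As a rough illustration of the recipe this abstract refers to (clustering points via eigenvectors of a matrix built from pairwise distances), here is a minimal sketch; the Gaussian affinity, the `sigma` parameter, and the plain k-means step on the embedded rows are assumptions for illustration, not necessarily the exact variant analyzed in the paper.

```python
import numpy as np

def spectral_cluster(X, k, sigma=1.0, n_iter=100, seed=0):
    """Cluster rows of X into k groups using eigenvectors of a
    normalized affinity matrix built from pairwise distances
    (a sketch of the usual spectral-clustering recipe)."""
    # Gaussian affinity from squared Euclidean distances.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-sq / (2 * sigma ** 2))
    np.fill_diagonal(A, 0.0)

    # Symmetrically normalized affinity D^{-1/2} A D^{-1/2}.
    d = A.sum(1)
    Dinv = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = Dinv[:, None] * A * Dinv[None, :]

    # Top-k eigenvectors; renormalize each row to unit length.
    w, V = np.linalg.eigh(L)
    U = V[:, -k:]
    U /= np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)

    # Plain k-means (Lloyd's iterations) on the embedded rows.
    rng = np.random.default_rng(seed)
    C = U[rng.choice(len(U), k, replace=False)]
    for _ in range(n_iter):
        labels = ((U[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = U[labels == j].mean(0)
    return labels
```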
Instance-based learning algorithms
Machine Learning, 1991
"... Abstract. Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to ..."
Cited by 1389 (18 self)
to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances
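A minimal sketch of the instance-based idea described above: store the training instances and predict from the nearest stored ones. The `KNearest` class name, the Euclidean metric, and majority voting are illustrative assumptions, not the specific algorithms studied in the paper.

```python
import numpy as np
from collections import Counter

class KNearest:
    """Minimal instance-based classifier: store training instances,
    predict by majority vote among the k nearest stored instances."""
    def __init__(self, k=3):
        self.k = k

    def fit(self, X, y):
        self.X = np.asarray(X, float)
        self.y = list(y)
        return self

    def predict(self, Q):
        out = []
        for q in np.asarray(Q, float):
            d = np.linalg.norm(self.X - q, axis=1)   # distances to all stored instances
            idx = np.argsort(d)[: self.k]            # k nearest neighbours
            out.append(Counter(self.y[i] for i in idx).most_common(1)[0][0])
        return out

# tiny usage example
clf = KNearest(k=1).fit([[0, 0], [1, 1], [5, 5]], ["a", "a", "b"])
print(clf.predict([[0.2, 0.1], [4.8, 5.2]]))   # -> ['a', 'b']
```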
Randomized Gossip Algorithms
IEEE TRANSACTIONS ON INFORMATION THEORY, 2006
"... Motivated by applications to sensor, peertopeer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join a ..."
Cited by 532 (5 self)
method that solves the optimization problem over the network. The relation of averaging time to the second largest eigenvalue naturally relates it to the mixing time of a random walk with transition probabilities derived from the gossip algorithm. We use this connection to study the performance
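A toy simulation of the pairwise randomized gossip scheme the abstract describes: at each step a random edge is activated and its two endpoints average their values, so all nodes converge to the global mean. The graph, step count, and function name are assumptions for illustration.

```python
import random

def gossip_average(values, edges, steps=2000, seed=0):
    """Pairwise randomized gossip: at each step a random edge (i, j)
    is activated and both endpoints replace their values with the
    pairwise average. Values converge to the global mean."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(steps):
        i, j = rng.choice(edges)
        avg = (x[i] + x[j]) / 2.0
        x[i] = x[j] = avg
    return x

# ring of 5 nodes; the true mean is 3.0
vals = [1.0, 2.0, 3.0, 4.0, 5.0]
ring = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(gossip_average(vals, ring))   # all entries close to 3.0
```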
Analysis of Recommendation Algorithms for E-Commerce
2000
"... Recommender systems apply statistical and knowledge discovery techniques to the problem of making product recommendations during a live customer interaction and they are achieving widespread success in ECommerce nowadays. In this paper, we investigate several techniques for analyzing largescale pu ..."
Cited by 523 (22 self)
scale purchase and preference data for the purpose of producing useful recommendations to customers. In particular, we apply a collection of algorithms such as traditional data mining, nearest-neighbor collaborative filtering, and dimensionality reduction on two different data sets. The first data set was derived from
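A small sketch of the nearest-neighbor collaborative filtering mentioned in the snippet, assuming a binary user-item purchase matrix and cosine similarity between users; the tiny matrix and the `recommend` helper are made up for illustration.

```python
import numpy as np

def recommend(R, user, top_n=2):
    """User-based nearest-neighbour collaborative filtering on a binary
    purchase matrix R (users x items): score unseen items by
    cosine-similarity-weighted votes of the other users."""
    norms = np.linalg.norm(R, axis=1) + 1e-12
    sims = (R @ R[user]) / (norms * norms[user])   # cosine similarity to every user
    sims[user] = 0.0                               # ignore self-similarity
    scores = sims @ R                              # weighted votes per item
    scores[R[user] > 0] = -np.inf                  # do not re-recommend owned items
    return np.argsort(scores)[::-1][:top_n]

# rows = users, columns = items (1 = purchased)
R = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
print(recommend(R, user=0))   # items most often bought by similar users
```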
A new learning algorithm for blind signal separation
1996
"... A new online learning algorithm which minimizes a statistical dependency among outputs is derived for blind separation of mixed signals. The dependency is measured by the average mutual information (MI) of the outputs. The source signals and the mixing matrix are unknown except for the number of ..."
Cited by 622 (80 self)
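The abstract describes minimizing the mutual information of the outputs; below is a compact sketch of a natural-gradient-style update of the unmixing matrix, dW = lr * (I - phi(y) y^T) W. The tanh nonlinearity, learning rate, batch-style estimate, and toy data are assumptions for illustration.

```python
import numpy as np

def ica_separate(X, n_iter=200, lr=0.01, seed=0):
    """Blind source separation sketch: adapt an unmixing matrix W with
    the natural-gradient-style rule dW = lr * (I - phi(y) y^T) W, where
    y = W x and phi is a fixed nonlinearity (tanh here)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]                        # X: mixed signals, shape (n_channels, n_samples)
    X = X - X.mean(axis=1, keepdims=True)
    W = np.eye(n) + 0.01 * rng.standard_normal((n, n))
    I = np.eye(n)
    for _ in range(n_iter):
        Y = W @ X
        phi = np.tanh(Y)
        # batch estimate of E[phi(y) y^T] over all samples
        W += lr * (I - (phi @ Y.T) / X.shape[1]) @ W
    return W @ X, W

# toy demo: two independent heavy-tailed sources and a random mixing matrix
rng = np.random.default_rng(1)
S = rng.laplace(size=(2, 5000))
A = rng.standard_normal((2, 2))
Y, W = ica_separate(A @ S)
```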
A fast learning algorithm for deep belief nets
Neural Computation, 2006
"... We show how to use “complementary priors ” to eliminate the explaining away effects that make inference difficult in denselyconnected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a ..."
Cited by 970 (49 self)
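The greedy procedure in this paper trains a stack of restricted Boltzmann machines one layer at a time; below is a minimal sketch of CD-1 training for one binary RBM plus a two-layer greedy stack on toy data. Layer sizes, learning rate, epoch count, and the synthetic data are assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(V, n_hidden, epochs=20, lr=0.05, seed=0):
    """CD-1 training of one binary restricted Boltzmann machine, the
    greedy building block for a deep belief net: each trained layer's
    hidden activations become the 'data' for the next RBM."""
    rng = np.random.default_rng(seed)
    n_visible = V.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b = np.zeros(n_visible)          # visible biases
    c = np.zeros(n_hidden)           # hidden biases
    for _ in range(epochs):
        # positive phase: hidden probabilities given the data
        h_prob = sigmoid(V @ W + c)
        h_samp = (rng.random(h_prob.shape) < h_prob).astype(float)
        # negative phase: one step of alternating Gibbs sampling
        v_recon = sigmoid(h_samp @ W.T + b)
        h_recon = sigmoid(v_recon @ W + c)
        # contrastive-divergence parameter updates
        W += lr * (V.T @ h_prob - v_recon.T @ h_recon) / len(V)
        b += lr * (V - v_recon).mean(0)
        c += lr * (h_prob - h_recon).mean(0)
    return W, b, c

# greedy layer-wise stacking on toy binary data
rng = np.random.default_rng(1)
data = (rng.random((500, 20)) < 0.3).astype(float)
W1, _, c1 = train_rbm(data, n_hidden=10)
layer1 = sigmoid(data @ W1 + c1)          # activations feed the next RBM
W2, _, c2 = train_rbm(layer1, n_hidden=5)
```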
Dynamic programming algorithm optimization for spoken word recognition
IEEE TRANSACTIONS ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 1978
"... This paper reports on an optimum dynamic programming (DP) based timenormalization algorithm for spoken word recognition. First, a general principle of timenormalization is given using timewarping function. Then, two timenormalized distance definitions, ded symmetric and asymmetric forms, are der ..."
Cited by 788 (3 self)
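A minimal dynamic-programming time-normalized distance in the spirit of the abstract; this sketch uses an unconstrained warping path and a simple path-length normalization, without the slope constraints and weighting variants studied in the paper.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-programming time-normalized distance between two
    sequences: warp the time axes so that the accumulated
    frame-to-frame cost is minimized (no slope constraint here)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])           # local frame distance
            D[i, j] = cost + min(D[i - 1, j],          # insertion
                                 D[i, j - 1],          # deletion
                                 D[i - 1, j - 1])      # match
    return D[n, m] / (n + m)                           # normalize by path-length bound

print(dtw_distance([0, 1, 2, 3, 2, 0], [0, 1, 1, 2, 3, 2, 1, 0]))
```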
A Learning Algorithm for Continually Running Fully Recurrent Neural Networks
1989
"... The exact form of a gradientfollowing learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks. These algorithms have: (1) the advantage that they do not require a precis ..."
Cited by 534 (4 self)
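A sketch of the bookkeeping behind a real-time recurrent learning style update for a tiny fully recurrent network: sensitivities of every unit output with respect to every weight are carried forward in time, so a gradient step is available at each sample. Network size, nonlinearity, learning rate, and the toy task are assumptions for illustration.

```python
import numpy as np

def rtrl_train(inputs, targets, n_units, lr=0.05, seed=0):
    """Real-time recurrent learning sketch for a tiny fully recurrent
    network: carry forward the sensitivities p[k, i, j] = d y_k / d W[i, j]
    so the error gradient is available at every time step."""
    rng = np.random.default_rng(seed)
    n_in = inputs.shape[1]
    W = 0.1 * rng.standard_normal((n_units, n_in + n_units))   # weights on [input, state]
    y = np.zeros(n_units)                                       # unit outputs
    p = np.zeros((n_units, n_units, n_in + n_units))            # sensitivities
    for x, d in zip(inputs, targets):
        z = np.concatenate([x, y])          # combined input vector at this step
        s = W @ z
        y_new = np.tanh(s)
        fprime = 1.0 - y_new ** 2
        # propagate sensitivities: recurrent term plus the direct term delta_{ki} * z_j
        rec = np.einsum('kl,lij->kij', W[:, n_in:], p)
        direct = np.zeros_like(p)
        for k in range(n_units):
            direct[k, k, :] = z
        p = fprime[:, None, None] * (rec + direct)
        # squared-error gradient at this step and online weight update
        err = y_new - d
        W -= lr * np.einsum('k,kij->ij', err, p)
        y = y_new
    return W

# toy task: reproduce the previous input (a one-step memory)
rng = np.random.default_rng(1)
x = rng.random((200, 1))
t = np.concatenate([[[0.0]], x[:-1]])      # target = input delayed by one step
rtrl_train(x, np.tile(t, (1, 2)), n_units=2)
```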
LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares
ACM Trans. Math. Software, 1982
"... An iterative method is given for solving Ax ~ffi b and minU Ax b 112, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerica ..."
Cited by 653 (21 self)
numerical properties. Reliable stopping criteria are derived, along with estimates of standard errors for x and the condition number of A. These are used in the FORTRAN implementation of the method, subroutine LSQR. Numerical tests are described comparing LSQR with several other conjugate
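The snippet mentions the original FORTRAN subroutine; an implementation of this algorithm is available today as scipy.sparse.linalg.lsqr, which the short usage sketch below calls on a made-up sparse least-squares problem.

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import lsqr

# An overdetermined sparse least-squares problem: min ||Ax - b||_2.
rng = np.random.default_rng(0)
A = sprandom(200, 50, density=0.05, random_state=0, format='csr')
x_true = rng.standard_normal(50)
b = A @ x_true + 0.01 * rng.standard_normal(200)

# lsqr is based on the Golub-Kahan bidiagonalization procedure and also
# returns stopping information along with norm estimates.
result = lsqr(A, b, atol=1e-10, btol=1e-10)
x, istop, itn, r1norm = result[0], result[1], result[2], result[3]
print(itn, r1norm, np.linalg.norm(x - x_true))
```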
Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm
IEEE TRANSACTIONS ON MEDICAL IMAGING, 2001
"... The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain magnetic resonance (MR) images because of its simple mathematical form and the piecewise constant nature of ideal brain MR images. However, being a histogrambased model, the FM has an intrinsic limi ..."
Cited by 639 (15 self)
methods are limited to using MRF as a general prior in an FM model-based approach. To fit the HMRF model, an EM algorithm is used. We show that by incorporating both the HMRF model and the EM algorithm into an HMRF-EM framework, an accurate and robust segmentation can be achieved. More importantly
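The full HMRF-EM scheme is beyond a few lines, but the finite mixture (FM) baseline it starts from is an EM-fitted Gaussian mixture over voxel intensities. The sketch below fits that baseline only (no spatial MRF prior) with scikit-learn's GaussianMixture on synthetic 1-D intensities; the class count and intensity values are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic voxel intensities from three "tissue classes" (the
# histogram-based finite-mixture baseline that HMRF-EM improves on
# by adding a spatial Markov random field prior).
rng = np.random.default_rng(0)
intensities = np.concatenate([
    rng.normal(30, 5, 4000),    # e.g. a CSF-like class
    rng.normal(80, 8, 4000),    # e.g. a grey-matter-like class
    rng.normal(130, 6, 4000),   # e.g. a white-matter-like class
]).reshape(-1, 1)

# EM fit of a 3-component Gaussian mixture to the intensities, then a
# per-voxel hard segmentation from the posterior responsibilities.
gmm = GaussianMixture(n_components=3, random_state=0).fit(intensities)
labels = gmm.predict(intensities)
print(np.sort(gmm.means_.ravel()))   # recovered class means
```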