Results 1 - 10 of 164
Statistical shape influence in geodesic active contours
- In Proc. 2000 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Hilton Head, SC, 2000
"... A novel method of incorporating shape information into the image segmentation process is presented. We introduce a representation for deformable shapes and define a probability distribution over the variances of a set of training shapes. The segmentation process embeds an initial curve as the zero l ..."
Abstract - Cited by 396 (4 self)
shape information and the image information. We then evolve the surface globally, towards the MAP estimate, and locally, based on image gradients and curvature. Results are demonstrated on synthetic data and medical imagery, in 2D and 3D.
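As a rough illustration of the mechanism this entry describes, the sketch below embeds the curve as the zero level set of phi and takes one explicit update that mixes a local curvature/edge force with a global pull toward a prior shape. Everything here is an assumption for illustration: phi_prior stands in for the MAP shape estimate, and the edge-stopping function and weights are generic choices, not the authors' formulation.

```python
# Minimal level-set step with a shape-prior pull (illustrative, not the paper's method).
import numpy as np

def curvature(phi):
    gy, gx = np.gradient(phi)                      # d(phi)/dy, d(phi)/dx
    norm = np.sqrt(gx**2 + gy**2) + 1e-8
    # divergence of the unit normal field
    return np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)

def evolve_step(phi, image, phi_prior, dt=0.1, alpha=0.2):
    gy, gx = np.gradient(image)
    g = 1.0 / (1.0 + gx**2 + gy**2)                # edge-stopping function
    py, px = np.gradient(phi)
    grad_norm = np.sqrt(px**2 + py**2) + 1e-8
    local = g * curvature(phi) * grad_norm         # local image/curvature force
    global_pull = alpha * (phi_prior - phi)        # global pull toward the prior shape
    return phi + dt * (local + global_pull)
```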
Stochastic Gradient Descent Training for L1-regularized Log-linear Models with Cumulative Penalty
"... Stochastic gradient descent (SGD) uses approximate gradients estimated from subsets of the training data and updates the parameters in an online fashion. This learning framework is attractive because it often requires much less training time in practice than batch training algorithms. However, L1-re ..."
Abstract - Cited by 42 (0 self)
Stochastic gradient descent (SGD) uses approximate gradients estimated from subsets of the training data and updates the parameters in an online fashion. This learning framework is attractive because it often requires much less training time in practice than batch training algorithms. However, L1
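The cumulative-penalty idea can be sketched as follows, in the spirit of the abstract: take a plain SGD step on the loss, then clip each weight by the total L1 penalty it could have received so far minus what it has already absorbed. The logistic-loss gradient, labels in {0, 1}, and the hyperparameters are illustrative assumptions, not the paper's exact log-linear setup.

```python
# Sketch of SGD with a cumulative L1 penalty (logistic loss as a stand-in model).
import numpy as np

def sgd_cumulative_l1(X, y, C=0.1, eta=0.1, epochs=5):
    n, d = X.shape
    w = np.zeros(d)
    q = np.zeros(d)          # L1 penalty actually applied to each weight so far
    u = 0.0                  # total L1 penalty each weight could have received
    for _ in range(epochs):
        for i in np.random.permutation(n):
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))
            w += eta * (y[i] - p) * X[i]             # gradient step on the loss only
            u += eta * C
            z = w.copy()
            pos, neg = w > 0, w < 0
            w[pos] = np.maximum(0.0, w[pos] - (u + q[pos]))   # clip at zero
            w[neg] = np.minimum(0.0, w[neg] + (u - q[neg]))
            q += w - z                               # record penalty applied this step
    return w
```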
Exponential Convergence of a Gradient Descent Algorithm for a Class of Recurrent Neural Networks
- In Proceedings of the 38th Midwest Symposium on Circuits and Systems, 1995
"... We investigate the convergence properties of a gradient descent learning algorithm for a class of recurrent neural networks. The networks compute an affine combination of nonlinear (sigmoidal) functions of the outputs of biased linear dynamical systems. The learning algorithm performs approximate gr ..."
Abstract - Cited by 2 (2 self)
gradient descent to minimize the squared error on a training sequence of input-output data. We consider the convergence of the parameter estimates produced by this algorithm when the data sequence is generated by a network in this class. We assume that the sigmoid is analytic and bounded everywhere
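A toy version of the network class the abstract describes is sketched below: each unit is a biased scalar linear dynamical system whose sigmoided state feeds an affine output layer. Only the output weights are trained here, with the exact squared-error gradient; the paper's algorithm also adapts the dynamics parameters, which this sketch omits.

```python
# Toy affine-combination-of-sigmoids recurrent model; output weights trained by gradient descent.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_output_weights(u, y, a, b, c, eta=0.01, epochs=20):
    """u, y: input/target sequences; a, b, c: fixed per-unit dynamics parameters."""
    K = len(a)
    w, w0 = np.zeros(K), 0.0
    for _ in range(epochs):
        x = np.zeros(K)                       # states of the K linear systems
        for t in range(len(u)):
            x = a * x + b * u[t] + c          # biased linear dynamics per unit
            h = sigmoid(x)
            err = (w0 + w @ h) - y[t]
            w  -= eta * err * h               # gradient of 0.5 * err**2 w.r.t. w
            w0 -= eta * err
    return w, w0
```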
Gradient Feature Selection for Online Boosting
"... Boosting has been widely applied in computer vision, especially after Viola and Jones’s seminal work [23]. The marriage of rectangular features and integral-imageenabled fast computation makes boosting attractive for many vision applications. However, this popular way of applying boosting normally e ..."
Abstract - Cited by 21 (3 self)
characteristic, yet impractical due to the huge hypothesis pool. This paper proposes a gradient-based feature selection approach. Assuming a generally trained feature set and labeled samples are given, our approach iteratively updates each feature using gradient descent, by minimizing the weighted least
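The general idea in this snippet can be sketched as follows: rather than searching a huge discrete feature pool, take gradient steps on a parametric feature to reduce the boosting round's weighted least-squares error. The differentiable linear-projection feature is a stand-in for the paper's rectangular features, and the weak response, labels in {-1, +1}, and step size are assumptions.

```python
# Gradient refinement of a parametric weak feature under boosting weights (illustrative).
import numpy as np

def refine_feature(theta, X, y, D, eta=0.1, steps=50):
    """theta: feature parameters; X: samples; y: +/-1 labels; D: boosting weights."""
    for _ in range(steps):
        f = np.tanh(X @ theta)                     # differentiable weak response
        resid = f - y
        # gradient of 0.5 * sum_i D_i * (f_i - y_i)**2 with respect to theta
        grad = X.T @ (D * resid * (1.0 - f**2))
        theta -= eta * grad
    return theta
```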
Gradient Optimization for multiple kernels
"... Abstract—The subject of this work is the model selection of kernels with multiple parameters for support vector machines (SVM), with the purpose of classifying hyperspectral remote sensing data. During the training process, the kernel parameters need to be tuned properly. In this work a gradient des ..."
Abstract
Abstract—The subject of this work is the model selection of kernels with multiple parameters for support vector machines (SVM), with the purpose of classifying hyperspectral remote sensing data. During the training process, the kernel parameters need to be tuned properly. In this work a gradient
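One way to make this entry concrete is gradient-based tuning of per-band RBF widths, shown below using kernel-target alignment as a differentiable surrogate for the model-selection criterion; the paper's actual kernel and criterion may differ, so treat this purely as a sketch.

```python
# Gradient ascent on kernel-target alignment over per-feature RBF widths (illustrative).
import numpy as np

def tune_rbf_widths(X, y, gamma, eta=0.5, steps=100):
    """X: (n, d) data; y: +/-1 labels; gamma: (d,) per-feature widths."""
    Y = np.outer(y, y)
    D2 = (X[:, None, :] - X[None, :, :]) ** 2       # (n, n, d) squared differences
    for _ in range(steps):
        K = np.exp(-(D2 * gamma).sum(-1))           # anisotropic RBF kernel
        dK = -D2 * K[:, :, None]                    # dK_ij / dgamma_k
        S, F = (K * Y).sum(), np.sqrt((K * K).sum())
        dS = (Y[:, :, None] * dK).sum((0, 1))
        dF = (K[:, :, None] * dK).sum((0, 1)) / F
        grad = (dS * F - S * dF) / (F ** 2)         # d/dgamma of <K, Y> / ||K||_F
        gamma = np.maximum(gamma + eta * grad, 1e-6)
    return gamma
```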
Maximum margin training of generative kernels
2004
"... Generative kernels, a generalised form of Fisher kernels, are a powerful form of kernel that allow the kernel parameters to be tuned to a specific task. The standard approach to training these kernels is to use maximum likelihood estimation. This paper describes a novel approach based on maximum-mar ..."
Abstract - Cited by 13 (4 self)
-margin training of both the kernel parameters and a Support Vector Machine (SVM) classifier. It combines standard SVM training with a gradient-descent based kernel parameter optimisation scheme. This allows the kernel parameters to be explicitly trained for the data set and the SVM score-space. Initial results
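The alternation described in this snippet can be sketched as follows: fit an SVM on the current kernel, then take one gradient step on the kernel parameter that reduces the dual objective while the dual variables are held fixed. A plain RBF kernel replaces the paper's generative kernel, and the criterion is a generic margin-based surrogate, not the authors' score-space formulation.

```python
# Alternating SVM training and kernel-parameter gradient steps (illustrative sketch).
import numpy as np
from sklearn.svm import SVC

def alternate_svm_kernel(X, y, gamma=1.0, eta=0.01, rounds=10, C=1.0):
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)     # pairwise squared distances
    for _ in range(rounds):
        K = np.exp(-gamma * D2)
        svm = SVC(C=C, kernel="precomputed").fit(K, y)
        ay = np.zeros(len(y))
        ay[svm.support_] = svm.dual_coef_[0]                 # alpha_i * y_i from the fit
        dK = -D2 * K                                         # dK / dgamma
        grad = -0.5 * ay @ dK @ ay                           # d(dual objective) / dgamma
        gamma = max(gamma - eta * grad, 1e-6)                # step on the kernel parameter
    return gamma
```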
Credit Card Fraud Detection with a Neural-Network
- Proc. 27th Hawaii Int'l Conf. System Sciences: Information Systems: Decision Support and Knowledge-Based Systems, 1994
"... Abstract Using data from a credit card issuer, a neural network based fraud detection system was trained on a large sample of labelled credit card account transactions and tested on a holdout data set that consisted of all account activity over a subsequent two-month period of time. The neural netw ..."
Abstract - Cited by 74 (0 self)
probabilities associated with each of these prototype cells. P-RCE training is not subject to problems of convergence that can afflict gradient-descent training algorithms. The P-RCE network and networks like it have been applied to a variety of pattern recognition problems both within and beyond the field
Data Distribution
"... We introduce a new training algorithm for the SLS binary classifier. A combination of evolu-tive algorithms and Gradient Descent method is used to improve its accuracy. In addition, we estimate the number of straight line segments by applying the clustering algorithm X-Means. Our approach showed imp ..."
Abstract
We introduce a new training algorithm for the SLS binary classifier. A combination of evolutive algorithms and Gradient Descent method is used to improve its accuracy. In addition, we estimate the number of straight line segments by applying the clustering algorithm X-Means. Our approach showed
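A generic version of the hybrid scheme in this snippet is sketched below: evolve a small population of parameter vectors, refining each candidate with a few gradient-descent steps on the training loss. The logistic loss stands in for the SLS model, and the mutation/selection scheme and the omission of the X-Means step are simplifying assumptions.

```python
# Hybrid evolutionary search + gradient-descent refinement (illustrative, logistic loss).
import numpy as np

def loss_and_grad(w, X, y):
    z = X @ w
    p = 1.0 / (1.0 + np.exp(-z))
    loss = np.mean(np.log(1.0 + np.exp(-(2 * y - 1) * z)))   # y in {0, 1}
    return loss, X.T @ (p - y) / len(y)

def hybrid_train(X, y, pop=20, gens=30, sigma=0.5, eta=0.1, gd_steps=5):
    d = X.shape[1]
    population = np.random.randn(pop, d) * sigma
    for _ in range(gens):
        for k in range(pop):                                  # local gradient refinement
            for _ in range(gd_steps):
                _, g = loss_and_grad(population[k], X, y)
                population[k] -= eta * g
        losses = np.array([loss_and_grad(w, X, y)[0] for w in population])
        elite = population[np.argsort(losses)[: pop // 2]]    # selection of best half
        children = elite + np.random.randn(*elite.shape) * sigma   # Gaussian mutation
        population = np.vstack([elite, children])
    losses = [loss_and_grad(w, X, y)[0] for w in population]
    return population[int(np.argmin(losses))]
```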
Training Validation Testing
"... Do you want to evaluate a classifier or a learning algorithm? Do you want to predict accuracy or predict which one is better? Do you have a lot of data or not much? Are you interested in one domain or in understanding accuracy across domains? Monday, 20 February 12For really large amounts of data... ..."
Abstract
in the training algorithm (e.g., size of neural network, gradient descent step size, k in k-nearest neighbour, etc.) A: Use a validation set.
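The advice in this snippet ("use a validation set") can be made concrete as below: hold out a validation split to pick a training-algorithm setting (here the SGD step size), and touch the test split only once at the end. Dataset, model, and candidate step sizes are arbitrary illustrative choices.

```python
# Train / validation / test split for picking a gradient-descent step size (illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

best_eta, best_acc = None, -1.0
for eta in [1e-3, 1e-2, 1e-1]:                         # candidate step sizes
    clf = SGDClassifier(learning_rate="constant", eta0=eta, random_state=0)
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_val, clf.predict(X_val))    # model selection on validation only
    if acc > best_acc:
        best_eta, best_acc = eta, acc

final = SGDClassifier(learning_rate="constant", eta0=best_eta, random_state=0)
final.fit(np.vstack([X_train, X_val]), np.concatenate([y_train, y_val]))
print("test accuracy:", accuracy_score(y_test, final.predict(X_test)))
```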
Automatic learning rate maximization by on-line estimation of the Hessian's eigenvectors
1993
"... We propose a very simple, and well principled way of computing the optimal step size in gradient descent algorithms. The on-line version is very efficient computationally, and is applicable to large backpropagation networks trained on large data sets. The main ingredient is a technique for estimatin ..."
Abstract - Cited by 27 (3 self)
We propose a very simple, and well principled way of computing the optimal step size in gradient descent algorithms. The on-line version is very efficient computationally, and is applicable to large backpropagation networks trained on large data sets. The main ingredient is a technique
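A sketch of the core ingredient mentioned in the abstract: estimate the largest Hessian eigenvalue by power iteration, using finite differences of the gradient to obtain Hessian-vector products, and set the step size to roughly the inverse of that eigenvalue. grad(w) is a hypothetical user-supplied gradient function, and the constants are illustrative.

```python
# Largest Hessian eigenvalue via power iteration on finite-difference Hessian-vector products.
import numpy as np

def largest_eigenvalue(grad, w, iters=20, eps=1e-4):
    """grad: function returning the loss gradient at parameter vector w (assumed)."""
    v = np.random.randn(w.size)
    v /= np.linalg.norm(v)
    g0 = grad(w)
    lam = 0.0
    for _ in range(iters):
        hv = (grad(w + eps * v) - g0) / eps     # H v by finite differences of the gradient
        lam = np.linalg.norm(hv)                # current eigenvalue-magnitude estimate
        v = hv / (lam + 1e-12)
    return lam

# usage: eta = 1.0 / largest_eigenvalue(grad, w) gives roughly the largest stable
# step size for gradient descent on a locally quadratic loss.
```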