Results 1–9 of 9
Extremely Randomized Trees
 MACHINE LEARNING, 2003
Abstract

Cited by 130 (34 self)
This paper presents a new learning algorithm based on decision tree ensembles. In contrast to the classical decision tree induction method, the trees of the ensemble are built by selecting the tests during their induction fully at random. This extreme ...
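The fully random test selection the abstract describes can be sketched roughly as follows; this is a minimal illustration, not the paper's implementation, and the function names, the Gini score, and the candidate count are my own illustrative choices:

```python
import random

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def pick_random_split(X, y, n_candidates=3, rng=random):
    """Extra-Trees-style split selection: draw a few fully random
    (attribute, cut-point) candidates and keep the one with the lowest
    weighted Gini impurity -- no exhaustive search over thresholds."""
    n_features = len(X[0])
    best = None
    for _ in range(n_candidates):
        attr = rng.randrange(n_features)
        values = [row[attr] for row in X]
        cut = rng.uniform(min(values), max(values))   # random cut-point
        left = [y[i] for i, row in enumerate(X) if row[attr] < cut]
        right = [y[i] for i, row in enumerate(X) if row[attr] >= cut]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if best is None or score < best[0]:
            best = (score, attr, cut)
    return best  # (weighted impurity, attribute index, cut-point)
```

A full ensemble would grow many trees this way on the whole training set, averaging their votes at prediction time.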
Fast SVM training algorithm with decomposition on very large data sets
 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2005
Abstract

Cited by 20 (2 self)
Training a support vector machine on a huge data set with thousands of classes is a challenging problem. This paper proposes an efficient algorithm to solve this problem. The key idea is to introduce a parallel optimization step to quickly remove most of the non-support vectors, where block-diagonal matrices are used to approximate the original kernel matrix so that the original problem can be split into hundreds of subproblems, which can be solved more efficiently. In addition, effective strategies such as kernel caching and efficient computation of the kernel matrix are integrated to speed up the training process. Our analysis of the proposed algorithm shows that its time complexity grows linearly with the number of classes and the size of the data set. Experiments investigate many appealing properties of the proposed algorithm and show that it scales much better than LIBSVM, SVMlight, and SVMTorch. Moreover, good generalization performance has also been achieved on several large databases.
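The block-diagonal approximation at the heart of this decomposition can be sketched as follows; this is a simplified illustration under my own assumptions (consecutive equal-size blocks, an RBF kernel), not the paper's code:

```python
import math

def rbf(x, z, gamma=1.0):
    """Gaussian RBF kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def block_diagonal_kernel(X, n_blocks, gamma=1.0):
    """Approximate the full kernel matrix by a block-diagonal one:
    kernel values are computed only inside each block of consecutive
    examples; cross-block entries are left at zero, so the big QP
    decomposes into one independent subproblem per block."""
    n = len(X)
    size = math.ceil(n / n_blocks)
    K = [[0.0] * n for _ in range(n)]
    for b in range(n_blocks):
        idx = range(b * size, min((b + 1) * size, n))
        for i in idx:
            for j in idx:
                K[i][j] = rbf(X[i], X[j], gamma)
    return K
```

Each diagonal block then defines a small SVM subproblem whose non-support vectors can be discarded before the final pass over the surviving examples.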
Application of Support Vector Machines for recognition of handwritten Arabic/Persian digits
 PROCEEDINGS OF THE SECOND CONFERENCE ON MACHINE VISION AND IMAGE PROCESSING & APPLICATIONS (MVIP), 2003
Abstract

Cited by 14 (1 self)
A new method for recognition of isolated handwritten Arabic/Persian digits is presented. The method is based on Support Vector Machines (SVMs) and a new approach to feature extraction. Each digit is considered from four different views, and from each view 16 features are extracted, yielding 64 features in total. Using these features, multiple SVM classifiers are trained to separate the different digit classes. The CENPARMI Indian (Arabic/Persian) handwritten digit database is used for training and testing the SVM classifiers; based on this database, differences between Arabic and Persian digits in digit recognition are shown. The database provides 7390 training samples and 3035 test samples drawn from real-life data. Experiments show that the proposed features yield a very good recognition rate of 94.14% with Support Vector Machines, compared with 91.25% obtained by an MLP neural network classifier using the same features and test set.
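One plausible reading of "four views, 16 features each" is a profile-based extraction: from each side of a 16x16 binary digit image, measure how far in the first foreground pixel lies on each of the 16 rows or columns. The sketch below is my illustrative guess at such a scheme, not the paper's actual features:

```python
def profile_features(img):
    """Sketch of 'four views x 16 features': for an n x n binary image
    (e.g. 16x16), measure from each side (left, right, top, bottom) the
    distance to the first foreground pixel in each row/column, giving
    4 * n features (64 for n = 16)."""
    n = len(img)  # assume a square binary image

    def first_on(seq):
        for d, v in enumerate(seq):
            if v:
                return d
        return n  # empty row/column: maximum distance

    left   = [first_on(row)       for row in img]
    right  = [first_on(row[::-1]) for row in img]
    cols   = [[img[r][c] for r in range(n)] for c in range(n)]
    top    = [first_on(col)       for col in cols]
    bottom = [first_on(col[::-1]) for col in cols]
    return left + right + top + bottom
```

The 64-dimensional vectors produced this way would then feed one-vs-rest or pairwise SVM classifiers.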
Iterative Single Data Algorithm for Training Kernel Machines from Huge Data Sets: Theory and Performance
 SUPPORT VECTOR MACHINES: THEORY AND APPLICATIONS, SPRINGER-VERLAG, STUDIES IN FUZZINESS AND SOFT COMPUTING, 2005
Abstract

Cited by 8 (0 self)
The chapter introduces the latest developments and results of the Iterative Single Data Algorithm (ISDA) for solving large-scale support vector machine (SVM) problems. First, the equivalence of the Kernel AdaTron (KA) method (originating from a gradient-ascent learning approach) and the Sequential Minimal Optimization (SMO) learning algorithm (based on an analytic quadratic programming step for a model without bias term b) in designing SVMs with positive definite kernels is shown for both nonlinear classification and nonlinear regression tasks. The chapter also introduces the classic Gauss-Seidel (GS) procedure and its derivative, the successive over-relaxation (SOR) algorithm, as viable (and usually faster) training algorithms. The convergence theorem for these related iterative algorithms is proven. The second part of the chapter presents the effects and the methods of incorporating an explicit bias term b into the ISDA. The algorithms shown here implement a single-training-data-based iteration routine (a.k.a. per-pattern learning), which makes the proposed ISDAs remarkably quick. The final solution in the dual domain is not approximate: it is the optimal set of dual variables that would have been obtained by any of the existing, proven QP solvers, if only they could deal with huge data sets.
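The per-pattern dual update underlying the KA/SMO-without-b equivalence can be sketched as a clipped coordinate-ascent step on the SVM dual; this is a minimal illustration under my own defaults (step size 1/K[i][i], fixed epoch count), not the chapter's tuned algorithm:

```python
def isda_fit(K, y, C=1.0, eta=None, n_epochs=50):
    """Single-data-point iteration sketch for an SVM with no bias term:
    sweep over the examples, updating one dual variable at a time by a
    gradient-ascent step on the dual objective, clipped to the box
    [0, C]. Assumes K[i][i] > 0 (true for e.g. RBF kernels)."""
    n = len(y)
    alpha = [0.0] * n
    for _ in range(n_epochs):
        for i in range(n):
            # decision value for example i under the current alphas
            f_i = sum(alpha[j] * y[j] * K[i][j] for j in range(n))
            step = eta if eta is not None else 1.0 / K[i][i]
            alpha[i] += step * (1.0 - y[i] * f_i)   # dual gradient step
            alpha[i] = min(C, max(0.0, alpha[i]))   # clip to the box
    return alpha
```

Because each step touches a single alpha, the inner loop is cheap and cache-friendly, which is what makes per-pattern learning fast on large data.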
Fast modular network implementation for support vector machines
 IEEE TRANSACTIONS ON NEURAL NETWORKS, 2005
Abstract

Cited by 6 (2 self)
Support vector machines (SVMs) have been used extensively. However, SVMs are known to face difficulty in solving large, complex problems due to the intensive computation involved in their training algorithms, which are at least quadratic in the number of training examples. This paper proposes a new, simple, and efficient network architecture consisting of several SVMs, each trained on a small subregion of the whole data sampling space, and the same number of simple neural quantizer modules, which inhibit the outputs of all the remote SVMs and allow only a single local SVM to fire (produce actual output) at any time. In principle, this region-computing-based modular network method can significantly reduce the learning time of SVM algorithms without sacrificing much generalization performance. Experiments on several large, complex real-world benchmark problems demonstrate that our method can be significantly faster than single SVMs while losing little generalization performance.
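The routing idea can be sketched with a nearest-centroid gate standing in for the paper's neural quantizer modules; the region centers and local models here are hypothetical placeholders, and the paper's quantizers are learned rather than hand-set:

```python
def nearest_region(x, centers):
    """Quantizer stand-in: pick the region whose center is closest to x,
    so exactly one local expert 'fires' for any input."""
    dists = [sum((a - b) ** 2 for a, b in zip(x, c)) for c in centers]
    return dists.index(min(dists))

def modular_predict(x, centers, local_models):
    """Route x to the single local model owning its region; all other
    (remote) models stay silent, as in the modular SVM architecture."""
    return local_models[nearest_region(x, centers)](x)
```

Training cost drops because each local SVM sees only its own subregion's examples, and quadratic training cost on n/k examples per block is far cheaper than on all n.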
Bias term b in SVMs again
 in ESANN'2004 proceedings - European Symposium on Artificial Neural Networks, 2004
Bit Reduction Support Vector Machine
Abstract

Cited by 1 (1 self)
Support vector machines are very accurate classifiers and have been widely used in many applications. However, the training time, and to a lesser extent the prediction time, of support vector machines on very large data sets can be very long. This paper presents a fast compression method to scale up support vector machines to large data sets. A simple bit reduction method is applied to reduce the cardinality of the data by weighting representative examples. We then develop support vector machines trained on the weighted data. Experiments indicate that the bit reduction support vector machine produces a significant reduction in the time required for both training and prediction, with minimal loss in accuracy. It is also shown to be more accurate than random sampling when the data is not overcompressed.
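The compression step can be sketched as follows; this assumes integer-valued features and is my own minimal reading of "bit reduction", not the paper's exact procedure:

```python
from collections import Counter

def bit_reduce(X, bits_dropped=2):
    """Bit-reduction sketch: quantize each (integer) feature by dropping
    its low-order bits, then merge identical quantized examples into a
    single weighted representative. Each weight counts how many original
    examples the representative stands for; a weighted SVM would then be
    trained on the much smaller representative set."""
    counts = Counter(
        tuple(v >> bits_dropped for v in row) for row in X
    )
    reps = [[v << bits_dropped for v in key] for key in counts]
    weights = list(counts.values())
    return reps, weights
```

Dropping more bits compresses harder but risks the "overcompressed" regime the abstract warns about, where merged examples blur the class boundary.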
The Nonlinear Classifier
Abstract
Based on the materials of the School-Seminar "Modern Problems of Neuroinformatics", Moscow, 2007. UDC 001(06)+004.032.26 (06) Neural Networks; BBK 72я5+32.818я5
IX All-Russian Scientific and Technical Conference: Lectures on Neuroinformatics
Abstract
This book publishes the texts of lectures delivered at the School-Seminar "Modern Problems of Neuroinformatics", held on 24-26 January 2007 at MEPhI as part of the IX All-Russian Conference "Neuroinformatics-2007". The lecture materials address a number of problems relevant to the current stage of development of neuroinformatics, including its interaction with other scientific and technical fields. Managing editor: Yu. V. Tyumentsev, Candidate of Technical Sciences.