Results 1–10 of 23
A comparative study on content-based music genre classification
In Proc. SIGIR, 2003
Cited by 109 (15 self)
Abstract:
Content-based music genre classification is a fundamental component of music information retrieval systems and has been gaining importance and enjoying a growing amount of attention with the emergence of digital music on the Internet. Currently little work has been done on automatic music genre classification, and in addition, the reported classification accuracies are relatively low. This paper proposes a new feature extraction method for music genre classification, DWCHs. DWCHs capture the local and global information of music signals simultaneously by computing histograms of their Daubechies wavelet coefficients. The effectiveness of this new feature and of previously studied features is compared using various machine learning classification algorithms, including Support Vector Machines and Linear Discriminant Analysis. It is demonstrated that the use of DWCHs significantly improves the accuracy of music genre classification.
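The DWCH idea above (histograms of wavelet coefficients at each decomposition level) can be sketched in a few lines. This is an illustrative sketch only: the Haar wavelet (the simplest Daubechies wavelet) stands in for the higher-order Daubechies wavelets used in the paper, so no external wavelet library is needed, and the bin count and level count are arbitrary choices.

```python
# Sketch of a DWCH-style feature vector: histogram the wavelet detail
# coefficients of a signal at each decomposition level. Haar (Daubechies-1)
# stands in here for the paper's higher-order Daubechies wavelets.
import math

def haar_step(signal):
    """One level of the Haar transform: pairwise averages and differences."""
    avg = [(signal[i] + signal[i + 1]) / math.sqrt(2)
           for i in range(0, len(signal) - 1, 2)]
    diff = [(signal[i] - signal[i + 1]) / math.sqrt(2)
            for i in range(0, len(signal) - 1, 2)]
    return avg, diff

def coefficient_histogram(coeffs, n_bins=4):
    """Normalised histogram of coefficient values (the 'H' in DWCH)."""
    lo, hi = min(coeffs), max(coeffs)
    width = (hi - lo) / n_bins or 1.0
    counts = [0] * n_bins
    for c in coeffs:
        counts[min(int((c - lo) / width), n_bins - 1)] += 1
    total = len(coeffs)
    return [c / total for c in counts]

def dwch_features(signal, levels=3, n_bins=4):
    """Concatenate per-level detail-coefficient histograms into one vector."""
    features = []
    current = list(signal)
    for _ in range(levels):
        current, detail = haar_step(current)
        features.extend(coefficient_histogram(detail, n_bins))
    return features

feats = dwch_features([math.sin(0.1 * t) for t in range(256)])
```

In a real system these histogram features would be concatenated with the timbral features the paper compares against and fed to a classifier such as an SVM.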
Extracting Shared Subspace for Multi-label Classification
Cited by 51 (2 self)
Abstract:
Multi-label problems arise in various domains such as multi-topic document categorization and protein function prediction. One natural way to deal with such problems is to construct a binary classifier for each label, resulting in a set of independent binary classification problems. Since the multiple labels share the same input space, and the semantics conveyed by different labels are usually correlated, it is essential to exploit the correlation information contained in different labels. In this paper, we consider a general framework for extracting shared structures in multi-label classification. In this framework, a common subspace is assumed to be shared among multiple labels. We show that the optimal solution to the proposed formulation can be obtained by solving a generalized eigenvalue problem, though the problem is non-convex. For high-dimensional problems, direct computation of the solution is expensive, and we develop an efficient algorithm for this case. One appealing feature of the proposed framework is that it includes several well-known algorithms as special cases, thus elucidating their intrinsic relationships. We have conducted extensive experiments on eleven multi-topic web page categorization tasks, and the results demonstrate the effectiveness of the proposed formulation in comparison with several representative algorithms.
A Kernel-Based Two-Class Classifier for Imbalanced Data Sets
IEEE Trans. Neural Networks, 2007
Parallelization of the Incremental Proximal Support Vector Machine Classifier using a Heap-based Tree Topology
In Parallel and Distributed Computing for Machine Learning, in conjunction with the 14th European Conference on Machine Learning (ECML’03) and the 7th European Conference on Principles and Practice of Knowledge Discovery in Databases (PKDD’03), 2003
Cited by 10 (1 self)
Abstract:
Support Vector Machines (SVMs) are an efficient data mining approach for classification, clustering and time series analysis. In recent years, a tremendous growth in the amount of data gathered has shifted the focus of SVM classifier algorithms from providing accurate results to enabling incremental (and decremental) learning with new data (or unlearning old data) without the need for computationally costly retraining on the old data. In this paper we propose two efficient parallelized algorithms, based on heaps of processing nodes, for classification with the incremental proximal SVM introduced by Fung and Mangasarian.
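The incremental proximal SVM referred to above reduces training to a small linear system whose matrices are sums over data rows, which is what makes folding in new chunks cheap. The sketch below shows that idea in its simplest linear, two-class form (following Fung and Mangasarian's proximal formulation); class and parameter names are illustrative, and the parallel heap topology of the paper is not modelled.

```python
# Minimal incremental proximal SVM: the separator solves
# (I/nu + E'E) [w; gamma] = E'y with E = [A, -e], and both E'E and E'y
# are running sums over rows, so new chunks fold in without retraining.
import numpy as np

class IncrementalPSVM:
    def __init__(self, n_features, nu=1.0):
        d = n_features + 1                 # weights plus bias term
        self.nu = nu
        self.EtE = np.zeros((d, d))        # running sum of E'E
        self.Ety = np.zeros(d)             # running sum of E'y

    def partial_fit(self, A, y):
        """Fold in a new chunk: rows A (n x m), labels y in {-1, +1}."""
        E = np.hstack([A, -np.ones((A.shape[0], 1))])
        self.EtE += E.T @ E
        self.Ety += E.T @ y
        return self

    def decision(self, A):
        d = self.EtE.shape[0]
        sol = np.linalg.solve(np.eye(d) / self.nu + self.EtE, self.Ety)
        w, gamma = sol[:-1], sol[-1]
        return A @ w - gamma

rng = np.random.default_rng(1)
X1 = rng.standard_normal((40, 2)) + 2.0   # synthetic class +1
X2 = rng.standard_normal((40, 2)) - 2.0   # synthetic class -1
clf = IncrementalPSVM(n_features=2)
clf.partial_fit(X1[:20], np.ones(20)).partial_fit(X2[:20], -np.ones(20))
clf.partial_fit(X1[20:], np.ones(20)).partial_fit(X2[20:], -np.ones(20))
preds = np.sign(clf.decision(np.vstack([X1, X2])))
```

Because only the (d+1)×(d+1) statistics are kept, memory is independent of the number of rows seen, which is the property the parallel algorithms in the paper exploit.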
A framework for kernel-based multicategory classification
2005
Cited by 7 (1 self)
Abstract:
A geometric framework for understanding multicategory classification is introduced, through which many existing ‘all-together’ algorithms can be understood. The structure allows the derivation of a parsimonious optimisation function, which is a direct extension of the binary …
Multicategory Incremental Proximal Support Vector Classifiers
In Proceedings of the 7th International Conference on Knowledge-Based Information & Engineering Systems (KES’2003), number 2773 in Lecture Notes in Artificial Intelligence (LNAI), 2003
Cited by 6 (3 self)
Abstract:
Support Vector Machines (SVMs) are an efficient data mining approach for classification, clustering and time series analysis. In recent years, a tremendous growth in the amount of data gathered has shifted the focus of SVM classifier algorithms from providing accurate results to enabling incremental (and decremental) learning with new data (or unlearning old data) without the need for computationally costly retraining on the old data. In this paper we propose an efficient algorithm for multicategory classification with the incremental proximal SVM introduced by Fung and Mangasarian.
Multiclass proximal support vector machines
J. Comput. Graph. Statist., 2006
Cited by 5 (1 self)
Abstract:
This article proposes the multiclass proximal support vector machine (MPSVM) classifier, which extends the binary PSVM to the multiclass case. Unlike the one-versus-rest approach that constructs the decision rule based on multiple binary classification tasks, the proposed method considers all classes simultaneously and has better theoretical properties and empirical performance. We formulate the MPSVM as a regularization problem in the reproducing kernel Hilbert space and show that it implements the Bayes rule for classification. In addition, the MPSVM can handle equal and unequal misclassification costs in a unified framework. We suggest an efficient algorithm to implement the MPSVM by solving a system of linear equations. This algorithm requires much less computational effort than solving the standard SVM, which often requires quadratic programming and can be slow for large problems. We also provide an alternative and more robust algorithm for ill-posed problems. The effectiveness of the MPSVM is demonstrated by both simulation studies and applications to cancer classification using microarray data.
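The key computational claim above (a multiclass proximal classifier obtained from linear equations rather than quadratic programming) can be illustrated with a simplified linear stand-in for the article's RKHS formulation: one proximal separator per class, all obtained from a single shared linear system. All names and data here are illustrative.

```python
# Simplified multiclass proximal classifier: per-class targets in {-1, +1}
# and one shared (m+1) x (m+1) linear solve, in place of a QP. This is a
# linear one-versus-rest sketch, not the article's exact RKHS formulation.
import numpy as np

def fit_mpsvm_linear(X, y, n_classes, nu=1.0):
    """Return an (m+1) x k matrix of per-class proximal separators."""
    n, m = X.shape
    E = np.hstack([X, -np.ones((n, 1))])
    lhs = np.eye(m + 1) / nu + E.T @ E        # shared across all classes
    T = np.where(np.eye(n_classes)[y] == 1, 1.0, -1.0)   # +/-1 class targets
    return np.linalg.solve(lhs, E.T @ T)      # one solve, k right-hand sides

rng = np.random.default_rng(4)
centers = np.array([[0, 0], [5, 0], [0, 5]], dtype=float)
X = np.vstack([rng.standard_normal((30, 2)) + c for c in centers])
y = np.repeat(np.arange(3), 30)

W = fit_mpsvm_linear(X, y, n_classes=3)
scores = np.hstack([X, -np.ones((X.shape[0], 1))]) @ W
acc = (scores.argmax(axis=1) == y).mean()
```

The whole fit is a single (m+1)-dimensional solve with k right-hand sides, which is the computational saving the abstract emphasises relative to QP-based SVMs.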
Incremental and Decremental Proximal Support Vector Classification using Decay Coefficients
In Proceedings of the 5th International Conference on Data Warehousing and Knowledge Discovery (DaWaK’03, forthcoming). Lecture Notes in Artificial Intelligence, Springer-Verlag, 2003
Cited by 5 (2 self)
Abstract:
This paper presents an efficient approach for supporting decremental learning in incremental proximal support vector machines (SVMs). The presented decremental algorithm, based on decay coefficients, is compared with an existing window-based decremental algorithm and is shown to perform at a similar level of accuracy while providing significantly better computational performance.
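The decay-coefficient idea above can be sketched directly on the proximal SVM's running statistics: instead of dropping old rows with a sliding window, the sufficient statistics are multiplied by a factor 0 < d ≤ 1 before each update, so stale data fades gradually. The decay value and data below are illustrative, not the paper's.

```python
# Decremental learning via exponential forgetting: scale the proximal
# SVM's running sums E'E and E'y by a decay factor before folding in
# each new chunk, so old chunks lose influence gradually.
import numpy as np

def decayed_update(EtE, Ety, A, y, decay=0.9):
    """One incremental step with exponential forgetting of old data."""
    E = np.hstack([A, -np.ones((A.shape[0], 1))])
    EtE = decay * EtE + E.T @ E
    Ety = decay * Ety + E.T @ y
    return EtE, Ety

d = 3                                      # 2 features + bias term
EtE, Ety = np.zeros((d, d)), np.zeros(d)
rng = np.random.default_rng(2)
for step in range(5):
    # Alternating synthetic chunks of the two classes.
    A = rng.standard_normal((10, 2)) + (1 if step % 2 == 0 else -1)
    y = np.ones(10) if step % 2 == 0 else -np.ones(10)
    EtE, Ety = decayed_update(EtE, Ety, A, y)

w_gamma = np.linalg.solve(np.eye(d) + EtE, Ety)   # current separator
```

Compared with a window, no per-chunk history needs to be stored at all; the decay folds "unlearning" into the same constant-size statistics, which is the computational advantage the abstract reports.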
Multiclass Support Vector Classification via Regression
Cited by 1 (0 self)
Abstract:
The problem of multiclass classification is considered and resolved through the multiresponse linear regression approach. Scores are used to encode the class labels into multivariate responses. The regression of scores on input attributes is used to extract a low-dimensional linear discriminant subspace. The classification training and prediction are carried out in this low-dimensional subspace. A test point is classified to the nearest class centroid of fitted values, measured by the Mahalanobis distance. The multiresponse linear regression can be extended to a nonlinear one by the kernel trick. The regression approach provides a simple alternative for multiclass support vector classification. Also discussed in this article are issues of encoding, decoding, and the notions of equivalence of codes and scores in this regression context. Two support vector regression algorithms, the regularized least squares and the smooth ε-insensitive support vector regression, are used as our choice of regression solvers for the numerical experiments. Results show that the regression approach is a competent alternative to multiclass support vector classification.
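The pipeline described above (encode labels as scores, regress, classify to the nearest centroid of fitted values) can be sketched with ordinary least squares. This sketch uses indicator score encoding and plain Euclidean distance as stand-ins for the article's score codes and Mahalanobis distance, and least squares in place of its support vector regression solvers.

```python
# Multiclass classification via multiresponse regression: regress
# indicator score vectors on the inputs, then assign each point to the
# nearest class centroid in the fitted-score space.
import numpy as np

rng = np.random.default_rng(3)
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.vstack([rng.standard_normal((30, 2)) + c for c in centers])
y = np.repeat(np.arange(3), 30)

Y = np.eye(3)[y]                                  # indicator score encoding
Xb = np.hstack([X, np.ones((X.shape[0], 1))])     # add intercept column
B, *_ = np.linalg.lstsq(Xb, Y, rcond=None)        # multiresponse regression
F = Xb @ B                                        # fitted score vectors

# Class centroids in the fitted-score (discriminant) space.
centroids = np.vstack([F[y == k].mean(axis=0) for k in range(3)])

def predict(X_new):
    Fn = np.hstack([X_new, np.ones((X_new.shape[0], 1))]) @ B
    dists = ((Fn[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

train_acc = (predict(X) == y).mean()
```

Swapping the least-squares fit for a kernelised regression gives the nonlinear extension the abstract mentions, without changing the centroid-based decision step.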
Use of Multicategory Proximal SVM for Data Set Reduction
Cited by 1 (0 self)
Abstract:
We present a tutorial introduction to Support Vector Machines (SVMs) and try to show, using intuitive arguments, why SVMs tend to perform so well on a variety of challenging problems. We then discuss the quadratic optimization problem that arises as a result of the SVM formulation, and describe a few computationally cheaper alternative formulations that have been developed recently. We go on to describe the Multicategory Proximal Support Vector Machine (MPSVM) in more detail, and propose a method for data set reduction through effective use of the MPSVM. The linear MPSVM formulation is used in an iterative manner to identify the outliers in the data set and eliminate them. A k-Nearest Neighbor (kNN) classifier is able to classify points using this reduced data set without significant loss of accuracy. We also present geometrically motivated arguments to justify our approach. Experiments on a few publicly available OCR data sets validate our claims.
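The reduction scheme described above can be sketched as: fit a cheap proximal classifier, treat the points it misclassifies as outliers and drop them, repeat, then classify with kNN from the reduced set. The sketch below uses a binary linear proximal classifier as a stand-in for the MPSVM, with illustrative data and parameters.

```python
# Data set reduction with a proximal classifier: iteratively drop points
# the classifier gets wrong, then run kNN on the reduced set. Binary
# linear proximal SVM stands in here for the multicategory MPSVM.
import numpy as np

def proximal_fit(X, y, nu=1.0):
    E = np.hstack([X, -np.ones((X.shape[0], 1))])
    return np.linalg.solve(np.eye(X.shape[1] + 1) / nu + E.T @ E, E.T @ y)

def proximal_predict(X, w):
    return np.sign(np.hstack([X, -np.ones((X.shape[0], 1))]) @ w)

def reduce_dataset(X, y, rounds=3):
    """Iteratively eliminate points the proximal classifier misclassifies."""
    for _ in range(rounds):
        keep = proximal_predict(X, proximal_fit(X, y)) == y
        X, y = X[keep], y[keep]
    return X, y

def knn_predict(X_train, y_train, X_new, k=3):
    """Plain kNN by majority vote of +/-1 labels (odd k avoids ties)."""
    d = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return np.sign(y_train[nearest].sum(axis=1))

rng = np.random.default_rng(5)
X = np.vstack([rng.standard_normal((50, 2)) + 2.0,
               rng.standard_normal((50, 2)) - 2.0])
y = np.concatenate([np.ones(50), -np.ones(50)])

Xr, yr = reduce_dataset(X, y)
acc = (knn_predict(Xr, yr, X) == y).mean()
```

The proximal fit costs only a small linear solve per round, which is what makes it attractive as the filter in front of the more expensive kNN step.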