Linear Dimensionality Reduction for Multi-label Classification

by Shuiwang Ji, Jieping Ye
Results 11 - 15 of 15

Siemens Medical Solutions and

by Shuiwang Ji, Lei Tang, Jieping Ye
Abstract: Multi-label problems arise in various domains such as multi-topic document categorization, protein function prediction, and automatic image annotation. One natural way to deal with such problems is to construct a binary classifier for each label, resulting in a set of independent binary classification problems. Since multiple labels share the same input space, and the semantics conveyed by different labels are usually correlated, it is essential to exploit the correlation information contained in different labels. In this paper, we consider a general framework for extracting shared structures in multi-label classification. In this framework, a common subspace is assumed to be shared among multiple labels. We show that the optimal solution to the proposed formulation can be obtained by solving a generalized eigenvalue problem, though the problem is nonconvex. For high-dimensional problems, direct computation of the solution is expensive, and we develop an efficient algorithm for this case. One appealing feature of the proposed framework is that it includes several well-known algorithms as special cases, thus elucidating their intrinsic relationships. We further show that the proposed framework can be extended to the kernel-induced feature space. We have conducted extensive experiments on multi-topic web page categorization and automatic gene expression pattern image annotation tasks, and the results demonstrate the effectiveness of the proposed framework.
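The abstract's core computation, a shared subspace obtained by solving a generalized eigenvalue problem, can be sketched as follows. The specific matrices `A` and `B` below are a CCA-like choice assumed purely for illustration; they are not the paper's exact objective.

```python
# Hedged sketch: a shared subspace for multi-label data from a generalized
# eigenvalue problem. A and B are an assumed CCA-like choice, not the
# paper's exact formulation.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, d, k, r = 200, 20, 5, 3                      # samples, features, labels, subspace dim

X = rng.standard_normal((n, d))                 # feature matrix
Y = (rng.random((n, k)) < 0.3).astype(float)    # binary label-indicator matrix

lam = 1e-2                                      # ridge term keeps B positive definite
A = X.T @ Y @ Y.T @ X                           # label-correlated scatter (assumed form)
B = X.T @ X + lam * np.eye(d)

# Generalized eigenproblem A w = lambda B w; eigenvalues come back ascending,
# so the last r eigenvectors span the shared subspace.
vals, vecs = eigh(A, B)
W = vecs[:, -r:]                                # d x r projection
Z = X @ W                                       # reduced representation
print(Z.shape)
```

All k labels share the single projection `W`, which is the sense in which the subspace is "shared"; per-label classifiers would then be trained on `Z`.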

Multi-label Learning via Structured Decomposition and Group Sparsity

by Tianyi Zhou, Dacheng Tao
Abstract not found

Citation Context:

...dependence between feature space and label space, and provides a data preprocessing step for other multi-label learning methods. A linear dimensionality reduction method for multi-label data is proposed in [12]. In [13], multi-label prediction is formulated as a sparse signal recovery problem. However, the problem size always increases significantly when multi-label learning is decomposed into a set of binary classification problems...
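The sparse-recovery formulation mentioned here (attributed to [13]) treats a sparse binary label vector as a signal: it is compressed by a random projection and reconstructed greedily. The dimensions, the Gaussian measurement matrix, and the small orthogonal matching pursuit routine below are illustrative assumptions, not the cited paper's exact setup.

```python
# Hedged sketch: multi-label prediction as sparse signal recovery.
# A sparse label vector y is compressed to b = Phi @ y and recovered
# with a minimal orthogonal matching pursuit (OMP) routine.
import numpy as np

rng = np.random.default_rng(1)
k, m, s = 50, 12, 2                              # labels, measurements, active labels

y = np.zeros(k)
y[rng.choice(k, size=s, replace=False)] = 1.0    # sparse binary label vector
Phi = rng.standard_normal((m, k)) / np.sqrt(m)   # random compression matrix
b = Phi @ y                                      # compressed label representation

def omp(Phi, b, s):
    """Greedy OMP: pick s atoms by correlation, refit coefficients each step."""
    residual, idx = b.copy(), []
    coef = np.zeros(0)
    for _ in range(s):
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, idx], b, rcond=None)
        residual = b - Phi[:, idx] @ coef
    x = np.zeros(Phi.shape[1])
    x[idx] = coef
    return x

y_hat = omp(Phi, b, s)
print(np.flatnonzero(y_hat))
```

The appeal of this scheme is exactly the point the citation context raises: instead of k independent binary problems, only m << k regression targets need to be learned, with recovery pushed to decoding time.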

The role of dimensionality reduction in linear classification

by Weiran Wang, 2014
Abstract not found

A Reconstruction Error Formulation for Semi-Supervised Multi-task and Multi-view Learning

by Buyue Qian, Xiang Wang, Ian Davidson
Abstract not found

Citation Context:

...function over a graph or hypergraph. However, the performance of such an approach is weakened by the lack of coupling between dimensionality reduction and the learning algorithm. To the best of our knowledge, [15] is the first attempt to connect dimensionality reduction and multi-task/label learning, but it suffers from its inability to utilize unlabeled data. Semi-supervised learning. The study of semi-supervised...

Merging SVMs with Linear Discriminant Analysis: A Combined Model

by Symeon Nikitidis, Stefanos Zafeiriou, Maja Pantic
Abstract: A key problem often encountered by many learning algorithms in computer vision dealing with high-dimensional data is the so-called “curse of dimensionality”, which arises when the available training samples are fewer than the input feature space dimensionality. To remedy this problem, we propose a joint dimensionality reduction and classification framework by formulating an optimization problem within the maximum margin class separation task. The proposed optimization problem is solved using alternating optimization, where we jointly compute the low-dimensional maximum margin projections and the separating hyperplanes in the projection subspace. Moreover, in order to reduce the computational cost of the developed optimization algorithm, we incorporate orthogonality constraints on the derived projection bases and show that the resulting combined model is an alternation between identifying the optimal separating hyperplanes and performing a linear discriminant analysis on the support vectors. Experiments on face, facial expression and object recognition validate the effectiveness of the proposed method against state-of-the-art dimensionality reduction algorithms.
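The two ingredients this abstract combines, a discriminant projection and maximum-margin hyperplanes, can be sketched by applying them sequentially rather than jointly. This is only an illustration of the components, not the authors' alternating optimization; the synthetic data and all parameter choices are assumptions.

```python
# Hedged sketch: LDA dimensionality reduction followed by a linear SVM,
# the two components the abstract's joint framework alternates between
# (applied here as a simple pipeline, not the paper's joint optimization).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# High-dimensional, small-ish sample setting: 100 features, 300 samples.
X, y = make_classification(n_samples=300, n_features=100, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

# LDA can project to at most n_classes - 1 = 2 dimensions; the SVM then
# finds maximum-margin separating hyperplanes in that subspace.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=2),
                    LinearSVC(C=1.0, max_iter=10000))
clf.fit(X, y)
print(clf.score(X, y))
```

The paper's contribution is to couple these two steps, re-estimating the projection from the current support vectors instead of fixing it up front as this pipeline does.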

Citation Context:

...e called deflation). The last hyperplane learned from the deflation approach is the final hyperplane that can be used for classification. A line of research similar to our methodology is presented in [11], where dimensionality reduction is attempted in the context of multi-label classification and the projection directions are derived by considering only binary classification problems, one for each label...
