Abstract
Multi-label problems arise in various domains such as multi-topic document categorization, protein function prediction, and automatic image annotation. One natural way to deal with such problems is to construct a binary classifier for each label, resulting in a set of independent binary classification problems. Since multiple labels share the same input space, and the semantics conveyed by different labels are usually correlated, it is essential to exploit the correlation information contained in different labels. In this paper, we consider a general framework for extracting shared structures in multi-label classification. In this framework, a common subspace is assumed to be shared among multiple labels. We show that the optimal solution to the proposed formulation can be obtained by solving a generalized eigenvalue problem, though the problem is nonconvex. For high-dimensional problems, direct computation of the solution is expensive, and we develop an efficient algorithm for this case. One appealing feature of the proposed framework is that it includes several well-known algorithms as special cases, thus elucidating their intrinsic relationships. We further show that the proposed framework can be extended to the kernel-induced feature space. We have conducted extensive experiments on multi-topic web page categorization and automatic gene expression pattern image annotation tasks, and results demonstrate ...
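The shared-structure idea in this abstract can be sketched in a few lines: train one linear classifier per label, then recover a subspace shared across labels from the correlations among their weight vectors. The SVD-based sketch below is only an illustration under assumed dimensions, not the paper's generalized-eigenvalue formulation; names such as `subspace_dim` are hypothetical.

```python
import numpy as np

# Illustrative sketch only: a shared subspace extracted from per-label
# linear classifiers via SVD. The abstract's exact formulation (a
# generalized eigenvalue problem) is not reproduced here.
rng = np.random.default_rng(0)
n_samples, n_features, n_labels, subspace_dim = 100, 20, 6, 3

X = rng.normal(size=(n_samples, n_features))
Y = (rng.normal(size=(n_samples, n_labels)) > 0).astype(float)  # multi-label targets

# One ridge-regression classifier per label (the "binary classifier per
# label" decomposition described in the abstract).
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)  # (n_features, n_labels)

# Correlated labels yield correlated weight vectors; the top left singular
# vectors of W span a subspace shared across all labels.
U, S, _ = np.linalg.svd(W, full_matrices=False)
Theta = U[:, :subspace_dim]         # shared subspace basis
X_shared = X @ Theta                # features projected onto it
print(Theta.shape, X_shared.shape)  # (20, 3) (100, 3)
```

The projected features `X_shared` could then augment each label's original feature set, which is one common way such a shared subspace is used.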
A Reconstruction Error Formulation for Semi-Supervised Multi-task and Multi-view Learning
Merging SVMs with Linear Discriminant Analysis: A Combined Model
Abstract
A key problem often encountered by many learning algorithms in computer vision dealing with high-dimensional data is the so-called “curse of dimensionality”, which arises when the available training samples are fewer than the input feature space dimensionality. To remedy this problem, we propose a joint dimensionality reduction and classification framework by formulating an optimization problem within the maximum margin class separation task. The proposed optimization problem is solved using alternating optimization, where we jointly compute the low-dimensional maximum margin projections and the separating hyperplanes in the projection subspace. Moreover, in order to reduce the computational cost of the developed optimization algorithm, we incorporate orthogonality constraints on the derived projection bases and show that the resulting combined model is an alternation between identifying the optimal separating hyperplanes and performing a linear discriminant analysis on the support vectors. Experiments on face, facial expression and object recognition validate the effectiveness of the proposed method against state-of-the-art dimensionality reduction algorithms.
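The alternation this abstract describes can be sketched as follows: repeatedly fit a separating hyperplane in a low-dimensional subspace, then apply an LDA-like update of an orthogonal projection computed on the low-margin samples. In this sketch a least-squares fit stands in for the max-margin SVM step, so it illustrates the alternation only, not the paper's exact optimization; all names and sizes are hypothetical.

```python
import numpy as np

# Sketch of the alternation: (a) project and fit a hyperplane, then
# (b) steer the projection toward the between-class direction of the
# low-margin ("support") samples, keeping the basis orthonormal.
rng = np.random.default_rng(1)
n, d, k = 200, 50, 5                       # samples, input dim, subspace dim
X = rng.normal(size=(n, d))
y = np.where(rng.normal(size=n) > 0, 1.0, -1.0)
X[y > 0] += 0.5                            # make the classes separable

P = np.linalg.qr(rng.normal(size=(d, k)))[0]   # orthonormal projection basis
for _ in range(5):
    Z = X @ P                                  # (a) project to the subspace
    w = np.linalg.lstsq(Z, y, rcond=None)[0]   #     fit a hyperplane there
    margins = y * (Z @ w)                      # (b) pick low-margin samples
    pos, neg = (margins < 1.0) & (y > 0), (margins < 1.0) & (y < 0)
    if pos.any() and neg.any():
        # LDA-like step on those samples: align one basis direction with
        # their between-class mean difference, then re-orthogonalize to
        # respect the orthogonality constraint from the abstract.
        mu_diff = X[pos].mean(axis=0) - X[neg].mean(axis=0)
        P[:, 0] = mu_diff / np.linalg.norm(mu_diff)
        P = np.linalg.qr(P)[0]

w = np.linalg.lstsq(X @ P, y, rcond=None)[0]   # hyperplane for the final P
acc = float(np.mean(np.sign(X @ P @ w) == y))
print(f"training accuracy: {acc:.2f}")
```

Restricting the projection update to low-margin samples mirrors the abstract's observation that the combined model reduces to discriminant analysis on the support vectors rather than on all training points.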