Results 1–10 of 312
Large-margin Weakly Supervised Dimensionality Reduction
"... This paper studies dimensionality reduction in a weakly supervised setting, in which the preference relationship between examples is indicated by weak cues. A novel framework is proposed that integrates two aspects of the large margin principle (angle and distance), which simultaneously encourage ..."
Exponentiated gradient algorithms for large-margin structured classification
In Advances in Neural Information Processing Systems, 2005
"... We consider the problem of structured classification, where the task is to predict a label from an input, and the label has meaningful internal structure. Our framework includes supervised training of Markov random fields and weighted context-free grammars as special cases. We describe an algorithm that solves the large-margin optimization problem defined in [12], using an exponential-family (Gibbs distribution) representation of structured objects. The algorithm is efficient—even in cases where the number of labels is exponential in size—provided that certain expectations under Gibbs distributions can ..."
Cited by 50 (9 self)
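For readers unfamiliar with the method, the exponentiated-gradient update this abstract builds on can be sketched in its simplest form on the probability simplex (a generic illustration only; the paper's structured variant operates on Gibbs-distribution representations of exponentially many labels):

```python
import math

def eg_step(w, grad, eta=0.1):
    """One exponentiated-gradient step on the probability simplex:
    scale each weight multiplicatively by exp(-eta * gradient),
    then renormalize so the weights sum to one."""
    scaled = [wi * math.exp(-eta * g) for wi, g in zip(w, grad)]
    z = sum(scaled)
    return [s / z for s in scaled]

# Coordinates with larger gradients lose mass relative to the others.
w = eg_step([0.25, 0.25, 0.25, 0.25], [1.0, 0.0, 0.0, 0.0])
```

Unlike an additive gradient step, the multiplicative update keeps the iterate non-negative and normalized by construction, which is what makes it natural for distributions over structured objects.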
Large-Margin Learning of Submodular Summarization Models
"... In this paper, we present a supervised learning approach to training submodular scoring functions for extractive multi-document summarization. By taking a structured prediction approach, we provide a large-margin method that directly optimizes a convex relaxation of the desired performance measure. ..."
Cited by 12 (1 self)
Weakly-supervised hashing in kernel space
In: Proceedings of Computer Vision and Pattern Recognition, 2010
"... The explosive growth of vision data motivates the recent studies on efficient data indexing methods such as locality-sensitive hashing (LSH). Most existing approaches perform hashing in an unsupervised way. In this paper we move one step forward and propose a supervised hashing method, i.e., the LAbel-regularized Max-margin Partition (LAMP) algorithm. The proposed method generates hash functions in a weakly-supervised setting, where a small portion of sample pairs are manually labeled as “similar” or “dissimilar”. We formulate the task as a Constrained Convex-Concave Procedure (CCCP), which ..."
Cited by 50 (4 self)
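The unsupervised LSH baseline this abstract contrasts with can be sketched as sign-random-projection hashing (a generic illustration, not the LAMP algorithm itself; the function name and parameters here are hypothetical):

```python
import random

def srp_hash(x, planes):
    """Sign-random-projection LSH: one bit per random hyperplane,
    set to 1 when the dot product with the plane's normal is
    non-negative, so nearby vectors tend to share bits."""
    return tuple(1 if sum(p * v for p, v in zip(plane, x)) >= 0 else 0
                 for plane in planes)

random.seed(0)
# 8 random Gaussian hyperplanes in 4 dimensions -> 8-bit codes.
planes = [[random.gauss(0.0, 1.0) for _ in range(4)] for _ in range(8)]
code = srp_hash([1.0, 0.5, -0.2, 0.3], planes)
```

Because only the sign of each projection matters, the code is invariant to positive rescaling of the input, and the Hamming distance between two codes approximates the angle between the underlying vectors.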
Large-Margin Metric Learning for Constrained Partitioning Problems
"... We consider unsupervised partitioning problems based explicitly or implicitly on the minimization of Euclidean distortions, such as clustering, image or video segmentation, and other change-point detection problems. We emphasize cases with specific structure, which include many practical situations ..."
Cited by 2 (1 self)
"... datasets that share the same metric. We cast the metric learning problem as a large-margin structured prediction problem, with proper definition of regularizers and losses, leading to a convex optimization problem which can be solved efficiently. Our experiments show how learning the metric can sig ..."
Large-Margin Metric Learning for Partitioning Problems
2013
"... In this paper, we consider unsupervised partitioning problems, such as clustering, image segmentation, video segmentation and other change-point detection problems. We focus on partitioning problems based explicitly or implicitly on the minimization of Euclidean distortions, which include mean-based ..."
"... datasets that share the same metric. We cast the metric learning problem as a large-margin structured prediction problem, with proper definition of regularizers and losses, leading to a convex optimization problem which can be solved efficiently with iterative techniques. We provide experiments where we ..."
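The large-margin flavor of the metric-learning objective in the two entries above can be illustrated with a pairwise hinge loss over a diagonal Mahalanobis metric (a minimal sketch under an assumed distance threshold and pair labels, not the authors' structured-prediction formulation):

```python
def pairwise_hinge_loss(w, pairs, threshold=1.0, margin=1.0):
    """Hinge loss for a diagonal metric w: similar pairs (y = +1) should
    fall a margin below the distance threshold, dissimilar pairs (y = -1)
    a margin above it; violations contribute linearly."""
    total = 0.0
    for x, z, y in pairs:
        # Squared Mahalanobis distance with diagonal weights w.
        d = sum(wi * (xi - zi) ** 2 for wi, xi, zi in zip(w, x, z))
        total += max(0.0, margin - y * (threshold - d))
    return total

pairs = [([0.0, 0.0], [0.1, 0.0], 1),   # similar pair, tiny distance
         ([0.0, 0.0], [3.0, 0.0], -1)]  # dissimilar pair, large distance
loss = pairwise_hinge_loss([1.0, 1.0], pairs)
```

Minimizing such a loss over the metric weights (with a regularizer) pulls similar pairs together and pushes dissimilar pairs apart, which is the core intuition behind the convex objectives described in these abstracts.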
Graph embedding and extension: A general framework for dimensionality reduction
IEEE Trans. Pattern Anal. Mach. Intell., 2007
"... Over the past few decades, a large family of algorithms—supervised or unsupervised; stemming from statistics or geometry theory—has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper ..."
Cited by 271 (29 self)
Large-margin Predictive Latent Subspace Learning for Multi-view Data Analysis
"... Abstract—Learning salient representations of multi-view data is an essential step in many applications such as image classification, retrieval and annotation. Standard predictive methods, such as support vector machines, often directly use all the features available without taking into consideration ..."
Cited by 7 (4 self)
"... view observations and response variables are conditionally independent given a set of latent variables. To learn the latent subspace MN, we develop a large-margin approach which jointly maximizes the data likelihood and minimizes a prediction loss on training data. Learning and inference are efficiently done with a ..."
Large-Margin Thresholded Ensembles for Ordinal Regression: Theory and Practice
"... Abstract. We propose a thresholded ensemble model for ordinal regression problems. The model consists of a weighted ensemble of confidence functions and an ordered vector of thresholds. We derive novel large-margin bounds of common error functions, such as the classification error and the absolute error. In addition to some existing algorithms, we also study two novel boosting approaches for constructing thresholded ensembles. Both our approaches not only are simpler than existing algorithms, but also have a stronger connection to the large-margin bounds. In addition, they have comparable ..."
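The thresholded-ensemble prediction rule described in this abstract can be sketched as counting how many ordered thresholds the ensemble's confidence score meets or exceeds (a minimal sketch with made-up threshold values):

```python
import bisect

def ordinal_predict(score, thresholds):
    """Predict an ordinal rank as the number of ordered thresholds the
    score meets or exceeds (thresholds must be sorted ascending)."""
    return bisect.bisect_right(thresholds, score)

# Three thresholds partition the real line into four ordered ranks.
ranks = [ordinal_predict(s, [0.0, 1.0, 2.5]) for s in (-0.5, 1.7, 3.0)]
```

The learning problem is then to fit both the ensemble producing the score and the threshold vector so that training examples clear their rank boundaries with a large margin.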
Large-margin Learning of Compact Binary Image Encodings
"... Abstract—The use of high-dimensional features has become a normal practice in many computer vision applications. The large dimension of these features is, however, a limiting factor upon the number of data points which may be effectively stored and processed. We address this problem by developing a ..."