Results 1 – 10 of 366
Linear Hinge Loss and Average Margin
, 1998
"... We describe a unifying method for proving relative loss bounds for online linear threshold classification algorithms, such as the Perceptron and the Winnow algorithms. For classification problems the discrete loss is used, i.e., the total number of prediction mistakes. We introduce a continuous loss function, called the "linear hinge loss", that can be employed to derive the updates of the algorithms. We first prove bounds w.r.t. the linear hinge loss and then convert them to the discrete loss. We introduce a notion of "average margin" of a set of examples. We show how ..."
Cited by 42 (13 self)
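The update-derivation idea in this excerpt can be illustrated with a textbook online step: the Perceptron update is a subgradient step on a hinge-type loss. This is a standard sketch, not the paper's exact linear hinge loss derivation, and the learning rate is an illustrative parameter.

```python
def perceptron_step(w, x, y, lr=1.0):
    """One online Perceptron update, viewed as a subgradient step on a
    hinge-type loss: if the margin y * <w, x> is non-positive (a mistake),
    move w toward y * x; otherwise leave w unchanged.
    Textbook sketch; lr is an illustrative parameter."""
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    if margin <= 0:
        return [wi + lr * y * xi for wi, xi in zip(w, x)]
    return list(w)
```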
SVC with Modified Hinge Loss Function
"... Support vector classification (SVC) provides a more complete description of the linear and nonlinear relationships between input vectors and classifiers. In this paper we propose to solve the optimization problem of SVC with a modified hinge loss function, which enables the use of an iterative reweighted l ..."
Smooth Hinge Classification
, 2005
"... In earlier writing [2, 1], we discussed alternate loss functions that might be used for classification. We continue our discussion here by introducing yet another loss function, the Smooth Hinge. Recall that the (Shifted) Hinge loss function is defined as Hinge(z) = max(0, 1 − z). (1) In our eyes, ..."
Cited by 1 (0 self)
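The definition quoted above, Hinge(z) = max(0, 1 − z), can be put side by side with a smoothed variant. The quadratically smoothed version below is one common choice and an assumption on my part, not necessarily the paper's exact Smooth Hinge.

```python
def hinge(z):
    """Shifted hinge loss from the excerpt: Hinge(z) = max(0, 1 - z)."""
    return max(0.0, 1.0 - z)

def smooth_hinge(z):
    """A quadratically smoothed hinge (illustrative assumption): matches the
    hinge for z >= 1 and z <= 0 up to a constant slope, with a differentiable
    quadratic join on 0 < z < 1."""
    if z >= 1.0:
        return 0.0
    if z <= 0.0:
        return 0.5 - z
    return 0.5 * (1.0 - z) ** 2
```

Both losses vanish on confidently correct margins (z ≥ 1) and grow linearly for badly wrong ones; the smoothed version additionally has a continuous derivative at z = 0 and z = 1.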
Robust truncated-hinge-loss support vector machines
, 2006
"... Abstract. With its elegant margin theory and accurate classification performance, the Support Vector Machine (SVM) has been widely applied in both machine learning and statistics. Despite its success and popularity, it still has some drawbacks in certain situations. In particular, the SVM classifier can be very sensitive to outliers in the training sample. Moreover, the number of support vectors (SVs) can be very large in many applications. To solve these problems, [WL06] proposed a new SVM variant, the robust truncated-hinge-loss SVM (RSVM), which uses a truncated hinge loss. In this paper ..."
Cited by 11 (5 self)
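The robustness idea in this excerpt, capping the hinge loss so any single outlier contributes a bounded amount, can be sketched as follows. The min-based formulation and the cap parameter s are illustrative assumptions, not the paper's exact RSVM construction.

```python
def hinge(z):
    """Standard hinge loss H(z) = max(0, 1 - z)."""
    return max(0.0, 1.0 - z)

def truncated_hinge(z, s=-1.0):
    """Hinge loss truncated at margin s (s <= 0): the loss is capped at
    1 - s, so an arbitrarily wrong outlier (z -> -inf) cannot dominate
    the objective. Illustrative sketch; s is an assumed parameter."""
    return min(hinge(z), 1.0 - s)
```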
Multiclass Boosting with Hinge Loss based on Output Coding
"... Multiclass classification is an important and fundamental problem in machine learning. A popular family of multiclass classification methods reduces the multiclass problem to binary ones based on output coding. Several multiclass boosting algorithms have been proposed to learn the coding matrix and the associated binary classifiers in a problem-dependent way. These algorithms can be unified under a sum-of-exponential loss function defined in the domain of margins (Sun et al., 2005). Instead, multiclass SVM uses another type of loss function based on hinge loss. In this paper, we present a new output-coding ..."
Cited by 9 (0 self)
L1 AND L2 REGULARIZATION FOR MULTICLASS HINGE LOSS MODELS
"... This paper investigates the relationship between the loss function, the type of regularization, and the resulting model sparsity of discriminatively trained multiclass linear models. The effects on sparsity of optimizing log loss are straightforward: L2 regularization produces very dense models while L1 regularization produces much sparser models. However, optimizing hinge loss yields more nuanced behavior. We give experimental evidence and theoretical arguments that, for a class of problems that arises frequently in natural-language processing, both L1- and L2-regularized hinge loss lead ..."
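The contrast drawn in this excerpt, an L2 versus an L1 penalty on a hinge loss objective, can be written down in a binary simplification (the paper treats the multiclass case). The function name and the binary reduction are illustrative assumptions.

```python
import numpy as np

def regularized_hinge_objective(w, X, y, lam, penalty="l2"):
    """Binary simplification of the regularized hinge objectives discussed
    in the excerpt. X: (n, d) features, y: labels in {-1, +1},
    lam: regularization weight. Illustrative sketch, not the paper's
    multiclass formulation."""
    margins = y * (X @ w)
    hinge = np.maximum(0.0, 1.0 - margins).mean()
    if penalty == "l2":
        reg = lam * np.dot(w, w)      # squared L2 norm: tends toward dense w
    else:
        reg = lam * np.abs(w).sum()   # L1 norm: sparsity-inducing
    return hinge + reg
```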
Optimizing the Classification Cost using SVMs with a Double Hinge Loss
, 2014
"... The objective of this study is to minimize the classification cost using a Support Vector Machine (SVM) classifier with a double hinge loss. Such binary classifiers have the option to reject observations when the cost of rejection is lower than that of misclassification. To train this classifier, th ..."
Classification with a reject option using a hinge loss
, 2006
"... We consider the problem of binary classification where the classifier can, for a particular cost, choose not to classify an observation. Just as in the conventional classification problem, minimization of the sample average of the cost is a difficult optimization problem. As an alternative, we propose the optimization of a certain convex loss function φ, analogous to the hinge loss used in support vector machines (SVMs). Its convexity ensures that the sample average of this surrogate loss can be efficiently minimized. We study its statistical properties. We show that minimizing the expected ..."
Cited by 40 (3 self)
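The reject-option setting quoted above can be illustrated with a minimal decision rule: classify by sign when the score is confident, abstain otherwise. The threshold and cost values below are illustrative assumptions; the paper's surrogate φ and its analysis are not reproduced here.

```python
def predict_with_reject(score, tau=0.5):
    """Classify by sign if |score| > tau, else abstain (return None).
    tau is an illustrative confidence threshold."""
    if abs(score) > tau:
        return 1 if score > 0 else -1
    return None

def average_cost(scores, labels, d=0.2, tau=0.5):
    """Empirical cost in the excerpt's setting: each rejection costs d,
    each misclassification costs 1, correct predictions cost 0.
    Rejection is only sensible when d is below the misclassification cost."""
    total = 0.0
    for s, y in zip(scores, labels):
        pred = predict_with_reject(s, tau)
        if pred is None:
            total += d
        elif pred != y:
            total += 1.0
    return total / len(scores)
```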
Collective Activity Detection using Hinge-loss Markov Random Fields
"... We propose hinge-loss Markov random fields (HL-MRFs), a powerful class of continuous-valued graphical models, for high-level computer vision tasks. HL-MRFs are characterized by log-concave density functions, and are able to perform efficient, exact inference. Their templated hinge-loss potential functions ..."
Cited by 3 (1 self)
Learning Latent Groups with Hinge-loss Markov Random Fields
"... Probabilistic models with latent variables are powerful tools that can help explain related phenomena by mediating dependencies among them. Learning in the presence of latent variables can be difficult, though, because of the difficulty of marginalizing them out or, more commonly, maximizing a lower bound on the marginal likelihood. In this work, we show how to learn hinge-loss Markov random fields (HL-MRFs) that contain latent variables. HL-MRFs are an expressive class of undirected probabilistic graphical models for which inference of most probable explanations is a convex optimization ..."
Cited by 5 (3 self)