Results 1–3 of 3
Classification Calibration Dimension for General Multiclass Losses
Abstract

Cited by 8 (5 self)
We study consistency properties of surrogate loss functions for general multiclass classification problems, defined by a general loss matrix. We extend the notion of classification calibration, which has been studied for binary and multiclass 0-1 classification problems (and for certain other specific learning problems), to the general multiclass setting, and derive necessary and sufficient conditions for a surrogate loss to be classification calibrated with respect to a loss matrix in this setting. We then introduce the notion of classification calibration dimension of a multiclass loss matrix, which measures the smallest ‘size’ of a prediction space for which it is possible to design a convex surrogate that is classification calibrated with respect to the loss matrix. We derive both upper and lower bounds on this quantity, and use these results to analyze various loss matrices. In particular, as one application, we provide a different route from the recent result of Duchi et al. (2010) for analyzing the difficulty of designing ‘low-dimensional’ convex surrogates that are consistent with respect to pairwise subset ranking losses. We anticipate the classification calibration dimension may prove to be a useful tool in the study and design of surrogate losses for general multiclass learning problems.
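To make the notion of classification calibration concrete in the simplest case, the sketch below (not from the paper itself) checks numerically that the logistic loss is a convex surrogate calibrated for binary 0-1 classification: for any conditional probability p = P(y = +1), the score minimizing the conditional logistic risk has the same sign as the Bayes-optimal label sign(2p − 1). The function names and the grid-search minimizer are illustrative choices, not part of the paper.

```python
import math

def logistic_risk(f, p):
    """Conditional logistic risk at score f when P(y = +1) = p."""
    return p * math.log(1 + math.exp(-f)) + (1 - p) * math.log(1 + math.exp(f))

def argmin_score(p, lo=-10.0, hi=10.0, steps=10001):
    """Minimize the conditional risk over a grid of scores (illustration only;
    the exact minimizer is the logit log(p / (1 - p)))."""
    best_f, best_r = lo, float("inf")
    for i in range(steps):
        f = lo + (hi - lo) * i / (steps - 1)
        r = logistic_risk(f, p)
        if r < best_r:
            best_f, best_r = f, r
    return best_f

# Calibration check: the risk minimizer's sign matches the Bayes label sign(2p - 1).
for p in [0.1, 0.3, 0.7, 0.9]:
    f_star = argmin_score(p)
    assert (f_star > 0) == (p > 0.5)
```

The paper's contribution is precisely to extend this kind of guarantee from the binary 0-1 case to an arbitrary loss matrix, and to ask how small the score space can be made.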
HOMOLOGY REPRESENTATIONS ARISING FROM THE HALF CUBE, II
, 2009
Abstract

Cited by 5 (3 self)
In a previous work, we defined a family of subcomplexes of the n-dimensional half cube by removing the interiors of all half-cube-shaped faces of dimension at least k, and we proved that the reduced homology of such a subcomplex is concentrated in degree k − 1. This homology group supports a natural action of the Coxeter group W(Dn) of type D. In this paper, we explicitly determine the characters (over C) of these homology representations, which turn out to be multiplicity-free. Regarded as representations of the symmetric group Sn by restriction, the homology representations turn out to be direct sums of certain representations induced from parabolic subgroups. The latter representations of Sn agree (over C) with the representations of Sn on the (k − 2)nd homology of the complement of the k-equal real hyperplane arrangement.
Convex Calibration Dimension for Multiclass Loss Matrices
, 2014
Abstract
We study consistency properties of surrogate loss functions for general multiclass learning problems, defined by a general multiclass loss matrix. We extend the notion of classification calibration, which has been studied for binary and multiclass 0-1 classification problems (and for certain other specific learning problems), to the general multiclass setting, and derive necessary and sufficient conditions for a surrogate loss to be calibrated with respect to a loss matrix in this setting. We then introduce the notion of convex calibration dimension of a multiclass loss matrix, which measures the smallest ‘size’ of a prediction space in which it is possible to design a convex surrogate that is calibrated with respect to the loss matrix. We derive both upper and lower bounds on this quantity, and use these results to analyze various loss matrices. In particular, we apply our framework to study various subset ranking losses, and use the convex calibration dimension as a tool to show both the existence and nonexistence of various types of convex calibrated surrogates for these losses. Our results strengthen recent results of Duchi et al. (2010) and Calauzènes et al. (2012) on the nonexistence of certain types of convex calibrated surrogates in subset ranking. We anticipate the convex calibration dimension may prove to be a useful tool in the study and design of surrogate losses for general multiclass learning problems.
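A "general multiclass loss matrix" of the kind this abstract refers to assigns a loss L[y][t] to predicting t when the true class is y, and the Bayes-optimal prediction under conditional class probabilities p minimizes the expected column loss, argmin_t Σ_y p[y]·L[y][t]. The sketch below illustrates this with a hypothetical 3-class 0-1 loss augmented by an abstain option of fixed cost 0.3; the matrix and function names are our own illustration, not taken from the paper.

```python
def bayes_prediction(p, L):
    """Bayes-optimal prediction for conditional probabilities p and loss matrix L.

    L[y][t] is the loss of predicting t when the true class is y;
    the optimum is argmin_t sum_y p[y] * L[y][t]."""
    n_preds = len(L[0])
    risks = [sum(p[y] * L[y][t] for y in range(len(p))) for t in range(n_preds)]
    return min(range(n_preds), key=lambda t: risks[t])

# Hypothetical example: 0-1 loss over 3 classes, plus an 'abstain' prediction
# (column 3) costing 0.3 regardless of the true class.
L = [
    [0, 1, 1, 0.3],
    [1, 0, 1, 0.3],
    [1, 1, 0, 0.3],
]

print(bayes_prediction([0.8, 0.1, 0.1], L))  # confident: predict class 0
print(bayes_prediction([0.4, 0.3, 0.3], L))  # uncertain: abstain (index 3)
```

The convex calibration dimension studied in the paper asks, for a given such matrix L, how low-dimensional a convex surrogate's prediction space can be while still recovering this Bayes-optimal behavior in the limit.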