Results 1 - 10 of 689
Learning Structured Models with the AUC Loss and Its Generalizations
"... Many problems involve the prediction of mul-tiple, possibly dependent labels. The struc-tured output prediction framework builds predictors that take these dependencies into account and use them to improve accuracy. In many such tasks, performance is evalu-ated by the Area Under the ROC Curve (AUC). ..."
Cited by 2 (0 self)
... While a framework for optimizing the AUC loss for unstructured models exists, it does not naturally extend to structured models. In this work, we propose a representation and learning formulation for optimizing structured models over the AUC loss, show how our approach generalizes the unstructured ...
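For reference, the AUC these structured models target is, ignoring ties, the fraction of positive-negative pairs that the scoring function $f$ orders correctly; a standard empirical form (our notation, not necessarily the paper's) is

$$\mathrm{AUC}(f) = \frac{1}{m^{+} m^{-}} \sum_{i=1}^{m^{+}} \sum_{j=1}^{m^{-}} \mathbb{I}\big[f(x_i^{+}) > f(x_j^{-})\big],$$

so the AUC loss is $1 - \mathrm{AUC}(f)$, a sum of 0-1 losses over all positive-negative pairs.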
Structure-embedded AUC-SVM
"... Abstract: AUC-SVM directly maximizes the area under the ROC curve (AUC) through minimizing its hinge loss relaxation, and the decision function is determined by those support vector sample pairs playing the same roles as the support vector samples in SVM. Such a learning paradigm generally emphasize ..."
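The hinge-loss relaxation the abstract refers to replaces the pairwise 0-1 loss with a convex upper bound; a common form of the AUC-SVM objective (a sketch in our notation; the paper's weighting and constraints may differ) is

$$\min_{f}\; \frac{\lambda}{2}\,\|f\|^{2} \;+\; \frac{1}{m^{+} m^{-}} \sum_{i=1}^{m^{+}} \sum_{j=1}^{m^{-}} \max\big(0,\; 1 - (f(x_i^{+}) - f(x_j^{-}))\big).$$

Pairs with nonzero hinge loss play the role of support vector pairs, analogous to the support vectors of a standard SVM.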
On the Consistency of AUC Pairwise Optimization
"... AUC (Area Under ROC Curve) has been an impor-tant criterion widely used in diverse learning tasks. To optimize AUC, many learning approaches have been developed, most working with pairwise surro-gate losses. Thus, it is important to study the AUC consistency based on minimizing pairwise surro-gate l ..."
... In this paper, we introduce the generalized calibration for AUC optimization, and prove that it is a necessary condition for AUC consistency. We then provide a sufficient condition for AUC consistency, and show its usefulness in studying the consistency of various surrogate losses, as well ...
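Concretely, pairwise approaches of the kind studied here replace the indicator in the AUC with a surrogate $\phi$ (hinge, exponential, logistic, etc.) and minimize the pairwise surrogate risk (our notation)

$$R_{\phi}(f) = \mathbb{E}\big[\phi\big(f(x^{+}) - f(x^{-})\big)\big];$$

AUC consistency then asks whether driving $R_{\phi}$ to its minimum also drives the expected AUC loss to its minimum.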
A principal-components analysis of the Narcissistic Personality Inventory and further evidence of its construct validity.
- Journal of Personality and Social Psychology, 1988
"... We examined the internal and external validity of the Narcissistic Personality Inventory (NPI). Study 1 explored the internal structure of the NPI responses of 1,018 subjects. Using principal-components analysis, we analyzed the tetrachoric correlations among the NPI item responses and found eviden ..."
Cited by 209 (1 self)
... in his metapsychological and clinical thinking, so much so that contemporary historians of the psychoanalytic movement generally agree that Freud's explorations into narcissism were central to the development of his (a) structural model (id, ego, and superego); (b) concept of the ego ideal ...
Self-determination and persistence in a real-life setting: Toward a motivational model of high school dropout.
- Journal of Personality and Social Psychology, 1997
"... The purpose of this study was to propose and test a motivational model of high school dropout. The model posits that teachers, parents, and the school administration's behaviors toward students influence students' perceptions of competence and autonomy. The less autonomy supportive the so ..."
Cited by 183 (19 self)
... intentions to drop out of high school, which are later implemented, leading to actual dropout behavior. This model was tested with high school students (N = 4,537) by means of a prospective design. Results from analyses of variance and a structural equation modeling analysis (with LISREL) were found ...
Modeling Latent Variable Uncertainty for Loss-based Learning
- ICML, 2012
"... We consider the problem of parameter estimation using weakly supervised datasets, where a training sample consists of the input and a partially specified annotation, which we refer to as the output. The missing information in the annotation is modeled using latent variables. Previous methods overbur ..."
Cited by 8 (2 self)
... a loss-based dissimilarity coefficient. Our approach generalizes latent SVM in two important ways: (i) it models the uncertainty over latent variables instead of relying on a pointwise estimate; and (ii) it allows the use of loss functions that depend on latent variables, which greatly increases its applicability. We ...
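As a schematic of the modeling difference, not the paper's exact objective: latent SVM commits to a pointwise estimate of the latent variable, while the dissimilarity-coefficient approach keeps a distribution over latents and evaluates candidates by expected loss,

$$h^{*} = \arg\max_{h} w^{\top}\Phi(x, y, h) \qquad \text{versus} \qquad \mathbb{E}_{h \sim P_{w}(h \mid x,\, y)}\big[\Delta(y, h, \hat{y})\big],$$

which is what allows the loss $\Delta$ itself to depend on the latent variables.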
Indicators for Social and Economic Coping Capacity - Moving Toward a Working Definition of Adaptive Capacity
- Wesleyan-CMU Working Paper, 2001
"... Abstract This paper offers a practically motivated method for evaluating systems' abilities to handle external stress. The method is designed to assess the potential contributions of various adaptation options to improving systems' coping capacities by focusing attention directly on the u ..."
Cited by 109 (14 self)
... import would be felt on a micro-scale. Indeed, decision-rules and public perceptions could take on forms that would be quite particular to the set of available options. Taken in its most general form, the vulnerability model reflected in equations (1) and (2) can lead to the conclusion that everything ...
Formal models of incremental learning and their analysis
"... Abstract — We consider concept learning from examples. The learner receives – step by step – larger and larger initial segments of a sequence of examples describing an unknown target concept, processes these examples, and computes hypotheses. The learner is successful, if its hypotheses stabilize on ..."
Cited by 5 (0 self)
... on a correct representation of the target concept. The underlying model is called identification in the limit. The present study concerns different versions of incremental learning in the limit. In contrast to the general case, now the learner has only limited access to the examples provided so far ...
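For readers unfamiliar with the model: in Gold-style identification in the limit, a learner $M$ receives a growing sequence of examples $e_1, e_2, \ldots$ of a target concept $L$ and identifies $L$ if its hypotheses converge to a correct one (standard formulation, our notation):

$$\exists\, n_0 \;\forall\, n \ge n_0:\quad M(e_1, \ldots, e_n) = h, \quad h \text{ a correct representation of } L.$$

The incremental variants studied here restrict $M$ to its previous hypothesis and the current example (in some versions plus a bounded memory of earlier examples), rather than the full initial segment $e_1, \ldots, e_n$.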
Is random model better? On its accuracy and efficiency
- In Proceedings of the Third IEEE International Conference on Data Mining (ICDM-2003), 2003
"... Inductive learning searches an optimal hypothesis that minimizes a given loss function. It is usually assumed that the simplest hypothesis that fits the data is the best approximate to an optimal hypothesis. Since finding the simplest hypothesis is NP-hard for most representations, we generally empl ..."
Cited by 29 (11 self)
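The search problem the abstract describes can be written as empirical risk minimization with an Occam-style preference for simplicity (our formulation):

$$\hat{h} = \arg\min_{h \in \mathcal{H}} \frac{1}{n} \sum_{i=1}^{n} L\big(y_i, h(x_i)\big), \quad \text{preferring the simplest such } h.$$

Since finding the minimal consistent hypothesis is NP-hard for common representations (e.g., the smallest consistent decision tree), practical learners rely on greedy heuristics; the paper asks whether models built at random can match their accuracy at lower cost.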
Learning with the Maximum Correntropy Criterion Induced Losses for Regression
"... Within the statistical learning framework, this paper studies the regression model associ-ated with the correntropy induced losses. The correntropy, as a similarity measure, has been frequently employed in signal processing and pattern recognition. Motivated by its empirical successes, this paper ai ..."
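The correntropy-induced loss referred to here is, in its common Gaussian-kernel form (our notation; the paper's scaling may differ), a bounded, non-convex alternative to least squares:

$$\ell_{\sigma}\big(y, f(x)\big) = \sigma^{2}\left(1 - \exp\!\left(-\frac{(y - f(x))^{2}}{2\sigma^{2}}\right)\right),$$

which behaves like half the squared loss for small residuals and saturates for large ones, making the resulting regression robust to outliers; $\sigma$ is the kernel bandwidth.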