Results 1 - 9 of 9
Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods
ADVANCES IN LARGE MARGIN CLASSIFIERS, 1999
Abstract

Cited by 699 (0 self)
The output of a classifier should be a calibrated posterior probability to enable postprocessing. Standard SVMs do not provide such probabilities. One method to create probabilities is to directly train a kernel classifier with a logit link function and a regularized maximum likelihood score. However, training with a maximum likelihood score will produce nonsparse kernel machines. Instead, we train an SVM, then train the parameters of an additional sigmoid function to map the SVM outputs into probabilities. This chapter compares classification error rate and likelihood scores for an SVM plus sigmoid versus a kernel method trained with a regularized likelihood error function. These methods are tested on three data-mining-style data sets. The SVM+sigmoid yields probabilities of comparable quality to the regularized maximum likelihood kernel method, while still retaining the sparseness of the SVM.
Learning with Matrix Factorization, 2004
Abstract

Cited by 38 (4 self)
Matrices that can be factored into a product of two simpler matrices can serve as a useful and often natural model in the analysis of tabulated or high-dimensional data. Models based on matrix factorization (Factor Analysis, PCA) have been extensively used in statistical analysis and machine learning for over a century, with many new formulations and models suggested in recent
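The basic idea — approximating a data matrix X by a product W @ H of two thinner matrices — can be sketched with a truncated SVD, the route PCA takes. The data and variable names below are illustrative, not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
# A 50 x 20 data matrix that is approximately rank 3 plus small noise.
X = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 20)) \
    + 0.01 * rng.normal(size=(50, 20))

def factorize(X, k):
    """Best rank-k factorization X ~= W @ H in the least-squares
    sense (Eckart-Young), computed via the truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    W = U[:, :k] * s[:k]   # 50 x k "scores"
    H = Vt[:k]             # k  x 20 "loadings"
    return W, H

W, H = factorize(X, k=3)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.4f}")
```

Because the planted structure is rank 3, the rank-3 factorization recovers X almost exactly; choosing k trades model simplicity against reconstruction error, which is the modelling question factor-analytic methods address.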
A Measure for Module Cohesion, 1995
Abstract

Cited by 7 (2 self)
Module cohesion is a property of a module that describes the degree to which actions performed within the module contribute to a single behavior/function. The concept of module cohesion was originally introduced by Stevens, Myers, Constantine, and Yourdon. However, the subjective nature of their definitions has made it difficult to compute the cohesion of a module precisely. In this
Experimental Evaluation of Agreement Between Programmers in Applying the Rules of Cohesion, 1998
Abstract

Cited by 1 (0 self)
In the early 1970s Stevens, Myers, Constantine, and Yourdon introduced the notion of module cohesion and presented rules that could be used to assess the cohesion of a module. Their rules consisted of a set of relations ordered to constitute levels, and the criterion that a module was assigned the lowest of the levels of the relations it satisfied. Stevens et al.'s rules of cohesion are now covered in most software engineering textbooks, even though they have never been subjected to any experimental analysis. This paper presents the results of an experiment analyzing these rules of cohesion. The experiment, using fifteen computer science graduate students as subjects, was conducted to assess whether Stevens et al.'s rules were objective, i.e., whether there is above-chance agreement in the cohesion levels assigned by different programmers. The data collected indicate that, even though the subjects were assessed to have understood the concepts well, there is significant variation in the cohesion levels assigned by them. This result raises questions about the precision of the material taught in the software engineering curriculum. Index terms: Software metrics, software measures, cohesion, experimentation in software engineering, experiment design.
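The paper's central question — is agreement among raters better than chance? — is commonly quantified with a chance-corrected statistic. As a minimal illustration (not the paper's analysis, which involved fifteen raters; multi-rater designs typically use Fleiss' kappa instead), here is Cohen's kappa for two raters assigning cohesion levels to the same modules:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters assigning
    categorical labels (here: cohesion levels) to the same items."""
    n = len(r1)
    # observed agreement: fraction of items labelled identically
    po = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    # expected agreement if the raters labelled independently
    pe = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / n**2
    return (po - pe) / (1 - pe)

# Hypothetical cohesion levels assigned by two programmers to 4 modules.
print(cohens_kappa(["low", "low", "high", "high"],
                   ["low", "high", "high", "high"]))  # -> 0.5
```

A kappa near 0 means agreement no better than chance, 1 means perfect agreement; the paper's finding of significant inter-rater variation corresponds to low values of such a statistic.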
of the Timber Management/Wildlife Interactions in Northern California Forest Types, 1990
Abstract
A basic sampling scheme is proposed to estimate the proportion of sampled units (Spotted Owl Habitat Areas (SOHAs) or randomly sampled 1000-acre polygon areas (RSAs)) occupied by spotted owl pairs. A bias adjustment for the possibility of missing a pair given its presence on a SOHA or RSA is suggested. The sampling scheme is based on a fixed number of visits to a sample unit (a SOHA or RSA) in which occupancy is to be determined. Once occupancy is determined, or the maximum number of visits is reached, the sampling is completed for that unit. The resulting data are summarized as a set of independent Bernoulli trials; a zero (no occupancy) or a one (occupancy) is recorded for each unit. The occupancy proportion is the sum of these Bernoulli trials divided by the sample size. The bias adjustment corrects this occupancy proportion for the estimated number of units on which a pair of owls was present but not detected; it requires recording the number of the visit during which occupancy was first detected. The distributional assumptions are checked with five different sets of data.
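One plausible reading of the scheme can be simulated: an occupied unit is missed on all m visits with probability (1 - p)^m, so the naive Bernoulli proportion underestimates occupancy and can be inflated accordingly. In this sketch the per-visit detection probability p is treated as known, whereas the paper estimates the correction from the visit-of-first-detection data; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n_units, max_visits = 2000, 3
true_occupancy = 0.6   # fraction of units actually holding a pair
p_detect = 0.4         # per-visit chance of detecting a resident pair

# Simulate the protocol: visit each unit until detection or max_visits.
occupied = rng.random(n_units) < true_occupancy
detected = np.zeros(n_units, dtype=bool)
for _ in range(max_visits):
    detected |= occupied & (rng.random(n_units) < p_detect)

# Naive estimate: mean of the 0/1 Bernoulli outcomes per unit.
naive = detected.mean()

# Bias adjustment: an occupied unit is missed on every visit with
# probability (1 - p_detect)**max_visits, so inflate the proportion.
adjusted = naive / (1 - (1 - p_detect) ** max_visits)

print(f"naive: {naive:.3f}  adjusted: {adjusted:.3f}  true: {true_occupancy}")
```

With these illustrative parameters the naive estimate sits well below the true occupancy (roughly 0.6 times the miss factor 0.784), while the adjusted estimate recovers it up to sampling noise.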
Contextual Value-definiteness and the Kochen-Specker Paradox, 2005
Abstract
Compatibility between the realist tenets of value-definiteness and causality is called into question by several realism impossibility proofs in which their formal elements are shown to conflict. We review how this comes about in the Kochen-Specker and von Neumann proofs and point out a connection between their key assumptions: a constraint on realist causality via additivity in the latter proof, noncontextuality in the former. We conclude that value-definiteness and contextuality are indeed not mutually exclusive. 1 Overview In contrast to Bell's theorem, which draws a contradiction between the predictions of quantum mechanics and realism, the theorem of Kochen and Specker (KS), "the second important no-go theorem against hidden-variable theories", rather calls into question the very logic of realist thinking. The argument is directed against a brand of realism characterized by value-definiteness and noncontextuality, formulated here in section 6.1 as propositions p(1) and p(2), respectively. When these are applied to an elementary QM description of spin-1 particle measurements, a contradiction
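The spin-1 structure the KS argument relies on can be checked numerically: the squared spin components commute (so they are jointly measurable) and sum to s(s+1)I = 2I with individual eigenvalues in {0, 1}, which is the value-assignment constraint the KS coloring argument exploits. A minimal verification using the standard spin-1 matrices (our choice of representation, not taken from the paper):

```python
import numpy as np

s2 = 1 / np.sqrt(2)
# Standard spin-1 angular momentum matrices (hbar = 1).
Sx = s2 * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = s2 * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

Sx2, Sy2, Sz2 = Sx @ Sx, Sy @ Sy, Sz @ Sz

# s(s+1) = 2 for spin 1: the squared components sum to 2*I ...
print(np.allclose(Sx2 + Sy2 + Sz2, 2 * np.eye(3)))               # True
# ... and they mutually commute, unlike Sx, Sy, Sz themselves ...
print(np.allclose(Sx2 @ Sy2, Sy2 @ Sx2))                         # True
# ... with eigenvalues in {0, 1}, so any noncontextual value
# assignment must put exactly one 0 and two 1s on each triple.
print(np.allclose(np.sort(np.linalg.eigvalsh(Sz2)), [0, 1, 1]))  # True
```

The KS theorem shows that no 0/1 assignment satisfying this "one 0 per orthogonal triple" rule exists simultaneously for all measurement directions, which is the contradiction the abstract refers to.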
The Ecology of Risk Taking
The Journal of Risk and Uncertainty, 28:3, 195-215, 2004. © 2004 Kluwer Academic Publishers.
Abstract
We analyze the risk level chosen by agents who have private information regarding their quality. We show that even risk-neutral agents will choose risk strategically to enhance their reputation in the market, and that such choices will be influenced by the mix of other agents' types. Assuming that the market has no strong prior about whether the agents are good or bad, good agents will choose low levels of risk, and bad agents high levels. Empirical evidence is gathered on 2462 firms over 24 years. The results support the model: agents of higher quality have less variable performance. Keywords: risk, incomplete information