Results 1-10 of 56
Decision Combination in Multiple Classifier Systems
 IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 16, No. 1, January 1994
, 1994
Abstract

Cited by 373 (5 self)
A multiple classifier system is a powerful solution to difficult pattern recognition problems involving large class sets and noisy input because it allows simultaneous use of arbitrary feature descriptors and classification procedures. Decisions by the classifiers can be represented as rankings of classes so that they are comparable across different types of classifiers and different instances of a problem. The rankings can be combined by methods that either reduce or rerank a given set of classes. An intersection method and a union method are proposed for class set reduction. Three methods based on the highest rank, the Borda count, and logistic regression are proposed for class set reranking. These methods have been tested in applications on degraded machine-printed characters and words from large lexicons, resulting in substantial improvement in overall correctness.
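The two rank-based reranking methods named in this abstract, the highest rank and the Borda count, can be sketched in a few lines. The class labels and rankings below are invented for illustration, not taken from the paper:

```python
# Hypothetical sketch: combining per-classifier class rankings with the
# Borda count and the highest-rank method. Each ranking lists classes
# from best (position 0) to worst.

def borda_count(rankings):
    """Sum rank positions across classifiers; lower total = better."""
    scores = {}
    for ranking in rankings:
        for position, cls in enumerate(ranking):
            scores[cls] = scores.get(cls, 0) + position
    return sorted(scores, key=scores.get)

def highest_rank(rankings):
    """Assign each class its best (lowest) rank over all classifiers."""
    best = {}
    for ranking in rankings:
        for position, cls in enumerate(ranking):
            best[cls] = min(best.get(cls, len(ranking)), position)
    return sorted(best, key=best.get)

rankings = [["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]
print(borda_count(rankings))   # "a" wins with rank sum 0 + 1 + 0 = 1
print(highest_rank(rankings))
```

The logistic-regression variant would instead learn weights for the rank scores; that training step is omitted here.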
Residual analysis for spatial point processes (with discussion)
 Journal of the Royal Statistical Society, Series B
, 2005
Abstract

Cited by 48 (8 self)
[Read before The Royal Statistical Society at a meeting organized by the Research Section on
Piecewise-polynomial regression trees
 Statistica Sinica
, 1994
Abstract

Cited by 48 (8 self)
A nonparametric function estimation method called SUPPORT (“Smoothed and Unsmoothed Piecewise-Polynomial Regression Trees”) is described. The estimate is typically made up of several pieces, each piece being obtained by fitting a polynomial regression to the observations in a subregion of the data space. Partitioning is carried out recursively as in a tree-structured method. If the estimate is required to be smooth, the polynomial pieces may be glued together by means of weighted averaging. The smoothed estimate is thus obtained in three steps. In the first step, the regressor space is recursively partitioned until the data in each piece are adequately fitted by a polynomial of a fixed order. Partitioning is guided by analysis of the distributions of residuals and cross-validation estimates of prediction mean square error. In the second step, the data within a neighborhood of each partition are fitted by a polynomial. The final estimate of the regression function is obtained by averaging the polynomial pieces, using smooth weight functions each of which diminishes rapidly to zero outside its associated partition. Estimates of derivatives of the regression function may be ...
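The gluing step described in the abstract, averaging polynomial pieces with smooth weights that decay outside each partition, might look roughly like the sketch below. The simulated data, the single boundary at x = 1, and the logistic weight function are all illustrative assumptions, not the SUPPORT implementation:

```python
import numpy as np

# Two local quadratic fits glued by smoothly weighted averaging.
rng = np.random.default_rng(0)
x = np.linspace(0, 2, 200)
y = np.where(x < 1, x**2, 2 * x - 1) + rng.normal(0, 0.05, x.size)

# Fit a quadratic on each half (the recursive partitioning step is omitted).
left = np.polyfit(x[x < 1], y[x < 1], 2)
right = np.polyfit(x[x >= 1], y[x >= 1], 2)

# Smooth logistic weight centered at the partition boundary x = 1:
# w ~ 1 on the left piece, ~ 0 on the right piece.
w = 1.0 / (1.0 + np.exp((x - 1.0) / 0.05))
smooth = w * np.polyval(left, x) + (1 - w) * np.polyval(right, x)
```

Because each weight diminishes rapidly outside its partition, the blended curve agrees with each local fit away from the boundary while remaining continuous across it.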
A Theory of Multiple Classifier Systems And Its Application to Visual Word Recognition
, 1992
Abstract

Cited by 34 (8 self)
Despite the success of many pattern recognition systems in constrained domains, problems that involve noisy input and many classes remain difficult. A promising direction is to use several classifiers simultaneously, such that they can complement each other in correctness. This thesis is concerned with decision combination in a multiple classifier system that is critical to its success. A multiple classifier system consists of a set of classifiers and a decision combination function. It is a preferred solution to a complex recognition problem because it allows simultaneous use of feature descriptors of many types, corresponding measures of similarity, and many classification procedures. It also allows dynamic selection, so that classifiers adapted to inputs of a particular type may be applied only when those inputs are encountered. Decisions by the classifiers are represented as rankings of the class set that are derivable from the results of feature matching. Rank scores contain more ...
Exploratory Data Analysis for Complex Models
, 2002
Abstract

Cited by 33 (7 self)
"Exploratory" and "confirmatory" data analysis can both be viewed as methods for comparing observed data to what would be obtained under an implicit or explicit statistical model.
Tutorial in Biostatistics: Multivariable prognostic models
 Statistics in Medicine
, 1996
Abstract

Cited by 28 (0 self)
Multivariable regression models are powerful tools that are used frequently in studies of clinical outcomes. These models can use a mixture of categorical and continuous variables and can handle partially observed (censored) responses. However, uncritical application of modelling techniques can result in models that poorly fit the dataset at hand, or, even more likely, inaccurately predict outcomes on new subjects. One must know how to measure qualities of a model's fit in order to avoid poorly fitted or overfitted models. Measurement of predictive accuracy can be difficult for survival time data in the presence of censoring. We discuss an easily interpretable index of predictive discrimination as well as methods for assessing calibration of predicted survival probabilities. Both types of predictive accuracy should be unbiasedly validated using bootstrapping or cross-validation, before using predictions in a new data series. We discuss some of the hazards of poorly fitted and overfitted regression models and present one modelling strategy that avoids many of the problems discussed. The methods described are applicable to all regression models, but are particularly needed for binary, ordinal, and time-to-event outcomes. Methods are illustrated with a survival analysis in prostate cancer using Cox regression.
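The interpretable index of predictive discrimination this tutorial discusses is commonly known as Harrell's c-index. A minimal sketch on simulated censored data follows; the data, the O(n²) pair loop, and the risk model are illustrative assumptions, not the paper's prostate-cancer analysis:

```python
import numpy as np

def c_index(time, event, risk):
    """Fraction of usable pairs in which the higher-risk subject fails first.

    A pair (i, j) is usable when subject i has an observed event strictly
    before subject j's follow-up time.
    """
    concordant = usable = 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] and time[i] < time[j]:
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / usable

rng = np.random.default_rng(1)
risk = rng.normal(size=100)
time = rng.exponential(np.exp(-risk))   # higher risk -> earlier failure
event = rng.random(100) < 0.8           # roughly 20% censored
print(round(c_index(time, event, risk), 2))
```

The unbiased validation the abstract calls for would wrap this index in a bootstrap or cross-validation loop to correct for optimism; that step is omitted here.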
Diagnostic Checks for Discrete-Data Regression Models Using Posterior Predictive Simulations
, 1997
Abstract

Cited by 13 (8 self)
Model checking with discrete-data regressions can be difficult because usual methods such as residual plots have complicated reference distributions that depend on the parameters in the model. Posterior predictive checks have been proposed as a Bayesian way to average the results of goodness-of-fit tests in the presence of uncertainty in estimation of the parameters. We try this approach using a variety of discrepancy variables for generalized linear models fit to a historical data set on behavioral learning. We then discuss the general applicability of our findings in the context of a recent applied example on which we have worked. We find that the following discrepancy variables work well, in the sense of being easy to interpret and sensitive to important model failures: (a) structured displays of the entire data set, (b) general discrepancy variables based on plots of binned or smoothed residuals versus predictors, and (c) specific discrepancy variables created based on the particul...
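Discrepancy variable (b), binned residuals versus predictors, can be sketched as follows. The simulated well-calibrated binary data are an assumption for illustration, not the behavioral-learning data set from the paper:

```python
import numpy as np

def binned_residuals(pred, y, n_bins=10):
    """Average residual (y - pred) within equal-count bins of the
    fitted probability, sorted by prediction."""
    order = np.argsort(pred)
    bins = np.array_split(order, n_bins)
    return np.array([(y[b] - pred[b]).mean() for b in bins])

rng = np.random.default_rng(2)
p = rng.uniform(0.05, 0.95, 500)          # fitted probabilities
y = (rng.random(500) < p).astype(float)   # well-calibrated outcomes
res = binned_residuals(p, y)
```

For a well-fitting model the bin means should scatter near zero; a posterior predictive check would recompute `res` on replicated data drawn from the posterior and compare.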
Validating Object-Oriented Design Metrics on a Commercial Java Application
, 2000
Abstract

Cited by 12 (4 self)
Many of the object-oriented metrics that have been developed by the research community are believed to measure some aspect of complexity. As such, they can serve as leading indicators of problematic classes, for example, those classes that are most fault-prone. If faulty classes can be detected early in the development project's life cycle, mitigating actions can be taken, such as focused inspections. Prediction models using design metrics can be used to identify faulty classes early on. In this paper, we present a cognitive theory of object-oriented metrics and an empirical study which has as objectives to formally test this theory while validating the metrics and to build a post-release fault-proneness prediction model. The cognitive mechanisms which we apply in this study to object-oriented metrics are based on contemporary models of human memory. They are: familiarity, interference, and fan effects. Our empirical study was performed with data from a commercial Java application. We found that Depth of Inheritance Tree (DIT) is a good measure of familiarity and, as predicted, has a quadratic relationship with fault-proneness. Our hypotheses were confirmed for Import Coupling to other classes, Export Coupling, and Number of Children metrics. The Ancestor-based Import Coupling metrics were not associated with fault-proneness after controlling for the confounding effect of DIT. The prediction model constructed had good accuracy. Finally, we formulated a cost-savings model and applied it to our predictive model. This demonstrated a 42% reduction in post-release costs if the prediction model is used to identify the classes that should be inspected.
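The quadratic DIT relationship the abstract reports can be illustrated with a logistic model containing a squared term; the coefficients below are invented for illustration, not the values fitted in the study:

```python
import numpy as np

def fault_prob(dit, b0=-1.0, b1=-0.8, b2=0.15):
    """P(faulty) = logistic(b0 + b1*DIT + b2*DIT^2): fault-proneness
    first falls, then rises, as Depth of Inheritance Tree grows."""
    z = b0 + b1 * dit + b2 * dit**2
    return 1.0 / (1.0 + np.exp(-z))

dit = np.arange(0, 9)
probs = fault_prob(dit)
# The quadratic reaches its minimum at DIT = -b1 / (2 * b2) = 0.8 / 0.3,
# roughly 2.7, so both very shallow and very deep hierarchies look risky.
```

In practice the coefficients would be estimated by logistic regression on class-level fault data, with DIT and DIT² as predictors.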
On Multiple Classifier Systems for Pattern Recognition
 IEEE Trans. Pattern Anal. Machine Intell
, 1992
Abstract

Cited by 9 (0 self)
Difficult pattern recognition problems involving large class sets and noisy input can be solved by a multiple classifier system, which allows simultaneous use of arbitrary feature descriptors and classification procedures. Independent decisions by each classifier can be combined by methods of the highest rank, Borda count, and logistic regression, resulting in substantial improvement in overall correctness.