Results 1 – 10 of 1,196
Probabilistic Latent Semantic Analysis
In Proc. of Uncertainty in Artificial Intelligence (UAI'99), 1999
"... Probabilistic Latent Semantic Analysis is a novel statistical technique for the analysis of two-mode and co-occurrence data, which has applications in information retrieval and filtering, natural language processing, machine learning from text, and in related areas. Compared to standard Latent Semantic Analysis, which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed method is based on a mixture decomposition derived from a latent class model. This results in a more principled approach which has a solid foundation in statistics. In order ..."
Cited by 771 (9 self)
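The latent class mixture this abstract contrasts with SVD-based LSA can be sketched in a few lines of EM. Everything below is an illustrative assumption, not the paper's experiments: the toy document-word count matrix, the number of topics, and the iteration count are all made up.

```python
# Toy pLSA-style mixture fit by EM: model P(w|d) = sum_z P(z|d) P(w|z)
# over a small document-word count matrix. Pure stdlib, illustrative only.
import random

def plsa(counts, K, iters=30, seed=0):
    rng = random.Random(seed)
    D, W = len(counts), len(counts[0])

    def norm(v):
        s = sum(v)
        return [x / s for x in v]

    # Random positive initialization of P(z|d) and P(w|z), each normalized.
    p_z_d = [norm([rng.random() for _ in range(K)]) for _ in range(D)]
    p_w_z = [norm([rng.random() for _ in range(W)]) for _ in range(K)]
    for _ in range(iters):
        # E-step: posterior over the latent class, P(z|d,w) ∝ P(z|d) P(w|z).
        post = [[norm([p_z_d[d][z] * p_w_z[z][w] for z in range(K)])
                 for w in range(W)] for d in range(D)]
        # M-step: re-estimate the two conditional tables from expected counts.
        for z in range(K):
            p_w_z[z] = norm([sum(counts[d][w] * post[d][w][z] for d in range(D))
                             for w in range(W)])
        for d in range(D):
            p_z_d[d] = norm([sum(counts[d][w] * post[d][w][z] for w in range(W))
                             for z in range(K)])
    return p_z_d, p_w_z

# Four documents over three words, with two visible "topics" in the counts.
p_z_d, p_w_z = plsa([[10, 1, 0], [8, 2, 1], [1, 9, 10], [0, 10, 8]], K=2)
```

The E-step and M-step here are the standard latent-class updates; a real pLSA implementation would also monitor the log-likelihood and add tempering.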
SIGNAL RECOVERY BY PROXIMAL FORWARD-BACKWARD SPLITTING
MULTISCALE MODEL. SIMUL., TO APPEAR
"... We show that various inverse problems in signal recovery can be formulated as the generic problem of minimizing the sum of two convex functions with certain regularity properties. This formulation makes it possible to derive existence, uniqueness, characterization, and stability results in a unifi ..."
Cited by 509 (24 self)
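The generic scheme the abstract names can be sketched concretely. As an assumption for illustration, take the smooth term f(x) = 0.5·||x − b||² (gradient x − b, Lipschitz constant 1) and the nonsmooth term g(x) = λ·||x||₁, whose proximal operator is elementwise soft-thresholding; these are not the paper's examples, just the simplest pair with the required regularity.

```python
# Proximal forward-backward splitting for min_x f(x) + g(x):
# alternate a gradient (forward) step on f with a proximal (backward) step on g.

def soft_threshold(v, t):
    """Proximal operator of t * |.|, applied elementwise."""
    return [max(abs(vi) - t, 0.0) * (1 if vi >= 0 else -1) for vi in v]

def forward_backward(b, lam, step=1.0, iters=200):
    """x_{k+1} = prox_{step*g}(x_k - step * grad_f(x_k)) with f = 0.5||x-b||^2."""
    x = [0.0] * len(b)
    for _ in range(iters):
        grad = [xi - bi for xi, bi in zip(x, b)]                       # forward step
        x = soft_threshold([xi - step * gi for xi, gi in zip(x, grad)],
                           step * lam)                                 # backward step
    return x

x = forward_backward([3.0, -0.5, 1.2], lam=1.0)
# For this f the fixed point is soft_threshold(b, lam): approximately [2.0, 0.0, 0.2].
```

With a general smooth f one would take `step` below 2/L for the gradient's Lipschitz constant L, which is the regularity condition the unified analysis relies on.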
A Comparison of Methods for Multiclass Support Vector Machines
IEEE TRANS. NEURAL NETWORKS, 2002
"... Support vector machines (SVMs) were originally designed for binary classification. How to effectively extend it for multiclass classification is still an ongoing research issue. Several methods have been proposed where typically we construct a multiclass classifier by combining several binary class ..."
"... larger optimization problem is required so up to now experiments are limited to small data sets. In this paper we give decomposition implementations for two such "all-together" methods. We then compare their performance with three methods based on binary classifications: "one-against-all," "one ..."
Cited by 952 (22 self)
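The "one-against-all" combination the abstract mentions can be sketched as follows. The binary scorer below (negative distance to the class centroid) is an illustrative stand-in assumed for brevity; in the paper each binary problem would be solved by an SVM, and any real binary classifier slots into the same structure.

```python
# One-against-all: one binary "this class vs. the rest" scorer per class;
# predict the class whose scorer responds most strongly to the query point.

def make_binary_scorer(X, labels, cls):
    """Toy scorer for 'cls' vs. rest: negative distance to the class centroid."""
    pos = [x for x, l in zip(X, labels) if l == cls]
    centroid = [sum(col) / len(pos) for col in zip(*pos)]
    def score(x):
        return -sum((xi - ci) ** 2 for xi, ci in zip(x, centroid)) ** 0.5
    return score

def one_against_all(X, labels):
    return {c: make_binary_scorer(X, labels, c) for c in set(labels)}

def predict(scorers, x):
    # The class with the largest (least negative) score wins.
    return max(scorers, key=lambda c: scorers[c](x))

X = [[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0], [10.0, 0.0], [11.0, 0.0]]
labels = ["a", "a", "b", "b", "c", "c"]
scorers = one_against_all(X, labels)
```

The "all-together" alternatives the paper studies instead solve one joint optimization over all classes, which is why their experiments were limited to small data sets before decomposition implementations.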
Elastic-net prefiltering for two-class classification
IEEE Trans. Cybern. 43, February 2013
"... A two-stage linear-in-the-parameter model construction algorithm is proposed aimed at noisy two-class classification problems. The purpose of the first stage is to produce a prefiltered signal that is used as the desired output for the second stage which constructs a sparse linear-in-the-parameter ..."
Cited by 1 (0 self)
SVM-KNN: Discriminative nearest neighbor classification for visual category recognition
in CVPR, 2006
"... We consider visual category recognition in the framework of measuring similarities, or equivalently perceptual distances, to prototype examples of categories. This approach is quite flexible, and permits recognition based on color, texture, and particularly shape, in a homogeneous framework. While nearest neighbor classifiers are natural in this setting, they suffer from the problem of high variance (in the bias-variance decomposition) in the case of limited sampling. Alternatively, one could use support vector machines but they involve time-consuming optimization and computation of pairwise distances ..."
Cited by 342 (10 self)
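The plain nearest-neighbor baseline the abstract starts from can be sketched directly; the SVM-on-the-neighborhood refinement that gives the paper its name is omitted here, and the prototype data are illustrative assumptions.

```python
# k-nearest-neighbor classification by distances to stored prototype examples.
from collections import Counter

def euclidean(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def knn_predict(prototypes, labels, x, k=3):
    # Rank prototypes by feature-space distance to the query, vote among the top k.
    ranked = sorted(zip(prototypes, labels), key=lambda pl: euclidean(pl[0], x))
    votes = Counter(l for _, l in ranked[:k])
    return votes.most_common(1)[0][0]

prototypes = [[0, 0], [1, 0], [0, 1], [5, 5], [6, 5], [5, 6]]
labels = ["cat", "cat", "cat", "dog", "dog", "dog"]
```

The high-variance problem the abstract points to shows up exactly here: with few prototypes per class, a single noisy neighbor can flip the vote, which motivates training a discriminative classifier on the retrieved neighborhood instead.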
Boosting with the L_2-Loss: Regression and Classification
2001
"... This paper investigates a variant of boosting, L2Boost, which is constructed from a functional gradient descent algorithm with the L2 loss function. Based on an explicit stagewise refitting expression of L2Boost, the case of (symmetric) linear weak learners is studied in detail in both regression and two-class classification. In particular, with the boosting iteration m working as the smoothing or regularization parameter, a new exponential bias-variance trade-off is found with the variance (complexity) term bounded as m tends to infinity. When the weak learner is a smoothing spline ..."
Cited by 208 (17 self)
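The stagewise refitting that defines L2Boost can be sketched with a toy weak learner. The 1-D regression stump and the shrinkage value below are assumptions for illustration; the paper's analysis concerns linear weak learners such as smoothing splines.

```python
# L2Boost: functional gradient descent on squared error. With the L2 loss the
# negative gradient is simply the residual, so each stage refits a weak
# learner to the current residuals and adds a shrunken copy to the ensemble.

def fit_stump(x, r):
    """Best single split on 1-D inputs minimizing squared error on residuals r."""
    best = None
    for s in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= s]
        right = [ri for xi, ri in zip(x, r) if xi > s]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((ri - lm) ** 2 for ri in left)
               + sum((ri - rm) ** 2 for ri in right))
        if best is None or err < best[0]:
            best = (err, s, lm, rm)
    _, s, lm, rm = best
    return lambda xi: lm if xi <= s else rm

def l2boost(x, y, m=50, nu=0.1):
    """m boosting iterations with shrinkage nu; returns the boosted predictor."""
    f0 = sum(y) / len(y)
    stumps = []
    pred = [f0] * len(y)
    for _ in range(m):
        r = [yi - pi for yi, pi in zip(y, pred)]   # L2 negative gradient = residual
        h = fit_stump(x, r)
        stumps.append(h)
        pred = [pi + nu * h(xi) for pi, xi in zip(pred, x)]
    return lambda xi: f0 + nu * sum(h(xi) for h in stumps)

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
F = l2boost(x, y)
```

The iteration count m plays the regularization role the abstract highlights: small m underfits, and the fit tightens as m grows.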
Task Decomposition and Module Combination Based on Class Relations: A Modular Neural Network for Pattern Classification
IEEE Transactions on Neural Networks, 1999
"... In this paper, we propose a new method for decomposing pattern classification problems based on the class relations among training data. By using this method, we can divide a K-class classification problem into a series of K(K−1)/2 two-class problems. These two-class problems are to discrimin ..."
Cited by 84 (37 self)
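The decomposition step the abstract describes, one two-class training set per unordered pair of classes, can be sketched directly; the module-combination half of the method is not shown, and the data below are illustrative.

```python
# Pairwise task decomposition: a K-class problem becomes K*(K-1)/2
# two-class problems, one for each unordered pair of classes.
from itertools import combinations

def pairwise_problems(X, labels):
    classes = sorted(set(labels))
    problems = {}
    for a, b in combinations(classes, 2):
        # Each two-class training set keeps only the examples of classes a and b.
        problems[(a, b)] = [(x, l) for x, l in zip(X, labels) if l in (a, b)]
    return problems

X = [[0], [1], [2], [3], [4], [5]]
labels = ["a", "a", "b", "b", "c", "c"]
probs = pairwise_problems(X, labels)   # K = 3 classes -> 3 two-class problems
```

Each of these smaller problems can then be learned by an independent module and the modules combined, which is the second stage of the paper's method.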
Entanglement of Formation of an Arbitrary State of Two Qubits
1998
"... The entanglement of a pure state of a pair of quantum systems is defined as the entropy of either member of the pair. The entanglement of formation of a mixed state ρ is defined as the minimum average entanglement of a set of pure states constituting a decomposition of ρ. An earlier paper [Phys. Rev ..."
Cited by 200 (0 self)
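The pure-state definition in the first sentence can be computed directly for two qubits: trace out one qubit and take the von Neumann entropy of what remains. This sketch covers only that pure-state case, not the mixed-state entanglement of formation that is the paper's main result; the example states are the standard Bell and product states.

```python
# Entanglement of a two-qubit pure state = entropy of either qubit's reduced
# density matrix. The state is given as amplitudes c[i][j] for |ij>.
import math

def entanglement_of_pure_state(c):
    # Reduced density matrix of the first qubit: rho[i][k] = sum_j c[i][j] * conj(c[k][j]).
    rho = [[sum(c[i][j] * c[k][j].conjugate() for j in range(2)) for k in range(2)]
           for i in range(2)]
    # Eigenvalues of the 2x2 Hermitian matrix [[a, b], [b*, d]].
    a, d = rho[0][0].real, rho[1][1].real
    disc = math.sqrt((a - d) ** 2 + 4 * abs(rho[0][1]) ** 2)
    eigs = [(a + d + disc) / 2, (a + d - disc) / 2]
    # Von Neumann entropy in bits, with the convention 0*log(0) = 0.
    return -sum(l * math.log2(l) for l in eigs if l > 1e-12)

s = 1 / math.sqrt(2)
bell = [[s, 0], [0, s]]        # (|00> + |11>)/sqrt(2): maximally entangled, entropy 1
product = [[1, 0], [0, 0]]     # |00>: unentangled, entropy 0
```

For a mixed state one would instead minimize the average of this quantity over all pure-state decompositions, which is the quantity the paper evaluates in closed form.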
Structure and Majority Classes in Decision Tree Learning
"... To provide good classification accuracy on unseen examples, a decision tree, learned by an algorithm such as ID3, must have sufficient structure and also identify the correct majority class in each of its leaves. If there are inadequacies in respect of either of these, the tree will have a percentag ..."
"... of majority class error permits separation of the sampling error at the leaves from the possible bias introduced by the attribute selection method of the induction algorithm. It is shown that sampling error can extend to 25% when there are more than two classes. Decompositions are obtained from experiments ..."