Results 1–10 of 54
A. Blake. Cosegmentation of image pairs by histogram matching – incorporating a global constraint into MRFs
 In CVPR
, 2006
Abstract

Cited by 100 (3 self)
We introduce the term cosegmentation, which denotes the task of simultaneously segmenting the common parts of an image pair. A generative model for cosegmentation is presented. Inference in the model leads to minimizing an energy with an MRF term encoding spatial coherency and a global constraint which attempts to match the appearance histograms of the common parts. This energy has not been proposed previously, and its optimization is challenging and NP-hard. For this problem a novel optimization scheme, which we call trust region graph cuts, is presented. We demonstrate that this framework has the potential to improve a wide range of research: object-driven image retrieval, video tracking and segmentation, and interactive image editing. The power of the framework lies in its generality: the common part can be a rigid/non-rigid object (or scene) observed from different viewpoints, or even similar objects of the same class.
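The two energy terms described in the abstract can be sketched numerically. The code below is illustrative only (my own simplification, not the authors' exact formulation): the global term is taken as an L1 distance between grey-level histograms of the two selected regions, and the MRF term as an Ising-style count of label disagreements between neighbouring pixels.

```python
import numpy as np

def histogram_match_penalty(img1, seg1, img2, seg2, bins=16):
    """Global term: L1 distance between grey-level histograms of the
    two selected regions; zero when the common parts look alike."""
    h1, _ = np.histogram(img1[seg1.astype(bool)], bins=bins, range=(0.0, 1.0))
    h2, _ = np.histogram(img2[seg2.astype(bool)], bins=bins, range=(0.0, 1.0))
    return np.abs(h1 - h2).sum()

def smoothness_penalty(seg):
    """MRF term: Ising-style count of label disagreements between
    4-connected neighbours, encouraging spatially coherent masks."""
    return (np.abs(np.diff(seg, axis=0)).sum()
            + np.abs(np.diff(seg, axis=1)).sum())

# Invented 4x4 example: a bright patch appears in both images at different places.
img1 = np.zeros((4, 4)); img1[:2, :2] = 0.9
img2 = np.zeros((4, 4)); img2[2:, 2:] = 0.9
seg1 = np.zeros((4, 4)); seg1[:2, :2] = 1
seg2 = np.zeros((4, 4)); seg2[2:, 2:] = 1
```

With these masks the histogram term is zero because both selected regions contain the same bright patch; selecting a dark region in the second image instead would make it positive.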
Large scale transductive SVMs
 JMLR
Abstract

Cited by 69 (5 self)
We show how the Concave-Convex Procedure can be applied to Transductive SVMs, which traditionally require solving a combinatorial search problem. This provides, for the first time, a highly scalable algorithm in the nonlinear case. Detailed experiments verify the utility of our approach. Software is available at
Trading convexity for scalability
 ICML06, 23rd International Conference on Machine Learning
, 2006
Abstract

Cited by 59 (3 self)
Convex learning algorithms, such as Support Vector Machines (SVMs), are often seen as highly desirable because they offer strong practical properties and are amenable to theoretical analysis. However, in this work we show how non-convexity can provide scalability advantages over convexity. We show how concave-convex programming can be applied to produce (i) faster SVMs, where training errors are no longer support vectors, and (ii) much faster Transductive SVMs.
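The concave-convex procedure (CCCP) underlying the last two entries can be illustrated in a few lines. The sketch below applies it to a toy 1-D objective of my own choosing (not one of the papers' SVM losses): at each step the concave part is replaced by its tangent at the current iterate, and the resulting convex surrogate is minimised in closed form.

```python
import math

def cccp(vex_argmin_given_slope, cave_grad, x0, iters):
    """Concave-convex procedure for f(x) = f_vex(x) + f_cave(x):
    replace the concave part by its tangent at the current iterate,
    then minimise the convex surrogate f_vex(x) + slope*x exactly.
    Each step provably does not increase f."""
    x = x0
    for _ in range(iters):
        slope = cave_grad(x)               # gradient of the concave part at x_t
        x = vex_argmin_given_slope(slope)  # argmin_x of f_vex(x) + slope * x
    return x

# Toy objective (invented for illustration, not an SVM loss):
#   f(x) = x^2 - log(1 + x^2); convex part x^2, concave part -log(1 + x^2).
f = lambda x: x**2 - math.log(1 + x**2)
cave_grad = lambda x: -2 * x / (1 + x**2)
vex_argmin = lambda slope: -slope / 2.0    # minimiser of x^2 + slope*x

x_star = cccp(vex_argmin, cave_grad, x0=2.0, iters=5000)
```

Here the surrogate minimiser is available in closed form; in the SVM setting each CCCP step instead solves a standard convex SVM problem.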
Latent Hierarchical Structural Learning for Object Detection
, 2010
Abstract

Cited by 33 (3 self)
We present a latent hierarchical structural learning method for object detection. An object is represented by a mixture of hierarchical tree models whose nodes represent object parts. The nodes can move spatially to allow both local and global shape deformations. The models can be trained discriminatively using latent structural SVM learning, where the latent variables are the node positions and the mixture component. But current learning methods are slow, due to the large number of parameters and latent variables, and have been restricted to hierarchies with two layers. In this paper we describe an incremental concave-convex procedure (iCCCP) which allows us to learn both two- and three-layer models efficiently. We show that iCCCP leads to a simple training algorithm which avoids complex multi-stage layer-wise training and careful part selection, and achieves good performance without requiring elaborate initialization. We perform object detection using our learnt models and obtain performance comparable with state-of-the-art methods when evaluated on the challenging public PASCAL datasets. We demonstrate the advantages of three-layer hierarchies, outperforming Felzenszwalb et al.'s two-layer models on all 20 classes.
Multiple Instance Learning for Sparse Positive Bags
 In ICML
, 2007
Abstract

Cited by 31 (0 self)
We present a new approach to multiple instance learning (MIL) that is particularly effective when the positive bags are sparse (i.e. contain few positive instances). Unlike other SVM-based MIL methods, our approach more directly enforces the desired constraint that at least one of the instances in a positive bag is positive. Using both artificial and real-world data, we experimentally demonstrate that our approach achieves greater accuracy than state-of-the-art MIL methods when positive bags are sparse, and performs competitively when they are not. In particular, our approach is the best performing method for image region classification.
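The "at least one positive instance" constraint can be illustrated with the standard MIL reduction in which a bag's prediction is determined by its highest-scoring instance. This is a sketch of the constraint only, not the authors' training method; the scores and bags are invented for illustration.

```python
def bag_predictions(instance_scores_per_bag):
    """Standard MIL reduction: a bag is labelled positive iff its
    highest-scoring instance is positive.  This encodes the
    'at least one instance in a positive bag is positive' constraint
    (a sketch of the constraint, not the paper's optimisation)."""
    return [1 if max(scores) > 0 else -1 for scores in instance_scores_per_bag]

# Invented example: a sparse positive bag (one positive instance) and a negative bag.
bags = [
    [-1.2, -0.3, 0.4],    # positive bag: a single positive instance is enough
    [-0.8, -0.5, -0.1],   # negative bag: every instance must be negative
]
labels = bag_predictions(bags)   # -> [1, -1]
```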
Discriminative models for speech recognition
 In Information Theory and Applications Workshop
, 2007
Abstract

Cited by 16 (6 self)
The vast majority of automatic speech recognition systems use Hidden Markov Models (HMMs) as the underlying acoustic model. Initially these models were trained based on the maximum likelihood criterion. Significant performance gains have been obtained by using discriminative training criteria, such as maximum mutual information and minimum phone error. However, the underlying acoustic model is still generative, with the associated constraints on the state and transition probability distributions, and classification is based on Bayes' decision rule. Recently, there has been interest in examining discriminative, or direct, models for speech recognition. This paper briefly reviews the forms of discriminative models that have been investigated. These include maximum entropy Markov models, hidden conditional random fields and conditional augmented models. The relationships between the various models, and issues with applying them to large vocabulary continuous speech recognition, will be discussed.
Continuous ratio optimization via convex relaxation with applications to multi-view 3D reconstruction
 In: CVPR
, 2009
Abstract

Cited by 8 (0 self)
We introduce a convex relaxation framework to optimally minimize continuous surface ratios. The key idea is to minimize the continuous surface ratio by solving a sequence of convex optimization problems. We show that such minimal ratios are superior to traditionally used minimal surface formulations in that they do not suffer from a shrinking bias and no longer require the choice of a regularity parameter. The absence of a shrinking bias in the minimal ratio model is proven analytically. Furthermore, we demonstrate that continuous ratio optimization can be applied to derive a new algorithm for reconstructing three-dimensional silhouette-consistent objects from multiple views. Experimental results confirm that our approach can accurately reconstruct deep concavities even without the specification of tuning parameters.
UNCONDITIONALLY STABLE SCHEMES FOR HIGHER ORDER INPAINTING
Abstract

Cited by 8 (7 self)
Inpainting methods based on third- and fourth-order equations have certain advantages over second-order equations, such as the smooth interpolation of image information even over large distances. Because of this, such methods have become very popular in the last couple of years. Solving higher-order equations numerically can be a computationally demanding task, though. Discretizing a fourth-order evolution equation with a brute-force method may restrict the time step to a size of order (∆x)^4, where ∆x denotes the step size of the spatial grid. In this work we present a more educated way of discretization, namely efficient semi-implicit schemes that are guaranteed to be unconditionally stable. We explain the main idea of these schemes and present applications in image processing for inpainting with the Cahn-Hilliard equation, TV-H^{-1} inpainting, and inpainting with LCIS (low curvature image simplifiers).
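The stability gap the abstract describes can be demonstrated on a model problem. The sketch below is my own toy example, not the paper's Cahn-Hilliard or TV-H^{-1} models: it evolves the 1-D fourth-order equation u_t = -u_xxxx spectrally, where the explicit update blows up for a time step far above the (∆x)^4 limit while the implicit treatment of the fourth-order term remains bounded for any step size.

```python
import numpy as np

n, L = 128, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi      # spectral wavenumbers
u0 = np.cos(x) + 0.01 * np.sin(40 * x)          # smooth signal + high-frequency detail

def explicit_steps(u, dt, steps):
    """Brute-force explicit Euler for u_t = -u_xxxx in Fourier space:
    u_hat <- (1 - dt*k^4) * u_hat.  Unstable unless dt = O((dx)^4)."""
    u_hat = np.fft.fft(u)
    for _ in range(steps):
        u_hat = (1.0 - dt * k**4) * u_hat
    return np.real(np.fft.ifft(u_hat))

def implicit_steps(u, dt, steps):
    """Implicit treatment of the fourth-order term:
    u_hat <- u_hat / (1 + dt*k^4).  Every mode is damped, so the
    scheme is stable for ANY dt (the 'unconditional' property)."""
    u_hat = np.fft.fft(u)
    for _ in range(steps):
        u_hat = u_hat / (1.0 + dt * k**4)
    return np.real(np.fft.ifft(u_hat))

dt_big = 0.1          # vastly larger than the explicit limit ((dx)^4 ~ 6e-6 here)
u_expl = explicit_steps(u0, dt_big, 10)   # high-frequency amplitude explodes
u_impl = implicit_steps(u0, dt_big, 10)   # stays bounded
```

The paper's semi-implicit schemes add a carefully chosen implicit convex term rather than treating everything implicitly, but the stability mechanism is the same: the stiff high-order operator never appears in an explicit amplification factor.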
Structured Support Vector Machines for Noise Robust Continuous Speech Recognition
Abstract

Cited by 7 (7 self)
The use of discriminative models is an interesting alternative to generative models for speech recognition. This paper examines one form of these models, structured support vector machines (SVMs), for noise robust speech recognition. One important aspect of structured SVMs is the form of the joint feature space. In this work, features based on generative models are used, which allows model-based compensation schemes to be applied to yield robust joint features. However, these features require the segmentation of frames into words, or sub-words, to be specified. In previous work this segmentation was obtained using generative models. Here, the segmentations are refined using the parameters of the structured SVM. A Viterbi-like scheme for obtaining "optimal" segmentations, and modifications to the training algorithm that allow them to be used efficiently, are described. The performance of the approach is evaluated on a noise-corrupted continuous digit task: AURORA 2. Index Terms: speech recognition, structural SVMs, optimal alignment, large margin, log-linear model
DIFFUSE INTERFACE MODELS ON GRAPHS FOR CLASSIFICATION OF HIGH DIMENSIONAL DATA
Abstract

Cited by 7 (5 self)
There are currently several communities working on algorithms for classification of high-dimensional data. This work develops a class of variational algorithms that combine recent ideas from spectral methods on graphs with nonlinear edge/region detection methods traditionally used in the PDE-based imaging community. The algorithms are based on the Ginzburg-Landau functional, which has classical PDE connections to total variation minimization. Convex-splitting algorithms allow us to quickly find minimizers of the proposed model and take advantage of fast spectral solvers for linear graph-theoretic problems. We present diverse computational examples involving both basic clustering and semi-supervised learning for different applications. Case studies include feature identification in images, segmentation in social networks, and segmentation of shapes in high-dimensional datasets. Key words: Nyström extension, diffuse interfaces, image processing, high-dimensional data. This work brings together ideas from different communities, and for this reason we review various components of the algorithms in order to make the paper accessible to readers familiar with either the PDE-based or graph-theoretic approaches.
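A minimal instance of the convex-splitting idea on a graph can be sketched as follows. This is an assumed toy setup (six nodes, hand-picked weights, unnormalised Laplacian), not the authors' algorithm or data: the stiff convex part of the Ginzburg-Landau gradient flow is treated implicitly, the remainder explicitly, and the node values separate toward ±1 on either side of the weak edge.

```python
import numpy as np

# Invented toy graph: two triangles joined by one weak edge.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1                 # weak link between the two clusters
L_graph = np.diag(W.sum(axis=1)) - W    # unnormalised graph Laplacian

eps, dt, c = 1.0, 0.1, 3.0              # c chosen large enough to keep the splitting convex
u = np.array([0.1, 0.2, 0.1, -0.1, -0.2, -0.1])   # weak initial label guesses

# Convex splitting for the gradient flow of the graph Ginzburg-Landau energy
#   E(u) = (eps/2) u.T L u + (1/(4*eps)) sum((u**2 - 1)**2):
# the Laplacian and the added convex c*u term are implicit, the rest explicit.
A = np.eye(6) + dt * (eps * L_graph + c * np.eye(6))
for _ in range(200):
    rhs = u + dt * (c * u - (u**3 - u) / eps)
    u = np.linalg.solve(A, rhs)
# u has now separated toward +1 on the first cluster and -1 on the second.
```

The linear system matrix A is fixed, so in a real implementation it would be factorised (or diagonalised spectrally, as the paper does via the Nyström extension) once and reused every step.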