Results 11–20 of 59
Variational algorithms for marginal MAP
In UAI, 2011
Cited by 9 (1 self)
Marginal MAP problems are notoriously difficult tasks for graphical models. We derive a general variational framework for solving marginal MAP problems, in which we apply analogues of the Bethe, tree-reweighted, and mean field approximations. We then derive a “mixed” message passing algorithm and a convergent alternative using CCCP to solve the BP-type approximations. Theoretically, we give conditions under which the decoded solution is a global or local optimum, and obtain novel upper bounds on solutions. Experimentally we demonstrate that our algorithms outperform related approaches. We also show that EM and variational EM comprise a special case of our framework.
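For readers unfamiliar with the problem class, marginal MAP on a small discrete model can be verified by brute-force enumeration: sum out the marginalized variables, then maximize over the remaining ones. The sketch below uses an invented toy joint (variable count, sizes, and probabilities are illustrative only, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint distribution over four binary variables: the first two are
# "max" (decision) variables, the last two are "sum" (marginalized) variables.
p = rng.random((2, 2, 2, 2))
p /= p.sum()

# Marginal MAP: sum out the sum variables, then maximize over the max variables.
marg = p.sum(axis=(2, 3))                      # marginal over the max variables
x_star = np.unravel_index(np.argmax(marg), marg.shape)
```

Exact enumeration like this scales exponentially in the number of variables, which is why the paper develops variational message-passing approximations instead.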
Spectral-spatial hyperspectral image segmentation using subspace multinomial logistic regression and Markov random fields
IEEE Trans. Geosci. Remote Sens., 2012
Cited by 8 (6 self)
This paper introduces a new supervised segmentation algorithm for remotely sensed hyperspectral image data which integrates the spectral and spatial information in a Bayesian framework. A multinomial logistic regression (MLR) algorithm is first used to learn the posterior probability distributions from the spectral information, using a subspace projection method to better characterize noise and highly mixed pixels. Then, contextual information is included using a multilevel logistic Markov–Gibbs Markov random field prior. Finally, a maximum a posteriori segmentation is efficiently computed by the α-Expansion min-cut-based integer optimization algorithm. The proposed segmentation approach is experimentally evaluated using both simulated and real hyperspectral data sets, exhibiting state-of-the-art performance when compared with recently introduced hyperspectral image classification methods. The integration of subspace projection methods with the MLR algorithm, combined with the use of spatial-contextual information, represents an innovative contribution in the literature. This approach is shown to provide accurate characterization of hyperspectral imagery in both the spectral and the spatial domain. Index Terms—Hyperspectral image segmentation, Markov random field (MRF), multinomial logistic regression (MLR), subspace projection method.
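The paper computes the MAP segmentation with α-Expansion graph cuts. As a much simpler illustration of the same ingredients, combining per-pixel class probabilities with a Potts-style spatial prior, the sketch below runs one sweep of iterated conditional modes (ICM) on synthetic probabilities; the image size, class count, and smoothness weight are all assumed values, and ICM is a stand-in, not the paper's optimizer:

```python
import numpy as np

rng = np.random.default_rng(1)

H, W, K = 8, 8, 2
# Synthetic stand-ins for the MLR posterior probabilities of K classes.
probs = rng.dirichlet(np.ones(K), size=(H, W))   # shape (H, W, K)
beta = 1.5                                        # assumed Potts smoothness weight

def icm_sweep(labels, probs, beta):
    """One ICM sweep: each pixel greedily takes the label minimizing its
    -log posterior plus a Potts disagreement cost with its 4 neighbors."""
    H, W, K = probs.shape
    out = labels.copy()
    for i in range(H):
        for j in range(W):
            costs = -np.log(probs[i, j] + 1e-12)
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    costs = costs + beta * (np.arange(K) != out[ni, nj])
            out[i, j] = int(np.argmin(costs))
    return out

labels = icm_sweep(probs.argmax(axis=-1), probs, beta)
```

Each sequential ICM sweep cannot increase the posterior energy; the α-Expansion moves used in the paper explore far larger label changes per step and come with stronger optimality guarantees.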
Supplement to “A mixture of experts model for rank data with applications in election studies”
2008
Cited by 7 (1 self)
A voting bloc is defined to be a group of voters who have similar voting preferences. The cleavage of the Irish electorate into voting blocs is of interest. Irish elections employ a “single transferable vote” electoral system; under this system voters rank some or all of the electoral candidates in order of preference. These rank votes provide a rich source of preference information from which inferences about the composition of the electorate may be drawn. Additionally, the influence of social factors or covariates on the electorate composition is of interest. A mixture of experts model is a mixture model in which the model parameters are functions of covariates. A mixture of experts model for rank data is developed to provide a model-based method to cluster Irish voters into voting blocs, to examine the influence of social factors on this clustering, and to examine the characteristic preferences of the voting blocs. The Benter model for rank data is employed as the …
Penalized classification using Fisher’s linear discriminant
Journal of the Royal Statistical Society, Series B, 2011
Cited by 7 (1 self)
Summary. We consider the supervised classification setting, in which the data consist of p features measured on n observations, each of which belongs to one of K classes. Linear discriminant analysis (LDA) is a classical method for this problem. However, in the high-dimensional setting where p ≫ n, LDA is not appropriate for two reasons. First, the standard estimate for the within-class covariance matrix is singular, and so the usual discriminant rule cannot be applied. Second, when p is large, it is difficult to interpret the classification rule obtained from LDA, since it involves all p features. We propose penalized LDA, a general approach for penalizing the discriminant vectors in Fisher’s discriminant problem in a way that leads to greater interpretability. The discriminant problem is not convex, so we use a minorization-maximization approach to optimize it efficiently when convex penalties are applied to the discriminant vectors. In particular, we consider the use of L1 and fused lasso penalties. Our proposal is equivalent to recasting Fisher’s discriminant problem as a biconvex problem. We evaluate the performance of the resulting methods on a simulation study and on three gene expression data sets. We also survey past methods for extending LDA to the high-dimensional setting and explore their relationships with our proposal.
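The paper's minorization-maximization algorithm for the full penalized Fisher problem is more involved; the sketch below only illustrates two of the simpler ingredients it builds on: a diagonal within-class variance estimate (which stays invertible when p > n) and an L1 soft-thresholding step that sparsifies a discriminant vector. The data and penalty level are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up two-class data: p = 50 features, only the first 3 carry signal.
n, p = 20, 50
X0 = rng.normal(size=(n, p))
X1 = rng.normal(size=(n, p))
X1[:, :3] += 2.0

# Diagonal within-class variance: invertible even in the p > n regime.
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
s2 = 0.5 * (X0.var(axis=0, ddof=1) + X1.var(axis=0, ddof=1)) + 1e-6

# Unpenalized (diagonal) discriminant direction, then L1 soft-thresholding.
w = (mu1 - mu0) / s2
lam = 0.5                                        # assumed penalty level
w_sparse = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)
```

Soft-thresholding is the proximal operator of the L1 penalty, which is why it appears inside iterative schemes like the paper's MM updates; here it is applied once, to a closed-form direction, purely to show the sparsifying effect.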
Hessian-Based Norm Regularization for Image Restoration With Biomedical Applications
Cited by 4 (2 self)
We present Hessian-based norm regularization methods that can be effectively used for image restoration problems in a variational framework. Motivated by the great success of the total-variation (TV) functional, we extend it to also include second-order differential operators. Specifically, we derive second-order regularizers that involve matrix norms of the Hessian operator. The definition of these functionals is based on an alternative interpretation of TV that relies on mixed norms of directional derivatives. We show that the resulting regularizers retain some of the most favorable properties of TV, i.e., convexity, homogeneity, rotation, and translation invariance, while dealing effectively with the staircase effect. We further develop an efficient minimization scheme for the corresponding objective functions. The proposed algorithm is of the iteratively reweighted least-squares type and results from a majorization-minimization approach. It relies on a problem-specific preconditioned conjugate gradient method, which makes the overall minimization scheme very attractive, since it can be applied effectively to large images in a reasonable computational time. We validate the overall proposed regularization framework through deblurring experiments under additive Gaussian noise on standard and biomedical images. Index Terms—Biomedical imaging, Frobenius norm, Hessian matrix, image deblurring, linear inverse problems, majorization-minimization (MM) algorithms, spectral norm.
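One regularizer of this family, the pixelwise Frobenius norm of the Hessian, is easy to write down with finite differences. The sketch below is a plain discretization for illustration, not the authors' implementation; the key property it exhibits is that the penalty vanishes on affine (ramp) images, which is how second-order regularization avoids the staircase effect of TV:

```python
import numpy as np

def hessian_frobenius(u):
    """Sum over interior pixels of the Frobenius norm of the
    finite-difference Hessian of the image u."""
    uxx = u[2:, 1:-1] - 2.0 * u[1:-1, 1:-1] + u[:-2, 1:-1]
    uyy = u[1:-1, 2:] - 2.0 * u[1:-1, 1:-1] + u[1:-1, :-2]
    uxy = 0.25 * (u[2:, 2:] - u[2:, :-2] - u[:-2, 2:] + u[:-2, :-2])
    return np.sqrt(uxx**2 + uyy**2 + 2.0 * uxy**2).sum()
```

A linear intensity ramp has zero second derivatives everywhere, so this penalty costs it nothing, whereas TV would penalize any reconstruction that is not piecewise constant.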
Supervised Feature Selection in Graphs with Path Coding Penalties and Network Flows
Cited by 3 (2 self)
We consider supervised learning problems where the features are embedded in a graph, such as gene expressions in a gene network. In this context, it is of much interest to take the problem structure into account and automatically select a subgraph with a small number of connected components. By exploiting prior knowledge, one can indeed improve the prediction performance and/or obtain more interpretable results. Regularization or penalty functions for selecting features in graphs have recently been proposed, but they raise new algorithmic challenges. For example, they typically require solving a combinatorially hard selection problem among all connected subgraphs. In this paper, we propose computationally feasible strategies to select a sparse and “well-connected” subset of features sitting on a directed acyclic graph (DAG). We introduce structured sparsity penalties over paths on a DAG called “path coding” penalties. Unlike existing regularization functions, path coding penalties can both model long-range interactions between features in the graph and remain tractable. The penalties and their proximal operators involve path selection problems, which we efficiently solve by leveraging network flow optimization. We experimentally show on synthetic, image, and genomic data that our approach is scalable and leads to more connected subgraphs than other regularization functions for graphs.
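The proximal computations described reduce to path-selection problems solved with network-flow algorithms. The core subproblem, a minimum-cost source-to-sink path in a DAG, can be sketched with a topological-order dynamic program; this toy solver (names and the example graph are invented) is a stand-in for the paper's flow machinery:

```python
import math

def min_cost_path(nodes, edges, s, t):
    """Minimum-cost s-to-t path in a DAG.

    nodes: vertices in topological order; edges: {(u, v): cost}."""
    dist = {v: math.inf for v in nodes}
    prev = {}
    dist[s] = 0.0
    for u in nodes:                      # relax outgoing edges in topological order
        for (a, b), c in edges.items():
            if a == u and dist[u] + c < dist[b]:
                dist[b] = dist[u] + c
                prev[b] = u
    path, v = [t], t                     # walk predecessors back to the source
    while v != s:
        v = prev[v]
        path.append(v)
    return dist[t], path[::-1]

cost, path = min_cost_path(
    ["s", "a", "b", "t"],
    {("s", "a"): 1.0, ("s", "b"): 4.0, ("a", "b"): 1.0,
     ("a", "t"): 5.0, ("b", "t"): 1.0},
    "s", "t")
```

Because the graph is acyclic, one pass in topological order suffices and negative edge costs are allowed, which matters when path costs encode penalty terms rather than physical lengths.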
Simplified statistical image reconstruction algorithm for polyenergetic X-ray CT
In Proc. IEEE Nuc. Sci. Symp. Med. Im. Conf., 2005
Cited by 2 (0 self)
… would not be possible without the direct and indirect help and support of a lot of people. First and foremost is Prof. Jeff Fessler, whose invaluable guidance and constant support as a research advisor helped me take the first steps in engineering research. Jean-Baptiste Thibault, Ph.D., (CT Scientist, GE Healthcare) collaborated closely with us on one of the investigations and provided practical knowledge about actual X-ray CT scanners, scanner data, and image reconstruction software libraries. I also wish to acknowledge the insightful discussions I had with Roy Nilsen (GE Healthcare), Bruno De Man, Ph.D., (GE Global Research), and with members of Prof. Randy Ten Haken’s …
Coordinate and subspace optimization methods for linear least squares with non-quadratic regularization
2007
Cited by 2 (0 self)
Efficient Maximum Entropy Reconstruction of Nuclear Magnetic Resonance T1–T2 Spectra
Cited by 2 (2 self)
© 2010 IEEE. This paper deals with the reconstruction of T1–T2 correlation spectra in nuclear magnetic resonance relaxometry. The ill-posed character and the large size of this inverse problem are the main difficulties to tackle. While maximum entropy is retained as an adequate regularization approach, the choice of an efficient optimization algorithm remains a challenging task. Our proposal is to apply a truncated Newton algorithm with two original features. First, a theoretically sound line search strategy suitable for the entropy function is applied to ensure the convergence of the algorithm. Second, an appropriate preconditioning structure based on a singular value decomposition of the forward model matrix is used to speed up the algorithm convergence. Furthermore, we exploit the specific structures of the observation model and the Hessian of the criterion to reduce the computational cost of the algorithm. The performance of the proposed strategy is illustrated by means of synthetic and real data processing. Index Terms—Laplace inversion, line search, maximum entropy, nuclear magnetic resonance, SVD preconditioning, T1–T2 spectrum, truncated Newton.
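As a rough sketch of this optimization setting, and emphatically not the paper's algorithm (the kernel, data, and regularization weight below are invented, and the Newton system is solved exactly where the paper uses preconditioned truncated conjugate gradients), entropy-regularized least squares can be minimized with damped Newton steps whose line search preserves positivity:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented smooth (hence ill-conditioned) forward model K and data y.
m, n = 30, 20
K = np.exp(-0.5 * ((np.arange(m)[:, None] / m - np.arange(n)[None, :] / n) * 8) ** 2)
x_true = np.ones(n)
x_true[5:10] = 4.0
y = K @ x_true

lam = 1e-2  # assumed regularization weight

def f(x):    # least squares plus negative-entropy regularization
    return 0.5 * np.sum((K @ x - y) ** 2) + lam * np.sum(x * np.log(x) - x)

def grad(x):
    return K.T @ (K @ x - y) + lam * np.log(x)

x = np.ones(n)
for _ in range(25):
    g = grad(x)
    H = K.T @ K + lam * np.diag(1.0 / x)   # Hessian (paper: truncated CG solve)
    d = np.linalg.solve(H, -g)             # Newton direction
    t = 1.0                                # backtracking: keep x > 0, f decreasing
    while np.any(x + t * d <= 0) or f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
        t *= 0.5
    x = x + t * d
```

The entropy term acts as a barrier at zero, so the iterates stay strictly positive; the backtracking loop always terminates because the Newton direction is a descent direction for this strictly convex objective.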