Results 11–20 of 103
Spectral–spatial hyperspectral image segmentation using subspace multinomial logistic regression and Markov random fields
 IEEE Trans. Geosci. Remote Sens.
, 2012
Abstract

Cited by 24 (9 self)
This paper introduces a new supervised segmentation algorithm for remotely sensed hyperspectral image data which integrates the spectral and spatial information in a Bayesian framework. A multinomial logistic regression (MLR) algorithm is first used to learn the posterior probability distributions from the spectral information, using a subspace projection method to better characterize noise and highly mixed pixels. Then, contextual information is included using a multilevel logistic Markov–Gibbs Markov random field prior. Finally, a maximum a posteriori segmentation is efficiently computed by the α-expansion min-cut-based integer optimization algorithm. The proposed segmentation approach is experimentally evaluated using both simulated and real hyperspectral data sets, exhibiting state-of-the-art performance when compared with recently introduced hyperspectral image classification methods. The integration of subspace projection methods with the MLR algorithm, combined with the use of spatial–contextual information, represents an innovative contribution in the literature. This approach is shown to provide accurate characterization of hyperspectral imagery in both the spectral and the spatial domain. Index Terms—Hyperspectral image segmentation, Markov random field (MRF), multinomial logistic regression (MLR), subspace projection method.
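The two-stage pipeline the abstract describes (spectral MLR posteriors, then an MRF-regularized MAP labeling) can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the α-expansion graph-cut step is replaced by simple iterated conditional modes (ICM), and all names and parameters are made up.

```python
import numpy as np

def mlr_posteriors(X, W):
    """Multinomial logistic regression class posteriors (softmax).
    X: (n, d) features (e.g. subspace-projected spectra), W: (K, d) weights."""
    scores = X @ W.T                              # (n, K) linear scores
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(scores)
    return p / p.sum(axis=1, keepdims=True)

def mrf_segment(post, beta=1.5, n_iter=5):
    """MAP-style segmentation of an H x W image from per-pixel posteriors
    post (H, W, K) under a multilevel logistic (Potts) prior. ICM is used
    here in place of the paper's alpha-expansion graph cuts, for brevity."""
    H, W_, K = post.shape
    labels = post.argmax(axis=2)                  # spectral-only initialization
    logp = np.log(post + 1e-12)
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W_):
                # 4-neighborhood labels inside the image bounds
                nbrs = [labels[i2, j2]
                        for i2, j2 in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= i2 < H and 0 <= j2 < W_]
                # data term + beta * (neighbor agreement) for each label
                energy = [logp[i, j, k] + beta * sum(n == k for n in nbrs)
                          for k in range(K)]
                labels[i, j] = int(np.argmax(energy))
    return labels
```

With a strong enough spatial weight `beta`, an isolated pixel whose spectral posterior mildly favors the "wrong" class is flipped to agree with its neighbors, which is the smoothing effect the MRF prior provides.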
Efficient Minimization Method for a Generalized Total Variation Functional
, 2009
Abstract

Cited by 23 (5 self)
Replacing the ℓ² data fidelity term of the standard Total Variation (TV) functional with an ℓ¹ data fidelity term has been found to offer a number of theoretical and practical benefits. Efficient algorithms for minimizing this ℓ¹-TV functional have only recently begun to be developed; the fastest of these exploit graph representations and are restricted to the denoising problem. We describe an alternative approach that minimizes a generalized TV functional, including both ℓ²-TV and ℓ¹-TV as special cases, and is capable of solving more general inverse problems than denoising (e.g., deconvolution). This algorithm is competitive with the graph-based methods in the denoising case, and is the fastest algorithm of which we are aware for general inverse problems involving a non-trivial forward linear operator.
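As a concrete reading of the "generalized TV functional" in the abstract, the sketch below simply evaluates an objective with an ℓp data fidelity term plus a gradient-magnitude penalty; p = 2 and p = 1 recover the ℓ²-TV and ℓ¹-TV special cases, and an optional forward operator `K` covers inverse problems beyond denoising. The exact functional and the minimization algorithm from the paper are not reproduced; names and the penalty exponent `q` are illustrative.

```python
import numpy as np

def gen_tv_objective(u, s, lam, p=2.0, q=1.0, K=None):
    """Evaluate a generalized TV objective of the form
        (1/p) * || K u - s ||_p^p  +  lam * sum |grad u|^q
    p = 2.0 gives an l2-TV-style functional, p = 1.0 an l1-TV-style one.
    K is an optional forward linear operator (identity, i.e. denoising,
    when None); u and s are 2-D arrays."""
    r = (u if K is None else K(u)) - s
    fidelity = (np.abs(r) ** p).sum() / p
    dx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences with
    dy = np.diff(u, axis=0, append=u[-1:, :])   # replicated boundary
    grad_mag = np.sqrt(dx ** 2 + dy ** 2)
    return fidelity + lam * (grad_mag ** q).sum()
```

A constant image has zero TV penalty, so the objective reduces to the data fidelity term alone; switching `p` between 2 and 1 changes how strongly large residuals are weighted, which is the robustness benefit the abstract alludes to.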
A generative model for brain tumor segmentation in multimodal images
 In: Proc. MICCAI, LNCS 6362
, 2010
Abstract

Cited by 21 (9 self)
We introduce a generative probabilistic model for segmentation of tumors in multidimensional images. The model allows for different tumor boundaries in each channel, reflecting differences in tumor appearance across modalities. We augment a probabilistic atlas of healthy tissue priors with a latent atlas of the lesion and derive the estimation algorithm to extract tumor boundaries and the latent atlas from the image data. We present experiments on 25 glioma patient data sets, demonstrating significant improvement over traditional multivariate tumor segmentation.
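The core Bayes step of such a generative model — combining a spatial atlas prior with a per-channel intensity likelihood — can be sketched per voxel as below. In the paper the tumor atlas is itself latent and estimated by EM; here it is taken as given, and the Gaussian parameters for healthy and tumor tissue are purely illustrative assumptions.

```python
import numpy as np

def tumor_posterior(intensity, atlas_prior, mu_h, mu_t, sigma):
    """Per-voxel posterior probability of the tumor label by Bayes' rule:
    a spatial atlas prior times Gaussian intensity likelihoods for the
    tumor (mu_t) and healthy (mu_h) classes, sharing std dev sigma.
    intensity and atlas_prior may be scalars or arrays of voxels."""
    def gauss(x, m):
        return np.exp(-0.5 * ((x - m) / sigma) ** 2)
    num = atlas_prior * gauss(intensity, mu_t)          # tumor evidence
    den = num + (1 - atlas_prior) * gauss(intensity, mu_h)
    return num / den
```

Voxels whose intensity matches the tumor mean get a high posterior even under a neutral prior, while intensities near the healthy mean are pushed back down; the EM machinery in the paper alternates this step with re-estimating the latent atlas.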
A Fast Multilevel Algorithm for Wavelet-Regularized Image Restoration
 IEEE Trans. Image Processing
Abstract

Cited by 18 (8 self)
We present a multilevel extension of the popular “thresholded Landweber” algorithm for wavelet-regularized image restoration that yields an order of magnitude speed improvement over the standard fixed-scale implementation. The method is generic and targeted towards large-scale linear inverse problems, such as 3D deconvolution microscopy. The algorithm is derived within the framework of bound optimization. The key idea is to successively update the coefficients in the various wavelet channels using fixed, subband-adapted iteration parameters (step sizes and threshold levels). The optimization problem is solved efficiently via a proper chaining of basic iteration modules. The higher-level description of the algorithm is similar to that of a multigrid solver for PDEs, but there is one fundamental difference: the latter iterates through a sequence of multiresolution versions of the original problem, while, in our case, we cycle through the wavelet subspaces corresponding to the difference between successive approximations. This strategy is motivated by the special structure of the problem and the preconditioning properties of the wavelet representation. We establish that the solution of the restoration problem corresponds to a fixed point of our multilevel optimizer. We also provide experimental evidence that the improvement in convergence rate is essentially determined by the (unconstrained) linear part of the algorithm, irrespective of the type of wavelet. Finally, we illustrate the technique with some image deconvolution examples, including some real 3D fluorescence microscopy data. Index Terms—Bound optimization, confocal, convergence acceleration, deconvolution, fast, fluorescence, inverse problems, regularization, majorize-minimize, microscopy, multigrid, multilevel, multiresolution, multiscale, nonlinear, optimization transfer, preconditioning, reconstruction, restoration, sparsity.
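For reference, the baseline that the paper accelerates — the classic single-scale thresholded Landweber (ISTA) iteration — can be sketched with a one-level orthonormal Haar transform standing in for the wavelet representation. The paper's contribution, cycling over subbands with adapted step sizes and thresholds, is deliberately omitted; operator and parameter names are illustrative.

```python
import numpy as np

def haar(x):
    """One-level orthonormal 1-D Haar analysis (x of even length)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return np.concatenate([a, d])

def ihaar(c):
    """Inverse of haar()."""
    n = len(c) // 2
    a, d = c[:n], c[n:]
    x = np.empty(2 * n)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def thresholded_landweber(y, A, At, tau, gamma, n_iter=50):
    """Single-scale thresholded Landweber (ISTA) for
        min_x 0.5 * ||A x - y||^2 + tau * ||W x||_1
    with an orthonormal W (here one-level Haar). A / At are the forward
    operator and its adjoint; gamma is the step size."""
    x = At(y)
    for _ in range(n_iter):
        grad = At(A(x) - y)                 # Landweber (gradient) step
        c = haar(x - gamma * grad)          # move to the wavelet domain
        c = np.sign(c) * np.maximum(np.abs(c) - gamma * tau, 0.0)  # soft-threshold
        x = ihaar(c)                        # back to the signal domain
    return x
```

In the denoising case (`A` the identity) the iteration reaches its fixed point after one step: the result is just soft-thresholding of the Haar coefficients of `y`, which is the behavior the multilevel scheme generalizes to non-trivial operators.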
Hyperspectral Image Segmentation Using a New Bayesian Approach with Active Learning
Abstract

Cited by 16 (11 self)
This paper introduces a new supervised Bayesian approach to hyperspectral image segmentation with active learning, which consists of two main steps: (a) learning, for each class label, the posterior probability distributions using a multinomial logistic regression model; (b) segmenting the hyperspectral image based on the posterior probability distribution learned in step (a) and on a multilevel logistic prior which encodes the spatial information. The multinomial logistic regressors are learned by using the recently introduced logistic regression via splitting and augmented Lagrangian (LORSAL) algorithm. The maximum a posteriori segmentation is efficiently computed by the α-expansion min-cut-based integer optimization algorithm. Aiming at reducing the cost of acquiring large training sets, active learning is performed using a mutual-information-based criterion. The state-of-the-art performance of the proposed approach is illustrated using both simulated and real hyperspectral data sets in a number of experimental comparisons with recently introduced hyperspectral image classification methods. Index Terms—Hyperspectral image segmentation, sparse multinomial logistic regression, ill-posed problems, graph cuts, integer optimization, mutual information, active learning.
Variational algorithms for marginal map
 In UAI
, 2011
Abstract

Cited by 16 (4 self)
Marginal MAP problems are notoriously difficult tasks for graphical models. We derive a general variational framework for solving marginal MAP problems, in which we apply analogues of the Bethe, tree-reweighted, and mean field approximations. We then derive a “mixed” message passing algorithm and a convergent alternative using CCCP to solve the BP-type approximations. Theoretically, we give conditions under which the decoded solution is a global or local optimum, and obtain novel upper bounds on solutions. Experimentally, we demonstrate that our algorithms outperform related approaches. We also show that EM and variational EM are special cases of our framework.
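The quantity being approximated is easy to state exactly on a tiny model: sum out the marginalized variables, then maximize over the remaining MAP variables. The brute-force sketch below shows this definition; the paper's variational framework exists precisely because this enumeration is intractable for real graphical models.

```python
import numpy as np

def marginal_map_bruteforce(joint, max_axes):
    """Exact marginal MAP on a small discrete model.
    joint: array holding the full joint distribution, one axis per variable.
    max_axes: tuple of axes (variables) to maximize over; all other
    variables are summed out first. Returns the maximizing assignment of
    the MAP variables and the value of the summed-then-maxed marginal."""
    sum_axes = tuple(a for a in range(joint.ndim) if a not in max_axes)
    marg = joint.sum(axis=sum_axes)        # marginal over the MAP variables
    flat = int(np.argmax(marg))            # then maximize
    return np.unravel_index(flat, marg.shape), float(marg.max())
```

Note the order matters: max-of-sum (marginal MAP) can disagree with the plain MAP assignment read off the joint, which is one reason the problem is harder than either pure marginalization or pure MAP.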
Spectral Analysis of Nonuniformly Sampled Data and Applications
, 2012
Abstract

Cited by 14 (0 self)
Signal acquisition, signal reconstruction, and analysis of the signal's spectrum are the three most important steps in signal processing, and they are found in almost all modern-day hardware. In most signal processing hardware, the signal of interest is sampled at uniform intervals satisfying conditions such as the Nyquist rate. In some cases, however, the privilege of uniformly sampled data is lost due to constraints on hardware resources. This thesis addresses the important problem of signal reconstruction and spectral analysis from nonuniformly sampled data and presents a variety of methods. The proposed methods are tested via numerical experiments on both artificial and real-life data sets. The thesis starts with a brief review of methods available in the literature for signal reconstruction and spectral analysis from nonuniformly sampled data. The methods discussed in the thesis are classified into two broad categories, dense and sparse methods, according to the kind of spectra for which they are applicable. Among dense spectral methods, the main contribution of the thesis is a nonparametric approach named LIMES, which recovers the smooth spectrum from nonuniformly sampled data. Apart from recovering
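A standard baseline for spectral analysis of nonuniformly sampled data, against which methods like LIMES are typically compared, is the least-squares periodogram: fit a sinusoid at each candidate frequency directly to the irregular samples. This is a generic sketch of that baseline, not the thesis's own method.

```python
import numpy as np

def ls_periodogram(t, y, freqs):
    """Least-squares periodogram for nonuniformly sampled data.
    t: sample times (need not be uniform); y: sample values;
    freqs: candidate frequencies in cycles per unit time.
    For each frequency, fit a*cos + b*sin by least squares and report
    the fitted power a^2 + b^2."""
    y = y - y.mean()                       # remove the DC component
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        M = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(M, y, rcond=None)
        power[i] = coef[0] ** 2 + coef[1] ** 2
    return power
```

Because the sinusoids are fit by least squares rather than evaluated on a uniform grid, the estimator does not require the samples to satisfy any regular-spacing assumption, which is exactly the setting the thesis considers.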
Penalized classification using fisher’s linear discriminant
 Journal of the Royal Statistical Society, Series B
, 2011
Abstract

Cited by 14 (1 self)
Summary. We consider the supervised classification setting, in which the data consist of p features measured on n observations, each of which belongs to one of K classes. Linear discriminant analysis (LDA) is a classical method for this problem. However, in the high-dimensional setting where p ≫ n, LDA is not appropriate for two reasons. First, the standard estimate for the within-class covariance matrix is singular, and so the usual discriminant rule cannot be applied. Second, when p is large, it is difficult to interpret the classification rule that is obtained from LDA, since it involves all p features. We propose penalized LDA, a general approach for penalizing the discriminant vectors in Fisher's discriminant problem in a way that leads to greater interpretability. The discriminant problem is not convex, so we use a minorization–maximization approach to optimize it efficiently when convex penalties are applied to the discriminant vectors. In particular, we consider the use of L1 and fused lasso penalties. Our proposal is equivalent to recasting Fisher's discriminant problem as a biconvex problem. We evaluate the performance of the resulting methods in a simulation study and on three gene expression data sets. We also survey past methods for extending LDA to the high-dimensional setting and explore their relationships with our proposal.
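The singularity problem the summary describes is easy to see in code: when p > n, the within-class scatter matrix cannot be inverted, so the classical two-class Fisher direction does not exist without regularization. The sketch below uses a simple ridge fix as illustration only; the paper's actual proposal penalizes the discriminant vectors themselves (L1 / fused lasso) via minorization–maximization, which is not reproduced here.

```python
import numpy as np

def fisher_direction(X, y, ridge=1e-2):
    """Two-class Fisher discriminant direction with a ridge-regularized
    within-class scatter matrix, so the estimate exists even when the
    number of features p exceeds the number of observations n.
    X: (n, p) data; y: 0/1 class labels. Returns a unit vector w."""
    X0, X1 = X[y == 0], X[y == 1]
    C0 = X0 - X0.mean(axis=0)
    C1 = X1 - X1.mean(axis=0)
    Sw = C0.T @ C0 + C1.T @ C1             # within-class scatter (singular if p > n)
    Sw += ridge * np.eye(X.shape[1])       # shrinkage makes Sw invertible
    w = np.linalg.solve(Sw, X1.mean(axis=0) - X0.mean(axis=0))
    return w / np.linalg.norm(w)
```

Projecting onto `w` separates the two class means by construction (the separation equals a positive-definite quadratic form in the mean difference), even in a p ≫ n regime where the unregularized rule is undefined.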
Supplement to “A mixture of experts model for rank data with applications in election studies
, 2008
Abstract

Cited by 14 (3 self)
A voting bloc is defined to be a group of voters who have similar voting preferences. The cleavage of the Irish electorate into voting blocs is of interest. Irish elections employ a “single transferable vote” electoral system; under this system voters rank some or all of the electoral candidates in order of preference. These rank votes provide a rich source of preference information from which inferences about the composition of the electorate may be drawn. Additionally, the influence of social factors or covariates on the electorate composition is of interest. A mixture of experts model is a mixture model in which the model parameters are functions of covariates. A mixture of experts model for rank data is developed to provide a model-based method to cluster Irish voters into voting blocs, to examine the influence of social factors on this clustering, and to examine the characteristic preferences of the voting blocs. The Benter model for rank data is employed as the
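The likelihood of a ranked (possibly partial) ballot under this family of models can be sketched via the Plackett–Luce model, of which the Benter model is a generalization: Benter adds a per-position dampening parameter, and with all dampening parameters set to 1 it reduces to exactly the computation below. Names are illustrative.

```python
import numpy as np

def plackett_luce_loglik(ranking, support):
    """Log-likelihood of one ranked ballot under the Plackett-Luce model.
    ranking: candidate indices in order of preference (may be partial);
    support: positive support parameter for each candidate.
    At each stage the ranked candidate is chosen from those remaining
    with probability proportional to its support."""
    support = np.asarray(support, dtype=float)
    remaining = list(range(len(support)))
    ll = 0.0
    for c in ranking:
        denom = support[remaining].sum()    # choose among candidates left
        ll += np.log(support[c] / denom)
        remaining.remove(c)                 # sequential choice without replacement
    return ll
```

Because each stage conditions on the candidates still unranked, the model handles the partial ballots that the single transferable vote system produces without any special casing.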
Learning Social Infectivity in Sparse Lowrank Networks Using Multidimensional Hawkes Processes
Abstract

Cited by 12 (5 self)
How will the behaviors of individuals in a social network be influenced by their neighbors, the authorities, and the communities in a quantitative way? Such critical and valuable knowledge is unfortunately not readily accessible, and we tend to only observe its manifestation in the form of recurrent and time-stamped events occurring at the individuals involved in the social network. It is an important yet challenging problem to infer the underlying network of social influence based on the temporal patterns of those historical events that we can observe. In this paper, we propose a convex optimization approach to discover the hidden network of social influence by modeling the recurrent events at different individuals as multidimensional Hawkes processes, emphasizing the mutual-excitation nature of the dynamics of event occurrence. Furthermore, our estimation procedure, using nuclear and ℓ1 norm regularization simultaneously on the parameters, is able to take into account the prior knowledge of the presence of neighbor interaction, authority influence, and community coordination in the social network. To efficiently solve the resulting optimization problem, we also design an algorithm, ADM4, which combines techniques of the alternating direction method of multipliers and majorization minimization. We experimented with both synthetic and real-world data sets, and showed that the proposed method can discover the hidden network more accurately and produce a better predictive model than several baselines.
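The mutual-excitation dynamics the abstract describes can be sketched via the conditional intensity of a multidimensional Hawkes process with exponential kernels: a baseline rate plus a sum of decaying bumps contributed by past events, weighted by an infectivity matrix. That matrix is the object the paper recovers with simultaneous nuclear- and ℓ1-norm regularization (ADM4); the kernel choice and names below are illustrative.

```python
import numpy as np

def hawkes_intensity(t, u, history, mu, A, beta):
    """Conditional intensity of node u at time t for a multidimensional
    Hawkes process with exponential kernels:
        lambda_u(t) = mu[u] + sum over past events (t_i, u_i) with t_i < t of
                      A[u, u_i] * beta * exp(-beta * (t - t_i))
    mu: baseline rates; A[u, v]: infectivity of node v on node u;
    history: iterable of (event_time, event_node) pairs."""
    lam = mu[u]
    for t_i, u_i in history:
        if t_i < t:
            lam += A[u, u_i] * beta * np.exp(-beta * (t - t_i))
    return lam
```

An event at a highly infective neighbor transiently raises a node's rate above its baseline, and the effect decays at rate `beta`; a sparse, low-rank `A` encodes the neighbor, authority, and community structure the estimation procedure assumes.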