Results 1–10 of 15
Sparse Reconstruction by Separable Approximation
, 2008
Cited by 170 (28 self)
Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularization term. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (which is therefore separable in the unknowns) plus the original sparsity-inducing regularizer. Our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. In addition to solving the standard ℓ2 − ℓ1 case, our framework yields an efficient solution technique for other regularizers, such as an ℓ∞-norm regularizer and group-separable (GS) regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard ℓ2 − ℓ1 problem, as well as being efficient on problems with other separable regularization terms.
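For the ℓ1 regularizer, the separable subproblem the abstract describes has a closed-form solution by soft-thresholding. A minimal fixed-step sketch of that iteration (ISTA-style; SpaRSA's Barzilai–Borwein step selection is not reproduced here, and the problem sizes, noise level, and λ are illustrative choices, not the paper's):

```python
import numpy as np

def soft_threshold(v, t):
    """Closed-form minimiser of 0.5*(x - v)**2 + t*|x|, applied elementwise."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=1000):
    """Minimise 0.5*||A x - y||^2 + lam*||x||_1 by repeatedly solving the
    separable subproblem with diagonal Hessian alpha*I (fixed step)."""
    alpha = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / alpha, lam / alpha)
    return x

rng = np.random.default_rng(0)
m, n, k = 80, 200, 5                        # compressed-sensing toy sizes
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], k) * (1 + rng.random(k))
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = ista(A, y, lam=0.02)
```

Each iteration costs two matrix–vector products plus an elementwise shrinkage, which is why the framework pays off whenever the subproblem is much cheaper than the original problem.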
Joint Bayesian Model Selection and Estimation of Noisy Sinusoids via Reversible Jump MCMC
, 1999
Cited by 40 (3 self)
In this paper, the problem of joint Bayesian model selection and parameter estimation for sinusoids in white Gaussian noise is addressed. An original Bayesian model is proposed that allows us to define a posterior distribution on the parameter space. All Bayesian inference is then based on this distribution. Unfortunately, a direct evaluation of this distribution and of its features, including posterior model probabilities, requires evaluation of some complicated high-dimensional integrals. We develop an efficient stochastic algorithm based on reversible jump Markov chain Monte Carlo methods to perform the Bayesian computation. A convergence result for this algorithm is established. In simulations, detection based on posterior model probabilities outperforms conventional detection schemes.
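The reversible-jump sampler itself does not fit in a short sketch, but the quantity that drives its model-order moves — a marginal likelihood with the linear amplitudes and the noise variance integrated out — has a closed form under a Zellner g-prior. A toy illustration of comparing candidate numbers of sinusoids this way (the g-prior, the periodogram-based frequency candidates, and the deliberately large g are assumptions for this sketch, not the paper's exact model):

```python
import numpy as np

def design(freqs, N):
    """Cosine/sine regressors for each candidate frequency (rad/sample)."""
    t = np.arange(N)
    cols = [f(w * t) for w in freqs for f in (np.cos, np.sin)]
    return np.column_stack(cols) if cols else np.zeros((N, 0))

def log_marginal(y, X, g):
    """log p(y | model), up to a model-independent constant, with amplitudes
    ~ N(0, g sigma^2 (X'X)^-1) and p(sigma^2) ~ 1/sigma^2 integrated out."""
    N, d = X.shape
    yty = y @ y
    if d == 0:
        return -0.5 * N * np.log(yty)
    fit = y @ X @ np.linalg.lstsq(X, y, rcond=None)[0]   # y' P_X y
    return -0.5 * d * np.log1p(g) - 0.5 * N * np.log(yty - g / (1.0 + g) * fit)

rng = np.random.default_rng(1)
N = 200
t = np.arange(N)
w_true = [2 * np.pi * 16 / N, 2 * np.pi * 41 / N]        # two sinusoids
y = (2.0 * np.cos(w_true[0] * t) + 1.5 * np.sin(w_true[1] * t)
     + rng.standard_normal(N))

# frequency candidates: local maxima of the periodogram, strongest first
P = np.abs(np.fft.rfft(y)) ** 2
peaks = [i for i in range(1, len(P) - 1) if P[i - 1] < P[i] > P[i + 1]]
peaks.sort(key=lambda i: -P[i])
cands = [2 * np.pi * i / N for i in peaks]

# posterior mode of k under a flat prior on k; g is chosen large so that
# spurious noise peaks are penalised well beyond the evidence they can gain
scores = {k: log_marginal(y, design(cands[:k], N), g=float(N) ** 2)
          for k in range(4)}
best_k = max(scores, key=scores.get)
```

A reversible-jump sampler visits this same quantity through birth/death moves on k rather than by enumerating models, which is what makes it tractable when the frequencies must also be sampled.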
Robust Full Bayesian Learning for Radial Basis Networks
, 2001
Cited by 24 (4 self)
We propose a hierarchical full Bayesian model for radial basis networks. This model treats the model dimension (number of neurons), model parameters,...
Activation detection in functional MRI using subspace modeling and maximum likelihood estimation
 IEEE Trans. Med. Imag.
, 1999
Cited by 19 (3 self)
A statistical method for detecting activated pixels in functional MRI (fMRI) data is presented. In this method, the fMRI time series measured at each pixel is modeled as the sum of a response signal which arises due to the experimentally controlled activation-baseline pattern, a nuisance component representing effects of no interest, and Gaussian white noise. For periodic activation-baseline patterns, the response signal is modeled by a truncated Fourier series with a known fundamental frequency but unknown Fourier coefficients. The nuisance subspace is assumed to be unknown. A maximum likelihood estimate is derived for the component of the nuisance subspace which is orthogonal to the response signal subspace. An estimate for the order of the nuisance subspace is obtained from an information theoretic criterion. A statistical test is derived and shown to be the uniformly most powerful (UMP) test invariant to a group of transformations which are natural to the hypothesis testing problem. The maximal invariant statistic used in this test has an F distribution. The theoretical F distribution under the null hypothesis strongly concurred with the experimental frequency distribution obtained by performing null experiments in which the subjects did not perform any activation task. Application of the theory to motor activation and visual stimulation fMRI studies is presented. Index Terms—Brain, functional MRI, maximum likelihood estimation, statistical analysis.
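With the nuisance subspace treated as known, the detector described here reduces to a standard GLM subspace F-test: fit the series with and without the Fourier response basis and compare residuals. A simplified sketch (the information-theoretic nuisance-order estimation is omitted; the polynomial drift, sizes, and amplitudes are illustrative assumptions):

```python
import numpy as np

def f_statistic(y, S, Nu):
    """F-test for the response subspace S after removing the nuisance Nu."""
    N = len(y)
    rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    p, q = S.shape[1], Nu.shape[1]
    rss_full = rss(np.column_stack([S, Nu]))
    rss_red = rss(Nu)
    # under the null, this statistic follows an F(p, N - p - q) distribution
    return ((rss_red - rss_full) / p) / (rss_full / (N - p - q))

rng = np.random.default_rng(0)
N, period, harmonics = 120, 24, 3
t = np.arange(N)
w0 = 2 * np.pi / period                      # known fundamental frequency
S = np.column_stack([f(h * w0 * t) for h in range(1, harmonics + 1)
                     for f in (np.cos, np.sin)])        # response basis (p = 6)
Nu = np.column_stack([np.ones(N), t, t ** 2])           # drift nuisance (q = 3)

drift = 0.5 + 0.01 * t
active = 2.0 * np.cos(w0 * t) + drift + rng.standard_normal(N)  # activated pixel
null = drift + rng.standard_normal(N)                           # inactive pixel
F_active = f_statistic(active, S, Nu)
F_null = f_statistic(null, S, Nu)
```

Thresholding such an F map at a quantile of the null F(p, N − p − q) distribution gives the detection rule the abstract validates against null experiments.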
Robust Full Bayesian Learning for Neural Networks
, 1999
Cited by 12 (9 self)
In this paper, we propose a hierarchical full Bayesian model for neural networks. This model treats the model dimension (number of neurons), model parameters, regularisation parameters and noise parameters as random variables that need to be estimated. We develop a reversible jump Markov chain Monte Carlo (MCMC) method to perform the necessary computations. We find that the results obtained using this method are not only better than the ones reported previously, but also appear to be robust with respect to the prior specification. In addition, we propose a novel and computationally efficient reversible jump MCMC simulated annealing algorithm to optimise neural networks. This algorithm enables us to maximise the joint posterior distribution of the network parameters and the number of basis functions. It performs a global search in the joint space of the parameters and number of parameters, thereby surmounting the problem of local minima. We show that by calibrating the full hierarchical ...
Reversible Jump MCMC for Joint Detection and Estimation of Sources in Coloured Noise
 IEEE Trans. Signal Process.
, 2000
Cited by 7 (3 self)
This paper presents a novel Bayesian solution to the difficult problem of joint detection and estimation of sources impinging on a single array of sensors in spatially coloured noise with arbitrary covariance structure. Robustness to the noise covariance structure is achieved by integrating out the unknown covariance matrix in an appropriate posterior distribution. The proposed procedure uses the Reversible Jump Markov Chain Monte Carlo method to extract the desired model order and direction of arrival parameters. We show that the determination of model order is consistent provided a particular hyperparameter is within a specified range. Simulation results support the effectiveness of the method.
Bayesian Methods for Neural Networks
, 1999
Cited by 7 (0 self)
Summary: The application of the Bayesian learning paradigm to neural networks results in a flexible and powerful nonlinear modelling framework that can be used for regression, density estimation, prediction and classification. Within this framework, all sources of uncertainty are expressed and measured by probabilities. This formulation allows for a probabilistic treatment of our a priori knowledge, domain-specific knowledge, model selection schemes, parameter estimation methods and noise estimation techniques. Many researchers have contributed towards the development of the Bayesian learning approach for neural networks. This thesis advances this research by proposing several novel extensions in the areas of sequential learning, model selection, optimisation and convergence assessment. The first contribution is a regularisation strategy for sequential learning based on extended Kalman filtering and noise estimation via evidence maximisation. Using the expectation maximisation (EM) algorithm, a similar algorithm is derived for batch learning. Much of the thesis is, however, devoted to Monte Carlo simulation methods. A robust Bayesian method is proposed to estimate,
MAP model order selection rule for 2D sinusoids in white noise
 IEEE Trans. Signal Process.
, 2005
Cited by 5 (2 self)
We consider the problem of jointly estimating the number as well as the parameters of two-dimensional (2-D) sinusoidal signals, observed in the presence of an additive white Gaussian noise field. Existing solutions to this problem are based on model order selection rules and are derived for the parallel one-dimensional (1-D) problem. These criteria are then adapted to the 2-D problem using heuristic arguments. Employing asymptotic considerations, we derive a maximum a posteriori (MAP) model order selection criterion for jointly estimating the parameters of the 2-D sinusoids and their number. The proposed model order selection rule is strongly consistent. As an example, the model order selection criterion is applied as a component in an algorithm for parametric estimation and synthesis of textured images.
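The exact MAP penalty derived in the paper is not reproduced here, but the shape of such a rule — least-squares fits at candidate 2-D frequencies, scored by a penalised log-likelihood over the order k — can be sketched. The periodogram-based candidate selection and the inflated penalty constant below are assumptions for this toy example only:

```python
import numpy as np

def fit_rss(y, freqs):
    """Residual sum of squares after least-squares fitting cosine/sine
    planes at the given 2-D frequencies (rad/sample per axis)."""
    N1, N2 = y.shape
    n1, n2 = np.meshgrid(np.arange(N1), np.arange(N2), indexing="ij")
    cols = [f(w1 * n1 + w2 * n2).ravel() for (w1, w2) in freqs
            for f in (np.cos, np.sin)]
    if not cols:
        return float(np.sum(y ** 2))
    X = np.column_stack(cols)
    beta = np.linalg.lstsq(X, y.ravel(), rcond=None)[0]
    return float(np.sum((y.ravel() - X @ beta) ** 2))

rng = np.random.default_rng(0)
N1 = N2 = 32
n1, n2 = np.meshgrid(np.arange(N1), np.arange(N2), indexing="ij")
w = [(2 * np.pi * 5 / N1, 2 * np.pi * 9 / N2),
     (2 * np.pi * 12 / N1, 2 * np.pi * 3 / N2)]       # two true 2-D sinusoids
y = (2.0 * np.cos(w[0][0] * n1 + w[0][1] * n2)
     + 1.5 * np.sin(w[1][0] * n1 + w[1][1] * n2)
     + rng.standard_normal((N1, N2)))

# candidates: strongest 2-D periodogram bins in a conjugate-free half-plane
P = np.abs(np.fft.fft2(y)) ** 2
ii, jj = np.unravel_index(np.argsort(P, axis=None)[::-1], P.shape)
cands = [(2 * np.pi * i / N1, 2 * np.pi * j / N2)
         for i, j in zip(ii, jj) if 1 <= j < N2 // 2][:4]

# penalised log-likelihood; the factor 6 is an ad-hoc stand-in for the
# larger-than-BIC penalty a strongly consistent 2-D rule requires
N = N1 * N2
crit = {k: N * np.log(fit_rss(y, cands[:k]) / N) + 6 * k * np.log(N)
        for k in range(4)}
best_k = min(crit, key=crit.get)
```

The point the abstract makes is precisely that the right size of this penalty in 2-D does not follow from naively transplanting 1-D criteria.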
A Shift Invariance-Based Order-Selection Technique for Exponential Data Modelling
Cited by 4 (0 self)
This paper presents a new subspace-based technique for automatic detection of the number of exponentially damped sinusoids. It consists of studying the shift-invariance of the dominant subspace of the Hankel data matrix. No threshold setting and no penalization terms are necessary. This model-based method is easy to implement and can be plugged into most subspace-based harmonic retrieval algorithms. Index Terms—Exponentially damped sinusoids, harmonic retrieval, order selection, singular value decomposition, subspace, total least squares.
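The shift-invariance idea can be sketched as follows: for each candidate order p, regress the down-shifted rows of the p dominant left singular vectors on the up-shifted rows and measure the residual of that shift equation; a subspace spanned by true exponential modes is (nearly) shift-invariant, while one padded with noise directions is not. This follows the general idea only — the paper's exact criterion is not reproduced, and the signal parameters are illustrative:

```python
import numpy as np

def shift_invariance_error(x, L, p_max):
    """For each candidate order p, how far the p-dimensional dominant
    subspace of the Hankel data matrix is from being shift-invariant."""
    N = len(x)
    M = N - L + 1
    H = np.array([x[i:i + M] for i in range(L)])       # L x M Hankel matrix
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    errs = {}
    for p in range(1, p_max + 1):
        U_up, U_down = U[:-1, :p], U[1:, :p]
        Phi = np.linalg.lstsq(U_up, U_down, rcond=None)[0]
        errs[p] = np.linalg.norm(U_down - U_up @ Phi)  # shift-equation residual
    return errs

rng = np.random.default_rng(0)
n = np.arange(64)
# two damped complex exponentials plus a little noise (true order K = 2)
x = ((0.95 * np.exp(1j * 0.4)) ** n + (0.90 * np.exp(1j * 0.9)) ** n
     + 0.001 * (rng.standard_normal(64) + 1j * rng.standard_normal(64)))
errs = shift_invariance_error(x, L=32, p_max=5)
k_hat = min(errs, key=errs.get)    # order whose subspace is most shift-invariant
```

Because the same shifted-subspace regression is the core of ESPRIT-type estimators, the order test reuses quantities those algorithms already compute, which is what makes it easy to plug in.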
A combined order selection and parameter estimation algorithm for undamped exponentials
 IEEE Trans. Signal Process.
, 2000
Cited by 2 (0 self)
We propose an approximate maximum likelihood parameter estimation algorithm, combined with a model order estimator, for superimposed undamped exponentials in noise. The algorithm combines the robustness of Fourier-based estimators and the high-resolution capabilities of parametric methods. We use a combination of a Wald statistic and a MAP test for order selection and initialize an iterative maximum likelihood descent algorithm recursively based on estimates at higher candidate model orders. Experiments using simulated data and synthetic radar data demonstrate improved performance over MDL, MAP, and AIC in cases of practical interest. Index Terms—Combined detection and estimation, resolution bounds, undamped exponentials.