Results 1–10 of 27,318
Sparsity-promoting sensor selection for nonlinear measurement models
IEEE Trans. Signal Process. (submitted), 2013
"... The problem of choosing the best subset of sensors that guarantees a certain estimation performance is referred to as sensor selection. In this paper, we focus on observations that are related to a general nonlinear model. The proposed framework is valid as long as the observations are ind ..."
Cited by 12 (9 self)
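The estimation-performance guarantee in sensor selection is typically expressed through a surrogate such as the log-determinant of the Fisher information. A minimal sketch of a greedy variant (my own illustration, not the paper's sparsity-promoting convex formulation; `greedy_sensor_selection` and the linearized measurement matrix `A` are hypothetical names):

```python
import numpy as np

# Greedy sensor selection sketch: pick k rows (sensors) of a linearized
# measurement matrix A that maximize the log-determinant of the Fisher
# information F(S) = sum_{i in S} a_i a_i^T.
def greedy_sensor_selection(A, k, eps=1e-6):
    m, n = A.shape
    selected = []
    F = eps * np.eye(n)            # small ridge keeps F invertible early on
    for _ in range(k):
        best_i, best_gain = None, -np.inf
        for i in range(m):
            if i in selected:
                continue
            a = A[i]
            # log det(F + a a^T) - log det(F) = log(1 + a^T F^{-1} a)
            gain = np.log1p(a @ np.linalg.solve(F, a))
            if gain > best_gain:
                best_i, best_gain = i, gain
        selected.append(best_i)
        F = F + np.outer(A[best_i], A[best_i])
    return selected

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))    # 20 candidate sensors, 3 parameters
print(greedy_sensor_selection(A, 5))
```

The rank-one identity log det(F + a aᵀ) = log det(F) + log(1 + aᵀF⁻¹a) keeps each greedy evaluation cheap.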
Latent Variable Bayesian Models for Promoting Sparsity
2010
"... Many practical methods for finding maximally sparse coefficient expansions involve solving a regression problem using a particular class of concave penalty functions. From a Bayesian perspective, this process is equivalent to maximum a posteriori (MAP) estimation using a sparsity-inducing prior dist ..."
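The MAP/penalized-regression equivalence described above is easiest to see in the scalar case: a Gaussian likelihood with a Laplace prior gives soft thresholding, the building block of l1-penalized regression (a sketch in my own notation, not the paper's latent-variable construction):

```python
import numpy as np

# For a scalar observation y = x + noise with Gaussian likelihood N(x, sigma2)
# and Laplace prior p(x) ∝ exp(-|x|/b), the MAP estimate minimizes
#   (y - x)^2 / (2 sigma2) + |x| / b,
# whose closed form is soft thresholding with threshold lam = sigma2 / b.
def map_laplace(y, sigma2=1.0, b=1.0):
    lam = sigma2 / b
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

print(map_laplace(2.5))   # 1.5
print(map_laplace(0.3))   # 0.0 -- an exact zero: the prior promotes sparsity
```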
A modified, sparsity-promoting, Gauss-Newton algorithm for seismic waveform inversion
"... Images obtained from seismic data are used by the oil and gas industry for geophysical exploration. Cutting-edge methods for transforming the data into interpretable images are moving away from linear approximations and high-frequency asymptotics towards Full Waveform Inversion (FWI), a nonlinear da ..."
Cited by 2 (2 self)
data-fitting procedure based on full data modeling using the wave equation. The size of the problem, the nonlinearity of the forward model, and the ill-posedness of the formulation all contribute to a pressing need for fast algorithms and novel regularization techniques to speed up and improve inversion
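As a point of orientation for the Gauss-Newton iteration named above, here is a minimal sketch on a toy one-parameter nonlinear least-squares problem; FWI replaces this residual with a wave-equation misfit and adds the sparsity-promoting regularization, both of which the toy omits:

```python
import numpy as np

# Gauss-Newton on a toy problem: fit y = exp(a * t) to noiseless data.
# Each step solves the linearized least-squares problem:
#   a <- a + (J^T J)^{-1} J^T r,  with residual r and Jacobian J.
def gauss_newton(t, y, a0, iters=20):
    a = a0
    for _ in range(iters):
        r = y - np.exp(a * t)       # residual of the forward model
        J = t * np.exp(a * t)       # Jacobian d(forward)/da (one column)
        a = a + (J @ r) / (J @ J)   # GN step for a single parameter
    return a

t = np.linspace(0.0, 1.0, 50)
y = np.exp(0.7 * t)                 # noiseless data, true a = 0.7
print(gauss_newton(t, y, a0=0.0))   # ≈ 0.7
```

For this zero-residual problem the Gauss-Newton step coincides with a Newton step, so local convergence is quadratic.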
Chapter Eight: A Constellation of Superlinear Algorithms
"... a real-valued cost is the archetypal superlinear optimization method. The Newton method, however, suffers from a lack of global convergence and the prohibitive numerical cost of solving the Newton equation (6.2) necessary for each iteration. The trust-region approach, presented in Chapter 7, provide ..."
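The two properties the chapter attributes to Newton's method, fast local convergence but no global guarantee, are easy to see on f(x) = arctan(x) (a standard textbook example, not taken from the chapter):

```python
import math

# Newton's method on f(x) = atan(x), whose only root is x = 0.
# The update x - f(x)/f'(x) with f'(x) = 1/(1 + x^2) is:
def newton(x, iters):
    for _ in range(iters):
        x = x - math.atan(x) * (1.0 + x * x)
    return x

print(newton(1.0, 6))              # started near the root: converges to ~0
print(abs(newton(2.0, 6)) > 1e3)   # started farther out: iterates blow up
```

Starting inside the basin of attraction the iterates contract superlinearly; starting at x = 2 they alternate sign with growing magnitude, which is exactly the failure mode that globalization strategies such as trust regions repair.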
Newton-Like Solver for Elastoplastic Problems with Hardening and its Local Super-Linear Convergence
"... We discuss a new solution algorithm for quasi-static elastoplastic problems with hardening. Such problems are described by a time-dependent variational inequality, where the displacement and the plastic strain fields serve as primal variables. After discretization in time, one variational inequality ..."
Cited by 3 (3 self)
derivative is proposed and implemented numerically. The local super-linear convergence of the Newton-like method in the discrete case is shown, and sufficient regularity assumptions are formulated to guarantee local super-linear convergence also in the continuous case.
Maximum-A-Posteriori Estimates in Linear Inverse Problems with Log-Concave Priors are Proper Bayes
"... A frequent matter of debate in Bayesian inversion is the question which of the two principal point estimators, the maximum a posteriori (MAP) or the conditional mean (CM) estimate, is to be preferred. As the MAP estimate corresponds to the solution given by variational regularization techn ..."
Listeria pathogenesis and molecular virulence determinants
2001
"... PATHOPHYSIOLOGY OF LISTERIA INFECTION ..."
Cited by 188 (23 self)
Inexact Newton Methods For Semismooth Equations With Applications To Variational Inequality Problems
"... We consider the local behaviour of inexact Newton methods for the solution of a semismooth system of equations. In particular, we give a complete characterization of the Q-superlinear and Q-quadratic convergence of inexact Newton methods. We then apply these results to a particular semismooth syst ..."
Cited by 19 (6 self)
system of equations arising from variational inequality problems, and present a globally and locally fast convergent algorithm for its solution. Key words: semismoothness, inexact Newton methods, variational inequality problems, global convergence, superlinear convergence, quadratic convergence.
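A hedged sketch of the inexact Newton idea on a toy smooth system (my own example, not the semismooth setting of the paper): the Newton system J s = -F is solved only up to a relative residual eta_k, the forcing term, and letting eta_k shrink with ||F|| is what yields Q-superlinear convergence in the classical theory:

```python
import numpy as np

def F(x):
    # A smooth 2D system with a root at (1, 1).
    return np.array([x[0] ** 2 - x[1], x[1] ** 2 - 1.0])

def J(x):
    # Its Jacobian.
    return np.array([[2 * x[0], -1.0], [0.0, 2 * x[1]]])

def inexact_newton(x, iters=20):
    for _ in range(iters):
        Fx, Jx = F(x), J(x)
        nF = np.linalg.norm(Fx)
        if nF < 1e-12:
            break
        eta = min(0.5, nF)          # forcing term shrinks near the root
        s = np.zeros(2)
        alpha = 1.0 / np.linalg.norm(Jx, 2) ** 2
        for _ in range(500):        # inner solver: gradient steps on the
            r = Jx @ s + Fx         # normal equations of J s = -F
            if np.linalg.norm(r) <= eta * nF:
                break               # stop once ||F + J s|| <= eta ||F||
            s = s - alpha * (Jx.T @ r)
        x = x + s
    return x

print(inexact_newton(np.array([2.0, 2.0])))   # ≈ [1, 1]
```

The inner loop is deliberately crude; any Krylov method with the same stopping rule would do, which is the practical appeal of inexact Newton schemes for large systems.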
Log-Concavity: a review
"... We review and formulate results concerning log-concavity and strong log-concavity in both discrete and continuous settings. We show how preservation of log-concavity and strong log-concavity on R under convolution follows from a fundamental monotonicity result of Efron (1969). We provid ..."
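The discrete preservation result mentioned above is easy to check numerically: a nonnegative sequence c is log-concave when c_k^2 >= c_{k-1} c_{k+1}, and convolving two such sequences again gives a log-concave sequence (an illustration, not a proof):

```python
import numpy as np

# Discrete log-concavity test: c_k^2 >= c_{k-1} * c_{k+1} for all interior k.
def is_log_concave(c):
    c = np.asarray(c, dtype=float)
    return bool(np.all(c[1:-1] ** 2 >= c[:-2] * c[2:]))

a = [1, 3, 4, 3, 1]     # log-concave: 9 >= 4, 16 >= 9, 9 >= 4
b = [1, 2, 2, 1]        # log-concave: 4 >= 2, 4 >= 2

print(is_log_concave(a), is_log_concave(b))   # True True
print(np.convolve(a, b))                      # [ 1  5 12 18 18 12  5  1]
print(is_log_concave(np.convolve(a, b)))      # True
```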
Sparse/Robust Estimation and Kalman Smoothing with Nonsmooth Log-Concave Densities: Modeling, Computation, and Theory
"... We introduce a new class of quadratic support (QS) functions, many of which already play a crucial role in a variety of applications, including machine learning, robust statistical inference, sparsity promotion, and inverse problems such as Kalman smoothing. Well-known examples of QS penalties inclu ..."
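A familiar member of the quadratic-support family mentioned above is the Huber penalty: quadratic for small residuals, linear in the tails, which is what buys robustness to outliers (a sketch; the paper's general sup-based QS representation is omitted here):

```python
import numpy as np

# Huber penalty with threshold kappa: 0.5 r^2 for |r| <= kappa,
# and kappa |r| - 0.5 kappa^2 beyond, so the two pieces join smoothly.
def huber(r, kappa=1.0):
    r = np.abs(r)
    return np.where(r <= kappa, 0.5 * r ** 2, kappa * r - 0.5 * kappa ** 2)

print(huber(0.5))   # 0.125  (quadratic region)
print(huber(3.0))   # 2.5    (linear region: 1 * 3 - 0.5)
```

Because the tails grow only linearly, a single gross outlier shifts a Huber-penalized estimate far less than it would a least-squares one.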