Results 1–10 of 13
A framework for the adaptive finite element solution of large inverse problems. I. Basic techniques, 2004
Cited by 23 (7 self)
Abstract. Since problems involving the estimation of distributed coefficients in partial differential equations are numerically very challenging, efficient methods are indispensable. In this paper, we introduce a framework for the efficient solution of such problems. It comprises the use of adaptive finite element schemes, solvers for the large linear systems arising from discretization, and methods to treat additional information in the form of inequality constraints on the parameter to be recovered. The methods developed are based on an all-at-once approach, in which the inverse problem is solved through a Lagrangian formulation. The main feature of the paper is the use of a continuous (function space) setting to formulate algorithms, in order to allow for discretizations that are adaptively refined as nonlinear iterations proceed. This entails that steps such as the description of a Newton step or a line search are first formulated on continuous functions and only then evaluated for discrete functions. At the same time, this approach avoids the dependence of finite-dimensional norms on the mesh size, making individual steps of the algorithm comparable even if they use differently refined meshes. Numerical examples demonstrate the applicability and efficiency of the method for problems with several million unknowns and more than 10,000 parameters.
Key words. Adaptive finite elements, inverse problems, Newton method on function spaces.
AMS subject classifications. 65N21, 65K10, 35R30, 49M15, 65N50
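The all-at-once, Lagrangian formulation described in the abstract can be written generically as follows. This is a sketch only: the state $u$, parameter $q$, multiplier $\lambda$, and the operators $A$, $C$ and regularizer $r$ are notation chosen here for illustration, not taken from the paper.

```latex
% Illustrative all-at-once formulation: state u, parameter q, multiplier \lambda.
\min_{u,\,q}\ \tfrac12\,\|Cu - z\|^2 \;+\; \beta\, r(q)
\qquad \text{subject to} \qquad A(q)\,u = f ,
% with the Lagrangian
L(u, q, \lambda) \;=\; \tfrac12\,\|Cu - z\|^2 \;+\; \beta\, r(q)
  \;+\; \langle \lambda,\ A(q)\,u - f \rangle .
% A Newton step solves the linearized KKT (stationarity) system
\nabla^2 L(u, q, \lambda)
  \begin{pmatrix} \delta u \\ \delta q \\ \delta \lambda \end{pmatrix}
  \;=\; -\,\nabla L(u, q, \lambda) .
```

Formulating the step and the line search for the functions $(u, q, \lambda)$ rather than for coefficient vectors is what allows the mesh to be refined between iterations while keeping the steps comparable.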
Stochastic algorithms for inverse problems involving PDEs and many measurements. Submitted, 2012
Cited by 5 (5 self)
Inverse problems involving systems of partial differential equations (PDEs) can be very expensive to solve numerically. This is especially so when many experiments, involving different combinations of sources and receivers, are employed in order to obtain reconstructions of acceptable quality. The mere evaluation of a misfit function (the distance between predicted and observed data) often requires hundreds or thousands of PDE solves. This article develops and assesses dimensionality reduction methods, both stochastic and deterministic, to reduce this computational burden. We present in detail our methods for solving such inverse problems for the famous DC resistivity and EIT problems. These methods involve incorporation of a priori information such as piecewise smoothness, bounds on the sought conductivity surface, or even a piecewise constant solution. We then assume that all experiments share the same set of receivers and concentrate on methods for reducing the number of combinations of experiments, called simultaneous sources, that are used at each stabilized Gauss-Newton iteration. Algorithms for controlling the number of such combined sources are proposed and justified. Evaluating the misfit approximately, except for the final verification for terminating the process, always involves random sampling. Methods for selecting the combined simultaneous sources, involving either random sampling or truncated SVD, are proposed and compared. Highly efficient variants of the resulting algorithms are identified.
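The simultaneous-sources idea can be illustrated in a toy setting, with a dense linear solve standing in for the PDE solves; all sizes and names below are invented for illustration, not taken from the paper. Random Rademacher combinations of the sources give an unbiased estimate of the misfit at a fraction of the solves:

```python
import numpy as np

rng = np.random.default_rng(0)
n, s = 8, 50                          # state size, number of sources (illustrative)
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # stand-in forward operator
Q = rng.standard_normal((n, s))       # one source term per experiment
D = np.linalg.solve(A, Q) + 0.01 * rng.standard_normal((n, s))  # synthetic data

def full_misfit(A, Q, D):
    """Exact misfit: one solve per source (s solves)."""
    R = np.linalg.solve(A, Q) - D
    return np.sum(R**2)

def sampled_misfit(A, Q, D, k, rng):
    """Unbiased estimate using k random simultaneous sources:
    each combined source is Q @ w with Rademacher weights w, so only
    k solves are needed instead of s."""
    W = rng.choice([-1.0, 1.0], size=(Q.shape[1], k))
    R = np.linalg.solve(A, Q @ W) - D @ W
    return np.sum(R**2) / k

exact = full_misfit(A, Q, D)
est = np.mean([sampled_misfit(A, Q, D, 5, rng) for _ in range(2000)])
```

Since E[w wᵀ] = I for Rademacher w, the expectation of the sampled misfit equals the exact one; averaging here is only to make the agreement visible.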
The lost honour of ℓ2-based regularization, 2012
Cited by 1 (1 self)
In the past two decades, regularization methods based on the ℓ1 norm, including sparse wavelet representations and total variation, have become immensely popular. So much so that we were led to consider the question whether ℓ1-based techniques ought to altogether replace the simpler, faster and better known ℓ2-based alternatives as the default approach to regularization. The occasionally tremendous advances of ℓ1-based techniques are not in doubt. However, such techniques also have their limitations. This article explores their advantages and disadvantages compared to ℓ2-based techniques using several practical case studies. Taking into account the considerable added hardship in calculating solutions of the resulting computational problems, ℓ1-based techniques must offer substantial advantages to be worthwhile. In this light our results suggest that in many applications, though not all, ℓ2-based recovery may still be preferred.
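The qualitative contrast between the two penalties shows up already in the simplest setting, denoising with closed-form solutions (a sketch; the signal and parameter values below are invented for illustration): the ℓ2 penalty shrinks every entry uniformly, while the ℓ1 penalty (soft thresholding) sets small entries exactly to zero.

```python
import numpy as np

def l2_denoise(y, beta):
    """min_x 0.5*||x - y||^2 + 0.5*beta*||x||_2^2  ->  uniform shrinkage."""
    return y / (1.0 + beta)

def l1_denoise(y, beta):
    """min_x 0.5*||x - y||^2 + beta*||x||_1  ->  soft thresholding."""
    return np.sign(y) * np.maximum(np.abs(y) - beta, 0.0)

rng = np.random.default_rng(1)
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [3.0, -2.5, 4.0]            # sparse spike signal
y = x_true + 0.1 * rng.standard_normal(100)        # noisy observation

x_l2 = l2_denoise(y, 0.5)   # nothing becomes exactly zero
x_l1 = l1_denoise(y, 0.5)   # exactly sparse: only the three spikes survive
```

For a genuinely sparse target the ℓ1 estimate is both sparse and more accurate here; for smooth targets the comparison can easily go the other way, which is the article's point.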
Data completion and stochastic algorithms for PDE inversion problems with many measurements, 2013
Cited by 1 (1 self)
Inverse problems involving systems of partial differential equations (PDEs) with many measurements or experiments can be very expensive to solve numerically. In a recent paper we examined dimensionality reduction methods, both stochastic and deterministic, to reduce this computational burden, assuming that all experiments share the same set of receivers. In the present article we consider the more general and practically important case where receivers are not shared across experiments. We propose a data completion approach to alleviate this problem. This is done by means of an approximation using a gradient or Laplacian regularization, extending existing data for each experiment to the union of all receiver locations. Results using the method of simultaneous sources with the completed data are then compared to those obtained by a more general but slower random subset method which requires no modifications.
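A minimal sketch of such data completion for a single experiment, with a 1-D receiver line and a smooth synthetic field standing in for real data (all values invented for illustration): data observed at a subset of receivers are extended to all receiver locations by a Laplacian-regularized least squares fit.

```python
import numpy as np

n = 50                                   # receiver positions on a 1-D line
x = np.linspace(0.0, 1.0, n)
d_true = np.sin(2 * np.pi * x)           # stand-in for the smooth field at receivers

rng = np.random.default_rng(2)
interior = rng.choice(np.arange(1, n - 1), size=18, replace=False)
obs = np.sort(np.concatenate(([0, n - 1], interior)))   # receivers active here
d_obs = d_true[obs]

# Selection matrix S and second-difference (discrete Laplacian) regularizer L
S = np.zeros((len(obs), n))
S[np.arange(len(obs)), obs] = 1.0
L = np.zeros((n - 2, n))
for i in range(n - 2):
    L[i, i:i + 3] = [1.0, -2.0, 1.0]

# min_d ||S d - d_obs||^2 + alpha * ||L d||^2, via the normal equations
alpha = 1e-3
d_full = np.linalg.solve(S.T @ S + alpha * L.T @ L, S.T @ d_obs)
```

The completed vector `d_full` agrees with the observations at the active receivers and fills the gaps smoothly; the completed data can then feed a simultaneous-sources scheme that assumes a common receiver set.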
Assessing stochastic algorithms for large-scale nonlinear least squares problems using extremal probabilities of linear combinations of gamma random variables
Cited by 1 (1 self)
Abstract. This article considers stochastic algorithms for efficiently solving a class of large-scale nonlinear least squares (NLS) problems which frequently arise in applications. We propose eight variants of a practical randomized algorithm where the uncertainties in the major stochastic steps are quantified. Such stochastic steps involve approximating the NLS objective function using Monte Carlo methods, and this is equivalent to the estimation of the trace of corresponding symmetric positive semi-definite (SPSD) matrices. For the latter, we prove tight necessary and sufficient conditions on the sample size (which translates to cost) to satisfy the prescribed probabilistic accuracy. We show that these conditions are practically computable and yield small sample sizes. They are then incorporated in our stochastic algorithm to quantify the uncertainty in each randomized step. The bounds we use are applications of more general results regarding extremal tail probabilities of linear combinations of gamma-distributed random variables. We derive and prove new results concerning the maximal and minimal tail probabilities of such linear combinations, which can be considered independently of the rest of this paper.
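The connection between trace estimation and gamma variables is easy to see in a small experiment (a sketch with invented sizes; the paper's algorithm variants and sample-size bounds are not reproduced here). For a Gaussian probe vector z and SPSD matrix A, the quadratic form zᵀAz is a linear combination of chi-squared (gamma) variables weighted by A's eigenvalues, and its mean is trace(A):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
B = rng.standard_normal((n, n))
A = B @ B.T                              # SPSD matrix whose trace we estimate

def hutchinson_trace(A, k, rng):
    """Monte Carlo trace estimate with k Gaussian probe vectors.
    For z ~ N(0, I), E[z^T A z] = trace(A); each z^T A z is a linear
    combination of gamma (chi-squared) variables in A's eigenvalues,
    which is what makes tail bounds on such combinations relevant."""
    Z = rng.standard_normal((A.shape[0], k))
    return np.sum(Z * (A @ Z)) / k

exact = np.trace(A)
est = hutchinson_trace(A, 5000, rng)
```

Tail bounds on the estimator then translate directly into the smallest sample size k guaranteeing a prescribed relative accuracy with prescribed probability.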
Recovering a thin dipping conductor with 3D electromagnetic inversion over the Caber deposit
SUMMARY Airborne time-domain electromagnetic (EM) data were collected in 2012 over the Caber volcanogenic massive sulfide (VMS) deposit in western Quebec. We inverted the data in three dimensions (3D) to produce a conductivity model that helped image the thin dipping conductor and surrounding geology. The 3D inversion method consisted of a two-step approach. The first step employed a parametric inversion to recover a best-fitting shape of the dipping conductor using only data exhibiting an anomalous response over the deposit. With the parametric result as an initial and reference model, the second step used a conventional 3D EM inversion with data locations over the entire survey area. The second stage allowed for fine-tuning of the shape and conductivity of the central dipping anomaly, while filling in features, such as overburden, in the remaining areas of the domain. The shape of the central conductive anomaly in the 3D inversion compared well with the known outline of the Caber deposit, based on geologic knowledge from past drilling. The overburden layer in the inversion model also agreed with previous geologic mapping. Preliminary results from this two-stage process show that it is possible to recover a thin, dipping conductor with sharp boundaries through 3D EM inversion, which has been a difficult challenge in recent years.
FaIMS: A fast algorithm for the inverse medium problem with multiple frequencies and multiple sources for the scalar Helmholtz equation
Iterative Reconstruction of SPECT Images using Adaptive Multi-Level Refinement
Abstract. We present a novel method for iterative reconstruction of high-resolution images. Our method is based on the observation that constant regions in an image can be represented at much lower resolution than regions with fine details. Therefore, we combine adaptive refinement based on quadtrees with iterative reconstruction to reduce the computational costs. In our experiments we found a speed-up factor of approximately two compared to a standard multi-level method.
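The quadtree idea behind this can be sketched in a few lines (illustrative code, not the authors' implementation): a block whose intensity range is within a tolerance is kept whole, so nearly constant regions stay at coarse resolution, while detailed regions are subdivided into four quadrants.

```python
import numpy as np

def quadtree_blocks(img, tol, x=0, y=0, size=None):
    """Recursively partition a square image into blocks. A block is kept
    whole when its intensity range is within tol (nearly constant);
    otherwise it is split into four quadrants. Returns (x, y, size) tuples."""
    if size is None:
        size = img.shape[0]
    block = img[y:y + size, x:x + size]
    if size == 1 or block.max() - block.min() <= tol:
        return [(x, y, size)]
    h = size // 2
    return (quadtree_blocks(img, tol, x, y, h)
            + quadtree_blocks(img, tol, x + h, y, h)
            + quadtree_blocks(img, tol, x, y + h, h)
            + quadtree_blocks(img, tol, x + h, y + h, h))

# A 64x64 image that is constant except for one small bright square:
img = np.zeros((64, 64))
img[40:48, 8:16] = 1.0
blocks = quadtree_blocks(img, tol=0.1)
# The partition has far fewer blocks than the image has pixels.
```

In a reconstruction loop, the unknowns live on the leaf blocks rather than on individual pixels, which is the source of the reported speed-up.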