Adaptive finite volume method for distributed non-smooth parameter identification (2007)

by E Haber, S Heldmann, U Ascher
Venue: Inverse Problems

Results 1 - 10 of 13

A framework for the adaptive finite element solution of large inverse problems. I. Basic techniques

by Wolfgang Bangerth, 2004
"... Abstract. Since problems involving the estimation of distributed coefficients in partial differential equations are numerically very challenging, efficient methods are indispensable. In this paper, we will introduce a framework for the efficient solution of such problems. This comprises the use of a ..."
Abstract - Cited by 23 (7 self) - Add to MetaCart
Abstract. Since problems involving the estimation of distributed coefficients in partial differential equations are numerically very challenging, efficient methods are indispensable. In this paper, we will introduce a framework for the efficient solution of such problems. This comprises the use of adaptive finite element schemes, solvers for the large linear systems arising from discretization, and methods to treat additional information in the form of inequality constraints on the parameter to be recovered. The methods to be developed will be based on an all-at-once approach, in which the inverse problem is solved through a Lagrangian formulation. The main feature of the paper is the use of a continuous (function space) setting to formulate algorithms, in order to allow for discretizations that are adaptively refined as nonlinear iterations proceed. This entails that steps such as the description of a Newton step or a line search are first formulated on continuous functions and only then evaluated for discrete functions. In addition, this approach avoids the dependence of finite dimensional norms on the mesh size, making individual steps of the algorithm comparable even if they use differently refined meshes. Numerical examples will demonstrate the applicability and efficiency of the method for problems with several million unknowns and more than 10,000 parameters. Key words. Adaptive finite elements, inverse problems, Newton method on function spaces. AMS subject classifications. 65N21, 65K10, 35R30, 49M15, 65N50
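As a concrete rendering of the all-at-once Lagrangian formulation the abstract describes (a generic sketch; u, m, λ, A and J are placeholder symbols, not necessarily the paper's own notation): the estimation problem

    \min_{u,m}\; J(u,m) \quad \text{subject to} \quad A(m)\,u = f

is treated through the Lagrangian

    \mathcal{L}(u,m,\lambda) = J(u,m) + \langle \lambda,\; A(m)\,u - f \rangle ,

and Newton steps and line searches are formulated on the stationarity conditions \nabla\mathcal{L} = 0 in function space, before any particular mesh is introduced.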

Citation Context

... efficient methods such as adaptive finite element techniques have not yet found widespread application to inverse problems and are only slowly adopted in the solution of PDE constrained optimization [10, 11, 13, 21, 22, 30, 31, 32, 35, 39, 46, 50, 51]. Rather, in most cases, the continuous inverse problem is first discretized on a predetermined mesh, and the resulting nonlinear problem is then solved using well-understood finite dimensional methods...

Stochastic algorithms for inverse problems involving PDEs and many measurements. Submitted

by Farbod Roosta-Khorasani, Kees Van Den Doel, Uri Ascher, 2012
"... Inverse problems involving systems of partial differential equations (PDEs) can be very expensive to solve numerically. This is so especially when many experiments, involving different combinations of sources and receivers, are employed in order to obtain reconstructions of acceptable quality. The m ..."
Abstract - Cited by 5 (5 self) - Add to MetaCart
Inverse problems involving systems of partial differential equations (PDEs) can be very expensive to solve numerically. This is especially so when many experiments, involving different combinations of sources and receivers, are employed in order to obtain reconstructions of acceptable quality. The mere evaluation of a misfit function (the distance between predicted and observed data) often requires hundreds or thousands of PDE solves. This article develops and assesses dimensionality reduction methods, both stochastic and deterministic, to reduce this computational burden. We present in detail our methods for solving such inverse problems for the famous DC resistivity and EIT problems. These methods involve incorporation of a priori information such as piecewise smoothness, bounds on the sought conductivity surface, or even a piecewise constant solution. We then assume that all experiments share the same set of receivers and concentrate on methods for reducing the number of combinations of experiments, called simultaneous sources, that are used at each stabilized Gauss-Newton iteration. Algorithms for controlling the number of such combined sources are proposed and justified. Evaluating the misfit approximately, except for the final verification for terminating the process, always involves random sampling. Methods for selecting the combined simultaneous sources, involving either random sampling or truncated SVD, are proposed and compared. Highly efficient variants of the resulting algorithms are identified.
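A minimal sketch of the simultaneous-sources idea described above, assuming for brevity a linear(ized) forward map standing in for P L(m)^{-1}; the names (forward, Q, D, n_combined) and the Rademacher weighting are illustrative assumptions, not the authors' code:

    import numpy as np

    def misfit_full(forward, Q, D):
        """Exact misfit: one PDE solve (one column of Q) per experiment."""
        return sum(np.linalg.norm(forward(Q[:, i]) - D[:, i])**2
                   for i in range(Q.shape[1]))

    def misfit_simultaneous(forward, Q, D, n_combined, rng):
        """Approximate misfit with n_combined random source combinations.

        W has i.i.d. Rademacher (+/-1) entries, so E[W W^T] = I and the
        estimator is unbiased: E[misfit_simultaneous] = misfit_full.
        """
        s = Q.shape[1]
        W = rng.choice([-1.0, 1.0], size=(s, n_combined))
        QW, DW = Q @ W, D @ W        # combined sources / combined data
        return sum(np.linalg.norm(forward(QW[:, j]) - DW[:, j])**2
                   for j in range(n_combined)) / n_combined

    # Toy check with a random matrix in place of the PDE solve
    rng = np.random.default_rng(0)
    F = rng.standard_normal((40, 30))
    forward = lambda q: F @ q
    Q = rng.standard_normal((30, 200))          # 200 experiments
    D = forward(Q) + 0.01 * rng.standard_normal((40, 200))
    print(misfit_full(forward, Q, D), misfit_simultaneous(forward, Q, D, 20, rng))

With 20 combined sources instead of 200 experiments, the cost per misfit evaluation drops tenfold while the estimate stays unbiased.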

Citation Context

...practical situations. These include electromagnetic data inversion in mining exploration (e.g., [25, 13, 18, 27]), seismic data inversion in oil exploration (e.g., [15, 22, 30]), DC resistivity (e.g., [32, 29, 20, 19, 11]) and EIT (e.g., [6, 8]; see more specifically Example 5.5 in [12]). The last of these... [Footnotes: (1) Throughout this article we use the ℓ2 vector norm unless otherwise specified. (2) See also the Wikipedia descr...]

The lost honour of ℓ2-based regularization

by Kees Van Den Doel, Uri Ascher, Eldad Haber, 2012
"... In the past two decades, regularization methods based on the ℓ1 norm, including sparse wavelet representations and total variation, have become immensely popular. So much so, that we were led to consider the question whether ℓ1-based techniques ought to altogether replace the simpler, faster and bet ..."
Abstract - Cited by 1 (1 self) - Add to MetaCart
In the past two decades, regularization methods based on the ℓ1 norm, including sparse wavelet representations and total variation, have become immensely popular. So much so that we were led to consider the question of whether ℓ1-based techniques ought to altogether replace the simpler, faster and better known ℓ2-based alternatives as the default approach to regularization techniques. The occasionally tremendous advances of ℓ1-based techniques are not in doubt. However, such techniques also have their limitations. This article explores advantages and disadvantages compared to ℓ2-based techniques using several practical case studies. Taking into account the considerable added hardship in calculating solutions of the resulting computational problems, ℓ1-based techniques must offer substantial advantages to be worthwhile. In this light our results suggest that in many applications, though not all, ℓ2-based recovery may still be preferred.
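For concreteness, the two regularization families being compared can be written in generic Tikhonov form (the symbols F, d, m, β here are illustrative, not the paper's):

    \min_m \; \|F(m) - d\|_2^2 + \beta \int_\Omega |\nabla m|^2 \, dx      (ℓ2, smoothing)
    \min_m \; \|F(m) - d\|_2^2 + \beta \int_\Omega |\nabla m| \, dx        (ℓ1, total variation)

where F is the forward map, d the data and β > 0 the regularization parameter; the ℓ2 penalty yields smooth reconstructions and easier optimization, while the ℓ1 penalty tolerates discontinuities at extra computational cost.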

Citation Context

...e case when using p = 2). The rather fundamental importance of the above two reasons for using p = 1 is not in doubt. Among many other researchers we have ourselves contributed to this volume of work [1, 30, 36]. We have found that for well-conditioned problems with sufficient high-quality data, ℓ1-based regularization can, in many cases, "deliver on its promise". However, for problems with poor data, or i...

Data completion and stochastic algorithms for PDE inversion problems with many measurements

by Farbod Roosta-Khorasani, Kees Van Den Doel, Uri Ascher, 2013
"... Inverse problems involving systems of partial differential equations (PDEs) with many measurements or experiments can be very expensive to solve numerically. In a recent paper we examined dimensionality reduction methods, both stochastic and deterministic, to reduce this computational burden, assumi ..."
Abstract - Cited by 1 (1 self) - Add to MetaCart
Inverse problems involving systems of partial differential equations (PDEs) with many measurements or experiments can be very expensive to solve numerically. In a recent paper we examined dimensionality reduction methods, both stochastic and deterministic, to reduce this computational burden, assuming that all experiments share the same set of receivers. In the present article we consider the more general and practically important case where receivers are not shared across experiments. We propose a data completion approach to alleviate this problem. This is done by means of an approximation using a gradient or Laplacian regularization, extending existing data for each experiment to the union of all receiver locations. Results using the method of simultaneous sources with the completed data are then compared to those obtained by a more general but slower random subset method which requires no modifications.
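A sketch of the completion step described above, in generic notation (S_i, d_i, v and λ are illustrative symbols, not necessarily the authors'): for each experiment i, the data d_i measured on its own receivers are extended to the union of all receiver locations by solving

    \min_v \; \|S_i v - d_i\|_2^2 + \lambda \|\nabla_h v\|_2^2     (or with \lambda \|\Delta_h v\|_2^2)

where S_i selects experiment i's receivers from the union and the discrete gradient or Laplacian term fills in the unmeasured locations smoothly; the completed data then admit the simultaneous-sources reduction, which requires all experiments to share one receiver set.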

Citation Context

...e overall computational costs are certainly possible. These include adapting the number of inner PCG iterations in the modified GN outer iteration (see [9]) and adaptive gridding for m(x) (see, e.g., [21] and references therein). Such techniques are essentially independent of the focus here. At the same time, they can be incorporated or fused together with our stochastic algorithms, further improving ...

Assessing stochastic algorithms for large scale nonlinear least squares problems using extremal probabilities of linear combinations of gamma random variables

by Farbod Roosta-Khorasani, Uri M. Ascher
"... Abstract. This article considers stochastic algorithms for efficiently solving a class of large scale non-linear least squares (NLS) problems which frequently arise in applications. We propose eight variants of a practical randomized algorithm where the uncertainties in the major stochastic steps ar ..."
Abstract - Cited by 1 (1 self) - Add to MetaCart
Abstract. This article considers stochastic algorithms for efficiently solving a class of large scale non-linear least squares (NLS) problems which frequently arise in applications. We propose eight variants of a practical randomized algorithm where the uncertainties in the major stochastic steps are quantified. Such stochastic steps involve approximating the NLS objective function using Monte-Carlo methods, and this is equivalent to the estimation of the trace of corresponding symmetric positive semi-definite (SPSD) matrices. For the latter, we prove tight necessary and sufficient conditions on the sample size (which translates to cost) to satisfy the prescribed probabilistic accuracy. We show that these conditions are practically computable and yield small sample sizes. They are then incorporated in our stochastic algorithm to quantify the uncertainty in each randomized step. The bounds we use are applications of more general results regarding extremal tail probabilities of linear combinations of gamma distributed random variables. We derive and prove new results concerning the maximal and minimal tail probabilities of such linear combinations, which can be considered independently of the rest of this paper.
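The Monte Carlo trace estimation at the core of these stochastic steps can be sketched in a few lines; this is the standard Hutchinson estimator for an implicit SPSD matrix, not the authors' specific variant or sample-size rule:

    import numpy as np

    def hutchinson_trace(matvec, n, num_samples, rng):
        """Estimate trace(A) for an implicit n-by-n SPSD matrix A.

        Uses E[z^T A z] = trace(A) for z with i.i.d. Rademacher
        (+/-1) entries; only matrix-vector products with A are
        needed, never A itself.
        """
        total = 0.0
        for _ in range(num_samples):
            z = rng.choice([-1.0, 1.0], size=n)
            total += z @ matvec(z)
        return total / num_samples

    # Toy check: A = B^T B is SPSD by construction
    rng = np.random.default_rng(1)
    B = rng.standard_normal((50, 30))
    A = B.T @ B
    est = hutchinson_trace(lambda z: A @ z, 30, 100, rng)
    print(est, np.trace(A))   # the two values should be close

The sample-size bounds the abstract refers to answer precisely how large num_samples must be for a prescribed probabilistic accuracy.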

Citation Context

...ersion in oil exploration (e.g., [13, 20, 24]), diffuse optical tomography (DOT) (e.g., [3, 5]), quantitative photo-acoustic tomography (QPAT) (e.g., [14, 33]), direct current (DC) resistivity (e.g., [28, 23, 18, 17, 9]), and electrical impedance tomography (EIT) (e.g., [6, 7, 10]). If the locations where data are measured do not change from one experiment to another, i.e., P = P_i for all i, we get f(m, q_i) = P L(m)^{-1} q_i, (4...

Case History: Cooperative constrained inversion of multiple ...

by Michael S. McMillan
"... constrained inversion of multiple ..."
Abstract - Add to MetaCart
constrained inversion of multiple

Recovering a thin dipping conductor with 3D electromagnetic inversion over the Caber deposit

by Michael S. McMillan, Christoph Schwarzbach, Douglas W. Oldenburg, Eldad Haber
"... SUMMARY Airborne time-domain electromagnetic (EM) data were collected in 2012 over the Caber volcanogenic massive sulfide (VMS) deposit in western Quebec. We inverted the data in three-dimensions (3D) to produce a conductivity inversion model that helped image the thin dipping conductor and surroun ..."
Abstract - Add to MetaCart
Summary. Airborne time-domain electromagnetic (EM) data were collected in 2012 over the Caber volcanogenic massive sulfide (VMS) deposit in western Quebec. We inverted the data in three dimensions (3D) to produce a conductivity inversion model that helped image the thin dipping conductor and surrounding geology. The 3D inversion method consisted of a two-step approach. The first step employed a parametric inversion to recover a best-fitting shape of the dipping conductor using only data exhibiting an anomalous response over the deposit. With the parametric result as an initial and reference model, the second step used a conventional 3D EM inversion with data locations over the entire survey area. The second stage allowed for fine tuning of the shape and conductivity of the central dipping anomaly, while filling in features, such as overburden, in the remaining areas of the domain. The shape of the central conductive anomaly in the 3D inversion compared well with the known outline of the Caber deposit, based on geologic knowledge from past drilling. The overburden layer in the inversion model also agreed with previous geologic mapping. Preliminary results from this two-stage process show that it is possible to recover a thin, dipping conductor with sharp boundaries through 3D EM inversion, which has been a difficult challenge in recent years.

Citation Context

...inverted in 3D with TDOcTreeInv from the University of British Columbia (Haber and Schwarzbach, submitted in 2014, Inverse Problems). This code, hereafter called a conventional EM inversion, is a regularized algorithm using Gauss-Newton based optimization. It solves the quasi-static Maxwell equations

    ∇×E + µ H_t = 0                    (1)
    ∇×H − σE = s                       (2)

subject to boundary and initial conditions

    n×E = 0                            (3)
    E(x, y, z, t = 0) = E_0            (4)
    H(x, y, z, t = 0) = H_0            (5)

in space and time using a finite volume discretization on OcTree meshes (Haber et al., 2007). Here, E = electric field vector, H = magnetic field vector, µ = magnetic permeability, σ = electrical conductivity, s = source vector, n = normal vector, x, y, z = spatial coordinates and t = time. Many transmitters are encountered in airborne EM surveys, and they are efficiently handled through direct solvers (Oldenburg et al., 2013; Amestoy et al., 2001; Schenk et al., 2001), which compute a Cholesky decomposition of the forward modeling matrix. Due to the difficult nature of a thin conductive target beneath conductive overburden, initial 3D inversion models had trouble recovering an appropr...

unknown title

by unknown authors
"... FaIMS: A fast algorithm for the inverse medium problem with multiple frequencies and multiple sources for the scalar Helmholtz equation ..."
Abstract - Add to MetaCart
FaIMS: A fast algorithm for the inverse medium problem with multiple frequencies and multiple sources for the scalar Helmholtz equation

Citation Context

...ould be superior to a preconditioned Conjugate Gradients method for the normal equation. Also, we are considering neither sparse reconstruction ideas for η [7, 17] nor adaptive reconstruction schemes [4, 15]. We assume that the location of the detectors is independent of the source location and frequency. Finally, we commit an "inverse crime", since we use the same forward solver to both generate the dat...

Iterative Reconstruction of SPECT Images using Adaptive Multi-Level Refinement

by Hanno Schumacher, Stefan Heldmann, Eldad Haber, Bernd Fischer
"... Abstract. We present a novel method for iterative reconstruction of high resolution images. Our method is based on the observation that constant regions in an image can be represented at much lower resolution than region with fine details. Therefore, we combine adaptive refinement based on quadtrees ..."
Abstract - Add to MetaCart
Abstract. We present a novel method for iterative reconstruction of high resolution images. Our method is based on the observation that constant regions in an image can be represented at much lower resolution than regions with fine details. Therefore, we combine adaptive refinement based on quadtrees with iterative reconstruction to reduce the computational costs. In our experiments we found a speed-up factor of approximately two compared to a standard multi-level method.
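A minimal sketch of quadtree-based adaptive refinement in the spirit the abstract describes (the variance threshold and the stopping rule are illustrative choices, not the authors' algorithm):

    import numpy as np

    def build_quadtree(img, x0, y0, size, tol, min_size, cells):
        """Recursively subdivide a square cell of `img` until the pixel
        values inside are nearly constant (std <= tol) or the cell
        reaches min_size; collect leaf cells in `cells`."""
        block = img[y0:y0 + size, x0:x0 + size]
        if size <= min_size or block.std() <= tol:
            cells.append((x0, y0, size, float(block.mean())))
            return
        h = size // 2
        for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
            build_quadtree(img, x0 + dx, y0 + dy, h, tol, min_size, cells)

    # Toy image: constant background with a small bright square
    img = np.zeros((64, 64))
    img[20:28, 36:44] = 1.0
    cells = []
    build_quadtree(img, 0, 0, 64, tol=1e-3, min_size=2, cells=cells)
    # Flat regions stay coarse: far fewer unknowns than 64*64 pixels
    print(len(cells), "leaf cells vs", 64 * 64, "pixels")

Reconstructing on the leaf cells instead of the full pixel grid is what yields the reported speed-up: constant regions collapse into a handful of large cells.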

Citation Context

...own and have already been used in many other fields, too. Recent examples particularly incorporating quad-/octrees include image registration [2, 3, 4], computer graphics [5], or inverse problems [6]. A similar approach for image reconstruction using a fixed non-uniform mesh generated from external knowledge is presented in [7]. However, to the best of our knowledge adaptive multi-level refinement has n...

Adaptive and stochastic algorithms for EIT and DC ...

by Kees Van Den Doel, Uri M. Ascher
"... Adaptive and stochastic algorithms for EIT and DC ..."
Abstract - Add to MetaCart
Adaptive and stochastic algorithms for EIT and DC
(Show Context)

Citation Context

...larization also smears out discontinuities and is not useful when such solution features are present in m(x) and are important to reconstruct. A popular alternative is some variant of total variation [32, 3, 19] (TV), preferably using a Huber switching function with an adaptive switching parameter, which penalizes discontinuities more agreeably (essentially, it avoids attempts to integrate the square of a δ-...
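The Huber switching function mentioned here has the standard textbook form (γ is the switching parameter; this is the generic definition, not necessarily the exact variant used in the cited work):

    \rho_\gamma(\tau) =
    \begin{cases}
      \tau^2 / (2\gamma), & |\tau| \le \gamma \\
      |\tau| - \gamma/2,  & |\tau| > \gamma
    \end{cases}
    \qquad
    R(m) = \int_\Omega \rho_\gamma(|\nabla m|)\, dx .

Near-flat gradients are penalized quadratically (like ℓ2) while jumps are penalized linearly (like TV), which is how the penalty avoids integrating the square of a δ-function at a discontinuity.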
