Results 1-10 of 48
Dynamical systems method (DSM) and nonlinear problems
 IN THE BOOK: SPECTRAL THEORY AND NONLINEAR ANALYSIS, SINGAPORE, WORLD SCIENTIFIC PUBLISHERS, 2005
Cited by 27 (24 self)
The dynamical systems method (DSM) for solving operator equations, especially nonlinear and ill-posed ones, is developed in this paper. Consider an operator equation F(u) = 0 in a Hilbert space H and assume that this equation is solvable. Let us call the problem of solving this equation ill-posed if the operator F′(u) is not boundedly invertible, and well-posed otherwise. The DSM for solving linear and nonlinear ill-posed problems in H consists of the construction of a dynamical system, that is, a Cauchy problem, which has the following properties: (1) it has a global solution, (2) this solution tends to a limit as time tends to infinity, (3) the limit solves the original linear or nonlinear problem. The DSM is justified for: (a) arbitrary solvable linear equations with a bounded operator, (b) well-posed solvable nonlinear equations with a twice Fréchet differentiable operator F, (c) ill-posed solvable nonlinear equations with monotone operators, (d) ill-posed solvable nonlinear equations with non-monotone operators from a wide class of operators, and (e) ill-posed solvable nonlinear equations with operators F such that A := F′(u) satisfies a spectral assumption of the type ‖(A + sI)⁻¹‖ ≤ c/s, where c > 0 is a constant, s ∈ (0, s₀), and s₀ > 0 is a fixed number, arbitrarily small; c does not depend on s and u.
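The construction above can be illustrated on a toy scalar problem. The sketch below is not from the paper: the equation F(u) = u³ + u − 1 = 0 and all names are illustrative choices. Since F′(u) = 3u² + 1 is boundedly invertible, this is the well-posed case (b), and the Cauchy problem du/dt = −[F′(u)]⁻¹F(u) is integrated by forward Euler:

```python
def F(u):
    """Toy monotone map: F(u) = u^3 + u - 1."""
    return u**3 + u - 1.0

def Fprime(u):
    """F'(u) = 3u^2 + 1 > 0, so it is boundedly invertible."""
    return 3.0 * u**2 + 1.0

def dsm_solve(u0, dt=0.1, steps=500):
    """Forward-Euler integration of du/dt = -F(u)/F'(u);
    the limit as t -> infinity solves F(u) = 0."""
    u = u0
    for _ in range(steps):
        u -= dt * F(u) / Fprime(u)   # one damped Newton step
    return u

root = dsm_solve(u0=0.0)   # tends to the unique real root of u^3 + u = 1
```

Each Euler step is a damped Newton step, so the discrete trajectory inherits the convergence of the continuous flow.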
Stable numerical differentiation: when is it possible?
 Jour. Korean SIAM
Cited by 15 (9 self)
Abstract. Two principally different statements of the problem of stable numerical differentiation are considered. It is analyzed when it is possible in principle to get a stable approximation to the derivative f′ given noisy data fδ. Computational aspects of the problem are discussed and illustrated by examples. These examples show the practical value of the new understanding of the problem of stable differentiation.
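A standard way to see the difficulty is the step-size trade-off for a difference quotient: the total error behaves like δ/h + O(h²), which is minimized at h ∼ δ^{1/3}, not at h → 0. A minimal sketch (illustrative only: f = sin, the noise level δ, and the worst-case ±δ perturbation are assumptions, not the paper's examples):

```python
import numpy as np

delta = 1e-4                 # assumed noise level in the data
x0 = 1.0
true = np.cos(x0)            # exact derivative of sin at x0

def central_diff(h):
    # worst-case noise of size delta on each of the two samples
    fp = np.sin(x0 + h) + delta
    fm = np.sin(x0 - h) - delta
    return (fp - fm) / (2.0 * h)

h_naive = 1e-8                            # noise term delta/h dominates
h_stable = (3.0 * delta) ** (1.0 / 3.0)   # balances h^2/6 truncation vs delta/h

err_naive = abs(central_diff(h_naive) - true)
err_stable = abs(central_diff(h_stable) - true)
```

With these numbers the naive step amplifies the noise by a factor of about 10⁸, while the balanced step keeps the error near δ^{2/3}.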
Reconstructing singularities of a function from its Radon transform
 MATH. COMPUT. MODELLING, 1993
Cited by 12 (5 self)
We study the relation between the singularities of a function f and its Radon transform R(f). We prove that their singular loci are related via the Legendre transform. Geometric properties of the singular locus of R(f) are studied. The problem of computing the Legendre transform from approximately known data is discussed.
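For the Legendre transform itself, a discrete sup already gives a usable approximation when the data are exact; the sketch below (purely illustrative, not the paper's algorithm) checks this on f(x) = x²/2, whose Legendre transform is f*(p) = sup_x (px − f(x)) = p²/2:

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 6001)     # grid with step 0.001
f = 0.5 * x**2                       # f(x) = x^2/2, so f*(p) = p^2/2

def legendre_transform(p):
    """Discrete Legendre transform: f*(p) = sup_x (p*x - f(x))."""
    return np.max(p * x - f)

p_grid = np.linspace(-1.0, 1.0, 41)
max_err = max(abs(legendre_transform(p) - 0.5 * p**2) for p in p_grid)
```

The abstract's point is precisely that this computation becomes delicate once f is known only approximately.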
Inequalities for the derivatives
 MATHEM. INEQUALITIES AND APPLICATIONS, 3, N1, (2000), PP. 129-132
Cited by 11 (4 self)
The following question is studied and answered: is it possible to stably approximate f′ if one knows: 1) fδ ∈ L∞(R) such that ‖f − fδ‖ < δ, and 2) f ∈ C∞(R), ‖f‖ + ‖f′‖ ≤ c? Here ‖f‖ := sup_{x∈R} |f(x)| and c > 0 is a given constant. By a stable approximation one means ‖Lδfδ − f′‖ ≤ η(δ) → 0 as δ → 0, where Lδfδ denotes an estimate of f′. The basic result of this paper is the inequality for ‖Lδfδ − f′‖, a proof of the impossibility of stably approximating f′ given the above data 1) and 2), and a derivation of the inequality η(δ) ≤ cδ^{a/(1+a)} if 2) is replaced by ‖f‖_{1+a} ≤ m_{1+a}, 0 < a ≤ 1. An explicit formula for the estimate Lδfδ is given.
NUMERICAL DIFFERENTIATION FROM A VIEWPOINT OF REGULARIZATION THEORY
Cited by 5 (0 self)
Abstract. In this paper, we discuss the classical ill-posed problem of numerical differentiation, assuming that the smoothness of the function to be differentiated is unknown. Using recent results on adaptive regularization of general ill-posed problems, we propose new rules for the choice of the step size in the finite-difference methods, and for the regularization parameter choice in numerical differentiation regularized by the iterated Tikhonov method. These methods are shown to be effective for the differentiation of noisy functions, and the order-optimal convergence results for them are proved.
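To fix ideas, one common formulation writes f(x) = ∫₀ˣ u(t) dt = (Au)(x) and recovers u ≈ f′ from noisy values of f by Tikhonov regularization. The sketch below is a single-step Tikhonov illustration on assumed synthetic data; the paper's iterated Tikhonov method and its parameter-choice rules are not reproduced here, and n, δ, and α are all illustrative choices:

```python
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

# Discrete integration operator (left-endpoint quadrature): (Au)_i = h * sum_{j<=i} u_j
A = h * np.tril(np.ones((n, n)))

true_deriv = 2.0 * np.pi * np.cos(2.0 * np.pi * x)

# Synthetic data: integrate with the same quadrature, add deterministic "noise".
delta = 1e-3
f_noisy = A @ true_deriv + delta * np.sin(50.0 * np.pi * x)

# Tikhonov: minimize ||A u - f_noisy||^2 + alpha * ||u||^2
alpha = 1e-5
u = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ f_noisy)

rel_err = np.linalg.norm(u - true_deriv) / np.linalg.norm(true_deriv)
```

The normal-equations solve damps the small singular values of A that would otherwise amplify the noise when inverting the integration operator.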
A highly accurate technique for interpolations using very high-order polynomials, and its applications to some ill-posed linear problems
 CMES: Computer Modeling in Engineering & Sciences, 2009
Cited by 5 (4 self)
Abstract: Since the works of Newton and Lagrange, interpolation has been a mature technique in numerical mathematics. Among the many interpolation methods, global or piecewise, the polynomial interpolation p(x) = a0 + a1 x + ... + an x^n, expanded in the monomials, is the simplest one and is easy to handle mathematically. For higher accuracy, one always attempts to use a higher-order polynomial as an interpolant. But Runge gave a counterexample, demonstrating that the polynomial interpolation problem may be ill-posed, and very high-order polynomial interpolation is very hard to realize by numerical computations. In this paper we propose a new polynomial interpolation, p(x) = ā0 + ā1 (x/R0) + ... + ān (x/R0)^n, where R0 is a characteristic length used as a parameter and chosen by the user. The resulting linear system for the coefficients āα is well-conditioned if a suitable R0 is chosen. We define a non-dimensional parameter R0* = R0/(b − a), where a and b are the endpoints of the interval for x. The range of values of R0* for numerical stability is identified, and one can overcome the difficulty due to ...
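The effect of the characteristic length on conditioning is easy to reproduce. In this sketch (an illustration of the idea, not the paper's computation; the interval, the degree, and the choice R0 = b − a, i.e. R0* = 1, are arbitrary), rescaling the monomials by R0 shrinks the condition number of the Vandermonde system by many orders of magnitude:

```python
import numpy as np

a, b = 0.0, 100.0
deg = 8
x = np.linspace(a, b, deg + 1)          # interpolation nodes

# Plain monomial basis x^k: column magnitudes span ~16 orders of magnitude.
V_plain = np.vander(x, deg + 1, increasing=True)

# Scaled basis (x/R0)^k with characteristic length R0 = b - a (so R0* = 1).
R0 = b - a
V_scaled = np.vander(x / R0, deg + 1, increasing=True)

cond_plain = np.linalg.cond(V_plain)
cond_scaled = np.linalg.cond(V_scaled)
```

Solving V_scaled @ coeffs = f(x) then yields the scaled coefficients directly, with far less amplification of rounding errors.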
Finding discontinuities of piecewise-smooth functions
Cited by 4 (2 self)
Formulas for stable differentiation of piecewisesmooth functions are given. The data are noisy values of these functions. The locations of discontinuity points and the sizes of the jumps across these points are not assumed known, but found stably from the noisy data.
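The idea can be sketched as follows (a toy illustration on assumed data, not the paper's formulas): compare local means on the two sides of each sample point; away from a discontinuity their difference is O(h + δ), while across a jump it approaches the jump size.

```python
import numpy as np

n = 1000
x = np.linspace(0.0, 1.0, n)
delta = 0.01                              # noise level
rng = np.random.default_rng(0)

# Piecewise-smooth test function: slope 1, one jump of size 2 at x = 0.5.
jump_at, jump_size = 0.5, 2.0
samples = x + jump_size * (x >= jump_at) + delta * rng.uniform(-1.0, 1.0, n)

m = 10                                    # half-window length in samples
diffs = np.array([samples[i:i + m].mean() - samples[i - m:i].mean()
                  for i in range(m, n - m)])
k = int(np.argmax(np.abs(diffs)))
loc_est = x[m + k]                        # estimated discontinuity location
size_est = float(diffs[k])                # estimated jump size
```

Averaging over m samples damps the noise by roughly 1/sqrt(m), which is what makes both the location and the jump size stably recoverable from noisy data.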