Results 11–19 of 19
Operator Equations, Multiscale Concepts and Complexity
Lectures in Applied Mathematics, 1996
Abstract

Cited by 7 (4 self)
In this paper, we review several recent developments centering upon the application of multiscale basis methods for the numerical solution of operator equations with special emphasis on complexity questions. In particular, issues like preconditioning, matrix compression, construction of special wavelet bases and adapted error estimators are addressed.
Applied and Computational Aspects of Nonlinear Wavelet Approximation
, 2001
Abstract

Cited by 4 (0 self)
Nonlinear approximation has recently found computational applications such as data compression, statistical estimation, or adaptive schemes for partial differential or integral equations, especially through the development of wavelet-based methods. The goal of this paper is to provide a short survey of nonlinear wavelet approximation in the perspective of these applications, as well as to stress some remaining open questions.

1. Introduction. Numerous problems of approximation theory have in common the following general setting: we are given a family of subspaces $(S_N)_{N \ge 0}$ of a normed space $X$, and for $f \in X$, we consider the best approximation error
$$\sigma_N(f) := \inf_{g \in S_N} \|f - g\|_X. \quad (1)$$
Typically, $N$ represents the number of parameters needed to describe an element in $S_N$, and in most cases of interest, $\sigma_N(f)$ goes to zero as this number tends to infinity. For a given $f$, we can then study the rate of approximation, i.e. the range of $r > 0$ for which there exists $C > 0$ such th...
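The best $N$-term framework sketched in this abstract can be illustrated with a minimal example (our own simplifying assumption for illustration, not taken from the paper): compute a discrete Haar transform, keep only the $N$ largest-magnitude coefficients, and reconstruct.

```python
def haar_transform(x):
    # Full discrete Haar transform of a list whose length is a power of two.
    # Returns [overall average, detail coefficients from coarse to fine].
    x = list(x)
    coeffs = []
    while len(x) > 1:
        avgs = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
        difs = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
        coeffs = difs + coeffs  # prepend so coarser levels come first
        x = avgs
    return x + coeffs

def inverse_haar(c):
    # Invert haar_transform level by level.
    x = c[:1]
    k = 1
    while k < len(c):
        difs = c[k:2 * k]
        x = [v for a, d in zip(x, difs) for v in (a + d, a - d)]
        k *= 2
    return x

def n_term_approx(x, n):
    # Nonlinear approximation: keep the n largest-magnitude Haar coefficients.
    c = haar_transform(x)
    keep = sorted(range(len(c)), key=lambda i: abs(c[i]), reverse=True)[:n]
    c_kept = [c[i] if i in keep else 0.0 for i in range(len(c))]
    return inverse_haar(c_kept)
```

Here $S_N$ is the (nonlinear) set of functions with at most $N$ nonzero Haar coefficients, which is exactly why the chosen coefficients may differ from signal to signal.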
The Brouwer Lecture 2005: Statistical estimation with model selection. Available at arXiv:math.ST/0605187
Abstract

Cited by 2 (1 self)
The purpose of this paper is to explain the interest and importance of (approximate) models and model selection in Statistics. Starting from the very elementary example of histograms, we present a general notion of finite dimensional model for statistical estimation and we explain what type of risk bounds can be expected from the use of one such model. We then give the performance of suitable model selection procedures from a family of such models. We illustrate our point of view by two main examples: the choice of a partition for designing a histogram from an $n$-sample, and the problem of variable selection in the context of Gaussian regression.

1. Introduction: a story of histograms

1.1. Histograms as graphical tools. Assume we are given a (large) set of real-valued measurements or data $x_1, \ldots, x_n$, corresponding to lifetimes of some human beings in a specific area, or lifetimes of some manufactured goods, or to the annual income of families in some country, ...
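The histogram model mentioned above can be sketched minimally (a regular partition of $[0, 1)$ is our own simplifying assumption here; the paper's point is precisely how to *select* the partition, which this sketch does not do):

```python
def histogram_density(sample, bins):
    # Piecewise constant density estimate on a regular partition of [0, 1):
    # on each cell, relative frequency divided by the cell width 1/bins.
    n = len(sample)
    counts = [0] * bins
    for x in sample:
        counts[min(int(x * bins), bins - 1)] += 1
    return [c * bins / n for c in counts]
```

The returned values define a density: summing them times the cell width $1/\text{bins}$ gives 1.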
Adaptive Approximation of Curves
, 2004
Abstract

Cited by 1 (0 self)
We propose adaptive multiscale refinement algorithms for approximating and encoding curves by polygonal curves. We establish rates of approximation of these algorithms in the Hausdorff metric. For example, we show that under the mere assumption that the original curve has finite length, the first of these algorithms gives a rate of convergence $O(1/n)$, where $n$ is the number of vertices of the approximating polygonal curve. Similar results giving second order approximation are proven under weak assumptions on the curvature, such as $L_p$ integrability, $p > 1$. Note that for nonadaptive algorithms, obtaining the same order of approximation would require that the curvature be bounded. Key Words: Polygonal approximation of planar curves, adaptive multiscale refinement, maximal functions, convergence rates, encoding
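A minimal sketch of adaptive refinement in this spirit (a generic split-at-worst-point rule of Douglas–Peucker type, assumed here purely for illustration; the paper's algorithms and their rate analysis are more refined):

```python
def _dist(p, a, b):
    # Distance from point p to the segment from a to b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))          # clamp to the segment
    cx, cy = ax + t * dx, ay + t * dy  # closest point on the segment
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def simplify(points, tol):
    # Adaptive refinement: approximate by the endpoint chord; if some vertex
    # deviates by more than tol, split at the worst vertex and recurse.
    if len(points) <= 2:
        return list(points)
    i, d = max(((i, _dist(points[i], points[0], points[-1]))
                for i in range(1, len(points) - 1)), key=lambda t: t[1])
    if d <= tol:
        return [points[0], points[-1]]
    return simplify(points[:i + 1], tol)[:-1] + simplify(points[i:], tol)
```

Refinement is spent only where the curve deviates from its chord, which is the adaptivity that makes an $O(1/n)$ Hausdorff rate possible for merely rectifiable curves.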
Nonlinear Approximation and the Space BV(R^2)
Abstract
Abstract. Given a function $f \in L_2(Q)$, $Q := [0, 1)^2$, and a real number $t > 0$, let
$$U(f, t) := \inf_{g \in BV(Q)} \|f - g\|_{L_2(Q)}^2 + t\, V_Q(g),$$
where the infimum is taken over all functions $g \in BV$ of bounded variation on $Q$. This and related extremal problems arise in several areas of mathematics such as interpolation of operators and statistical estimation, as well as in digital image processing. Techniques for finding minimizers $g$ for $U(f, t)$ based on variational calculus and nonlinear partial differential equations have been put forward by several authors [DMS], [RO], [MS], [CL]. The main disadvantage of these approaches is that they are numerically intensive. On the other hand, it is well known that more elementary methods based on wavelet shrinkage solve related extremal problems, for example, the above problem with $BV$ replaced by the Besov space $B^1_1(L_1(Q))$ (see e.g. [CDLL]). However, since $BV$ has no simple description in terms of wavelet coefficients, it is not clear that minimizers for $U(f, t)$ can be realized in this way. We shall show in this paper that simple methods based on Haar thresholding provide near minimizers for $U(f, t)$. Our analysis of this extremal problem brings forward many interesting relations between Haar decompositions and the space $BV$. 1. Introduction. Nonlinear
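The two thresholding rules referred to above can be stated generically (illustrative only; the paper's result concerns hard thresholding of Haar coefficients with a specific threshold choice, which this sketch does not reproduce):

```python
def hard_threshold(coeffs, t):
    # Keep a coefficient only if it exceeds the threshold in magnitude.
    return [c if abs(c) > t else 0.0 for c in coeffs]

def soft_threshold(coeffs, t):
    # Wavelet shrinkage: kill small coefficients and shrink the survivors
    # toward zero by t; this is the rule tied to the Besov (B^1_1) penalty.
    return [0.0 if abs(c) <= t else (c - t if c > 0 else c + t)
            for c in coeffs]
```

Both rules are applied coefficient by coefficient, which is what makes them so much cheaper than PDE-based minimization.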
Estimation of the Transition Density of a Markov Chain
Mathieu Sart
Abstract
Abstract. We present two data-driven procedures to estimate the transition density of a homogeneous Markov chain. The first yields a piecewise constant estimator on a suitable random partition. Using a Hellinger-type loss, we establish non-asymptotic risk bounds for our estimator when the square root of the transition density belongs to possibly inhomogeneous Besov spaces with possibly small regularity index. Some simulations are also provided. The second procedure is of theoretical interest and leads to a general model selection theorem from which we derive rates of convergence over a very wide range of possibly inhomogeneous and anisotropic Besov spaces. We also investigate the rates that can be achieved under structural assumptions on the transition density.

1. Introduction. Consider a time-homogeneous Markov chain $(X_i)_{i \in \mathbb{N}}$ defined on an abstract probability space $(\Omega, \mathcal{E}, \mathbb{P})$ with values in the measured space $(\mathbb{X}, \mathcal{F}, \mu)$. We assume that for each $x \in \mathbb{X}$, the conditional law $\mathcal{L}(X_{i+1} \mid X_i = x)$ admits a density $s(x, \cdot)$ with respect to $\mu$. Our aim is to
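A bare-bones piecewise constant transition-density estimate on a fixed regular grid over $[0, 1)^2$ can look as follows (our own simplifying assumption for illustration: the paper's estimator uses a data-driven random partition and a Hellinger-loss analysis, neither of which appears here):

```python
def transition_density_histogram(chain, bins):
    # Estimate s(x, y) on a regular bins x bins grid over [0, 1)^2 from
    # the observed transitions (X_i, X_{i+1}).
    counts = [[0] * bins for _ in range(bins)]
    row = [0] * bins
    for x, y in zip(chain, chain[1:]):
        i = min(int(x * bins), bins - 1)
        j = min(int(y * bins), bins - 1)
        counts[i][j] += 1
        row[i] += 1
    # Conditional density: normalize each row by its number of departures,
    # then divide by the cell width 1/bins.
    return [[counts[i][j] * bins / row[i] if row[i] else 0.0
             for j in range(bins)] for i in range(bins)]
```

Each nonempty row of the output integrates to 1 in the second variable, matching the fact that $s(x, \cdot)$ is a conditional density.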
Abstract
It has been understood for some time that the classical smoothness spaces, such as the Sobolev and Besov classes, are not satisfactory for certain problems in image processing and nonlinear PDEs. Their deficiency lies in their isotropy: functions in these smoothness spaces must be simultaneously smooth in all directions. The anisotropic generalizations of these spaces have the further deficiency that they are biased in coordinate directions; while they allow different smoothness in certain directions, these directions must be aligned to the coordinate axes. In the application areas mentioned above, it would be desirable to measure smoothness in new ways that allow more local control over the smoothness directions. We introduce one possible approach to this problem based on defining smoothness via level sets. We present this approach in the case of functions defined on $\mathbb{R}^d$. Our smoothness spaces depend on two smoothness indices $(s_1, s_2)$: the first reflects the smoothness of the level sets of the function, while the second reflects how smoothly the level sets themselves are changing. As a motivation, we start with $d = 2$ and investigate Besov smooth domains. © 2007 Wiley Periodicals, Inc.