Results 11–20 of 40
Efficient Moment Computation over Polygonal Domains with an Application to Rapid Wedgelet Approximation
Cited by 7 (1 self)
Abstract. Many algorithms in image processing rely on the computation of sums of pixel values over a large variety of subsets of the image domain. This includes the computation of image moments for pattern recognition purposes, or adaptive smoothing and regression methods, such as wedgelets. In the first part of the paper, we present a general method which allows the fast computation of sums over a large class of polygonal domains. The approach relies on the idea of considering polygonal domains with a fixed angular resolution, combined with an efficient implementation of a discrete version of Green’s theorem. The second part deals with the application of the new methodology to a particular computational problem, namely wedgelet approximation. Our technique results in a speedup of O(10^3) by comparison to preexisting implementations. A further attractive feature of our implementation is the instantaneous access to the full scale of wedgelet minimizers. We introduce a new scheme that replaces the locally constant regression underlying wedgelets by basically arbitrary local regression models. Due to the speedup obtained by the techniques explained in the first part, this scheme is computationally efficient, and at the same time much more flexible than previously suggested methods such as wedgelets or platelets. In the final section we present numerical experiments showing the increase in speed and flexibility.
Key words. Wedgelets, platelets, image approximation, image moments, polygonal domains, discrete Green’s theorem, digital lines.
AMS subject classifications. 68U10, 65K10, 26B20, 52C99.
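The core trick described in the first part of this abstract, a discrete version of Green's theorem, can be sketched in a few lines: column-wise prefix sums turn a 2-D sum over the region under a digital boundary into a 1-D sum along that boundary. The sketch below is a minimal illustration of the idea, not the authors' implementation; the array names and the particular wedge boundary are invented for the example.

```python
import numpy as np

def sum_under_line(F, y_of_x):
    # Sum the image over the set {(x, y) : 0 <= y <= y_of_x[x]},
    # i.e. the pixels on or below a digital boundary curve.
    # F holds column-wise prefix sums, F[x, y] = sum(img[x, 0..y]),
    # so by a discrete Green's theorem the 2-D sum collapses to a
    # 1-D sum of prefix values read off along the boundary.
    xs = np.arange(F.shape[0])
    return F[xs, y_of_x].sum()

rng = np.random.default_rng(0)
img = rng.integers(0, 10, size=(8, 8))
F = img.cumsum(axis=1)                         # one O(n^2) precomputation
y_of_x = np.array([0, 1, 2, 3, 3, 2, 1, 0])    # a digital "wedge" boundary
fast = sum_under_line(F, y_of_x)               # O(n) per query
slow = sum(img[x, y] for x in range(8) for y in range(y_of_x[x] + 1))
assert fast == slow
```

Once F is computed, any boundary of the same angular resolution can be queried at cost proportional to its length, which is what makes sweeping over many wedgelet candidates cheap.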
Multiscale Detection of Filamentary Features in Image Data
 In SPIE Wavelets X
, 2003
Cited by 6 (4 self)
Taking advantage of new developments in mathematical statistics, a multiscale approach is designed to detect filaments or filament-like features in noisy images. The major contribution is to introduce a general framework for the case when the data is digital. Our method can detect the presence of an underlying curvilinear feature at the lowest strength that is still detectable in theory. Simulation results on synthetic data are reported to illustrate its effectiveness in finite digital situations.
Nonparametric Denoising of Signals with Unknown Local Structure, I: Oracle Inequalities
, 2008
Cited by 5 (2 self)
We consider the problem of pointwise estimation of a multidimensional signal s from noisy observations (y_τ) on the regular grid Z^d. Our focus is on adaptive estimation in the case when the signal can be well recovered using a (hypothetical) linear filter, which can depend on the unknown signal itself. The basic setting of the problem we address is as follows: suppose that the signal s is “well-filtered”, i.e. there exists an adapted time-invariant linear filter q*_T, with coefficients vanishing outside the “cube” {0, ..., T}^d, which recovers s from observations with small mean-squared error. We suppose that we do not know the filter q*_T, although we do know that such a filter exists. We give partial answers to the following questions:
– is it possible to construct an adaptive estimator of s which relies upon observations only and recovers s with basically the same estimation error as the unknown filter q*_T?
– how rich is the family of well-filtered (in the above sense) signals?
We show that the answer to the first question is affirmative and provide a numerically efficient construction of a nonlinear adaptive filter. Further, we establish a simple calculus of “well-filtered” signals, and show that their family is quite large: it contains, for instance, sampled smooth signals, sampled modulated smooth signals and sampled harmonic functions.
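The closing claim, that sampled harmonic functions are well-filtered, has a one-line verification: a sampled sinusoid satisfies a two-term linear recurrence, so a time-invariant filter with just two coefficients reproduces it exactly. A minimal noiseless sketch follows (the abstract's setting is the harder noisy case, and the names here are illustrative):

```python
import numpy as np

omega = 0.3
t = np.arange(200)
s = np.sin(omega * t)  # a sampled harmonic signal

# A sampled sinusoid satisfies the exact linear recurrence
#   s[t] = 2*cos(omega) * s[t-1] - s[t-2],
# so the time-invariant filter q = (2*cos(omega), -1), supported on
# {1, 2}, reproduces s with zero error: s is "well-filtered".
q = np.array([2 * np.cos(omega), -1.0])
pred = q[0] * s[1:-1] + q[1] * s[:-2]
assert np.allclose(pred, s[2:])
```

The filter coefficients depend on the unknown frequency omega, which is exactly why the adaptive question in the abstract is nontrivial: the filter exists, but the estimator must do nearly as well without knowing it.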
Exact lower bound for proportion of maximally embedded beamlet
 Applied Mathematics Letters
, 2005
Detecting Highly Oscillatory Signals by Chirplet Path Pursuit
, 2006
Cited by 4 (1 self)
This paper considers the problem of detecting nonstationary phenomena, and chirps in particular, from very noisy data. Chirps are waveforms of the very general form A(t) exp(iλϕ(t)), where λ is a (large) base frequency, the phase ϕ(t) is time-varying and the amplitude A(t) is slowly varying. Given a set of noisy measurements, we would like to test whether there is signal or whether the data is just noise. One particular application of note in conjunction with this problem is the detection of gravitational waves predicted by Einstein’s Theory of General Relativity. We introduce detection strategies which are very sensitive and more flexible than existing feature detectors. The idea is to use structured algorithms which exploit information in the so-called chirplet graph to chain chirplets together adaptively so as to form chirps with polygonal instantaneous frequency. We then search for the path in the graph which provides the best tradeoff between complexity and goodness of fit. Underlying our methodology is the idea that while the signal may be extremely weak, so that none of the individual empirical coefficients is statistically significant, one can still detect reliably by combining several coefficients into a …
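A chirplet in the above sense is a short atom with linear instantaneous frequency, and detection rests on the fact that the correlation of the data with an atom is large only when the atom's frequency path tracks the signal's. The sketch below is a single-atom matched-filter illustration with invented parameters; the paper's method chains many such atoms through a graph search rather than testing one:

```python
import numpy as np

def chirplet(t, f0, slope):
    # Unit-norm chirplet atom with linear instantaneous frequency
    # f0 + slope * t (a piece of a polygonal frequency path).
    phase = 2 * np.pi * (f0 * t + 0.5 * slope * t ** 2)
    atom = np.exp(1j * phase)
    return atom / np.linalg.norm(atom)

rng = np.random.default_rng(1)
n = 512
t = np.arange(n) / n

# Chirp buried in noise: its frequency sweeps 40 -> 100 cycles over [0, 1).
signal = 2 * np.sqrt(n) * np.real(chirplet(t, f0=40, slope=60))
data = signal + rng.standard_normal(n)

# Matched-filter statistics: large when the atom's frequency path tracks
# the signal's, small for an atom on a disjoint frequency path.
on = abs(np.vdot(chirplet(t, 40, 60), data))
off = abs(np.vdot(chirplet(t, 150, 20), data))
assert on > off
```

The path-pursuit idea is that even when each short atom's statistic is individually insignificant, summing the statistics along a connected path in the chirplet graph accumulates the signal energy while the noise averages out.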
15 Years of Reproducible Research in Computational Harmonic Analysis
, 2008
Cited by 3 (0 self)
Scientific computation is emerging as absolutely central to the scientific method. Unfortunately, it is error-prone and currently immature: traditional scientific publication is incapable of finding and rooting out errors in scientific computation; this must be recognized as a crisis. Reproducible computational research, in which the full computational environment that produces a result is published along with the article, is an important recent development, and a necessary response to this crisis. We have been practicing reproducible computational research for 15 years, integrating it with our scientific research and with doctoral and postdoctoral education. In this article, we review our approach, how the approach has spread over time, and how science funding agencies could help spread the idea more rapidly.
Denoising Signals of Unknown Local Structure
, 2005
Cited by 3 (1 self)
In this paper, we focus on the following nonparametric regression problem: given noisy observations y_t = f(x_t) + σe_t, t = (t_1, ..., t_d) ∈ Z^d, 0 ≤ t_j ≤ m, (1) of a “signal” f: [0, 1]^d → C taken along the equidistant grid Γ_n = {x_t = m^{-1}t : …
Some Examples of Untraditional Statistical Computing
 In Proceedings of Joint Statistical Meeting
, 2001
Cited by 2 (2 self)
In many cases, finding a maximum likelihood estimate or a Generalized Likelihood Ratio Test estimate becomes an optimization problem. The challenges come when the problem is not formulated in the way we usually see, or when the size of the problem is too large to be manageable by conventional methods. We present three examples that have this flavor. They are (1) finding the maximum likelihood estimate of a curve embedded in a noisy picture, (2) finding a penalized maximum likelihood estimate for embedded linear features, and (3) computing an M-estimate. We show that they can be solved by applying (a) network flow algorithms from Operations Research, (b) a "best basis" algorithm from Computational Harmonic Analysis, and (c) methods from nonlinear optimization. We present some computational results. The philosophical point that we advocate is that incorporating optimization techniques from fields other than statistics enables us to solve some "hard" problems in statistical estimation.
Recovering filamentary objects in severely degraded binary images using beamlet-decorated partitioning
 In International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May, Orlando, FL
, 2002
Cited by 2 (2 self)
We consider the problem of recovering a binary image consisting of many filaments or linear fragments in the presence of severe binary noise. Our approach exploits beamlets—a dyadically organized, multiscale system of line segments—and associated fast algorithms for beamlet analysis and complexity-penalized model fitting. Simulation results demonstrate the effectiveness of the method.
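To get a sense of the size of a beamlet dictionary: a beamlet joins two boundary vertices of a dyadic square, and summing over all dyadic squares of an n × n image gives on the order of n^2 log n segments. The count below is a sketch under the simplifying assumption of unit vertex spacing; actual beamlet systems parameterize the boundary at a chosen resolution δ.

```python
def dyadic_squares(n):
    # All dyadic squares of an n x n grid (n a power of two):
    # at side length s there are (n // s) ** 2 of them.
    s = 1
    while s <= n:
        for x in range(0, n, s):
            for y in range(0, n, s):
                yield x, y, s
        s *= 2

def beamlet_count(n):
    # A beamlet joins two distinct vertices on the boundary of a dyadic
    # square.  With unit vertex spacing (an assumption for this sketch)
    # a square of side s has 4*s boundary vertices, hence
    # 4s * (4s - 1) / 2 candidate segments.
    total = 0
    for _, _, s in dyadic_squares(n):
        p = 4 * s
        total += p * (p - 1) // 2
    return total
```

For example, beamlet_count(2) counts four unit squares (6 segments each) plus one side-2 square with 8 boundary vertices (28 segments), 52 in all. The multiscale organization is what lets the fitting algorithms search this dictionary quickly.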
Signal and Image Approximation using Interval Wavelet Transform
Cited by 2 (0 self)
In signal approximation, classical wavelet synthesis is known to produce a Gibbs-like phenomenon around discontinuities when wavelet coefficients in the cone of influence of the discontinuities are quantized. By analyzing a function in a piecewise manner, filtering across discontinuities can be avoided. Using this principle, the interval wavelet transform can generate sparser representations in the vicinity of discontinuities than classical wavelet transforms. This work introduces two new constructions of interval wavelets and shows how they can be used for image compression and upscaling.
Index Terms: wavelets on the interval, boundary filters, image approximation.
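The principle behind the construction, that no basis function should straddle a discontinuity, can be illustrated without building interval wavelets. In the sketch below a truncated Fourier series stands in for a quantized smooth-wavelet expansion (an assumption made purely for illustration): the global approximation overshoots at the jump, while the piecewise one does not.

```python
import numpy as np

def lowpass_approx(y, k):
    # Keep only the k lowest frequencies of y: a crude stand-in for
    # quantizing away the fine-scale coefficients of a smooth basis.
    Y = np.fft.rfft(y)
    Y[k:] = 0
    return np.fft.irfft(Y, n=len(y))

n = 256
y = np.where(np.arange(n) < n // 2, 0.0, 1.0)  # step with one discontinuity

# Global analysis: low frequencies "see" the jump -> Gibbs-like overshoot.
global_err = np.abs(lowpass_approx(y, 16) - y).max()

# Piecewise analysis: split at the discontinuity and approximate each
# smooth piece separately.  No basis function straddles the jump, so the
# constant pieces are reproduced (numerically) exactly.
left, right = y[: n // 2], y[n // 2:]
piecewise = np.concatenate([lowpass_approx(left, 16),
                            lowpass_approx(right, 16)])
piecewise_err = np.abs(piecewise - y).max()

assert piecewise_err < 1e-9
assert global_err > 0.05
```

Interval wavelets achieve the same effect within a wavelet framework by modifying the filters near segment boundaries instead of discarding the multiscale structure.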