Results 11 – 20 of 143
Contour Fragment Grouping and Shared, Simple Occluders
Abstract

Cited by 17 (4 self)
Bounding contours of physical objects are often fragmented by other occluding objects. Long-distance perceptual grouping seeks to join fragments belonging to the same object. Approaches to grouping based on invariants assume objects are in restricted classes, while those based on minimal-energy continuations assume a shape for the missing contours and require this shape to drive the grouping process. While these assumptions may be appropriate for certain specific tasks or when contour gaps are small, in general occlusion can give rise to large gaps, and thus long-distance contour fragment grouping is a different type of perceptual organization problem. We propose the long-distance principle that those fragments should be grouped whose fragmentation could have arisen from a shared, simple occluder. The gap skeleton is introduced as a representation of this virtual occluder, and an algorithm for computing it is given. Finally, we show that a view of the virtual occluder as a disc can be interpreted as an equivalence class of curves interpolating the fragment endpoints.

Figure 1: Different distance scales for contour fragmentation. (left) The bounding contour of a camel is broken by a foreground palm tree. (center) Curve fragments remaining after depth separation using T-junctions. This is long-scale fragmentation. (right) Magnification of rear leg. Observe that slight contour gaps can be caused by sensor noise. This is short-scale fragmentation. The techniques developed in this paper are for long-scale fragmentation.
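The abstract does not spell out the gap-skeleton computation, but the disc view of the virtual occluder can be illustrated with a small sketch: given the endpoints of the contour fragments bordering a gap, find the smallest disc covering them all. This is a hypothetical stand-in for the shared, simple occluder, not the paper's algorithm; all function names below are invented for illustration.

```python
import itertools
import math

def circle_from_two(p, q):
    """Disc with segment pq as diameter: (cx, cy, r)."""
    cx, cy = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
    return (cx, cy, math.dist(p, q) / 2)

def circle_from_three(p, q, r):
    """Circumcircle of three points, or None if they are (nearly) collinear."""
    ax, ay = p; bx, by = q; cx, cy = r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax*ax + ay*ay)*(by - cy) + (bx*bx + by*by)*(cy - ay) + (cx*cx + cy*cy)*(ay - by)) / d
    uy = ((ax*ax + ay*ay)*(cx - bx) + (bx*bx + by*by)*(ax - cx) + (cx*cx + cy*cy)*(bx - ax)) / d
    return (ux, uy, math.dist((ux, uy), p))

def covers(c, pts, eps=1e-9):
    return all(math.dist((c[0], c[1]), p) <= c[2] + eps for p in pts)

def smallest_enclosing_disc(pts):
    """Brute-force minimal covering disc: check all pair and triple candidates.
    O(n^4), fine for the handful of gap endpoints in an occlusion scenario."""
    cands = [circle_from_two(p, q) for p, q in itertools.combinations(pts, 2)]
    cands += [c for t in itertools.combinations(pts, 3)
              if (c := circle_from_three(*t)) is not None]
    return min((c for c in cands if covers(c, pts)), key=lambda c: c[2])
```

For a realistic number of fragment endpoints, Welzl's expected-linear-time algorithm would replace the brute-force candidate search.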
A New Algebraic Approach For Calculating The Heat Kernel In Gauge Theories
, 1993
Abstract

Cited by 16 (11 self)
It is shown that the heat kernel for any Laplace-like operator on a covariantly constant background in flat space may be presented in the form of an average over the corresponding Lie group with a Gaussian measure. An explicit expression for the heat kernel is obtained using this representation. Related topics are discussed. I. G. Avramidi: Physics Letters B 305 (1993) 27–34

1. Introduction. The heat kernel is a very powerful tool in quantum field theory and quantum gravity [1–12] as well as in mathematical physics [13–18]. It is associated with an elliptic second-order differential operator acting on the sections of a smooth vector bundle V over a d-dimensional Riemannian manifold M of Euclidean signature which has a general Laplace-like form (with leading symbol given by the metric tensor)

H = −Δ + Q + m²,   (1)

where Δ = g^{μν}∇_μ∇_ν, ∇ is a connection on the vector bundle V, and Q is an endomorphism of this bundle. In other words, the operator H acts on a multiplet of quantum fields φ = {φ...
Spaces of Distributions and Interpolation by Translates of a Basis Function
 Numer. Math
, 1997
Abstract

Cited by 15 (1 self)
Interpolation with translates of a basis function is a common process in approximation theory. One starts with a single function (the basis function) and a set of interpolation points. The most elementary form of the interpolant then consists of a linear combination of all translates by interpolation points of the basis function. Frequently, low-degree polynomials are added to the interpolant. One of the significant features of this type of interpolant is that it is often the solution of a variational problem. To make this work, one needs an appropriate space in which to carry out the variational arguments. In this paper we concentrate on developing such spaces for a wide class of basis functions. We also show how the theory leads to efficient ways of calculating the interpolant and to error estimates.

1. Introduction. Radial basis function interpolation is now a well-established method for performing interpolation to data specified at points in ℝ^n. The form of the interpolant used...
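As a concrete illustration of the construction described above (a linear combination of translates of a single basis function plus a low-degree polynomial), here is a minimal one-dimensional sketch. It assumes a Gaussian basis function and a linear polynomial tail; this is generic RBF interpolation, not the paper's variational machinery.

```python
import numpy as np

def rbf_interpolant(x, f, phi):
    """Build s(t) = sum_j c_j * phi(|t - x_j|) + d0 + d1*t for data (x, f)."""
    n = len(x)
    A = phi(np.abs(x[:, None] - x[None, :]))      # translates evaluated at the nodes
    P = np.column_stack([np.ones(n), x])          # low-degree polynomial part
    # Augmented system: interpolation conditions plus moment conditions P^T c = 0.
    M = np.block([[A, P], [P.T, np.zeros((2, 2))]])
    rhs = np.concatenate([f, np.zeros(2)])
    coef = np.linalg.solve(M, rhs)
    c, d = coef[:n], coef[n:]
    return lambda t: (phi(np.abs(np.asarray(t)[:, None] - x[None, :])) @ c
                      + d[0] + d[1] * np.asarray(t))

phi = lambda r: np.exp(-r**2)                     # Gaussian basis function (assumed)
x = np.array([0.0, 1.0, 2.0, 3.0])
f = np.sin(x)
s = rbf_interpolant(x, f, phi)                    # s(x) reproduces f exactly
```

The moment conditions Σ c_j = Σ c_j x_j = 0 enforced by the lower block row are what make the variational interpretation work; the Gaussian is positive definite, so the augmented system is nonsingular for distinct nodes.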
A Segmentation-based regularization term for image deconvolution
 IEEE Trans. Image Process
, 2006
Abstract

Cited by 15 (3 self)
In image restoration with Bayesian methods, the solution is regularized by introducing a priori constraints [1]. Expressed as a prior distribution PX(x) of the unknown image x or analytically encoded through an energy function Ω(x) added to the
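The abstract is cut off before the energy term is defined, so as generic background only, here is the simplest instance of this regularization pattern: a quadratic smoothness energy Ω(x) = ||Dx||² added to the data term, giving a linear (Tikhonov-type) deconvolution. The blur, the operators, and λ below are all invented for illustration and are not the paper's segmentation-based term.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# Blur operator H: 3-tap circular moving average (an assumed toy blur).
H = sum(np.roll(np.eye(n), k, axis=1) for k in (-1, 0, 1)) / 3
# Finite-difference operator D encoding the energy Omega(x) = ||D x||^2.
D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)
x_true = np.zeros(n)
x_true[15:35] = 1.0                                # piecewise-constant scene
y = H @ x_true + 0.01 * rng.standard_normal(n)     # blurred, noisy observation
# MAP estimate under a Gaussian prior: minimize ||Hx - y||^2 + lam * ||Dx||^2.
lam = 0.1
x_hat = np.linalg.solve(H.T @ H + lam * D.T @ D, H.T @ y)
```

In the Bayesian reading, the quadratic Ω corresponds to a Gaussian prior PX(x) ∝ exp(−λ||Dx||²); a segmentation-based term would replace D-differences across detected region boundaries.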
Resolution of singularities in Denjoy-Carleman classes
 Selecta Math. (N.S
Abstract

Cited by 15 (1 self)
Abstract. We show that a version of the desingularization theorem of Hironaka holds for certain classes of C^∞ functions (essentially, for subrings that exclude flat functions and are closed under differentiation and the solution of implicit equations). Examples are quasianalytic classes, introduced by E. Borel a century ago and characterized by the Denjoy-Carleman theorem. These classes have been poorly understood in dimension > 1. Resolution of singularities can be used to obtain many new results; for example, topological Noetherianity, Łojasiewicz inequalities, division properties.
A Large-Scale Trust-Region Approach to the Regularization of Discrete Ill-Posed Problems
 RICE UNIVERSITY
, 1998
Abstract

Cited by 12 (4 self)
We consider the problem of computing the solution of large-scale discrete ill-posed problems when there is noise in the data. These problems arise in important areas such as seismic inversion, medical imaging, and signal processing. We pose the problem as a quadratically constrained least squares problem and develop a method for its solution. Our method does not require factorization of the coefficient matrix, has very low storage requirements, and handles the high degree of singularity arising in discrete ill-posed problems. We present numerical results on test problems and an application of the method to a practical problem with real data.
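The quadratically constrained least squares formulation mentioned above is min_x ||Ax − b|| subject to ||x|| ≤ Δ. A dense sketch (deliberately the opposite of the paper's factorization-free approach, but it shows the structure of the problem) finds the Lagrange multiplier by bisection on the secular equation ||x(λ)|| = Δ:

```python
import numpy as np

def trust_region_ls(A, b, delta, tol=1e-10):
    """min ||Ax - b||  subject to  ||x|| <= delta.
    Dense illustration via bisection on lambda in x(lam) = (A^T A + lam I)^{-1} A^T b;
    the paper's method avoids forming or factoring A^T A."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    if np.linalg.norm(x) <= delta:
        return x                       # unconstrained minimizer already feasible
    ATA, ATb = A.T @ A, A.T @ b
    n = ATA.shape[0]
    lo, hi = 0.0, 1.0
    # ||x(lam)|| decreases in lam: grow hi until the iterate is inside the ball.
    while np.linalg.norm(np.linalg.solve(ATA + hi * np.eye(n), ATb)) > delta:
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if np.linalg.norm(np.linalg.solve(ATA + mid * np.eye(n), ATb)) > delta:
            lo = mid
        else:
            hi = mid
    return np.linalg.solve(ATA + hi * np.eye(n), ATb)
```

When the noise level is known, Δ is chosen so the constraint is active at the noise-consistent solution; the bisection then lands on the boundary ||x|| = Δ.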
Stochastic Modeling and Estimation of Multispectral Image Data
 IEEE Trans. Image Processing
, 1995
Abstract

Cited by 11 (1 self)
Multispectral images consist of multiple channels, each containing data acquired from a different band within the frequency spectrum. Since most objects emit or reflect energy over a large spectral bandwidth, there usually exists a significant correlation between channels. Due to often harsh imaging environments, the acquired data may be degraded by both blur and noise. Simply applying a monochromatic restoration algorithm to each frequency band ignores the cross-channel correlation present within a multispectral image. A Gibbs prior is proposed for multispectral data modeled as a Markov random field, containing both spatial and spectral cliques. Spatial components of the model use a nonlinear operator to preserve discontinuities within each frequency band, while spectral components incorporate nonstationary cross-channel correlations. The multispectral model is used in a Bayesian algorithm for the restoration of color images, in which the resulting nonlinear estimates are shown to be ...
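The Gibbs prior described above combines spatial cliques (edge-preserving, within each band) with spectral cliques (across bands at the same pixel). A minimal sketch of such an energy follows, with a Huber penalty standing in for the paper's unspecified nonlinear operator and simple quadratic cross-band terms in place of its nonstationary correlations; all specific choices here are illustrative assumptions.

```python
import numpy as np

def huber(t, delta=0.5):
    """Robust penalty: quadratic near 0, linear in the tails, so large
    differences (edges) are penalized less than a pure quadratic would."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t**2, delta * (a - 0.5 * delta))

def gibbs_energy(x, alpha=1.0, beta=1.0):
    """x: (bands, H, W) multispectral image.
    Spatial cliques: horizontal/vertical neighbor pairs within each band.
    Spectral cliques: same pixel, adjacent bands."""
    spatial = huber(np.diff(x, axis=1)).sum() + huber(np.diff(x, axis=2)).sum()
    spectral = (np.diff(x, axis=0) ** 2).sum()
    return alpha * spatial + beta * spectral
```

In a Bayesian restoration this energy defines the prior P(x) ∝ exp(−E(x)); the MAP estimate minimizes the data-fit term plus gibbs_energy.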
A new short proof of the local index formula and some of its applications
 Comm. Math. Phys
, 2004
Abstract

Cited by 9 (5 self)
We give a new short proof of the index formula of Atiyah and Singer based on combining Getzler's rescaling with the (fairly standard) Greiner approach to heat kernel asymptotics. As an application we can rather easily compute the Connes-Moscovici cyclic cocycle of even and odd Dirac spectral triples, and then recover the Atiyah-Singer index formula (even case) and the Atiyah-Patodi-Singer spectral flow formula (odd case). The Atiyah-Singer index theorem ([AS1], [AS2]) gives a cohomological interpretation of the Fredholm index of an elliptic operator, but it reaches its true geometric content in the case of the Dirac operator, for which the index is given by a local geometric formula. The local formula is somehow as important as the index theorem since, on the one hand, all the common geometric operators are locally Dirac operators ([ABP], [BGV], [LM], [Ro]) and, on the other hand, the local index formula is equivalent to the full index theorem ([ABP], [LM]). It was then natural to attempt to bypass the index theorem and prove the local index formula directly. The first direct proofs were given by Patodi, Gilkey, and Atiyah-Bott-Patodi, partly by using invariant theory (see [ABP], [Gi]). Some years later Getzler ([Ge1], [Ge2]) and Bismut [Bi] gave purely analytic proofs, which then led to many generalizations
Evaluation of singular and hypersingular Galerkin integrals: direct limits and symbolic computation
 in Singular Integrals in the Boundary Element Method
, 1998
Abstract

Cited by 9 (7 self)
Algorithms are presented for evaluating singular and hypersingular boundary integrals arising from a Galerkin approximation in two dimensions. The integrals involving derivatives of the Green's function are defined as limits from the interior, allowing a simple and direct treatment of these terms. An efficient scheme is obtained by using a combined analytical and numerical approach, the analytic formulas easily derived with a symbolic computation program. The analytic integration also permits exact cancellation of potentially divergent terms, and thus the method is accurate as well. These algorithms are first presented in the simplest context, a linear element. The integrals resulting from higher-order curved interpolation are shown to be reducible to the linear case, and can therefore be treated with the same techniques. Example calculations employing the Symmetric-Galerkin approximation are presented for the Laplace equation and for orthotropic elasticity. The postprocessing evaluat...
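The analytic integrations mentioned above can be reproduced in miniature with a symbolic computation program. For instance, the weakly singular coincident-element integral ∫₀¹∫₀¹ ln|x − y| dy dx (a toy stand-in only; the paper's actual kernels involve the Green's function and its derivatives) evaluates exactly, using the symmetry of |x − y| to keep the logarithm's argument positive:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# By symmetry, int_0^1 int_0^1 ln|x - y| dy dx = 2 * int_0^1 int_0^x ln(x - y) dy dx.
# The inner integral is finite despite the logarithmic singularity at y = x.
inner = sp.integrate(sp.log(x - y), (y, 0, x))   # = x*log(x) - x
val = sp.simplify(2 * sp.integrate(inner, (x, 0, 1)))
print(val)
```

The exact value is −3/2; the endpoint contribution (x − y)·ln(x − y) vanishes in the limit y → x, which is the symbolic analogue of the "potentially divergent terms cancel exactly" point made in the abstract.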
Group Testing with Probabilistic Tests: Theory, Design and Application
Abstract

Cited by 7 (1 self)
Identification of defective members of large populations has been widely studied in the statistics community under the name of group testing. It involves grouping subsets of items into different pools and detecting defective members based on the set of test results obtained for each pool. In a classical noiseless group testing setup, it is assumed that the sampling procedure is fully known to the reconstruction algorithm, in the sense that the existence of a defective member in a pool results in a positive test outcome for that pool. However, this may not always be a valid assumption in some cases of interest. In particular, we consider the case where the defective items in a pool can become independently inactive with a certain probability. Hence, one may obtain a negative test result for a pool despite it containing some defective items. As a result, any sampling and reconstruction method should be able to cope with two different types of uncertainty, i.e., the unknown set of defective items and the partially unknown, probabilistic testing procedure. In this work, motivated by the application of detecting infected people in viral epidemics, we design non-adaptive sampling procedures that allow successful identification of the defective items through a set of probabilistic tests. Our design requires only a small number of tests to single out the defective items.
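The probabilistic test model above can be simulated directly. The sketch below pairs it with the classical COMP decoder ("every item appearing in a negative pool is clean") as a baseline: with inactivation probability q > 0, COMP can wrongly clear defectives, which is exactly the failure mode the paper's designs must tolerate. The pool design and parameters here are arbitrary illustrations, not the paper's construction.

```python
import numpy as np

def run_tests(pools, defective, q, rng):
    """pools: (T, n) 0/1 design matrix. Each defective item in a pool is
    independently 'inactive' with probability q, so a pool can test
    negative despite containing defective items."""
    T, n = pools.shape
    active = rng.random((T, n)) >= q              # is the item active in this test?
    return (pools * active)[:, defective].sum(axis=1) > 0

def comp_decode(pools, outcomes):
    """COMP baseline: declare non-defective every item in a negative pool;
    return the indices of the remaining (suspect) items."""
    cleared = pools[~outcomes].sum(axis=0) > 0
    return np.flatnonzero(~cleared)

rng = np.random.default_rng(0)
n, T = 100, 40
pools = (rng.random((T, n)) < 0.1).astype(int)    # random Bernoulli design
defective = np.array([3, 42, 77])
outcomes = run_tests(pools, defective, q=0.0, rng=rng)
est = comp_decode(pools, outcomes)
```

Setting q = 0 recovers the classical noiseless model, where COMP is guaranteed to report a superset of the defective set; rerunning with q > 0 shows defectives slipping through, motivating the more robust designs developed in the paper.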