Ill-posed problems in early vision
 Proceedings of the IEEE
, 1988
Abstract

Cited by 179 (13 self)
The first processing stage in computational vision, also called early vision, consists of decoding two-dimensional images in terms of properties of 3-D surfaces. Early vision includes problems such as the recovery of motion and optical flow, shape from shading, surface interpolation, and edge detection. These are inverse problems, which are often ill-posed or ill-conditioned. We review here the relevant mathematical results on ill-posed and ill-conditioned problems and introduce the formal aspects of regularization theory in the linear and nonlinear case. Specific topics in early vision and their regularization are then analyzed rigorously, characterizing existence, uniqueness, and stability of solutions.
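As a concrete illustration of the regularization idea this abstract refers to, here is a minimal Tikhonov sketch in NumPy; the smoothing operator, noise level, and parameter value are illustrative assumptions for this sketch, not taken from the paper.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min_x ||A x - b||^2 + lam * ||x||^2 via the regularized
    normal equations (A^T A + lam I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Illustrative ill-conditioned forward operator: a discretized Gaussian
# smoothing kernel (an assumption for this sketch, not from the paper).
n = 50
t = np.linspace(0.0, 1.0, n)
A = np.exp(-80.0 * (t[:, None] - t[None, :]) ** 2) / n
x_true = np.sin(2.0 * np.pi * t)
b = A @ x_true + 1e-4 * np.random.default_rng(0).standard_normal(n)

x_naive = np.linalg.solve(A.T @ A + 1e-14 * np.eye(n), A.T @ b)  # (almost) no regularization
x_reg = tikhonov(A, b, lam=1e-6)
err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

Without regularization the small singular values of `A` amplify the noise in `b` enormously; the regularized reconstruction stays close to `x_true` at the cost of a small bias.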
Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: A theory
 J. Neurosci
, 1996
Abstract

Cited by 130 (4 self)
The head-direction (HD) cells found in the limbic system in freely moving rats represent the instantaneous head direction of the animal in the horizontal plane regardless of the location of the animal. The internal direction represented by these cells uses both self-motion information for inertially based updating and familiar visual landmarks for calibration. Here, a model of the dynamics of the HD cell ensemble is presented. The stability of a localized static activity profile in the network and a dynamic shift mechanism are explained naturally by synaptic weight distribution components with even and odd symmetry, respectively. Under symmetric weights or symmetric reciprocal connections, a stable activity profile close to the known directional tuning curves will emerge. By adding a slight asymmetry to the weights, the activity profile will shift continuously without ...
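A minimal rate-model sketch of the even/odd weight decomposition described above; the tanh nonlinearity, gain, and network size are illustrative assumptions rather than the paper's actual equations.

```python
import numpy as np

def ring_network(n=100, steps=300, dt=0.1, gain=8.0, odd=0.0):
    """Toy rate model of the HD-cell ring: the even (cosine) weight
    component stabilizes a localized activity bump; a nonzero odd (sine)
    component would make the bump rotate. Parameters are illustrative."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    d = theta[:, None] - theta[None, :]
    W = (gain * np.cos(d) + odd * np.sin(d)) / n   # even + odd weight parts
    u = 0.1 * np.cos(theta)                        # small seed bump at theta = 0
    for _ in range(steps):
        u = u + dt * (-u + np.tanh(W @ u))         # Euler step of the rate dynamics
    return theta, u

theta, u = ring_network()
peak = theta[np.argmax(u)]   # with symmetric weights the bump settles where seeded
```

With `odd=0` the bump is stationary; a small nonzero `odd` tilts the recurrent input and shifts the profile around the ring, mirroring the even/odd decomposition in the abstract.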
Image Mosaicing and Super-resolution
, 2004
Abstract

Cited by 50 (4 self)
The thesis investigates the problem of how information contained in multiple, overlapping images of the same scene may be combined to produce images of superior quality. This area, generically titled frame fusion, offers the possibility of reducing noise, extending the field of view, removing moving objects, removing blur, increasing spatial resolution and improving dynamic range. As such, this research has many applications in fields as diverse as forensic image restoration, computer-generated special effects, video image compression, and digital video editing. An essential enabling step prior to performing frame fusion is image registration, by which an accurate estimate of the point-to-point mapping between views is computed. A robust and efficient algorithm is described to automatically register multiple images using only information contained within the images themselves. The accuracy of this method, and the statistical assumptions upon which it relies, are investigated empirically. Two forms of frame fusion are investigated. The first is image mosaicing, which is the alignment of multiple images into a single composition representing part of a 3D scene.
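For planar scenes or rotating cameras, the point-to-point mapping between views used in mosaicing is a homography; a minimal DLT-based sketch (a stand-in for the thesis's robust, automatically matched registration, with hypothetical example coordinates) is:

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D points between views with a 3x3 homography H."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def fit_homography(src, dst):
    """Estimate H from >= 4 correspondences with the direct linear
    transform (DLT) -- a minimal stand-in for the robust registration
    algorithm described in the thesis."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 3)   # null vector of the constraint matrix

# Hypothetical example: a mild projective warp between two views.
H_true = np.array([[1.1, 0.02, 5.0], [-0.03, 0.97, -2.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 25], [25, 75]], float)
dst = apply_homography(H_true, src)
H_est = fit_homography(src, dst)
max_err = np.max(np.abs(apply_homography(H_est, src) - dst))
```

In practice the correspondences come from feature matching and the fit is made robust to mismatches (e.g. via RANSAC), as the abstract's emphasis on statistical assumptions suggests.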
Computer Vision Applied to Super-resolution
 IEEE Signal Processing Magazine
, 2003
Abstract

Cited by 44 (0 self)
this article is outlined in figure 1. The input images are first mutually aligned onto a common reference frame. This alignment involves not only a geometric component, but also a photometric component, modelling illumination, gain or colour balance variations among the images. After alignment, a composite image mosaic may be rendered and super-resolution restoration may be applied to any chosen region of interest.
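The photometric component of alignment can be sketched as a least-squares gain/bias fit between overlapping intensities; the affine model and example values below are illustrative assumptions, not the article's exact formulation.

```python
import numpy as np

def photometric_align(ref, img):
    """Least-squares gain/bias (a, b) so that a * img + b matches ref on
    the overlap region -- a minimal form of the photometric component."""
    A = np.column_stack([img.ravel(), np.ones(img.size)])
    (a, b), *_ = np.linalg.lstsq(A, ref.ravel(), rcond=None)
    return a, b

# Hypothetical overlapping patches differing by gain 1.5 and offset 10.
rng = np.random.default_rng(3)
img = rng.uniform(0.0, 1.0, (32, 32))
ref = 1.5 * img + 10.0
a, b = photometric_align(ref, img)
```

After this correction the two views agree photometrically, so mosaic compositing or super-resolution restoration does not blend mismatched brightness levels.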
Optimal rates for the regularized least-squares algorithm
 Foundations of Computational Mathematics
Abstract

Cited by 39 (8 self)
We develop a theoretical analysis of the generalization performance of regularized least-squares on reproducing kernel Hilbert spaces for supervised learning. We show that the concept of effective dimension of an integral operator plays a central role in the definition of a criterion for the choice of the regularization parameter as a function of the number of samples. In fact, a minimax analysis is performed which shows the asymptotic optimality of the above-mentioned criterion.
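A minimal regularized least-squares (kernel ridge) sketch with a sample-size-dependent parameter; the Gaussian kernel, target function, and the simple rule lam = 1/n are illustrative stand-ins for the effective-dimension-based criterion analysed in the paper.

```python
import numpy as np

def regularized_least_squares(X, y, lam, gamma=10.0):
    """Kernel ridge / regularized least-squares with a Gaussian kernel:
    the coefficients solve (K + n * lam * I) alpha = y."""
    K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)
    alpha = np.linalg.solve(K + len(X) * lam * np.eye(len(X)), y)
    return alpha, K

rng = np.random.default_rng(1)
n = 200
X = rng.uniform(-1.0, 1.0, n)
y = np.sin(3.0 * X) + 0.1 * rng.standard_normal(n)

# Illustrative a priori rule lam = 1/n; the paper's criterion instead ties
# the schedule to the effective dimension of the kernel integral operator.
alpha, K = regularized_least_squares(X, y, lam=1.0 / n)
rmse = np.sqrt(np.mean((K @ alpha - np.sin(3.0 * X)) ** 2))
```

The point of the paper is precisely that how fast `lam` should shrink with `n` depends on spectral properties of the kernel operator, not on a single universal exponent.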
Choosing regularization parameters in iterative methods for ill-posed problems
 SIAM J. Matrix Anal. Appl.
, 2001
Abstract

Cited by 32 (6 self)
Numerical solution of ill-posed problems is often accomplished by discretization (projection onto a finite-dimensional subspace) followed by regularization. If the discrete problem has high dimension, though, typically we compute an approximate solution by projecting the discrete problem onto an even smaller-dimensional space, via iterative methods based on Krylov subspaces. In this work we present a common framework for efficient algorithms that regularize after this second projection rather than before it. We show that determining regularization parameters based on the final projected problem rather than on the original discretization has firmer justification and often involves less computational expense. We prove some results on the approximate equivalence of this approach to other forms of regularization, and we present numerical examples.
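A toy sketch of regularizing after the Krylov projection: a few Arnoldi steps followed by Tikhonov on the small projected problem. The test operator and parameter values are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np

def arnoldi(A, b, k):
    """k Arnoldi steps: A @ V[:, :k] = V @ H with orthonormal columns in V."""
    n = len(b)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def hybrid_tikhonov(A, b, k, lam):
    """Project onto a k-dimensional Krylov subspace first, then apply
    Tikhonov to the small (k+1) x k projected problem -- the 'regularize
    after the second projection' idea."""
    V, H = arnoldi(A, b, k)
    rhs = np.zeros(k + 1)
    rhs[0] = np.linalg.norm(b)
    y = np.linalg.solve(H.T @ H + lam * np.eye(k), H.T @ rhs)
    return V[:, :k] @ y

# Illustrative ill-posed test problem (a discretized smoothing kernel).
n = 60
t = np.linspace(0.0, 1.0, n)
A = np.exp(-80.0 * (t[:, None] - t[None, :]) ** 2) / n
x_true = np.sin(2.0 * np.pi * t)
b = A @ x_true + 1e-4 * np.random.default_rng(0).standard_normal(n)

x = hybrid_tikhonov(A, b, k=8, lam=1e-6)
rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
res = np.linalg.norm(A @ x - b)
```

Choosing `lam` here means working with a (k+1) x k problem instead of the full discretization, which is the computational advantage the abstract points to.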
Nonparametric Methods for Inference in the Presence of Instrumental Variables
 Annals of Statistics
, 2005
Abstract

Cited by 28 (7 self)
We suggest two nonparametric approaches, based on kernel methods and orthogonal series, to estimating regression functions in the presence of instrumental variables. For the first time in this class of problems, we derive optimal convergence rates, and show that they are attained by particular estimators. In the presence of instrumental variables the relation that identifies the regression function also defines an ill-posed inverse problem, the “difficulty” of which depends on eigenvalues of a certain integral operator determined by the joint density of endogenous and instrumental variables. We delineate the role played by problem difficulty in determining both the optimal convergence rate and the appropriate choice of smoothing parameter.
A Regularizing Levenberg-Marquardt Scheme, With Applications To Inverse Groundwater Filtration Problems
 Inverse Problems
, 1997
Abstract

Cited by 28 (1 self)
The first part of this paper studies a Levenberg-Marquardt scheme for nonlinear inverse problems where the corresponding Lagrange (or regularization) parameter is chosen from an inexact Newton strategy. While the convergence analysis of standard implementations based on trust region strategies always requires the invertibility of the Fréchet derivative of the nonlinear operator at the exact solution, the new Levenberg-Marquardt scheme is suitable for ill-posed problems as long as the Taylor remainder is of second order in the interpolating metric between the range and domain topologies. Estimates of this type are established in the second part of the paper for ill-posed parameter identification problems arising in inverse groundwater hydrology. Both transient and steady-state data are investigated. Finally, the numerical performance of the new Levenberg-Marquardt scheme is studied and compared to a usual implementation on a realistic but synthetic 2D model problem from the engineer...
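A basic Levenberg-Marquardt loop with the usual multiplicative parameter update (not the paper's inexact-Newton parameter choice), on a hypothetical exponential-fitting toy problem:

```python
import numpy as np

def levenberg_marquardt(r, J, x0, lam=1e-2, iters=50):
    """Levenberg-Marquardt: solve the regularized linearization
    (J^T J + lam I) d = -J^T r and adapt lam multiplicatively (a standard
    update rule, used here instead of the paper's inexact-Newton choice)."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        rx, Jx = r(x), J(x)
        d = np.linalg.solve(Jx.T @ Jx + lam * np.eye(len(x)), -Jx.T @ rx)
        if np.linalg.norm(r(x + d)) < np.linalg.norm(rx):
            x, lam = x + d, lam * 0.5   # step accepted: trust the model more
        else:
            lam = lam * 2.0             # step rejected: increase damping
    return x

# Hypothetical toy problem: fit y = a * exp(b * t) to noiseless data.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)
r = lambda x: x[0] * np.exp(x[1] * t) - y
J = lambda x: np.column_stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)])
x_hat = levenberg_marquardt(r, J, x0=[1.0, 0.0])  # x_hat ≈ [2.0, -1.5]
```

The parameter `lam` plays exactly the Lagrange/regularization role the abstract describes: large values damp the step toward steepest descent, small values approach a Gauss-Newton step.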
Learning from examples as an inverse problem
 Journal of Machine Learning Research
, 2005
Abstract

Cited by 28 (14 self)
Many works have related learning from examples to regularization techniques for inverse problems, emphasizing the strong algorithmic and conceptual analogy of certain learning algorithms with regularization algorithms. In particular, it is well known that regularization schemes such as Tikhonov regularization can be effectively used in the context of learning and are closely related to algorithms such as support vector machines. Nevertheless, the connection with inverse problems was considered only for the discrete (finite sample) problem, and the probabilistic aspects of learning from examples were not taken into account. In this paper we provide a natural extension of such analysis to the continuous (population) case and study the interplay between the discrete and continuous problems. From a theoretical point of view, this allows us to draw a clear connection between the consistency approach in learning theory and the stability convergence property in ill-posed inverse problems. The main mathematical result of the paper is a new probabilistic bound for the regularized least-squares algorithm. By means of standard results on the approximation term, the consistency of the algorithm easily follows.
Regularizing properties of a truncated Newton-CG algorithm for nonlinear inverse problems
 Numer. Funct. Anal. Optim.
, 1997
Abstract

Cited by 27 (1 self)
This paper develops truncated Newton methods as an appropriate tool for nonlinear inverse problems which are ill-posed in the sense of Hadamard. In each Newton step an approximate solution for the linearized problem is computed with the conjugate gradient method as an inner iteration. The conjugate gradient iteration is terminated when the residual has been reduced to a prescribed percentage. Under certain assumptions on the nonlinear operator it is shown that the algorithm converges and is stable if the discrepancy principle is used to terminate the outer iteration. These assumptions are fulfilled, e.g., for the inverse problem of identifying the diffusion coefficient in a parabolic differential equation from distributed data.
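A sketch of the truncated Newton-CG idea described above: inner CG stopped at a prescribed residual reduction, outer iteration stopped by the discrepancy principle. The toy identification problem and tolerances are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def cg_truncated(A_mv, b, rel_tol=0.3, max_iter=100):
    """CG on A x = b, truncated once the residual drops below rel_tol
    times its initial norm -- the inner regularization of the method."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    r0 = np.linalg.norm(b)
    for _ in range(max_iter):
        if np.linalg.norm(r) <= rel_tol * r0:
            break
        Ap = A_mv(p)
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

def newton_cg(F, J, y, x0, delta, tau=1.1, max_outer=30):
    """Outer Newton iteration, stopped by the discrepancy principle
    ||F(x) - y|| <= tau * delta (delta = noise level)."""
    x = np.asarray(x0, float)
    for _ in range(max_outer):
        res = y - F(x)
        if np.linalg.norm(res) <= tau * delta:
            break
        Jx = J(x)
        x = x + cg_truncated(lambda v: Jx.T @ (Jx @ v), Jx.T @ res)
    return x

# Hypothetical toy identification problem: recover (a, b) in a * exp(b * t).
t = np.linspace(0.0, 1.0, 20)
rng = np.random.default_rng(2)
noise = 1e-3 * rng.standard_normal(20)
y = 2.0 * np.exp(-1.5 * t) + noise
F = lambda x: x[0] * np.exp(x[1] * t)
J = lambda x: np.column_stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)])
x_hat = newton_cg(F, J, y, x0=[1.0, 0.0], delta=np.linalg.norm(noise))
```

Stopping the outer loop once the residual reaches the noise level avoids fitting the data noise, which is the stabilizing role of the discrepancy principle in the abstract.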