Results 1 – 10 of 231
Regularization Theory and Neural Networks Architectures
Neural Computation, 1995
"... We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Ba ..."
Abstract

Cited by 309 (31 self)
We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, som...
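In its simplest RBF form, the scheme this abstract describes reduces to solving a regularized linear system for the hidden-to-output weights. A minimal sketch of that idea (the Gaussian basis, the ridge parameter lam, and the toy data are illustrative assumptions, not the paper's construction):

```python
import numpy as np

def rbf_fit(X, y, centers, width, lam):
    """Fit RBF output weights by ridge regression on Gaussian RBF features:
    solve (K^T K + lam*I) c = K^T y, with K[i, j] = exp(-||x_i - t_j||^2 / (2 w^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * width ** 2))
    return np.linalg.solve(K.T @ K + lam * np.eye(K.shape[1]), K.T @ y)

def rbf_predict(Xnew, centers, width, c):
    d2 = ((Xnew[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2)) @ c

# toy 1-D example: noisy sine, centers placed at the data points
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
c = rbf_fit(X, y, centers=X, width=0.7, lam=1e-2)
print(rbf_predict(np.array([[0.5]]), X, 0.7, c))
```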
Numerical methods for image registration
2004
"... In this paper we introduce a new framework for image registration. Our formulation is based on consistent discretization of the optimization problem coupled with a multigrid solution of the linear system which evolve in a GaussNewton iteration. We show that our discretization is helliptic independ ..."
Abstract

Cited by 148 (25 self)
In this paper we introduce a new framework for image registration. Our formulation is based on a consistent discretization of the optimization problem coupled with a multigrid solution of the linear systems that arise in a Gauss-Newton iteration. We show that our discretization is h-elliptic independent of parameter choice, and therefore a simple multigrid implementation can be used. To overcome potentially large nonlinearities and to further speed up computation, we use a multilevel continuation technique. We demonstrate the efficiency of our method on a realistic, highly nonlinear registration problem. Image registration is one of today’s challenging image processing problems. Given a so-called reference image R and a so-called template image T, the basic idea is to find a “reasonable” transformation such that a transformed version of the template image becomes “similar” to the reference image.
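As an illustration of the Gauss-Newton structure the abstract refers to, here is a drastically simplified sketch: registration restricted to a pure translation, with a coarse-to-fine loop standing in for multilevel continuation. The paper's method is nonparametric and multigrid-based; none of that is reproduced here.

```python
import numpy as np
from scipy import ndimage

def gn_translation(R, T, w, iters=20):
    """Gauss-Newton for SSD registration restricted to a translation w.
    Residual r(w) = T(x + w) - R(x); Jacobian = spatial gradient of T(x + w)."""
    for _ in range(iters):
        Tw = ndimage.shift(T, -w, order=1)        # samples T(x + w)
        gy, gx = np.gradient(Tw)
        J = np.stack([gy.ravel(), gx.ravel()], axis=1)
        r = (Tw - R).ravel()
        dw = np.linalg.lstsq(J, -r, rcond=None)[0]  # Gauss-Newton step
        w = w + dw
        if np.linalg.norm(dw) < 1e-4:
            break
    return w

def multilevel(R, T, levels=3):
    """Multilevel continuation: solve on coarse grids first, then refine."""
    w = np.zeros(2)
    for k in reversed(range(levels)):
        s = 2 ** k
        Rk = ndimage.zoom(R, 1 / s, order=1)
        Tk = ndimage.zoom(T, 1 / s, order=1)
        w = gn_translation(Rk, Tk, w / s) * s     # rescale shift between grids
    return w
```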
Expression Cloning
2001
"... We present a novel approach to producing facial expression animations for new models. Instead of creating new facial animations from scratch for each new model created, we take advantage of existing animation data in the form of vertex motion vectors. Our method allows animations created by any tool ..."
Abstract

Cited by 103 (8 self)
We present a novel approach to producing facial expression animations for new models. Instead of creating new facial animations from scratch for each new model created, we take advantage of existing animation data in the form of vertex motion vectors. Our method allows animations created by any tools or methods to be easily retargeted to new models. We call this process expression cloning and it provides a new alternative for creating facial animations for character models. Expression cloning makes it meaningful to compile a high-quality facial animation library since this data can be reused for new models. Our method transfers vertex motion vectors from a source face model to a target model having different geometric proportions and mesh structure (vertex number and connectivity). With the aid of an automated heuristic correspondence search, expression cloning typically requires a user to select fewer than ten points in the model. Cloned expression animations preserve the relative motions, dynamics, and character of the original facial animations. CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Animation; I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling - Geometric Algorithms; I.2.9 [Artificial Intelligence]: Robotics - Kinematics and Dynamics. Keywords: Deformations, Facial animation, Morphing, Neural Nets
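A toy version of the motion-transfer step might look as follows; the nearest-neighbor correspondence and the single per-axis scale factor are crude stand-ins for the paper's dense surface correspondence and locally varying transforms:

```python
import numpy as np
from scipy.spatial import cKDTree

def clone_expression(src_rest, src_anim, tgt_rest):
    """Toy expression cloning: match each target vertex to its nearest
    source vertex, copy that vertex's motion vectors, and rescale them by
    the ratio of the two models' bounding-box sizes.
    src_anim: (frames, n_src, 3) absolute vertex positions per frame."""
    corr = cKDTree(src_rest).query(tgt_rest)[1]               # nearest-neighbor match
    scale = np.ptp(tgt_rest, axis=0) / np.ptp(src_rest, axis=0)  # per-axis size ratio
    motion = src_anim - src_rest[None]                        # per-frame motion vectors
    return tgt_rest[None] + motion[:, corr] * scale           # retargeted animation
```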
Prediction risk and architecture selection for neural networks
1994
"... Abstract. We describe two important sets of tools for neural network modeling: prediction risk estimation and network architecture selection. Prediction risk is defined as the expected performance of an estimator in predicting new observations. Estimated prediction risk can be used both for estimati ..."
Abstract

Cited by 75 (2 self)
We describe two important sets of tools for neural network modeling: prediction risk estimation and network architecture selection. Prediction risk is defined as the expected performance of an estimator in predicting new observations. Estimated prediction risk can be used both for estimating the quality of model predictions and for model selection. Prediction risk estimation and model selection are especially important for problems with limited data. Techniques for estimating prediction risk include data resampling algorithms such as nonlinear cross-validation (NCV) and algebraic formulae such as the predicted squared error (PSE) and generalized prediction error (GPE). We show that exhaustive search over the space of network architectures is computationally infeasible even for networks of modest size. This motivates the use of heuristic strategies that dramatically reduce the search complexity. These strategies employ directed search algorithms, such as selecting the number of nodes via sequential network construction (SNC), and pruning inputs and weights via sensitivity-based pruning (SBP) and optimal brain damage (OBD), respectively.
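The PSE mentioned above penalizes training error by model size, PSE = MSE_train + 2*sigma^2*p/n with p the number of weights. A hedged sketch of using it to pick the number of hidden nodes (the noise variance sigma2 is assumed known here, and retraining at each size replaces the paper's sequential network construction):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def pse(model, X, y, sigma2):
    """Predicted squared error: training MSE plus a complexity penalty,
    PSE = MSE_train + 2*sigma2*p/n. (GPE replaces p by an effective
    number of parameters; that refinement is omitted here.)"""
    n = len(y)
    mse = np.mean((model.predict(X) - y) ** 2)
    p = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
    return mse + 2 * sigma2 * p / n

# select the number of hidden nodes by minimum estimated prediction risk
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
risks = {}
for h in (2, 4, 8, 16):
    m = MLPRegressor((h,), max_iter=5000, random_state=0).fit(X, y)
    risks[h] = pse(m, X, y, sigma2=0.01)
print(min(risks, key=risks.get), risks)
```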
A Fast Algorithm for Deblurring Models with Neumann Boundary Conditions
1999
"... Blur removal is an important problem in signal and image processing. The blurring matrices obtained by using the zero boundary condition (corresponding to assuming dark background outside the scene) are Toeplitz matrices for 1dimensional problems and blockToeplitz Toeplitzblock matrices for 2dim ..."
Abstract

Cited by 68 (18 self)
Blur removal is an important problem in signal and image processing. The blurring matrices obtained by using the zero boundary condition (corresponding to assuming a dark background outside the scene) are Toeplitz matrices for 1-dimensional problems and block-Toeplitz-Toeplitz-block matrices for 2-dimensional cases. They are computationally intensive to invert, especially in the block case. If the periodic boundary condition is used, the matrices become (block) circulant and can be diagonalized by discrete Fourier transform matrices. In this paper, we consider the use of the Neumann boundary condition (corresponding to a reflection of the original scene at the boundary). The resulting matrices are (block) Toeplitz-plus-Hankel matrices. We show that for symmetric blurring functions, these blurring matrices can always be diagonalized by discrete cosine transform matrices. Thus the cost of inversion is significantly lower than that of using the zero or periodic boundary conditions. We also s...
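The diagonalization result suggests a fast solver: apply the blur operator once to the first unit vector to read off its eigenvalues in the DCT basis, then invert diagonally. A 1-D Tikhonov-regularized sketch under that result (the penalty mu and the use of scipy's half-sample 'reflect' boundary are assumptions of this illustration, and the psf must be symmetric):

```python
import numpy as np
from scipy.fft import dct, idct
from scipy.ndimage import correlate1d

def neumann_deblur(b, psf, mu):
    """1-D Tikhonov deblurring under Neumann (reflective) boundaries.
    For a symmetric psf the blur matrix is Toeplitz-plus-Hankel and is
    diagonalized by the orthogonal DCT-II, so its eigenvalues follow from
    one application of the operator to the first unit vector."""
    n = len(b)
    e1 = np.zeros(n)
    e1[0] = 1.0
    Ae1 = correlate1d(e1, psf, mode='reflect')            # first column of A
    lam = dct(Ae1, norm='ortho') / dct(e1, norm='ortho')  # eigenvalues of A
    # solve (A^2 + mu*I) x = A b diagonally in the DCT domain (A symmetric)
    x_hat = lam * dct(b, norm='ortho') / (lam ** 2 + mu)
    return idct(x_hat, norm='ortho')
```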
Constructive Algorithms for Structure Learning in Feedforward Neural Networks for Regression Problems
IEEE Transactions on Neural Networks, 1997
"... In this survey paper, we review the constructive algorithms for structure learning in feedforward neural networks for regression problems. The basic idea is to start with a small network, then add hidden units and weights incrementally until a satisfactory solution is found. By formulating the whole ..."
Abstract

Cited by 66 (2 self)
In this survey paper, we review the constructive algorithms for structure learning in feedforward neural networks for regression problems. The basic idea is to start with a small network, then add hidden units and weights incrementally until a satisfactory solution is found. By formulating the whole problem as a state space search, we first describe the general issues in constructive algorithms, with special emphasis on the search strategy. A taxonomy, based on the differences in the state transition mapping, the training algorithm and the network architecture, is then presented. Keywords: constructive algorithm, structure learning, state space search, dynamic node creation, projection pursuit regression, cascade-correlation, resource-allocating network, group method of data handling. In recent years, many neural network models have been proposed for pattern classification, function approximation and regression problems. Among...
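A minimal sketch of the constructive idea, growing hidden units while held-out error improves; retraining from scratch at each size is a simplification of the incremental, weight-preserving schemes the survey actually covers:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def grow_network(X_tr, y_tr, X_val, y_val, max_hidden=32, tol=1e-3):
    """Dynamic-node-creation-style search: add hidden units one at a time
    and keep growing while validation error still improves. Stand-in for
    the survey's incremental variants (cascade-correlation, RAN, ...)."""
    best, best_err = None, np.inf
    for h in range(1, max_hidden + 1):
        net = MLPRegressor((h,), max_iter=3000, random_state=0).fit(X_tr, y_tr)
        err = np.mean((net.predict(X_val) - y_val) ** 2)
        if err < best_err - tol:
            best, best_err = net, err
        else:
            break                    # validation error stopped improving
    return best, best_err
```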
Analysis of incomplete climate data: Estimation of mean values and covariance matrices and imputation of missing values
2001
"... Estimating the mean and the covariance matrix of an incomplete dataset and filling in missing values with imputed values is generally a nonlinear problem, which must be solved iteratively. The expectation maximization (EM) algorithm for Gaussian data, an iterative method both for the estimation of m ..."
Abstract

Cited by 54 (3 self)
Estimating the mean and the covariance matrix of an incomplete dataset and filling in missing values with imputed values is generally a nonlinear problem, which must be solved iteratively. The expectation maximization (EM) algorithm for Gaussian data, an iterative method both for the estimation of mean values and covariance matrices from incomplete datasets and for the imputation of missing values, is taken as the point of departure for the development of a regularized EM algorithm. In contrast to the conventional EM algorithm, the regularized EM algorithm is applicable to sets of climate data, in which the number of variables typically exceeds the sample size. The regularized EM algorithm is based on iterated analyses of linear regressions of variables with missing values on variables with available values, with regression coefficients estimated by ridge regression, a regularized regression method in which a continuous regularization parameter controls the filtering of the noise in the data. The regularization parameter is determined by generalized cross-validation, so as to minimize, approximately, the expected mean squared error of the imputed values. The regularized EM algorithm can estimate, and exploit for the imputation of missing values, both synchronic and diachronic covariance matrices, which may contain information on spatial covariability, stationary temporal covariability, or cyclostationary temporal covariability. A test of the regularized EM algorithm with simulated surface temperature data demonstrates that the algorithm is applicable to typical sets of climate data and that it leads to more accurate estimates of the missing values than a conventional noniterative imputation technique.
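The iteration the abstract outlines (ridge-regress variables with missing values on the available ones, impute, repeat) can be caricatured in a few lines. A fixed ridge penalty replaces the paper's generalized cross-validation choice, and the covariance is taken directly from the imputed data rather than from the EM-corrected estimate:

```python
import numpy as np
from sklearn.linear_model import Ridge

def regularized_em_sketch(X, ridge_alpha=1.0, iters=50):
    """Toy regularized-EM-style imputation: repeatedly regress each
    variable's missing entries on the current imputed values of the
    other variables, using ridge regression."""
    X = X.copy()
    miss = np.isnan(X)
    col_mean = np.nanmean(X, axis=0)
    X[miss] = np.take(col_mean, np.nonzero(miss)[1])   # mean-fill to start
    for _ in range(iters):
        for j in range(X.shape[1]):
            rows = miss[:, j]
            if not rows.any():
                continue
            others = np.delete(np.arange(X.shape[1]), j)
            model = Ridge(alpha=ridge_alpha).fit(X[~rows][:, others], X[~rows, j])
            X[rows, j] = model.predict(X[rows][:, others])
    return X, X.mean(axis=0), np.cov(X, rowvar=False)
```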
A Computationally Efficient Superresolution Image Reconstruction Algorithm
2000
"... Superresolution reconstruction produces a highresolution image from a set of lowresolution images. Previous iterative methods for superresolution had not adequately addressed the computational and numerical issues for this illconditioned and typically underdetermined large scale problem. We propo ..."
Abstract

Cited by 52 (4 self)
Superresolution reconstruction produces a high-resolution image from a set of low-resolution images. Previous iterative methods for superresolution had not adequately addressed the computational and numerical issues for this ill-conditioned and typically underdetermined large-scale problem. We propose efficient block circulant preconditioners for solving the Tikhonov-regularized superresolution problem by the conjugate gradient method. We also extend to underdetermined systems the derivation of the generalized cross-validation method for automatic calculation of regularization parameters. Effectiveness of our preconditioners and regularization techniques is demonstrated with superresolution results for a simulated sequence and a forward-looking infrared (FLIR) camera image sequence.
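A sketch of the Tikhonov-regularized normal equations solved by conjugate gradients, with a shift-blur-decimate forward model. The block circulant preconditioner and the GCV choice of lam, which are the paper's contributions, are omitted, and the forward model itself is an assumption of this illustration:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg
from scipy.ndimage import gaussian_filter, shift

def superresolve(lows, shifts, factor, n, lam=1e-2, sigma=1.0):
    """Unpreconditioned CG on (A^T A + lam*I) x = A^T b for a stack of
    low-res frames, each modeled as shift -> Gaussian blur -> decimation."""
    def A(x, s):                                   # one frame's forward operator
        return gaussian_filter(shift(x.reshape(n, n), s), sigma)[::factor, ::factor]
    def At(y, s):                                  # its (approximate) adjoint
        up = np.zeros((n, n))
        up[::factor, ::factor] = y                 # zero-fill upsampling
        return shift(gaussian_filter(up, sigma), [-s[0], -s[1]]).ravel()
    def normal(x):                                 # (A^T A + lam*I) x over all frames
        out = lam * x
        for y, s in zip(lows, shifts):
            out = out + At(A(x, s), s)
        return out
    rhs = np.zeros(n * n)
    for y, s in zip(lows, shifts):
        rhs += At(y, s)
    op = LinearOperator((n * n, n * n), matvec=normal)
    x, _ = cg(op, rhs, maxiter=200)
    return x.reshape(n, n)
```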
Separable Nonlinear Least Squares: the Variable Projection Method and its Applications
Institute of Physics, Inverse Problems, 2002
"... this paper nonlinear data fitting problems which have as their underlying model a linear combination of nonlinear functions. More generally, one can also consider that there are two sets of unknown parameters, where one set is dependent on the other and can be explicitly eliminated. Models of this t ..."
Abstract

Cited by 50 (1 self)
We consider in this paper nonlinear data fitting problems which have as their underlying model a linear combination of nonlinear functions. More generally, one can also consider that there are two sets of unknown parameters, where one set is dependent on the other and can be explicitly eliminated. Models of this type are very common, and we will show a variety of applications in different fields. Inasmuch as many inverse problems can be viewed as nonlinear data fitting problems, this material will be of interest to a wide cross-section of researchers and practitioners in parameter, material or system identification, signal analysis, the analysis of spectral data, medical and biological imaging, neural networks, robotics, telecommunications and model order reduction, to name a few.
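The variable projection idea is easy to show on a sum-of-exponentials fit: the linear coefficients are eliminated inside the residual by linear least squares, so the outer solver optimizes only the nonlinear decay rates. A sketch (scipy's generic least_squares stands in for a full VARPRO implementation with the analytic Jacobian of the projected functional):

```python
import numpy as np
from scipy.optimize import least_squares

def varpro_residual(alpha, t, y):
    """Projected residual for y ~ c1*exp(-alpha1*t) + c2*exp(-alpha2*t):
    the linear coefficients c are eliminated by linear least squares,
    leaving only the nonlinear parameters alpha to the outer optimizer."""
    Phi = np.exp(-np.outer(t, alpha))          # basis matrix Phi(alpha)
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi @ c - y

t = np.linspace(0, 4, 100)
rng = np.random.default_rng(1)
y = 2 * np.exp(-1.3 * t) + 0.5 * np.exp(-3.0 * t) + 0.01 * rng.standard_normal(t.size)
fit = least_squares(varpro_residual, x0=[1.0, 4.0], args=(t, y))
print(fit.x)   # recovered decay rates
```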