Results 1-10 of 53
A Theory of Networks for Approximation and Learning
Laboratory, Massachusetts Institute of Technology, 1989
Cited by 194 (24 self)
Abstract:
Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multidimensional function, that is, solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions, and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology-preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data.
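The regularized-network idea above can be illustrated with a minimal sketch: fit noisy samples of a one-dimensional function by a weighted sum of Gaussian radial basis functions centered on a subset of the data points, with a ridge penalty on the coefficients. The center spacing, width and regularization parameter below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal GRBF-style sketch (hypothetical parameters): approximate a function
# from noisy samples with fewer Gaussian centers than data points, solving a
# ridge-regularized least squares problem for the combination coefficients.

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(50)

centers = x[::5]    # 10 centers, fewer than the 50 samples ("generalized" RBF)
width = 0.1         # Gaussian width (assumed)
lam = 1e-3          # regularization parameter (assumed)

# Design matrix of radial responses G[i, j] = exp(-|x_i - c_j|^2 / width^2)
G = np.exp(-((x[:, None] - centers[None, :]) ** 2) / width**2)

# Ridge-regularized normal equations for the coefficients
coef = np.linalg.solve(G.T @ G + lam * np.eye(len(centers)), G.T @ y)
y_hat = G @ coef

rmse = np.sqrt(np.mean((y_hat - np.sin(2 * np.pi * x)) ** 2))
```

With the penalty set to zero and one center per data point this reduces to strict RBF interpolation; the regularization term is what turns it into the smoothing network the abstract describes.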
Analysis of incomplete climate data: Estimation of mean values and covariance matrices and imputation of missing values
2001
Cited by 54 (3 self)
Abstract:
Estimating the mean and the covariance matrix of an incomplete dataset and filling in missing values with imputed values is generally a nonlinear problem, which must be solved iteratively. The expectation maximization (EM) algorithm for Gaussian data, an iterative method both for the estimation of mean values and covariance matrices from incomplete datasets and for the imputation of missing values, is taken as the point of departure for the development of a regularized EM algorithm. In contrast to the conventional EM algorithm, the regularized EM algorithm is applicable to sets of climate data, in which the number of variables typically exceeds the sample size. The regularized EM algorithm is based on iterated analyses of linear regressions of variables with missing values on variables with available values, with regression coefficients estimated by ridge regression, a regularized regression method in which a continuous regularization parameter controls the filtering of the noise in the data. The regularization parameter is determined by generalized cross-validation, so as to minimize, approximately, the expected mean squared error of the imputed values. The regularized EM algorithm can estimate, and exploit for the imputation of missing values, both synchronic and diachronic covariance matrices, which may contain information on spatial covariability, stationary temporal covariability, or cyclostationary temporal covariability. A test of the regularized EM algorithm with simulated surface temperature data demonstrates that the algorithm is applicable to typical sets of climate data and that it leads to more accurate estimates of the missing values than a conventional non-iterative imputation technique.
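The core iteration can be sketched in a stripped-down form: regress each variable with missing entries on the others using ridge regression and replace the gaps with the fitted values, repeating until the imputations stabilize. This toy version fixes the ridge parameter and omits the covariance bias correction and GCV-based parameter selection of the full algorithm.

```python
import numpy as np

# Simplified ridge-based iterative imputation in the spirit of a regularized
# EM step (toy sketch; fixed ridge parameter, no covariance correction).

rng = np.random.default_rng(1)
n, p = 40, 5
X_true = rng.standard_normal((n, 1)) @ rng.standard_normal((1, p))  # rank-1 signal
X_true += 0.05 * rng.standard_normal((n, p))                        # small noise

mask = rng.random((n, p)) < 0.1            # ~10% of entries missing
X = np.where(mask, np.nan, X_true)

lam = 0.1                                  # ridge parameter (assumed, not GCV)
X_imp = np.where(mask, np.nanmean(X, axis=0), X)   # initialize with column means
for _ in range(20):
    for j in range(p):
        rows = mask[:, j]                  # rows where variable j is missing
        if not rows.any():
            continue
        others = [k for k in range(p) if k != j]
        A, b = X_imp[~rows][:, others], X_imp[~rows, j]
        beta = np.linalg.solve(A.T @ A + lam * np.eye(p - 1), A.T @ b)
        X_imp[rows, j] = X_imp[rows][:, others] @ beta

err = np.sqrt(np.mean((X_imp[mask] - X_true[mask]) ** 2))
```

Because the simulated data are close to rank one, the regressions recover the missing entries far better than the column-mean initialization, which is the point of the regression-based E-step.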
Sharp Adaptation for Inverse Problems With Random Noise
2000
Cited by 41 (6 self)
Abstract:
We consider a heteroscedastic sequence space setup with polynomially increasing variances of observations that allows us to treat a number of inverse problems, in particular multivariate ones. We propose an adaptive estimator that attains simultaneously exact asymptotic minimax constants on every ellipsoid of functions within a wide scale (that includes ellipsoids with polynomially and exponentially decreasing axes) and, at the same time, satisfies asymptotically exact oracle inequalities within any class of linear estimates having monotone nondecreasing weights. As an application, we construct sharp adaptive estimators in the problems of deconvolution and tomography.
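The setting can be made concrete with a small simulation: observe coefficients with polynomially growing noise variances (the signature of an ill-posed inverse problem) and shrink them with monotone non-increasing linear weights. The decay rates and the Tikhonov-type weight family below are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

# Sequence-space toy model (assumed rates): y_k = theta_k + sigma_k * xi_k,
# with sigma_k growing polynomially in k. Estimate theta by monotone linear
# shrinkage weights, as in the class of estimates the oracle inequality covers.

rng = np.random.default_rng(2)
K = 200
k = np.arange(1, K + 1)
theta = k ** -2.0                      # true coefficients on a polynomial ellipsoid
sigma = 1e-3 * k ** 1.0                # polynomially increasing noise levels
y = theta + sigma * rng.standard_normal(K)

# Tikhonov-type weights w_k = 1 / (1 + (k/m)^2): monotone non-increasing in k
m = 20                                 # bandwidth parameter (assumed)
w = 1.0 / (1.0 + (k / m) ** 2)
theta_hat = w * y

risk = np.sum((theta_hat - theta) ** 2)
naive = np.sum((y - theta) ** 2)       # unshrunk estimate for comparison
```

The shrinkage trades a small bias on the decaying tail of theta against a large variance reduction where sigma_k is big; an adaptive estimator would choose the weights from the data rather than fixing m.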
Smoothing Splines Estimators in Functional Linear Regression with Errors-in-Variables
2006
Cited by 28 (1 self)
Abstract:
This work deals with a generalization of the Total Least Squares method in the context of the functional linear model. We first propose a smoothing splines estimator of the functional coefficient of the model without noise in the covariates and obtain an asymptotic result for this estimator. Then, we adapt this estimator to the case where the covariates are noisy and also derive an upper bound on the rate of convergence. Our estimation procedure is evaluated by means of simulations.
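The finite-dimensional method being generalized here is classical Total Least Squares, which treats both the covariates and the response as noisy and is solved by an SVD of the augmented data matrix. A minimal sketch (with assumed noise levels and a hypothetical true coefficient vector):

```python
import numpy as np

# Classical TLS via the SVD: the errors-in-variables analogue of least squares,
# where noise contaminates both the design matrix A and the response b.

rng = np.random.default_rng(3)
n, p = 100, 3
A_true = rng.standard_normal((n, p))
beta_true = np.array([1.0, -2.0, 0.5])     # hypothetical true coefficients
b_true = A_true @ beta_true

A = A_true + 0.01 * rng.standard_normal((n, p))   # noise in the covariates
b = b_true + 0.01 * rng.standard_normal(n)        # noise in the response

# TLS solution: smallest right singular vector of the augmented matrix [A | b]
_, _, Vt = np.linalg.svd(np.column_stack([A, b]))
v = Vt[-1]
beta_tls = -v[:p] / v[p]
```

The functional version of the abstract replaces the coefficient vector by a function and the SVD step by a smoothing-splines regularization, but the errors-in-variables structure is the same.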
Non-Convergence of the L-Curve Regularization Parameter Selection Method
Inverse Problems, 1997
Cited by 26 (0 self)
Abstract:
The L-curve method was developed for the selection of regularization parameters in the solution of discrete systems obtained from ill-posed problems. An analysis of this method is given for selecting a parameter for Tikhonov regularization. This analysis, which is carried out in a semi-discrete, semi-stochastic setting, shows that the L-curve approach yields regularized solutions which fail to converge for a certain class of problems. A numerical example is also presented which indicates that this lack of convergence can arise in practical applications.
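The object the paper analyzes is easy to compute for a small ill-conditioned test problem: for each regularization parameter, the Tikhonov solution gives one point (residual norm, solution norm), and the L-curve heuristic picks the parameter at the "corner" of this curve. The test matrix and noise level below are illustrative assumptions.

```python
import numpy as np

# Sketch of the L-curve for Tikhonov regularization on a synthetic
# ill-conditioned problem (assumed singular-value decay and noise level).

rng = np.random.default_rng(4)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.linspace(0, 6, n)            # rapidly decaying singular values
A = U @ np.diag(s) @ U.T                     # symmetric ill-conditioned matrix
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

lams = 10.0 ** np.linspace(-8, 0, 30)
res_norms, sol_norms = [], []
for lam in lams:
    x_lam = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    res_norms.append(np.linalg.norm(A @ x_lam - b))   # one L-curve coordinate
    sol_norms.append(np.linalg.norm(x_lam))           # the other coordinate
```

As the parameter grows, the residual norm rises while the solution norm falls; the corner of the log-log plot of these pairs is the heuristic choice whose convergence the paper shows can fail.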
Tikhonov Regularization for Large Scale Problems
1997
Cited by 26 (1 self)
Abstract:
Tikhonov regularization is a powerful tool for the solution of ill-posed linear systems and linear least squares problems. The choice of the regularization parameter is a crucial step, and many methods have been proposed for this purpose. However, efficient and reliable methods for large scale problems are still missing. In this paper approximation techniques based on the Lanczos algorithm and the theory of Gauss quadrature are proposed to reduce the computational complexity for large scale problems. The new approach is applied to five different heuristics: Morozov's discrepancy principle, the Gfrerer/Raus method, the quasi-optimality criterion, generalized cross-validation, and the L-curve criterion. Numerical experiments are used to determine the efficiency and robustness of the various methods.
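To see what the Lanczos/Gauss-quadrature machinery is approximating at large scale, here is the first of those heuristics, Morozov's discrepancy principle, computed exactly via the SVD on a small synthetic problem: choose the parameter so that the Tikhonov residual matches the estimated noise norm. Problem sizes, decay rate and noise level are assumptions for illustration.

```python
import numpy as np

# Small-scale sketch of Morozov's discrepancy principle for Tikhonov
# regularization via the SVD (at large scale one would bound these
# quantities with Lanczos bidiagonalization instead of forming the SVD).

rng = np.random.default_rng(5)
n = 60
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.linspace(0, 4, n)           # assumed singular-value decay
A = U @ np.diag(s) @ V.T
x_true = rng.standard_normal(n)
noise_level = 1e-3
b = A @ x_true + noise_level * rng.standard_normal(n)
delta = noise_level * np.sqrt(n)            # estimate of the noise norm

def residual(lam):
    """Tikhonov residual norm ||A x_lam - b|| computed from the SVD."""
    x_lam = V @ (s * (U.T @ b) / (s**2 + lam))
    return np.linalg.norm(A @ x_lam - b)

# The residual is increasing in lambda: bisect on log(lambda) until it hits delta
lo, hi = 1e-12, 1e2
for _ in range(60):
    mid = np.sqrt(lo * hi)
    if residual(mid) < delta:
        lo = mid
    else:
        hi = mid
lam_star = np.sqrt(lo * hi)
```

The bisection relies only on evaluating the residual norm for trial parameters, which is exactly the quantity the paper bounds cheaply with Gauss quadrature for matrices too large to factor.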
Adaptive estimation of linear functionals in Hilbert scales from indirect white noise observations
Fields, 1999
Cited by 18 (3 self)
Abstract:
We consider adaptively estimating the value of a linear functional from indirect white noise observations. For a flexible approach, the problem is embedded in an abstract Hilbert scale. We develop an adaptive estimator that is rate optimal within a logarithmic factor simultaneously over a wide collection of balls in the Hilbert scale. It is shown that the proposed estimator has the best possible adaptive properties for a wide range of linear functionals. The case of discretized indirect white noise observations is studied, and the adaptive estimator in this setting is developed. Keywords: adaptive estimation, discretization, Hilbert scales, inverse problems, linear functionals, regularization, minimax risk.
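In coordinates, the non-adaptive version of this problem is simple to simulate: observe the coefficients of an unknown function through a compact operator plus white noise, regularize, and plug the regularized coefficients into the functional. The decay rates, the representer of the functional and the fixed regularization parameter below are all illustrative assumptions; the paper's contribution is choosing such a parameter adaptively.

```python
import numpy as np

# Plug-in estimation of a linear functional L(f) = <f, psi> from indirect
# observations y_k = b_k f_k + eps * xi_k (assumed rates; non-adaptive sketch).

rng = np.random.default_rng(6)
K = 300
k = np.arange(1, K + 1)
f = k ** -1.5                          # unknown coefficients (smoothness ball)
bk = k ** -1.0                         # operator singular values (ill-posedness)
eps = 1e-3
y = bk * f + eps * rng.standard_normal(K)

psi = k ** -2.0                        # representer of the functional (assumed)
L_true = np.sum(f * psi)

lam = 1e-4                             # fixed regularization (assumed, not adaptive)
f_hat = bk * y / (bk**2 + lam)         # Tikhonov-regularized coefficient estimates
L_hat = np.sum(f_hat * psi)
```

The estimator's accuracy hinges on how lam balances the noise amplification 1/b_k against the bias on the tail of f, which is exactly the trade-off the adaptive procedure resolves without knowing the smoothness.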
Multigrid adaptive image processing
In Proc. IEEE International Conference on Image Processing (ICIP), 1995
Cited by 16 (2 self)
Abstract:
We consider a general weighted least squares approximation problem with a membrane spline regularization term. The key parameters in this formulation are the weighting factors, which provide the possibility of a spatial adaptation. We prove that the corresponding space-varying variational problem is well posed, and propose a novel multigrid computational solution. This multiresolution relaxation scheme uses three image pyramids (input data, weights, and current solution) and allows for a very efficient computation with an effective O(N) complexity, where N is the number of pixels. This general multigrid solver can be useful for a variety of image processing tasks. In particular, we propose new multigrid solutions for noise reduction in images (adaptive smoothing spline), interpolation/reconstruction of missing image data, and image segmentation using an adaptive extension of the K-means clustering algorithm.
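A one-dimensional sketch shows both the variational problem and the role of the spatial weights: minimize a confidence-weighted data term plus a membrane (first-difference) penalty, and set the weights to zero where data are missing so the solution interpolates through the gap. Plain Gauss-Seidel relaxation below is the smoother that a multigrid scheme would accelerate across pyramid levels; one grid suffices to show the fixed point. Signal, weights and penalty strength are assumptions for illustration.

```python
import numpy as np

# 1-D weighted least squares + membrane penalty (assumed toy data):
# minimize sum_i c_i (u_i - g_i)^2 + lam * sum_i (u_{i+1} - u_i)^2.
# Gauss-Seidel sweeps relax each pixel to the local optimum in turn.

rng = np.random.default_rng(7)
N = 100
g = np.sin(np.linspace(0, np.pi, N)) + 0.2 * rng.standard_normal(N)
c = np.ones(N)                 # data-confidence weights (spatial adaptation knob)
c[40:60] = 0.0                 # zero weight = missing data: interpolate the gap
lam = 5.0                      # membrane penalty strength (assumed)

u = g.copy()
for _ in range(2000):          # plain relaxation; multigrid would need far fewer sweeps
    for i in range(N):
        nb_sum, nb_cnt = 0.0, 0
        if i > 0:
            nb_sum += u[i - 1]
            nb_cnt += 1
        if i < N - 1:
            nb_sum += u[i + 1]
            nb_cnt += 1
        # Pointwise minimizer of the local quadratic energy
        u[i] = (c[i] * g[i] + lam * nb_sum) / (c[i] + lam * nb_cnt)
```

The thousands of fine-grid sweeps needed here are precisely the cost the paper's three-pyramid multigrid scheme removes, reducing the overall work to effectively O(N).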