## Regularization Theory and Neural Networks Architectures (1995)

Venue: Neural Computation

Citations: 309 (31 self)

### BibTeX

```bibtex
@ARTICLE{Girosi95regularizationtheory,
  author  = {Federico Girosi and Michael Jones and Tomaso Poggio},
  title   = {Regularization Theory and Neural Networks Architectures},
  journal = {Neural Computation},
  year    = {1995},
  volume  = {7},
  pages   = {219--269}
}
```

### Abstract

We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, som...
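The regularization networks described in the abstract take a concrete, simple form for the RBF subclass: a network with one "hidden unit" per data point, whose coefficients solve a regularized linear system. As a minimal illustrative sketch (not the paper's method; the Gaussian kernel, bandwidth `sigma`, and regularization weight `lam` are assumptions chosen for the example):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian RBF matrix: K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_rbf(X, y, lam=1e-2, sigma=1.0):
    # Regularization network: the smoothness functional plus data term
    # yields coefficients c solving (K + lam * I) c = y
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict_rbf(X_train, c, X_new, sigma=1.0):
    # f(x) = sum_i c_i G(x - x_i): one hidden unit centered on each datum
    return gaussian_kernel(X_new, X_train, sigma) @ c

# Toy usage: recover a noisy sine from 50 samples
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=50)
c = fit_rbf(X, y, lam=1e-2, sigma=0.8)
yhat = predict_rbf(X, c, X, sigma=0.8)
```

The additive and ridge variants the abstract mentions arise from the same scheme with different basis functions in place of the radial Gaussian.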