Results 1 – 10 of 116
Scale-Space for Discrete Signals
 IEEE Transactions on Pattern Analysis and Machine Intelligence, 1990
Abstract

Cited by 96 (22 self)
We address the formulation of a scale-space theory for discrete signals. In one dimension it is possible to characterize the smoothing transformations completely and an exhaustive treatment is given, answering the following two main questions: 1. Which linear transformations remove structure in the sense that the number of local extrema (or zero-crossings) in the output signal does not exceed the number of local extrema (or zero-crossings) in the original signal? 2. How should one create a multiresolution family of representations with the property that a signal at a coarser level of scale never contains more structure than a signal at a finer level of scale? We propose that there is only one reasonable way to define a scale-space for 1D discrete signals comprising a continuous scale parameter, namely by (discrete) convolution with the family of kernels T(n; t) = e^{−t} I_n(t), where I_n are the modified Bessel functions of integer order. Similar arguments applied in the continuous case uniquely lead to the Gaussian kernel.
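The kernel named in the abstract is easy to compute. The following sketch (not from the paper; the function names, series truncation, and kernel radius are my own choices) evaluates T(n; t) = e^{−t} I_n(t) via the power series for the modified Bessel function, which is adequate for small scale parameters t:

```python
import math

def bessel_i(n, t, terms=60):
    """Modified Bessel function of integer order n via its power series,
    I_n(t) = sum_m (t/2)^(2m+n) / (m! (m+n)!)."""
    return sum((t / 2.0) ** (2 * m + n) / (math.factorial(m) * math.factorial(m + n))
               for m in range(terms))

def discrete_gaussian_kernel(t, radius):
    """The kernel T(n; t) = exp(-t) * I_n(t), truncated to |n| <= radius."""
    return [math.exp(-t) * bessel_i(abs(n), t) for n in range(-radius, radius + 1)]

kernel = discrete_gaussian_kernel(2.0, 10)
# The kernel is symmetric, peaked at n = 0, and (up to the truncation to
# |n| <= radius) sums to 1, since the sum of I_n(t) over all integers n is e^t.
```

Convolving a discrete signal with this kernel for increasing t gives the one-parameter scale-space family the abstract argues is the unique reasonable choice.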
Learning in Linear Neural Networks: a Survey
 IEEE Transactions on Neural Networks, 1995
Abstract

Cited by 56 (4 self)
Networks of linear units are the simplest kind of networks, where the basic questions related to learning, generalization, and self-organisation can sometimes be answered analytically. We survey most of the known results on linear networks, including: (1) backpropagation learning and the structure of the error function landscape; (2) the temporal evolution of generalization; (3) unsupervised learning algorithms and their properties. The connections to classical statistical ideas, such as principal component analysis (PCA), are emphasized as well as several simple but challenging open questions. A few new results are also spread across the paper, including an analysis of the effect of noise on backpropagation networks and a unified view of all unsupervised algorithms.

Keywords: linear networks, supervised and unsupervised learning, Hebbian learning, principal components, generalization, local minima, self-organisation.

I. Introduction. This paper addresses the problems of supervise...
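The PCA connection the survey emphasizes can be seen in a few lines with Oja's single-unit Hebbian rule, a standard unsupervised rule from this literature (the data distribution, step size, and epoch count below are my own choices for a toy run):

```python
import math
import random

random.seed(0)

def oja_principal_component(samples, eta=0.01, epochs=5):
    """Oja's Hebbian rule: w += eta * y * (x - y * w), with y = <w, x>.
    The weight vector converges to a unit vector along the top principal
    component of the (zero-mean) data."""
    w = [1.0, 1.0]
    for _ in range(epochs):
        for x in samples:
            y = w[0] * x[0] + w[1] * x[1]
            w = [w[i] + eta * y * (x[i] - y * w[i]) for i in range(2)]
    return w

# Zero-mean data with variance 5 along the first axis and 1 along the second,
# so the true principal component is (+-1, 0).
samples = [(random.gauss(0, math.sqrt(5)), random.gauss(0, 1)) for _ in range(2000)]
w = oja_principal_component(samples)
```

The decay term −eta·y²·w is what keeps the weight vector bounded; without it the plain Hebbian update eta·y·x grows without limit.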
Computing the Singular Value Decomposition with High Relative Accuracy
 Linear Algebra Appl., 1997
Abstract

Cited by 55 (12 self)
We analyze when it is possible to compute the singular values and singular vectors of a matrix with high relative accuracy. This means that each computed singular value is guaranteed to have some correct digits, even if the singular values have widely varying magnitudes. This is in contrast to the absolute accuracy provided by conventional backward stable algorithms, which in general only guarantee correct digits in the singular values with large enough magnitudes. It is of interest to compute the tiniest singular values with several correct digits, because in some cases, such as finite element problems and quantum mechanics, it is the smallest singular values that have physical meaning, and should be determined accurately by the data. Many recent papers have identified special classes of matrices where high relative accuracy is possible, since it is not possible in general. The perturbation theory and algorithms for these matrix classes have been quite different, motivating us to seek a co...
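A 2×2 toy (my own formulas, not the paper's algorithms) shows why relative accuracy requires care even in exact-looking formulas: the textbook expression sqrt((t − disc)/2) for the small singular value cancels catastrophically, while computing it as |det A| / σ₁ keeps full relative accuracy:

```python
import math

def singular_values_2x2(a, b, c, d):
    """Singular values of [[a, b], [c, d]].
    sigma_max comes from the eigenvalues of A^T A; sigma_min is computed as
    |det A| / sigma_max, which keeps full relative accuracy.  The naive
    sqrt((t - disc) / 2) would lose all digits of a tiny sigma_min."""
    t = a * a + b * b + c * c + d * d        # trace of A^T A = s1^2 + s2^2
    det = abs(a * d - b * c)                 # s1 * s2
    disc = math.sqrt(max(t * t - 4.0 * det * det, 0.0))
    s1 = math.sqrt((t + disc) / 2.0)
    s2 = det / s1 if s1 > 0.0 else 0.0
    return s1, s2

# A matrix with singular values of widely varying magnitude:
s1, s2 = singular_values_2x2(1.0, 0.0, 0.0, 1e-10)
```

For diag(1, 1e-10), t and disc both round to 1.0 in double precision, so the naive formula returns 0 exactly; det / s1 returns 1e-10 with no error at all. Scaling that observation to structured n×n classes is what the paper's algorithms are about.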
Exponential Stability for Nonlinear Filtering
, 1996
Abstract

Cited by 54 (2 self)
We study the a.s. exponential stability of the optimal filter w.r.t. its initial conditions. A bound is provided on the exponential rate (equivalently, on the memory length of the filter) for a general setting both in discrete and in continuous time, in terms of Birkhoff's contraction coefficient. Criteria for exponential stability and explicit bounds on the rate are given in the specific cases of a diffusion process on a compact manifold, and discrete time Markov chains on both continuous and discrete countable state spaces.
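The discrete-time mechanism can be seen on a toy two-state HMM (my own example; the paper's setting is far more general). Birkhoff's contraction coefficient of the transition matrix bounds how fast two filters started from different priors merge:

```python
import math

def birkhoff_tau(P):
    """Birkhoff contraction coefficient of a strictly positive matrix:
    tau(P) = (1 - sqrt(phi)) / (1 + sqrt(phi)), where phi is the minimum
    cross-ratio P[i][k] P[j][l] / (P[j][k] P[i][l]).  tau < 1 means every
    filter step contracts the Hilbert projective metric by at least tau."""
    n = len(P)
    phi = min(P[i][k] * P[j][l] / (P[j][k] * P[i][l])
              for i in range(n) for j in range(n)
              for k in range(n) for l in range(n))
    return (1.0 - math.sqrt(phi)) / (1.0 + math.sqrt(phi))

def filter_step(p, P, lik):
    """One predict-and-update step of the discrete HMM filter."""
    n = len(p)
    pred = [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]
    post = [pred[j] * lik[j] for j in range(n)]
    z = sum(post)
    return [x / z for x in post]

P = [[0.7, 0.3], [0.4, 0.6]]              # transition matrix
liks = [[0.8, 0.2], [0.3, 0.7]] * 10      # a fixed observation-likelihood sequence
p, q = [0.9, 0.1], [0.1, 0.9]             # two very different priors
for lik in liks:
    p, q = filter_step(p, P, lik), filter_step(q, P, lik)
```

The likelihood update is an isometry of the Hilbert metric, so only the prediction step contracts; this is exactly why the bound involves the transition kernel's contraction coefficient.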
Total positivity: tests and parametrizations
 Math. Intelligencer
Abstract

Cited by 40 (8 self)
A matrix is totally positive (resp. totally nonnegative) if all its minors are positive (resp. nonnegative) real numbers. The first systematic study of these classes of matrices was undertaken in the 1930s by F. R. Gantmacher and M. G. Krein [20, 21, 22], who established their remarkable spectral properties (in particular,
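The definition can be checked directly by brute force, enumerating every minor (exponentially many, which is precisely why the paper's efficient tests matter; the Vandermonde example below is mine):

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    """Determinant by cofactor expansion along the first row (fine for small matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def is_totally_positive(A):
    """True iff every minor of A, of every size, is strictly positive."""
    n, m = len(A), len(A[0])
    for k in range(1, min(n, m) + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(m), k):
                if det([[A[i][j] for j in cols] for i in rows]) <= 0:
                    return False
    return True

# A classical example: the Vandermonde matrix on increasing positive nodes
# 1 < 2 < 3 is totally positive.
V = [[Fraction(x) ** j for j in range(3)] for x in (1, 2, 3)]
```

Exact rational arithmetic avoids false sign verdicts on borderline minors; for "totally nonnegative" replace `<= 0` with `< 0`.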
LOOP-ERASED WALKS AND TOTAL POSITIVITY
, 2000
Abstract

Cited by 24 (0 self)
We consider matrices whose elements enumerate weights of walks in planar directed weighted graphs (not necessarily acyclic). These matrices are totally nonnegative; more precisely, all their minors are formal power series in edge weights with nonnegative coefficients. A combinatorial explanation of this phenomenon involves loop-erased walks. Applications include total positivity of hitting matrices of Brownian motion in planar domains.
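The combinatorial device named in the abstract, chronological loop erasure, is simple to state in code (a generic sketch of the standard operation, not code from the paper): scan the walk and, whenever a vertex repeats, delete the cycle it closes.

```python
def loop_erase(walk):
    """Chronological loop erasure of a walk (a list of vertices):
    whenever a vertex reappears, cut back to its first visit, so the
    result is a self-avoiding path."""
    erased = []
    for v in walk:
        if v in erased:
            # v closes a loop: keep the prefix up to its first visit
            erased = erased[:erased.index(v) + 1]
        else:
            erased.append(v)
    return erased
```

For example, the walk 0 → 1 → 2 → 1 → 3 erases to 0 → 1 → 3: the excursion 1 → 2 → 1 is removed. Summing walk weights grouped by their loop erasure is how the paper explains the nonnegativity of the minors.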
Resonances in One Dimension and Fredholm Determinants
, 2000
Abstract

Cited by 24 (1 self)
We discuss resonances for Schrödinger operators in whole- and half-line problems. One of our goals is to connect the Fredholm determinant approach of Froese to the Fourier transform approach of Zworski. Another is to prove a result on the number of antibound states: namely, in a half-line problem there is an odd number of antibound states between any two bound states.
Splines as linear combinations of B-splines. A Survey
, 1976
Abstract

Cited by 22 (2 self)
This paper is intended to serve as a postscript to the fundamental 1966 paper by Curry and Schoenberg on B-splines. It is also intended to promote the point of view that B-splines are truly basic splines: B-splines express the essentially local, but not completely local, character of splines; certain facts about splines take on their most striking form when put into B-spline terms, and many theorems about splines are most easily proved with the aid of B-splines; the computational determination of a specific spline from some information about it is usually facilitated when B-splines are used in its construction.
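The "essentially local" character shows up concretely in the standard recursive evaluation of a single B-spline (a generic sketch of the Cox–de Boor recursion on a uniform knot sequence of my choosing, not code from the survey):

```python
def bspline(i, k, t, knots):
    """B-spline B_{i,k}(t) of order k (degree k - 1) on the given knot
    sequence, via the Cox-de Boor recursion.  B_{i,k} is supported only
    on [knots[i], knots[i + k]] -- the local character of splines."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k - 1] > knots[i]:
        left = ((t - knots[i]) / (knots[i + k - 1] - knots[i])
                * bspline(i, k - 1, t, knots))
    if knots[i + k] > knots[i + 1]:
        right = ((knots[i + k] - t) / (knots[i + k] - knots[i + 1])
                 * bspline(i + 1, k - 1, t, knots))
    return left + right

# Uniform knots; any spline on them is a linear combination of these basis
# functions, and where enough of them overlap they sum to exactly 1.
knots = list(range(12))
```

Writing a spline as a linear combination of these nonnegative, locally supported, partition-of-unity functions is exactly the viewpoint the survey promotes.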
Log-concavity and the maximum entropy property of the Poisson distribution
 Stochastic Processes and their Applications
Abstract

Cited by 22 (10 self)
We prove that the Poisson distribution maximises entropy in the class of ultra-log-concave distributions, extending a result of Harremoës. The proof uses ideas concerning log-concavity, and a semigroup action involving adding Poisson variables and thinning. We go on to show that the entropy is a concave function along this semigroup.

1 Maximum entropy distributions. It is well-known that the distributions which maximise entropy under certain very natural conditions take a simple form. For example, among random variables with fixed mean and variance the entropy is maximised by the normal distribution. Similarly, for random variables with positive support and fixed mean, the entropy is maximised by the exponential distribution. The standard technique for proving such results uses the Gibbs inequality, and exploits the fact that, given a function f(x), and fixing Λ(p) = ∫ p(x)f(x) dx, the maximum entropy density is of the form α exp(−βf(x)).

Example 1.1 For a density p with mean µ and variance σ², write φ_{µ,σ²} for the density of a N(µ, σ²) random variable, and define the function Λ(p) = −∫ p(x) log φ_{µ,σ²}(x) dx.
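The theorem's flavour can be checked numerically (a sketch of my own, with my own truncation point): a binomial distribution is ultra-log-concave, so its entropy should not exceed that of the Poisson distribution with the same mean.

```python
import math

def poisson_pmf(lam, n):
    """Poisson(lam) probabilities on {0, ..., n}; the truncated tail is negligible
    for n much larger than lam."""
    return [math.exp(-lam) * lam ** k / math.factorial(k) for k in range(n + 1)]

def binomial_pmf(n, p):
    """Binomial(n, p) probabilities; binomials are ultra-log-concave."""
    return [math.comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)]

def entropy(pmf):
    """Shannon entropy in nats."""
    return -sum(q * math.log(q) for q in pmf if q > 0.0)

lam = 2.0
h_poisson = entropy(poisson_pmf(lam, 50))
h_binomial = entropy(binomial_pmf(10, lam / 10))  # Binomial(10, 0.2): same mean 2
```

The binomial here arises from Poisson-like thinning, so the comparison is a one-instance illustration of the semigroup picture in the abstract, not a proof.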