Results 1-10 of 353
The Jackknife and the Bootstrap for General Stationary Observations
1989
Cited by 225 (2 self)
In this paper we will always consider statistics T_N of the form T_N(X_1, ..., X_N) = T(...
Polynomial Splines and Their Tensor Products in Extended Linear Modeling
Ann. Statist., 1997
Cited by 142 (14 self)
ANOVA-type models are considered for a regression function or for the logarithm of a probability function, conditional probability function, density function, conditional density function, hazard function, conditional hazard function, or spectral density function. Polynomial splines are used to model the main effects, and their tensor products are used to model any interaction components that are included. In the special context of survival analysis, the baseline hazard function is modeled and nonproportionality is allowed. In general, the theory involves the L_2 rate of convergence for the fitted model and its components. The methodology involves least squares and maximum likelihood estimation, stepwise addition of basis functions using Rao statistics, stepwise deletion using Wald statistics, and model selection using BIC, cross-validation, or an independent test set. Publicly available software, written in C and interfaced to S/SPLUS, is used to apply this methodology to...
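The abstract's recipe (univariate polynomial spline bases whose tensor products model interactions, fit by least squares) can be sketched briefly. This is not the paper's own software; the truncated-power basis, knot placement, and toy target function below are illustrative assumptions.

```python
import numpy as np

def spline_basis(x, knots, degree=3):
    """Univariate truncated-power spline basis: 1, x, ..., x^d, (x - k)_+^d."""
    cols = [x ** p for p in range(degree + 1)]
    cols += [np.maximum(x - k, 0.0) ** degree for k in knots]
    return np.column_stack(cols)

def tensor_basis(B1, B2):
    """Row-wise tensor products of two univariate bases (interaction terms)."""
    n = B1.shape[0]
    return np.einsum('ni,nj->nij', B1, B2).reshape(n, -1)

rng = np.random.default_rng(0)
x1 = rng.uniform(0, 1, 400)
x2 = rng.uniform(0, 1, 400)
y = np.sin(2 * np.pi * x1) * x2 + 0.05 * rng.standard_normal(400)  # toy target

knots = [0.25, 0.5, 0.75]
B = tensor_basis(spline_basis(x1, knots), spline_basis(x2, knots))
coef, *_ = np.linalg.lstsq(B, y, rcond=None)   # least-squares fit
fitted = B @ coef
```

The stepwise addition/deletion and BIC-based model selection described in the abstract would then operate on the columns of `B`.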
Blind Separation of Mixture of Independent Sources Through a Maximum Likelihood Approach
In Proc. EUSIPCO, 1997
Cited by 101 (8 self)
In this paper we propose two methods for separating mixtures of independent sources without any precise knowledge of their probability distributions. They are obtained by considering a maximum likelihood solution corresponding to some given distributions of the sources and relaxing this assumption afterward. The first method is specially adapted to temporally independent non-Gaussian sources and is based on the use of nonlinear separating functions. The second method is specially adapted to correlated sources with distinct spectra and is based on the use of linear separating filters. A theoretical analysis of the performance of the methods has been made. A simple procedure for optimally choosing the separating functions from a given linear space of functions is proposed. Further, for the second method, a simple implementation based on the simultaneous diagonalization of two symmetric matrices is provided. Finally, some numerical and simulation results are given illustrating the performance...
Source Separation Using Higher Order Moments
In Proc. ICASSP, 1989
Cited by 90 (7 self)
This communication presents a simple algebraic method for the extraction of independent components in multidimensional data. Since statistical independence is a much stronger property than uncorrelatedness, it is possible, using higher-order moments, to identify source signatures in array data without any a priori model for propagation or reception, that is, without directional vector parametrization, provided that the emitting sources are independent with different probability distributions. We propose such a "blind" identification procedure. Source signatures are directly identified as covariance eigenvectors after the data have been orthonormalized and nonlinearly weighted. Potential applications to array processing are illustrated by a simulation consisting of a simultaneous range-bearing estimation with a passive array.

INTRODUCTION
For a lot of reasons (of various kinds), the most common signal processing methods deal with second-order statistics, expressed in terms of covariance matrices...
Perspectives on system identification
Plenary talk, Proceedings of the 17th IFAC World Congress, Seoul, South Korea, 2008
Cited by 77 (2 self)
System identification is the art and science of building mathematical models of dynamic systems from observed input-output data. It can be seen as the interface between the real world of applications and the mathematical world of control theory and model abstractions. As such, it is a ubiquitous necessity for successful applications. System identification is a very large topic, with different techniques that depend on the character of the models to be estimated: linear, nonlinear, hybrid, nonparametric, etc. At the same time, the area can be characterized by a small number of leading principles, e.g., to look for sustainable descriptions by proper decisions in the triangle of model complexity, information contents in the data, and effective validation. The area has many facets and there are many approaches and methods. A tutorial or a survey in a few pages is not quite possible. Instead, this presentation aims at giving an overview of the "science" side, i.e., basic principles and results, and at pointing to open problem areas in the practical, "art", side of how to approach and solve a real problem.
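As a toy instance of building a model of a dynamic system from observed input-output data, here is a generic least-squares ARX fit; the model orders, true coefficients, and noise level are illustrative assumptions, not examples from the talk:

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of an ARX model:
       y[t] = a1*y[t-1] + ... + a_na*y[t-na] + b1*u[t-1] + ... + b_nb*u[t-nb]."""
    n = max(na, nb)
    rows = [np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]])
            for t in range(n, len(y))]
    Phi = np.array(rows)                         # regressor matrix
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    return theta[:na], theta[na:]

rng = np.random.default_rng(3)
u = rng.standard_normal(2000)                    # input signal
y = np.zeros(2000)
for t in range(2, 2000):                         # simulated true system
    y[t] = (1.5 * y[t - 1] - 0.7 * y[t - 2]
            + 0.5 * u[t - 1] + 0.25 * u[t - 2]
            + 0.01 * rng.standard_normal())
a, b = fit_arx(u, y)                             # estimated coefficients
```

With a persistently exciting input (here white noise), the estimates converge to the true coefficients, which is one concrete meaning of "information contents in the data".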
Frequency content of randomly scattered signals
Wave Motion, Part I, 1990
Cited by 72 (20 self)
The statistical properties of acoustic signals reflected by a randomly layered medium are analyzed when a pulsed spherical wave issuing from a point source is incident upon it. The asymptotic analysis of stochastic equations and geometrical acoustics is used to arrive at a set of transport equations that characterize multiply scattered signals observed at the surface of the layered medium. The results of extensive numerical simulations are presented, illustrating the scope of the theory. A number of inverse problems for randomly layered media are also formulated where we...
Information-Theoretic Analysis of Interscale and Intrascale Dependencies Between Image Wavelet Coefficients
IEEE Transactions on Image Processing, 2001
Cited by 70 (1 self)
This paper presents an information-theoretic analysis of statistical dependencies between image wavelet coefficients. The dependencies are measured using mutual information, which has a fundamental relationship to data compression, estimation, and classification performance.
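The measurement itself is easy to sketch: take a wavelet subband and estimate the mutual information between neighboring coefficients from their joint histogram. The Haar filter, the synthetic smoothed-noise image, and the bin count below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar transform: approximation plus three detail bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2
    d = (img[0::2, :] - img[1::2, :]) / 2
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def mutual_information(x, y, bins=16):
    """Plug-in MI estimate (in bits) from a 2D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def smooth(x):
    """Separable [1, 2, 1]/4 smoothing with edge padding."""
    p = np.pad(x, 1, mode='edge')
    x = (p[:-2, 1:-1] + 2 * p[1:-1, 1:-1] + p[2:, 1:-1]) / 4
    p = np.pad(x, 1, mode='edge')
    return (p[1:-1, :-2] + 2 * p[1:-1, 1:-1] + p[1:-1, 2:]) / 4

rng = np.random.default_rng(4)
img = rng.standard_normal((128, 128))
for _ in range(12):
    img = smooth(img)                    # correlated synthetic "image"

_, lh, _, _ = haar2d(img)
# Intrascale dependency: adjacent coefficients vs. a shuffled (independent) baseline
mi_adjacent = mutual_information(lh[:, :-1].ravel(), lh[:, 1:].ravel())
mi_shuffled = mutual_information(lh[:, :-1].ravel(),
                                 rng.permutation(lh[:, 1:].ravel()))
```

The shuffled baseline matters in practice: plug-in histogram estimates are biased upward, so dependencies should be judged against it rather than against zero.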
Sequential Karhunen-Loève Basis Extraction and its Application to Images
IEEE Transactions on Image Processing
Cited by 57 (0 self)
The Karhunen-Loève (KL) transform is an optimal method for approximating a set of vectors and has been used in image processing and computer vision for several tasks. Its computational demands and its batch calculation nature have limited its application. Here we present a new, sequential algorithm for calculating the KL basis, which is faster in typical applications and is especially advantageous for image sequences: the KL basis calculation is done with much lower delay and allows for dynamic updating of object databases for recognition. Systematic tests of the implemented algorithm show that these advantages are indeed obtained with the same accuracy available from batch KL algorithms.

1 Introduction
The Karhunen-Loève (KL) transform [1] is a preferred method for approximating a set of vectors by a low-dimensional subspace. The method provides the optimal subspace, spanned by the KL basis, which minimizes the MSE between the given set of vectors and their projections on the subspace...
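A minimal sketch of a sequential KL (SVD) update in the spirit of the abstract: fold each new vector into a small augmented matrix, re-decompose it, and truncate back to rank k. The seeding, truncation rank, and toy data below are assumptions, not the paper's exact algorithm:

```python
import numpy as np

def update_basis(U, s, x, k):
    """Update a rank-k KL basis U with singular values s using new vector x."""
    p = U.T @ x                    # coefficients of x in the current basis
    r = x - U @ p                  # residual orthogonal to the basis
    rn = np.linalg.norm(r)
    # Small (k+1)x(k+1) matrix whose SVD updates the factorization
    Q = np.zeros((len(s) + 1, len(s) + 1))
    Q[:len(s), :len(s)] = np.diag(s)
    Q[:len(s), -1] = p
    Q[-1, -1] = rn
    Uq, sq, _ = np.linalg.svd(Q)
    Uext = np.column_stack([U, (r / rn if rn > 1e-12 else np.zeros_like(x))])
    Unew = Uext @ Uq
    return Unew[:, :k], sq[:k]     # truncate back to rank k

rng = np.random.default_rng(5)
# Vectors drawn from a 3-dimensional subspace of R^20, plus small noise
W = rng.standard_normal((20, 3))
X = W @ rng.standard_normal((3, 200)) + 0.01 * rng.standard_normal((20, 200))

k = 3
U, s, _ = np.linalg.svd(X[:, :k], full_matrices=False)  # seed with first k vectors
for i in range(k, 200):
    U, s = update_basis(U, s, X[:, i], k)
```

Each update costs an SVD of a (k+1)x(k+1) matrix instead of a decomposition of the full data matrix, which is where the low delay for image sequences comes from.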
Bootstraps for Time Series
1999
Cited by 56 (4 self)
We compare and review block, sieve, and local bootstraps for time series and thereby illuminate theoretical facts as well as performance on finite-sample data. Our (re)view is selective, with the intention to get a new and fair picture of some particular aspects of bootstrapping time series. The generality of the block bootstrap is contrasted with sieve bootstraps. We discuss implementational advantages and disadvantages and argue that two types of sieves outperform the block method, each of them in its own important niche, namely linear and categorical processes, respectively. Local bootstraps, designed for nonparametric smoothing problems, are easy to use and implement but exhibit in some cases low performance.

Key words and phrases: autoregression, block bootstrap, categorical time series, context algorithm, double bootstrap, linear process, local bootstrap, Markov chain, sieve bootstrap, stationary process.

1 Introduction
Bootstrapping can be viewed as simulating a statistic or statistical pro...
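Of the three families compared, the block bootstrap is the easiest to sketch: resample overlapping blocks of the series and concatenate them, so that short-range dependence survives inside each block. The AR(1) test series, block length, and replication count below are illustrative choices, not the paper's experiments:

```python
import numpy as np

def block_bootstrap(x, block_len, rng):
    """Moving-block bootstrap: draw overlapping blocks and concatenate."""
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    return np.concatenate([x[s:s + block_len] for s in starts])[:n]

rng = np.random.default_rng(6)
n = 500
x = np.zeros(n)
for t in range(1, n):                       # AR(1): dependent data, where the
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()  # i.i.d. bootstrap fails

# Bootstrap standard error of the sample mean, block vs. naive i.i.d. resampling
se_block = np.array([block_bootstrap(x, 25, rng).mean()
                     for _ in range(1000)]).std()
se_iid = np.array([rng.choice(x, size=n, replace=True).mean()
                   for _ in range(1000)]).std()
```

For positively autocorrelated data, `se_block` exceeds `se_iid`, reflecting that the i.i.d. bootstrap destroys the dependence and understates the variance of the mean.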
Measures of Fit for Calibrated Models
Center for International Business Cycle Research, Columbia Business School, 1993
Cited by 54 (1 self)
This paper suggests a new procedure for evaluating the fit of a dynamic structural economic model. The procedure begins by augmenting the variables in the model with just enough stochastic error so that the model can exactly match the second moments of the actual data. Measures of fit for the model can then be constructed on the basis of the size of this error. The procedure is applied to a standard real business cycle model. Over the business cycle frequencies, the model must be augmented with a substantial error to match data for the postwar U.S. economy. Lower bounds on the variance of the error range from 40 percent to 60 percent of the variance in the actual data.