Results 1–9 of 9
Multichannel Blind Deconvolution: FIR Matrix Algebra and Separation of Multipath Mixtures
, 1996
Abstract

Cited by 74 (0 self)
A general tool for multichannel and multipath problems is given in FIR matrix algebra. With Finite Impulse Response (FIR) filters (or polynomials) assuming the role played by complex scalars in traditional matrix algebra, we adapt standard eigenvalue routines, factorizations, decompositions, and matrix algorithms for use in multichannel/multipath problems. Using abstract algebra/group-theoretic concepts, information-theoretic principles, and the Bussgang property, methods of single-channel filtering and source separation of multipath mixtures are merged into a general FIR matrix framework. Techniques developed for equalization may be applied to source separation and vice versa. Potential applications of these results lie in neural networks with feedforward memory connections, wideband array processing, and in problems with a multi-input, multi-output network having channels between each source and sensor, such as source separation. Particular applications of FIR polynomial matrix alg...
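As a rough illustration of the FIR-matrix idea in this abstract (not the paper's own routines), a matrix whose entries are FIR filters can be multiplied like an ordinary matrix, with entry-wise scalar multiplication replaced by filter convolution and addition by coefficient-wise polynomial addition. The sketch below, with the hypothetical helper `fir_matmul`, assumes each entry is a 1-D array of filter taps:

```python
import numpy as np

def fir_matmul(A, B):
    """Product of two FIR matrices: entries are 1-D tap arrays,
    scalar multiply becomes np.convolve, addition stays elementwise
    polynomial addition (with zero-padding to align lengths)."""
    n, k, m = len(A), len(A[0]), len(B[0])
    assert k == len(B), "inner dimensions must agree"
    C = [[None] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = np.zeros(1)
            for l in range(k):
                term = np.convolve(np.asarray(A[i][l], float),
                                   np.asarray(B[l][j], float))
                L = max(len(acc), len(term))
                acc = (np.pad(acc, (0, L - len(acc)))
                       + np.pad(term, (0, L - len(term))))
            C[i][j] = acc
    return C

# FIR identity matrix (unit impulses on the diagonal) acts as the identity
A = [[[1.0], [0.0]], [[0.0], [1.0]]]
B = [[[1.0, 2.0], [3.0]], [[0.0], [1.0, 1.0]]]
C = fir_matmul(A, B)
```

Because entries are filters rather than scalars, the product generally lengthens each entry; this is the mechanism by which single-channel filtering results carry over to multichannel/multipath problems.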
Unsupervised Deconvolution of Sparse Spike Trains Using Stochastic Approximation
, 1996
Abstract

Cited by 32 (8 self)
This paper presents an unsupervised method for the restoration of sparse spike trains. These signals are modeled as random Bernoulli-Gaussian processes, and their unsupervised restoration requires (i) estimation of the hyperparameters that control the stochastic models of the input and noise signals and (ii) deconvolution of the pulse process. Classically, the problem is solved iteratively using a maximum generalized likelihood approach, despite its questionable statistical properties.
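For intuition about the signal model used in this abstract, a Bernoulli-Gaussian process makes each sample nonzero with some probability λ, drawing nonzero amplitudes from a Gaussian. A minimal sampler (the function name and defaults are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def bernoulli_gaussian(n, lam=0.05, sigma=1.0, rng=rng):
    """Sparse spike train: each of n samples is nonzero with
    probability lam; nonzero amplitudes are drawn from N(0, sigma^2)."""
    spikes = rng.random(n) < lam        # Bernoulli locations
    return spikes * rng.normal(0.0, sigma, n)  # Gaussian amplitudes

x = bernoulli_gaussian(20000, lam=0.05)
```

The two hyperparameters λ and σ are exactly the kind of quantities the unsupervised method must estimate before deconvolution.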
Sparsity vs. Statistical Independence from a Best-Basis Viewpoint
Abstract

Cited by 4 (0 self)
We examine the similarity and difference between sparsity and statistical independence in image representations in a very concrete setting: use the best-basis algorithm to select the sparsest basis and the least statistically dependent basis from basis dictionaries for a given dataset. In order to understand their relationship, we use synthetic stochastic processes (e.g., spike, ramp, and generalized Gaussian processes) as well as the image patches of natural scenes. Our experiments and analysis so far suggest the following: 1) both the sparsity and statistical independence criteria selected similar bases for most of our examples, with minor differences; 2) sparsity is more computationally and conceptually feasible as a basis selection criterion than statistical independence, particularly for data compression; 3) the sparsity criterion can and should be adapted to individual realizations rather than to the whole collection of realizations to achieve maximum performance; 4) the importance of orientation selectivity of the local Fourier and brushlet dictionaries was not clearly demonstrated due to the boundary effect caused by the folding and local periodization. These observations seem to encourage the pursuit of sparse representations rather than that of statistically independent representations.
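A common sparsity cost used in best-basis selection is the ℓ^p norm of the normalized coefficients (smaller means sparser). The toy example below, which is only a sketch of the criterion and not the paper's dictionaries, shows that a spike is sparsest in the standard basis while an oscillation is sparsest in a Fourier basis:

```python
import numpy as np

def sparsity_cost(coeffs, p=1.0):
    """l^p sparsity cost of a unit-energy coefficient vector;
    a smaller value means a sparser representation."""
    c = np.abs(np.asarray(coeffs))
    c = c / np.linalg.norm(c)
    return float(np.sum(c ** p))

n = 64
spike = np.zeros(n); spike[10] = 1.0                 # a single spike
wave = np.cos(2 * np.pi * 4 * np.arange(n) / n)      # a pure oscillation

# Compare the cost of each signal in the standard vs. Fourier basis
spike_std, spike_fft = sparsity_cost(spike), sparsity_cost(np.fft.fft(spike))
wave_std, wave_fft = sparsity_cost(wave), sparsity_cost(np.fft.fft(wave))
```

The best-basis algorithm applies this kind of additive cost over a dictionary tree and keeps whichever basis minimizes it for the data at hand.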
Adaptive blind equalization through quadratic pdf matching
 in Proceedings of the European Signal Processing Conference
Abstract

Cited by 3 (2 self)
In this paper we propose a new cost function for blind equalization which aims at forcing a given probability density at the output of the equalizer. In previous works based on this idea, the Kullback-Leibler distance was used as an appropriate measure of the distance between densities. Here we consider the Euclidean (quadratic) distance between the current pdf at the output of the equalizer and the target pdf. Using Parzen windowing with Gaussian kernels for pdf estimation, this quadratic distance can be easily evaluated from data. The adaptive equalization algorithm minimizes the cost function using a stochastic gradient descent approach. The algorithm is evaluated in different scenarios through computer simulations, and its performance is compared to that of a minimum Rényi's entropy approach, which is related to the proposed algorithm, and also to the conventional constant modulus algorithm (CMA).
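The reason the quadratic distance "can be easily evaluated from data" is that the integrated squared error between two Gaussian mixtures has a closed form, via the identity ∫ N(x; a, v₁) N(x; b, v₂) dx = N(a; b, v₁ + v₂). The sketch below assumes the target pdf is itself a Gaussian mixture of the same kernel width centered on the target constellation; the function name and defaults are illustrative, not the paper's:

```python
import numpy as np

def gauss(x, mu, var):
    """Gaussian density N(x; mu, var), broadcast over arrays."""
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def quadratic_pdf_distance(y, targets, h=0.3):
    """Integrated squared error between the Parzen (Gaussian-kernel,
    width h) pdf estimate of equalizer outputs y and a Gaussian-mixture
    target centered at `targets`, computed in closed form."""
    y = np.asarray(y, float)
    t = np.asarray(targets, float)
    n, m, v = len(y), len(t), h * h
    # integral of f_hat^2, g^2, and the cross term f_hat * g:
    ee = gauss(y[:, None], y[None, :], 2 * v).sum() / n ** 2
    tt = gauss(t[:, None], t[None, :], 2 * v).sum() / m ** 2
    et = gauss(y[:, None], t[None, :], 2 * v).sum() / (n * m)
    return ee + tt - 2 * et

# zero when outputs sit exactly on the target constellation, positive otherwise
d0 = quadratic_pdf_distance([-1.0, 1.0], [-1.0, 1.0])
d1 = quadratic_pdf_distance([-0.5, 0.5], [-1.0, 1.0])
```

Because every term is a sum of pairwise Gaussian evaluations, the gradient with respect to the equalizer outputs is also available in closed form, which is what makes the stochastic gradient descent update practical.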
Simulated Annealing Wavelet Estimation via Fourth-Order Cumulant Matching
, 1996
Abstract

Cited by 1 (0 self)
The fourth-order cumulant matching method has been developed recently for estimating a mixed-phase wavelet from a convolutional process. Matching between the trace cumulant and the wavelet moment is done in a minimum mean-squared error sense under the assumption of a non-Gaussian, stationary, and statistically independent reflectivity series. This leads to a highly nonlinear optimization problem, usually solved by techniques that require a certain degree of linearization and that invariably converge to the minimum closest to the initial model. Alternatively, we propose a hybrid strategy that makes use of a simulated annealing algorithm to improve the reliability of the numerical solutions by reducing the risk of being trapped in local minima. Beyond the numerical aspect, the reliability of the derived wavelets depends strongly on the amount of data available. However, by using a multidimensional taper to smooth the trace cumulant, we show that the method can be used even ...
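The non-Gaussianity assumption in this abstract matters because fourth-order cumulants vanish for Gaussian data. A minimal sketch of the simplest such statistic, the zero-lag fourth-order cumulant c₄ = E[x⁴] − 3(E[x²])² for zero-mean x (the full method matches cumulants across lags, which this sketch omits):

```python
import numpy as np

def fourth_cumulant_zero_lag(x):
    """Sample estimate of the zero-lag fourth-order cumulant
    c4 = E[x^4] - 3 (E[x^2])^2, after removing the sample mean."""
    x = np.asarray(x, float)
    x = x - x.mean()
    return float(np.mean(x ** 4) - 3.0 * np.mean(x ** 2) ** 2)

rng = np.random.default_rng(1)
gaussian_trace = rng.normal(size=100_000)                       # c4 ~ 0
sparse_trace = (rng.random(100_000) < 0.05) * rng.normal(size=100_000)
```

For a Gaussian series the estimate is near zero, so the cumulant carries no wavelet information; a sparse (spiky) reflectivity series yields a clearly nonzero value, which is what the matching criterion exploits.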