Results 1–6 of 6
Overview of total least-squares methods
Signal Processing, 2007
Abstract

Cited by 68 (9 self)
We review the development and extensions of the classical total least squares method and describe algorithms for its generalization to weighted and structured approximation problems. In the generic case, the classical total least squares problem has a unique solution, which is given in analytic form in terms of the singular value decomposition of the data matrix. The weighted and structured total least squares problems have no such analytic solution and are currently solved numerically by local optimization methods. We explain how special structure of the weight matrix and the data matrix can be exploited for efficient cost function and first derivative computation. This allows one to obtain computationally efficient solution methods. The total least squares family of methods has a wide range of applications in system theory, signal processing, and computer algebra. We describe the applications for deconvolution, linear prediction, and errors-in-variables system identification.
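The generic-case analytic solution via the SVD that the abstract refers to can be sketched in a few lines of NumPy (the data and variable names below are illustrative, not from the paper):

```python
import numpy as np

# Classical total least squares (TLS) for A x ≈ b: take the SVD of the
# augmented matrix [A b]; in the generic case the solution is read off
# the right singular vector of the smallest singular value.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(20)

C = np.column_stack([A, b])      # augmented data matrix [A b]
_, _, Vt = np.linalg.svd(C)
v = Vt[-1]                       # right singular vector of smallest singular value
x_tls = -v[:-1] / v[-1]          # generic case: last component of v is nonzero
print(x_tls)                     # close to x_true for small noise
```

The weighted and structured variants discussed in the paper have no such closed form and require iterative local optimization.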
Software for weighted structured low-rank approximation
Abstract

Cited by 28 (17 self)
A software package is presented that computes locally optimal solutions to low-rank approximation problems with the following features: • mosaic Hankel structure constraint on the approximating matrix, • weighted 2-norm approximation criterion, • fixed elements in the approximating matrix, • missing elements in the data matrix, and • linear constraints on an approximating matrix’s left kernel basis. It implements a variable projection type algorithm and allows the user to choose standard local optimization methods for the solution of the parameter optimization problem. For an m×n data matrix, with n > m, the computational complexity of the cost function and derivative evaluation is O(m²n). The package is suitable for applications with n ≫ m. In statistical estimation and data modeling (the main application areas of the package), n ≫ m corresponds to modeling a large amount of data by a low-complexity model. Performance results on benchmark system identification problems from the database DAISY and approximate common divisor problems are presented.
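The package itself implements a variable-projection algorithm; as a simpler illustration of what a Hankel structured low-rank approximation does, here is a Cadzow-style alternating-projections sketch (not the package's method), alternating SVD rank truncation with projection back onto Hankel structure by anti-diagonal averaging:

```python
import numpy as np

def hankel(w, L):
    # Build an L x (len(w)-L+1) Hankel matrix from the series w.
    n = len(w) - L + 1
    return np.array([w[i:i + n] for i in range(L)])

def cadzow(w, L, r, iters=50):
    # Alternate rank-r SVD truncation with projection back onto
    # Hankel structure (averaging along anti-diagonals).
    w = np.asarray(w, float).copy()
    for _ in range(iters):
        H = hankel(w, L)
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        H = (U[:, :r] * s[:r]) @ Vt[:r]
        acc = np.zeros_like(w)
        cnt = np.zeros_like(w)
        for i in range(H.shape[0]):
            for j in range(H.shape[1]):
                acc[i + j] += H[i, j]
                cnt[i + j] += 1
        w = acc / cnt
    return w

# Noisy samples of a single decaying exponential: the exact signal
# gives a rank-1 Hankel matrix, so we approximate with r = 1.
t = np.arange(30)
clean = 0.9 ** t
noisy = clean + 0.05 * np.random.default_rng(1).standard_normal(30)
denoised = cadzow(noisy, L=10, r=1)
```

Unlike this sketch, the package's variable-projection formulation handles weights, fixed and missing elements, and mosaic Hankel structure, with O(m²n) cost per evaluation.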
Structured low rank approximations of the Sylvester resultant matrix for approximate GCDs of Bernstein basis polynomials
, 2008
Abstract

Cited by 3 (1 self)
A structured low rank approximation of the Sylvester resultant matrix S(f, g) of the Bernstein basis polynomials f = f(y) and g = g(y), for the determination of their approximate greatest common divisors (GCDs), is computed using the method of structured total least norm. Since the GCD of f(y) and g(y) is equal to the GCD of f(y) and αg(y), where α is an arbitrary nonzero constant, it is more appropriate to consider a structured low rank approximation S(f̃, g̃) of S(f, αg), where the polynomials f̃ = f̃(y) and g̃ = g̃(y) approximate the polynomials f(y) and αg(y), respectively. Different values of α yield different structured low rank approximations S(f̃, g̃), and therefore different approximate GCDs. It is shown that the inclusion of α allows one to obtain considerably improved approximations, as measured by the decrease of the singular values σi of S(f̃, g̃), with respect to the approximation obtained when the default value α = 1 is used. An example that illustrates the theory is presented and future work is discussed.
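The effect of the scaling α can be seen directly on the singular values of the Sylvester matrix. The paper works in the Bernstein basis; for a self-contained illustration the sketch below uses the power basis, with made-up nearly-coprime data (none of it from the paper):

```python
import numpy as np

def sylvester(f, g):
    # Sylvester matrix of two polynomials given by coefficient lists
    # (highest degree first), power basis for illustration only.
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):
        S[i, i:i + m + 1] = f
    for i in range(m):
        S[n + i, i:i + n + 1] = g
    return S

# f and g share the exact root y = 1; g is perturbed slightly,
# modeling inexact coefficient data.
f = np.array([1.0, -3.0, 2.0])         # (y - 1)(y - 2)
g = np.array([1.0, -4.0, 3.0]) + 1e-3  # (y - 1)(y - 3) + noise

# The smallest singular value of S(f, αg), which an approximate GCD
# method tries to drive down, depends on the choice of α.
for alpha in (0.1, 1.0, 10.0):
    s = np.linalg.svd(sylvester(f, alpha * g), compute_uv=False)
    print(alpha, s[-1])
```

When f and g have an exact common root, S(f, g) is exactly singular; inexact coefficients make the smallest singular value small but nonzero, and the paper shows how choosing α well improves the structured approximation.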
APPROXIMATE GCD OF INEXACT UNIVARIATE POLYNOMIALS
, 2009
Abstract
The problem of finding the greatest common divisor (GCD) of univariate polynomials appears in many engineering fields. Although its formulation is well known, it is an ill-posed problem that entails numerous difficulties when the coefficients of the polynomials are not known with total accuracy, as, for example, when they come from measurement data. In this work we propose a novel GCD estimation method designed to cope with such inaccuracies. An example of recovery of transient impulsive signals is provided to show the performance of the proposed method working on measurement data.
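The abstract does not detail the estimation method; a standard baseline such methods build on is that the degree of the (approximate) GCD equals the rank deficiency of the Sylvester matrix of the two polynomials. A hypothetical sketch of degree estimation for inexact coefficients (function names and tolerance are our own):

```python
import numpy as np

def sylvester(f, g):
    # Sylvester matrix of two polynomials (coefficients, highest degree first).
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):
        S[i, i:i + m + 1] = f
    for i in range(m):
        S[n + i, i:i + n + 1] = g
    return S

def agcd_degree(f, g, tol=1e-6):
    # deg(gcd) equals the rank deficiency of the Sylvester matrix;
    # count singular values below a relative threshold.
    s = np.linalg.svd(sylvester(f, g), compute_uv=False)
    return int(np.sum(s < tol * s[0]))

# f = (y-1)^2 (y-2) and g = (y-1)^2 (y+3): exact GCD (y-1)^2, degree 2.
f = np.array([1.0, -4.0, 5.0, -2.0])
g = np.array([1.0, 1.0, -5.0, 3.0])

# Tiny coefficient perturbations mimic inexact measurement data.
rng = np.random.default_rng(0)
print(agcd_degree(f + 1e-9 * rng.standard_normal(4),
                  g + 1e-9 * rng.standard_normal(4)))
```

The threshold-based rank count is what makes the problem ill-posed in practice: the estimated degree jumps as the tolerance crosses the gap in the singular value spectrum.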