## An affine scaling methodology for best basis selection (1999)


### Download Links

- [dsp.rice.edu]
- [www.dsp.ece.rice.edu]
- DBLP

### Other Repositories/Bibliography

Venue: IEEE Trans. Signal Processing

Citations: 78 (11 self)

### BibTeX

@ARTICLE{Rao99anaffine,
  author  = {Bhaskar D. Rao and Kenneth Kreutz-Delgado},
  title   = {An affine scaling methodology for best basis selection},
  journal = {IEEE Trans. Signal Processing},
  year    = {1999},
  pages   = {187--200}
}


### Abstract

A methodology is developed to derive algorithms for optimal basis selection by minimizing diversity measures proposed by Wickerhauser and Donoho. These measures include the p-norm-like (p ≤ 1) diversity measures and the Gaussian and Shannon entropies. The algorithm development methodology uses a factored representation for the gradient and involves successive relaxation of the Lagrangian necessary condition. This yields algorithms that are intimately related to the Affine Scaling Transformation (AST) based methods commonly employed by the interior point approach to nonlinear optimization. The algorithms minimizing the p-norm-like (p ≤ 1) diversity measures are equivalent to a recently developed class of algorithms called FOCal Underdetermined System Solver (FOCUSS). The general nature of the methodology provides a systematic approach for deriving this class of algorithms and a natural mechanism for extending them. It also facilitates a better understanding of the convergence behavior and a strengthening of the convergence results. The Gaussian entropy minimization algorithm is shown to be equivalent to a well-behaved p = 0 norm-like optimization algorithm. Computer experiments demonstrate that the p-norm-like and the Gaussian entropy algorithms perform well, converging to sparse solutions. The Shannon entropy algorithm produces solutions that are concentrated but are shown to not converge to a fully sparse solution.
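The FOCUSS class of algorithms mentioned in the abstract can be illustrated with a minimal NumPy sketch: each pass solves an affine-scaled minimum-norm problem, and entries of the iterate that shrink toward zero are progressively de-emphasized, driving the solution toward sparsity. The function name, defaults, and stopping rule below are illustrative, not the paper's exact formulation.

```python
import numpy as np

def focuss(A, b, p=0.5, iters=100, tol=1e-8):
    """Sketch of a FOCUSS-style iteration for sparse solutions of A x = b.

    A is m x n with m < n (underdetermined); p <= 1 selects the
    p-norm-like diversity measure being minimized.
    """
    x = np.linalg.pinv(A) @ b            # minimum 2-norm starting point
    for _ in range(iters):
        w = np.abs(x) ** (1 - p / 2)     # affine-scaling weights W = diag(w)
        q = np.linalg.pinv(A * w) @ b    # min-norm solution of (A W) q = b
        x_new = w * q                    # map back: x = W q
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Because each step solves the (consistent) weighted system exactly, every iterate satisfies A x ≈ b; the iteration only redistributes the solution mass onto fewer coordinates.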