Applying Numerical Linear Algebra Techniques to Signal Processing
Applied Math
Speaker: James R. Bunch, University of California, San Diego
Location: 693 Kerr
Start time: Tue, May 17 2005, 4:10PM
Techniques have been developed in numerical linear algebra for distinguishing between infinite precision effects and finite precision effects. Although the structure of algorithms in signal processing prevents the direct application of typical numerical linear algebra techniques of analysis, much can be gained by adapting some of those techniques to signal processing.
A conceptual framework is presented that focuses on the distinction between a perturbation analysis of a problem and a stability analysis of an algorithm. Numerical analysis techniques for signal processing algorithms are discussed, and terminology is suggested for distinguishing between errors propagated by the nature of the problem and errors introduced through the use of finite precision arithmetic.
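The problem/algorithm distinction can be illustrated outside the talk's setting with a standard textbook example (not taken from the abstract): evaluating (1 - cos x)/x^2 for small x is a well-conditioned problem, yet the obvious formula is an unstable algorithm for it, while an algebraically equivalent rewriting is stable. A minimal Python sketch:

```python
import math

def naive(x):
    # Direct evaluation of (1 - cos x) / x^2.
    # For small x, 1 - cos(x) suffers catastrophic cancellation,
    # so the error comes from the algorithm, not the problem.
    return (1.0 - math.cos(x)) / (x * x)

def stable(x):
    # Same quantity via the identity 1 - cos x = 2 sin^2(x/2),
    # which avoids the cancellation.
    s = math.sin(x / 2.0)
    return 2.0 * s * s / (x * x)

x = 1e-8
print(naive(x))   # ~0.0 in double precision: finite precision effect of the algorithm
print(stable(x))  # ~0.5: the value the well-conditioned problem actually has
```

Both routines solve the same problem with the same data; only the algorithm differs, which is exactly the kind of separation the framework is meant to make explicit.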
These techniques are applied to the Fast Transversal Filter algorithm, an algorithm that has long been known to diverge explosively and that has attracted considerable interest concerning the reasons behind its catastrophic divergence.