1 Nov 2009 14:26

Re: [SciPy-user] least-square filter design

```Tom K. wrote:

>
>
> Neal Becker wrote:
>>
>> Anyone have code for least square (minimum mean square error) FIR filter
>> design?
>>
>
> Could you be a little more specific?  scipy.signal.firwin almost designs a
> least square low pass FIR filter if you use a rectangular window (I say
> almost because like other packages the filter's response is normalized to
> unity at DC so technically it is not least squares although the difference
> is slight and decreases with increasing filter order).
>
> Do you need a transition band?  What type of FIR filter: lowpass,
> highpass,
> bandpass, bandstop, or multiband?  Are discrete samples OK, or do you need
> a
> continuous band (or set of bands)?  Which type of filter - is symmetric
> OK, or do you need antisymmetric?
>
>

I'm looking for something like this:
http://www.mathworks.com/access/helpdesk/help/toolbox/filterdesign/ref/firls.html
```
1 Nov 2009 21:26

Re: [SciPy-user] least-square filter design

```
Neal Becker wrote:
>
> Tom K. wrote:
>
>>
>>
>> Neal Becker wrote:
>>>
>>> Anyone have code for least square (minimum mean square error) FIR filter
>>> design?
>>>
> I'm looking for something like this:
> http://www.mathworks.com/access/helpdesk/help/toolbox/filterdesign/ref/firls.html
>
>

Here's something that works for odd-length, symmetric filters with constant
magnitude per band. It doesn't support everything that MathWorks' firls
supports (e.g. even-length filters, differentiators, antisymmetric
filters, or sloping bands), but hopefully this meets your need.

import numpy as np
from scipy.special import sinc

def firls(N, f, D=None):
    """Least-squares FIR filter.
    N -- filter length, must be odd
    f -- list of tuples of band edges
    Units of band edges are Hz with 0.5 Hz == Nyquist
```
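The listing above is truncated in the archive. As a hedged, self-contained sketch of the same idea (odd length, symmetric taps, one constant desired magnitude per band), with illustrative names rather than Tom K.'s actual code, the design can be posed as a least-squares fit on a dense frequency grid:

```python
import numpy as np

def firls_sketch(N, bands, desired, grid=64):
    """Sketch of a least-squares FIR design: odd length N, symmetric taps,
    one constant desired magnitude per band (band edges in Hz, 0.5 = Nyquist)."""
    assert N % 2 == 1, "this sketch handles odd lengths only"
    M = (N - 1) // 2
    # sample the union of the bands on a dense frequency grid
    f = np.concatenate([np.linspace(lo, hi, grid) for lo, hi in bands])
    D = np.concatenate([np.full(grid, d, dtype=float) for d in desired])
    # zero-phase response of a symmetric filter:
    #   A(f) = a0 + 2*sum_k a_k*cos(2*pi*f*k),  k = 1..M
    k = np.arange(1, M + 1)
    C = np.hstack([np.ones((f.size, 1)),
                   2.0 * np.cos(2.0 * np.pi * np.outer(f, k))])
    a, *_ = np.linalg.lstsq(C, D, rcond=None)
    # fold the half-response into a symmetric impulse response
    return np.concatenate([a[:0:-1], a])

h = firls_sketch(11, [(0.0, 0.2), (0.3, 0.5)], [1.0, 0.0])
```

Here the normal equations are left to `lstsq`; Tom K.'s code apparently forms the Gram matrix `R` and right-hand side `r` explicitly and solves `R a = r`, as the follow-up messages show.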

1 Nov 2009 21:30

Re: scipy.linalg.det TypeError

Let us know if you need more help.

On Fri, Oct 30, 2009 at 9:24 AM, Norm Wood wrote:

On 29 Oct., David Goldsmith wrote:
> Uncertain why you're having a problem - your sample code works for me:
>
> >>> import numpy as np
> >>> import scipy.linalg
> >>> import numpy.linalg
> >>> A=np.matrix([[1.1, 1.9],[1.9,3.5]])
> >>> y = numpy.linalg.det(A); y
> 0.23999999999999988
> >>> y = scipy.linalg.det(A); y
> 0.23999999999999988
> >>> scipy.__version__
> '0.7.1'
> >>> np.__version__
> '1.3.0rc2'
> Python 2.5 on Windoze Vista HPE

Thanks for checking, David.  I'll have to take a closer look at how the
"get_flinalg_funcs" procedure works, and will probably try
rebuilding & reinstalling LAPACK, ATLAS, numpy and scipy from scratch.

Norm
_______________________________________________
SciPy-User mailing list
SciPy-User <at> scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user

2 Nov 2009 02:28

Re: [SciPy-user] least-square filter design

```
Tom K. wrote:
>
>     a = np.dot(np.linalg.inv(R), r)
>

It occurred to me that "inv" is not the right choice here...

Replace that line with:
a = np.linalg.solve(R, r)

--
View this message in context: http://old.nabble.com/least-square-filter-design-tp26083443p26155428.html
Sent from the Scipy-User mailing list archive at Nabble.com.
```
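As an aside for readers, a quick illustration (not from the thread) that the replacement gives the same answer while avoiding the explicit inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-ins for the R and r of the filter design
X = rng.standard_normal((50, 5))
R = X.T @ X                           # symmetric positive definite
r = rng.standard_normal(5)

a_inv = np.dot(np.linalg.inv(R), r)   # original formulation
a_solve = np.linalg.solve(R, r)       # suggested replacement

assert np.allclose(a_inv, a_solve)
```

`solve` factorizes `R` once instead of forming `inv(R)`, which is both cheaper and less prone to rounding error.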
2 Nov 2009 02:45

linear algebra: quadratic forms without linalg.inv

```This is just an exercise.

In econometrics (including statsmodels) we have a lot of quadratic
forms that are usually calculated with a matrix inverse. I finally
spent some time figuring out how to do this with Cholesky or LU
decomposition, which should be numerically more stable or accurate (and
faster).

"Don't let that INV go past your eyes" (matlab file exchange)

Josef

""" Use cholesky or LU decomposition to calculate quadratic forms

different ways to calculate the matrix product B.T * inv(A) * B

Note: the calling convention in sparse is not consistent; sparse requires a
loop over the right-hand side

Author: josef-pktd
"""

import numpy as np
from scipy import linalg

A = np.array([[2.0, 0.5], [0.5, 1.0]])  # example matrix; must be symmetric positive definite for cho_factor
B = np.arange(6).reshape((3,2)).T

print 'using inv'
Ainv = linalg.inv(A)
print np.dot(Ainv, B[:,0])
print np.dot(Ainv, B)
print reduce(np.dot, [B.T, Ainv, B])

print 'using cholesky'
F = linalg.cho_factor(A)
print linalg.cho_solve(F, B[:,0])
print linalg.cho_solve(F, B)
print np.dot(B.T, linalg.cho_solve(F, B))

print 'using lu'
F = linalg.lu_factor(A)
print linalg.lu_solve(F, B[:,0])
print linalg.lu_solve(F, B)
print np.dot(B.T, linalg.lu_solve(F, B))

from scipy import sparse
import scipy.sparse.linalg

Asp = sparse.csc_matrix(A)  # splu expects CSC format
print 'using sparse lu (splu)'
F = sparse.linalg.splu(Asp)
print F.solve(B[:,0])
#print F.solve(B) # wrong results but no exception
AiB = np.column_stack([F.solve(Bcol) for Bcol in B.T])
print AiB
print np.dot(B.T, AiB)

#not:
#Bsp = sparse.csr_matrix(B)
#print B.T * F.solve(Bsp)  # argument to solve must be dense array

print 'using sparse lu (factorized)'
F = sparse.linalg.factorized(Asp)
print F(B[:,0])
#print F(B) # wrong results but no exception
AiB = np.column_stack([F(Bcol) for Bcol in B.T])
print np.dot(B.T, AiB)
```
2 Nov 2009 04:03

Re: linear algebra: quadratic forms without linalg.inv

```josef.pktd <at> gmail.com wrote:
> In econometrics (including statsmodels) we have a lot of quadratic
> forms that are usually calculated with a matrix inverse.
That is a sign of numerical incompetence.

You see this often in statistics as well, people who think matrix
inverse is the way to calculate mahalanobis distances, when you should
really use a Cholesky.

As for LU, I'd rather use an SVD, as it is numerically more stable.
Using LU, you are betting on singular values not being tiny. With SVD
you can solve an ill-conditioned system by zeroing tiny singular values.
With LU you just get astronomic rounding errors.

Sturla
```
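The Cholesky route to Mahalanobis distances mentioned here can be sketched as follows (illustrative helper, not thread code): with cov = L L.T, the squared distance d.T inv(cov) d equals the squared norm of the triangular solve L z = d, so no inverse is ever formed.

```python
import numpy as np
from scipy import linalg

def mahalanobis_chol(x, mean, cov):
    """Squared Mahalanobis distance via a Cholesky solve (no inv(cov))."""
    d = np.asarray(x, dtype=float) - mean
    L = linalg.cholesky(cov, lower=True)           # cov = L @ L.T
    z = linalg.solve_triangular(L, d, lower=True)  # solve L z = d
    return z @ z

cov = np.array([[2.0, 0.3], [0.3, 1.0]])
x = np.array([1.0, -1.0])
d2 = mahalanobis_chol(x, np.zeros(2), cov)
# agrees with the inverse-based formula
assert np.isclose(d2, x @ np.linalg.inv(cov) @ x)
```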
2 Nov 2009 04:06

Re: linear algebra: quadratic forms without linalg.inv

```On Mon, Nov 02, 2009 at 04:03:14AM +0100, Sturla Molden wrote:
> josef.pktd <at> gmail.com wrote:
> > In econometrics (including statsmodels) we have a lot of quadratic
> > forms that are usually calculated with a matrix inverse.
> That is a sign of numerical incompetence.

Yup, but you'd be surprised to see how much inverse is used. I was
astonished to find out that senior researchers that I respect a lot were
not even aware of the problem.

Gaël
```
2 Nov 2009 04:25

Re: linear algebra: quadratic forms without linalg.inv

```On Sun, Nov 1, 2009 at 11:03 PM, Sturla Molden <sturla <at> molden.no> wrote:
> josef.pktd <at> gmail.com wrote:
>> In econometrics (including statsmodels) we have a lot of quadratic
>> forms that are usually calculated with a matrix inverse.

> That is a sign of numerical incompetence.

I agree, but that's the training. Last time I did principal components
with SVD, it took me a long time to figure out how to get it to work,
and I still don't understand it. The only matrix decomposition that I'm
familiar with is eigenvalue decomposition.

But we had this part of the discussion before: in applied
econometrics, if we have enough multicollinearity that numerical
precision matters, then we are screwed anyway and have to rethink the
data analysis or the model, or do a pca.

>

> You see this often in statistics as well, people who think matrix
> inverse is the way to calculate mahalanobis distances, when you should
> really use a Cholesky.
>
> As for LU, I'd rather use an SVD as it is numerically more stabile.
> Using LU, you are betting on singular values not being tiny. With SVD
> you can solve an ill-conditioned system by zeroing tiny singular values.
> With LU you just get astronomic rounding errors.

How can you calculate the quadratic form or the product inv(A)*B with SVD?
Solving the equations is ok, since pinv and lstsq are based on SVD internally.

In matlab there is also a version for QR, but I haven't figured out
how to do this in scipy without an inverse.

Josef

>
> Sturla
```
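One hedged answer to the SVD question, as a sketch: write A = U diag(s) Vt, so inv(A) @ B becomes Vt.T @ diag(1/s) @ (U.T @ B); tiny singular values can be zeroed at the diag(1/s) step, which is exactly what pinv does internally. A stand-in example:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
A = A @ A.T + 4.0 * np.eye(4)        # a well-conditioned test matrix
B = rng.standard_normal((4, 2))

U, s, Vt = np.linalg.svd(A)
s_inv = np.where(s > 1e-12 * s.max(), 1.0 / s, 0.0)  # zero tiny singular values
AiB = Vt.T @ (s_inv[:, None] * (U.T @ B))            # inv(A) @ B, SVD style

Q = B.T @ AiB                                        # quadratic form B.T inv(A) B
assert np.allclose(AiB, np.linalg.solve(A, B))
```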
2 Nov 2009 05:33

Re: linear algebra: quadratic forms without linalg.inv

```josef.pktd <at> gmail.com wrote:
> if we have enough multicollinearity that numerical
> precision matters, then we are screwed anyway and have to rethink the
> data analysis or the model, or do a pca.
>
>
And PCA has nothing to do with SVD, right?

> Or ... what would you call a procedure that takes your data,
> subtracts the mean, and does an SVD?

:-D

Sturla
```
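Sturla's riddle, spelled out as a sketch (example data, not thread code): center the data and take an SVD; the squared singular values divided by n - 1 are exactly the covariance eigenvalues that an eigendecomposition-based PCA would produce.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 3)) * np.array([2.0, 1.0, 0.1])

# PCA as "subtract the mean, then SVD"
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt                          # principal axes, one per row
explained_var = s**2 / (X.shape[0] - 1)  # variances along those axes
scores = U * s                           # data projected on the axes

# identical eigenvalues to the covariance eigendecomposition
evals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))
assert np.allclose(np.sort(explained_var), np.sort(evals))
```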
2 Nov 2009 06:15

Re: linear algebra: quadratic forms without linalg.inv

```On Mon, Nov 2, 2009 at 12:33 AM, Sturla Molden <sturla <at> molden.no> wrote:
> josef.pktd <at> gmail.com wrote:
>> if we have enough multicollinearity that numerical
>> precision matters, then we are screwed anyway and have to rethink the
>> data analysis or the model, or do a pca.
>>
>>
> And PCA has nothing to do with SVD, right?

> Or ... what would you call a procedure that takes your data,
> subtracts the mean, and does an SVD?

All the explanations I read were in terms of eigenvalue decomposition
and not with SVD. I'm pretty good at removing negative eigenvalues
when I'm supposed to have a positive definite matrix, but SVD has too
many parts.

(Besides, I don't like PCA for regression, and I'm still struggling with
how to do partial least squares with SVD.)

Josef

>
> :-D
>
>
> Sturla
```

Gmane