rational_resampler vs pfb_arb_resampler filter design -- why such strange cut frequencies in both cases?
Lev Serebryakov <lev <at> serebryakov.spb.ru>
2014-10-22 21:39:15 GMT
I'm looking at the filter design procedures for rational_resampler and
pfb_arb_resampler and I see a contradiction between them.
Let's assume we downsample with rate 3/4, i.e. 0.75. Let the source
sample rate be 10000 (so the target sample rate should be 7500). The
rational_resampler could be configured for this task with
interpolation=3 and decimation=4. We'll set fractional_bw to 0.4 to
match the arb_resampler's "80%" multiplier. The design_filter() method
in rational_resampler.py will create a low-pass filter with:
gain = 3
FS = 1
transition center = 0.4/3 = 0.1333... (of 30000, i.e. 4000 Hz!!!)
transition width = 0.1
That doesn't look reasonable, as 4000 Hz is more than 3750 Hz, the
Nyquist frequency for the target sample rate.
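The arithmetic above can be reproduced with a short sketch (the numbers are the ones quoted in the text, not a copy of the library code):

```python
# Reproducing the rational_resampler design_filter() numbers described
# above for interpolation=3, decimation=4, fractional_bw=0.4.
interpolation = 3
decimation = 4
fractional_bw = 0.4

input_rate = 10000.0                       # source sample rate, Hz
interp_rate = input_rate * interpolation   # 30000 Hz: rate the filter runs at
output_rate = interp_rate / decimation     # 7500 Hz target rate
output_nyquist = output_rate / 2.0         # 3750 Hz

gain = interpolation                       # 3
fs = 1.0                                   # normalized sample rate
trans_center = fractional_bw / interpolation   # 0.4/3 = 0.1333...
trans_width = 0.1

# Converting the normalized transition center to Hz at the interpolated rate:
trans_center_hz = trans_center * interp_rate   # ~4000 Hz
print(trans_center_hz, output_nyquist)         # 4000 Hz sits above 3750 Hz
```

Which makes the complaint concrete: the transition band is centered above the output Nyquist frequency.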
Now let's look at the arbitrary resampler (pfb.py). The rate is < 1,
so it takes the first branch:
gain = 32 (default number of filters)
Fs = 32 (WHY?!)
transition center = 0.5 * 0.75 * 0.8 = 0.3 (and it should be in Hz!)
transition width = 0.15
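The mismatch in question (the 0.3 being interpreted against Fs=32 instead of Fs=1) can also be sketched numerically (again, values as quoted above, not the pfb.py code itself):

```python
# Reproducing the pfb_arb_resampler parameters quoted above for rate < 1.
num_filters = 32
rate = 0.75
bw_multiplier = 0.8            # the "80%" factor

gain = num_filters                          # 32
fs = float(num_filters)                     # 32 -- passed as the sample rate
trans_center = 0.5 * rate * bw_multiplier   # 0.3
trans_width = 0.15                          # as quoted above

# If 0.3 was meant as a fraction of a normalized Fs=1, but the filter
# designer is told Fs=32, the cutoff lands 32 times lower than intended:
effective_fraction = trans_center / fs      # 0.3/32 = 0.009375
error_factor = trans_center / effective_fraction   # 32.0
```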
As you can see, the parameters are rather different. But in this case
the filters should be roughly the same, because the task is the same:
we need to filter out all frequencies above 3750 Hz!
Both methods raise questions, and I can't say that either one looks
right to me. The questions are:
(1) Why doesn't the rational resampler take decimation into account
when calculating the bandwidth? It designs a filter which will
correctly reject all images when upsampling, but it looks like
aliasing is possible when the effective ratio is less than 1, as only
interpolation is used in the bandwidth calculation and the resulting
filter effectively works at the upsampled rate.
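A hypothetical numeric illustration of this concern: any energy the filter passes between the output Nyquist frequency and its transition center folds back into the band after decimation.

```python
# Aliasing arithmetic for question (1), using the numbers from above.
interp_rate = 30000.0          # 10000 Hz input * interpolation of 3
decimation = 4
output_rate = interp_rate / decimation   # 7500 Hz
nyquist_out = output_rate / 2.0          # 3750 Hz

# The filter's transition is centered at 4000 Hz (see above), so a tone
# at 4000 Hz is only partially attenuated, yet after decimation it
# lies above the new Nyquist and aliases back into the passband:
tone = 4000.0
alias = output_rate - tone               # folds to 3500 Hz, inside the band
```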
(2) Why does the arbitrary resampler set Fs to the number of filters
and AFTER that pass bandwidth numbers normalized to 1, not to this
Fs?! 0.3 is a perfectly good (maybe slightly conservative, but OK)
transition band center, but that 0.3 will be interpreted relative to
Fs=1, while Fs=32 (the number of filters) is used here, so it will be
32 times too low.
Are these two peculiarities really two bugs, or do I not understand
something?
Yes, I've read harris' book on multirate processing, and it was AFTER
reading it that I came up with these questions.
Black Lion AKA Lev Serebryakov