eat | 1 Mar 08:31 2011

Re: faster interpolations (interp1d)

Hi James,

On Mon, Feb 28, 2011 at 5:25 PM, James McCormac <jmccormac01 <at> qub.ac.uk> wrote:
Hi eat,
you sent me a suggestion for faster 1d interpolations using matrices a few
weeks back but I cannot find the email anywhere when I looked for it
today.

Here is a better explanation of what I am trying to do. For example I have
a 1d array of 500 elements. I want to interpolate them quadratically so
each array becomes 10 values, 50,000 in total.

I have 500x500 pixels and I want to get 0.01 pixel resolution.

code snippet:
# collapse an image in the x direction
ref_xproj=np.sum(refarray,axis=0)

# make an array for the 1d spectra
x = np.linspace(0, (x_2-x_1), (x_2-x_1))

# interpolation
f2_xr = interp1d(x, ref_xproj, kind='quadratic')

# new x array for interpolated data
xnew = np.linspace(0, (x_2-x_1), (x_2-x_1)*100)

# FFT of interpolated spectra
F_ref_xproj = fftpack.fft(f2_xr(xnew))

Can I do this type of interpolation faster using the method you described
before?
I misinterpreted your original question, and the method I suggested there is not applicable.

To better understand your situation, a few questions:
- does what you described above work for you in a technical sense?
- if so, is the problem execution performance?
- what are your current timings?
- how much do you need to improve them?

Regards,
eat

Cheers
James



_______________________________________________
SciPy-User mailing list
SciPy-User <at> scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user

James McCormac | 1 Mar 12:01 2011

Re: faster interpolations (interp1d)

Hi eat,
Yes, this code works fine; it's not actually that bad on 2x500 arrays (~5 sec), but 500-length arrays are the shortest I can run with. I am analyzing CCD images which can go up to 2000x2000 in length and breadth, meaning 2x2000 1d arrays after collapsing the spectra. This takes >20 sec per image, which is much too long. Ideally I'd like it to run as fast as possible (depending on how much accuracy I can maintain).

Yes, the code works fine, it's just a little slow. I've put timers in and 98% of the time is taken up by the interpolation. Any improvement in performance would be great. I've slimmed down the rest of the code as much as possible already.
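For reference, a minimal self-contained version of the snippet from my earlier mail, with synthetic data standing in for the collapsed image profile (n plays the role of x_2 - x_1), so the timings can be checked on other machines:

```python
import numpy as np
from scipy import fftpack
from scipy.interpolate import interp1d

# Synthetic stand-in for the collapsed profile (made-up data).
n = 500
x = np.linspace(0, n, n)
ref_xproj = np.sin(x / 25.0) + 0.1 * np.random.rand(n)

# Quadratic interpolation onto a grid 100x finer, then the FFT.
f2_xr = interp1d(x, ref_xproj, kind='quadratic')
xnew = np.linspace(0, n, n * 100)
F_ref_xproj = fftpack.fft(f2_xr(xnew))
```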

Cheers
James


On 1 Mar 2011, at 07:31, eat wrote:


-------------------------------------------------
James McCormac
Astrophysics Research Centre
School of Mathematics & Physics
Queens University Belfast
University Road, 
Belfast, U.K
BT7 1NN,
TEL: 028 90973509






Peter Combs | 1 Mar 12:51 2011

Re: [SciPy-user] mgrid format from unstructured data

On Feb 23, 2011, at 1:49 AM, Spiffalizer wrote:
> I have found some examples that looks like this
> x,y = np.mgrid[-1:1:10j,-1:1:10j]
> z = (x+y)*np.exp(-6.0*(x*x+y*y))
> xnew,ynew = np.mgrid[-1:1:3j,-1:1:3j]
> tck = interpolate.bisplrep(x,y,z,s=0)
> znew = interpolate.bisplev(xnew[:,0],ynew[0,:],tck)
> 
> 
> So my question really is how to sort/convert my input to a format that can
> be used by the interpolate function?

I use the LSQBivariateSpline functions:

import numpy as np
import scipy.interpolate as interp

num_knots = int(np.floor(np.sqrt(len(z))))
xknots = np.linspace(xmin, xmax, num_knots)
yknots = np.linspace(ymin, ymax, num_knots)
interpolator = interp.LSQBivariateSpline(x, y, z, xknots, yknots)
znew = interpolator.ev(xnew, ynew)

The object orientation is useful for my applications, for reasons that I
no longer quite remember. Looking through the documentation for bisplrep,
though, it doesn't seem like you need to worry about the order that the
points are in. You might try something like:

xknots = list(set(x))
yknots = list(set(y))
tck = interpolate.bisplrep(x,y,z, task=-1, tx = xknots, ty=yknots)

but my understanding of the bisplrep function is hazy at best, so it's
probably best to check it against data for which you already know the answer.
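To make the LSQBivariateSpline approach above concrete, here is a self-contained sketch; the data is synthetic, and I've used fewer knots than the square-root rule so the least-squares fit stays over-determined (an arbitrary choice on my part):

```python
import numpy as np
import scipy.interpolate as interp

rng = np.random.default_rng(0)
# Scattered samples of a smooth test surface (made-up data).
x = rng.uniform(-1.0, 1.0, 400)
y = rng.uniform(-1.0, 1.0, 400)
z = (x + y) * np.exp(-6.0 * (x * x + y * y))

# Interior knots strictly inside the data range; 8 per axis keeps the
# number of spline coefficients (12 * 12 = 144) below the 400 data points.
xknots = np.linspace(-0.9, 0.9, 8)
yknots = np.linspace(-0.9, 0.9, 8)

spl = interp.LSQBivariateSpline(x, y, z, xknots, yknots)
znew = spl.ev(0.25, -0.25)  # evaluate anywhere, scattered or gridded
```

Note that (0.25, -0.25) lies on the surface's zero line (x + y = 0), so the fitted value should come out near zero.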

Peter Combs
peter.combs <at> berkeley.edu
Tiziano Zito | 1 Mar 14:41 2011

[ANN] Summer School "Advanced Scientific Programming in Python" in St Andrews, UK

Advanced Scientific Programming in Python
========================================= 
a Summer School by the G-Node and the School of Psychology,
University of St Andrews

Scientists spend more and more time writing, maintaining, and
debugging software. While techniques for doing this efficiently have
evolved, only a few scientists actually use them. As a result, instead
of doing their research, they spend far too much time writing
deficient code and reinventing the wheel. In this course we will
present a selection of advanced programming techniques,
incorporating theoretical lectures and practical exercises tailored
to the needs of a programming scientist. New skills will be tested
in a real programming project: we will team up to develop an
entertaining scientific computer game.

We use the Python programming language for the entire course. Python
works as a simple programming language for beginners, but more
importantly, it also works great in scientific simulations and data
analysis. We show how clean language design, ease of extensibility,
and the great wealth of open source libraries for scientific
computing and data visualization are driving Python to become a
standard tool for the programming scientist.

This school is targeted at PhD students and Post-docs from all areas
of science. Competence in Python or in another language such as
Java, C/C++, MATLAB, or Mathematica is absolutely required. Basic
knowledge of Python is assumed. Participants without any prior
experience with Python should work through the proposed introductory
materials before the course.

Date and Location
=================
September 11—16, 2011. St Andrews, UK.

Preliminary Program
===================
Day 0 (Sun Sept 11) — Best Programming Practices
  - Agile development & Extreme Programming 
  - Advanced Python: decorators, generators, context managers
  - Version control with git 
Day 1 (Mon Sept 12) — Software Carpentry
  - Object-oriented programming & design patterns
  - Test-driven development, unit testing & quality assurance
  - Debugging, profiling and benchmarking techniques
  - Programming in teams 
Day 2 (Tue Sept 13) — Scientific Tools for Python
  - Advanced NumPy 
  - The Quest for Speed (intro): Interfacing to C with Cython
  - Best practices in data visualization 
Day 3 (Wed Sept 14) — The Quest for Speed 
  - Writing parallel applications in Python
  - Programming project 
Day 4 (Thu Sept 15) — Efficient Memory Management
  - When parallelization does not help:
    the starving CPUs problem 
  - Data serialization: from pickle to databases
  - Programming project 
Day 5 (Fri Sept 16) — Practical Software Development
  - Programming project
  - The Pac-Man Tournament

Every evening we will have the tutors' consultation hour: Tutors
will answer your questions and give suggestions for your own
projects.

Applications
============
You can apply on-line at http://python.g-node.org

Applications must be submitted before May 29, 2011. Notifications of
acceptance will be sent by June 19, 2011.

No fee is charged but participants should take care of travel,
living, and accommodation expenses. 
Candidates will be selected on the basis of their profile. Places
are limited: acceptance rate in past editions was around 30%.
Prerequisites: You are supposed to know the basics of Python to
participate in the lectures. Please consult the website for a list
of introductory material.

Faculty
======= 
- Francesc Alted, author of PyTables, Castelló de la Plana, Spain 
- Pietro Berkes, Volen Center for Complex Systems, Brandeis
  University, USA 
- Valentin Haenel, Berlin Institute of Technology and Bernstein
  Center for Computational Neuroscience Berlin, Germany 
- Zbigniew Jędrzejewski-Szmek, Faculty of Physics, University of
  Warsaw, Poland 
- Eilif Muller, The Blue Brain Project, Ecole Polytechnique Fédérale
  de Lausanne, Switzerland 
- Emanuele Olivetti, NeuroInformatics Laboratory, Fondazione Bruno
  Kessler and University of Trento, Italy 
- Rike-Benjamin Schuppner, Bernstein Center for Computational
  Neuroscience Berlin, Germany 
- Bartosz Teleńczuk, Institute for Theoretical Biology,
  Humboldt-Universität zu Berlin, Germany
- Bastian Venthur, Berlin Institute of Technology and Bernstein
  Focus: Neurotechnology, Germany 
- Pauli Virtanen, Institute for Theoretical Physics and
  Astrophysics, University of Würzburg, Germany 
- Tiziano Zito, Berlin Institute of Technology and Bernstein Center
  for Computational Neuroscience Berlin, Germany

Organized by Katharina Maria Zeiner and Manuel Spitschan of the
School of Psychology, University of St Andrews, and by Zbigniew
Jędrzejewski-Szmek and Tiziano Zito for the German Neuroinformatics
Node of the INCF.  

Website:  http://python.g-node.org
Contact:  python-info <at> g-node.org

eat | 1 Mar 15:03 2011

Re: faster interpolations (interp1d)

Hi,

On Tue, Mar 1, 2011 at 1:01 PM, James McCormac <jmccormac01 <at> qub.ac.uk> wrote:
Hi eat,
Yes, this code works fine; it's not actually that bad on 2x500 arrays (~5 sec), but 500-length arrays are the shortest I can run with. I am analyzing CCD images which can go up to 2000x2000 in length and breadth, meaning 2x2000 1d arrays after collapsing the spectra. This takes >20 sec per image, which is much too long. Ideally I'd like it to run as fast as possible (depending on how much accuracy I can maintain).

Yes, the code works fine, it's just a little slow. I've put timers in and 98% of the time is taken up by the interpolation. Any improvement in performance would be great. I've slimmed down the rest of the code as much as possible already.
Can you provide a minimal working code example which demonstrates the problem? At least you'll get a better idea of how it performs on another machine.


Regards,
eat

Cheers
James


On 1 Mar 2011, at 07:31, eat wrote:

nicky van foreest | 1 Mar 20:53 2011

Re: [SciPy-user] mgrid format from unstructured data

Hi,

In relation to this topic: does anybody know of a scipy implementation
of multivariate splines?

bye

Nicky

On 1 March 2011 12:51, Peter Combs <peter.combs <at> berkeley.edu> wrote:
> On Feb 23, 2011, at 1:49 AM, Spiffalizer wrote:
>> I have found some examples that looks like this
>> x,y = np.mgrid[-1:1:10j,-1:1:10j]
>> z = (x+y)*np.exp(-6.0*(x*x+y*y))
>> xnew,ynew = np.mgrid[-1:1:3j,-1:1:3j]
>> tck = interpolate.bisplrep(x,y,z,s=0)
>> znew = interpolate.bisplev(xnew[:,0],ynew[0,:],tck)
>>
>>
>> So my question really is how to sort/convert my input to a format that can
>> be used by the interpolate function?
>
> I use the LSQBivariateSpline functions:
>
> import numpy as np
> import scipy.interpolate as interp
>
> num_knots = int(floor(sqrt(len(z))))
> xknots = np.linspace(xmin, xmax, n)
> yknots = np.linspace(ymin, ymax, n)
> interpolator = interp.LSQBivariateSpline(x, y, z, xknots, yknots)
> znew = interpolator.ev(xnew, ynew)
>
> The object orientation is useful for my applications, for reasons that I
> no longer quite remember. Looking through the documentation for bisplrep,
> though, it doesn't seem like you need to worry about the order that the
> points are in. You might try something like:
>
> xknots = list(set(x))
> yknots = list(set(y))
> tck = interpolate.bisplrep(x,y,z, task=-1, tx = xknots, ty=yknots)
>
> but my understanding of the bisplrep function is hazy at best, so probably
> best to check it with data you already know the answer.
>
> Peter Combs
> peter.combs <at> berkeley.edu
>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User <at> scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
Amenity Applewhite | 1 Mar 22:31 2011

Webinar 3/4: How do I… solve ODEs? Part II

March EPD Webinar: How do I...solve ODEs? Part II

This Friday, Warren Weckesser will present a second installment of his
webinars on differential equations. We will explore two Python packages for
solving boundary value problems.  Both are packaged as scikits:
scikits.bvp_solver, written by John Salvatier, is a wrapper of the
BVP_SOLVER code by Shampine and Muir;  scikits.bvp1lg,  written by
Pauli Virtanen, is a wrapper of the COLNEW solver developed by Ascher and
Bader.

Enthought Python Distribution Webinar
How do I... solve ODEs? Part II
Friday, March 4: 1pm CST/7pm UTC
Wait list (for non EPD subscribers): send an email to amenity <at> enthought.com

Thanks!
_________________________
Amenity Applewhite
Scientific Computing Solutions


_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion <at> scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
Sloan Lindsey | 2 Mar 11:56 2011

Re: [SciPy-user] mgrid format from unstructured data

Hi,
on mvsplines:
Take a look at http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html#scipy.interpolate.griddata
There is a linear version too.

For the initial question, here is a snippet that works:

import scipy.interpolate as inter
import numpy as np
import matplotlib.pyplot as plt

datax, datay, dataz = np.genfromtxt('mydata.blah', skip_header=1, unpack=True)
points = np.array([datax, datay]).T
nearest = inter.NearestNDInterpolator(points, dataz)
linear = inter.LinearNDInterpolator(points, dataz, fill_value=0.0)
# careful about the boundary conditions
curvey = inter.CloughTocher2DInterpolator(points, dataz, fill_value=0.0)

# now you have 3 interpolants; to determine dataz at (datax, datay):
value = curvey(datax, datay)

# if you want a grid so that you can plot your interpolation:
xrange = np.arange(-10.0, 100.0, 0.05)
yrange = np.arange(-100.0, 100.0, 0.05)
meshx, meshy = np.meshgrid(xrange, yrange)
a_int_mesh = curvey(meshx, meshy)
plt.imshow(a_int_mesh)
plt.show()

This works for unordered data.
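As an aside, the griddata function linked above wraps the same three interpolators in a single call (method='nearest', 'linear', or 'cubic', the last being the Clough-Tocher scheme); a small self-contained sketch with synthetic scattered data:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
# Unordered scatter (made-up): a planar surface sampled at random points.
datax = rng.uniform(-1.0, 1.0, 200)
datay = rng.uniform(-1.0, 1.0, 200)
dataz = datax + 2.0 * datay

# Interpolate onto a regular grid in one call.
gx, gy = np.meshgrid(np.linspace(-0.5, 0.5, 20), np.linspace(-0.5, 0.5, 20))
gz = griddata((datax, datay), dataz, (gx, gy), method='linear', fill_value=0.0)
```

Since the sampled surface is planar, linear interpolation reproduces it exactly inside the convex hull of the data points.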
Sloan

On Tue, Mar 1, 2011 at 8:53 PM, nicky van foreest <vanforeest <at> gmail.com> wrote:
> Hi,
>
> In relation to this topic: does anybody know of  a scipy
> implementation for multivariate splines?
>
> bye
>
> Nicky
>
> On 1 March 2011 12:51, Peter Combs <peter.combs <at> berkeley.edu> wrote:
>> On Feb 23, 2011, at 1:49 AM, Spiffalizer wrote:
>>> I have found some examples that looks like this
>>> x,y = np.mgrid[-1:1:10j,-1:1:10j]
>>> z = (x+y)*np.exp(-6.0*(x*x+y*y))
>>> xnew,ynew = np.mgrid[-1:1:3j,-1:1:3j]
>>> tck = interpolate.bisplrep(x,y,z,s=0)
>>> znew = interpolate.bisplev(xnew[:,0],ynew[0,:],tck)
>>>
>>>
>>> So my question really is how to sort/convert my input to a format that can
>>> be used by the interpolate function?
>>
>> I use the LSQBivariateSpline functions:
>>
>> import numpy as np
>> import scipy.interpolate as interp
>>
>> num_knots = int(floor(sqrt(len(z))))
>> xknots = np.linspace(xmin, xmax, n)
>> yknots = np.linspace(ymin, ymax, n)
>> interpolator = interp.LSQBivariateSpline(x, y, z, xknots, yknots)
>> znew = interpolator.ev(xnew, ynew)
>>
>> The object orientation is useful for my applications, for reasons that I no longer quite remember.
 Looking through the documentation for bisplrep, though, it doesn't seem like you need to worry about
the order that the points are in. You might try something like:
>>
>> xknots = list(set(x))
>> yknots = list(set(y))
>> tck = interpolate.bisplrep(x,y,z, task=-1, tx = xknots, ty=yknots)
>>
>> but my understanding of the bisplrep function is hazy at best, so probably best to check it with data you
already know the answer.
>>
>> Peter Combs
>> peter.combs <at> berkeley.edu
>>
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User <at> scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User <at> scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
josef.pktd | 2 Mar 14:56 2011

Re: faster interpolations (interp1d)

On Tue, Mar 1, 2011 at 9:03 AM, eat <e.antero.tammi <at> gmail.com> wrote:
> Hi,
>
> On Tue, Mar 1, 2011 at 1:01 PM, James McCormac <jmccormac01 <at> qub.ac.uk>
> wrote:
>>
>> Hi eat,
>> Yes this code works fine, its not actually that bad on a 2x500 arrays ~5
>> but the 500 length arrays are the shortest I can run with. I am analyzing
>> CCD images which can go up 2000x2000 in length and breadth, meaning 2x2000
>> 1d arrays after collapsing the spectra. This takes >20 sec per image which
>> is much too long. Ideally the id like it to run as fast as possible
>> (depending on how much accuracy I can maintain).
>> Yes the code works fine its just a little slow, I've put timers in and 98%
>> of the time is taken up by the interpolation.  Any improvement in
>> performance would be great. I've slimmed  down the rest of the body as much
>> as possible already.
>
> Can you provide a minimal working code example, which demonstrates the
> problem? At least you'll get better idea how it performs on some other
> machine.
>
> Regards,
> eat
>>
>> Cheers
>> James
>>
>> On 1 Mar 2011, at 07:31, eat wrote:
>>
>> Hi James,
>>
>> On Mon, Feb 28, 2011 at 5:25 PM, James McCormac <jmccormac01 <at> qub.ac.uk>
>> wrote:
>>>
>>> Hi eat,
>>> you sent me a suggestion for faster 1d interpolations using matrices a
>>> few
>>> weeks back but I cannot find the email anywhere when I looked for it
>>> today.
>>>
>>> Here is a better explanation of what I am trying to do. For example I
>>> have
>>> a 1d array of 500 elements. I want to interpolate them quadratically so
>>> each array becomes 10 values, 50,000 in total.
>>>
>>> I have 500x500 pixels and I want to get 0.01 pixel resolution.
>>>
>>> code snipet:
>>> # collapse an image in the x direction
>>> ref_xproj=np.sum(refarray,axis=0)
>>>
>>> # make an array for the 1d spectra
>>> x = np.linspace(0, (x_2-x_1), (x_2-x_1))
>>>
>>> # interpolation
>>> f2_xr = interp1d(x, ref_xproj, kind='quadratic')
>>>
>>> # new x array for interpolated data
>>> xnew = np.linspace(0, (x_2-x_1), (x_2-x_1)*100)
>>>
>>> # FFT of interpolated spectra
>>> F_ref_xproj = fftpack.fft(f2_xr(xnew))
>>>
>>> Can I do this type of interpolation faster using the method you described
>>> before?
>>
>> I'll misinterpreted your original question and the method I suggested
>> there is not applicable.
>> To better understand your situation, few questions:
>> - what you described above; it does work for you in technical sense?
>> - if so, then the problem is with the execution performance?
>> - what are your current timings?
>> - how much you'll need to enhance them?
>> Regards,
>> eat
>>>
>>> Cheers
>>> James

Just a thought since I don't know the details:

using fft interpolation might be faster, e.g. signal.resample

>>> import numpy as np
>>> from scipy import signal
>>> t = np.linspace(0, 10, 25)
>>> x = np.sin(t)
>>> t2 = np.linspace(0, 10, 50)
>>> x2 = signal.resample(x, 50)

scipy.ndimage.interpolation should also be faster, if there is
something in it that does what you want.

Josef

>>>
>>>
>>>
>>> _______________________________________________
>>> SciPy-User mailing list
>>> SciPy-User <at> scipy.org
>>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User <at> scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
>> -------------------------------------------------
>> James McCormac
>> jmccormac01 <at> qub.ac.uk
>> Astrophysics Research Centre
>> School of Mathematics & Physics
>> Queens University Belfast
>> University Road,
>> Belfast, U.K
>> BT7 1NN,
>> TEL: 028 90973509
>>
>>
>>
>>
>>
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User <at> scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User <at> scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
>
James McCormac | 2 Mar 15:34 2011

Re: faster interpolations (interp1d)


Hi Josef,
Do you mean create the spectra and then resample it afterwards? I can have a
look at ndimage.interpolation.

Cheers
James

On Wed, March 2, 2011 1:56 pm, josef.pktd <at> gmail.com wrote:
> On Tue, Mar 1, 2011 at 9:03 AM, eat <e.antero.tammi <at> gmail.com> wrote:
>
>> Hi,
>>
>>
>> On Tue, Mar 1, 2011 at 1:01 PM, James McCormac <jmccormac01 <at> qub.ac.uk>
>> wrote:
>>
>>>
>>> Hi eat,
>>> Yes this code works fine, its not actually that bad on a 2x500 arrays
>>> ~5
>>> but the 500 length arrays are the shortest I can run with. I am
>>> analyzing CCD images which can go up 2000x2000 in length and breadth,
>>> meaning 2x2000 1d arrays after collapsing the spectra. This takes >20
>>> sec per image which is much too long. Ideally the id like it to run as
>>> fast as possible (depending on how much accuracy I can maintain).
>>> Yes the code works fine its just a little slow, I've put timers in and
>>> 98%
>>> of the time is taken up by the interpolation.  Any improvement in
>>> performance would be great. I've slimmed  down the rest of the body
>>> as much as possible already.
>>
>> Can you provide a minimal working code example, which demonstrates the
>> problem? At least you'll get better idea how it performs on some other
>> machine.
>>
>> Regards,
>> eat
>>>
>>> Cheers
>>> James
>>>
>>>
>>> On 1 Mar 2011, at 07:31, eat wrote:
>>>
>>>
>>> Hi James,
>>>
>>>
>>> On Mon, Feb 28, 2011 at 5:25 PM, James McCormac
>>> <jmccormac01 <at> qub.ac.uk>
>>> wrote:
>>>
>>>>
>>>> Hi eat,
>>>> you sent me a suggestion for faster 1d interpolations using matrices
>>>> a few weeks back but I cannot find the email anywhere when I looked
>>>> for it today.
>>>>
>>>> Here is a better explanation of what I am trying to do. For example
>>>> I
>>>> have a 1d array of 500 elements. I want to interpolate them
>>>> quadratically so each array becomes 10 values, 50,000 in total.
>>>>
>>>> I have 500x500 pixels and I want to get 0.01 pixel resolution.
>>>>
>>>>
>>>> code snippet:
>>>> # collapse an image in the x direction
>>>> ref_xproj=np.sum(refarray,axis=0)
>>>>
>>>> # make an array for the 1d spectra
>>>> x = np.linspace(0, (x_2-x_1), (x_2-x_1))
>>>>
>>>> # interpolation
>>>> f2_xr = interp1d(x, ref_xproj, kind='quadratic')
>>>>
>>>> # new x array for interpolated data
>>>> xnew = np.linspace(0, (x_2-x_1), (x_2-x_1)*100)
>>>>
>>>> # FFT of interpolated spectra
>>>> F_ref_xproj = fftpack.fft(f2_xr(xnew))
>>>>
>>>>
>>>> Can I do this type of interpolation faster using the method you
>>>> described before?
>>>
>>> I'll misinterpreted your original question and the method I suggested
>>>  there is not applicable. To better understand your situation, few
>>> questions:
>>> - what you described above; it does work for you in technical sense?
>>> - if so, then the problem is with the execution performance?
>>> - what are your current timings?
>>> - how much you'll need to enhance them?
>>> Regards,
>>> eat
>>>>
>>>> Cheers
>>>> James
>>>>
>
> Just a thought since I don't know the details:
>
>
> using fft interpolation might be faster, e.g. signal.resample
>
>>>> t = np.linspace(0,10,25)
>>>> x = np.sin(t)
>>>> t2 = np.linspace(0,10,50)
>>>> x2 = signal.resample(x,50)
>
> scipy.ndimage.interpolation  should also be faster, if there is something
> that does what you want.
>
> Josef
>
>
>
>
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> SciPy-User mailing list
>>>> SciPy-User <at> scipy.org
>>>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>>>
>>>
>>> _______________________________________________
>>> SciPy-User mailing list
>>> SciPy-User <at> scipy.org
>>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>>
>>>
>>> -------------------------------------------------
>>> James McCormac
>>> jmccormac01 <at> qub.ac.uk Astrophysics Research Centre
>>> School of Mathematics & Physics
>>> Queens University Belfast
>>> University Road,
>>> Belfast, U.K
>>> BT7 1NN,
>>> TEL: 028 90973509
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> SciPy-User mailing list
>>> SciPy-User <at> scipy.org
>>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>>
>>>
>>
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User <at> scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>>
>>
>>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User <at> scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>
>
