Cera, Tim | 3 Mar 16:30 2015

GSoC ideas

These are ideas I wanted to tackle myself, but they might work better as part of the GSoC program.

1. In the scipy.ndimage package there are many functions that have a 'mode' option to define the padding used to minimize edge effects.  I think it would be better to use the pad function in numpy.  With this approach, as improvements are made to the numpy pad function, the scipy.ndimage package benefits.
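To make the idea concrete, here is a hedged sketch: pad explicitly with np.pad, filter, then crop, instead of relying on ndimage's internal boundary handling. Note the naming mismatch between the two packages (ndimage's 'reflect' corresponds to np.pad's 'symmetric', and ndimage's 'mirror' to np.pad's 'reflect'); a real unification would have to reconcile these names.

```python
import numpy as np
from scipy import ndimage

a = np.arange(25, dtype=float).reshape(5, 5)

# pad explicitly with numpy, filter with a trivial mode, then crop;
# the crop removes every cell that could see the constant fill values
padded = np.pad(a, 1, mode='symmetric')
filtered = ndimage.uniform_filter(padded, size=3, mode='constant')
result = filtered[1:-1, 1:-1]

# should agree with letting ndimage do the padding internally
direct = ndimage.uniform_filter(a, size=3, mode='reflect')
print(np.allclose(result, direct))  # → True
```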

2. Something that could be useful to me is to update ODRPACK to ODRPACK95.  ODRPACK95 can be found at http://www.netlib.org/toms/869.zip.  At the same time, I suggest implementing the new odr in scipy.optimize.  The license is not defined in the ODRPACK95 software, but I found this in the netlib FAQ:

Most netlib software packages have no restrictions on their use but we recommend you check with the authors to be sure. Checking with the authors is a nice courtesy anyway since many authors like to know how their codes are being used.
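For context, a minimal use of the existing scipy.odr (ODRPACK) wrapper, which an ODRPACK95 upgrade would presumably want to keep working:

```python
import numpy as np
from scipy.odr import ODR, Data, Model

def linear(beta, x):
    # model y = beta[0] * x + beta[1]
    return beta[0] * x + beta[1]

rng = np.random.RandomState(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

fit = ODR(Data(x, y), Model(linear), beta0=[1.0, 0.0]).run()
print(fit.beta)  # estimated [slope, intercept], close to [2.0, 1.0]
```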

Kindest regards,

SciPy-Dev mailing list
SciPy-Dev <at> scipy.org
Robert McGibbon | 3 Mar 11:38 2015

Facilitate navigating to latest version of the docs


I've noticed that when googling for the documentation of a scipy function, I often get the docs for that function from a mix of different versions of scipy. Furthermore, on the mailing list it's fairly common for people to ask questions based on the docstrings of an older version of scipy (this might be because they're using an older version of scipy, but I think in many cases it's simply what came up in their search).

In the web documentation for scikit-learn, the version that you're browsing is displayed somewhat prominently. Also for particularly old versions of the docs, a red bar at the top of the screen lets you know that you're browsing an outdated version, and offers a link to the latest stable version. See this page, for example.

Something similar might be a good idea for the scipy documentation.

Paul Nation | 2 Mar 06:49 2015

Updates to VODE and ZVODE solvers in single step mode

When using the single-step mode in either the VODE or ZVODE ode solver, the default mode (2) called in:

    def step(self, *args):
        itask = self.call_args[2]
        self.call_args[2] = 2 # Step mode is here
        r = self.run(*args)
        self.call_args[2] = itask
        return r

results in taking a single step that (typically) goes beyond the output time requested in the solver.  When doing, for example, Monte Carlo algorithms, this leads to a big performance hit, because one must take a step back, reset the solver, and then use the normal mode to go to the requested stop time.  Instead, these solvers support a mode (5) that will never step beyond the end time.  The modified step function in that case is:

    def step(self, *args):
        itask = self.call_args[2]
        self.rwork[0] = args[4]  # set the stop time
        self.call_args[2] = 5    # single-step mode that stops at the requested time
        r = self.run(*args)
        self.call_args[2] = itask
        return r

Currently, in order to implement this, one needs to create their own ODE integrator subclass of VODE or ZVODE, overload the step function, create an ode instance, and finally attach the custom integrator via ode._integrator.  I think supporting both options natively would be a nice thing to have in SciPy.
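A hedged sketch of that workaround, spelled out end to end (it leans on private attributes -- _integrator, call_args, rwork -- so it is tied to scipy's current internals and may break between versions):

```python
import numpy as np
from scipy.integrate import ode

try:  # the integrator classes are private; their module has moved over time
    from scipy.integrate._ode import vode
except ImportError:
    from scipy.integrate.ode import vode

# subclass the vode integrator and overload step() to use itask=5,
# with rwork[0] as the stop time the solver must never pass
class vode_stoptime(vode):
    def step(self, *args):
        itask = self.call_args[2]
        self.rwork[0] = args[4]  # tout passed to run() becomes the stop time
        self.call_args[2] = 5    # itask=5: single step, never past rwork[0]
        r = self.run(*args)
        self.call_args[2] = itask
        return r

def rhs(t, y):
    return -y

solver = ode(rhs).set_integrator('vode')
solver.set_initial_value([1.0], 0.0)
# attach the custom integrator by hand, as described above
solver._integrator = vode_stoptime(method='adams')
solver._integrator.reset(len(solver._y), has_jac=False)

for _ in range(1000):  # bounded loop, in case the private API shifts
    if not solver.successful() or solver.t >= 1.0:
        break
    solver.integrate(1.0, step=True)
print(solver.t)  # never beyond the requested stop time of 1.0
```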

In addition, often it is not necessary to do a full reset of the ode solver using ode.reset().  Often one just needs to change the RHS vector (and possibly the time) and set the flag for the solver to start anew (ode._integrator.call_args[3] = 1).  This too results in a large performance benefit for things like Monte Carlo solving.  Right now I need to call

ode._y = new_vec
ode._integrator.call_args[3] = 1

when I want to accomplish this.  Adding support for a “fast reset” might also be a good thing to have in SciPy.  
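A sketch of what such a "fast reset" could look like, wrapped in a helper (the helper name is hypothetical, and it pokes at the private _y and _integrator.call_args attributes, so it depends on scipy's current internals rather than any public API):

```python
import numpy as np
from scipy.integrate import ode

def fast_reset(solver, new_y, new_t=None):
    """Restart the integrator from a new state without a full ode.reset()."""
    solver._y = np.asarray(new_y)
    if new_t is not None:
        solver.t = new_t
    solver._integrator.call_args[3] = 1  # istate=1: tell VODE to start anew

def rhs(t, y):
    return -y

solver = ode(rhs).set_integrator('vode')
solver.set_initial_value([1.0], 0.0)
solver.integrate(1.0)

fast_reset(solver, [2.0], new_t=0.0)  # re-run from y0=2 without ode.reset()
solver.integrate(1.0)
print(solver.y)  # approx [2 * exp(-1)]
```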

All of the code to accomplish such things is already being used in the QuTiP Monte Carlo solver (https://github.com/qutip/qutip/blob/master/qutip/mcsolve.py) and would therefore be fairly painless to add to SciPy.

Best regards,


Julian Taylor | 1 Mar 22:05 2015

ANN: NumPy 1.9.2 bugfix release


We are pleased to announce the release of NumPy 1.9.2, a
bugfix-only release for the 1.9.x series.
The tarballs and win32 binaries are available on SourceForge;
PyPI also contains the wheels for Mac OS X.

The upgrade is recommended for all users of the 1.9.x series.

The following issues have been fixed:
* #5316: fix too large dtype alignment of strings and complex types
* #5424: fix ma.median when used on ndarrays
* #5481: Fix astype for structured array fields of different byte order
* #5354: fix segfault when clipping complex arrays
* #5524: allow np.argpartition on non-ndarrays
* #5612: Fixes ndarray.fill to accept full range of uint64
* #5155: Fix loadtxt with comments=None and a string None data
* #4476: Masked array view fails if structured dtype has datetime component
* #5388: Make RandomState.set_state and RandomState.get_state threadsafe
* #5390: make seed, randint and shuffle threadsafe
* #5374: Fixed incorrect assert_array_almost_equal_nulp documentation
* #5393: Add support for ATLAS > 3.9.33.
* #5313: PyArray_AsCArray caused segfault for 3d arrays
* #5492: handle out of memory in rfftf
* #4181: fix a few bugs in the random.pareto docstring
* #5359: minor changes to linspace docstring
* #4723: fix a compile issue on AIX

The NumPy Developer team

Felix Berkenkamp | 1 Mar 16:47 2015

Split signal.lti class into subclasses

Hi everyone,

I started looking into improving the signal.lti class following the 
issue discussed at

The pull request can be found here:

The main idea is to split the lti class into ss, tf, and zpk subclasses.
Instantiating the lti class itself returns an instance of one of these three subclasses.

Advantages:
* No redundant information (the lti class currently holds the information of all 3 classes)
* Reduce overhead (creating 3 system representations)
* Switching between the different subclasses is more explicit: obj.ss(), 
obj.tf(), obj.zpk()
* Avoids one huge class for everything
* Is fully backwards compatible (as far as I can tell)
* Similar to what Octave / Matlab does (easier to convert code from 
there to scipy)

Drawbacks:
* Accessing properties that are not part of the subclass is more expensive (e.g.: sys = ss(1, 1, 1, 1); sys.num --> this now returns

Any suggestions / comments / things I've broken?
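For readers following along, the explicit conversions can already be done with the module-level helpers; the obj.ss()/obj.tf()/obj.zpk() method names above are the proposal's, not the released API:

```python
import numpy as np
from scipy import signal

num, den = [1.0], [1.0, 2.0, 1.0]   # H(s) = 1 / (s + 1)**2

A, B, C, D = signal.tf2ss(num, den)   # transfer function -> state space
z, p, k = signal.tf2zpk(num, den)     # transfer function -> zeros/poles/gain
print(p)  # double pole at s = -1

num2, den2 = signal.ss2tf(A, B, C, D)  # and back again
print(np.allclose(den2, den))  # → True
```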

josef.pktd | 1 Mar 04:43 2015


Advertised by Jake's blog post (*), NUFFT looks interesting


"...it's way faster than Lomb-Scargle!"

something for scipy, or do we have to wait for Fortran 90?
or maybe it's just a bit of cython.

(*) https://jakevdp.github.io/blog/2015/02/24/optimizing-python-with-numpy-and-numba/

Nikolay Mayorov | 28 Feb 19:04 2015

GSOC Optimization Project

Hi! I want to clarify some things about the GSoC project idea related to the Levenberg-Marquardt algorithm.

1) Why do we want anything but the current leastsq based on MINPACK?

Looks like it is answered here: https://github.com/scipy/scipy/pull/90 

"When we call python from FORTRAN, a lot of magic has to be done. This magic prevents us, for example, to properly pass exceptions through the FORTRAN

Could you comment more on that perhaps?

2) What's wrong with https://github.com/scipy/scipy/pull/90? Why is it stalled? What do you expect a GSoC student to do better / differently?

Again partially answered in PR: "It's stalled: the algorithmic part is OK, the new interfaces proposed controversial.", "However, this could perhaps be extended to Levenberg-Marquardt supporting sparse Jacobians"

3) Based on 2: how should a GSoC student proceed with the interface issue? I mean, there weren't any strong opinions, and it has been on the list for so long. I have no idea how to come up with a good solution all of a sudden.

4) Do you believe that code written during GSoC should be based on the PR mentioned?


That's what I've come up with so far for the work during GSoC:

- Decide on interface part
- Add new features to the PR from pv (probably just one of them): 
   Sparse Jacobians support
   Constraints support
- Implement a solid test suite
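To make the algorithm under discussion concrete, here is a bare-bones Levenberg-Marquardt iteration; MINPACK and the PR add trust-region scaling, proper stopping criteria, and so on, and all names here are illustrative only:

```python
import numpy as np

def lm(residual, jacobian, x0, lam=1e-3, n_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        J = jacobian(x)
        # damped normal equations: (J^T J + lam*I) step = -J^T r
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.sum(residual(x + step) ** 2) < np.sum(r ** 2):
            x = x + step
            lam *= 0.5   # good step: behave more like Gauss-Newton
        else:
            lam *= 2.0   # bad step: behave more like gradient descent
    return x

# fit y = a * exp(b * t) to noiseless data; should recover a=2, b=-1
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.0 * t)
res = lambda x: x[0] * np.exp(x[1] * t) - y
jac = lambda x: np.column_stack([np.exp(x[1] * t),
                                 x[0] * t * np.exp(x[1] * t)])
x_fit = lm(res, jac, [1.0, 0.0])
print(x_fit)  # approx [2.0, -1.0]
```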


I would appreciate your answers,


Freddy Rietdijk | 26 Feb 15:33 2015

Spherical Harmonics and Condon-Shortley phase


I'm working with auralization and Ambisonics, and the directivity patterns used with Ambisonics are spherical harmonics. Scipy has an implementation, scipy.special.sph_harm. Several definitions of spherical harmonics exist, however, and the documentation does not specify which one is implemented.

A common definition that is used in quantum-mechanics includes the Condon-Shortley phase, which is a (-1)**m factor.

For my purpose, Ambisonics, I need spherical harmonics without this factor.
I found the code, which uses external functions, quite difficult to read. I did see `(-1)**mp`, but I'm not sure whether this really is the CS phase or not.
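One empirical check, rather than reading the code: compare sph_harm against the explicit formula for Y_1^1 *with* the Condon-Shortley phase. This assumes scipy's argument order sph_harm(m, n, theta, phi) with theta the azimuthal and phi the polar angle; if the printed value is True, the CS phase is included.

```python
import numpy as np
from scipy.special import sph_harm

m, n = 1, 1
theta, phi = 0.3, 1.1  # arbitrary azimuthal and polar angles

y = sph_harm(m, n, theta, phi)
# Y_1^1 with the CS phase: -(1/2) sqrt(3/(2 pi)) sin(phi) exp(i theta)
y_cs = -0.5 * np.sqrt(3.0 / (2.0 * np.pi)) * np.sin(phi) * np.exp(1j * theta)
print(np.isclose(y, y_cs))

# if the phase is included, stripping it is a simple (-1)**m factor
y_no_cs = (-1) ** m * y
```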

Who knows which definition is used in `sph_harm`? 


Robert Lucente | 24 Feb 20:11 2015

Slow moment function in scipy.stats

Why must it be either/or? Why not have two options, specified via a parameter:

1) Fast but inaccurate
2) Slow but more accurate
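A hedged sketch of that two-option idea; the parameter name 'method' and its values are hypothetical, not an existing scipy.stats API:

```python
import numpy as np

def moment(a, k, method='fast'):
    d = np.asarray(a) - np.mean(a)
    if method == 'accurate':
        return np.mean(np.power(d, k))  # 0.5 ulp accuracy, but slow
    # fast path: exponentiation by squaring, O(log k) multiplications
    result = np.ones_like(d)
    base, e = d, k
    while e > 0:
        if e % 2:
            result = result * base
        base = base * base
        e //= 2
    return np.mean(result)

x = np.random.RandomState(1).normal(size=1000)
print(np.isclose(moment(x, 4, 'fast'), moment(x, 4, 'accurate')))  # → True
```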
stefan | 24 Feb 12:42 2015

Re: Slow moment function in scipy.stats

Hi Ralf and Julian,

I have created a pull request (I hope; very limited experience with git and github). After doing so, I saw Julian's post. Indeed, pow is very accurate. I would argue that for all realistic uses of the moment function the inaccuracies will be insignificant, but this is certainly up for discussion.

BR, Stefan

On 02/24/2015 08:17 AM, Ralf Gommers wrote:
> Hi Stefan,
>
> On Thu, Feb 19, 2015 at 12:55 PM, stefan <stefan.peterson <at> rubico.com> wrote:
>
>> Hello list,
>>
>> First time poster here. Anyway, some time ago I noticed that the scipy
>> skewness function was a major bottleneck in an algorithm of mine. Back
>> then, I typed up my own replacement and thought no more about it. Today,
>> for some unknown reason, I decided to dig a little deeper into this and
>> found the major culprit to be the way moments are computed, specifically
>> the use of np.power.
>
> np.power is indeed slow, see for explanations:
> http://stackoverflow.com/questions/25254541/why-is-numpy-power-60x-slower-than-in-lining
> http://stackoverflow.com/questions/26770996/why-is-numpy-power-slower-for-integer-exponents
>
> I'd say that replacing one call to np.power with 6 lines of code to
> achieve a ~10x speedup is a good tradeoff. Pull request is welcome :)

Julian's post, quoted from the same thread:

> it's faster but also less accurate; the reason pow is so slow is that it
> has an accuracy of 0.5 ulp regardless of input, which a multiplication
> does not. But the argument can certainly be made that this is irrelevant,
> especially as the moment computation performed here is numerically not
> that stable in the first place (uncorrected loss of significance in the
> subtraction; mean has an error on the order of the array size ...).
> Maybe numpy should have a special integer case in power for such
> situations?
Ralf Gommers | 24 Feb 06:47 2015

scipy.signal function naming


Historically many names in scipy.signal have followed Matlab, which typically chooses short but very nondescriptive names. I would prefer to not keep doing that but instead choose readable names that fit with the rest of Scipy and Python. Recent examples from PRs, with in brackets a proposed alternative:

grpdelay (group_delay) - https://github.com/scipy/scipy/pull/4549
place (place_poles) - https://github.com/scipy/scipy/pull/4295
ctrb (controllability_matrix) - https://github.com/scipy/scipy/pull/4515
obsv (observability_matrix) - https://github.com/scipy/scipy/pull/4515
upfirdn (rate_resample ?) - just discussed on list


