Evgeni Burovski | 25 Jul 19:54 2016

ANN: scipy 0.18.0 release

On behalf of the Scipy development team I am pleased to announce the
availability of Scipy 0.18.0. This release contains several great new
features and a large number of bug fixes and various improvements, as
detailed in the release notes below.
99 people contributed to this release over the course of six months.

Thanks to everyone who contributed!

This release requires Python 2.7 or 3.4-3.5 and NumPy 1.7.2 or
greater. Source tarballs and release notes can be found at

OS X and Linux wheels are available from PyPI. For the security-conscious,
the wheels are signed with my GPG key. Additionally, you can checksum
the wheels and verify the checksums against those listed in the README
file at




SciPy 0.18.0 Release Notes

Hamed Dadgour | 23 Jul 01:17 2016

Fwd: Scipy installation issue

Hi there,

I am trying to install SciPy after successfully installing NumPy on Python 3.6.
I tried to install SciPy from PyCharm by going to Settings -> Project Interpreter -> Add SciPy. However, I keep getting this error message (see the screenshot).

It seems that Python 3.6 can't find my Fortran compiler, even though I have already installed a Fortran compiler on my machine.

How do I tell PyCharm where it should look for the Fortran compiler?

Here are the gcc and gfortran versions that I have on my machine.

Hameds-MacBook-Air:~ hameddadgour$ gcc --version
Configured with: --prefix=/Library/Developer/CommandLineTools/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 7.0.2 (clang-700.1.81)
Target: x86_64-apple-darwin14.5.0
Thread model: posix

Hameds-MacBook-Air:~ hameddadgour$ gfortran --version
GNU Fortran (Homebrew gcc 6.1.0) 6.1.0
Copyright (C) 2016 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO


SciPy-Dev mailing list
SciPy-Dev <at> scipy.org
Schultz, Martin | 18 Jul 12:11 2016

Suggest addition to scipy.stats : Mann-Kendall test

Dear list,


     The routine below evaluates the Mann-Kendall test for non-parametrically checking the significance of a trend estimate. It would be a nice complement to the existing theilslopes routine and others in scipy.stats. I found this code when searching for "Mann Kendall python" and saw that the original link no longer exists. In my opinion it would be good to preserve this piece of work and make it available to others. I tested the routine on hundreds of datasets and it seemed to work well. This implementation was also compared against a few calculations with the Matlab implementation of the test and gave identical results.


Best regards,





# -*- coding: utf-8 -*-
"""
Created on Wed Jul 29 09:16:06 2015

<at> author: Michael Schramm
"""
from __future__ import division

import numpy as np
from scipy.stats import norm


def mk_test(x, alpha=0.05):
    """
    Mann-Kendall test for a monotonic trend.

    This function is derived from code originally posted by Sat Kumar
    Tomer (satkumartomer <at> gmail.com).
    See also: http://vsp.pnnl.gov/help/Vsample/Design_Trend_Mann_Kendall.htm

    The purpose of the Mann-Kendall (MK) test (Mann 1945, Kendall 1975,
    Gilbert 1987) is to statistically assess whether there is a monotonic
    upward or downward trend of the variable of interest over time.  A
    monotonic upward (downward) trend means that the variable consistently
    increases (decreases) through time, but the trend may or may not be
    linear.  The MK test can be used in place of a parametric linear
    regression analysis, which tests whether the slope of the estimated
    linear regression line differs from zero.  The regression analysis
    requires that the residuals from the fitted line be normally
    distributed; the MK test does not, i.e. it is a non-parametric
    (distribution-free) test.

    Hirsch, Slack and Smith (1982, page 107) indicate that the MK test is
    best viewed as an exploratory analysis and is most appropriately used
    to identify stations where changes are significant or of large
    magnitude and to quantify these findings.

    Parameters
    ----------
    x : array_like
        a vector of data
    alpha : float
        significance level (0.05 by default)

    Returns
    -------
    trend : str
        the trend ('increasing', 'decreasing' or 'no trend')
    h : bool
        True if a trend is present, False if it is absent
    p : float
        p-value of the significance test
    z : float
        normalized test statistic

    Examples
    --------
    >>> x = np.random.rand(100)
    >>> trend, h, p, z = mk_test(x, 0.05)
    """
    n = len(x)

    # calculate S
    s = 0
    for k in range(n - 1):
        for j in range(k + 1, n):
            s += np.sign(x[j] - x[k])

    # calculate the unique data
    unique_x = np.unique(x)
    g = len(unique_x)

    # calculate var(S); the tie correction is subtracted (Gilbert 1987)
    if n == g:  # there are no ties
        var_s = (n * (n - 1) * (2 * n + 5)) / 18
    else:  # there are ties in the data
        tp = np.zeros(unique_x.shape)
        for i in range(len(unique_x)):
            tp[i] = sum(unique_x[i] == x)
        var_s = (n * (n - 1) * (2 * n + 5) -
                 np.sum(tp * (tp - 1) * (2 * tp + 5))) / 18

    # continuity-corrected test statistic
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s == 0:
        z = 0
    else:  # s < 0
        z = (s + 1) / np.sqrt(var_s)

    # calculate the p-value (two-tailed test)
    p = 2 * (1 - norm.cdf(abs(z)))
    h = abs(z) > norm.ppf(1 - alpha / 2)

    if (z < 0) and h:
        trend = 'decreasing'
    elif (z > 0) and h:
        trend = 'increasing'
    else:
        trend = 'no trend'

    return trend, h, p, z
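Not part of Martin's message, but a quick cross-check of the routine's core: in the absence of ties, the MK statistic S relates to Kendall's tau between the data and the time index as tau = S / (n(n-1)/2), so scipy.stats.kendalltau can be used to sanity-check the S computation (the series below is my own example):

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.RandomState(0)
x = np.arange(50) + rng.rand(50)   # strictly increasing series, no ties
t = np.arange(len(x))

# The double loop from mk_test, computing S
s = sum(np.sign(x[j] - x[k]) for k in range(len(x) - 1)
        for j in range(k + 1, len(x)))

# Kendall's tau between time and data; for an untied, perfectly
# concordant series tau = S / (n*(n-1)/2) = 1
tau, p = kendalltau(t, x)
print(s, tau, p)
```

For this strictly increasing series every pair is concordant, so S equals n(n-1)/2 = 1225 and tau is 1.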




Fabian Rost | 18 Jul 11:07 2016

Adding a multistart algorithm to global optimizers in scipy.optimize

Hello everyone,

In GitHub issue #6381 I suggested adding a global optimizer algorithm to scipy.optimize. As <at> andyfaff pointed out, I should rather have started the discussion here. As I already have some feedback from <at> andyfaff, I'll copy-paste my idea and his reply here, and add my comments on his reply below.

My issue:

I regularly come across the following task: to find the global minimum of a function, I run local optimizers from multiple random initial guesses. For some situations, e.g. fitting parameters in systems biology, it was found that this type of algorithm performs better than others (Raue et al., 2013). I wonder if there is any interest in including such an algorithm in scipy. I wrote some code, documentation and 2 examples here:

<at> andyfaff's reply:

This is possibly something that would've been better to raise on the mailing list first. There are several approaches to the type of problem you described. Firstly (and probably best), you can use one of the global optimizers, optimize.basinhopping or optimize.differential_evolution; they are designed for this very task. They will be a lot more effective than brute search followed by local minimisation as the dimensionality of the problem increases.
Remember that you can do a brute search and polish using optimize.brute, although the grid search is done over a regular grid.
Finally, and this is not well known, you can do a brute search and polish (of the best solution) with differential_evolution. Here the brute search would examine the problem space by Latin hypercube sampling. You would probably want to increase popsize a bit. By setting maxiter to 0, no evolution is done on the population, and the best solution from the hypercube sampling is polished with L-BFGS-B.
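For concreteness, the brute-search-plus-polish trick described above can be sketched as follows (my own illustration, using scipy.optimize.rosen as a stand-in objective; the popsize and seed values are arbitrary):

```python
import numpy as np
from scipy.optimize import differential_evolution, rosen

bounds = [(-5.0, 5.0)] * 3

# maxiter=0 skips the evolution steps: the initial population is drawn
# by Latin hypercube sampling, and polish=True (the default) refines
# the best sampled point with L-BFGS-B.
result = differential_evolution(rosen, bounds, popsize=50, maxiter=0,
                                polish=True, seed=1)
print(result.x, result.fun)
```

The polished point is usually close to the Rosenbrock minimum at (1, 1, 1), though with no evolution steps this is only as good as the best hypercube sample plus one local minimisation.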

I do appreciate that this isn't the same as performing a local minimisation from all the brute-search locations, but I don't think that is going to outperform differential_evolution or basinhopping in the first place. basinhopping does a local minimisation step as part of its operation anyway.

my reply to that:

Thanks very much for your comments!

You point out that basinhopping or differential evolution will be a lot more effective than brute search with polishing, and that they probably will outperform a multistart algorithm.

Concerning that, I'd like to point to the paper Raue et al., 2013 once more. One of their problems is 11-dimensional. They compare the efficiency of multiple optimizers, among them differential evolution and a multistart optimizer (I think they do not include basinhopping or simulated annealing), and find that their multistart optimizer outperforms all other tested optimizers for the problem at hand. I should also mention that they perform their study using MATLAB, not Python. I found another study, performed in Mathematica, which identifies optimization problems for which a multistart algorithm gives better results than simulated annealing or differential evolution. So doesn't that mean your point concerning effectiveness does not hold in general?

I was aware of brute, but I was not aware of the brute search with differential_evolution; thanks. Let me mention that the Latin hypercube sampling points could of course also be used as initial guesses for multistart. That's what Raue et al., 2013 do, and it's probably better than my simple uniform-random implementation.

Let me also mention that multistart algorithms are available in Excel's Solver add-in, in MATLAB, in R and in Mathematica, but not in scipy.
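For readers following along, a bare-bones multistart loop (my own illustration, not the proposed implementation, which uses Latin hypercube starts) might look like this:

```python
import numpy as np
from scipy.optimize import minimize, rosen

rng = np.random.RandomState(42)
bounds = [(-5.0, 5.0)] * 2
lo, hi = np.array(bounds).T

# Multistart: run a local optimizer from several uniform-random starting
# points and keep the best local minimum found.
best = None
for _ in range(20):
    x0 = rng.uniform(lo, hi)
    res = minimize(rosen, x0, method='L-BFGS-B', bounds=bounds)
    if best is None or res.fun < best.fun:
        best = res
print(best.x, best.fun)
```

On the 2-D Rosenbrock function, most starts converge to the global minimum at (1, 1), so the best of 20 runs lands there with near-zero objective value.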



Fabian Rost (Dipl.-Phys.)
Wissenschaftlicher Mitarbeiter
Technische Universität Dresden
Centre for Information Services and High Performance Computing (ZIH)
Dept. for Innovative Methods of Computing (IMC)
01062 Dresden
Tel.: +49 (351) 463-38780
Fax: +49 (351) 463-38245
E-Mail: fabian.rost <at> tu-dresden.de
Web: http://imc.zih.tu-dresden.de/

Cross Compiling SciPy?

Hello all -- apologies if similar question(s) have been asked before, 
but I didn't see anything in the archives.

I'm trying to compile NumPy + SciPy for an embedded target -- PowerPC 
target CPU, x86-64 host machine.  The build system I'm using is based on 
Yocto/OpenEmbedded, basically a fairly standard cross compile setup.   
I'm using tagged versions from the git repositories, specifically 
v1.10.4 of NumPy and v0.17.1 of SciPy, and Python 2.7 as provided by 
the upstream "Poky" distribution.  So far, NumPy has been relatively 
painless; the "setup.py" script seems to obey the BUILD_SYS/HOST_SYS 
variables and successfully builds for the target architecture.

SciPy, on the other hand, is more problematic.  The provided "setup.py" 
script was not even getting started because it attempts this:

         from numpy.distutils.core import setup

But the "setup.py" script is actually running on the host machine in 
this case, so it is importing the setup from the host installation, NOT 
the target's numpy installation.  If I install numpy on the host machine 
as well, this import now works, but it creates a mixture of binaries -- 
the build fails later on when it attempts to link binaries built using 
the native gcc with other binaries built using the cross compiler.  I've 
also tried the previous release (v0.16.1 tag) with similar results.

Has anyone successfully cross compiled SciPy that could provide some 
guidance here?  Is there anything special I need to do with the NumPy 
cross installation such that the SciPy build can find it?

Thanks in advance
Nicolas P. Rougier | 15 Jul 15:48 2016

100 Numpy exercises complete !

Hi all,

It's my great pleasure to announce that "100 Numpy exercises" is now complete.
I've also made a notebook out of them so that you can test them on binder.


If you spot errors or have better solutions to propose, PRs are welcome.
(I'm still fighting to fix exercise #54, which does not work as expected.)

Ralf Gommers | 27 Jun 00:43 2016

welcome Nikolay to the core team

Hi all,

On behalf of the Scipy developers I'd like to welcome Nikolay Mayorov as member of the core dev team.

Nikolay started contributing as a GSoC student last year, then he added optimize.lsq_linear/least_squares. Since then he's made many other contributions, for example: optimize.solve_bvp, interpolate.CubicSpline and a nearest neighbor chain algorithm to speed up cluster.linkage. See https://github.com/scipy/scipy/pulls/nmayorov
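As a tiny illustration of one of the contributions mentioned above (my own example): interpolate.CubicSpline builds a twice-differentiable piecewise-cubic interpolant, and with its default 'not-a-knot' boundary conditions it reproduces polynomial data up to degree 3 exactly.

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.arange(5)
y = x ** 2                # quadratic data

cs = CubicSpline(x, y)    # default bc_type='not-a-knot'
print(cs(2.5))            # reproduces the quadratic: 6.25
```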

Keep them coming Nikolay!


<at> other-newer-core-devs: if you're wondering if we all agreed on offering Nikolay commit rights, that happened already half a year ago via email (it just took a while before the acceptance came in:)).

David Shi | 25 Jun 23:10 2016

How best to turn JSON into a CSV or Pandas data frame table?

What are the best ways to turn a JSON object into a CSV file or a Pandas DataFrame?

Looking forward to hearing from you.
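Not from the thread, but for a flat list of JSON records the usual answer is json.loads plus the DataFrame constructor (pandas also ships a json_normalize helper for nested structures); the data below is a made-up example:

```python
import json
import pandas as pd

text = '[{"name": "a", "value": 1}, {"name": "b", "value": 2}]'

# Parse the JSON into a list of dicts, build a DataFrame, write CSV text.
records = json.loads(text)
df = pd.DataFrame(records)
csv_text = df.to_csv(index=False)
print(csv_text)
```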


Andrew Nelson | 22 Jun 05:08 2016

Re: [scipy/scipy] 0.18.0 release candidate 1

The release candidate passes all the tests on my machine.

OSX 10.9.5
python 3.4.4 (conda)
numpy 1.12.0.dev0+6ab89b8

On 20 June 2016 at 23:17, Evgeni Burovski <notifications <at> github.com> wrote:

This is the first release candidate for scipy 0.18.0. See https://github.com/scipy/scipy/blob/maintenance/0.18.x/doc/release/0.18.0-notes.rst for the release notes.

Please note that this is a source-only release.

If no issues are reported for this release, it will become the final 0.18.0 release. Issues can be reported via Github or on the scipy-dev mailing list (see http://scipy.org/scipylib/mailing-lists.html).


Dr. Andrew Nelson

Evgeni Burovski | 20 Jun 14:51 2016

maintenance/0.18.x tagged


0.18.x has been tagged and master is now open for 0.19.0 development.
I'll upload the source tarballs to GH releases shortly.


Bhavika Tekwani | 19 Jun 11:18 2016

Review pull request #6285: BUG: stats: Inconsistency in the multivariate_normal docstring #6263

Hey everyone,

I just submitted a pull request attempting to close issue #6263, regarding the pdf and logpdf methods of multivariate_normal.
As I understood it, the bug was simply the absence of default parameters in the pdf and logpdf methods within the class multivariate_normal_gen.

I've submitted this change along with a test for default values. 
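For reference (my own example, not taken from the PR), the defaults in question mean the top-level methods should behave like a standard normal when mean and cov are omitted:

```python
import numpy as np
from scipy.stats import multivariate_normal

# With the defaults in place, pdf can be called without mean/cov and
# should agree with the explicit standard-normal call.
p_default = multivariate_normal.pdf(0.0)
p_explicit = multivariate_normal.pdf(0.0, mean=0.0, cov=1.0)
print(p_default, p_explicit)   # both equal 1/sqrt(2*pi)
```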

Please see the PR here - https://github.com/scipy/scipy/pull/6285

I hope this is correct and I welcome guidance if it is not. 

Thank you,
Bhavika T. 
