Björn Dahlgren | 7 Apr 17:12 2014

Proposed new function for joining arrays: np.interleave


Interleaving arrays is something I need to do every now and then, and by the looks of Stack Overflow so do others.

I think the code needed for the general n-dimensional case with m arrays
is non-trivial enough that it would be useful for NumPy to provide such a function, so I took the liberty
of opening a pull request with my implementation. This would be my first contribution to NumPy, so I apologize if anything does not adhere to your practices.

Jaime has already pointed out that I should email this list (I hope I managed to do so
correctly; I have never used a mailing list before) so that you notice the pull request. He also
suggested some improvements to my proposed implementation (not forcing a consistent dtype), but before
I go ahead and make changes I should ask: is this really something you are interested in including?
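For reference, a minimal 1-D sketch of what such a function could look like (the name `interleave` and these exact semantics are only my proposal; the PR handles the general n-dimensional case):

```python
import numpy as np

def interleave(arrays):
    """Interleave equal-length 1-D arrays element by element (sketch)."""
    arrays = [np.asarray(a) for a in arrays]
    n = len(arrays)
    # Result dtype follows the usual promotion rules across all inputs.
    out = np.empty(n * arrays[0].size, dtype=np.result_type(*arrays))
    for i, a in enumerate(arrays):
        out[i::n] = a  # strided assignment places every n-th element
    return out

print(interleave([np.array([1, 3, 5]), np.array([2, 4, 6])]))  # [1 2 3 4 5 6]
```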

Best regards,
/Björn Dahlgren

NumPy-Discussion mailing list
NumPy-Discussion <at>
Yaroslav Halchenko | 6 Apr 23:43 2014

match RNG numbers with R?

Hi NumPy gurus,

We wanted to test some of our code by comparing against the results of an R
implementation which provides bootstrapped results.

R, the Python standard library, and numpy all have Mersenne Twister RNG implementations, but
all of them generate different numbers. This issue has been discussed before. In Python and numpy, the generated
numbers are built from 53 bits of two 32-bit random integers produced by
the algorithm (see below). Upon my brief inspection, the original 32-bit numbers
are not accessible in either NumPy or the Python stdlib.

I wonder if I have missed something and there is an easy way (i.e. without
reimplementing the core algorithm, or RPy'ing numbers from R) to generate random
numbers in Python that match the ones in R?

Excerpt from an IPython session:

# R
%R RNGkind("Mersenne-Twister"); set.seed(1); sample(0:9, 10, replace=T)

array([2, 3, 5, 9, 2, 8, 9, 6, 6, 0], dtype=int32)

# stock Python
random.seed(1); [random.randint(0, 10) for i in range(10)]

[1, 9, 8, 2, 5, 4, 7, 8, 1, 0]

# numpy
rng = nprandom.RandomState(1);  [rng.randint(0, 10) for i in range(10)]

[5, 8, 9, 5, 0, 0, 1, 7, 6, 9]
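For reference, the 53-bit construction mentioned above can be sketched as follows; this is the genrand_res53 recipe from the reference Mersenne Twister implementation, which CPython's random.random() follows. Matching R would still require the raw 32-bit outputs, which neither NumPy nor the stdlib exposes:

```python
def res53(a, b):
    """Combine two raw 32-bit Mersenne Twister outputs a, b into one
    53-bit double in [0, 1) (genrand_res53 from the reference MT19937)."""
    a >>= 5   # keep the top 27 bits of a
    b >>= 6   # keep the top 26 bits of b
    # 67108864 == 2**26, 9007199254740992 == 2**53
    return (a * 67108864.0 + b) * (1.0 / 9007199254740992.0)

print(res53(0, 0))  # 0.0
```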


Yaroslav O. Halchenko, Ph.D.
Senior Research Associate,     Psychological and Brain Sciences Dept.
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834                       Fax: +1 (603) 646-1419
Julian Taylor | 6 Apr 22:05 2014

numpy git master requiring cython for build

numpy.random is largely built from a Cython file. Up to now, numpy git
has included the generated C sources for this one file.
It is troublesome to have to merge 20k-line changes for one-line bugfixes,
so it is proposed to remove the generated sources from the master branch
in this PR:

As in SciPy, the sources will no longer be kept in git but will be generated
by the regular build when required.
Release source tarballs (sdist) will continue to contain the generated
sources, so users of stable releases are not required to have Cython.

If you have any objections to this, please speak up soon.

Francesc Alted | 6 Apr 12:51 2014

ANN: numexpr 2.4 RC1

  Announcing Numexpr 2.4 RC1

Numexpr is a fast numerical expression evaluator for NumPy.  With it,
expressions that operate on arrays (like "3*a+4*b") are accelerated
and use less memory than doing the same calculation in Python.

It has multi-threaded capabilities, as well as support for Intel's
MKL (Math Kernel Library), which allows extremely fast evaluation
of transcendental functions (sin, cos, tan, exp, log...) while
squeezing the last drop of performance out of your multi-core
processors.  Look here for some benchmarks of numexpr using MKL:

Its only dependency is NumPy (MKL is optional), so it works well as an
easy-to-deploy, easy-to-use, computational engine for projects that
don't want to adopt other solutions requiring more heavy dependencies.

What's new

A new `contains()` function has been added for detecting substrings in
strings.  Thanks to Marcin Krol.

Also, there is a new version of that allows better management
of the NumPy dependency during pip installs.  Thanks to Aleks Bunin.

This is the first release candidate before 2.4 final is out,
so please give it a go and report back any problems you may have.

In case you want to know more in detail what has changed in this
version, see:

or have a look at RELEASE_NOTES.txt in the tarball.

Where can I find Numexpr?

The project is hosted at GitHub in:

You can get the packages from PyPI as well (but not for RC releases):

Share your experience

Let us know of any bugs, suggestions, gripes, kudos, etc. you may have.

Enjoy data!


Francesc Alted
Charles R Harris | 4 Apr 20:12 2014

Where to put versionadded

Hi All,

Currently there are several placements of the '.. versionadded::' directive and I'd like to settle
on a proper style for consistency. It is used on two occasions: first, when a new function or class is added, and second, when a new keyword is added to an existing function or method. The options are as follows.

New Function

1) Originally, the directive was added in the notes section:

.. versionadded:: 1.5.0

2) Alternatively, it is placed after the extended summary:

blah, blah

.. versionadded:: 1.5.0

Between these two, I vote for 2) because the version is easily found when reading the documentation either in a terminal or rendered into HTML.

New Parameter

1) It is placed before the parameter description

newoption : int, optional
    .. versionadded:: 1.5.0

2) It is placed after the parameter description.

newoption : int, optional

    .. versionadded:: 1.5.0

Both of these render correctly, but the first is more compact while the second puts the version
after the description where it doesn't interrupt the reading. I'm tending towards 1) on account of its compactness.
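Combining those two leanings (option 2 for new functions, option 1 for new parameters), a docstring would look like the sketch below (the function and parameter names are made up for illustration):

```python
def fancy_function(x, newoption=0):
    """Do something fancy.

    blah, blah

    .. versionadded:: 1.5.0

    Parameters
    ----------
    x : array_like
        Input values.
    newoption : int, optional
        .. versionadded:: 1.5.0
        Controls the fanciness.
    """
    return x
```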



Ralf Gommers | 3 Apr 22:14 2014

ANN: Scipy 0.14.0 release candidate 1


I'm pleased to announce the availability of the first release candidate of Scipy 0.14.0. Please try this RC and report any issues on the scipy-dev mailing list. A significant number of fixes for scipy.sparse went in after the beta release, so users of that module may want to test this release carefully.

Source tarballs, binaries and the full release notes can be found at:

The final release will follow in one week if no new issues are found.

A big thank you to everyone who contributed to this release!


Neal Becker | 3 Apr 15:35 2014

mtrand normal sigma >= 0 too restrictive

Traceback (most recent call last):
  File "./", line 1694, in <module>
    run_line (sys.argv)
  File "./", line 1690, in run_line
    return run (opt, cmdline)
  File "./", line 1115, in run
    (xbits, freq=freqs[i]+burst.freq_offset, tau=burst.time_offset,
  File "/home/nbecker/hn-inroute-fixed/", line 191, in __call__
    self.channel_out, self.complex_channel_gain = (mix_out)
  File "./", line 105, in __call__
    ampl = 10**(0.05*self.pwr_gen())
  File "./", line 148, in __call__
    pwr = self.gen()
  File "./", line 124, in __call__
    x = self.gen()
  File "/home/nbecker/sigproc.ndarray/", line 11, in __call__
    return (self.mean, self.std, size)
  File "mtrand.pyx", line 1479, in mtrand.RandomState.normal 
ValueError: scale <= 0

I believe this restriction is too restrictive; the check should only reject
scale < 0.

There is nothing wrong with scale == 0 as far as I know.  It's a convenient way
to turn off the noise in my simulation.
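In the meantime, a minimal workaround sketch (the helper name `normal_or_const` is made up) that treats std == 0 as "noise off":

```python
import numpy as np

def normal_or_const(mean, std, size=None, rng=np.random):
    # Hypothetical helper: return the mean directly when std == 0,
    # sidestepping the ValueError that normal() raises for scale == 0.
    if std == 0:
        return np.full(size if size is not None else (), float(mean))
    return rng.normal(mean, std, size)

print(normal_or_const(2.0, 0.0, size=3))  # [2. 2. 2.]
```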
Leah Silen | 2 Apr 23:29 2014

PyData SV Proposals

Be sure to get your proposals in for PyData Silicon Valley 2014 by April 6!

The event brings together scientists, analysts, developers, engineers, architects and others from the Python data science community to discuss new techniques and tools for management, analytics and visualization of data. Presentation content can be at a novice, intermediate or advanced level. Talks will run 30-40 minutes, and hands-on tutorial and workshop sessions will run 90 minutes or 3 hours.

If you are interested in presenting a talk, tutorial or sponsor workshop on meeting the challenges of big data using Python, please submit a SHORT abstract and bio. If you don't have time to create an abstract, feel free to submit a presentation outline. To see the type of topics presented at previous PyData events, please look at our past conference sites at or check out the videos on

Submit Your Proposal

  • Registration will be complimentary for speakers.
  • Any questions can be directed to admin <at>
  • Conference dates are May 2-4.
  • Proposal deadline has been extended through April 6, 2014.
Leah Silen

mbyt | 2 Apr 21:47 2014

structured array assignments


I am writing due to an issue in structured array assignments. Let's
consider the following example:

    In [31]: x = np.empty(1, dtype=[('field', 'i4', 10)])

    In [32]: type(x[0])
    Out[32]: numpy.void

    In [33]: x[0] = np.ones(10, dtype='i4').view('V40')

    In [34]: x
    array([([1, 1, 1, 1, 1, 1, 1, 1, 1, 1],)], 
          dtype=[('field', '<i4', (10,))])

    In [42]: type(x[0]['field'])
    Out[42]: numpy.ndarray

    In [43]: x[0]['field'] = np.zeros(10, dtype='i4')

    In [44]: x
    array([([0, 1, 1, 1, 1, 1, 1, 1, 1, 1],)], 
          dtype=[('field', '<i4', (10,))])

The assignments in both cases (In [33] and In [43]) call the C function
voidtype_ass_subscript, but the second call produces an unexpected
result: it only copies the first element. In [43] works if a slice
assignment (C function array_assign_slice) is forced by a left-hand
[:] (that is, x[0]['field'][:]).
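The workaround just described can be sketched as:

```python
import numpy as np

# Forcing a slice assignment with a left-hand [:] goes through
# array_assign_slice and copies all ten elements, not just the first.
x = np.empty(1, dtype=[('field', 'i4', 10)])
x[0]['field'][:] = np.zeros(10, dtype='i4')
print(x[0]['field'])  # [0 0 0 0 0 0 0 0 0 0]
```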

The question now is: What is the general opinion about forcing slice
assignments for ndarray types and raising otherwise?


Haslwanter Thomas | 1 Apr 20:27 2014

Standard Deviation (std): Suggested change for "ddof" default value

While most other Python packages (scipy, pandas) use the default "ddof=1" when calculating the standard deviation (i.e. they calculate the sample standard deviation), the NumPy implementation uses the default "ddof=0".

Personally I cannot think of many applications where it would be desirable to calculate the standard deviation with ddof=0. In addition, I feel that there should be consistency between standard modules such as numpy, scipy, and pandas.


I am wondering if there is a good reason to stick to “ddof=0” as the default for “std”, or if others would agree with my suggestion to change the default to “ddof=1”?
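To make the difference concrete, the two conventions on a small sample:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
# Sum of squared deviations from the mean (2.5) is 5.0.
print(np.std(a))          # ddof=0 (population): sqrt(5/4) ~ 1.118
print(np.std(a, ddof=1))  # ddof=1 (sample):     sqrt(5/3) ~ 1.291
```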




Prof. (FH) PD Dr. Thomas Haslwanter

School of Applied Health and Social Sciences

University of Applied Sciences Upper Austria
FH OÖ Studienbetriebs GmbH
Garnisonstraße 21
4020 Linz/Austria
Tel.: +43 (0)5 0804 -52170
Fax: +43 (0)5 0804 -52171
E-Mail: Thomas.Haslwanter <at>


Bob Dowling | 1 Apr 16:31 2014

Confused by spec of numpy.linalg.solve


>>> sys.version
'3.3.2 (default, Mar  5 2014, 08:21:05) \n[GCC 4.8.2 20131212 (Red Hat

>>> numpy.__version__


I'm trying to unpick the shape requirements of numpy.linalg.solve().
The help text says:

solve(a, b) -
     a : (..., M, M) array_like
         Coefficient matrix.
     b : {(..., M,), (..., M, K)}, array_like
         Ordinate or "dependent variable" values.

It's the requirements on "b" that are giving me grief.  My read of the
help text is that "b" must have a shape with either its final axis or
its penultimate axis equal to M in size.  Which axis the matrix
contraction is along depends on the size of the final axis of "b".

So, according to my reading, if "b" has shape (6,3) then the first
choice, "(..., M,)", is invoked but if "a" has shape (3,3) and "b" has
shape (3,6) then the second choice, "(..., M, K)", is invoked.  I find
this weird, but I've dealt with (much) weirder.

However, this is not what I see.  When "b" has shape (3,6) everything
goes as expected.  When "b" has shape (6,3) I get an error message that
6 is not equal to 3:

> ValueError: solve: Operand 1 has a mismatch in its core dimension 0,
> with gufunc signature (m,m),(m,n)->(m,n) (size 6 is different from 3)

Obviously my reading is incorrect.  Can somebody elucidate for me
exactly what the requirements are on the shape of "b"?

Example code:

import numpy
import numpy.linalg

# Works.
M = numpy.array([
     [1.0,     1.0/2.0, 1.0/3.0],
     [1.0/2.0, 1.0/3.0, 1.0/4.0],
     [1.0/3.0, 1.0/4.0, 1.0/5.0]
     ] )

yy1 = numpy.array([
     [1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]
     ] )
xx1 = numpy.linalg.solve(M, yy1)

# Works too.
yy2 = numpy.array([
     [1.0, 0.0, 0.0, 1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0, 0.0, 0.0, 1.0]
     ] )
xx2 = numpy.linalg.solve(M, yy2)

# Fails.
yy3 = numpy.array([
     [1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0],
     [1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]
     ] )
xx3 = numpy.linalg.solve(M, yy3)
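A possible explanation, based on my reading of the gufunc dispatch (not authoritative): b is matched against the (..., M) form only when it has exactly one dimension fewer than a; any 2-D b against a 2-D a is matched against (M, K), so its rows (not its columns) must equal M. That is why a (6, 3) b fails while a (3, 6) b works:

```python
import numpy as np

a = np.eye(3)

# 1-D b: matched against the (M,) form.
print(np.linalg.solve(a, np.ones(3)).shape)       # (3,)

# 2-D b: matched against the (M, K) form, rows must equal M.
print(np.linalg.solve(a, np.ones((3, 6))).shape)  # (3, 6)

# A (6, 3) b is not reinterpreted as six stacked vectors; it raises
# because its first core dimension (6) differs from M (3):
# np.linalg.solve(a, np.ones((6, 3)))  # ValueError
```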