Nathaniel Smith | 6 Jul 18:26 2016

Re: Added atleast_nd, request for clarification/cleanup of atleast_3d

On Jul 5, 2016 11:21 PM, "Ralf Gommers" <ralf.gommers <at>> wrote:
> On Wed, Jul 6, 2016 at 7:06 AM, Nathaniel Smith <njs <at>> wrote:
>> On Jul 5, 2016 9:09 PM, "Joseph Fox-Rabinovitz" <jfoxrabinovitz <at>> wrote:
>> >
>> > Hi,
>> >
>> > I have generalized np.atleast_1d, np.atleast_2d, np.atleast_3d with a
>> > function np.atleast_nd in PR#7804.
>> >
>> > As a result of this PR, I have a couple of questions about
>> > `np.atleast_3d`. `np.atleast_3d` appears to do something weird with
>> > the dimensions: If the input is 1D, it prepends and appends a size-1
>> > dimension. If the input is 2D, it appends a size-1 dimension. This is
>> > inconsistent with `np.atleast_2d`, which always prepends (as does
>> > `np.atleast_nd`).
>> >
>> >   - Is there any reason for this behavior?
>> >   - Can it be cleaned up (e.g., by reimplementing `np.atleast_3d` in
>> > terms of `np.atleast_nd`, which is actually much simpler)? This would
>> > be a slight API change since the output would not be exactly the same.
>> Changing atleast_3d seems likely to break a bunch of stuff...
>> Beyond that, I find it hard to have an opinion about the best design for these functions, because I don't think I've ever encountered a situation where they were actually what I wanted. I'm not a big fan of coercing dimensions in the first place, for the usual "refuse to guess" reasons. And then generally if I do want to coerce an array to another dimension, then I have some opinion about where the new dimensions should go, and/or I have some opinion about the minimum acceptable starting dimension, and/or I have a maximum dimension in mind. (E.g. "coerce 1d inputs into a column matrix; 0d or 3d inputs are an error" -- atleast_2d is zero-for-three on that requirements list.)
>> I don't know how typical I am in this. But it does make me wonder if the atleast_* functions act as an attractive nuisance, where new users take their presence as an implicit recommendation that they are actually a useful thing to reach for, even though they... aren't that. And maybe we should be recommending folk move away from them rather than trying to extend them further?
>> Or maybe they're totally useful and I'm just missing it. What's your use case that motivates atleast_nd?
> I think you're just missing it :). atleast_1d/2d are used quite a bit in Scipy and Statsmodels (those are the only ones I checked), and in the large majority of cases it's the best thing to use there. There's a bunch of atleast_2d calls with a transpose appended because the input needs to be treated as columns instead of rows, but that's still efficient and readable enough.

I know people *use* it :-). What I'm confused about is in what situations you would invent it if it didn't exist. Can you point me to an example or two where it's "the best thing"? I actually had statsmodels in mind with my example of wanting the semantics "coerce 1d inputs into a column matrix; 0d or 3d inputs are an error". I'm surprised if there are places where you really want 0d arrays converted into 1x1, or want to allow high dimensional arrays to pass through - and if you do want to allow high dimensional arrays to pass through, then transposing might help with 2d cases but will silently mangle high-d cases, right?
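For illustration (this helper is hypothetical, not an existing NumPy function), the "coerce 1d inputs into a column matrix; reject the rest" semantics sketched above might look like:

```python
import numpy as np

def as_column(a):
    # Hypothetical helper with the semantics described above:
    # 1-D input becomes a column matrix, 2-D passes through,
    # and anything else is an error ("refuse to guess").
    a = np.asarray(a)
    if a.ndim == 1:
        return a[:, np.newaxis]
    if a.ndim == 2:
        return a
    raise ValueError("expected 1-D or 2-D input, got %d-D" % a.ndim)
```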


NumPy-Discussion mailing list
NumPy-Discussion <at>
Joseph Fox-Rabinovitz | 6 Jul 06:09 2016

Added atleast_nd, request for clarification/cleanup of atleast_3d


I have generalized np.atleast_1d, np.atleast_2d, np.atleast_3d with a
function np.atleast_nd in PR#7804.

As a result of this PR, I have a couple of questions about
`np.atleast_3d`. `np.atleast_3d` appears to do something weird with
the dimensions: If the input is 1D, it prepends and appends a size-1
dimension. If the input is 2D, it appends a size-1 dimension. This is
inconsistent with `np.atleast_2d`, which always prepends (as does
`np.atleast_nd`).

  - Is there any reason for this behavior?
  - Can it be cleaned up (e.g., by reimplementing `np.atleast_3d` in
terms of `np.atleast_nd`, which is actually much simpler)? This would
be a slight API change since the output would not be exactly the same.
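The difference in axis placement is easy to see from the shapes (current NumPy behavior):

```python
import numpy as np

a1 = np.ones(3)        # 1-D input
a2 = np.ones((2, 3))   # 2-D input

# atleast_2d always prepends the new axis:
print(np.atleast_2d(a1).shape)   # (1, 3)

# atleast_3d pads 1-D input on both sides ...
print(np.atleast_3d(a1).shape)   # (1, 3, 1)
# ... and appends a trailing axis for 2-D input:
print(np.atleast_3d(a2).shape)   # (2, 3, 1)
```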


klo uo | 5 Jul 21:18 2016

f2py output module name

Hi, I'm following this guide:

I'm on Windows with gfortran and VS2015. When I run:

    f2py -c -m fib3 fib3.f

as output I don't get "fib3.pyd", but "fib3.cp35-win_amd64.pyd".

Does anyone know how to get a correctly named module in this case?
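(Not part of the original question, but for context:) the extra `cp35-win_amd64` part is the platform/ABI tag from PEP 3149, and a plain `import fib3` still works, because the interpreter matches any of its registered extension-module suffixes:

```python
import importlib.machinery

# Suffixes Python tries when looking for extension modules; a file named
# "fib3.cp35-win_amd64.pyd" (or "" on Linux)
# is found by a plain "import fib3".
print(importlib.machinery.EXTENSION_SUFFIXES)
```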

Jaime Fernández del Río | 5 Jul 10:22 2016

numpy.dtype has the wrong size, try recompiling

This question on Stack Overflow:

If I remember correctly this has something to do with Cython, right? Can't remember the details though; can someone help that poor soul out?


( O.o)
( > <) This is Conejo. Copy Conejo into your signature and help him with his plans for world domination.
Tom Kooij | 4 Jul 22:25 2016

ANN: PyTables 3.2.3 released.

 Announcing PyTables 3.2.3 

We are happy to announce PyTables 3.2.3. 

What's new 

This is a bug fix release. It solves many issues reported in the
months since the release of 3.2.2.

In case you want to know more in detail what has changed in this 
version, please refer to: 

For an online version of the manual, visit: 

What is it? 

PyTables is a library for managing hierarchical datasets, designed to 
efficiently cope with extremely large amounts of data, with support for 
full 64-bit file addressing.  PyTables runs on top of the HDF5 library 
and the NumPy package to achieve maximum throughput and convenient use. 
PyTables includes OPSI, a new indexing technology that allows performing 
data lookups in tables exceeding 10 gigarows (10**10 rows) in less than 
a tenth of a second. 


About PyTables: 

About the HDF5 library: 


Thanks to the many users who provided feature improvements, patches, bug 
reports, support and suggestions.  See the ``THANKS`` file in the 
distribution package for an (incomplete) list of contributors.  Most 
especially, a lot of kudos go to the HDF5 and NumPy makers. 
Without them, PyTables simply would not exist. 

Share your experience 

Let us know of any bugs, suggestions, gripes, kudos, etc. you may have. 


  **Enjoy data!** 

  -- The PyTables Developers

Skip Montanaro | 3 Jul 04:08 2016

Picking rows with the first (or last) occurrence of each key

(I'm probably going to botch the description...)

Suppose I have a 2D array of Python objects, the first n elements of each
row form a key, the rest of the elements form the value. Each key can (and
generally does) occur multiple times. I'd like to generate a new array
consisting of just the first (or last) row for each key occurrence. Rows
retain their relative order on output.

For example, suppose I have this array with key length 2:

[ 'a', 27, 14.5 ]
[ 'b', 12, 99.0 ]
[ 'a', 27, 15.7 ]
[ 'a', 17, 100.3 ]
[ 'b', 12, -329.0 ]

Selecting the first occurrence of each key would return this array:

[ 'a', 27, 14.5 ]
[ 'b', 12, 99.0 ]
[ 'a', 17, 100.3 ]

while selecting the last occurrence would return this array:

[ 'a', 27, 15.7 ]
[ 'a', 17, 100.3 ]
[ 'b', 12, -329.0 ]

In real life, my array is a bit larger than this example, with the input
being on the order of a million rows, and the output being around 5000
rows. Avoiding processing all those extra rows at the Python level would
speed things up.

I don't know what this filter might be called (though I'm sure I haven't
thought of something new), so searching Google or Bing for it would seem to
be fruitless. It strikes me as something which numpy or Pandas might already
have in their bag(s) of tricks.
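One ready-made option (an editorial sketch, not from the original message) is pandas' `drop_duplicates`, which keeps the first or last row per key and preserves the relative order of the surviving rows:

```python
import numpy as np
import pandas as pd

# The example array from above, with key length 2.
rows = np.array([
    ['a', 27, 14.5],
    ['b', 12, 99.0],
    ['a', 27, 15.7],
    ['a', 17, 100.3],
    ['b', 12, -329.0],
], dtype=object)

df = pd.DataFrame(rows)
key_cols = [0, 1]  # the first n columns form the key

# Keep the first (or last) occurrence of each key; row order is preserved.
first = df.drop_duplicates(subset=key_cols, keep='first').to_numpy()
last = df.drop_duplicates(subset=key_cols, keep='last').to_numpy()
```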

Pointers appreciated,

Leon Woo | 2 Jul 14:02 2016

AUTO: Leon Woo is out of the office (returning 11/07/2016)

I am out of the office until 11/07/2016.

For standard requests within the scope of EMG PWM Berlin, please write to EMG PWM Berlin <at> DBEMEA.
For non-standard requests, please cc Hien Pham-Thu.

Note: This is an automated response to your message  "NumPy-Discussion Digest, Vol 118, Issue 2" sent on 02.07.2016 14:00:01.

This is the only notification you will receive while this person is away.


Please refer to for information (including mandatory corporate particulars) on selected Deutsche Bank branches and group companies registered or incorporated in the European Union. This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and delete this e-mail. Any unauthorized copying, disclosure or distribution of the material in this e-mail is strictly forbidden.

Michael Ward | 28 Jun 22:36 2016

Is numpy.test() supposed to be multithreaded?

Heya, I'm not a numbers guy, but I maintain servers for scientists and 
researchers who are.  Someone pointed out that our numpy installation on 
a particular server was only using one core.  I'm unaware of who 
installed the previous version of numpy/OpenBLAS, or how, so I installed 
them from scratch, and confirmed that the user's test code now runs on 
multiple cores as expected, drastically improving performance.

Now the user is writing back to say, "my test code is fast now, but 
numpy.test() is still about three times slower than <some other server 
we don't manage>".  When I watch htop as numpy.test() executes, sure 
enough, it's using one core.  Now I'm not sure if that's the expected 
behavior or not.  Questions:

* if numpy.test() is supposed to be using multiple cores, why isn't it, 
when we've established with other test code that it's now using multiple 
cores?

* if numpy.test() is not supposed to be using multiple cores, what could 
be the reason that the performance is drastically slower than another 
server with a comparable CPU, when the user's test code performs 
comparably?
For what it's worth, the user's "test" code which does run on multiple 
cores is as simple as:

a = np.random.random_sample((size, size))
b = np.random.random_sample((size, size))
x =, b)

Whereas this uses only one core:

numpy.test()

OpenBLAS 0.2.18 was basically just compiled with "make", nothing special 
to it.  Numpy 1.11.0 was installed from source (python 
install), using a site.cfg file to point numpy to the new OpenBLAS.
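(An editorial aside:) one quick sanity check for this kind of setup is to print the BLAS/LAPACK configuration that numpy recorded at build time:

```python
import numpy as np

# Prints the BLAS/LAPACK libraries and paths recorded when numpy was
# built; an OpenBLAS-linked build should mention openblas here.
np.__config__.show()
```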

Bryan Van de Ven | 28 Jun 20:32 2016

ANN: Bokeh 0.12 Released

Hi all,

On behalf of the Bokeh team, I am pleased to announce the release of version 0.12.0 of Bokeh!

This release was a major update, focused on areas of layout and styling, new
JavaScript APIs for BokehJS, and improvements to the Bokeh Server. But there
were many additional improvements in other areas as well. Rather than try to
describe all the changes here, I encourage everyone to check out the new
project blog:

which has details as well as live demonstrations. And as always, see the CHANGELOG and Release Notes for
full details.

If you are using Anaconda/miniconda, you can install it with conda:

	conda install bokeh

Alternatively, you can also install it with pip:

	pip install bokeh

Full information including details about how to use and obtain BokehJS are at:

Issues, enhancement requests, and pull requests can be made on the Bokeh Github page:

Documentation is available at

Questions can be directed to the Bokeh mailing list: bokeh <at> or the Gitter Chat room:


Bryan Van de Ven
Continuum Analytics
Matthew Brett | 28 Jun 05:46 2016

Accelerate or OpenBLAS for numpy / scipy wheels?


I just succeeded in getting an automated dual arch build of numpy and
scipy, using OpenBLAS.  See the last three build jobs in these two
build matrices:

Tests are passing on 32 and 64-bit.

I didn't upload these to the usual Rackspace container, to avoid confusion.

So, I guess the question now is - should we switch to shipping
OpenBLAS wheels for the next release of numpy and scipy?  Or should we
stick with the Accelerate framework that comes with OSX?

In favor of the Accelerate build: it's faster to build, and it's what
we've been doing thus far.

In favor of the OpenBLAS build: it allows us to commit to one BLAS /
LAPACK library cross-platform, once we have the Windows builds working.
Faster to fix bugs, with good support from the main developer.  No
multiprocessing crashes for Python 2.7.

Any thoughts?


Charles R Harris | 26 Jun 18:36 2016

Numpy 1.11.1 release

Hi All,

I'm pleased to announce the release of Numpy 1.11.1. This release supports Python 2.6 - 2.7 and 3.2 - 3.5, and fixes bugs and regressions found in Numpy 1.11.0, as well as making several build-related improvements.  Wheels for Linux, Windows, and OSX can be found on PyPI. Sources are available on both PyPI and Sourceforge.

Thanks to all who were involved in this release, and a special thanks to Matthew Brett for his work on the Linux and Windows wheel infrastructure.

The following pull requests have been merged:

  • 7506 BUG: Make sure numpy imports on python 2.6 when nose is unavailable.
  • 7530 BUG: Floating exception with invalid axis in np.lexsort.
  • 7535 BUG: Extend glibc complex trig functions blacklist to glibc < 2.18.
  • 7551 BUG: Allow graceful recovery for no compiler.
  • 7558 BUG: Constant padding expected wrong type in constant_values.
  • 7578 BUG: Fix OverflowError in Python 3.x. in swig interface.
  • 7590 BLD: Fix configparser.InterpolationSyntaxError.
  • 7597 BUG: Make work on scalars.
  • 7608 BUG: linalg.norm(): Don't convert object arrays to float.
  • 7638 BLD: Correct C compiler customization in
  • 7654 BUG: ma.median of 1d array should return a scalar.
  • 7656 BLD: Remove hardcoded Intel compiler flag -xSSE4.2.
  • 7660 BUG: Temporary fix for str(mvoid) for object field types.
  • 7665 BUG: Fix incorrect printing of 1D masked arrays.
  • 7670 BUG: Correct initial index estimate in histogram.
  • 7671 BUG: Boolean assignment no GIL release when transfer needs API.
  • 7676 BUG: Fix handling of right edge of final histogram bin.
  • 7680 BUG: Fix np.clip bug NaN handling for Visual Studio 2015.
  • 7724 BUG: Fix segfaults in np.random.shuffle.
  • 7731 MAINT: Change mkl_info.dir_env_var from MKL to MKLROOT.
  • 7737 BUG: Fix issue on OS X with Python 3.x, npymath.ini not installed.

The following developers contributed to this release, developers marked with a '+' are first time contributors.

  • Allan Haldane
  • Amit Aronovitch+
  • Andrei Kucharavy+
  • Charles Harris
  • Eric Wieser+
  • Evgeni Burovski
  • Loïc Estève+
  • Mathieu Lamarre+
  • Matthew Brett
  • Matthias Geier
  • Nathaniel J. Smith
  • Nikola Forró+
  • Ralf Gommers
  • Ray Donnelly+
  • Robert Kern
  • Sebastian Berg
  • Simon Conseil
  • Simon Gibbons
  • Sorin Sbarnea+
