Joseph Martinot-Lagarde | 5 Sep 02:47 2014

'norm' keyword for FFT functions

I have an old PR [1] to fix #2142 [2]. The idea is to have a new keyword
for all fft functions to define the normalisation of the fft:
- if 'norm' is None (the default), the normalisation is the current one:
fft() is not normalized and ifft() is normalized by 1/n.
- if norm is "ortho", the direct and inverse transforms are both
normalized by 1/sqrt(n). The results are then unitary.

The keyword name and values are consistent with scipy.fftpack.dct.
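A sketch of the proposed behaviour, using the keyword from the PR (not yet in a released numpy at the time of writing):

```python
import numpy as np

a = np.random.rand(8)

# Current default (norm=None): round-tripping works because ifft
# carries the full 1/n factor.
assert np.allclose(np.fft.ifft(np.fft.fft(a)), a)

# Proposed norm="ortho": both directions scaled by 1/sqrt(n), so the
# transform is unitary and Parseval's theorem holds with no extra factor.
f = np.fft.fft(a, norm="ortho")
assert np.allclose(np.sum(np.abs(a) ** 2), np.sum(np.abs(f) ** 2))
assert np.allclose(np.fft.ifft(f, norm="ortho"), a)
```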

Do you feel that it should be merged?

Joseph

      [1] https://github.com/numpy/numpy/pull/3883
      [2] https://github.com/numpy/numpy/issues/2142
Eelco Hoogendoorn | 4 Sep 19:39 2014

Re: Does a `mergesorted` function make sense?


On Thu, Sep 4, 2014 at 10:31 AM, Eelco Hoogendoorn <hoogendoorn.eelco <at> gmail.com> wrote:

On Wed, Sep 3, 2014 at 6:46 PM, Jaime Fernández del Río <jaime.frio <at> gmail.com> wrote:
On Wed, Sep 3, 2014 at 9:33 AM, Jaime Fernández del Río <jaime.frio <at> gmail.com> wrote:
On Wed, Sep 3, 2014 at 6:41 AM, Eelco Hoogendoorn <hoogendoorn.eelco <at> gmail.com> wrote:
 Not sure about the hashing. Indeed one can also build an index of a set by means of a hash table, but it's questionable whether this leads to improved performance over performing an argsort. Hashing may have better asymptotic time complexity in theory, but many datasets used in practice are very easy to sort (O(N)-ish), and the time constant of hashing is higher. More importantly, using a hash table guarantees poor cache behavior for many operations using this index. By contrast, sorting may (but need not) make one random-access pass to build the index, and may (but need not) perform one random access to reorder values for grouping. But insofar as the keys are better behaved than pure random, this coherence will be exploited.

If you want to give it a try, this branch of my numpy fork has hash-table-based implementations of unique (with no extra indices) and in1d:


A use case where the hash table is clearly better:

In [1]: import numpy as np
In [2]: from numpy.lib._compiled_base import _unique, _in1d

In [3]: a = np.random.randint(10, size=(10000,))
In [4]: %timeit np.unique(a)
1000 loops, best of 3: 258 us per loop
In [5]: %timeit _unique(a)
10000 loops, best of 3: 143 us per loop
In [6]: %timeit np.sort(_unique(a))
10000 loops, best of 3: 149 us per loop

It typically performs between 1.5x and 4x faster than sorting. I haven't profiled it properly to know, but there may be quite a bit of performance still to dig out: type-specific comparison functions, tuning the starting hash table size to the size of the array to avoid reinsertions...

If getting the elements sorted is a necessity, and the array contains very few or no repeated items, then the hash table approach may even perform worse:

In [8]: a = np.random.randint(10000, size=(5000,))
In [9]: %timeit np.unique(a)
1000 loops, best of 3: 277 us per loop
In [10]: %timeit np.sort(_unique(a))
1000 loops, best of 3: 320 us per loop

But the hash table still wins in extracting the unique items only:

In [11]: %timeit _unique(a)
10000 loops, best of 3: 187 us per loop
 
Where the hash table shines is in more elaborate situations. If you keep the first index where each item was found, and the number of repeats, in the hash table, you get return_index and return_counts almost for free, which means performing an extra 3x faster than with sorting. return_inverse requires a little more trickery, so I won't attempt to quantify the improvement. But I wouldn't be surprised if, after fine tuning, there is close to an order of magnitude overall improvement.
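A pure-Python sketch of that bookkeeping (unique_with_info is purely illustrative; the actual implementation lives in C):

```python
def unique_with_info(a):
    # One pass fills value -> (first index, count), so return_index and
    # return_counts fall out of the same traversal that finds the uniques.
    table = {}
    for i, x in enumerate(a):
        if x in table:
            first, count = table[x]
            table[x] = (first, count + 1)
        else:
            table[x] = (i, 1)
    values = list(table)
    index = [table[v][0] for v in values]
    counts = [table[v][1] for v in values]
    return values, index, counts

print(unique_with_info([3, 1, 3, 2, 1, 3]))
# ([3, 1, 2], [0, 1, 3], [3, 2, 1])
```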

The speed-up for in1d is also nice:

In [16]: a = np.random.randint(1000, size=(1000,))
In [17]: b = np.random.randint(1000, size=(500,))
In [18]: %timeit np.in1d(a, b)
1000 loops, best of 3: 178 us per loop
In [19]: %timeit _in1d(a, b)
10000 loops, best of 3: 30.1 us per loop

Of course, there is no point in 

Ooops!!! Hit the send button too quick. Not to extend myself too long: if we are going to rethink all of this, we should approach it with an open mind. Still, and this post is not helping with that either, I am afraid that we are discussing implementation details while missing a broader vision of what we want to accomplish and why. That vision of what numpy's grouping functionality, if any, should be, and how it complements or conflicts with what pandas is providing, should precede anything else. I know I haven't, but has anyone looked at how pandas implements grouping? Their documentation on the subject is well worth a read:


Does numpy need to replicate this? What/why/how can we add to that?

Jaime

--
(\__/)
( O.o)
( > <) This is Bunny. Copy Bunny into your signature and help him with his plans for world domination.


I would certainly not be opposed to having a hashing-based indexing mechanism; I think it would make sense design-wise to have a HashIndex class with the same interface as the rest, and use that subclass in those arraysetops functions where it makes sense. The 'how to' of indexing and its applications are largely orthogonal, I think (with some tiny performance compromises which are worth the abstraction, imo). For datasets which are not purely random, have many unique items, and which do not fit into cache, I would expect sorting to come out on top, but indeed it depends on the dataset.


Yeah, the question of how pandas does grouping, and whether we can do better, is a relevant one.

From what I understand, pandas relies on cython extensions to get vectorized grouping functionality. This is no longer necessary since the introduction of ufuncs in numpy. I don't know how the implementations compare in terms of performance, but I doubt the difference is huge.

I personally use grouping a lot in my code, and I don't like having to use pandas for it. Most importantly, I don't want to go around creating a DataFrame for a single one-line hit-and-run association between keys and values. The permanent association of different types of data and their metadata which pandas offers is, I think, the key difference from numpy, which is all about manipulating just plain ndarrays. Arguably, grouping itself is a pretty elementary manipulation of ndarrays, and adding calls to DataFrame or Series in between a statement that could simply be group_by(keys).mean(values) feels wrong to me. As does including pandas as a dependency just to use this small piece of functionality. Grouping is a more general functionality than any particular method of organizing your data.
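To make that one-liner concrete, here is a minimal pure-numpy sketch of sort-based grouping; group_by_mean and its internals are illustrative only, not an existing numpy or pandas API:

```python
import numpy as np

def group_by_mean(keys, values):
    # Sort values by key; grouping then reduces to contiguous runs.
    order = np.argsort(keys, kind='mergesort')
    sorted_keys = keys[order]
    sorted_vals = values[order]
    # Start index of each run of equal keys.
    unique_keys, start = np.unique(sorted_keys, return_index=True)
    sums = np.add.reduceat(sorted_vals, start)
    counts = np.diff(np.append(start, len(sorted_keys)))
    return unique_keys, sums / counts

keys = np.array([2, 0, 1, 0, 2, 1])
vals = np.array([1., 2., 3., 4., 5., 6.])
print(group_by_mean(keys, vals))   # (array([0, 1, 2]), array([3. , 4.5, 3. ]))
```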

In terms of features, adding transformations and filtering might be nice too; I hadn't thought about it, but that is because, unlike the currently implemented features, the need has never arisen for me. I'm only a sample size of one, and I don't see any fundamental objection to adding such functionality though. It certainly raises the question as to where to draw the line with pandas; but my rule of thumb is that if you can think of it as an elementary operation on ndarrays, then it probably belongs in numpy.


Oh, I forgot to add: with an indexing mechanism based on sorting, unique values and counts also come 'for free', not counting the O(N) cost of actually creating those arrays. The only time an operation relying on an index incurs another nontrivial amount of overhead is when its 'rank' or 'inverse' property is used, which invokes another argsort. But for the vast majority of grouping or set operations, these properties are never used.

Neal Becker | 4 Sep 13:32 2014

SFMT (faster mersenne twister)

http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html

--
Those who don't understand recursion are doomed to repeat it
Charles R Harris | 3 Sep 23:47 2014

Give Jaime Fernandez commit rights.

Hi All,

I'd like to give Jaime commit rights. Having at least three active developers with commit rights is the goal, and Jaime has been pretty consistent with code submissions and discussion participation.

Thoughts?

Chuck
Alan G Isaac | 3 Sep 23:19 2014

odd (?) behavior: negative integer scalar in exponent

What should be the value of `2**np.int_(-32)`?
It is apparently currently computed as `1. / (2**np.int_(32))`,
so the computation overflows (when a C long is 32 bits).
I would have hoped for it to be computed as `1./(2.**np.int_(32))`.
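To make the contrast concrete, a small hedged illustration (the outcome depends on the platform's C long size and the numpy version):

```python
import numpy as np

# The hoped-for result, with the power evaluated in floating point:
print(1.0 / (2.0 ** 32))      # 2.3283064365386963e-10

# The reported behaviour: the power is first evaluated in integer
# arithmetic as 2 ** np.int_(32), which wraps when np.int_ is a 32-bit
# C long, so the reciprocal cannot recover the true value.
# (Newer numpy instead raises for negative integer powers.)
print(2 ** np.int_(-32))      # correct only where np.int_ is 64 bits
```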

Cheers,
Alan Isaac
Gabor Kovacs | 1 Sep 17:23 2014

ENH IncrementalWriter for .npy files

Dear All,

I would like to add a class for writing one (possibly big) .npy file
saving multiple (same dtype, compatible shape) arrays. My use case was
regularly saving slowly accumulating data into one file over a long
time.

Please find a first implementation under
https://github.com/numpy/numpy/pull/4987 . It currently supports only
writing a new file, and only in C order. Opening an existing file for
append and reading back parts of a very big .npy file would be
straightforward next steps for a full-featured class.

The only change to the .npy file format is that some extra space is
left so the header can be re-written later with a possibly bigger
"shape" field, respecting the 16-byte alignment.

Example:
```
A = np.array([[0, 1, 2, 3, 4, 5, 6, 7], [8, 9, 10, 11, 12, 13, 14, 15]])
with np.IncrementalWriter("testfile.npy", hdrupdate=True, flush=True) as W:
    W.save(A)
    W.save(A)
```
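Reading the result back with plain np.load should then work; a hedged sketch, assuming the writer grows the array along the first axis as the example suggests:

```python
B = np.load("testfile.npy")
assert B.shape == (4, 8)                          # two (2, 8) saves, stacked
assert (B[:2] == A).all() and (B[2:] == A).all()  # both saves round-trip
```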

Feel free to comment on this idea.

Cheers,
Gabor
Emel Hasdal | 1 Sep 10:33 2014

How to install numpy on a box without hardware FPU

Hello,

  Is it possible to configure/install numpy on a box without a hardware FPU? When I try to install it using pip,
I get a bunch of compile errors, since floating-point exceptions (FE_DIVBYZERO etc.) are undefined on this platform.

How do I get numpy installed and working on such a platform?

Thanks,
Emel
josef.pktd | 30 Aug 19:43 2014

inplace unary operations?

Is there a way to negate a boolean, or to change the sign of a float in place?
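(For context, both operations are expressible with the out= argument that numpy ufuncs take; a minimal sketch:)

```python
import numpy as np

b = np.array([True, False, True])
np.logical_not(b, out=b)       # negate the boolean array in place

x = np.array([1.0, -2.0, 3.0])
np.negative(x, out=x)          # flip the sign of the floats in place
```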


Josef
random thoughts
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion <at> scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
Benjamin Root | 30 Aug 04:10 2014

Can't seem to use np.insert() or np.append() for structured arrays

Consider the following:

a = np.array([(1, 'a'), (2, 'b'), (3, 'c')], dtype=[('foo', 'i'), ('bar', 'a1')])
b = np.append(a, (4, 'd'))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ben/miniconda/lib/python2.7/site-packages/numpy/lib/function_base.py", line 3555, in append
    return concatenate((arr, values), axis=axis)
TypeError: invalid type promotion
b = np.insert(a, 4, (4, 'd'))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ben/miniconda/lib/python2.7/site-packages/numpy/lib/function_base.py", line 3464, in insert
    new[slobj] = values
ValueError: could not convert string to float: d

In the original code snippet I was developing, which has a more involved dtype, I actually got a different exception:
b = np.append(a, c)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ben/miniconda/lib/python2.7/site-packages/numpy/lib/function_base.py", line 3553, in append
    values = ravel(values)
  File "/home/ben/miniconda/lib/python2.7/site-packages/numpy/core/fromnumeric.py", line 1367, in ravel
    return asarray(a).ravel(order)
  File "/home/ben/miniconda/lib/python2.7/site-packages/numpy/core/numeric.py", line 460, in asarray
    return array(a, dtype, copy=False, order=order)
ValueError: setting an array element with a sequence.

Luckily, this works as a work-around:
>>> b = np.append(a, np.array([(4, 'd')], dtype=a.dtype))
>>> b
array([(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')],
      dtype=[('foo', 'i'), ('bar', 'S1')])

The same happens whether I enclose the value in square brackets or not. I suspect that this array type just wasn't considered when the checking logic was developed. This is with 1.8.2 from miniconda. Should we consider this a bug, or are structured arrays just not expected to be modified like this?
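A hedged generalization of that workaround (append_records is just an illustrative helper, not a numpy API): build the new rows with the target dtype up front, so concatenate never has to promote across mismatched dtypes.

```python
import numpy as np

def append_records(arr, rows):
    # Constructing with arr.dtype sidesteps the failed type promotion.
    extra = np.array(rows, dtype=arr.dtype)
    return np.concatenate((arr, extra))

a = np.array([(1, 'a'), (2, 'b'), (3, 'c')], dtype=[('foo', 'i'), ('bar', 'a1')])
b = append_records(a, [(4, 'd'), (5, 'e')])
```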

Cheers!
Ben Root
Jaime Fernández del Río | 29 Aug 02:14 2014

PR added: frozen dimensions in gufunc signatures

Hi,

I have just sent a PR (https://github.com/numpy/numpy/pull/5015), adding the possibility of having frozen dimensions in gufunc signatures. As a proof of concept, I have added a `cross1d` gufunc to `numpy.core.umath_tests`:

In [1]: import numpy as np
In [2]: from numpy.core.umath_tests import cross1d

In [3]: cross1d.signature
Out[3]: '(3),(3)->(3)'

In [4]: a = np.random.rand(1000, 3)
In [5]: b = np.random.rand(1000, 3)

In [6]: np.allclose(np.cross(a, b), cross1d(a, b))
Out[6]: True

In [7]: %timeit np.cross(a, b)
10000 loops, best of 3: 76.1 us per loop

In [8]: %timeit cross1d(a, b)
100000 loops, best of 3: 13.1 us per loop

In [9]: c = np.random.rand(1000, 2)
In [10]: d = np.random.rand(1000, 2)

In [11]: cross1d(c, d)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-11-72c66212e40c> in <module>()
----> 1 cross1d(c, d)

ValueError: cross1d: Operand 0 has a mismatch in its core dimension 0, with gufunc signature (3),(3)->(3) (size 2 is different from 3)

The speed up over `np.cross` is nice, and while `np.cross` is not the best of examples, as it needs to handle more sizes, in many cases this will allow producing gufuncs that work without a Python wrapper redoing checks that are best left to the iterator, such as dimension sizes.

It still needs tests, but before embarking on fully developing those, I wanted to make sure that there is interest in this.

I would also like to further enhance gufuncs by providing computed dimensions, e.g. making it possible to define `pairwise_cross` with signature '(n, 3)->($m, 3)', where the $ indicates that m is a computed dimension, which would have to be calculated by a function passed to the gufunc constructor and stored in the gufunc object, based on the other core dimensions. In this case it would make $m be n*(n-1), so that all pairwise cross products between 3D vectors could be computed.
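A pure-numpy sketch of what such a gufunc would compute for a single (n, 3) input (the gufunc itself is only proposed; pairwise_cross below is illustrative):

```python
import numpy as np

def pairwise_cross(v):
    # All n*(n-1) ordered pairs with i != j, then one vectorized cross product.
    n = len(v)
    i, j = np.nonzero(~np.eye(n, dtype=bool))
    return np.cross(v[i], v[j])               # shape (n*(n-1), 3)

v = np.random.rand(4, 3)
assert pairwise_cross(v).shape == (4 * 3, 3)
```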

The syntax with '$' is kind of crappy, so any suggestions on how to better express this in the signature are more than welcome, as well as any feedback on the merits (or lack of them) of implementing this.

Jaime

--
(\__/)
( O.o)
( > <) This is Bunny. Copy Bunny into your signature and help him with his plans for world domination.
Julian Taylor | 27 Aug 19:07 2014

ANN: NumPy 1.9.0 release candidate 1 available

Hello,

Almost punctually for EuroScipy, we have finally managed to release the
first release candidate of NumPy 1.9.
We intend to only fix bugs until the final release, which we plan to do
in the next 1-2 weeks.

In this release numerous performance improvements have been added; most
significantly, the indexing code has been rewritten to be several times
faster for most cases, and the performance of using small arrays and
scalars has almost doubled.
Plenty of other functions have been improved too: nonzero, where,
count_nonzero, floating point min/max, boolean argmin/argmax,
searchsorted, triu/tril, and masked sorting can be expected to perform
significantly better in many cases.

Also, NumPy now releases the GIL for more functions: most notably,
indexing now releases it, and the random module's state object has a
private lock instead of using the GIL. This allows leveraging pure
Python threads more efficiently.

In order to make working with arrays containing NaN values easier,
nanmedian and nanpercentile have been added, which ignore these values.
These functions, and the regular median and percentile, now also support
the generalized axis arguments that ufuncs already have; these allow
reducing along multiple axes in one call.
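For illustration, a minimal example of the new NaN functions and tuple-of-axes reductions:

```python
import numpy as np

a = np.arange(24.).reshape(2, 3, 4)
a[0, 0, 0] = np.nan

print(np.median(a, axis=(0, 1)))        # NaN propagates in the plain median
print(np.nanmedian(a, axis=(0, 1)))     # NaN-ignoring counterpart
print(np.nanpercentile(a, 50, axis=(0, 1)))
```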

Please see the release notes for all the details. Please also take note
of the many small compatibility notes and deprecations listed there.
https://github.com/numpy/numpy/blob/maintenance/1.9.x/doc/release/1.9.0-notes.rst

The source tarballs and win32 binaries can be downloaded here:
https://sourceforge.net/projects/numpy/files/NumPy/1.9.0rc1

Cheers,
Julian Taylor
