subro | 14 Aug 07:01 2015

Help in understanding

Hi,

I am new to NumPy. Can someone help me understand the code below?

>>> names = np.array(['Bob', 'Joe', 'Will', 'Bob', 'Will', 'Joe', 'Joe'])

>>> data = np.random.random((7,4))

>>> print data
[[ 0.85402649  0.12827655  0.5805555   0.86288236]
 [ 0.30162683  0.45269508  0.98098039  0.1291469 ]
 [ 0.21229924  0.37497112  0.57367496  0.08607771]
 [ 0.302866    0.42160468  0.26879288  0.68032467]
 [ 0.60612492  0.35210577  0.91355096  0.57872181]
 [ 0.11583826  0.81988882  0.39214077  0.51377566]
 [ 0.03767641  0.1920532   0.24872009  0.36068313]]

>>> data[names == 'Bob']

array([[ 0.85402649,  0.12827655,  0.5805555 ,  0.86288236],
       [ 0.302866  ,  0.42160468,  0.26879288,  0.68032467]])
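
For reference, names == 'Bob' evaluates element-wise to a boolean array, and indexing data with that mask keeps exactly the rows where it is True; a minimal sketch with the arrays above:

>>> mask = names == 'Bob'
>>> mask
array([ True, False, False,  True, False, False, False], dtype=bool)
>>> data[mask].shape   # only rows 0 and 3 of data survive
(2, 4)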

Also, can someone suggest where and how I can practice NumPy?

Sebastian Berg | 13 Aug 20:34 2015

Multiarray API size mismatch 301 302?

Hey,

just for hacking/testing, I tried to add to shape.c:

/*NUMPY_API
 *
 * Checks if memory overlap exists
 */
NPY_NO_EXPORT int
PyArray_ArraysShareMemory(PyArrayObject *arr1, PyArrayObject *arr2, int work)
{
    return solve_may_share_memory(arr1, arr2, work);
}

and to numpy_api.py:

    # End 1.10 API
    'PyArray_ArraysShareMemory':            (301,),

But I am getting the error:

  File "numpy/core/code_generators/generate_numpy_api.py", line 230, in
do_generate_api
    (len(multiarray_api_dict), len(multiarray_api_index)))
AssertionError: Multiarray API size mismatch 301 302

It's puzzling me; does anyone have a quick idea?
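
One guess: two entries in numpy_api.py ended up claiming the same index, so the name-keyed dict has one more entry than there are unique indices. A minimal sketch for hunting that down (the helper is made up; only the name -> (index, ...) tuple layout is taken from numpy_api.py):

from collections import Counter

def find_duplicate_indices(api_dict):
    # api_dict maps function name -> tuple whose first element is
    # the API index, as in the dicts in numpy_api.py.
    counts = Counter(entry[0] for entry in api_dict.values())
    return dict((idx, n) for idx, n in counts.items() if n > 1)

# A deliberate collision on 301 reproduces exactly this off-by-one:
print(find_duplicate_indices({
    'PyArray_ArraysShareMemory': (301,),
    'PyArray_SomeExistingFunc':  (301,),   # hypothetical entry
}))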

- Sebastian

Nathan Goldbaum | 12 Aug 23:03 2015

Changes to np.digitize since NumPy 1.9?

Hi all,

I've been testing the package I spend most of my time on, yt, under numpy 1.10b1 since the announcement went out.

I think I've narrowed down and fixed all of the test failures that cropped up except for one last issue. It seems that the behavior of np.digitize with respect to ndarray subclasses has changed since the NumPy 1.9 series. Consider the following test script:

```python
import numpy as np


class MyArray(np.ndarray):
    def __new__(cls, *args, **kwargs):
        return np.ndarray.__new__(cls, *args, **kwargs)

data = np.arange(100)

bins = np.arange(100) + 0.5

data = data.view(MyArray)

bins = bins.view(MyArray)

digits = np.digitize(data, bins)

print type(digits)
```

Under NumPy 1.9.2, this prints "<type 'numpy.ndarray'>", but under the 1.10 beta it prints "<class '__main__.MyArray'>".

I'm curious why this change was made. Since digitize outputs index arrays, it doesn't make sense to me for it to return anything but a plain ndarray. I see in the release notes that digitize now uses searchsorted under the hood. Is this related?

We can "fix" this in our codebase by wrapping digitize or by adding numpy version checks in places where the output type matters. Is it also possible for me to customize the return type here by exploiting the ufunc machinery and the __array_wrap__ and __array_finalize__ functions?

Thanks for any help or advice you might have,

Nathan


Ralf Gommers | 12 Aug 19:35 2015

Re: Problems using add_npy_pkg_config



On Wed, Aug 12, 2015 at 7:23 PM, Charles R Harris <charlesr.harris <at> gmail.com> wrote:

> On Wed, Aug 12, 2015 at 10:50 AM, Ralf Gommers <ralf.gommers <at> gmail.com> wrote:
>
>> On Wed, Aug 12, 2015 at 6:23 PM, Christian Engwer <christian.engwer <at> uni-muenster.de> wrote:
>>
>>> Dear all,
>>>
>>> I'm trying to use the numpy distutils to install native C
>>> libraries. These are part of a larger project and should be usable
>>> standalone. I managed to install headers and libs, but now I
>>> experience problems writing the corresponding pkg file. I first tried
>>> to do the trick without numpy, but getting all the paths right in all
>>> the different setups is really a mess.
>>
>> This doesn't answer your question but: why? If you're not distributing a Python project, there is no reason to use distutils instead of a sane build system.
>
> Believe it or not, distutils *is* one of the saner build systems when you want something cross platform. Sad, isn't it...

Come on. We don't take it seriously, and neither do the Python core devs. It's also pretty much completely unsupported. Numpy.distutils is a bit better in that respect than Python distutils, which doesn't even get sane patches merged.

Try Scons, Tup, Gradle, Shake, Waf or anything else that's at least somewhat modern and supported. Do not use numpy.distutils unless there's no other mature choice (i.e. you're developing a Python project).

Ralf


Christian Engwer | 12 Aug 18:23 2015

Problems using add_npy_pkg_config

Dear all,

I'm trying to use the numpy distutils to install native C
libraries. These are part of a larger project and should be usable
standalone. I managed to install headers and libs, but now I
experience problems writing the corresponding pkg file. I first tried
to do the trick without numpy, but getting all the paths right in all
the different setups is really a mess.

Please find a m.w.e. (minimal working example) attached to this mail. It consists of foo.c,
foo.ini.in and setup.py.

I'm sure I missed some important part, but somehow the distribution
variable in build_src seems to be uninitialized. Calling

    python setup.py install --prefix=/tmp/foo.inst

fails with:

  File "/usr/lib/python2.7/dist-packages/numpy/distutils/command/build_src.py", line 257, in build_npy_pkg_config
    pkg_path = self.distribution.package_dir[pkg]
  TypeError: 'NoneType' object has no attribute '__getitem__'

I also tried to adapt parts of the numpy setup, but these use
sub-modules, which I don't need... might this be the cause of my
problems?
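
For what it's worth, here is the minimal setup.py shape I would expect to work, under the (unverified) assumption that build_npy_pkg_config needs a named package via Configuration so that package_dir gets populated; the 'foo' names and the 'lib' install dir are placeholders, a sketch rather than a confirmed fix:

from numpy.distutils.misc_util import Configuration
from numpy.distutils.core import setup

def configuration(parent_package='', top_path=None):
    config = Configuration('foo', parent_package, top_path)
    # add_npy_pkg_config(template, install_dir, subst_dict) substitutes
    # the @...@ placeholders in foo.ini.in and installs foo.ini.
    config.add_npy_pkg_config('foo.ini.in', 'lib', {'foo': 'foo'})
    return config

setup(configuration=configuration)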

Any help is highly appreciated ;-)

Cheers
Christian
Attachment (foo.c): text/x-csrc, 25 bytes
Attachment (foo.ini.in):
[meta]
Name=@foo@
Version=1.0
Description=dummy description

[default]
Cflags=-I@prefix@/include
Libs=
Attachment (setup.py): text/x-python, 525 bytes
Casey Deen | 12 Aug 18:12 2015

f2py and callbacks with variables

Hi all-

   I've run into what I think might be a bug in f2py and callbacks to
python.  Or, maybe I'm not using things correctly.  I have created a
very minimal example which illustrates my problem at:

https://github.com/soylentdeen/fluffy-kumquat

The issue seems to affect callbacks with variables, but only when they
are called indirectly (i.e. from other Fortran routines).  For example,
if I have a Python function

def show_number(n):
    print("%d" % n)

and I set up a callback in a Fortran routine:

      subroutine cb
cf2py intent(callback, hide) blah
      external blah
      call blah(5)
      end

and connect it to the python routine
fortranObject.blah = show_number

I can successfully call the cb routine from python:

>>> fortranObject.cb()
5

However, if I call the cb routine from within another fortran routine,
it seems to lose its marbles

      subroutine no_cb
      call cb
      end

capi_return is NULL
Call-back cb_blah_in_cb__user__routines failed.

For more information, please have a look at the github repository.  I've
reproduced the behavior on both linux and mac.  I'm not sure if this is
an error in the way I'm using the code, or if it is an actual bug.  Any
and all help would be very much appreciated.

Cheers,
Casey

--
Dr. Casey Deen
Post-doctoral Researcher
deen <at> mpia.de                       +49-6221-528-375
Max Planck Institut für Astronomie (MPIA)
Königstuhl 17  D-69117 Heidelberg, Germany
Klemm, Michael | 11 Aug 16:37 2015

ANN: pyMIC v0.6 Released

Announcement: pyMIC v0.6
=========================

I'm happy to announce the release of pyMIC v0.6.

pyMIC is a Python module to offload computation in a Python program to the Intel Xeon Phi coprocessor.  It
contains offloadable arrays and device management functions.  It supports invocation of native kernels
(C/C++, Fortran) and blends in with Numpy's array types for float, complex, and int data types.

For more information and downloads please visit pyMIC's Github page: https://github.com/01org/pyMIC. 
You can find pyMIC's mailinglist at https://lists.01.org/mailman/listinfo/pymic.

Full change log:
=================

Version 0.6
----------------------------
- Experimental support for the Windows operating system.
- Switched to Cython to generate the glue code for pyMIC.
- Now using Markdown for README and CHANGELOG.
- Introduced PYMIC_DEBUG=3 to trace argument passing for kernels.
- Bugfix: added back the translate_device_pointer() function.
- Bugfix: example SVD now respects order of the passed matrices when applying the `dgemm` routine.
- Bugfix: fixed memory leak when invoking kernels.
- Bugfix: fixed broken translation of fake pointers.
- Refactoring: simplified bridge between pyMIC and LIBXSTREAM.

Version 0.5
----------------------------
- Introduced new kernel API that avoids insane pointer unpacking.
- pyMIC now uses libxstreams as the offload back-end
  (https://github.com/hfp/libxstream).
- Added smart pointers to make handling of fake pointers easier.

Version 0.4
----------------------------
- New low-level API to allocate, deallocate, and transfer data
  (see OffloadStream).
- Support for in-place binary operators.
- New internal design to handle offloads.

Version 0.3
----------------------------
- Improved handling of libraries and kernel invocation.
- Trace collection (PYMIC_TRACE=1, PYMIC_TRACE_STACKS={none,compact,full}).
- Replaced the device-centric API with a stream API.
- Refactoring to better match PEP8 recommendations.
- Added support for int(int64) and complex(complex128) data types.
- Reworked the benchmarks and examples to fit the new API.
- Bugfix: fixed syntax errors in OffloadArray.

Version 0.2
----------------------------
- Small improvements to the README files.
- New example: Singular Value Decomposition.
- Some documentation for the API functions.
- Added a basic testsuite for unit testing (WIP).
- Bugfix: benchmarks now use the latest interface.
- Bugfix: numpy.ndarray does not offer an attribute 'order'.
- Bugfix: number_of_devices was not visible after import.
- Bugfix: member offload_array.device is now initialized.
- Bugfix: use exception for errors w/ invoke_kernel & load_library.

Version 0.1
----------------------------
Initial release.

Dr.-Ing. Michael Klemm
Senior Application Engineer
Software and Services Group
Developer Relations Division
Phone	+49 89 9914 2340
Cell	+49 174 2417583

Intel Deutschland GmbH
Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany
Tel: +49 89 99 8853-0, www.intel.de
Managing Directors: Christin Eisenschmid, Prof. Dr. Hermann Eul
Chairperson of the Supervisory Board: Tiffany Doon Silva
Registered Office: Munich
Commercial Register: Amtsgericht Muenchen HRB 186928
Pieter Eendebak | 11 Aug 10:36 2015

overhead in np.matrix


The overhead of the np.matrix class is quite high for small matrices. See for example the following code:

import time
import math
import numpy as np

def rot2D(phi):
    c = math.cos(phi)
    return np.matrix(c)

_b = np.matrix(np.zeros((1,)))
def rot2Dx(phi):
    r = _b.copy()
    c = math.cos(phi)
    r.itemset(0, c)
    return r

phi=.023

%timeit rot2D(phi)
%timeit rot2Dx(phi)

The second implementation performs much better by using a copy instead of a constructor. Is there a way to efficiently create a new np.matrix object? For other functions in my code I do not have the option of copying an existing matrix; I need to construct a new object or perform a cast from np.array to np.matrix.
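
One more option that might be worth timing (a sketch, not a verified speedup): build a plain ndarray and reinterpret it with .view(np.matrix), which bypasses most of the work in the np.matrix constructor:

import math
import numpy as np

def rot2D_view(phi):
    # Assemble an ordinary 2x2 rotation array, then view-cast it;
    # view() reinterprets the same buffer as np.matrix without copying.
    c, s = math.cos(phi), math.sin(phi)
    return np.array([[c, -s], [s, c]]).view(np.matrix)

Whether this beats the copy trick above is something %timeit would have to settle.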

I am already aware of two alternatives:

- Using the new matrix multiplication operator (https://www.python.org/dev/peps/pep-0465/). This is a good solution, but Python 3.5 only.
- Using the .dot method of np.array. This works, but personally I like the np.matrix notation much better.

I also created an issue on github: https://github.com/numpy/numpy/issues/6186

With kind regards,
Pieter Eendebak


Charles R Harris | 11 Aug 00:34 2015

mingw32 and numpy 1.10

Mingw32 will not compile current numpy due to initialization of a static structure slot with a Python C-API function. The function is not considered a constant expression by the old gcc in mingw32. Compilation does work with more recent compilers; evidently the meaning of "constant expression" is up to the vendor.

So, this is fixable if we initialize the slot with 0, but that loses some precision/functionality. The question is, do we want to support mingw32, and numpy-vendor as well, for numpy 1.10.0? I think the answer is probably "yes", but we may want to reconsider for numpy 1.11, when we may want to use Carl's mingw64 toolchain instead.

Chuck 
Benjamin Root | 10 Aug 18:09 2015

np.in1d() & sets, bug?

Just came across this one today:

>>> np.in1d([1], set([0, 1, 2]), assume_unique=True)
array([ False], dtype=bool)
>>> np.in1d([1], [0, 1, 2], assume_unique=True)
array([ True], dtype=bool)

I am assuming this has something to do with the fact that order is not guaranteed with set() objects? I was kind of hoping that setting "assume_unique=True" would be sufficient to overcome that problem. Should sets be rejected with an error?

This was using v1.9.0
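
A workaround sketch, assuming the culprit is that np.asarray(set(...)) produces a 0-d object array holding the whole set rather than an array of its elements: materialize the set first.

>>> np.asarray(set([0, 1, 2])).shape   # 0-d object array, nothing to match against
()
>>> np.in1d([1], list(set([0, 1, 2])), assume_unique=True)
array([ True], dtype=bool)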

Cheers!
Ben Root
Allan Haldane | 7 Aug 04:59 2015

improving structured array assignment

Hello all,

I've written up a tentative PR which tidies up structured array assignment,

https://github.com/numpy/numpy/pull/6053

It has a backward incompatible change which I'd especially like to get 
some feedback on: Structure assignment now always works "by field 
position" instead of "by field name". Consider the following assignment:

   >>> v1 = np.array([(1,2,3)],
   ...               dtype=[('a', 'i4'), ('b', 'i4'), ('c', 'i4')])
   >>> v2 = np.array([(4,5,6)],
   ...               dtype=[('b', 'i4'), ('a', 'i4'), ('c', 'i4')])
   >>> v1[:] = v2

Previously, v1 would be set to "(5,4,6)" but with the PR it is set to 
"(4,5,6)".

This might seem like a negligible improvement, but assignment "by field 
name" has lots of inconsistent/broken edge cases which I've listed in 
the PR, and which disappear with assignment "by field position". The PR 
doesn't seem to break much of anything in scipy, pandas, or astropy.

If possible, I'd like to try getting a deprecation warning for this 
change into 1.10.

I also changed a few more minor things about structure assignment, 
expanded the docs on structured arrays, and made a multi-field index 
(arr[['f1', 'f0']]) return a view instead of a copy, which had been 
planned for 1.10 but didn't get in because of the strange behavior of 
structure assignment.
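
As a sketch of what view semantics would mean in practice (the propagation below assumes the PR is applied; on released NumPy the multi-field index returns a copy and the write is simply lost):

   >>> arr = np.array([(1, 2.0)], dtype=[('f0', 'i4'), ('f1', 'f8')])
   >>> sub = arr[['f1', 'f0']]    # with the PR: a view onto arr's data
   >>> sub['f0'] = 99             # so the write shows up in arr itself
   >>> arr['f0']
   array([99], dtype=int32)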

Allan
