josef.pktd | 1 Jun 17:54 2015

checking S versus U dtype

What's the best way to check whether a numpy array is string or bytes on python3?

using char?


>>> A = np.asarray([[1, 0, 0],['E', 1, 0],['E', 'E', 1]], dtype='<U1')
>>> A
array([['1', '0', '0'],
       ['E', '1', '0'],
       ['E', 'E', '1']], 
      dtype='<U1')
>>> A.dtype
dtype('<U1')
>>> A.dtype.char
'U'
>>> A.dtype.char == 'U'
True
>>> A.dtype.char == 'S'
False
>>> A.astype('<S1').dtype.char == 'S'
True
>>> A.astype('<S1').dtype.char == 'U'
False
>>> 

background: 
I don't know why sometimes I got S and sometimes U on Python 3.4, and I want the code to work with both

>>> A == 'E'
array([[False, False, False],
       [ True, False, False],
       [ True,  True, False]], dtype=bool)

>>> A.astype('<S1') == 'E'
False
>>> A.astype('<S1') == b'E'
array([[False, False, False],
       [ True, False, False],
       [ True,  True, False]], dtype=bool)
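
A compact check that covers both cases is `dtype.kind`, which is 'U' for str (unicode) arrays and 'S' for bytes arrays, mirroring the `dtype.char` approach above. As a sketch, a small helper (`is_string_like` and `equals` are hypothetical names, not numpy API) that also normalizes the comparison value:

```python
import numpy as np

A = np.asarray([['1', '0'], ['E', '1']], dtype='<U1')

def is_string_like(arr):
    # dtype.kind is 'U' for str (unicode) arrays and 'S' for bytes arrays.
    return arr.dtype.kind in ('U', 'S')

def equals(arr, value):
    # Encode a str comparison value for bytes ('S') arrays so the
    # element-wise == works whether the array is S or U.
    if arr.dtype.kind == 'S' and isinstance(value, str):
        value = value.encode()
    return arr == value
```

With this, `equals(A, 'E')` and `equals(A.astype('<S1'), 'E')` both return the expected boolean array instead of a scalar `False`.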


Josef
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion <at> scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
Charles R Harris | 31 May 00:23 2015

matmul needs some clarification.

Hi All,

The problem arises when multiplying a stack of matrices by a vector. PEP 465 defines this as appending a '1' to the dimensions of the vector, doing the defined stacked matrix multiply, then removing the last dimension from the result. Note that in the middle step we have a stack of matrices, and after removing the last dimension we will still have a stack of matrices. What we want is a stack of vectors, but we can't have those with our conventions. This makes the result somewhat unexpected. How should we resolve this?
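
For concreteness, a small sketch of the shapes involved, following the PEP 465 rule described above:

```python
import numpy as np

stack = np.ones((5, 3, 3))   # a stack of five 3x3 matrices
vec = np.ones(3)             # a single vector

# PEP 465: vec is treated as shape (3, 1), giving a (5, 3, 1) stacked
# result, then the appended dimension is removed, leaving shape (5, 3).
out = np.matmul(stack, vec)
print(out.shape)  # (5, 3)
```

The (5, 3) result is then indistinguishable from a single 5x3 matrix, which is the ambiguity in question.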

Chuck
Robert Cimrman | 29 May 17:24 2015

ANN: SfePy 2015.2

I am pleased to announce release 2015.2 of SfePy.

Description
-----------

SfePy (simple finite elements in Python) is software for solving systems of
coupled partial differential equations by the finite element method or by
isogeometric analysis (preliminary support). It is distributed under the new
BSD license.

Home page: http://sfepy.org
Mailing list: http://groups.google.com/group/sfepy-devel
Git (source) repository, issue tracker, wiki: http://github.com/sfepy

Highlights of this release
--------------------------

- major code simplification (removed element groups)
- time stepping solvers updated for interactive use
- improved finding of reference element coordinates of physical points
- reorganized examples
- reorganized installation on POSIX systems (sfepy-run script)

For full release notes see http://docs.sfepy.org/doc/release_notes.html#id1
(rather long and technical).

Best regards,
Robert Cimrman and Contributors (*)

(*) Contributors to this release (alphabetical order):

Lubos Kejzlar, Vladimir Lukes, Anton Gladky, Matyas Novak
Julian Taylor | 28 May 14:46 2015

Verify your sourceforge windows installer downloads

hi,
It has been reported that SourceForge has taken over the unofficial GIMP
Windows download page and temporarily bundled the installer with
unauthorized adware:
https://plus.google.com/+gimp/posts/cxhB1PScFpe

As NumPy is also distributing windows installers via sourceforge I
recommend that when you download the files you verify the downloads
via the checksums in the README.txt before using them. The README.txt
is clearsigned with my gpg key so it should be safe from tampering.
Unfortunately as I don't use windows I cannot give any advice on how
to do the verification on these platforms. Maybe someone familiar with
available tools can chime in.

I have checked the numpy downloads and they still match what I
uploaded, but as sourceforge does redirect based on OS and geolocation
this may not mean much.

Cheers,
Julian Taylor
Florian Lindner | 27 May 16:15 2015

MPI: Sendrecv blocks

Hello,

I have this piece of code:

comm = MPI.COMM_WORLD
temp = np.zeros(blockSize*blockSize)
PrintNB("Communicate A to", get_left_rank())
comm.Sendrecv(sendbuf=np.ascontiguousarray(lA), dest=get_left_rank(),
              recvbuf=temp)
lA = np.reshape(temp, (blockSize, blockSize))
PrintNB("Finished sending")

lA being a numpy array. Output is:

[0] Communicate A to 1
[2] Communicate A to 3
[3] Communicate A to 2
[1] Communicate A to 0
[1] Finished sending
# here it blocks

[n] is the rank. I have a circular exchange: 0>1, 1>0 and 2>3, 3>2. I
understood Sendrecv to be designed specifically for these cases, but it
still blocks.

What is the problem here?

Thanks!
Florian
Tom Krauss | 26 May 16:59 2015

addition to numpy.i

Hi folks,

After some discussion with Bill Spotz I decided to try to submit my new typemap to numpy.i that allows in-place arrays of an arbitrary number of dimensions to be passed in as a "flat" array with a single "size".

To that end I created my first pull request. Sorry if I missed any steps or procedures - I noticed only after I committed and opened the pull request that I should have created a new feature branch, sorry about that.

Anyway, I noticed the pull request initiated a series of tests, and one of them failed. How do I go about debugging and resolving the failure?

Thanks for your help,
  Tom Krauss
Matthew Brett | 26 May 16:56 2015

Strategy for OpenBLAS

Hi,

This morning I was wondering whether we ought to plan to devote some
resources to collaborating with the OpenBLAS team.

Summary:  we should explore ways of setting up numpy as a test engine
for OpenBLAS development.

Detail:

I am getting the impression that OpenBLAS is looking like the most
likely medium term solution for open-source stack builds of numpy and
scipy on Linux and Windows at least.

ATLAS has been our choice for this up until now, but it is designed
for optimizing to a particular CPU configuration, which will likely
make it slow on some or even most of the machines a general installer
gets installed on.  This is only likely to get more severe over time,
because current ATLAS development is on multi-core optimization, where
the number of cores may need to be set at compile time.

The worry about OpenBLAS has always been that it is hard to maintain,
and fixes don't always have tests.  There might be other alternatives
that are a better bet technically, but don't currently have OpenBLAS'
dynamic selection features or CPU support.

It is relatively easy to add tests using Python / numpy.  We like
tests.  Why don't we propose a collaboration with OpenBLAS where we
build and test numpy with every / most / some commits of OpenBLAS, and
try to make it easy for the OpenBLAS team to add tests.    Maybe we
can use and add to the list of machines on which OpenBLAS is tested
[1]?  We Berkeley Pythonistas can certainly add the machines at our
buildbot farm [2].  Maybe the Julia / R developers would be interested
to help too?

Cheers,

Matthew

[1] https://github.com/xianyi/OpenBLAS/wiki/Machine-List
[2] http://nipy.bic.berkeley.edu/buildslaves
Nathaniel Smith | 25 May 17:38 2015

Re: Chaining apply_over_axis for multiple axes.

On May 25, 2015 4:05 AM, "Andrew Nelson" <andyfaff <at> gmail.com> wrote:
>
> I have a function that operates over a 1D array, to return an array of a similar size.  To use it in a 2D fashion I would have to do something like the following:
>
> for row in range(np.size(arr, 0)):
>     arr_out[row] = func(arr[row])
> for col in range(np.size(arr, 1)):
>     arr_out[:, col] = func(arr[:, col])
>
> I would like to generalise this to N dimensions. Does anyone have any suggestions of how to achieve this?

The crude but effective way is

tmp_in = arr.reshape((-1, arr.shape[-1]))
tmp_out = np.empty(tmp_in.shape)
for i in range(tmp_in.shape[0]):
    tmp_out[i, :] = func(tmp_in[i, :])
out = tmp_out.reshape(arr.shape)

This won't produce any unnecessary copies if your input array is contiguous.

This also assumes you want to apply the function on the last axis. If not you can do something like

arr = arr.swapaxes(axis, -1)
... call the code above ...
out = out.swapaxes(axis, -1)

This will result in an extra copy of the input array though if it's >2d and the requested axis is not the last one.
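
For completeness, numpy also ships `np.apply_along_axis`, which wraps this reshape-and-loop pattern and handles the axis swapping internally (at some per-call overhead):

```python
import numpy as np

arr = np.arange(24.0).reshape(2, 3, 4)

def func(x):
    # Any 1D -> 1D function; cumulative sum as a stand-in.
    return np.cumsum(x)

# Apply func to every 1D slice along the last axis.
out = np.apply_along_axis(func, -1, arr)
```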

-n

Sebastian Berg | 25 May 17:21 2015

Re: Chaining apply_over_axis for multiple axes.

On Mo, 2015-05-25 at 21:02 +1000, Andrew Nelson wrote:
> I have a function that operates over a 1D array, to return an array of
> a similar size.  To use it in a 2D fashion I would have to do
> something like the following:
> 
> 
> for row in range(np.size(arr, 0)):
>     arr_out[row] = func(arr[row])
> for col in range(np.size(arr, 1)):
>     arr_out[:, col] = func(arr[:, col])
> 
> 
> I would like to generalise this to N dimensions. Does anyone have any
> suggestions of how to achieve this?  Presumably what I need to do is
> build an iterator, and then remove an axis:
> 
> 
> # arr.shape=(2, 3, 4)
> it = np.nditer(arr, flags=['multi_index'])
> it.remove_axis(2)
> while not it.finished:
>     arr_out[it.multi_index] = func(arr[it.multi_index])
>     it.iternext()
> 

Just warning that nditer is pretty low level (i.e. can be a bit mind
boggling since it is close to the C-side of things).

Anyway, you can of course do this by just iterating over the result. Since
you have no buffering, etc., this should work fine. There is also
`np.nested_iters`, but since I am a bit lazy to look it up, you would have
to check some examples from the numpy tests to see how it works.

- Sebastian

> 
> If I have an array with shape (2, 3, 4) this would allow me to iterate
> over the 6 1D arrays that are 4 elements long.  However, how do I then
> construct the iterator for the preceding axes?

Andrew Nelson | 25 May 13:02 2015

Chaining apply_over_axis for multiple axes.

I have a function that operates over a 1D array, to return an array of a similar size.  To use it in a 2D fashion I would have to do something like the following:

for row in range(np.size(arr, 0)):
    arr_out[row] = func(arr[row])
for col in range(np.size(arr, 1)):
    arr_out[:, col] = func(arr[:, col])

I would like to generalise this to N dimensions. Does anyone have any suggestions of how to achieve this?  Presumably what I need to do is build an iterator, and then remove an axis:

# arr.shape=(2, 3, 4)
it = np.nditer(arr, flags=['multi_index'])
it.remove_axis(2)
while not it.finished:
    arr_out[it.multi_index] = func(arr[it.multi_index])
    it.iternext()

If I have an array with shape (2, 3, 4) this would allow me to iterate over the 6 1D arrays that are 4 elements long.  However, how do I then construct the iterator for the preceding axes?
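
One way to iterate over all the leading axes without building an nditer by hand is `np.ndindex` over the leading shape (a sketch; `func` is a stand-in for the real 1D function):

```python
import numpy as np

arr = np.arange(24.0).reshape(2, 3, 4)
out = np.empty_like(arr)

def func(x):
    return x * 2  # stand-in 1D -> 1D function

# np.ndindex yields every index tuple over the leading axes,
# leaving the last axis free for the 1D function.
for idx in np.ndindex(arr.shape[:-1]):
    out[idx] = func(arr[idx])
```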
Ralf Gommers | 24 May 16:55 2015

ANN: Scipy 0.16.0 beta release 2

Hi all,

The second beta for Scipy 0.16.0 is now available. After beta 1 a couple of critical issues on Windows were solved, and there are now also 32-bit Windows binaries (along with the sources and release notes) available on https://sourceforge.net/projects/scipy/files/scipy/0.16.0b2/.

Please try this release and report any issues on the scipy-dev mailing list.

Cheers,
Ralf


