Vincent Davis | 27 Oct 17:49 2014

ODE how to?

It's been too long since I have done differential equations, and I am not sure of the best tools to solve this problem.
I am starting with a basic kinematic equation for the balance of forces.
P/v - ((A*Cw*Rho*v^2)/2 + m*g*Crl + m*g*slope) = m*a
P: power
x: position
v: velocity, x'
a: acceleration x"
(A*Cw*Rho*v^2)/2 : air resistance
m*g*Crl : rolling resistance
m*g*slope : grade resistance (gravity component along the slope)

I am modifying the above equation so that air velocity and slope are dependent on location x.
Vair = v + f(x)  where f(x) is the weather component and a function of location x.
Same goes for slope, slope = g(x)

Power is a function I want to optimize/find to minimize time, but at this point I just want to simulate; maybe something like:
P = 2500/(v+1)
I will have restrictions on P, but I'm not interested in that now.
The "course" I want to simulate therefore defines slope and wind speed, and is of a fixed distance.

I have played with some of the simple scipy.integrate.odeint examples. I get that I need to define a system of equations, but I am not really sure of the rules for doing so. A little help would be greatly appreciated.
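For what it's worth, here is a minimal sketch of how the force balance above can be written as a first-order system for odeint. All the parameter values and the f(x)/g(x) profiles below are made-up placeholders, not values from the post; the starting velocity is nonzero to avoid the P/v singularity at v = 0.

```python
import numpy as np
from scipy.integrate import odeint

# Illustrative parameter values (assumptions, not from the post)
m, g = 80.0, 9.81            # mass [kg], gravity [m/s^2]
A, Cw, rho = 0.5, 0.5, 1.2   # frontal area, drag coefficient, air density
Crl = 0.004                  # rolling-resistance coefficient

def wind(x):                 # f(x): headwind component at position x (placeholder)
    return 0.5 * np.cos(x / 500.0)

def slope(x):                # g(x): grade as a function of position (placeholder)
    return 0.01 * np.sin(x / 1000.0)

def power(v):                # candidate power policy from the post
    return 2500.0 / (v + 1.0)

def rhs(y, t):
    x, v = y                            # state vector: position and velocity
    vair = v + wind(x)                  # air speed seen by the rider
    drag = 0.5 * A * Cw * rho * vair**2
    force = power(v) / v - drag - m * g * Crl - m * g * slope(x)
    return [v, force / m]               # dx/dt = v, dv/dt = a

t = np.linspace(0.0, 600.0, 601)
sol = odeint(rhs, [0.0, 1.0], t)        # start at x=0 with v=1 m/s
```

The pattern is the general one: stack (x, v) into one state vector and return its derivatives; position-dependent terms just call functions of x inside the right-hand side.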

Vincent Davis
NumPy-Discussion mailing list
NumPy-Discussion <at>
Edison Gustavo Muenz | 27 Oct 16:56 2014

Accept numpy arrays on arguments of numpy.testing.assert_approx_equal()

I’ve implemented support for numpy.arrays for the arguments of numpy.testing.assert_approx_equal() and have issued a pull-request on Github.

I don’t know if I should be sending the message to the list to notify about this, but since I’m new to the numpy-dev list I think it never hurts to say hi :)
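For context, a sketch of the behaviour the pull request is meant to enable. The wrapper below is mine, not the PR's implementation: it just mimics array support elementwise on top of today's scalar function.

```python
import numpy as np
from numpy.testing import assert_approx_equal

def assert_approx_equal_array(actual, desired, significant=7):
    """Hypothetical helper: apply the scalar check to each element pair."""
    actual = np.asarray(actual)
    desired = np.asarray(desired)
    assert actual.shape == desired.shape
    for a, d in zip(actual.ravel(), desired.ravel()):
        assert_approx_equal(a, d, significant=significant)
```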

Glen Mabey | 27 Oct 16:06 2014

Fwd: numpy.i and std::complex


I was very excited to learn about numpy.i for easy numpy+swigification of C code -- it's really handy.

Knowing that swig wraps C code, I wasn't too surprised that there was a known issue with complex data
types, but it was still pretty disappointing because most of my data is complex, and I'm invoking methods
written to use C++'s std::complex class.

After quite a bit of puzzling and not much help from previous mailing list posts, I created this very brief
but very useful file, which I call numpy_std_complex.i --

/* -*- C -*-  (not really, but good for syntax highlighting) */

%include "numpy.i"

%include <std_complex.i>

#ifdef SWIGPYTHON
%numpy_typemaps(std::complex<float>,  NPY_CFLOAT , int)
%numpy_typemaps(std::complex<double>, NPY_CDOUBLE, int)
#endif /* SWIGPYTHON */
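For anyone wanting to try it, a hypothetical interface file using the fragment above might look like this. The module name, header, and `process` function are made-up placeholders:

```swig
/* example.i -- hypothetical interface using numpy_std_complex.i */
%module example
%{
#define SWIG_FILE_WITH_INIT
#include "example.h"   /* declares process() below */
%}
%include "numpy_std_complex.i"
%init %{ import_array(); %}

/* maps a (std::complex<float>*, int) pair to a 1-D complex64 NumPy array */
%apply (std::complex<float>* IN_ARRAY1, int DIM1) {(std::complex<float>* data, int n)};
void process(std::complex<float>* data, int n);
```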

I'd really like for this to be included alongside numpy.i -- but maybe I overestimate the number of numpy
users who use complex data (let your voice be heard!) and who also end up using std::complex in C++ land.

Or if anyone wants to improve upon this usage I would be very happy to hear about what I'm missing.

I'm sure there's a documented way to submit this file to the git repo, but let me simultaneously ask whether
list subscribers think this is worthwhile and ask someone to add+push it for me …

Glen Mabey
D. Michael McFarland | 27 Oct 14:26 2014

Choosing between NumPy and SciPy functions

A recent post raised a question about differences in results obtained
with numpy.linalg.eigh() and scipy.linalg.eigh().  It is clear that
these functions address different
mathematical problems (among other things, the SciPy routine can solve
the generalized as well as standard eigenproblems); I am not concerned
here with numerical differences in the results for problems both should
be able to solve (the author of the original post received useful
replies in that thread).
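To make the distinction concrete, a minimal sketch (random matrices, purely illustrative): NumPy's eigh() solves only the standard problem A v = w v, while SciPy's also accepts a second matrix for the generalized problem A v = w B v.

```python
import numpy as np
from scipy import linalg as sla

rng = np.random.RandomState(0)
a = rng.randn(5, 5)
a = 0.5 * (a + a.T)                 # symmetric A
b = rng.randn(5, 5)
b = b @ b.T + 5.0 * np.eye(5)       # symmetric positive-definite B

w_np, v_np = np.linalg.eigh(a)      # NumPy: standard problem only
w_sp, v_sp = sla.eigh(a, b)         # SciPy: generalized problem A v = w B v
```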

What I would like to ask about is the situation this illustrates, where
both NumPy and SciPy provide similar functionality (sometimes identical,
to judge by the documentation).  Is there some guidance on which is to
be preferred?  I could argue that using only NumPy when possible avoids
unnecessary dependence on SciPy in some code, or that using SciPy
consistently makes for a single interface and so is less error prone.
Is there a rule of thumb for cases where SciPy names shadow NumPy names?

I've used Python for a long time, but have only recently returned to
doing serious numerical work with it.  The tools are very much improved,
but sometimes, like now, I feel I'm missing the obvious.  I would
appreciate pointers to any relevant documentation, or just a summary of
conventional wisdom on the topic.

Neal Becker | 27 Oct 13:14 2014

multi-dimensional c++ proposal

The multi-dimensional C++ proposal is interesting (about time!)


-- Those who don't understand recursion are doomed to repeat it
Sunghwan Choi | 27 Oct 09:37 2014

Higher accuracy in diagonalization

Dear all,

I am now diagonalizing a 200-by-200 symmetric matrix, but the two methods, scipy.linalg.eigh and numpy.linalg.eigh, give significantly different results: they differ on the order of 10^-4. One of them is inaccurate, or both are, within that range. Which one is more accurate? Are there any ways to control the accuracy of the diagonalization? If you have some idea, please let me know.
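One way to judge which routine is more accurate, rather than comparing them against each other, is to check each decomposition's residual directly. A sketch with a random stand-in matrix (not your data):

```python
import numpy as np

rng = np.random.RandomState(0)
a = rng.randn(200, 200)
a = 0.5 * (a + a.T)                        # symmetric test matrix (stand-in)

w, v = np.linalg.eigh(a)
resid = np.linalg.norm(a @ v - v * w)      # || A V - V diag(w) ||
rel = resid / np.linalg.norm(a)            # should be near machine epsilon
```

A routine whose relative residual is close to machine epsilon is as accurate as the floating-point format allows; differences of 10^-4 between methods usually point to ill-conditioning or a non-symmetric input rather than a bad solver.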


Sunghwan Choi

Ph.D. candidate

Department of Chemistry

KAIST (South Korea)


RayS | 26 Oct 19:40 2014

Re: Memory efficient alternative for np.loadtxt and np.genfromtxt

At 06:32 AM 10/26/2014, you wrote:
On Sun, Oct 26, 2014 at 1:21 PM, Eelco Hoogendoorn
<hoogendoorn.eelco <at>> wrote:
> I'm not sure why the memory doubling is necessary. Isn't it possible to
> preallocate the arrays and write to them?

Not without reading the whole file first to know how many rows to preallocate

Seems to me that loadtxt()
should have an optional shape. I often know how many rows I have (# of samples of data) from other metadata.
- if the file is smaller for some reason (you're not sure and pad your estimate), it could do one of:
    - zero-pad the array
    - raise an exception
    - return a truncated view
- if larger:
    - raise an exception
    - return only the data read (this would act like a `size` argument)
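A rough sketch of what that could look like (the function name and the truncate-on-short-file choice are just one of the options listed, everything here is hypothetical):

```python
import io
import numpy as np

def loadtxt_known_rows(f, nrows, ncols):
    """Sketch: read whitespace-delimited text into a preallocated array
    when the row count is known from metadata."""
    out = np.empty((nrows, ncols))
    i = 0
    for line in f:
        if not line.split():
            continue
        if i >= nrows:
            raise ValueError("more rows than expected")   # 'if larger' case
        out[i] = [float(tok) for tok in line.split()]
        i += 1
    if i < nrows:
        return out[:i]        # truncated view; could zero-pad or raise instead
    return out

arr = loadtxt_known_rows(io.StringIO("1 2\n3 4\n"), 2, 2)
```

No intermediate list of rows is ever built, so peak memory stays at one array.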
- Ray S
Julian Taylor | 26 Oct 18:13 2014

ANN: NumPy 1.9.1 release candidate


We have finally finished the first release candidate of NumPy 1.9.1;
sorry for the week's delay.
The 1.9.1 release will, as usual, be a bugfix-only release in the 1.9.x
series.
The tarballs and win32 binaries are available on sourceforge:

If no regressions show up the final release is planned next week.
The upgrade is recommended for all users of the 1.9.x series.

The following issues have been fixed:
* gh-5184: restore linear edge behaviour of gradient as it was in < 1.9.
  The second-order behaviour is available via the `edge_order` keyword
* gh-4007: workaround Accelerate sgemv crash on OSX 10.9
* gh-5100: restore object dtype inference from iterable objects without
* gh-5163: avoid gcc-4.1.2 (red hat 5) miscompilation causing a crash
* gh-5138: fix nanmedian on arrays containing inf
* gh-5203: copy inherited masks in MaskedArray.__array_finalize__
* gh-2317: genfromtxt did not handle filling_values=0 correctly
* gh-5067: restore api of npy_PyFile_DupClose in python2
* gh-5063: cannot convert invalid sequence index to tuple
* gh-5082: Segmentation fault with argmin() on unicode arrays
* gh-5095: don't propagate subtypes from np.where
* gh-5104: np.inner segfaults with SciPy's sparse matrices
* gh-5136: Import dummy_threading if importing threading fails
* gh-5148: Make numpy import when run with Python flag '-OO'
* gh-5147: Einsum double contraction in particular order causes ValueError
* gh-479: Make f2py work with intent(in out)
* gh-5170: Make python2 .npy files readable in python3
* gh-5027: Use 'll' as the default length specifier for long long
* gh-4896: fix build error with MSVC 2013 caused by C99 complex support
* gh-4465: Make PyArray_PutTo respect writeable flag
* gh-5225: fix crash when using arange on datetime without dtype set
* gh-5231: fix build in c99 mode

Source tarballs, windows installers and release notes can be found at

Julian Taylor

Artur Bercik | 26 Oct 10:09 2014

Subdividing NumPy array into Regular Grid

I have a rectangle with the following coordinates:

import numpy as np

ulx, uly = (110, 60)  ## upper left lon, upper left lat
urx, ury = (120, 60)  ## upper right lon, upper right lat
lrx, lry = (120, 50)  ## lower right lon, lower right lat
llx, lly = (110, 50)  ## lower left lon, lower left lat

I want to divide that single rectangle into 100 regular grid cells, and want to calculate the (ulx, uly), (urx, ury), (lrx, lry), and (llx, lly) for each cell separately:

lats = np.linspace(60, 50, 10)
lons = np.linspace(110, 120, 10)

lats = np.repeat(lats,10).reshape(10,10)
lons = np.tile(lons,10).reshape(10,10)

I cannot figure out what to do next.

Is somebody familiar with this kind of problem?
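One way to finish this, as a sketch: note that 100 cells need 11 grid lines per axis, so linspace wants 11 points rather than 10; the corners of cell (i, j) are then adjacent pairs of grid lines.

```python
import numpy as np

lats = np.linspace(60, 50, 11)   # 11 lines -> 10 rows of cells
lons = np.linspace(110, 120, 11) # 11 lines -> 10 columns of cells

cells = []
for i in range(10):              # row index, north to south
    for j in range(10):          # column index, west to east
        ul = (lons[j],     lats[i])
        ur = (lons[j + 1], lats[i])
        lr = (lons[j + 1], lats[i + 1])
        ll = (lons[j],     lats[i + 1])
        cells.append((ul, ur, lr, ll))
```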
Saullo Castro | 26 Oct 09:46 2014

Memory efficient alternative for np.loadtxt and np.genfromtxt

I would like to start working on a memory efficient alternative for np.loadtxt and np.genfromtxt that uses arrays instead of lists to store the data while the file iterator is exhausted.

The motivation came from this SO question:

where for huge arrays the current NumPy ASCII readers are really slow and require ~6 times more memory than the final array. For the same case, Pandas' read_csv() required 2 times more memory.
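In case it helps the discussion, a rough sketch of the array-backed approach (the function name and the doubling strategy are illustrative assumptions, not a proposed API):

```python
import io
import numpy as np

def read_rows_into_array(line_iter, ncols, chunk=1024):
    """Sketch: accumulate rows in a NumPy buffer that doubles when full,
    instead of in a Python list of lists."""
    buf = np.empty((chunk, ncols))
    n = 0
    for line in line_iter:
        toks = line.split()
        if not toks:
            continue
        if n == buf.shape[0]:
            buf = np.resize(buf, (2 * buf.shape[0], ncols))  # amortized growth
        buf[n] = [float(t) for t in toks]
        n += 1
    return buf[:n].copy()   # trim to the rows actually read

arr = read_rows_into_array(io.StringIO("1 2\n3 4\n5 6\n"), 2, chunk=2)
```

Peak memory is at most the final array plus the not-yet-trimmed tail of the buffer, rather than a full list-of-lists copy of the data.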

I would be glad if you could share your experience on this matter.

Matthew Brett | 25 Oct 03:04 2014

npy_log2 undefined on Linux


We (dipy developers) have hit a new problem trying to use the
``npy_log2`` C function in our code.

Specifically, on Linux, but not on Mac or Windows, we are getting
errors of form:

ImportError: /path/to/extension/ undefined
symbol: npy_log2

when compiling something like:

import numpy as np
cimport numpy as cnp

cdef extern from "numpy/npy_math.h" nogil:
    double npy_log2(double x)

def use_log2(double val):
    return npy_log2(val)

See : for
a self-contained example that replicates the failure with ``make``.

I guess this means that the code referred to by ``npy_log`` is not on
the ordinary runtime path on Linux?

What should I do next to debug?
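One thing worth checking (an assumption on my part about this setup): the npy_math functions live in the static npymath library that ships with NumPy, so an extension has to link it explicitly rather than rely on runtime resolution. A setup.py sketch, with 'mymod' and 'mymod.c' as placeholder names:

```python
from distutils.core import setup, Extension
from numpy.distutils.misc_util import get_info

# get_info('npymath') returns include_dirs, library_dirs and
# libraries (typically ['npymath', 'm']) for the static library
info = get_info('npymath')
setup(ext_modules=[
    Extension('mymod', ['mymod.c'],
              include_dirs=info['include_dirs'],
              library_dirs=info['library_dirs'],
              libraries=info['libraries']),
])
```

If the Mac and Windows builds happen to link it transitively, that would explain why only Linux fails.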

Thanks a lot,