Allen B. Riddell | 29 Mar 14:20 2015

Reading from C++ stream

Hi,

I'm currently working with a C++ library that makes use of C++ streams.
I'd like to read from these streams in Cython/Python with a minimum of
code.

From reading past mailing list discussions it seems like there aren't
many good options, with one of the best being this bit of production
code that uses boost:

http://cci.lbl.gov/cctbx_sources/boost_adaptbx/python_streambuf.h

Before I plunge in, I thought I'd check to see if there's an easier way.
(The previous discussions are from 2013 and before.)
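
For concreteness, the least code I can imagine is hand-declaring just enough of std::istream to pull raw bytes into Python, along these lines (an untested sketch; "get_library_stream" and "mylib.hpp" are made-up stand-ins for whatever the wrapped library actually exposes):

# distutils: language = c++
# Untested sketch: declare just enough of std::istream by hand to read raw
# bytes from a C++ stream into a Python bytes object.
from libc.stdlib cimport malloc, free

cdef extern from "<istream>" namespace "std" nogil:
    cdef cppclass istream:
        istream& read(char* s, long n)
        long gcount()

cdef extern from "mylib.hpp" namespace "mylib":
    istream* get_library_stream()       # hypothetical accessor in the wrapped library

def read_chunk(long n):
    """Read up to n bytes from the library's stream."""
    cdef istream* strm = get_library_stream()
    cdef char* buf = <char*> malloc(n)
    if buf == NULL:
        raise MemoryError()
    try:
        strm.read(buf, n)
        return buf[:strm.gcount()]      # slicing a char* copies into a bytes object
    finally:
        free(buf)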

Best wishes,

Allen


Franco Nicolas Bellomo | 28 Mar 19:12 2015

Cython 0.20 doesn't speed up with boundscheck(False)

Hi! I'm taking my first steps with Cython and have several questions. The goal is to solve a heat-transfer problem. My first attempt used pure NumPy, but it is too slow for the run times I need. I'm also using Cython so that I can add OpenMP parallelism.

This is my setup.py:

from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import numpy as np

ext_modules = [Extension("explicit_cython2", ["explicit_cython2.pyx"])]

setup(
  name = 'Explicit method using Cython',
  cmdclass = {'build_ext': build_ext},
  include_dirs = [np.get_include()],
  ext_modules = ext_modules
)

And this is my cython code (explicit_cython2.pyx):

import cython
import numpy as np
cimport numpy as np
DTYPE = np.int
ctypedef np.int_t DTYPE_t

@cython.boundscheck(False)
@cython.wraparound(False)
cpdef double [:,:] explicit_cython(np.ndarray[np.float_t,ndim=1] u, float kappa, float dt, float dz, np.ndarray[np.float_t,ndim=1] term_const,
                    unsigned int nz, np.ndarray[long,ndim=1] plot_time):
    '''Cython version of explicit method'''
   
    #Defining C types
    cdef unsigned int i, k, j
    cdef unsigned int len_plot = len(plot_time) - 1
    cdef float lamnda = kappa*dt/dz**2
   
    # Memoryview on a NumPy array
    cdef double [:] u_view = u
    cdef double [:] un_view = u
    cdef double [:] const_view = term_const
    cdef double [:,:] uOut_view = np.zeros([len_plot + 1, nz])
    cdef long [:] plot_view = plot_time

    uOut_view[0] = u_view
   
    for i in range(len_plot):
        for k in range(plot_view[i], plot_view[i+1]):
            un_view = u_view
            for j in range(1, nz-1):
                u_view[j] = un_view[j] + lamnda*(un_view[j+1] - 2*un_view[j] + un_view[j-1]) + const_view[j]
        uOut_view[i+1] = u_view
 
    return uOut_view

I have some questions:

# Why do I see no time difference when I set `boundscheck` and `wraparound` to False?

# Is the setup.py I implemented correct? I ask because the Cython docs use `from Cython.Build import cythonize` (see the sketch after this list).

# This code is faster than the NumPy implementation but slower than Numba. Can you think of any other improvements?
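
For reference, here is what the cythonize-based setup from the docs would look like for this module (a sketch only; for a single extension it should be equivalent to the build_ext version above):

# setup.py using cythonize instead of the build_ext cmdclass (sketch).
from distutils.core import setup
from Cython.Build import cythonize
import numpy as np

setup(
    name='Explicit method using Cython',
    ext_modules=cythonize("explicit_cython2.pyx"),
    include_dirs=[np.get_include()],
)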

Thank you very much for your help.

Best regards,
Franco


[0] This notebook describes the mathematical problem; apologies that it is written in Spanish.
http://nbviewer.ipython.org/github/pewen/transferencia_calor/blob/master/transporte_calor_metodo_explicito.ipynb

Sturla Molden | 28 Mar 00:33 2015

New Cython API for BLAS and LAPACK merged in SciPy :)

Thanks to Ian Henriksen, a new Cython API for BLAS and LAPACK is finally
merged in SciPy. It will be included in the next release. (For those who
cannot wait, get SciPy's master branch from Github.)

This makes it possible to use SciPy's BLAS and LAPACK from any 3rd party
Cython module without explicitly linking with the libraries. This means
that projects like scikit-learn and statsmodels do not need to maintain a
separate build dependency on BLAS and LAPACK. 

All we need to do is:

cimport scipy.linalg.cython_blas as blas
cimport scipy.linalg.cython_lapack as lapack

Now we can call blas.zgemm(...), lapack.dgelss(...), etc. We do not link with
BLAS or LAPACK; it is all taken care of in the generated Cython code.
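
For example, a dot product through the new interface looks roughly like this (a minimal sketch, assuming a SciPy build that already includes scipy.linalg.cython_blas; note that all arguments are passed by pointer, Fortran style):

cimport scipy.linalg.cython_blas as blas

def dot(double[::1] x, double[::1] y):
    # Call BLAS ddot directly; no Python-level SciPy call and no linking
    # against BLAS in this module.
    cdef int n = x.shape[0]
    cdef int inc = 1
    return blas.ddot(&n, &x[0], &inc, &y[0], &inc)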

All of BLAS and almost all of LAPACK is exposed. The new Cython API is much
more comprehensive than the current Python interfaces because it does not
depend on the f2py wrappers. Because f2py wrappers are not used, the Cython
API also does not depend on abusing their _cpointer attribute.

The Cython API can also (with some extra effort) be used from C (or C++ or
Fortran) if Cython is used to generate the Python to C interface. In this
case you need to fill in some global function pointers when the module is
imported and call these from C.

A Cython module using these interfaces will have a build dependency on
NumPy (i.e. similar to cimport numpy). The Cython module will also have 
a runtime dependency on NumPy and SciPy.

https://github.com/scipy/scipy/pull/4021

For future development of SciPy, this means we can use hand-crafted 
Cython code instead of relying on f2py wrappers for BLAS and LAPACK.
This makes it much easier to control the overhead, including when copies
and transpositions are made, and lwork queries will not need a separate
f2py wrapper. We also avoid linking in LAPACK and BLAS in multiple
locations and bloating the memory footprint if they are required by native 
code. Currently, every extension module that needs BLAS or LAPACK
functions statically links its own copy.

Sturla


Barrett Brister | 27 Mar 19:11 2015

Cython: Runtime and tutorial books

Hello, I have been doing some scripting with NumPy/SciPy for a while, but Cython has a reputation for being particularly fast while remaining relatively straightforward to write.

1. Can you guys link me to some relevant discussions of the relative runtime speeds of Cython vs. Numpy vs. C code?

2. What would be a good tutorial book for someone who has a basic level of Python knowledge and knows very little C, to learn Cython?

Thanks,

Barrett

Jeroen Demeyer | 25 Mar 17:17 2015

globals() in Cython

In the release notes of Cython 0.15, there is

* globals() now returns a read-only dict of the Cython module's globals, 
rather than the globals of the first non-Cython module in the stack

However, it seems that this is not what actually happens in Cython 0.22. 
Did this change get reverted (intentionally or not)?
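
(For context, a minimal check would be something along these lines; this is just a sketch with made-up names, and the 0.22 behavior is exactly what is in question:)

# check_globals.pyx -- observe what globals() returns inside a compiled module.
module_level_name = 1

def inspect():
    g = globals()
    print('module_level_name' in g)   # True if the module's own globals are returned
    try:
        g['injected'] = 2              # should fail if the dict is read-only as documented
    except TypeError as exc:
        print(exc)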

Jeroen.


AMAN singh | 25 Mar 06:18 2015

How to deal with Multiple dtype in cython

Hi developer

I am working on porting the scipy.ndimage module from C to Cython. I need to maintain the same speed after the rewrite, and the module should work for all NumPy dtypes. Since supporting multiple data types is the main performance bottleneck, can you suggest some techniques that satisfy both requirements (speed and genericity)?
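
(One technique I have seen suggested is Cython's fused types, which generate a specialized C loop per dtype from a single source function; the sketch below is illustrative only, with a made-up type list and function, not ndimage code:)

ctypedef fused pixel_t:
    unsigned char
    int
    float
    double

def scale_inplace(pixel_t[:] data, double factor):
    # One specialized version of this loop is generated per member of pixel_t.
    cdef Py_ssize_t i
    for i in range(data.shape[0]):
        data[i] = <pixel_t>(data[i] * factor)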

Thanks in advance.

Regards, 
Aman Singh

Curtis M. Faith | 24 Mar 21:35 2015

Inconsistent requirement for namespace in .pxd files causing compile errors

I'm trying to figure out why these definitions in a header file:

namespace opencog {


void initialize_opencog(AtomSpace* atomSpace, const char* configFile = NULL);
void finalize_opencog();
void configuration_load(const char* configFile);


} // namespace opencog

which when imported in a .pxd like this:

from opencog.atomspace cimport cAtomSpace

cdef extern from "opencog/cython/opencog/Utilities.h" namespace "opencog":
    # C++:
    #
    #   initialize_opencog(AtomSpace* atomSpace, const char* configFile = NULL);
    #   void finalize_opencog();
    #   void configuration_load(const char* configFile);
    #
    cdef void c_initialize_opencog "initialize_opencog" (cAtomSpace*, char*)
    cdef void c_finalize_opencog "finalize_opencog" ()
    cdef void c_configuration_load "configuration_load" (char*)

and then used in a .pyx file like:

def finalize_opencog():
    c_finalize_opencog()

give the following errors for each of the function names when compiling the Cython-generated utilities.cpp file:

[ 29%] Building CXX object opencog/cython/opencog/CMakeFiles/utilities_cython.dir/utilities.cpp.o

/home/opencog/src/opencog/build/opencog/cython/opencog/utilities.cpp: In function ‘PyObject* __pyx_pf_7opencog_9utilities_2finalize_opencog(PyObject*)’:
/home/opencog/src/opencog/build/opencog/cython/opencog/utilities.cpp:987:20: error: ‘finalize_opencog’ was not declared in this scope
   finalize_opencog();
                    ^
/home/opencog/src/opencog/build/opencog/cython/opencog/utilities.cpp:987:20: note: suggested alternative:
In file included from /home/opencog/src/opencog/build/opencog/cython/opencog/utilities.cpp:364:0:
/home/opencog/src/opencog/opencog/cython/opencog/Utilities.h:30:6: note:   opencog::finalize_opencog
 void finalize_opencog();

If I prepend "opencog::" like:

cdef void c_finalize_opencog "opencog::finalize_opencog" ()

then everything works fine. But I thought that the namespace "opencog" was supposed to take care of that already.

In another .pxd file, I have the following and it works just fine:

cdef extern from "opencog/cython/opencog/BindlinkStub.h" namespace "opencog":
    # C++:
    #   Handle stub_bindlink(AtomSpace*, Handle);
    #
    cdef cHandle c_stub_bindlink "stub_bindlink" (cAtomSpace*, cHandle)

and I get no errors without prepending "opencog::" to the function name. The Utilities.pxd above is the only .pxd file that generates errors when the namespace prefix is left out.

Any idea what the difference is between the two usages above?

Curtis M. Faith | 24 Mar 17:27 2015

Python calling Cython calling C++ calling embedded Python

I'm working on improving the Python bindings for the OpenCog framework. The framework already supports invoking callbacks that are Python functions, using the embedded Python C API.

It also supports Python bindings implemented via Cython for running Python scripts that manipulate OpenCog objects. However, callbacks into Python functions from within OpenCog are not currently working properly, because the internal interpreter needs to be initialized with some OpenCog-specific objects first.

So I wrote Cython wrappers for the OpenCog-specific Python interpreter initialization, and it appears that the C API is operating on the same process and interpreter object as the Python binary (/usr/bin/python on Ubuntu) that is running the scripts at the outer level. Presumably it is doing this via shared-library linking (libPython2.7.so)?

So for example, when I ran a script called test.py via:

/usr/bin/python test.py

from opencog.atomspace import AtomSpace
from opencog.utilities import initialize_opencog, finalize_opencog

atomspace = AtomSpace()

print
print "initialize_opencog"
initialize_opencog(atomspace)

print "finalize_opencog"
finalize_opencog()

print "del atomspace"
del atomspace

print "Test complete"

this crashed after the return from finalize_opencog(), because finalize_opencog contains code that calls Py_Finalize(), which is not a good idea when it tears down all the Python objects created by the running /usr/bin/python executable while it is still running.

I fixed this crash by checking whether Python is already initialized and storing that fact, so that I can avoid calling Py_Finalize() when cleaning up OpenCog objects if Python was already initialized on entry.
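
(The guard amounts to something like the following sketch; the names are made up and this is not the actual OpenCog code:)

cdef extern from "Python.h":
    int Py_IsInitialized()
    void Py_Initialize()
    void Py_Finalize()

cdef bint _started_interpreter = False

cdef void ensure_python_initialized():
    # Only start the interpreter if nobody else has; remember whether we did.
    global _started_interpreter
    if not Py_IsInitialized():
        Py_Initialize()
        _started_interpreter = True

cdef void shutdown_python_if_owned():
    # Only finalize the interpreter if we were the ones who started it.
    global _started_interpreter
    if _started_interpreter:
        Py_Finalize()
        _started_interpreter = False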

I haven't read anything about this particular case, where I'm accessing C++ via Cython (and its Python C API calls), which in turn calls back into Python code via the C API.

Is this really what is happening? Can I reliably manipulate the internals of the Python interpreter running the outer-level script, using Python's C API from within functions written in C++ and called via Cython? For instance, to look up a function that is declared in the __main__ module?

If so, how can I make sure that the Python interpreters are the same? For instance, if the Python binary was statically linked or linked to another version of Python, then this approach would probably not work. So I'd like to be able to make some sort of check and then warn the user if they are different if I start counting on this behavior under normal circumstances. I want to be able to detect when someone tries running from a Python 3 executable when OpenCog is linked to libPython2.7.so, or from Python2.7.2 linking to libPython2.7.2.so in one location while OpenCog is linking against libPython2.7.so in another location.

- Curtis

Antony Lee | 24 Mar 07:43 2015

Optimizing iteration on zip objects

Hi,
I believe that loops such as

for x, y, z in zip(xs, ys, zs): ...

(where the number of arguments and unpacking variables are directly given in that line) are currently not optimized at all (as in, a Python zip object is created, and tuples obtained and unpacked).  Having looked at the Cython sources, I see an optimization for enumerate (_transform_enumerate_iteration) that (I believe?) rewrites

for x in enumerate(y): ...

into

i = 0
for x' in y:
    i += 1
    ...

although it is not clear to me whether there is actual tuple creation and unpacking going on there.  I would be happy to try to write a similar optimizer for zip if there is interest for it, but would need some help (such as a short walkthrough of the "enumerate" optimizer).
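
(For clarity, the semantics any zip optimizer would have to preserve are just those of driving the iterators directly, with no intermediate zip object or tuples; a plain-Python sketch of the intended result, not a description of Cython's actual transform:)

it_x, it_y, it_z = iter(xs), iter(ys), iter(zs)
while True:
    try:
        x = next(it_x)
        y = next(it_y)
        z = next(it_z)
    except StopIteration:
        break
    # ... loop body using x, y, z ...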

Antony

afylot | 23 Mar 18:17 2015

installing cython with python3.2

I was installing cython 0.22 from the tarball (python setup.py install --prefix=my/prefix), compiling with gcc.
I got an error:

gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security -fPIC -I/usr/include/python3.2mu -c /home/afylot/downloads/Cython-0.22/Cython/Plex/Scanners.c -o build/temp.linux-x86_64-3.2/home/afylot/downloads/Cython-0.22/Cython/Plex/Scanners.o

/home/afylot/downloads/Cython-0.22/Cython/Plex/Scanners.c:308:1: error: unknown type name ‘Py_UNICODE’

and many other errors that all seem related to Py_UNICODE.

/usr/include/python3.2mu seems to be the directory where Py_UNICODE should be defined, but searching it with fgrep turns up no such definition; the directory contains only a single file, pyconfig.h. I have already installed Cython 0.22 with Python 2.7 without any problem, and the corresponding Python 2.7 include directory does contain the definition of Py_UNICODE, along with many .h files besides pyconfig.h.

I can't tell whether this is a bug in Cython or whether my Python 3.2 installation is missing something. I didn't install Python 3.2 myself, so what should I ask the system administrator to do to fix this?

pije | 23 Mar 08:46 2015

Buffer protocol definition

Hi cython users,

I'm trying to implement the buffer protocol in Cython. FYI, I'm using Cython 0.19 and Python 2.7.

The short story: I declare a new class in which I implement the two necessary methods, __getbuffer__ and __releasebuffer__. Compilation is OK, but I can't get any data out of a Python object created using the buffer protocol.

Here is the cython code defining my buffer protocol:

cimport numpy as CNY

# Cython buffer protocol implementation for my array class
cdef class P_NpArray:
    cdef CNY.ndarray npy_ar

    def __cinit__(self, inpy_ar):
        self.npy_ar = inpy_ar

    def __getbuffer__(self, Py_buffer *buffer, int flags):
        cdef Py_ssize_t ashape[2]
        ashape[0] = self.npy_ar.shape[0]
        ashape[1] = self.npy_ar.shape[1]
        cdef Py_ssize_t astrides[2]
        astrides[0] = self.npy_ar.strides[0]
        astrides[1] = self.npy_ar.strides[1]
        buffer.buf = <void *> self.npy_ar.data
        buffer.format = 'f'
        buffer.internal = NULL
        buffer.itemsize = self.npy_ar.itemsize
        buffer.len = self.npy_ar.size * self.npy_ar.itemsize
        buffer.ndim = self.npy_ar.ndim
        buffer.obj = self
        buffer.readonly = 0
        buffer.shape = ashape
        buffer.strides = astrides
        buffer.suboffsets = NULL

    def __releasebuffer__(self, Py_buffer *buffer):
        pass

The code above compiles fine, but I can't retrieve the buffer data properly.
See the following test where:

  • I create a numpy array
  • load it with my BP class
  • try to retrieve it as a numpy array (just to showcase my problem):
>>> import myarray
>>> import numpy as np
>>> ar = np.ones((2,4))        # create a numpy array
>>> ns = myarray.P_NpArray(ar) # declare numpy array as a new numpy-style array
>>> print ns
<myarray.P_NpArray object at 0x7f30f2791c58>
>>> nsa = np.asarray(ns)       # convert back to numpy array; buffer protocol called here
/home/tools/local/x86z/lib/python2.7/site-packages/numpy/core/numeric.py:235: RuntimeWarning: Item size computed from the PEP 3118 buffer format string does not match the actual item size.
  return array(a, dtype, copy=False, order=order)
>>> print type(nsa)            # output array has the correct type
<type 'numpy.ndarray'>
>>> print "nsa=", nsa
nsa= <myarray.P_NpArray object at 0x7f30f2791c58>
>>> print "nsa.data=", nsa.data
nsa.data= Xy0
>>> print "nsa.itemsize=", nsa.itemsize
nsa.itemsize= 8
>>> print "nsa.size=", nsa.size    # output array has a WRONG size
nsa.size= 1
>>> print "nsa.shape=", nsa.shape  # output array has a WRONG shape
nsa.shape= ()
>>> np.frombuffer(nsa.data, np.float64)   # I can't get a proper read of the data buffer
[ 6.90941928e-310]


I looked around for information on the RuntimeWarning and found that it is probably not relevant; see "PEP 3118 warning when using ctypes array as numpy array" at http://bugs.python.org/issue10746 and http://bugs.python.org/issue10744. What do you think?

Clearly the buffer shape and size are not being transmitted properly. So what am I missing? Is my buffer protocol defined correctly?

Many thanks for the hints.


