Jeroen Demeyer | 25 Mar 17:17 2015

globals() in Cython

In the release notes of Cython 0.15, there is

* globals() now returns a read-only dict of the Cython module's globals, 
rather than the globals of the first non-Cython module in the stack

However, it seems that this is not what actually happens in Cython 0.22. 
Did this change get reverted (intentionally or not)?
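
A minimal way to check (mymod is just a made-up module name for illustration):

# mymod.pyx -- hypothetical module, only to illustrate the question
CONSTANT = 42

def get_globals():
    # With the 0.15 behaviour this should be mymod's own (read-only) module
    # dict and therefore contain CONSTANT; with the old behaviour it is the
    # globals of the calling Python frame, which would not.
    return globals()

From plain Python, checking 'CONSTANT' in mymod.get_globals() distinguishes the two behaviours.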

Jeroen.


AMAN singh | 25 Mar 06:18 2015

How to deal with Multiple dtype in cython

Hi developers,

I am working on porting the scipy.ndimage module from C to Cython. I need to maintain the same speed after the rewrite, and the module should also work for all NumPy dtypes. Since supporting multiple data types without losing performance is the hard part, can you suggest some techniques so that both requirements (speed and generic behaviour) are fulfilled?
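
For context, the mechanism I know of in Cython for this is fused types; here is a minimal sketch with made-up names (this is not actual ndimage code):

cimport cython

ctypedef fused pixel_t:
    unsigned char
    int
    float
    double

@cython.boundscheck(False)
@cython.wraparound(False)
def pixel_sum(pixel_t[:, ::1] img):
    # Cython generates one specialized C loop per dtype in the fused list,
    # so there is no per-element dispatch at run time.
    cdef double acc = 0.0
    cdef Py_ssize_t i, j
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            acc += img[i, j]
    return acc

Each dtype in the fused list gets its own specialized C version of the function, so the inner loop should stay as fast as a single-type version.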

Thanks in advance.

Regards, 
Aman Singh

Curtis M. Faith | 24 Mar 21:35 2015

Inconsistent requirement for namespace in .pxd files causing compile errors

I'm trying to figure out why these definitions in a header file:

namespace opencog {


void initialize_opencog(AtomSpace* atomSpace, const char* configFile = NULL);
void finalize_opencog();
void configuration_load(const char* configFile);


} // namespace opencog

which, when declared in a .pxd like this:

from opencog.atomspace cimport cAtomSpace

cdef extern from "opencog/cython/opencog/Utilities.h" namespace "opencog":
    # C++:
    #
    #   void initialize_opencog(AtomSpace* atomSpace, const char* configFile = NULL);
    #   void finalize_opencog();
    #   void configuration_load(const char* configFile);
    #
    cdef void c_initialize_opencog "initialize_opencog" (cAtomSpace*, char*)
    cdef void c_finalize_opencog "finalize_opencog" ()
    cdef void c_configuration_load "configuration_load" (char*)

and then used in a .pyx file like:

def finalize_opencog():
    c_finalize_opencog()

give the following errors for each of the function names when compiling the Cython-generated utilities.cpp file:

[ 29%] Building CXX object opencog/cython/opencog/CMakeFiles/utilities_cython.dir/utilities.cpp.o

/home/opencog/src/opencog/build/opencog/cython/opencog/utilities.cpp: In function ‘PyObject* __pyx_pf_7opencog_9utilities_2finalize_opencog(PyObject*)’:
/home/opencog/src/opencog/build/opencog/cython/opencog/utilities.cpp:987:20: error: ‘finalize_opencog’ was not declared in this scope
   finalize_opencog();
                    ^
/home/opencog/src/opencog/build/opencog/cython/opencog/utilities.cpp:987:20: note: suggested alternative:
In file included from /home/opencog/src/opencog/build/opencog/cython/opencog/utilities.cpp:364:0:
/home/opencog/src/opencog/opencog/cython/opencog/Utilities.h:30:6: note:   opencog::finalize_opencog
 void finalize_opencog();

If I prepend "opencog::" like:

cdef void c_finalize_opencog "opencog::finalize_opencog" ()

then everything works fine. But I thought that the namespace "opencog" was supposed to take care of that already.

In another .pxd file, I have the following and it works just fine:

cdef extern from "opencog/cython/opencog/BindlinkStub.h" namespace "opencog":
    # C++:
    #
    #   Handle stub_bindlink(AtomSpace*, Handle);
    #
    cdef cHandle c_stub_bindlink "stub_bindlink" (cAtomSpace*, cHandle)

and I get no errors, and I don't have to prepend "opencog::" to the function names. The Utilities.h declarations are the only ones that generate errors when the prefix is left out.

Any idea what the difference is between the two usages above?
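
One alternative I can think of (sketched here, untested against this codebase; utilities_decl.pxd is a made-up file name) would be to drop the cname strings entirely, so that the namespace clause alone supplies the C++ qualification, and do the renaming at cimport time instead:

# utilities_decl.pxd (hypothetical)
from opencog.atomspace cimport cAtomSpace

cdef extern from "opencog/cython/opencog/Utilities.h" namespace "opencog":
    void initialize_opencog(cAtomSpace* atomspace, char* configFile)
    void finalize_opencog()
    void configuration_load(char* configFile)

and then in the .pyx:

from utilities_decl cimport finalize_opencog as c_finalize_opencog

def finalize_opencog():
    c_finalize_opencog()

But I would still like to understand why the explicit cnames behave differently in the two files.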

Curtis M. Faith | 24 Mar 17:27 2015

Python calling Cython calling C++ calling embedded Python

I'm working on improving the Python bindings to the OpenCog framework. The framework already supports calling into callbacks that are Python functions, using the embedded Python C API.

It also supports Python bindings implemented via Cython for running python scripts that manipulate OpenCog objects. However, callbacks into Python functions from within OpenCog are not currently working properly because the internal interpreter needs to be initialized with some OpenCog specific objects first.

So I wrote Cython wrappers for the OpenCog-specific Python interpreter initialization, and it appears that the C API is operating on the same process and interpreter state as the Python binary (/usr/bin/python on Ubuntu) that runs the scripts at the outer level. Presumably it is doing this because both link against the same shared library (libpython2.7.so)?

So for example, when I ran a script called test.py via:

/usr/bin/python test.py

where test.py contains:

from opencog.atomspace import AtomSpace
from opencog.utilities import initialize_opencog, finalize_opencog

atomspace = AtomSpace()

print
print "initialize_opencog"
initialize_opencog(atomspace)

print "finalize_opencog"
finalize_opencog()

print "del atomspace"
del atomspace

print "Test complete"

The script crashed after the return from finalize_opencog(), because finalize_opencog() calls Py_Finalize(), which is not a good idea if it tears down the interpreter state of the /usr/bin/python executable that is still running.

I fixed this crash by checking whether Python is already initialized on entry and storing that fact, so that Py_Finalize() is skipped when cleaning up OpenCog objects in that case.
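
Roughly like this (a simplified sketch of the guard, not the actual OpenCog wrapper code; the function names are illustrative):

cdef extern from "Python.h":
    int Py_IsInitialized()
    void Py_Initialize()
    void Py_Finalize()

_owns_interpreter = False

def initialize_python():
    # Only start the interpreter if nobody (e.g. /usr/bin/python) already did.
    global _owns_interpreter
    if not Py_IsInitialized():
        Py_Initialize()
        _owns_interpreter = True

def finalize_python():
    # Only tear the interpreter down if we were the ones who started it.
    global _owns_interpreter
    if _owns_interpreter:
        Py_Finalize()
        _owns_interpreter = False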

I haven't read anything about this particular case, where Python calls C++ via Cython, and the C++ code (through Python C API calls) in turn calls back into Python code.

Is this really what is happening? Can I reliably manipulate the internals of the Python interpreter that is running the outer-level script, using Python's C API from within C++ functions called via Cython? For instance, to look up a function that is declared in the __main__ module?

If so, how can I make sure that the Python interpreters are the same? If the Python binary were statically linked, or linked against a different version of Python, this approach would probably not work, so I would like to detect that case and warn the user before I start relying on this behaviour under normal circumstances. For example, I want to detect when someone runs from a Python 3 executable while OpenCog is linked to libpython2.7.so, or from a Python 2.7.2 linked against libpython2.7.2.so in one location while OpenCog links against libpython2.7.so in another.
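
The best heuristic I have come up with so far (a sketch only, and probably not covering the statically linked case) is to compare what the libpython the extension actually calls into reports at run time with what the outer interpreter reports:

import sys

cdef extern from "Python.h":
    const char* Py_GetVersion()

def check_same_interpreter():
    # Py_GetVersion() describes the libpython this extension calls into;
    # sys.version describes the interpreter running the outer script.
    cdef bytes linked = Py_GetVersion()
    linked_version = linked.split()[0]
    running_version = sys.version.split()[0]
    if linked_version != running_version:
        print("warning: extension uses Python %s, outer interpreter is %s"
              % (linked_version, running_version))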

- Curtis

Antony Lee | 24 Mar 07:43 2015

Optimizing iteration on zip objects

Hi,
I believe that loops such as

for x, y, z in zip(xs, ys, zs): ...

(where the number of arguments and unpacking variables are directly given in that line) are currently not optimized at all (as in, a Python zip object is created, and tuples are built and unpacked). Having looked at the Cython sources, I see an optimization for enumerate (_transform_enumerate_iteration) that (I believe?) rewrites

for x in enumerate(y): ...

into

i = 0
for x' in y:
    x = (i, x')
    i += 1
    ...

although it is not clear to me whether there is actual tuple creation and unpacking going on there. I would be happy to try to write a similar optimizer for zip if there is interest in it, but I would need some help (such as a short walkthrough of the "enumerate" optimizer).
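
For comparison, the usual workaround that avoids the zip object entirely is to index the sequences manually; a sketch with made-up names, assuming typed memoryviews:

def dot3(double[:] xs, double[:] ys, double[:] zs):
    # Indexing directly lets Cython emit a plain C loop: no zip object and
    # no per-iteration tuple packing/unpacking.
    cdef Py_ssize_t i
    cdef Py_ssize_t n = min(xs.shape[0], ys.shape[0], zs.shape[0])
    cdef double total = 0.0
    for i in range(n):
        total += xs[i] * ys[i] * zs[i]
    return total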

Antony

afylot | 23 Mar 18:17 2015

installing cython with python3.2

I was installing cython 0.22 from the tarball (python setup.py install --prefix=my/prefix), compiling with gcc.
I got an error:

gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security -fPIC -I/usr/include/python3.2mu -c /home/afylot/downloads/Cython-0.22/Cython/Plex/Scanners.c -o build/temp.linux-x86_64-3.2/home/afylot/downloads/Cython-0.22/Cython/Plex/Scanners.o

/home/afylot/downloads/Cython-0.22/Cython/Plex/Scanners.c:308:1: error: unknown type name ‘Py_UNICODE’

and many other errors that all seem related to Py_UNICODE.

/usr/include/python3.2mu seems to be the directory where Py_UNICODE should be defined. I looked into it with fgrep, and there is no definition of Py_UNICODE; in fact the directory contains only a single file, pyconfig.h. I have already installed Cython 0.22 with Python 2.7 without any problem, and the corresponding Python 2.7 include directory does define Py_UNICODE and contains many other .h files besides pyconfig.h.

I don't understand whether this is a bug in Cython or whether my Python 3.2 installation is missing something. I did not install Python 3.2 myself, so what should I ask the system administrator to fix?

pije | 23 Mar 08:46 2015

Buffer protocol definition

Hi cython users,

I'm trying to implement the buffer protocol in Cython. FYI, I'm using Cython 0.19 and Python 2.7.

The short story: I declare a new class that implements the two necessary methods, __getbuffer__ and __releasebuffer__. It compiles fine, but I can't get the data back out of a Python object through the buffer protocol.

Here is the cython code defining my buffer protocol:

cimport numpy as CNY

# Cython buffer protocol implementation for my array class
cdef class P_NpArray:
    cdef CNY.ndarray npy_ar

    def __cinit__(self, inpy_ar):
        self.npy_ar = inpy_ar

    def __getbuffer__(self, Py_buffer *buffer, int flags):
        cdef Py_ssize_t ashape[2]
        ashape[0] = self.npy_ar.shape[0]
        ashape[1] = self.npy_ar.shape[1]
        cdef Py_ssize_t astrides[2]
        astrides[0] = self.npy_ar.strides[0]
        astrides[1] = self.npy_ar.strides[1]
        buffer.buf = <void *> self.npy_ar.data
        buffer.format = 'f'
        buffer.internal = NULL
        buffer.itemsize = self.npy_ar.itemsize
        buffer.len = self.npy_ar.size * self.npy_ar.itemsize
        buffer.ndim = self.npy_ar.ndim
        buffer.obj = self
        buffer.readonly = 0
        buffer.shape = ashape
        buffer.strides = astrides
        buffer.suboffsets = NULL

    def __releasebuffer__(self, Py_buffer *buffer):
        pass

The above code compiles fine, but I can't retrieve the buffer data properly.
See the following test, where I:

  • create a numpy array,
  • load it into my buffer-protocol class,
  • try to convert it back to a numpy array (just to showcase my problem):
>>> import myarray
>>> import numpy as np
>>> ar = np.ones((2,4))         # create a numpy array
>>> ns = myarray.P_NpArray(ar)  # declare numpy array as a new numpy-style array
>>> print ns
<myarray.P_NpArray object at 0x7f30f2791c58>
>>> nsa = np.asarray(ns)        # Convert back to numpy array. Buffer protocol called here.
/home/tools/local/x86z/lib/python2.7/site-packages/numpy/core/numeric.py:235: RuntimeWarning: Item size computed from the PEP 3118 buffer format string does not match the actual item size.
  return array(a, dtype, copy=False, order=order)
>>> print type(nsa)             # Output array has the correct type
<type 'numpy.ndarray'>
>>> print "nsa=", nsa
nsa= <myarray.P_NpArray object at 0x7f30f2791c58>
>>> print "nsa.data=", nsa.data
nsa.data= Xy0
>>> print "nsa.itemsize=", nsa.itemsize
nsa.itemsize= 8
>>> print "nsa.size=", nsa.size    # Output array has a WRONG size
nsa.size= 1
>>> print "nsa.shape=", nsa.shape  # Output array has a WRONG shape
nsa.shape= ()
>>> np.frombuffer(nsa.data, np.float64)  # I can't get a proper read of the data buffer
[ 6.90941928e-310]


I looked around for the RuntimeWarning and found that it is probably not relevant; see "PEP 3118 warning when using ctypes array as numpy array", http://bugs.python.org/issue10746 and http://bugs.python.org/issue10744. What do you think?

Obviously the buffer shape and size are not properly transmitted. So what am I missing? Is my buffer protocol correctly defined?
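
One guess, which I have not verified: ashape/astrides are locals of __getbuffer__, so the pointers stored in the Py_buffer go stale as soon as the method returns, and format = 'f' declares a 4-byte float while np.ones() produces float64 with itemsize 8, which is what the RuntimeWarning complains about. A sketch of what a fixed version might look like (storing the arrays on the instance and matching the format to the dtype):

cimport numpy as CNY

cdef class P_NpArray:
    cdef CNY.ndarray npy_ar
    cdef Py_ssize_t ashape[2]
    cdef Py_ssize_t astrides[2]

    def __cinit__(self, inpy_ar):
        self.npy_ar = inpy_ar

    def __getbuffer__(self, Py_buffer *buffer, int flags):
        # Shape/strides live on the instance, so the pointers handed out
        # below stay valid for as long as the buffer is exported.
        self.ashape[0] = self.npy_ar.shape[0]
        self.ashape[1] = self.npy_ar.shape[1]
        self.astrides[0] = self.npy_ar.strides[0]
        self.astrides[1] = self.npy_ar.strides[1]
        buffer.buf = <void *> self.npy_ar.data
        buffer.format = 'd'              # float64, matching itemsize 8
        buffer.internal = NULL
        buffer.itemsize = self.npy_ar.itemsize
        buffer.len = self.npy_ar.size * self.npy_ar.itemsize
        buffer.ndim = self.npy_ar.ndim
        buffer.obj = self
        buffer.readonly = 0
        buffer.shape = self.ashape
        buffer.strides = self.astrides
        buffer.suboffsets = NULL

    def __releasebuffer__(self, Py_buffer *buffer):
        pass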

Many thanks for the hints.



gary thompson | 22 Mar 23:10 2015

public api in a sub package

Dear All

I am having a problem exporting a public api with the right package-loading code.

Specifically, I have something like the following in setup.py:


ext_modules = [Extension("yy.xx",  ['yy.xx.pyx'],
                         define_macros = [('CPLUSPLUS', '1') ,
                                          ('USE_CDS_NAMESPACE', '1') ,
                                          ('OLD_ENSEMBLE_INTERFACE',old_ensemble_interface)],

                         language="c++",
                         extra_compile_args=extra_compile_args,
                         extra_link_args=extra_link_args,
                         include_dirs=include_dirs)]

setup(
    package_dir = {'yy':''},
    ext_modules=cythonize(ext_modules[0:1]),
    cmdclass = {'build_ext': build_ext},
)


and yy.xx.pyx has:

cdef public api double zz(object self, int i, ...):

..... some very interesting cython code

Now when I do the build, everything Cython-related goes in the right place and looks good (i.e. we have a yy directory with an xx module in it).

However, I now expect to have a file called

yy.xx_api.h, but I am just getting a file called xx_api.h; furthermore, inside it I get

import_xx() 

not 

import_yy__xx()

could someone tell me what I am doing wrong and how to solve it?
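
For reference, the layout I would have expected to produce the yy.xx naming is something like this (a sketch, with illustrative paths; I may well be misunderstanding how cythonize infers the package):

from distutils.core import setup, Extension
from Cython.Build import cythonize

# yy/__init__.py and yy/xx.pyx on disk, so that cythonize can infer the
# full dotted module name yy.xx rather than a bare xx.
ext_modules = [Extension("yy.xx", ["yy/xx.pyx"], language="c++")]

setup(
    name="yy",
    packages=["yy"],
    ext_modules=cythonize(ext_modules),
)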

regards

gary
Abhishek Agrawal | 22 Mar 12:40 2015

Optimizing generated file size

Hi
I am a newbie to Cython, working on a project to rewrite a C module in Cython. I want to minimize the size of the generated C files as much as possible. Is there any blog or documentation link that could help me? Also, if you can point me to newer Cython features that make modules run faster, that would be of great help.
Please reply ASAP...!!!

Regards,
Abhishek Agarwal

Abbeville | 22 Mar 12:24 2015

Windows 8.1 + Anaconda 2.1 + Cython 0.22 compile error

Not sure whether this is an Anaconda or a Cython problem; I posted a similar note on the Anaconda list.

Installed Anaconda 2.1 (Python, 64-bit) on Windows 8.1 and created "setup.py" to compile a .pyx as:

import numpy
 
from distutils.core import setup
from Cython.Build import cythonize
 
extensions = cythonize(["my.pyx"])
 
setup(
    ext_modules = extensions, 
    include_dirs=[numpy.get_include()]
    )

The my.pyx includes the imports:

import cython
import numpy
cimport numpy

> python setup.py build_ext --inplace
 
Traceback (most recent call last):
  File "setup.py", line 2, in <module>
    from Cython.Build import cythonize
  File "C:\Anaconda\lib\site-packages\Cython\Build\__init__.py", line 1, in <module>
    from .Dependencies import cythonize
  File "C:\Anaconda\lib\site-packages\Cython\Build\Dependencies.py", line 166, in <module>
    @cython.locals(start=long, end=long)
AttributeError: 'module' object has no attribute 'locals'

What is the problem?

Btw, it didn't work for Anaconda 2.1 32-bit either.

Jérôme Kieffer | 22 Mar 11:54 2015

Export cdef class methods ...

Dear Cython users,

I am trying to share a cdef class defined in one Cython module with another one.
While sharing attributes looks OK, any cdef (or cpdef) method seems to be an issue.

Maybe I am not going about it the right way?

I would like to do (in another cython module):

from ary cimport Ary

cdef:
    Ary ary = Ary(data)
    int x, y, local_max
for x in prange(ary.height, nogil=True):
  for y in range(ary.width):
     local_max = ary.c_local_maxi(x,y)

################################################################################
ary.pxd:

cdef class Ary:
    cdef readonly float[:, :] data
    cdef readonly int width, height

    cdef int c_local_maxi(self, int, int)

################################################################################
ary.pyx:

import numpy

cdef class Ary:
    def __cinit__(self, data not None):
        assert data.ndim == 2
        self.width = data.shape[1]
        self.height = data.shape[0]
        self.data = numpy.ascontiguousarray(data, dtype=numpy.float32)

    cdef int local_maxi(self, int x, int y) nogil:
        cdef:
            float value, current, tmp
            int ix, iy
            int start_x, stop_x, start_y, stop_y
        value = self.data[y, x]
        current = value - 1.0
        while value > current:
            current = value
            start_x = max(0, x - 1)
            stop_x = min(x + 1, self.width - 1)
            start_y = max(0, y - 1)
            stop_y = min(y + 1, self.height - 1)
            for iy in range(start_y, stop_y):
                for ix in range(start_x, stop_x):
                    tmp = self.data[iy, ix]
                    if tmp > value:
                        x, y = ix, iy
                        value = tmp
        return x + self.width * y

-- 
Jérôme Kieffer <google@terre-adelie.org>
