stonebig34 | 8 Nov 11:14 2014

A small update of "64Bit Cython Extensions On Windows" for MinGW?

Hi,

With the WinPython 64-bit version, I don't encounter any "big" problems (none that I notice, that is) using https://github.com/numpy/numpy/wiki/Mingw-static-toolchain
(which points to https://bitbucket.org/carlkl/mingw-w64-for-python/downloads/mingw64static-2014-07.7z)

Of course, there is ONE "hack" to apply to the original Python script:

Find and replace in "..\Lib\distutils\cygwinccompiler.py"
this
"-O -W"
with this
"-O -DMS_WIN64 -W"

==> Shouldn't you soften a little the rather absolute "never use the 64-bit MinGW compiler" message on the official wiki page
"https://github.com/cython/cython/wiki/64BitCythonExtensionsOnWindows" ?



Andre Ribeiro | 7 Nov 22:23 2014

numpy array float64

Hi

I have just started using Cython and I ran into an issue with numpy arrays. I'm trying to get a simple sum function working:

import numpy as np
cimport numpy as np

def myfunc(np.ndarray[np.float64_t, ndim=1] A):
    cdef Py_ssize_t i
    cdef double s
    for i in range(A.shape[0]):
        s += A[i]
    return s


a = np.array([1.0,2.0,3.0])

#print a

print myfunc(a)

If I import the corresponding module, I get the value 9. If I change the np.array to dtype=int, I get the correct value of 6. Also, if I print the float64 array before the loop, I end up with the correct sum of 6. I really don't know what's going on here.
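
One thing I'm now wondering about (just a guess) is whether s needs an explicit starting value -- as far as I understand, a bare "cdef double s" is uninitialised C storage:

import numpy as np
cimport numpy as np

def myfunc(np.ndarray[np.float64_t, ndim=1] A):
    cdef Py_ssize_t i
    cdef double s = 0.0  # explicit initial value instead of bare "cdef double s"
    for i in range(A.shape[0]):
        s += A[i]
    return s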

I'd appreciate any help.

Thanks
Andre.

Martin Scherer | 7 Nov 16:31 2014

Cython.Build.cythonize returns empty list

Hi,

cythonize() returns an empty module list. The source tarball already
contains pre-cythonized .c files.

This is the code snippet:

#### begin snippet

import os
import warnings

from setuptools import setup, Extension

def extensions():
    USE_CYTHON = False
    try:
        from Cython.Build import cythonize
        USE_CYTHON = True
        ext = '.pyx'

    except ImportError:
        warnings.warn('cython not found. Using precythonized files')
        ext = '.c'

    e = Extension('pyemma.msm.estimation.dense.mle_trev_given_pi',
                  sources=['pyemma/msm/estimation/dense/mle_trev_given_pi' + ext,
                           'pyemma/msm/estimation/dense/_mle_trev_given_pi.c'],
                  include_dirs=[os.path.abspath('pyemma/msm/estimation/dense')],
                  extra_compile_args=[])

    modules = [e]

    if USE_CYTHON:
        modules = cythonize(modules)
    else:
        pass  # nothing to do, use the preconverted .c files
    return modules

ext = extensions()
assert len(ext) == 1  # this fails, because ext == [] !

setup(ext_modules=ext)

#### end of snippet

I'm using setuptools 7.0 and Cython 0.21.1.

All the examples from Cython use distutils, but that does not seem to be the
problem, since I also tried the distutils.Extension class and it leads to the
same problem.

Best,
Martin


Zahra Sheikh | 6 Nov 14:22 2014

ValueError: ndarray is not C-contiguous in cython

Hi all,
I have written the following function in Cython to estimate the log-likelihood:

    @cython.boundscheck(False)
    @cython.wraparound(False)
    def likelihood(double m,
                   double c,
                   np.ndarray[np.double_t, ndim=1, mode='c'] r_mpc not None,
                   np.ndarray[np.double_t, ndim=1, mode='c'] gtan not None,
                   np.ndarray[np.double_t, ndim=1, mode='c'] gcrs not None,
                   np.ndarray[np.double_t, ndim=1, mode='c'] shear_err not None,
                   np.ndarray[np.double_t, ndim=1, mode='c'] beta not None,
                   double rho_c,
                   np.ndarray[np.double_t, ndim=1, mode='c'] rho_c_sigma not None):
        r_mpc  = np.ascontiguousarray(r_mpc, dtype=np.double)
        gtan     = np.ascontiguousarray(gtan, dtype=np.double)
        gcrs     = np.ascontiguousarray(gcrs, dtype=np.double)
        shear_err= np.ascontiguousarray(shear_err, dtype=np.double)
        beta     = np.ascontiguousarray(beta, dtype=np.double)
        rho_c_sigma = np.ascontiguousarray(rho_c_sigma, dtype=np.double)

        cdef double rscale = rscaleConstM(m, c, rho_c, 200)
   
        cdef Py_ssize_t ngals = r_mpc.shape[0]
   
        cdef np.ndarray[DTYPE_T, ndim=1, mode='c'] gamma_inf = Sh(r_mpc, c, rscale, rho_c_sigma)
        cdef np.ndarray[DTYPE_T, ndim=1, mode='c'] kappa_inf = Kap(r_mpc, c, rscale, rho_c_sigma)
 
   
        cdef double delta = 0.
        cdef double modelg = 0.
        cdef double modsig = 0.
   
        cdef Py_ssize_t i
        cdef DTYPE_T logProb = 0.
   
           
        #calculate logprob
        for i from ngals > i >= 0:
          
            modelg = (beta[i]*gamma_inf[i] / (1 - beta[i]*kappa_inf[i]))
   
            delta = gtan[i] - modelg
            
            modsig = shear_err[i]
   
            logProb = logProb -.5*(delta/modsig)**2  - logsqrt2pi - log(modsig)
   
           
        return logProb

but when I run the compiled version of this function, I get the following error message:

      File "Tools.pyx", line 3, in Tools.likelihood
        def likelihood(double m,
    ValueError: ndarray is not C-contiguous

I even added np.ascontiguousarray(arr, dtype=np.double) inside the function to get rid of this error message, but it didn't work.
I can't quite understand why this problem occurs. I'd appreciate any useful tips.
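
The only other thing I can think of is doing the conversion on the caller side, before the typed arguments are unpacked -- a sketch with made-up dummy inputs, assuming the compiled module is Tools:

    import numpy as np
    from Tools import likelihood

    n = 100
    # dummy inputs purely for illustration; the real ones come from the data
    r_mpc, gtan, gcrs, shear_err, beta, rho_c_sigma = (
        np.ascontiguousarray(np.random.rand(n), dtype=np.double)
        for _ in range(6))

    # mode='c' arguments are checked when the function is called, so the
    # conversion has to happen out here rather than inside likelihood()
    logp = likelihood(1e14, 4.0, r_mpc, gtan, gcrs, shear_err, beta,
                      1.0, rho_c_sigma)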

Cheers,
Zahra

Ghislain Vaillant | 5 Nov 16:37 2014

Safe solution for wrapping a C-struct with optionally malloc'd attributes

Hi everyone,

I am working on providing bindings for a library built around a specific data structure (with associated computational routines) that contains pointers to internally or externally malloc'd memory. The decision between the two is made at init time using a flag parameter.

A simpler but still relevant version of this data structure would be:

typedef struct plan {
    double *data;
    unsigned int flags;
} plan;

where the structure is properly initialized by:

plan *this_plan = (plan*) malloc(sizeof(plan));
initialize_plan(this_plan, MALLOC_DATA);
/* do stuff */
finalize_plan(this_plan);  /* would free this_plan->data */

for internally malloc'd memory, or

plan *this_plan = (plan*) malloc(sizeof(plan));
initialize_plan(this_plan, NO_MALLOC_DATA);
this_plan->data = external_data_ptr;
/* do stuff */
finalize_plan(this_plan);  /* would leave this_plan->data alone */

to use externally allocated data.


I wonder what the safest way is to wrap such a data structure in Cython and expose its data attribute as a NumPy array-like object, while keeping the ability to use external arrays for computation without an explicit copy. So far, I can see 3 options:

1) Force the structure to always use internally malloc'd memory and expose the data via an ndarray object created with PyArray_New. You lose the ability to efficiently switch between data sources without an explicit copy, though. That's the solution I went for, because it seemed the easiest.

2) Force the structure to always use externally malloc'd memory and connect an external ndarray to the internal data pointer via PyArray_DATA. This is, however, more dangerous, as you need to ensure the array has the right size and dtype, is C-contiguous, is writeable...

3) A mixture of both? I.e., leave the possibility to use either internal or external memory and adapt the properties of the ndarray object accordingly?


Are you guys aware of any examples of an implementation that addresses this problem?
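
For reference, here is a rough sketch of what option 1 looks like on my side (all names and the explicit size parameter are assumptions -- the real initialize_plan presumably takes the problem dimensions):

import numpy as np
cimport numpy as np
from libc.stdlib cimport malloc, free

np.import_array()

cdef extern from "plan.h":  # hypothetical header for the library above
    ctypedef struct plan:
        double *data
        unsigned int flags
    unsigned int MALLOC_DATA
    void initialize_plan(plan *p, unsigned int flags)
    void finalize_plan(plan *p)

cdef class Plan:
    cdef plan *_plan
    cdef Py_ssize_t _n  # number of doubles; assumed known at construction

    def __cinit__(self, Py_ssize_t n):
        self._n = n
        self._plan = <plan*> malloc(sizeof(plan))
        if self._plan == NULL:
            raise MemoryError()
        initialize_plan(self._plan, MALLOC_DATA)  # library mallocs .data

    def __dealloc__(self):
        if self._plan != NULL:
            finalize_plan(self._plan)  # frees self._plan.data
            free(self._plan)

    property data:
        def __get__(self):
            # zero-copy ndarray view over the internal buffer; setting the
            # base keeps this Plan alive for as long as the array is
            cdef np.npy_intp dims = self._n
            cdef np.ndarray arr = np.PyArray_SimpleNewFromData(
                1, &dims, np.NPY_DOUBLE, <void*> self._plan.data)
            np.set_array_base(arr, self)
            return arr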

Cheers,
Ghis

Adrian Price-Whelan | 4 Nov 16:54 2014

Questions about specific use case involving many numerical integrations

Hi all --

(Apologies if this is not the place to ask questions like this -- please let me know if this is the case!)

I'm a grad student in astrophysics, and in working on some machinery for my own research I've hit some stumbling blocks that I'm hoping someone might be able to help with. 

Some background about me: I've been programming in Python for ~8 years and am very comfortable with numerical methods. At one point I was also fairly comfortable in C, but it has been a while... I've tried my best to learn Cython from the documentation, but it is somewhat lacking in details about more complicated use cases.

The core of the current project I'm working on relies on numerically integrating 1000's of orbits of test particles for 1000's of timesteps, buried within a likelihood function -- so I need this to be fast so I can do some optimization or sampling with this likelihood. The bulk of the time is therefore spent calling some function that computes the acceleration due to a gravitational potential at a given position. I also need it to be easy to swap out the Gravitational field (potential) in which these orbits are integrated. 

The code has the basic structure:

- loop over optimization (e.g., MCMC)
    - loop over number of particles (~1000's)
        - loop over number of timesteps (~1000's)

Let me describe the way I'm doing this now. I have Python classes representing the gravitational potentials that implement some convenient high-level things and support array operations; they work by creating and storing a corresponding Cython (cdef'd) class containing functions that evaluate the quantities at a single position using memoryviews. I've written it this way because 1) I wanted to minimize the number of array allocations, so my solution is to pass the entire array of positions along with an index specifying which row to take, and 2) I've thought about using OpenMP or some other parallelization, and the only part of the loops I can parallelize is integrating each orbit separately.

The classes look like this:

class PointMassPotential(CPotential, CartesianPotential):
    """ Gravitational potential for a point mass. """
    def __init__(self, m):
        # ... create the C-instance and store
        self.c_instance = _PointMassPotential(m)
        # etc.

cdef class _PointMassPotential(_CPotential):

    # here need to cdef all the attributes
    cdef public double G, GM
    cdef public double m, c

    def __init__(self, double m):

        # gravitational constant in crazy units
        self.G = 4.499753324353495e-12

        # potential parameters
        self.m = m

        self.GM = self.G * self.m

    cdef public inline void _acceleration(self, double[:,::1] r, 
                                          double[:,::1] grad, int k) nogil:
        cdef double R, fac
        R = sqrt(r[k,0]*r[k,0] + r[k,1]*r[k,1] + r[k,2]*r[k,2])
        fac = self.GM / (R*R*R)

        grad[k,0] += -fac*r[k,0]
        grad[k,1] += -fac*r[k,1]
        grad[k,2] += -fac*r[k,2]

To do the integration, I just use my own Leapfrog integration, the meat of which looks like this:

cdef inline void leapfrog_step(double[:,::1] r, double[:,::1] v, 
                               double[:,::1] v_12, double[:,::1] acc, 
                               int k, double dt, _CPotential potential) nogil:

    # increment position by full-step
    r[k,0] = r[k,0] + dt*v_12[k,0]
    r[k,1] = r[k,1] + dt*v_12[k,1]
    r[k,2] = r[k,2] + dt*v_12[k,2]

    # zero out the acceleration container
    acc[k,0] = 0.
    acc[k,1] = 0.
    acc[k,2] = 0.

    potential._acceleration(r, acc, k)

    # increment synced velocity by full-step
    v[k,0] = v[k,0] + dt*acc[k,0]
    v[k,1] = v[k,1] + dt*acc[k,1]
    v[k,2] = v[k,2] + dt*acc[k,2]

    # increment leapfrog velocity by full-step
    v_12[k,0] = v_12[k,0] + dt*acc[k,0]
    v_12[k,1] = v_12[k,1] + dt*acc[k,1]
    v_12[k,2] = v_12[k,2] + dt*acc[k,2]

And finally, the likelihood function is just loops:

cpdef likelihood_func(...):
        
    # define stuff...

    with nogil:
        for k in range(nparticles):
            for i in range(nsteps):
                leapfrog_step(x, v, v_12, acc, k, dt, potential)

            # compute other stuff using integrated positions ...


My question is whether this is a sensible thing to do -- using memoryviews this way -- or whether I'm missing some obvious optimization points? I'm finding that this is only a factor of ~10 faster than a pure-Python implementation, and I'm a little surprised that it isn't many orders of magnitude faster. But maybe my intuition for this is just way off... Happy to provide more detail if anyone feels up to helping out.
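
For concreteness, the prange variant I've been imagining for point 2) would be something like this (an untested sketch -- it assumes the extension gets built with OpenMP flags):

from cython.parallel import prange

cpdef likelihood_func_parallel(double[:,::1] x, double[:,::1] v,
                               double[:,::1] v_12, double[:,::1] acc,
                               int nsteps, double dt, _CPotential potential):
    cdef int i, k
    cdef int nparticles = x.shape[0]
    # each orbit is independent, so parallelize the outer particle loop
    for k in prange(nparticles, nogil=True, schedule='static'):
        for i in range(nsteps):
            leapfrog_step(x, v, v_12, acc, k, dt, potential)
    # compute other stuff using integrated positions ...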

Thanks!
- Adrian

Jonathan Hanke | 4 Nov 08:50 2014

GMP "Symbol not found" Error Question

Hi,

I'm trying to use cython with GMP 6.0.0 to wrap some C++ libraries, and I'm getting the error

In [2]: from my_test2 import PyMatrix_mpz
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-2-b378b34db57e> in <module>()
----> 1 from my_test2 import PyMatrix_mpz

ImportError: dlopen(./my_test2.so, 2): Symbol not found: __ZlsRSoPK12__mpz_struct
  Referenced from: ./my_test2.so
  Expected in: flat namespace
 in ./my_test2.so

when I try to import the cythonized PyMatrix_mpz class. The error appeared when I added some C++ routines that make explicit calls to the C-level GMP routines, but I'm not sure why this should be a problem, since I include and link against both the gmp and gmpxx libraries, which are installed under /usr/local. (Demangled, the missing symbol is operator<<(std::ostream&, __mpz_struct const*), i.e. GMP's C++ stream-output operator, which lives in libgmpxx.) I'm using GMP 6.0.0 on a MacBook Air, building with "make clean; make build", and have files that look like:

my_test2.pyx:
===========
# distutils: language = c++
# distutils: sources = ['misc_utilities.cpp', 'test2.cpp', 'call_operators.cpp', 'useful_tests.cpp', 'local_normal.cpp']
# distutils: include_dirs = ['/usr/local/include']
# distutils: libraries = ['gmp', 'gmpxx', 'm']
# distutils: library_dirs = ['/usr/local/lib']
....

setup.py:
=======
from distutils.core import setup
from Cython.Build import cythonize

setup(
    name = "QFLIBapp",
    ext_modules = cythonize('*.pyx'),
)

Makefile:
=======
build:
        python setup.py build_ext --inplace

clean:
        rm -rf build/
        rm -f *.so


Any comments are appreciated!  Thanks,

-Jon

Daniele Nicolodi | 2 Nov 17:07 2014

Allocating c++ object on the stack

Hello,

Is there a way in Cython to allocate on the stack a C++ object that has
a constructor with arguments? Example:

cdef extern from "example.h":
    cppclass Foo:
        Foo()
        Foo(int v)

# this works fine
cdef Foo f1
# this results in an error
cdef Foo f2(25)
# those also work fine
cdef Foo *f3 = new Foo(1)
cdef Foo *f4 = new Foo(1)

the error is:

example.pyx:14:12: Expected an identifier, found 'INT'
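
For what it's worth, the one workaround I can think of -- assuming Foo is copy-assignable, since Cython emits a default construction followed by an assignment -- is to assign from a temporary:

# sketch: default-construct on the stack, then assign from a temporary
# built with the int constructor; needs Foo() plus a usable operator=
cdef Foo f2 = Foo(25)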

Thank you!

Cheers,
Daniele


Ivan Smirnov | 31 Oct 13:39 2014

[cython-users] Regression in v0.21.1: ‘__pyx_convert_PyStr_string_to_py_std__string’ was not declared in this scope

It looks like "c_string_type = str" interferes with the new string conversion logic in v0.21.1; see the minimal example below:

#cython: c_string_type = str
#cython: c_string_encoding = utf-8

from libcpp.string cimport string

cdef string s = 'foo'
print '%s' % s

$ cython --cplus test.pyx
$ g++ -I/home/smirnov/anaconda/include/python2.7 -c test.cpp

test.cpp: In function ‘void inittest()’:
test.cpp:964: error: ‘__pyx_convert_PyStr_string_to_py_std__string’ was not declared in this scope

This compiles (and works) just fine in v0.21.0; in v0.21.1 it only compiles if one removes the c_string_* directives.

Jeroen Demeyer | 30 Oct 11:10 2014

PyInt_FromSize_t is missing from int.pxd

This is missing from the cpython/int.pxd file:

object PyInt_FromSize_t(size_t ival)

and also:

int PyInt_ClearFreeList()
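
In the meantime, a local workaround sketch is to declare them directly (these are the Python 2 C API signatures):

cdef extern from "Python.h":
    object PyInt_FromSize_t(size_t ival)
    int PyInt_ClearFreeList()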


Marc Schlaich | 30 Oct 08:08 2014

Cannot overload in a custom cppclass

Hello,

I want to write a custom cppclass with overloaded methods, but this fails with "Function signature does not match previous declaration". A simple example:


    cdef cppclass OverloadTest:

        float area():
            return 0.0

        float area(int i):
            return 0.5 * i


Is this a bug or just not supported?

Regards, Marc

