Espen | 11 Nov 20:29 2014

Vector STL versus more traditional interface to C/C++

I hope some of you can give me some pointers. I discovered from this post that it is possible to use an STL vector interface:

http://blog.birving.com/2014/05/passing-numpy-arrays-between-python-and.html

I've been using the more traditional way (defined later in the example), but making a multidimensional interface is not so straightforward. I like STL vectors and would like to use them more exclusively in the future when I need some work done in C++ or when I need to link in code that uses STL vectors. I tried to follow the example above; however, the data does not seem to be returned properly (or assigned) from interface.cpp using the STL vector interface. I suspect it is not as easy as I initially thought, so I put together a small example:

Running python run.py yields:
vector: [0 0 0 0 0 0 0 0 0 0]
traditional: [1 0 0 0 0 0 0 0 0 0]


run.py:
import patchwork
import numpy as np
somearray=np.zeros(10,dtype='intc')
patchwork.python_function_vector(somearray)
print "vector:", somearray
somearray=np.zeros(10,dtype='intc')
patchwork.python_function_traditional(somearray)
print "traditional:", somearray


interface.cpp:
#include <vector>
void cpp_function_vector(std::vector<int> &);
void cpp_function_traditional(int *);

void cpp_function_vector(std::vector<int> &somearray) {
        somearray[0]=1;
}

void cpp_function_traditional(int *somearray) {
        somearray[0]=1;
}


patchwork.pyx:
import cython
import numpy as np
cimport numpy as np
from libcpp.vector cimport vector

cdef extern from "interface.cpp":
        void cpp_function_vector(vector[int] &)
        void cpp_function_traditional(int *)

def python_function_vector(somearray not None):
        cpp_function_vector(somearray)

def python_function_traditional(np.ndarray[int,ndim=1,mode="c"] somearray not None):
        cpp_function_traditional(&somearray[0])
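[Editor's note, a hedged guess rather than a confirmed diagnosis: when a NumPy array is passed where vector[int]& is expected, Cython coerces it by copying the elements into a *temporary* C++ vector, so cpp_function_vector writes into that temporary and the result is thrown away. In pure Python terms the failure mode looks like this:]

```python
import numpy as np

def cpp_function_vector(vec):
    # stands in for the C++ function: it writes into whatever container it received
    vec[0] = 1

somearray = np.zeros(10, dtype='intc')

temp = list(somearray)        # what the vector[int] coercion effectively does: a copy
cpp_function_vector(temp)     # the write lands in the copy...
assert somearray[0] == 0      # ...so the original array is unchanged

cpp_function_vector(somearray)  # passing the array itself (like the int* version)
assert somearray[0] == 1
```

[A workaround consistent with the traditional version is to keep the pointer-based signature, or to copy the vector's contents back into the NumPy array after the call.]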

--

---
You received this message because you are subscribed to the Google Groups "cython-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cython-users+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
sulabh tiwari | 9 Nov 19:31 2014

how to use c code in cython

Hi guys, I am new to Cython and I am trying to use C code in Cython.
example.c


#include <stdlib.h>
#include <stdio.h>
#include <math.h>

double* reverseCoeffs(double* coeffs, int N)
{
    double *revCoeffs = (double*) malloc(7 * sizeof(double));

    if (N < 7)
    {
        coeffs = (double *) realloc(coeffs, 7 * sizeof(double));
        while (N < 7) coeffs[N++] = (double) 0;
    }

    // The first coefficient is the linear part
    revCoeffs[0] = 1 / coeffs[0];
    revCoeffs[1] = -coeffs[1] / pow(coeffs[0], 3);
    revCoeffs[2] = (2 * pow(coeffs[1], 2) - coeffs[0] * coeffs[2]) / pow(coeffs[0], 5);



    free(coeffs);

    return revCoeffs;
}

This works perfectly fine in C, but when I use it in Cython it throws an error.
example.pyx

import numpy as np
cimport numpy as np
cdef extern from "reverseCoeffs.c":
     double* reverseCoeffs(double* coeffs, int N)

def cisr(np.ndarray[np.double_t,ndim=1] x, int N):
    cdef double* x_data = <double *>x.data
    cdef np.ndarray[np.double_t, ndim=1] res
    res = reverseCoeffs(x_data, N)
    return res
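[Editor's note on the error, hedged: `res = reverseCoeffs(x_data, N)` tries to assign a raw double* to a typed ndarray, which Cython cannot convert; the returned buffer first has to be copied into (or wrapped by) an array object. For reference, the coefficient arithmetic itself translates directly to numpy:]

```python
import numpy as np

def reverse_coeffs(coeffs):
    # mirror of the C routine above: zero-pad to length 7,
    # then build the first three reversion coefficients
    c = np.zeros(7)
    n = min(len(coeffs), 7)
    c[:n] = coeffs[:n]
    rev = np.zeros(7)
    rev[0] = 1.0 / c[0]                                # the linear part
    rev[1] = -c[1] / c[0]**3
    rev[2] = (2.0 * c[1]**2 - c[0] * c[2]) / c[0]**5
    return rev
```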

Wiktor Grębla | 9 Nov 18:04 2014

Mingw-w64 -municode and Cython on MS Windows.

Hi guys.

First of all, thanks for such a great tool.

I had an idea which finally led me to MS Windows, and for a few days I was struggling with the correct handling of Windows command-line arguments and filenames. Generally it was a good thing, as I went through lots of documentation, but as it turned out, the -municode flag added to gcc (mingw-w64) solved all my problems with the UTF-16 Windows console.

I've no intention of bothering anybody, and generally I don't care too much about MS Windows tools, but I have to admit that MSYS and Mingw-w64 make a great toolchain, and it would be worth mentioning this somewhere in the Cython documentation:

http://sourceforge.net/p/mingw-w64/wiki2/Unicode%20apps/

In short: mingw-gcc 4.8.1 doesn't have the -municode flag in its 32-bit release, mingw-w64 supports it, and the int wmain(int argc, wchar_t **argv) generated by Cython's --embed option works like a charm; possibly a lot of Unicode-related problems are solved as well.

Cheers,
W.

1989lzhh | 9 Nov 03:26 2014

Define array as heap array

Hello,
I noticed that "cdef int[10] a" defines "a" as a stack array. I wonder if it is
possible to define "a" as a heap array, since a heap array can hold a bigger size.
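[Editor's sketch, not from the thread: the usual heap alternatives in Cython are libc.stdlib.malloc/free, a typed memoryview over a Python buffer, or a NumPy array, all of which can hold sizes far beyond a C stack frame. The NumPy route, visible from plain Python:]

```python
import numpy as np

# heap-allocated: ten million C ints would overflow a typical stack frame,
# but a NumPy buffer lives on the heap and a memoryview can wrap it
a = np.zeros(10_000_000, dtype=np.intc)
a[0] = 42
```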

Regards,
Liu zhenhai


stonebig34 | 8 Nov 11:14 2014

a small updating of 64Bit Cython Extensions On Windows from Mingw ?

Hi,

With the WinPython 64-bit version, I don't encounter any "big" problems (none that I notice, anyway) using https://github.com/numpy/numpy/wiki/Mingw-static-toolchain
(which points to  https://bitbucket.org/carlkl/mingw-w64-for-python/downloads/mingw64static-2014-07.7z)

Of course, there is ONE "hack" to apply to the original Python script:

Find and replace in "..\Lib\distutils\cygwinccompiler.py"
this
"-O -W"
with that
"-O -DMS_WIN64 -W"

==> Shouldn't you soften a little the rather absolute "don't ever use the 64-bit MinGW compiler" message on the official wiki page
"https://github.com/cython/cython/wiki/64BitCythonExtensionsOnWindows" ?



Andre Ribeiro | 7 Nov 22:23 2014

numpy array float64

Hi

I have just started using Cython and I ran into an issue with numpy arrays. I'm trying to get a simple sum function working:

import numpy as np
cimport numpy as np

def myfunc(np.ndarray[np.float64_t, ndim=1] A):
    cdef Py_ssize_t i
    cdef double s
    for i in range(A.shape[0]):
        s += A[i]
    return s


a = np.array([1.0,2.0,3.0])

#print a

print myfunc(a)

If I import the corresponding module I get the value 9. If I change the np.array to (dtype = int) I get the correct value of 6. Also, if I print the float64 array before the loop I end up with a sum of 6. I really don't know what's going on here.
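[Editor's note, a hedged guess: "cdef double s" declares s without initializing it, so in the compiled C code the accumulator starts with whatever garbage happens to be in that stack slot, which would explain why the result changes depending on what ran before the loop. Initializing the accumulator fixes it; in plain Python terms:]

```python
import numpy as np

def myfunc(A):
    s = 0.0                      # the accumulator must start at zero
    for i in range(A.shape[0]):  # equivalent to "cdef double s = 0." in the pyx
        s += A[i]
    return s
```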

I'd appreciate any help.

Thanks
Andre.

Martin Scherer | 7 Nov 16:31 2014

Cython.Build.cythonize returns empty list

Hi,

cythonize() returns an empty module list. The source tarball already contains
pre-cythonized .c files.

This is the code snippet:

#### begin snippet

import os
import warnings

from setuptools import setup, Extension

def extensions():
    USE_CYTHON = False
    try:
        from Cython.Build import cythonize
        USE_CYTHON = True
        ext = '.pyx'

    except ImportError:
        warnings.warn('cython not found. Using precythonized files')
        ext = '.c'

    e = Extension('pyemma.msm.estimation.dense.mle_trev_given_pi',
                  sources=['pyemma/msm/estimation/dense/mle_trev_given_pi' + ext,
                           'pyemma/msm/estimation/dense/_mle_trev_given_pi.c'],
                  include_dirs=[os.path.abspath('pyemma/msm/estimation/dense')],
                  extra_compile_args=[])

    modules = [e]

    if USE_CYTHON:
        modules = cythonize(modules)
    else:
        pass # nothing todo, use preconverted c files
    return modules

ext = extensions()
assert len(ext) == 1 # this fails, because ext == [] !

setup(ext_modules=ext)

#### end of snippet

I'm using setuptools 7.0 and Cython 0.21.1.

All of the examples from Cython use distutils, but this does not seem to be the
problem, since I also tried using the distutils.Extension class and it leads to
the same problem.

Best,
Martin


Zahra Sheikh | 6 Nov 14:22 2014

ValueError: ndarray is not C-contiguous in cython

Hi all,
I have written the following function in cython to estimate the log-likelihood

    @cython.boundscheck(False)
    @cython.wraparound(False)
    def likelihood(double m,
                   double c,
                   np.ndarray[np.double_t, ndim=1, mode='c'] r_mpc not None,
                   np.ndarray[np.double_t, ndim=1, mode='c'] gtan not None,
                   np.ndarray[np.double_t, ndim=1, mode='c'] gcrs not None,
                   np.ndarray[np.double_t, ndim=1, mode='c'] shear_err not None,
                   np.ndarray[np.double_t, ndim=1, mode='c'] beta not None,
                   double rho_c,
                   np.ndarray[np.double_t, ndim=1, mode='c'] rho_c_sigma not None):
        r_mpc  = np.ascontiguousarray(r_mpc, dtype=np.double)
        gtan     = np.ascontiguousarray(gtan, dtype=np.double)
        gcrs     = np.ascontiguousarray(gcrs, dtype=np.double)
        shear_err= np.ascontiguousarray(shear_err, dtype=np.double)
        beta     = np.ascontiguousarray(beta, dtype=np.double)
        rho_c_sigma = np.ascontiguousarray(rho_c_sigma, dtype=np.double)

        cdef double rscale = rscaleConstM(m, c,rho_c, 200)
   
        cdef Py_ssize_t ngals = r_mpc.shape[0]
   
        cdef np.ndarray[DTYPE_T, ndim=1, mode='c'] gamma_inf = Sh(r_mpc, c, rscale, rho_c_sigma)
        cdef np.ndarray[DTYPE_T, ndim=1, mode='c'] kappa_inf = Kap(r_mpc, c, rscale, rho_c_sigma)
 
   
        cdef double delta = 0.
        cdef double modelg = 0.
        cdef double modsig = 0.
   
        cdef Py_ssize_t i
        cdef DTYPE_T logProb = 0.
   
           
        #calculate logprob
        for i from ngals > i >= 0:
          
            modelg = (beta[i]*gamma_inf[i] / (1 - beta[i]*kappa_inf[i]))
   
            delta = gtan[i] - modelg
            
            modsig = shear_err[i]
   
            logProb = logProb -.5*(delta/modsig)**2  - logsqrt2pi - log(modsig)
   
           
        return logProb

but when I run the compiled version of this function, I get the following error message:

      File "Tools.pyx", line 3, in Tools.likelihood
        def likelihood(double m,
    ValueError: ndarray is not C-contiguous

I even added np.ascontiguousarray(arr, dtype=np.double) to get rid of this error message, but it didn't work.
I cannot quite understand why this problem occurs. I would appreciate any useful tips.
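[Editor's note, hedged: the traceback points at line 3 — the def likelihood(...) signature itself — so the check fails while the arguments are being unpacked into the mode='c' buffers, *before* the function body (and its np.ascontiguousarray calls) ever runs. The conversion therefore has to happen at the call site. The flag that mode='c' checks is easy to inspect from Python:]

```python
import numpy as np

a = np.arange(10.0)[::2]            # a strided view: not C-contiguous
assert not a.flags['C_CONTIGUOUS']  # this is what mode='c' rejects at unpack time

b = np.ascontiguousarray(a)         # convert *before* calling likelihood(...)
assert b.flags['C_CONTIGUOUS']
```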

Cheers,
Zahra

Ghislain Vaillant | 5 Nov 16:37 2014

Safe solution for wrapping a C-struct with optionally malloc'd attributes

Hi everyone,

I am working on providing bindings for a library which uses a specific data structure (and associated computational routines) containing pointers to internally or externally malloc'd memory. The decision is made at init time using a flag parameter.

A simpler but still relevant version of this data structure would be:

typedef struct plan {
    double *data;
    unsigned int flags;
} plan;

where the structure is properly initialized by:

plan *this_plan = (plan*) malloc(sizeof(plan));
initialize_plan(this_plan, MALLOC_DATA);
/* do stuff */
finalize_plan(this_plan);  /* would free this_plan->data */

for internally malloc'd memory, or

plan *this_plan = (plan*) malloc(sizeof(plan));
initialize_plan(this_plan, NO_MALLOC_DATA);
this_plan->data = external_data_ptr;
/* do stuff */
finalize_plan(this_plan);  /* would leave this_plan->data alone */

to use externally allocated data.


I wonder what is the safest solution to wrap such a data structure in Cython and expose its data attribute as a NumPy array-like object, whilst keeping the ability to use external arrays for computation without an explicit copy. So far, I can see 3 options:

1) Force the structure to always use internal malloc'd memory and expose the data via an ndarray object created with PyArray_New. But you lose the ability to efficiently switch between data sources without an explicit copy. That's the solution I went for, because it seemed the easiest.

2) Force the structure to always use external malloc'd memory and connect an external ndarray to the internal data pointer via PyArray_DATA. This is however more dangerous, as you need to ensure that the array is of the right size and dtype, that it is C-contiguous, and that it is writeable...

3) A mixture of both ? i.e. leave the possibility to use either internal or external and adapt the properties of the ndarray object consequently ?
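[Editor's sketch for the zero-copy direction of option 2, hedged: the semantics can be prototyped from plain Python with ctypes standing in for the externally malloc'd buffer. np.frombuffer builds an ndarray *view* that shares the memory rather than copying it; the Cython/C-API analogue would be PyArray_SimpleNewFromData plus setting the base object so the owner outlives the view.]

```python
import ctypes
import numpy as np

# stand-in for externally allocated memory (plan->data)
buf = (ctypes.c_double * 4)(1.0, 2.0, 3.0, 4.0)

# zero-copy view: the ndarray and the buffer share storage
arr = np.frombuffer(buf, dtype=np.float64)

arr[0] = 10.0
assert buf[0] == 10.0  # the write is visible through the original pointer
```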


Are you guys aware of any examples of an implementation which may address this problem ?

Cheers,
Ghis

Adrian Price-Whelan | 4 Nov 16:54 2014

Questions about specific use case involving many numerical integrations

Hi all --

(Apologies if this is not the place to ask questions like this -- please let me know if this is the case!)

I'm a grad student in astrophysics, and in working on some machinery for my own research I've hit some stumbling blocks that I'm hoping someone might be able to help with. 

Some background about me: I've been programming in Python for ~8 years or so and am very comfortable with numerical methods. At one point I was also fairly comfortable in C, but now it has been a while... I've tried my best to learn Cython from the documentation, but it is somewhat lacking in details about more complicated use cases.

The core of the current project I'm working on relies on numerically integrating thousands of orbits of test particles for thousands of timesteps, buried within a likelihood function -- so I need this to be fast so I can do some optimization or sampling with this likelihood. The bulk of the time is therefore spent calling some function that computes the acceleration due to a gravitational potential at a given position. I also need it to be easy to swap out the gravitational field (potential) in which these orbits are integrated.

The code has the basic structure:

- loop over optimization (e.g., MCMC)
    - loop over number of particles (~1000's)
        - loop over number of timesteps (~1000's)

Let me describe the way I'm doing this now. I have Python classes to represent the Gravitational potentials that implement some convenient high-level things and support array operations, which work by creating and storing a respective Cython (cdef'd) class that contains functions to evaluate the quantities at a single position using memory-views. The reason I've written it this way is because 1) I wanted to minimize the number of array allocations that happen, so my solution is to just pass the entire array of positions with an index to specify which row to take, and 2) I've thought about using openMP or some parallelization, and the only part of the loops I can parallelize is if I integrate each orbit separately.

The classes look like this:

class PointMassPotential(CPotential, CartesianPotential):
    """ Gravitational potential for a point mass. """
    def __init__(self, m):
        # ... create the C-instance and store
        self.c_instance = _PointMassPotential(m)
        # etc.

cdef class _PointMassPotential(_CPotential):

    # here need to cdef all the attributes
    cdef public double G, GM
    cdef public double m, c

    def __init__(self, double m):

        # gravitational constant in crazy units
        self.G = 4.499753324353495e-12

        # potential parameters
        self.m = m

        self.GM = self.G * self.m

    cdef public inline void _acceleration(self, double[:,::1] r, 
                                          double[:,::1] grad, int k) nogil:
        cdef double R, fac
        R = sqrt(r[k,0]*r[k,0] + r[k,1]*r[k,1] + r[k,2]*r[k,2])
        fac = self.GM / (R*R*R)

        grad[k,0] += -fac*r[k,0]
        grad[k,1] += -fac*r[k,1]
        grad[k,2] += -fac*r[k,2]

To do the integration, I just use my own Leapfrog integration, the meat of which looks like this:

cdef inline void leapfrog_step(double[:,::1] r, double[:,::1] v, 
                               double[:,::1] v_12, double[:,::1] acc, 
                               int k, double dt, _CPotential potential) nogil:

    # increment position by full-step
    r[k,0] = r[k,0] + dt*v_12[k,0]
    r[k,1] = r[k,1] + dt*v_12[k,1]
    r[k,2] = r[k,2] + dt*v_12[k,2]

    # zero out the acceleration container
    acc[k,0] = 0.
    acc[k,1] = 0.
    acc[k,2] = 0.

    potential._acceleration(r, acc, k)

    # increment synced velocity by full-step
    v[k,0] = v[k,0] + dt*acc[k,0]
    v[k,1] = v[k,1] + dt*acc[k,1]
    v[k,2] = v[k,2] + dt*acc[k,2]

    # increment leapfrog velocity by full-step
    v_12[k,0] = v_12[k,0] + dt*acc[k,0]
    v_12[k,1] = v_12[k,1] + dt*acc[k,1]
    v_12[k,2] = v_12[k,2] + dt*acc[k,2]

And finally, the likelihood function is just loops:

cpdef likelihood_func(...):
        
    # define stuff...

    with nogil:
        for k in range(nparticles):
            for i in range(nsteps):
                leapfrog_step(x, v, v_12, acc, k, dt, potential)

            # compute other stuff using integrated positions ...


My question is whether this is a sensible thing to do -- using memoryviews this way -- or whether I'm missing some obvious optimization points? I'm finding that this is only a factor of ~10 faster than a pure-Python implementation, and I'm a little surprised that it isn't many orders of magnitude faster. But maybe my intuition for this is just way off... Happy to provide more detail if anyone feels up to helping out.
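[Editor's note, a sketch rather than an answer: one useful sanity check is that the per-particle arithmetic in leapfrog_step vectorizes directly in numpy, batching all particles per timestep. Comparing against this baseline helps isolate whether the remaining cost is in the Cython loop itself or in the cdef-method dispatch for _acceleration. The point-mass field from the post, with GM passed in explicitly:]

```python
import numpy as np

def leapfrog_step_batch(r, v, v_12, dt, GM):
    # advance every particle at once: position full-step...
    r += dt * v_12
    # ...point-mass acceleration for each row of r...
    R = np.sqrt((r * r).sum(axis=1))
    acc = -(GM / R**3)[:, None] * r
    # ...then both the synced and the leapfrog velocities
    v += dt * acc
    v_12 += dt * acc

# one particle at unit radius, at rest, GM = 1 (illustrative units)
r = np.array([[1.0, 0.0, 0.0]])
v = np.zeros((1, 3))
v_12 = np.zeros((1, 3))
leapfrog_step_batch(r, v, v_12, dt=1.0, GM=1.0)
```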

Thanks!
- Adrian

Jonathan Hanke | 4 Nov 08:50 2014

GMP "Symbol not found" Error Question

Hi,

I'm trying to use cython with GMP 6.0.0 to wrap some C++ libraries, and I'm getting the error

In [2]: from my_test2 import PyMatrix_mpz
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-2-b378b34db57e> in <module>()
----> 1 from my_test2 import PyMatrix_mpz

ImportError: dlopen(./my_test2.so, 2): Symbol not found: __ZlsRSoPK12__mpz_struct
  Referenced from: ./my_test2.so
  Expected in: flat namespace
 in ./my_test2.so

when I try to import the cythonized PyMatrix_mpz class. This error appeared when I added some C++ routines that make an explicit call to the C-level GMP routines, but I'm not sure why this should be a problem, since I include and link against both the gmp and gmpxx libraries, which are installed in /usr/local/lib. I'm using GMP 6.0.0 on a MacBook Air, building with "make clean; make build", and have files that look like:

my_test2.pyx:
===========
# distutils: language = c++
# distutils: sources = ['misc_utilities.cpp', 'test2.cpp', 'call_operators.cpp', 'useful_tests.cpp', 'local_normal.cpp']
# distutils: include_dirs = ['/usr/local/include']
# distutils: libraries = ['gmp', 'gmpxx', 'm']
# distutils: library_dirs = ['/usr/local/lib']
....

setup.py:
=======
from distutils.core import setup
from Cython.Build import cythonize

setup(
    name = "QFLIBapp",
    ext_modules = cythonize('*.pyx'),
)

Makefile:
=======
build:
        python setup.py build_ext --inplace

clean:
        rm -rf build/
        rm -f *.so


Any comments are appreciated!  Thanks,

-Jon

