
Re: [Cython] Cython 0.16 Release Candidate

All tests pass with Python 2.6 (2.6.7 release).
All tests pass with Python 2.7 (snapshot of 2.7 branch, revision 3623c3e6c049).
All tests pass with Python 3.1 (3.1.4 release).
4 failures with Python 3.2 (snapshot of 3.2 branch, revision 0a4a6f98bd8e).

Failures with Python 3.2:

======================================================================
FAIL: NestedWith (withstat)
Doctest: withstat.NestedWith
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib64/python3.2/doctest.py", line 2153, in runTest
    raise self.failureException(self.format_failure(new.getvalue()))
AssertionError: Failed doctest test for withstat.NestedWith
  File "/var/tmp/portage/dev-python/cython-0.16_rc0/work/Cython-0.16rc0/tests-3.2/run/c/withstat.cpython-32.so", line unknown line number, in NestedWith

----------------------------------------------------------------------
File "/var/tmp/portage/dev-python/cython-0.16_rc0/work/Cython-0.16rc0/tests-3.2/run/c/withstat.cpython-32.so", line ?, in withstat.NestedWith
Failed example:
    NestedWith().runTest()
Exception raised:
    Traceback (most recent call last):
      File "/usr/lib64/python3.2/doctest.py", line 1288, in __run
        compileflags, 1), test.globs)
      File "<doctest withstat.NestedWith[0]>", line 1, in <module>
(Continue reading)

Stefan Behnel | 2 Apr 13:13 2012

Re: [Cython] [cython-users] GSoC 2012

Vitja Makarov, 11.03.2012 09:51:
> 2012/3/11 Stefan Behnel:
>> mark florisson, 11.03.2012 07:44:
>>> - better type inference, that would be enabled by default and again
>>> handle things like reassignments of variables and fallbacks to the
>>> default object type. With entry caching Cython could build a database
>>> of types ((extension) classes, functions, variables) used in the
>>> modules and functions that are compiled (also def functions), and
>>> infer the types used and specialize on those. Maybe a switch should be
>>> added to cython to handle circular dependencies, or maybe with the
>>> distutils preprocessing it can run all the type inference first and
>>> keep track of unresolved entries, and try to fill those in after
>>> building the database. For bonus points the user can be allowed to
>>> write plugins to aid the process.
>>
>> That would be my favourite. We definitely need control flow driven type
>> inference, local type specialisation, variable renaming, etc. Maybe even
>> whole program (or at least module) analysis, like ShedSkin and PyPy do for
>> their restricted Python dialects. Any serious step towards that goal would
>> be a good outcome of a GSoC.
> 
> I think we should be careful here and try to avoid making Cython code
> more complicated.

I agree that WPA is probably way out of scope. However, control flow driven
type inference would allow us to infer the type of a variable in a given
block, e.g. for code like this:

  if isinstance(x, list):
      ...
(Continue reading)
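The truncated example above hinges on branch-local type narrowing. The concept can be sketched in plain Python (a hypothetical illustration only, not Cython's actual inference code): inside `if isinstance(x, list):` the compiler may treat `x` as a `list`, provided the guard is consistent with what is already known about `x`.

```python
def narrowed_type(declared, guard):
    """Type a compiler could assume for x inside
    `if isinstance(x, guard):` when x is declared (or previously
    inferred) as `declared`. `object` stands for "unknown"."""
    if declared is object or issubclass(guard, declared):
        return guard      # the guard is at least as specific: narrow
    return declared       # the guard adds no information: keep

# An untyped x can be treated as a list inside the guarded block:
assert narrowed_type(object, list) is list
# A variable already known to be a dict is not "narrowed" to list;
# that branch can simply never be taken:
assert narrowed_type(dict, list) is dict
# A guard that refines an existing type does narrow it:
assert narrowed_type(int, bool) is bool
```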

Vitja Makarov | 2 Apr 14:14 2012

Re: [Cython] [cython-users] GSoC 2012

2012/4/2 Stefan Behnel <stefan_ml@...>:
> Vitja Makarov, 11.03.2012 09:51:
>> 2012/3/11 Stefan Behnel:
>>> mark florisson, 11.03.2012 07:44:
>>>> - better type inference, that would be enabled by default and again
>>>> handle things like reassignments of variables and fallbacks to the
>>>> default object type. With entry caching Cython could build a database
>>>> of types ((extension) classes, functions, variables) used in the
>>>> modules and functions that are compiled (also def functions), and
>>>> infer the types used and specialize on those. Maybe a switch should be
>>>> added to cython to handle circular dependencies, or maybe with the
>>>> distutils preprocessing it can run all the type inference first and
>>>> keep track of unresolved entries, and try to fill those in after
>>>> building the database. For bonus points the user can be allowed to
>>>> write plugins to aid the process.
>>>
>>> That would be my favourite. We definitely need control flow driven type
>>> inference, local type specialisation, variable renaming, etc. Maybe even
>>> whole program (or at least module) analysis, like ShedSkin and PyPy do for
>>> their restricted Python dialects. Any serious step towards that goal would
>>> be a good outcome of a GSoC.
>>
>> I think we should be careful here and try to avoid making Cython code
>> more complicated.
>
> I agree that WPA is probably way out of scope. However, control flow driven
> type inference would allow us to infer the type of a variable in a given
> block, e.g. for code like this:
>
>  if isinstance(x, list):
(Continue reading)

Stefan Behnel | 2 Apr 14:23 2012

Re: [Cython] [cython-users] GSoC 2012

Vitja Makarov, 02.04.2012 14:14:
> 2012/4/2 Stefan Behnel:
>> Vitja Makarov, 11.03.2012 09:51:
>>> 2012/3/11 Stefan Behnel:
>>>> mark florisson, 11.03.2012 07:44:
>>>>> - better type inference, that would be enabled by default and again
>>>>> handle things like reassignments of variables and fallbacks to the
>>>>> default object type. With entry caching Cython could build a database
>>>>> of types ((extension) classes, functions, variables) used in the
>>>>> modules and functions that are compiled (also def functions), and
>>>>> infer the types used and specialize on those. Maybe a switch should be
>>>>> added to cython to handle circular dependencies, or maybe with the
>>>>> distutils preprocessing it can run all the type inference first and
>>>>> keep track of unresolved entries, and try to fill those in after
>>>>> building the database. For bonus points the user can be allowed to
>>>>> write plugins to aid the process.
>>>>
>>>> That would be my favourite. We definitely need control flow driven type
>>>> inference, local type specialisation, variable renaming, etc. Maybe even
>>>> whole program (or at least module) analysis, like ShedSkin and PyPy do for
>>>> their restricted Python dialects. Any serious step towards that goal would
>>>> be a good outcome of a GSoC.
>>>
>>> I think we should be careful here and try to avoid making Cython code
>>> more complicated.
>>
>> I agree that WPA is probably way out of scope. However, control flow driven
>> type inference would allow us to infer the type of a variable in a given
>> block, e.g. for code like this:
>>
(Continue reading)

Stefan Behnel | 3 Apr 13:59 2012

[Cython] class optimisations (Re: [cython-users] How to pass Cython flags from Distutils?)

[moving this discussion from cython-users to cython-devel]

Robert Bradshaw, 03.04.2012 09:43:
> On Mon, Apr 2, 2012 at 11:01 PM, Stefan Behnel wrote:
>> Robert Bradshaw, 03.04.2012 07:51:
>>> auto_cpdef is expiremental
>>
>> Is that another word for "deprecated"?
> 
> No, it's another word for "incomplete."

Ah, just a typo then.

> Can something be deprecated if
> it was never even finished? It's probably something we should
> eventually do by default as an optimization, at least for methods, as
> well as letting compiled classes become cdef classes (minus the
> semantic idiosyncrasies) whenever possible (can we always detect this?

We can at least start with the "obviously safe" cases, assuming we find
any. A "__slots__" field would be a good indicator, for example. And when
we get extension types to have a __dict__, that should fix a lot of the
differences already.
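A plain-Python illustration of why a "__slots__" class is an "obviously safe" candidate (a hypothetical example, not Cython's detection logic): such a class already commits to a fixed attribute layout and has no instance __dict__, which is essentially the constraint a cdef class imposes.

```python
class Point:
    # A fixed attribute set, declared up front - no instance __dict__.
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
assert not hasattr(p, "__dict__")  # nothing dynamic to preserve

try:
    p.z = 3  # adding new attributes is already impossible
except AttributeError:
    pass
else:
    raise AssertionError("expected AttributeError")
```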

> What about subclasses that want to multiply-inherit?

You can inherit from multiple extension types in a Python type, and classes
with more than one parent aren't candidates anyway. So this doesn't
restrict us.

(Continue reading)

Wes McKinney | 3 Apr 23:18 2012

[Cython] Bug report with 0.16 RC

I don't have a Trac account yet, but wanted to report this bug with
the 0.16 RC. This function worked fine under 0.15.1:

@cython.wraparound(False)
@cython.boundscheck(False)
def is_lexsorted(list list_of_arrays):
    cdef:
        int i
        Py_ssize_t n, nlevels
        int32_t k, cur, pre
        ndarray arr

    nlevels = len(list_of_arrays)
    n = len(list_of_arrays[0])

    cdef int32_t **vecs = <int32_t**> malloc(nlevels * sizeof(int32_t*))
    for i from 0 <= i < nlevels:
        vecs[i] = <int32_t *> (<ndarray> list_of_arrays[i]).data
    # assume uniqueness??

    for i from 1 <= i < n:
        for k from 0 <= k < nlevels:
            cur = vecs[k][i]
            pre = vecs[k][i-1]
            if cur == pre:
                continue
            elif cur > pre:
                break
            else:
                return False
(Continue reading)
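For reference, the logic being compiled checks whether the rows formed by a list of key arrays are lexicographically non-decreasing; restated in plain Python (an illustrative re-statement of the visible part of the code, independent of the bug itself):

```python
def is_lexsorted_py(list_of_arrays):
    """True if rows (index i across all key arrays) never
    decrease lexicographically from one index to the next."""
    nlevels = len(list_of_arrays)
    n = len(list_of_arrays[0])
    for i in range(1, n):
        for k in range(nlevels):
            cur = list_of_arrays[k][i]
            pre = list_of_arrays[k][i - 1]
            if cur == pre:
                continue      # tie on this level: compare the next level
            elif cur > pre:
                break         # strictly greater: this row is in order
            else:
                return False  # strictly smaller: out of order
    return True

assert is_lexsorted_py([[1, 1, 2], [1, 2, 0]])
assert not is_lexsorted_py([[1, 1, 1], [2, 1, 3]])
```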

Stefan Behnel | 9 Apr 09:16 2012

[Cython] Cython on PyPy is (mostly) functional

Hi,

Cython is now mostly functional on the latest PyPy nightly builds.

https://sage.math.washington.edu:8091/hudson/job/cython-scoder-pypy-nightly/

There are still crashers and a number of tests are disabled for that
reason, but the number of passing tests makes it fair to consider it usable
(if it works, it works). Most of the failing tests are due to bugs in
PyPy's cpyext (the C-API compatibility layer), and most of the crashers as
well. Some doctests just fail due to different exception messages, PyPy has
a famous history of that. Also, basically any test for __dealloc__()
methods is bound to fail because PyPy's garbage collector has no way of
making sure that they have been called at a given point.

Still, it's worth taking another look through the test results, because
Cython can sometimes work around problems in cpyext more easily than they
can be properly fixed on the PyPy side. One major source of problems is
borrowed references, because PyPy cannot easily guarantee that they stay
alive in C space when all owned references are in Python space. Their
memory management can move objects around, for example, and cpyext can't
block that because it can't know when a borrowed reference dies. That means
that something as ubiquitous in Cython as PyTuple_GET_ITEM() may not always
work well, and is also far from being as fast in cpyext as in CPython.
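The borrowed/owned distinction can be observed from Python on CPython itself (a CPython-specific illustration using `sys.getrefcount`; it says nothing about how cpyext emulates reference counts):

```python
import sys

t = (object(),)

# In the C API, PyTuple_GET_ITEM(t, 0) returns a *borrowed* reference:
# the caller owns nothing, so only the tuple keeps the item alive.
# sys.getrefcount counts one extra reference for its own argument,
# so subtract it to get the "real" count.
refs_tuple_only = sys.getrefcount(t[0]) - 1   # just the tuple slot

owned = t[0]   # binding a name takes a new, owned reference
refs_after_binding = sys.getrefcount(owned) - 1

assert refs_after_binding == refs_tuple_only + 1
```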

The crashers can be seen in a forked complete test run in addition to the
stripped test job above:

https://sage.math.washington.edu:8091/hudson/job/cython-scoder-pypy-nightly-safe/lastBuild/consoleFull

(Continue reading)

Lisandro Dalcin | 10 Apr 20:32 2012

[Cython] never used numpy.pxd, but now my code is failing

Is there any way to disable special-casing of numpy arrays? IMHO, if
I'm not using Cython's numpy.pxd file, Cython should let me decide how
to manage the beast.

Error compiling Cython file:
------------------------------------------------------------
...
    if ((nm != PyArray_DIM(aj, 0)) or
        (nm != PyArray_DIM(av, 0)) or
        (si*bs * sj*bs != sv)): raise ValueError(
        ("input arrays have incompatible shapes: "
         "rows.shape=%s, cols.shape=%s, vals.shape=%s") %
        (ai.shape, aj.shape, av.shape))
          ^
------------------------------------------------------------

PETSc/petscmat.pxi:683:11: Cannot convert 'npy_intp *' to Python object

--

-- 
Lisandro Dalcin
---------------
CIMEC (INTEC/CONICET-UNL)
Predio CONICET-Santa Fe
Colectora RN 168 Km 472, Paraje El Pozo
3000 Santa Fe, Argentina
Tel: +54-342-4511594 (ext 1011)
Tel/Fax: +54-342-4511169
Dag Sverre Seljebotn | 10 Apr 21:52 2012

Re: [Cython] never used numpy.pxd, but now my code is failing

On 04/10/2012 08:32 PM, Lisandro Dalcin wrote:
> Is there any way to disable special-casing of numpy arrays? IMHO, if
> I'm not using Cython's numpy.pxd file, Cython should let me decide how
> to manage the beast.
>
>
> Error compiling Cython file:
> ------------------------------------------------------------
> ...
>      if ((nm != PyArray_DIM(aj, 0)) or
>          (nm != PyArray_DIM(av, 0)) or
>          (si*bs * sj*bs != sv)): raise ValueError(
>          ("input arrays have incompatible shapes: "
>           "rows.shape=%s, cols.shape=%s, vals.shape=%s") %
>          (ai.shape, aj.shape, av.shape))
>            ^
> ------------------------------------------------------------
>
> PETSc/petscmat.pxi:683:11: Cannot convert 'npy_intp *' to Python object
>

Whoops, sorry about that. I patched on yet another hack here:

https://github.com/dagss/cython/commit/6f2271d2b3390d869a53d15b2b70769df029b218

Even though there's been a lot of trouble with these hacks, I hope this one 
can still go in; it is important for keeping a significant part of the 
Cython userbase happy.

Dag
(Continue reading)

Dag Sverre Seljebotn | 10 Apr 21:53 2012

Re: [Cython] never used numpy.pxd, but now my code is failing

On 04/10/2012 09:52 PM, Dag Sverre Seljebotn wrote:
> On 04/10/2012 08:32 PM, Lisandro Dalcin wrote:
>> Is there any way to disable special-casing of numpy arrays? IMHO, if
>> I'm not using Cython's numpy.pxd file, Cython should let me decide how
>> to manage the beast.
>>
>>
>> Error compiling Cython file:
>> ------------------------------------------------------------
>> ...
>>     if ((nm != PyArray_DIM(aj, 0)) or
>>         (nm != PyArray_DIM(av, 0)) or
>>         (si*bs * sj*bs != sv)): raise ValueError(
>>         ("input arrays have incompatible shapes: "
>>          "rows.shape=%s, cols.shape=%s, vals.shape=%s") %
>>         (ai.shape, aj.shape, av.shape))
>>           ^
>> ------------------------------------------------------------
>>
>> PETSc/petscmat.pxi:683:11: Cannot convert 'npy_intp *' to Python object
>>
>
> Whoops, sorry about that. I patched on yet another hack here:
>
> https://github.com/dagss/cython/commit/6f2271d2b3390d869a53d15b2b70769df029b218

BTW, that's the _numpy branch.

Dag

(Continue reading)

