VanL | 17 Apr 18:58 2015

Can someone explain __extend__?

I am having some trouble wrapping my head around it. Reading through rpython/tools/, it looks like it could be one or more of a number of things:

- An implementation of JavaScript-style prototypes. (The similarity: in JS you don't subclass an object - you use the base object as a prototype and extend it with new functionality.)

- A way to do specialization and automatic dispatching on types so that a+b works (both "a" and "b" know what they are, and whether they are compatible with each other in an __add__/__radd__ sense, and what type should be returned as a result of that call)

- Sort of a first draft of ABCs, allowing composition and type buildup without explicit inheritance (roughly, __extend__ is similar to ABC.register)

- Other?


pypy-dev mailing list
pypy-dev <at>
VanL | 15 Apr 21:34 2015

Porting PyPy/rpython to Python 3

Hi everyone,

For the last little bit I have been working on porting the rpython toolchain to Python 3. My initial goal is to get either pypy2 or pypy3 to build with either pypy2 or pypy3.

I had gotten the impression from some previous statements that these efforts would not be welcome, so I was doing my work in a private fork. After a few conversations at PyCon, though, I was encouraged to package some of these changes up and send them as a series of pull requests.

A couple questions/thoughts:

1. I am happy to send the pull requests up using bitbucket. Rather than do a big dump, I will send up chunks that each address a particular issue across the entire codebase. Even if a PR touches a number of files, each PR will implement the same change so that correctness is easy to check. If these PRs are not wanted, let me know, and I will stop sending them up.

2. I am initially doing this work in a way that maintains 2/3 compatibility - my check before each major commit is whether I can still build pypy using pypy2. Would the pypy devs be willing to require at least Python 2.7 for building pypy? That way I could use __future__ imports to ease some of the porting.

3. I will likely vendor or require six before I am done. Let me know if this would likely be a problem.

4. At some point in the future, I plan on reworking the rpython toolchain in various ways - use Python 3 function and type annotations to make the flow of types easier to see, fully split out the rpython and non-rpython bits, etc. Again, I am happy to do this on my own, but will gladly contribute upstream if wanted.
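To illustrate point 3, the shims I have in mind are the kind six provides (this is a sketch with hypothetical helper names, not the actual vendored code):

```python
from __future__ import print_function

import sys

PY2 = sys.version_info[0] == 2

# Type tuples whose names differ between 2 and 3; six calls these
# string_types and integer_types.  The py2 branch is only evaluated
# on py2, so the missing names do not raise on py3.
string_types = (str, unicode) if PY2 else (str,)   # noqa: F821
integer_types = (int, long) if PY2 else (int,)     # noqa: F821

def iteritems(d):
    # dict.iteritems() is gone in py3; hide the difference here
    return d.iteritems() if PY2 else iter(d.items())
```

Vendoring a handful of helpers like these would keep the diff small while still building on both interpreters.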

Michael Kennedy | 15 Apr 20:07 2015

Be on my podcast

I'd love to have you guys on my podcast, Talk Python To Me. You can learn more here:

Interested in being a guest? Or a couple of you even?

Richard Plangger | 14 Apr 13:16 2015

Vectorizing pypy traces no3


I have recently managed to correctly transform a trace into a vectorized
trace that includes a guard. I'm hoping that this might be merged into
the pypy code base (when it is finished), so it would be nice to
get feedback and iron out some problems I currently have. Of course this
needs explanation (I hope it does not lead to tl;dr):

Consider the following trace:
short version (pseudo syntax):

store(c,i) = load(a,i) + load(b,i)
j = i+1
long version:

By unrolling this short trace alone, it is _NOT_ possible to vectorize it:
the guard prevents the store operation from being executed after it. I
solved this problem by introducing a new guard (called 'early-exit'),
which saves the live variables at the beginning of the trace. By finding
the index calculations + guards and moving them above the early exit, the
following is possible:

short version (pseudo syntax):

j = i + 1
k = j + 1
guard_early_exit() # will not be emitted
va = vec_load(a,i,2)
vb = vec_load(b,i,2)
vc = vec_add(va,vb)
vec_store(c, i, 2) = vc
long version
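For context, a source loop that might produce a trace of this shape (a hypothetical sketch, not one of the actual test loops) is simply:

```python
def add_arrays(a, b, c):
    # Each iteration loads a[i] and b[i], adds them, and stores into
    # c[i]; the bound check on i is what becomes the index guard in
    # the trace, and unrolling the body twice exposes two independent
    # load/add/store groups that can be fused into vector operations.
    i = 0
    while i < len(c):
        c[i] = a[i] + b[i]
        i += 1
    return c
```
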

My assumptions: any guard that fails before the early exit must guide the
blackhole interpreter back to the original loop at instruction 0. Only pure
operations and the guards protecting the index are allowed to move before
the early exit.

These assumptions, together with the use of the live variables of the early
exit (at the guard instructions), preserve correctness.

I'm not quite sure how to handle the following problems:

1) I had the problem that leftover iterations (when the iteration count is
not a multiple of the vector width) moved to the blackhole interpreter and
executed the loop from the beginning. I fixed it by resetting the blackhole
interpreter position to jitcode index 0.
Is this the right way to start from the beginning?

2) Is there a better way to tell the blackhole interpreter to resume
from the beginning of the trace, or even to skip blackholing entirely and
jump straight into the normal interpreter?

3) Are there any objections to doing it this way (guard early-exit)?

Attachment (0xCF1B1C8D.asc): application/pgp-keys, 3112 bytes
Armin Rigo | 14 Apr 09:25 2015

Re: Which pypy with >=3.3 Python compatibility

Hi Ludovic,

On 13 April 2015 at 19:03, Ludovic Gasc <gmludo <at>> wrote:
> FYI, I'm trying to implement a monotonic timer in PyPy3.3 during the PyCon
> sprint; Benoît Chesneau found me an example:

Fwiw, clock_gettime() and similar functions are already present in
PyPy2 in the module ``__pypy__.time``.  I didn't check where that code
is in py3.3.  I would guess it is similarly present in the
``__pypy__.time`` module, but simply needs to be made accessible from
the standard place in Python 3 (the ``time`` module?).
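A portable way to reach it in the meantime (a sketch - I did not check the exact names exposed by the py3.3 builds) would be something like:

```python
import time

try:
    # Present in PyPy2's __pypy__.time; assumed similarly named on
    # py3.3 builds (hypothetical until verified).
    from __pypy__.time import clock_gettime, CLOCK_MONOTONIC

    def monotonic():
        return clock_gettime(CLOCK_MONOTONIC)
except ImportError:
    # Fallback for CPython, or for builds without the module.
    monotonic = getattr(time, 'monotonic', time.time)
```
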

A bientôt,

Mike Müller | 13 Apr 17:29 2015

Which pypy with >=3.3 Python compatibility

I need a pypy that is Python 3.3 or, even better, Python 3.4 compatible.
I can find the nightly builds at

Which one should I use, py3.3 or py3k? There are many more versions.
Should I use one of them?

Laura Creighton | 12 Apr 14:46 2015

-- show the code?

I thought there was a way to show the code that was actually run to get
the results.  Maybe I am confusing things with the shootout site.  Have
I just forgotten how to do this?

I wanted to show an astronomer what sort of code we run blazingly fast vs
what sort we are less speedy at, so he can decide if he needs to write his
algorithm in C++ or not.  If the site doesn't have such a feature,
it would seem to be a good one to add.

Ajit Dingankar | 11 Apr 20:00 2015

PyPy translation on Xeon Phi (pka MIC)

I tried the translator for BF example on Xeon Phi:

It failed due to a "safety check" related to asserts at the end of
the translate module. I use CPython v2.7.2 since that's the latest
I could find for the Phi accelerator. I thought I'd try it before
going to the step of cross-compiling a more recent version. (Just
for reference, the example works with CPython v2.7.5 on the Xeon.)
I tried to search for previous experience with Phi (or MIC) but 
could only find this old post on the mailing list: 
which is mainly about STM but mentions MIC at the very end: 
"Still trying to see whether I can get PyPy to run on the MIC. :)"

I'd appreciate any pointers to making PyPy translate work on Phi,
with CPython or the PyPy binary itself (if needed, since it may be
hard to get the latter working on Phi). At the least, I'd like to know
whether Xeon Phi is a supported platform and, if it is not, what
options there are to support it.

PS: For full disclosure, I work for Intel but my day job is related 
to hardware, hence posting from my personal account. 
Armin Rigo | 10 Apr 23:51 2015


Hi all,

I'm preparing a EuroPython submission about STM and/or about CFFI, and
wondering if someone else also planned to submit a talk.  If not, I'll
include a general "status of PyPy" part in my submission.

William ML Leslie | 10 Apr 10:44 2015

Re: FAQ entry

On 8 April 2015 at 04:54, Yuriy Taraday <yorik.sar <at>> wrote:
Did you miss the mailing list intentionally?

Ach no!  I always seem to do this on pypy-dev.  Thanks for pointing that out.


On Tue, Apr 7, 2015 at 5:59 PM William ML Leslie <william.leslie.ttg <at>> wrote:
On 8 April 2015 at 00:00, Yuriy Taraday <yorik.sar <at>> wrote:
GC introduces concurrency to user code anyway: a call to some __del__ method can happen at any time in user code, so it might as well be called in a separate thread

Obligatory:

Yes, that's how I see it: one can't bet on where and when finalizers are run, so they appear to the rest of the program as if they're run in some special thread that wakes up at scary moments. So a separate thread is just as good for them.

Except that, up until now, you can expect that __del__ is run in /one/ of the threads you've started.  If you only have one thread, you know exactly which thread your __del__ will be run in.  So you could make assumptions about thread-local state when you write such a method.

Not that I have an opinion here.  __del__ is problematic, and entirely to be avoided in new code, afaiac.
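A tiny example of the assumption in question (hypothetical class names; on CPython the refcounting collector happens to run the finalizer synchronously, in the dropping thread, which is exactly the guarantee a GC-based runtime need not give):

```python
import gc
import threading

class Resource(object):
    finalizer_thread = None

    def __del__(self):
        # Code that assumed "this always runs in the thread that
        # dropped the last reference" might touch thread-local state
        # here; a GC-based runtime may run it in another thread.
        Resource.finalizer_thread = threading.current_thread().name

r = Resource()
del r          # CPython: __del__ runs right here, in this thread
gc.collect()   # nudge GC-based runtimes to collect now
```
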

William Leslie

Likely much of this email is, by the nature of copyright, covered under copyright law.  You absolutely MAY reproduce any part of it in accordance with the copyright law of the nation you are reading this in.  Any attempt to DENY YOU THOSE RIGHTS would be illegal without prior contractual agreement.
Maciej Fijalkowski | 6 Apr 14:48 2015

FAQ entry

maybe we should add something along those lines to the FAQ