Alex Gaynor | 14 Dec 22:15 2014


Hey all,

Earlier today I created the 2.7.9 branch, with a copy of the 2.7.9 stdlib.

Unsurprisingly, the biggest piece of work remaining is the ssl module: 2.7.9 contains a complete backport of 3.4's ssl module.

We have up through 3.2's version of the ssl module implemented on the py3k branch. I'd like some feedback from folks on how you think we should best handle finishing the 2.7.9 work.

Should I copy the work from py3k, finish anything missing, and then when we get to python 3.4 on the py3k branch the work is just "already done"? Something else?

Feedback please!
pypy-dev mailing list
pypy-dev <at>
| 13 Dec 07:05 2014

Ask Pypy within virtualenv of Windows 7/8 for Help ---- bitpeach from china

Dear Ms./Mr. Director / Dear Pypy Team:
     I'm an enthusiast who goes by bitpeach. I'm very interested in your work and greatly admire PyPy.
     I have been trying to install PyPy and practice with it in my work and studies, but I ran into a problem: few versions of PyPy run on Windows. Specifically:
     (1) I want to install PyPy without mixing its third-party packages and libraries with the Python 2.7 already on my Windows 7/8 (32-bit) system, so I followed the tutorial section "Installing using virtualenv" to set up virtualenv.
     (2) After installing virtualenv successfully, I needed a separate environment for PyPy, so I downloaded the Windows build listed as "Python 2.7 compatible PyPy 2.4.0 - Windows binary (32bit)".
     (3) I extracted the archive to an ordinary folder and ran a virtualenv command like ">virtualenv.exe -p \pathto\pypy.exe". The "-p PATH" option should make PyPy the environment's default interpreter; otherwise virtualenv picks up the Python 2.7 already installed on my system. However, the command reports an error and fails to build a virtual environment for PyPy.
     I now realize that the specific command parameters differ between Windows and Unix/Linux. Although your tutorial shows "$ virtualenv -p /opt/pypy-c-jit-41718-3fb486695f20-linux/bin/pypy my-pypy-env" and notes the difference between Windows 7/8 and Unix/Linux, I still cannot solve the problem on Windows 7/8 and do not know how to create a virtual environment that uses PyPy as its default interpreter there.
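For reference, the Windows invocation mirrors the Unix one from the docs; a sketch follows (the extraction folder below is an assumption — substitute the actual path where pypy.exe was extracted):

```shell
# Unix/Linux form, as shown in the PyPy docs:
virtualenv -p /opt/pypy-c-jit-41718-3fb486695f20-linux/bin/pypy my-pypy-env

# Windows equivalent (run in cmd.exe), assuming PyPy was extracted
# to C:\pypy-2.4.0-win32 -- quote the path if it contains spaces:
virtualenv.exe -p C:\pypy-2.4.0-win32\pypy.exe my-pypy-env
```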
     I remember that your team and website are very good at this, and I cannot praise PyPy's speed too highly, so you are the best people to ask. Hence I come to seek your help.
     Best regards! I hope PyPy gets better and better, especially the Windows versions.
     Sincerely yours! :-)
                                                                                  2014-12-13  Sat.
                                                                                  Email From China
David Malcolm | 13 Dec 03:13 2014

Experiments with PyPy and libgccjit

I'm the maintainer of a new feature for the (not-yet-released) GCC 5:
libgccjit, a way to build GCC as a shared library, suitable for
generating code in-process.  See:

I've been experimenting with embedding it within PyPy - my thought was
that gcc has great breadth of hardware support, so maybe PyPy could use
libgccjit as a fallback backend for targets which don't yet have their
own pypy jit backends.

I'm attaching the work I've got so far, in patch form; I apologize for
the rough work-in-progress nature of the patch.  It has:

  * a toy example of calling libgccjit from cffi, to build and
    run code in process (see

  * doing the same from rffi (see
    rpython/jit/backend/libgccjit/ and 
    These seem to work: the translator builds binaries that call
    into my library, which builds machine code "on the fly".
    Is there a way to do this without going through the
    translation step?

  * the beginnings of a JIT backend:
    I hack up rpython/jit/backend/ to always use:
    and this merely raises an exception, albeit dumping the
    operations seen in loops.

My thinking is that I ought to be able to use the rffi bindings of
libgccjit to implement the backend, and somehow turn the operations I'm
seeing into calls into my libgccjit API.
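For readers unfamiliar with the in-process pattern, here is a minimal sketch of calling into a shared library from Python; it uses the stdlib ctypes and libm as stand-ins, since the actual patch binds libgccjit via cffi/rffi and requires a GCC 5 build:

```python
import ctypes
import ctypes.util

# Load a shared library into the current process and call a function
# from it; with libgccjit, the library would additionally hand back
# freshly generated machine code to execute.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # prints 3.0
```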

Does this sound useful, and am I on the right track here?

Is there documentation about the meaning of the various kinds of
operations within a to-be-JITted-loop?


Timothy Baldridge | 9 Dec 07:01 2014

Getting rid of "prebuilt instance X has no attribute Y" warnings

I'm getting a ton of these warnings. They seem to go away when I either (a) type-hint the object via an assert (gross) or (b) access the attribute via a getter method. Is there a better way? Would there be a problem with somehow just turning this warning off?
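For context, the two workarounds look roughly like this (a sketch with a hypothetical Node class; in RPython the assert acts as a type hint for the annotator, while the getter gives it a single well-typed access point):

```python
class Node(object):
    def __init__(self, value):
        self.value = value

    def get_value(self):
        # option (b): access the attribute via a getter method
        return self.value

def use_assert(obj):
    # option (a): type-hint the object via an assert (the "gross" one)
    assert isinstance(obj, Node)
    return obj.value

n = Node(42)
print(use_assert(n), n.get_value())  # prints: 42 42
```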


VanL | 5 Dec 16:38 2014

Unify lib_pypy by vendoring six

Hi all,

I’ve been doing some experiments with PyPy and I would be interested in making parts of the codebase more Python 3.x compatible. As a first step, I noticed that there are slight differences between the lib_pypy shipped in the 2.7 and 3.2 releases. How would people feel about reducing the duplication by consolidating the lib_pypy implementations?

The strategy would be:

- vendor six within lib_pypy
- unify the implementations as much as possible, using either compatible syntax or six helpers
- where an implementation cannot be unified, put the individual implementations behind six.PY2 or six.PY3 conditionals
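For illustration, the conditional case in the last step might look like the following sketch; PY2/PY3 are stood in for by sys.version_info checks so it runs without the vendored six, which would provide those booleans directly:

```python
import sys

# Stand-ins for six.PY2 / six.PY3, so this sketch is self-contained.
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3

if PY2:
    def iterkeys(d):
        return d.iterkeys()      # Python 2: lazy iterator over keys
else:
    def iterkeys(d):
        return iter(d.keys())    # Python 3: keys() is already a view

print(sorted(iterkeys({"a": 1, "b": 2})))  # prints ['a', 'b']
```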



Toby St Clere Smithe | 28 Nov 20:13 2014

GSoC 2015: cpyext project?

Hi all,

I've posted a couple of times here before: I maintain a Python
extension for GPGPU linear algebra[1], but it uses Boost.Python. I do
most of my scientific computing in Python, but am often forced to use
CPython where I would prefer PyPy, largely because of the
availability of extensions.

I'm looking for an interesting Google Summer of Code project for next
year, and would like to continue working on things that help make
high-performance computing in Python straightforward. In particular,
I've had my eye on the 'optimising cpyext'[2] project for a while: might
work in that area be available?

I notice that it is described with difficulty 'hard', so I'm keen to
enquire early so that I can get up to speed before making a potential
application in the spring. I would love to work on getting cpyext into
good enough shape that both Cython and Boost.Python extensions are
functional with minimal effort on the part of the user. Does anyone have
any advice? Are there particular things I should familiarise myself
with? I know there is the module/cpyext tree, but it is quite formidable
for the uninitiated!

Of course, I recognise that cpyext is a much trickier proposition in
comparison with things like cffi and cppyy. In particular, I'm very
excited by cppyy and PyCling, but they seem quite bound up in CERN's
ROOT infrastructure, which is a shame. But it's also clear that very
many useful extensions currently use the CPython API, and so -- as I
have often found -- the apparent relative immaturity of cpyext keeps
people away from PyPy, which is also a shame!





Toby St Clere Smithe
haael | 26 Nov 12:12 2014

Re: An idea about automatic parallelization in PyPy/RPython

Hi 黄若尘, Armin,

> > I thought about that too, but the granularity is very wrong for STM:
> > the overhead of running tiny transactions will completely dwarf any
> > potential speed gains.

Then maybe we should go with a slightly different flavor of STM, specific to assembler loops.

A loop in assembly language operates on a linear memory model and usually modifies only a small number of
memory cells. In fact, most changes made by a loop iteration get overwritten in the next iteration.
Assuming that a single assembler instruction may modify only one memory cell, the number of cells changed
will be no more than the count of loop iterations.

We could replace (para-virtualize) every instruction that writes a memory cell with two stack pushes: save
the memory address and the value being written. Every memory read would likewise be replaced by a search
of that stack before falling back to reading the actual memory contents. This imposes a big penalty, of
course, but it is worth checking whether it pays off.

This is a simple, assembler-specific flavor of STM.

Then we could employ loop scheduling and run the modified code on all cores, then check whether all the
memory modifications agree, i.e. whether no two cores tried to write different values to the same memory
address. If there is no conflict, we can commit the transaction and exit the loop.
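A rough Python model of the scheme described above, with a dict standing in for linear memory (all names here are illustrative, not an existing API):

```python
class WriteLog(object):
    """Per-core log of para-virtualized stores: (address, value) pushes."""

    def __init__(self, memory):
        self.memory = memory
        self.log = []                      # newest push last

    def write(self, addr, value):
        self.log.append((addr, value))     # store goes to the log, not memory

    def read(self, addr):
        for a, v in reversed(self.log):    # search the log first
            if a == addr:
                return v
        return self.memory[addr]           # fall back to actual memory

def commit(memory, logs):
    """Commit all logs if no two cores wrote different final values
    to the same address; otherwise abort."""
    merged = {}
    for wlog in logs:
        final = {}
        for addr, value in wlog.log:
            final[addr] = value            # last write per core wins
        for addr, value in final.items():
            if addr in merged and merged[addr] != value:
                return False               # conflict: abort the transaction
            merged[addr] = value
    memory.update(merged)
    return True
```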

The kind of loops that would benefit most from such optimization are memset, memcpy and all map-like constructs:

dst = map(fun, src)

or, in C form:

for (int i = 0; i < n; i++)
    dst[i] = fun(src[i]);

> can we just make a hybrid-system that firstly slightly screen 
> some loops that is not suitable for parallelization and then run others with STM?

Following the general "try and fail" philosophy of Python, I would suggest the following:
just run the unmodified loop on one core and use the other cores to optimize and execute the modified
version. If the optimization turns out to be unsuitable or the serial execution finishes first, abort
the optimized run. If the loop turns out to be parallelizable, return its results instead.


From: "黄若塵" <hrc706 <at>>
To: "Armin Rigo" <arigo <at>>; haael <at>;
Sent: 9:07 Wednesday 2014-11-26
Subject: Re: [pypy-dev] An idea about automatic parallelization in PyPy/RPython

> Hi Haael, Rigo,
> > On 2014/11/21 at 19:21, Armin Rigo wrote:
> > 
> > Hi Haael, hi 黄若尘,
> > 
> > On 21 November 2014 10:55,   wrote:
> >> I would suggest a different approach, more similar to Armin's idea of parallelization.
> >> 
> >> You could just optimistically assume that the loop is parallelizable. Just execute few steps at once
(each in its own memory sandbox) and check for conflicts later. This also plays nice with STM.
> > 
> > I thought about that too, but the granularity is very wrong for STM:
> > the overhead of running tiny transactions will completely dwarf any
> > potential speed gains.  If we're talking about tiny transactions then
> > maybe HTM would be more suitable.  I have no idea if HTM will ever
> > start appearing on GPU, though.  Moreover, you still have the general
> > hard problems of automatic parallelization, like communicating between
> > threads the progress made; unless it is carefully done on a
> > case-by-case basis by a human, this often adds (again) considerable
> > overheads.
> Well, recently I have read some papers about TLS, and have also come to
> appreciate the heavy performance penalty of STM. What I am considering is
> whether it is possible to simplify an STM for the trace generated by
> RPython using some of its features (for example, there is no control flow,
> only guards; there are some jit.elidable functions in the interpreter), or
> whether we could build a hybrid system that first screens out loops that
> are unsuitable for parallelization and then runs the others with STM?
> > 
> > To 黄若尘: here's a quick answer to your question.  It's not very clean,
> > but I would patch rpython/jit/backend/x86/, prepare_loop(),
> > just after it calls _prepare().  It gets a list of rewritten
> > operations ready to be turned into assembler.  I guess you'd need to
> > check at this point if the loop contains only operations you support,
> > and if so, produce some different code (possibly GPU).  Then either
> > abort the job here by raising some exception, or if it makes sense,
> > change the 'operations' list so that it becomes just a few assembler
> > instructions that will start and stop the GPU code.
> > 
> > My own two cents about this project, however, is that it's relatively
> > easy to support a few special cases, but it quickly becomes very, very
> > hard to support more general code.  You are likely to end up with a
> > system that only compiles to GPU some very specific templates and
> > nothing else.  The end result for a user is obscure, because he won't
> > get to use the GPU unless he writes loops that follow exactly some
> > very strict rules.  I certainly see why the end user might prefer to
> > use a DSL instead: i.e. he knows he wants to use the GPU at specific
> > places, and he is ready to use a separate very restricted "language"
> > to express what he wants to do, as long as it is guaranteed to use the
> > GPU.  (The needs in this case are very different from the general PyPy
> > JIT, which tries to accelerate any Python code.)
> > 
> > 
> > A bientôt,
> > 
> > Armin.

Luciano Ramalho | 19 Nov 21:06 2014

Was dict subclass discrepancy "fixed" (issue 708)?


I am writing a book about Python 3 [0] and while researching the
caveats of subclassing built-in types I discovered the page
"Differences between PyPy and CPython" [1] and issue #708, "Discrepancy
in dict subclass __getitem__ calls between CPython 2.7 and PyPy 1.5".


However, when testing with pypy3-2.4.0 and pypy-2.4.0 my results were
the same as with CPython, and not as documented in [1].

So was issue 708 "fixed" and now PyPy misbehaves in the same way as CPython?
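For reference, the discrepancy tracked in issue 708 can be demonstrated with a snippet along these lines; the outputs shown are CPython's, where built-in dict methods such as get() bypass an overridden __getitem__:

```python
class D(dict):
    def __getitem__(self, key):
        return 42  # override indexing unconditionally

d = D(a=1)
print(d["a"])      # prints 42: subscription uses the override
print(d.get("a"))  # prints 1 on CPython: get() bypasses __getitem__
```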





Luciano Ramalho
Twitter:  <at> ramalhoorg

Professor at:
Twitter:  <at> pythonprobr
黄若尘 | 18 Nov 02:46 2014

An idea about automatic parallelization in PyPy/RPython

Hi everyone,

   I'm a master's student in Japan and I want to do some research on PyPy/RPython.
   I have read some papers about PyPy and have some ideas about it. I have communicated with Mr. Bolz and
was advised to send my question here.

   Actually, I wonder if it is possible to automatically parallelize the traces generated by the JIT,
that is, check whether a hot loop is parallelizable and, if so, run the trace in parallel on a multi-core CPU
or GPU to make it faster.
   I think this may be suitable because:
   1. The trace-based JIT targets loops, which map directly onto parallel computation.
   2. There is no control flow in a trace, which suits GPU fragment programs.
   3. We may use the @elidable hint in the interpreter code, since elidable functions are insensitive to
execution ordering and can therefore be executed in parallel.

   What do you think about it?

Best Regards, 
Huang Ruochen