lozinski | 28 Oct 12:51 2015

Durus News

In my blog, I am covering ZODB and Durus.

I believe they are both based on the class Persistent. The best thing
for both of them is to have two databases that support the same API.
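To make that concrete, here is a minimal sketch of my own (not taken
from either project's documentation) of a class that either database
could store; as far as I can tell, only the import path differs:

# Hypothetical illustration: both libraries persist ordinary attribute
# assignments on a Persistent subclass.
try:
    from durus.persistent import Persistent   # Durus
except ImportError:
    from persistent import Persistent         # ZODB

class Note(Persistent):
    def __init__(self, text):
        # Setting an attribute marks the object as changed, so either
        # database can write it out at commit time.
        self.text = text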

Durus makes a lot of sense for the Python Kivy community: it is much
better suited than ZODB for running on cell phones. And Durus runs on
SQLite, while ZODB does not.

Comments appreciated. In particular, what I need are links to active
users of Durus.

On Twitter I am "Persistent Python", <at> zodb4.

Is anyone using it with Kivy? With SQLite?

Richard JENNINGS | 3 Mar 20:04 2015

Durus-users Ping

I started using Zope but found it too unwieldy for my purposes. I found Durus much more to my liking: I use only the web-server part plus the Durus database, and replaced all the rest with my own objects. I now have a small web tool for tracking real-estate portfolios and associated information. I had an automated process execution environment running on my earlier version and have yet to recommission it following a major refactoring.


Richard J


Jesus Cea | 26 Feb 00:55 2015


Is this still alive?



Roger Masse | 12 Jun 16:46 2013

RELEASED: DurusWorks v1.2, Dulcinea v0.22

Greetings --

(this is cross-posted from qp <at> mems-exchange.org)

It's been a while since we've done update releases of DurusWorks and Dulcinea.

These updates include some small pieces of new functionality, but largely reflect the tracking of changes in Python and the evolution of style in programming web applications… and of course bug fixes!

This software is made available, through the generosity of my employer CNRI, under an open source license, in the software section of the MEMS Exchange website.

DurusWorks, the repackaging of Durus, qp, qpy, and sancho, was originally released in September 2011. Dulcinea, which goes back more than 10 years, has been the basis of several successful projects here at CNRI, including:

- The GRIN Exchange

Here are the details of the changes made since the last releases of DurusWorks and Dulcinea.

For the foreseeable future, I will be the sole developer and maintainer of these packages here at CNRI.

Much of this great work was accomplished by my former colleague here at CNRI, David Binger, who, for the moment anyway, is not doing software development.

My direct contact information is Roger Masse <rmasse <at> mems-exchange.org>, but please first consider using the qp list (qp <at> mems-exchange.org) to post comments or report bugs about DurusWorks or Dulcinea.

Specific questions about the durus database can be posted to the durus-users list (durus-users <at> mems-exchange.org).

Since this is my first time managing these releases, please let me know if something doesn't look right.

Thank you,

-Roger Masse - Software Developer - CNRI

Jesus Cea | 4 Feb 23:33 2013

DurusWorks 1.1 is not installing "_persistence.so"

I just installed the current DurusWorks (1.1) on a fresh new machine,
and I was seeing this warning in my logs: "Using Python base classes
for ..."

The "_persistence.so" extension is being compiled correctly, but it is
not installed when doing "python setup.py install". I copied the
shared library by hand, and then it works correctly.
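For comparison, here is a minimal sketch (hypothetical, not the actual
DurusWorks setup.py) of the kind of declaration that makes distutils
install a compiled extension along with the package:

# Listing the module in ext_modules is what makes
# "python setup.py install" copy the compiled _persistence.so into the
# installed durus package. The source path here is an assumption.
from distutils.core import setup, Extension

setup(
    name='DurusWorks',
    version='1.1',
    packages=['durus'],
    ext_modules=[
        Extension('durus._persistence',
                  sources=['durus/_persistence.c']),
    ],
)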

Since the Durus installation directory is not versioned (I think this
should be considered a bug), people are probably not seeing this
because they are installing over an old Durus version and using the
old "_persistence.so" that is already installed (risking subtle bugs
here :-).

Please use versioned Durus directories :-). They are very convenient
if you want several Durus versions on the same system (for whatever
reason). I am still running 3.7 somewhere because of heavily modified
code :-), and there this bug would have been very obvious :-P


David Binger | 16 Jan 17:38 2012

Programming Job

Anyone looking for full-time steady employment using Durus and QP?

CNRI is looking to hire a Python programmer.
We've posted the position on the Python Jobs Board:

Please contact me directly if you are interested,
and put "PYTHON" in the subject line to make sure
that I don't miss it.

- David Binger
MEMS Exchange
David Hess | 14 Aug 18:44 2011

Re: Interesting BTree failure

After some more investigation: this occurred right after a live pack of
the database, during which changes to this BTree were made and committed.

Afterwards, the oids seem to be jumbled up in this BTree (at least; maybe
elsewhere in the database too). Unpickled Persistent objects are not what
they should be: interior BNodes are sometimes application classes, and
stored values are sometimes BNodes rather than application classes.


On Aug 14, 2011, at 10:45 AM, David Hess wrote:

> We have a long running process that does a lot of work with BTrees
> and, in the middle of doing an "in" operation, we got this traceback:
> File "/usr/local/lib/python2.6/dist-packages/durus/btree.py", line 343, in __contains__
>  return self.root.search(key) is not None
> File "/usr/local/lib/python2.6/dist-packages/durus/btree.py", line 93, in search
>  return self.nodes[position].search(key)
> File "/usr/local/lib/python2.6/dist-packages/durus/btree.py", line 93, in search
>  return self.nodes[position].search(key)
> File "/usr/local/lib/python2.6/dist-packages/durus/persistent.py", line 173, in _p_load_state
>  self._p_connection.load_state(self)
> File "/usr/local/lib/python2.6/dist-packages/durus/connection.py", line 182, in load_state
>  pickle = self.get_stored_pickle(oid)
> File "/usr/local/lib/python2.6/dist-packages/durus/connection.py", line 111, in get_stored_pickle
>  record = self.storage.load(oid)
> File "/usr/local/lib/python2.6/dist-packages/durus/file_storage.py", line 96, in load
>  raise KeyError(oid)
> KeyError: '\x00\x00\x00\x00\x00\x00T\xb4'
> This is using shelf storage on Durus 3.7 and is a long running process.
> My best guess is a ghosted BNode that thought it was persisted to disk
> but really wasn't? I think I've seen this once before, a couple of years
> ago (i.e. it seems to be really rare).
> Is this new and/or known and fixed in Durus 3.8?
> Dave
David Hess | 11 Aug 16:37 2011

Durus repair

We have servers that deal with power outages (and dirty power in general)
and have ended up with some corrupted Durus databases. We've used the
normal "repair" feature to handle a lot of the cases, but we have another
case, occurring occasionally, that is not handled by repair. It fails
with this exception:

Traceback (most recent call last):
  File "/usr/local/bin/durus", line 22, in <module>
  File "/usr/local/lib/python2.6/dist-packages/durus/client.py", line 108, in client_main
  File "/usr/local/lib/python2.6/dist-packages/durus/client.py", line 35, in interactive_client
    storage = FileStorage(file, readonly=readonly, repair=repair)
  File "/usr/local/lib/python2.6/dist-packages/durus/file_storage.py", line 73, in __init__
    self.shelf = Shelf(filename, readonly=readonly, repair=repair)
  File "/usr/local/lib/python2.6/dist-packages/durus/shelf.py", line 91, in __init__
    self.file, repair=repair)
  File "/usr/local/lib/python2.6/dist-packages/durus/shelf.py", line 296, in read_transaction_offsets
    file.seek(position + 8 + record_length)
  File "/usr/local/lib/python2.6/dist-packages/durus/file.py", line 41, in seek
    self.file.seek(n, whence)
IOError: [Errno 22] Invalid argument

We've come up with the following patch:

--- /usr/local/lib/python2.6/dist-packages/durus/shelf.py	2007-04-24 16:10:16.000000000 -0500
+++ durus/shelf.py	2011-08-11 08:18:51.169288642 -0500
@@ -297,18 +297,17 @@
         if file.tell() != transaction_end:
             raise ShortRead
         return transaction_offsets
-    except ShortRead, e:
+    except (ShortRead, IOError), e:
         position = file.tell()
         if position > transaction_start:
             if repair:
-                e.args = repr(dict(
+                raise ShortRead(repr(dict(
                     transaction_end = transaction_end,
-                    position=position))
-                raise
+                    position=position)))
         return None

Our test database started out as:

-rw-r--r--  1 fishfinder fishfinder 49371713 2011-08-10 14:12 db.durus

We ran with this patch and without the repair switch and got the following (expected) report:

durus.utils.ShortRead: {'position': 47501336L, 'transaction_start': 47501312L,
'transaction_end': 12061693972974895416L}

We then ran again with --repair, and the database could be opened. The resulting database looked like this:

-rw-r--r--  1 fishfinder fishfinder 47501312 2011-08-11 08:14 db.durus

Can anybody comment on whether this patch makes sense and is safe enough?
We (fortunately!) have only a limited number of corrupted databases to
try it on.


David Hess | 3 Jun 20:17 2011

Prevent a persistent object from being ghosted?

NB: I don't have to worry about data invalidation due to aborts and conflicts; there's only one writer to this database, so by design those cannot occur.

So, my question is: if I have a long-lasting reference to a PersistentObject instance, is there a safe way for that object to veto being ghosted by the cache manager? It looks like reimplementing _p_set_status_ghost as a "pass" might work, but there may be some not-so-obvious side effects.
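For concreteness, this is the override I mean (untested; a minimal
sketch assuming durus.persistent.Persistent is the base class, and
that nothing else depends on ghosting actually happening):

from durus.persistent import Persistent

class PinnedPersistent(Persistent):

    def _p_set_status_ghost(self):
        # The cache manager calls this to drop the object's loaded
        # state; making it a no-op keeps the state in memory. Any
        # hidden side effects are exactly my open question.
        pass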


Jesus Cea | 24 Sep 19:57 2010

Moving cache from object count to size

Durus objects, when in RAM, could keep the object size in a volatile
attribute (not stored on disk). This attribute could be set when
loading the pickle from disk (you directly have the size), or when
storing the object to disk (you have to create the pickle, so you have
the size too). In fact, this size bookkeeping could be managed in a
separate internal dictionary; it doesn't have to live inside the
object.

I have objects of very dissimilar sizes in my storage, so the current
cache control (that is, object count) is not representative of actual
memory usage. I have objects 50 bytes long and objects 60 Kbytes long
:-(. I would suggest changing the cache code to limit the cache size
instead of the object count; a sketch of the idea follows.
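Something like this (hypothetical names, not the actual Durus cache
API; the eviction order is deliberately left naive):

class SizeBoundedCache(object):

    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.objects = {}  # oid -> object
        self.sizes = {}    # oid -> size of the stored pickle, in bytes
        self.total = 0     # running sum of self.sizes.values()

    def add(self, oid, obj, pickle_size):
        # The size comes for free: when loading a pickle from disk, or
        # when creating the pickle to store the object.
        self.objects[oid] = obj
        self.total += pickle_size - self.sizes.get(oid, 0)
        self.sizes[oid] = pickle_size
        self.shrink()

    def shrink(self):
        # Evict entries (in arbitrary order here; a real cache would
        # prefer least-recently-used) until under the byte budget.
        while self.total > self.max_bytes and self.objects:
            oid, _ = self.objects.popitem()
            self.total -= self.sizes.pop(oid)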


PS: I could consider patching this myself, if the Durus developers are
interested but have no spare time.


Jesus Cea | 23 Sep 19:02 2010

My wishlist for Durus 3.8 (20070503)

This is a draft I wrote three years ago. Maybe it can still be useful.

I would like Durus to be more "community driven".

This is a preview version.

As ever, I'm ready to help implement this, if you ask.

* "Factorize" the durus server socket management (in particular, socket
creation for incomming connections and socket "select") to be able to
reuse the server code in other communication media, like shared memory,
intraprocess queues or mmaped files.

* "connection" objects should provide a method to query how much
accumulated idle time was spent waiting for the storage.

* A precompiled Durus distribution for Windows users. Please! Durus
programmers can't do it. Any other gentle soul able to provide this
service? Durus deserves it!

* When getting a "late conflict", the client should get the *real* OID
list of the conflicting objects. Since a storage implementation could
choose to only report conflicts in a "late" way, this check should be
done even for commits with no changed objects.

* Be able to raise a "read only" exception when a client requests new
OIDs or tries to commit changed objects. This would allow read-only
connections without setting the entire storage read-only. Also,
currently, if a storage is read-only, clients are disconnected when
they try to commit changes, with no real indication of the problem.

* The new "gen_oid_record" implementation in the "Storage" class breaks
encapsulation and leaks internal implementation details of FileStorage.

* Add a "connection.close()" method.

* Add a storage wrapper to be able to share an arbitrary storage
between multiple "connection()"s. For example, you pass a storage
instance to a function and it gives you back a builder object tied to
that storage; each call to that builder gives you a "mutexed" storage
instance wrapping the original one.

 This way you can share a single FileStorage between threads, for
example; see the sketch below.
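 A minimal sketch of what I have in mind (hypothetical, not an existing
Durus API; it assumes only methods are accessed on the storage):

import threading

def shared_storage_builder(storage):
    lock = threading.Lock()

    class MutexedStorage(object):
        # Forward every method call to the wrapped storage, holding
        # the shared lock for the duration of the call.
        def __getattr__(self, name):
            method = getattr(storage, name)
            def locked(*args, **kwargs):
                with lock:
                    return method(*args, **kwargs)
            return locked

    return MutexedStorage

# Usage sketch: builder = shared_storage_builder(file_storage)
# Each thread then opens its own connection over builder().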

