Eric B | 1 Oct 21:53 2014

Are attachments duplicated for each revision as well?

Given that attachments are seemingly stored as key/value pairs within a
document, does that mean that each revision of a document contains the
attachments as well?  Or are they stored independently?

For instance, given a 5KB document with a 100MB attachment that has 10 revs
(where the attachment was added in rev 1), will the total storage
requirement be 5KB * 10 + 100MB, or (5KB + 100MB) * 10?
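As a back-of-the-envelope sketch (this only works through the arithmetic of the two scenarios; it makes no claim about which one CouchDB actually implements):

```javascript
// The two storage scenarios from the question, treating 1 MB = 1024 KB.
var docKB = 5;          // document body size
var attachmentMB = 100; // attachment size
var revs = 10;          // number of revisions kept

// Scenario A: the attachment is stored once, bodies per revision.
var attachmentOnceMB = docKB * revs / 1024 + attachmentMB;     // ~100.05 MB

// Scenario B: every revision carries its own copy of the attachment.
var attachmentPerRevMB = (docKB / 1024 + attachmentMB) * revs; // ~1000.05 MB
```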


Eric B | 1 Oct 20:02 2014

How to store the delta between doc revisions?

I'm new to CouchDB and trying to figure out the best way to store a history
of changes for a document.

Originally, I was thinking that the thing that makes the most sense is to use
the update function of CouchDB, but I'm not entirely sure if I can.  Is there
some way to use the update function and modify/create a second document in
the process?

For example, suppose I have a document which contains notes for a client.
Every time I modify the notes document (i.e. add new lines or delete lines),
I want to record the changes made to it.  If there were a way to use
CouchDB's rev fields for this, my problem would be solved, but since
CouchDB deletes non-current revs upon compaction, that is not an option.

So instead, I want to create a "history_log" document, where I can just
store the delta between documents (as a patch, for example).

In order to do this, I need to take my existing document and my new document,
compare them, and write the changes to a history_log document.  But I don't
see if/where I can do that within an update handler.

Is there something that can help me do this easily within CouchDB?  Are
there patch or json compare functions I can have access to from within a
CouchDB handler?
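As far as I know, an update handler can only return a single document, so the delta usually has to be computed by the client (or a small external process) that writes both the updated doc and the history_log entry. A minimal, naive shallow diff could look like this (a sketch only; real patch formats such as JSON Patch, RFC 6902, are richer):

```javascript
// Naive shallow diff between two document revisions, suitable as
// the payload of a hypothetical history_log entry.
function shallowDiff(oldDoc, newDoc) {
  var delta = { added: {}, removed: [], changed: {} };
  Object.keys(newDoc).forEach(function (k) {
    if (!(k in oldDoc)) {
      delta.added[k] = newDoc[k];
    } else if (JSON.stringify(oldDoc[k]) !== JSON.stringify(newDoc[k])) {
      delta.changed[k] = { from: oldDoc[k], to: newDoc[k] };
    }
  });
  Object.keys(oldDoc).forEach(function (k) {
    if (!(k in newDoc)) {
      delta.removed.push(k);
    }
  });
  return delta;
}
```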


Lena Reinhard | 1 Oct 13:27 2014

[NEWS] Your links for the CouchDB Weekly News?

Hi everyone,

if you want to submit a link for this week's CouchDB Weekly News, please send it to me by Thursday, October
02, 2014, 12pm UTC+2.

Thanks in advance for your support & best regards

Luca Morandini | 30 Sep 11:05 2014

"keys is null" message when a view is performed on many docs


I wrote a view to join docs on a common attribute, making the reduce part store 
docs with the same key for both sides of the join.

This works for a few docs, but when I try to run the view on, say 100K docs, 
CouchDB's logs keep on saying:
OS Process #Port<0.3458> Log :: function raised exception (new TypeError("keys is 
null", "undefined", 5))

The view is as such:

The Map part:
function (doc) {
   if (doc.joinside) {
     emit([ doc.joinkey, doc.joinside ], doc.feature);
   }
}

The Reduce part:
function (keys, values, rereduce) {
   var lefts = [];
   var rights = [];

   for (var i = 0; i < keys.length; i++) {
     if (keys[i][0][1] == "left") {
     } else {
(Continue reading)
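One likely cause of the "keys is null" error: on rereduce, CouchDB calls the reduce function with `keys` set to null, so any unconditional loop over `keys.length` throws. A sketch of a join-style reduce that guards against this (reusing the field names from the map function above):

```javascript
// Reduce for a view whose map emits ([joinkey, joinside], doc.feature).
// In the first-level reduce, `keys` is a list of [emittedKey, docId]
// pairs; on rereduce, `keys` is null and `values` holds previous
// reduce outputs, which must be merged instead.
function reduceJoin(keys, values, rereduce) {
  var lefts = [];
  var rights = [];
  if (rereduce) {
    for (var i = 0; i < values.length; i++) {
      lefts = lefts.concat(values[i].lefts);
      rights = rights.concat(values[i].rights);
    }
  } else {
    for (var j = 0; j < keys.length; j++) {
      if (keys[j][0][1] === "left") {  // keys[j][0] is [joinkey, joinside]
        lefts.push(values[j]);
      } else {
        rights.push(values[j]);
      }
    }
  }
  return { lefts: lefts, rights: rights };
}
```

Note that accumulating whole values in a reduce like this can still trip CouchDB's reduce output size check on large databases; joins of this kind are often done client-side over a map-only view instead.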

Nikolay Vlasov | 29 Sep 16:05 2014

GeoCouch and CouchDB 1.6.1 compatibility

Hi everyone,

I'm trying to compile Geocouch with CouchDB 1.6.1 on Raspberry Pi with
Raspbian Wheezy. The code compiled correctly, without errors, but the system
doesn't pass the tests. I'm also trying to implement the example from this
book: Getting Started with GEO, CouchDB, and Node.js,
and on a request like the following: ,0,180,90

I'm getting this in the log (server freshly restarted):

[Mon, 29 Sep 2014 14:01:28 GMT] [info] [<0.32.0>] Apache CouchDB has
started on
[Mon, 29 Sep 2014 14:01:53 GMT] [info] [<0.185.0>] Opening index for db:
geoexample idx: _design/geocats sig: "76cb79e5a7640ee806c556ed29b882d5"
[Mon, 29 Sep 2014 14:01:53 GMT] [info] [<0.189.0>] Starting index update
for db: geoexample idx: _design/geocats
[Mon, 29 Sep 2014 14:01:54 GMT] [error] [emulator] Error in process
<0.195.0> with exit value:

[Mon, 29 Sep 2014 14:01:54 GMT] [error] [<0.117.0>] {error_report,<0.31.0>,
(Continue reading)

Dragos Stoica | 27 Sep 18:10 2014

Appzip - CouchDB Application deployment tool

Hello Couch-ers,

We invite you to use and have fun with Appzip, a CouchDB application deployment tool.
You may download and install this tool from:

The purpose of this tool is to ease the CouchDB application development life cycle and distribution.
It was inspired by couchapp, which we used for a while before deciding to make the process easier.

You can distribute your CouchDB application as a zip archive and upload/publish it to CouchDB with this tool.
The installation of Appzip is straightforward: clone the git repository, edit and run ./, then open
http://localhost:5984/appzip/_design/appzip/index.html in your browser.

In order to build a CouchDB application you construct a directory and subdirectory structure that mirrors
the database structure:
- database name
- documents; if it is a design document, you create subfolders with views, lists, shows, etc.
- there are a couple of special files: manifest.json in the root directory, containing a list of your
databases, and in each document subfolder a doc_attributes.json containing doc._id and other attributes.
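To illustrate the layout described above, a manifest.json along these lines (names are hypothetical; the actual schema is whatever Appzip defines) would sit in the root directory:

```
{
  "databases": ["clients", "invoices"]
}
```

and each document subfolder would carry a doc_attributes.json such as:

```
{
  "_id": "_design/app"
}
```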

You can see examples of Appzip at:

We will be pleased to answer your questions.

We will be back with good news on designeditor, a documentation generator for _design docs, and a site (WIP)
on github.

Relax and have fun with CouchDB!
(Continue reading)

Conor Mac Aoidh | 26 Sep 10:57 2014

Post Insert Validation

Hi All,

I was previously using the validate_doc_update function for document 
validation. However, now I need to be able to change the contents of a 
document if it is invalid.

I was wondering what the best way to go about this is. Is there any
design function that executes after a document insert and that will
allow me to edit the document contents? I understand this will result in
an additional revision.
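Since validate_doc_update can only accept or reject a write, one workaround is an external process that follows the _changes feed and, whenever it sees an invalid document, saves a corrected revision (accepting the extra revision that creates). The correction step itself can be a pure function; the field names below are hypothetical:

```javascript
// Hypothetical post-insert correction: coerce an invalid/missing
// field and normalize another, returning a fixed copy of the doc.
function repairDoc(doc) {
  var fixed = JSON.parse(JSON.stringify(doc)); // deep copy, leave input intact
  if (typeof fixed.status !== "string") {
    fixed.status = "unknown";         // fill a missing/invalid field
  }
  if (typeof fixed.notes === "string") {
    fixed.notes = fixed.notes.trim(); // normalize whitespace
  }
  return fixed;
}
```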



Mike Marino | 25 Sep 18:09 2014

Workarounds for multipart/related ordering

Hi all,

PUTs of documents with attachments using multipart/related associate the
attachments based upon the ordering of the submitted JSON in the
_attachments dictionary.  (This has been noted as an issue here: , but not yet addressed
for the past couple of years.)  I'm curious whether anyone has developed or
uses a workaround to ensure that the output JSON, e.g. in JavaScript,
follows a defined ordering?
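One possible client-side workaround is to canonicalize the order yourself: rebuild _attachments with its keys sorted before serializing, and append the binary multipart parts in the same order. A sketch (this relies on JS engines preserving string-key insertion order during serialization, which modern engines do):

```javascript
// Rebuild doc._attachments so its keys serialize in sorted order,
// and return that order so the multipart parts can follow it.
function sortAttachments(doc) {
  var names = Object.keys(doc._attachments || {}).sort();
  var ordered = {};
  names.forEach(function (name) {
    ordered[name] = doc._attachments[name];
  });
  doc._attachments = ordered;
  return names; // append multipart parts in exactly this order
}
```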

Lena Reinhard | 25 Sep 17:58 2014

[BLOG] The CouchDB Weekly News is out

Hi everyone,

this week’s CouchDB Weekly News is out:

- How to roughly estimate the maximum disk space needed for a CouchDB?
- Setting up CouchDB 2.0
- Physical database movement without shutting down CouchDB
- "Learn you CouchDB!" and more releases
- many new, upcoming events
… as well as the regular Q&A, discussions, “get involved”, job opportunities and “time to relax!”-content

Thanks to Dave, Andy, Jan, Robert and Noah for submitting links!

We want to ask you to help us promote the News; this is also a good way to contribute to the project –

Thank you & all the best 

Panny Wang | 25 Sep 10:43 2014

Is there a general rule to estimate the maximum disk space for couchdb database?


We are using CouchDB in our new project, but a question has been raised:
how much space should I prepare for a CouchDB database?

My simple example is:
1000 * 1KB documents are appended to the database at the beginning of
each day, and each document is updated every hour (i.e. 24
revisions are kept per document per day).
So how much disk space, at most, will be used in this case?
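As a rough upper bound for the example above, assuming each hourly update appends a full copy of the document body (CouchDB's storage file is append-only until compaction) and ignoring b-tree and metadata overhead:

```javascript
// Per-day growth for 1000 x 1KB docs updated 24 times a day,
// with every revision appending a full body copy.
var docs = 1000;
var docSizeKB = 1;
var revsPerDay = 24;
var perDayMB = docs * docSizeKB * revsPerDay / 1024; // = 23.4375 MB/day
```

Actual files will be larger because of index and metadata overhead, and daily compaction would reclaim the space of superseded revisions.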

I understand that disk space is not as expensive as it used to be, and that
compaction can be done every day to save space. But if we have no basis
for estimating how much disk space will be used, the whole system may run
into an 'out of space' situation, and it is unpredictable when the space
will run out.

It would be appreciated if you could shed some light on how to estimate
the maximum disk space for a CouchDB database.

Thank you!

Pavlo Tilinin | 24 Sep 13:08 2014

physical database movement without shutting down couchdb

Dear users,
Is there any way to move database files physically without shutting down the whole CouchDB engine?

My environment:
CouchDB v1.2.0,
Ubuntu Server 12.04,
>300 couch databases which together consume more than 300GB of space.
The largest database is 25GB; about twenty databases are 10-25GB, and
about twenty are 5-10GB. All others are 1GB or less.
Every database serves one customer.

My task:
Currently I am running out of free space on the partition which hosts
database_dir, and I'm looking for a way to move databases to another
physical location (which is already connected and ready to use). I need to
move, if not all DBs, then at least some of them, to free space for the
databases that will be left in their current place.
But I can't stop CouchDB for this maintenance for more than 15 minutes due
to SLAs, and 15 minutes is not enough to copy 300 gigs :(, which is why I
am asking for any known method to move the databases physically.

What I already tried:

* Rsync the files from the old location to the new one, to make the copy as
close to the original as possible; then stop the engine for a shorter time,
rsync again, and start with the new database_dir. But rsync helps little,
because the files are too large and continuously changing, so every run of
rsync is almost equal to a full copy.

* Replicate databases inside one CouchDB instance: if we have
/oldlocation/db1.couch, then I create the symlink
`ln -s /newlocation/db1.couch /oldlocation/localreplica_db1.couch` and then
PUT to the _replicator database a request to replicate from db1 to
localreplica_db1. My idea was to keep two copies of each db for some time,
and then restart CouchDB with the new database_dir. I would keep the same
names, but this method disregards the "_changes" sequence, which is
significant.

(Continue reading)