Tupolov | 31 Aug 10:27 2014

Alivemod.com looking for CouchDB ninja

We are a volunteer game modding team working on ALiVE Mod (http://alivemod.com) - an Arma 3 mod. We are
looking for some volunteer expertise to help us optimise our CouchDB implementation.

Our mod uses an extension to Arma 3 that allows us to capture in-game events and pass them to external
components. Our current integration sends events to a CouchDB database. This data can be viewed in our War
Room application, providing leaderboards, after-action review and more for Arma 3 players.

We have had an overwhelming response from the Arma 3 community and have over 4000 players registered and
over 1 million events captured.

We are looking for someone who could review our CouchDB implementation and provide some recommendations
on optimisation. Any help is much appreciated, and you will be credited as part of the mod! Please contact us
for more information!


ALiVE Mod Team

Conor Mac Aoidh | 29 Aug 15:50 2014

validate_doc_update design function

Hi All,

I'm writing a validate_doc_update function at the moment. I want to 
validate that document inserts comply with a strict schema. I have been 
thinking of how to do this without making the validate function too 
complex.
Since there is no way to pass parameters to the validate_doc_update 
function, I was thinking of fetching the schema (contained in a local 
JSON file) asynchronously. This could be a terrible idea. However, I've 
found that I can request the schema once and then store it. So, there 
would be one initial performance hit in fetching the file, and from then 
on it would be saved. See the example function:
function validate(new_doc, old_doc, userCtx) {
    if (typeof this.schema == 'undefined') {
        // get the schema
    }
    // make sure new_doc conforms to this.schema[new_doc.type]
}

I'm just wondering, is there a better way to do this? Are there any 
compelling reasons not to do this?

Also, I have considered just including the schema statically in the 
function but the solution above is preferable as the schema changes 
often and I don't want to have to update the design functions.
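For reference, as far as I know the query-server sandbox offers no HTTP client, so fetching the schema from inside the function may not be workable at all, and the static variant might look like the sketch below. The schema contents and the "type" field convention are illustrative, not taken from this post.

```javascript
// A sketch of validate_doc_update with the schema embedded statically.
// The schemas and the "type"/"required" conventions are hypothetical.
function validate(new_doc, old_doc, userCtx) {
  if (new_doc._deleted) {
    return; // always allow deletions
  }
  var schemas = {
    post: { required: ["title", "body"] },
    comment: { required: ["post_id", "body"] }
  };
  var schema = schemas[new_doc.type];
  if (!schema) {
    throw({ forbidden: "unknown document type: " + new_doc.type });
  }
  schema.required.forEach(function (field) {
    if (!(field in new_doc)) {
      throw({ forbidden: "missing required field: " + field });
    }
  });
}
```

The downside remains the one noted above: every schema change means re-uploading the design document.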


(Continue reading)

Ilion Blaze | 29 Aug 00:10 2014

Help troubleshooting CouchDB crashing problem during Iron Cushion load testing

I'm working on an app that makes use of CouchDB to store orchestration 
configurations for OpenStack. Before we started putting things in 
production we decided to run some stress testing with Iron Cushion. I 
ran this in three environments - my own desktop (which ended up being 
the most powerful), a virtual server that I have for development 
(weakest), and a virtual server that is part of one of our staging 
environments (mid).

I basically used the settings used in the examples provided in the Iron 
Cushion documentation, the important ones being that I was using 100 
connections to insert 1000 documents each 20 times during the bulk 
insert phase. This had no problem running on my home system, but the 
other two would cause the CouchDB server to crash every time. I toyed 
with the settings and the only one that seemed to affect this was 
changing the number of documents per bulk insert. At 10 (100 connections 
* 10 documents per * 20) the tests passed in all environments. At 100 
documents per, the tests occasionally passed in my personal dev server 
(weakest) and always on my home system, but would consistently crash on 
the staging server.
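Since the per-request batch size was the only setting that mattered, one client-side workaround is to split large document sets into small _bulk_docs payloads. A minimal sketch (the helper and its names are ours, not part of Iron Cushion or CouchDB):

```javascript
// Split a large document set into small _bulk_docs payloads instead of
// one huge request. Each batch would then be POSTed separately to
// /{db}/_bulk_docs.
function toBulkBatches(docs, batchSize) {
  var batches = [];
  for (var i = 0; i < docs.length; i += batchSize) {
    batches.push({ docs: docs.slice(i, i + batchSize) });
  }
  return batches;
}
```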

After trying this several times I deleted the document store I'd been 
using on the staging server and created a new empty one. I ran the 
original (100x1000x20) test. It managed to insert ~1.5 million documents 
before dying. (It should have completed at 2 million).

The CouchDB log only shows the successful 201 requests for the inserts. 
There's no indication of an error or any message about it dying. I have 
the output on pastebin at: http://pastebin.com/pF8AXceY . In summary I'm 
getting IOExceptions (Connection reset by peer) and 
ClosedChannelExceptions. This sounds to me like timeout issues. Is 
(Continue reading)

Lena Reinhard | 28 Aug 16:05 2014

[BLOG] The CouchDB Weekly News is out

Hi everyone,

This week's CouchDB Weekly News is out: 


- Summary of the CouchDB meeting
- Discussions about recommended rev_limit value and deleted documents being replaced by previous versions
- Rewriting the CouchDB HTTP Layer
… as well as all the usual community releases, opinions, questions, events, jobs, "relax!" section and links.

Thanks to Andy for submitting a link!

Please help us promote the News –
Twitter: https://twitter.com/CouchDB/status/504992620939321345
Reddit: http://www.reddit.com/r/CouchDB/comments/2etmdc/couchdb_weekly_news_august_28_2014/
LinkedIn: https://www.linkedin.com/company/5242010/comments?topic=5910757586822533120&type=U&scope=5242010&stype=C&a=2a0B&goback=.bzo_*1_*1_*1_*1_*1_*1_*1_apache*5couchdb
G+: https://plus.google.com/b/109226482722655790973/+CouchDB/posts/8V8bPxfz5bW
Facebook: https://www.facebook.com/permalink.php?story_fbid=567150479983847&id=507603582605204

We invite you to share the link on the social networks you use. This helps us promote the News and is also a
way to contribute to this project. 

Thank you & all the best


Lena Reinhard | 27 Aug 09:01 2014

[NEWS] Your links for the CouchDB Weekly News?

Hi everyone,

if you want to submit a link for this week's CouchDB Weekly News, please send it to this thread by
Thursday, August 28, 11am UTC+2.

I'd especially invite you to submit content for the sections
- Releases in the CouchDB Universe
- Opinions, Talks, …
- Events
- Time to relax

Thanks in advance for your support, and have a good day today.


muji | 27 Aug 08:18 2014

Question: COPY /{db}/_local/{docid}

Hi all,

Just wondering if this is expected behaviour.

When using COPY with _local documents, if the Destination header contains a
document identifier that does not start with _local, the document
appears to be copied without the _local/ prefix, i.e. the copied
document is no longer local, it is a regular document.

If the destination id starts with _local then it is copied as a local document.

As the API call is COPY /{db}/_local/{docid}, I would expect it to only copy
to another local document.

I imagine this is by design but I find it a little counter-intuitive and a
possible way to end up replicating documents inadvertently.
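Until the server-side behaviour changes, a client can guard against this itself by normalizing the Destination header before issuing the COPY. A sketch (the helper name is ours):

```javascript
// When copying a _local document, keep the Destination header inside
// the _local namespace so the copy cannot silently become a regular,
// replicable document.
function localDestination(sourceId, destId) {
  var sourceIsLocal = sourceId.indexOf("_local/") === 0;
  var destIsLocal = destId.indexOf("_local/") === 0;
  if (sourceIsLocal && !destIsLocal) {
    return "_local/" + destId;
  }
  return destId;
}
```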

The full request/response log looks like:

[COPY] http://localhost:5984/mock%2Fdatabase/_local/mock%2Fdocument
  "destination": "mock/document/copy",
  "accept": "application/json",
  "host": "localhost:5984"
[201] http://localhost:5984/mock%2Fdatabase/_local/mock%2Fdocument
  "server": "CouchDB/1.6.0 (Erlang OTP/R15B03)",
(Continue reading)

david rene comba lareu | 26 Aug 22:09 2014

best practices on replication? recommended rev_limit value? network config?


I'm a new user of CouchDB. My company is developing a SaaS app that
relies completely on JSON manifests to work, so CouchDB was perfect for
the task. We expect a heavy load (100K users), so replication is a
very important feature for us, and since replication is promoted as
being easy in CouchDB, we decided to use it.

Before subscribing to the mailing list, I assumed that master ->
master replication was a good option, removing the single point of
failure of having only one write master at a time, but I just learned
that there exist "leaf" revisions where the data is not consistent
between masters.

So I have a couple of questions about this:

1) What is the best setup to ensure consistency? Write-only masters
replicating to read-only slaves, as in common node setups? Even though
performance is really important, consistency must come before
everything else.
2) We don't need revisions at all; all changes are final. Does
reducing the rev_limit value have a positive impact on performance? If
so, what is the recommended value?
3) Since the wiki said that SSL was not supported correctly by Erlang,
we set up HAProxy in front, forwarding requests to CouchDB over HTTP.
Since this is the first time we have worked with a database that has
an HTTP frontend (rather than a persistent connection like MySQL or
Redis), what is the recommended network setup (timeouts, keep-alive
options, etc.)? Any documentation about this would be useful.
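For what it's worth, a continuous pull replication in such a setup is usually driven by a document in the _replicator database. A sketch of one such document (the hostname and database names are placeholders):

```json
{
  "_id": "pull_from_primary",
  "source": "http://primary.example.com:5984/appdb",
  "target": "appdb",
  "continuous": true
}
```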

Any advice on this is highly appreciated!

(Continue reading)

Stefan Klein | 26 Aug 17:51 2014

Reliable Behavior or usage of a "flaw"


I have a "master" CouchDB and a CouchDB for each user. A filtered
replication is set up for each user, so documents from the master where the
owner field === username get replicated to the corresponding user's database.
If a document changes its owner, I have to remove the document from the
former owner's db.

This could be done in application logic, but might get complicated.

I came up with a different solution:

When the owner is changed, I don't simply post the new revision to the
master; instead I use _bulk_docs (with all_or_nothing: true) to create two documents:
one representing the new version of the document, and
one containing just the former owner and _deleted = true.
Both contain the _rev of the current document in the master db.

If I get the document from the master db, I "see" the new version, because
the other branch is deleted. The replication to the new owner's db picks
up the new version, since that is the one where the owner field matches,
while the replication to the former owner's db picks up the deleted version
of the document, because that is the one where its owner matches.

Is this a valid use of CouchDB and reliable behavior, or am I using some
kind of "flaw" and should feel bad?
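As a concrete illustration, the _bulk_docs payload described above would look something like this (ids, revs and field names are placeholders):

```json
{
  "all_or_nothing": true,
  "docs": [
    { "_id": "doc-1", "_rev": "3-abc123", "owner": "newuser", "payload": "new version" },
    { "_id": "doc-1", "_rev": "3-abc123", "owner": "olduser", "_deleted": true }
  ]
}
```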

(Continue reading)

Sanjuan, Hector | 25 Aug 14:16 2014

Deleted documents being replaced by previous revisions


We are running a CouchDB 1.5.0 setup with master-master replication.

I am observing that sometimes, a document has multiple revisions stored,
and when deleting the most current one, a previous one replaces it
and becomes available.

The old revision numbers that are available are non-consecutive (e.g.
rev 1234 would be replaced by 742). Querying the revs comes back
with a list of non-consecutive revisions for which a full document
exists even after compaction.

As I understand it, old revision records are kept around for
replication, with their contents subject to disappearing on compaction. I'd
assume writing a document 1000 times and then issuing a DELETE would
mark it as deleted and inform of this on subsequent GETs.

Has anyone come across anything similar? I have searched around without
much luck.

Is this maybe related to replication conflicts where the conflict is
resolved but the conflicting revisions are left behind?

As of now, getting the documents truly deleted means issuing DELETE
a few times until every leftover revision is gone. Of course, this only
shows up randomly here and there, and in small tests CouchDB deletes
and works as expected.
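As a stopgap, the remaining leaves can be enumerated in one request with GET /{db}/{docid}?open_revs=all (with Accept: application/json) and then deleted individually with DELETE ...?rev=<rev>. A small helper for picking the revs out of that response (the function name is ours; the response shape is CouchDB's, to the best of my knowledge):

```javascript
// Given the array returned by GET /{db}/{docid}?open_revs=all, collect
// the revs of leaf revisions that are not yet deleted, so each can be
// removed with an explicit DELETE /{db}/{docid}?rev=<rev>.
function liveLeafRevs(openRevsResponse) {
  return openRevsResponse
    .filter(function (entry) { return entry.ok && !entry.ok._deleted; })
    .map(function (entry) { return entry.ok._rev; });
}
```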

(Continue reading)

Suraj Kumar | 25 Aug 12:09 2014

How to make use of couchdb.request_time stats?


The _stats API exposes a metric called request_time. It looks like this:

   "couchdb" : {
      [... snip ...]
      "request_time" : {
         "stddev" : 6205.853,
         "min" : 1,
         "max" : 550399.642857143,
         "current" : 13495123.406,
         "mean" : 1083.685,
         "sum" : 13495123.406,
         "description" : "length of a request inside CouchDB without

The 'current' value is a counter, but the denominator is not available. What
is a good denominator to use? Perhaps there is some other section of the
_stats API that can be used?
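One workable denominator is the request counter from the httpd section of the same _stats payload. Sampling both counters at two points in time and diffing gives a mean over the interval; a sketch (the field names in the snapshot objects are ours; the values would come from couchdb.request_time.sum and httpd.requests.current):

```javascript
// Mean request time over a sampling interval, computed from two
// snapshots of /_stats: total ms spent handling requests divided by
// total requests served in the same interval.
function meanRequestTime(prev, curr) {
  var elapsedMs = curr.requestTimeSum - prev.requestTimeSum;
  var served = curr.requestCount - prev.requestCount;
  return served > 0 ? elapsedMs / served : 0;
}
```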




An Onion is the Onion skin and the Onion under the skin until the Onion
Skin without any Onion underneath.

(Continue reading)

Andrey Kuprianov | 25 Aug 07:36 2014

Replication error on Couch 1.5.0

Hi guys,

We are having this replication problem on a couple of our servers. Here's a
Gist with the log (https://gist.github.com/andreyvk/62f788075fe42acf4720).
I had to star out some sensitive data.

Servers are pulling data from each other and each of them is also pulling
from one more server. The whole setup was working very well until recently.

The error log contains a lot of stuff which I don't know how to interpret. I
can see some request timeout messages between the lines, and hopefully that
is what's happening right now, but I cannot be sure, because I can freely SSH
between the servers (hence, connectivity is good).

Also, when replication is started, Futon shows "Checkpointed source
sequence 2677194, current source sequence 2694398, progress 100%", which
can't be true. Moreover, the checkpointed sequence is stuck on the same
number and is not changing.

Did anyone experience a similar problem?