Alexander Shorin | 24 Apr 11:16 2014

[ANN] CouchDB monitoring: you're doing it wro...you can do it better!

Hi everyone again,

Actually, I have one more thing to share with you today: a guide I wrote
about monitoring CouchDB:

http://gws.github.io/munin-plugin-couchdb/guide-to-couchdb-monitoring.html

While it lives in the same repository as a plugin for one specific
monitoring system, it is completely project neutral (with a small
exception at the end) and aims to cover all the possibilities for
monitoring CouchDB server state. Using only /_stats isn't enough.
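To give a flavour of what the guide covers: a monitoring agent typically starts by polling /_stats and pulling out individual counters. A minimal sketch of parsing such a payload (the JSON here only mimics the shape of a CouchDB 1.x /_stats response; all the numbers are made up):

```python
import json

# Trimmed sample of a CouchDB 1.x /_stats response; the values are
# hypothetical, only the structure matches the real endpoint.
SAMPLE_STATS = json.loads("""
{
  "couchdb": {
    "open_databases": {"description": "number of open databases",
                       "current": 12, "sum": 12, "mean": 3.1,
                       "stddev": 1.2, "min": 0, "max": 12}
  },
  "httpd": {
    "requests": {"description": "number of HTTP requests",
                 "current": 4500, "sum": 4500, "mean": 15.0,
                 "stddev": 3.4, "min": 0, "max": 42}
  }
}
""")

def metric(stats, group, name, field="current"):
    """Pull one field of one counter out of a /_stats payload."""
    return stats[group][name][field]

print(metric(SAMPLE_STATS, "httpd", "requests"))               # request count
print(metric(SAMPLE_STATS, "couchdb", "open_databases", "max"))
```

In a real plugin you would fetch the JSON from http://localhost:5984/_stats and feed the selected fields to your monitoring system.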

If you prefer Zabbix or Nagios or something else, you may still find
some interesting bits there.

As a discussion topic, I'd like to ask everyone: how do you monitor
your CouchDB in production? Which metrics are important to you, and
which ones do you feel are missing? Experience sharing and cool stories
about how monitoring helps you keep CouchDB healthy are welcome!

P.S. English isn't my native language, so if you notice any misspellings
or sentences with incorrect syntax, please don't be shy about sending me
a private email with corrections. Thanks!

--
,,,^..^,,,

Alexander Shorin | 24 Apr 10:53 2014

[ANN] munin-plugin-couchdb 0.6 released

Hi everyone,

I'm happy to announce two pieces of news today:

First is about the new release of Munin plugin for CouchDB:

https://github.com/gws/munin-plugin-couchdb/

It includes:

- Monitoring of server admins and users
- Monitoring of document counts and fragmentation rates for specified databases
- More verbose autoconf
- Graph and spelling fixes
- Another code cleanup cycle
- Better README with examples

And the second one is that munin-plugin-couchdb isn't an abandoned
project: it's alive and going to monitor all your couches like a boss!

Thanks a lot to Gordon Stratton for starting this project and for help
with merging all the forks into the main repository! And also to
Nicholas A. Evans for keeping a fork of the project active while
activity on the main one was stalled.

--
,,,^..^,,,

Liran | 23 Apr 21:43 2014

Custom reduce is slower than _stats

Hello everyone,

I tried replacing the built-in _stats function with my own Erlang version
that does only the "min" part:
fun(Keys, Values, ReReduce) ->
    lists:min(Values)
end.

I expected it to be at least as fast, but _stats finished in half the
time of my custom reduce!

Does anyone know why?
I wanted to extend it later to return the doc._id along with the minimum
value (_stats just gives you the value, but you don't know which document
it came from), but if it's going to be so much slower, it's useless.
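The extension described above is usually done by emitting [value, doc_id] pairs from the map function and keeping the smaller pair in each reduce step. A minimal sketch of that reduce logic, written in Python for illustration only (CouchDB would run it as an Erlang or JavaScript reduce, and the function and row names here are mine):

```python
def min_with_id(keys, values, rereduce):
    """Reduce step that keeps the [value, doc_id] pair with the smallest value.

    In the map phase each row emits [value, doc_id]; on rereduce the incoming
    values are already such pairs, so the same comparison works in both phases.
    """
    return min(values, key=lambda pair: pair[0])

rows = [[7, "doc_a"], [3, "doc_b"], [9, "doc_c"]]
print(min_with_id(None, rows, False))   # [3, 'doc_b']
```

The same comparison in Erlang would still likely be slower than _stats, since the built-in reduces are handled natively inside the server rather than being sent through the external reduce machinery.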

thanks,
Liran
Schroiff, Klaus | 24 Apr 01:40 2014

BigCouch merge - conflict management

Hi all,

I am "just" a user, so I'm not sure where to post this, because the following is about the BigCouch merge.
Anyway ... I noticed the following link on the developer ML about conflict management in BigCouch:

http://atypical.net/archive/2014/04/17/understanding-race-induced-conflicts-in-bigcouch

Compared to other NoSQL databases it is, at least in my opinion, a godsend value proposition of (non-Big-)CouchDB
that it provides immediate feedback upon a document collision via MVCC.
You either win or lose, unless you deal with replication, where we have to live with more complex conflict resolution.
With the BigCouch merge things seem to be getting more fuzzy here. Or to phrase it differently: BigCouch is
not drop-in compatible when relying on this mechanism.

Now I understand the reasoning behind all this, but is this what we really want?

Of course, we could alter our code as mentioned in the post above, but this appears to be a workaround rather
than a solution. And it costs performance.
I feel it would make sense to at least have a configuration parameter where BigCouch "simply declares a
winner" rather than leaving it to the (multiple) clients to clean up the doc revisions somehow.
In my view, replica handling within the cluster should be stricter than external replication.
Just accepting the consequences by pointing to "eventual consistency" is a bit weak, IMHO.
From my viewpoint CouchDB has two key differentiators, MVCC and replication, and MVCC is becoming less
powerful here.

Thoughts?

Thanks

Klaus

Lena Reinhard | 24 Apr 01:16 2014

[NEWS] Your link for the CouchDB Weekly News?

Hey everyone,

if you want to submit a link for tomorrow's CouchDB Weekly News, please don't hesitate to send it to this
thread by April 24, 10:45am CEST.

I appreciate your support!

Best from Berlin
Lena
Del Checcolo, Christopher | 23 Apr 00:52 2014

CouchDB won't start on Fedora 19 with systemctl

My colleague and I just installed couchdb-1.5.0-1.fc19.x86_64 on Fedora 19 and we are unable to start
CouchDB using systemctl. We are able to start it using /usr/bin/couchdb.

Running 'systemctl start couchdb' followed by 'systemctl status couchdb' results in the following output:

[root@localhost yum.repos.d]# systemctl status couchdb
couchdb.service - CouchDB Server
   Loaded: loaded (/usr/lib/systemd/system/couchdb.service; disabled)
   Active: failed (Result: start-limit) since Tue 2014-04-22 18:44:25 EDT; 3s ago
  Process: 10551 ExecStart=/usr/bin/erl +Bd -noinput -sasl errlog_type error +K true +A 4 -couch_ini
/etc/couchdb/default.ini /etc/couchdb/local.ini -s couch -pidfile /var/run/couchdb/couchdb.pid
-heart (code=exited, status=1/FAILURE)

Apr 22 18:44:25 uxw-laptop-7.bio.noblis.org systemd[1]: Unit couchdb.service entered failed state.
Apr 22 18:44:25 uxw-laptop-7.bio.noblis.org systemd[1]: couchdb.service holdoff time over,
scheduling restart.
Apr 22 18:44:25 uxw-laptop-7.bio.noblis.org systemd[1]: Stopping CouchDB Server...
Apr 22 18:44:25 uxw-laptop-7.bio.noblis.org systemd[1]: Starting CouchDB Server...
Apr 22 18:44:25 uxw-laptop-7.bio.noblis.org systemd[1]: couchdb.service start request repeated too
quickly, refusing to start.
Apr 22 18:44:25 uxw-laptop-7.bio.noblis.org systemd[1]: Failed to start CouchDB Server.
Apr 22 18:44:25 uxw-laptop-7.bio.noblis.org systemd[1]: Unit couchdb.service entered failed state.

Has anyone had any luck running CouchDB as a service on Fedora 19?

Any assistance is greatly appreciated.

Thanks.

Chris.

Jean-Yves Moulin | 22 Apr 16:09 2014

Issues with terabytes databases

Hi everybody,

we have been using CouchDB in production for more than two years now, and we are almost happy with it :-) We have a heavy
write workload, with very few updates, and we never delete data. Some of our databases are terabytes in size, with
billions of documents (sometimes 20 million docs per day). But we are experiencing some issues, and the
only solution was to split our data: today we create a new database each week, with even and odd weeks on two
different servers (thus we have on-line and off-line servers). This is not perfect, and we look forward
to BigCouch :-)

Below are some of our current problems with these big databases. For the record, we use couchdb-1.2 and
couchdb-1.4 on twelve servers running FreeBSD (because we like ZFS).

I don't know if these issues are known or not (or specific to us).

* Overall speed: we are far from our real server performance: it seems that CouchDB is not able to use the full
potential of the system. Even with 24 disks in RAID10, we can't go faster than 2000 doc/sec (with an average
document size of 1k, that's only a few MB/s on disk) on replication or compaction. CPU and disk are almost
idle. Tweaking the number of Erlang I/O threads doesn't help.

* Insert time: at 1000 PUT/sec the insert time is good, even without bulk. But it collapses when launching
view calculation, replication or compaction. So we use stale views in our applications, and views are
processed regularly by cron scripts. We avoid compaction on live servers; compactions are launched
manually on off-line servers only. We also avoid replication on heavily loaded servers.

* Compaction: when the size of a database increases, compaction time can get really, really long. It would be great if
the compaction process could run faster on already-compressed docs. This is our biggest drawback, and it is what
forces the database split each week. The speed also decreases slowly: compaction starts fast (>2000 doc/sec) but
slows down to ~100 doc/sec after hundreds of millions of documents.
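The cron-driven stale-view approach mentioned under "Insert time" can be sketched roughly like this (the database, design-doc and view names are made up, and stale=update_after assumes CouchDB 1.1 or later):

```python
# Sketch of a cron-driven view warmer: hit each view with stale=update_after
# so the request returns immediately and the index refresh runs in the
# background, keeping view latency off the write path.
from urllib.parse import urlencode

def warm_url(base, db, ddoc, view):
    """Build the URL that triggers a background index refresh for one view."""
    query = urlencode({"stale": "update_after", "limit": 0})
    return "%s/%s/_design/%s/_view/%s?%s" % (base, db, ddoc, view, query)

url = warm_url("http://localhost:5984", "events_2014w17", "reports", "by_day")
print(url)
# In the real cron script you would now fetch this URL, e.g. with
# urllib.request.urlopen(url), and discard the response body.
```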

Are there other people using CouchDB with this kind of database? How do you handle a write-heavy workload?

Boaz Citrin | 22 Apr 14:06 2014

Compaction of a database with the same number of documents is getting slower over time

Hello,

Our database contains more or less the same number of documents; however,
the documents themselves change frequently.
I would expect compaction time to stay the same, but I see that over
time it takes longer to compact the database.

Why is that so? Is there any way to overcome this?

Thanks,

Boaz
Alex Schenkman | 19 Apr 17:48 2014

How to delay view responses until index is rebuilt

Hi list,

According to what I see in the logs and the results I get, the following
might be happening. Given that:

1) I update a document (using an update handler)
2) Couch starts an index update
3) I request a view
4) I get the "old" view results
5) Couch finishes its re-indexing
6) I request the same view again
7) I get the new view results

I understand why this might be happening, but in my use case I would prefer
couch to delay the answer to the view request until the index is rebuilt.
That is, once I update a document, I want views to reflect the change, even
if this means being unresponsive for a second.

Is it possible to tell couch to behave this way?
I could not find any setting for this in the config file.
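For what it's worth, a view request with no stale parameter should already block until the index catches up; old rows usually mean the view is being queried with stale=ok (or stale=update_after) somewhere. A small sketch contrasting the two request URLs (database, design-doc and view names are made up):

```python
from urllib.parse import urlencode

# Made-up database/design-doc/view names, for illustration only.
BASE = "http://localhost:5984/mydb/_design/app/_view/by_date"

# Blocks until the index is up to date, then returns fresh rows (the default):
fresh = BASE

# Returns immediately with whatever is already indexed (the "old" rows):
stale = BASE + "?" + urlencode({"stale": "ok"})

print(fresh)
print(stale)
```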

Thanks in advance!
Alex Schenkman | 18 Apr 21:46 2014

Howto delay view requests until indexes are rebuilt

Hi list,

According to what I see in the logs and the results I get, the following
might be happening. Given that:

1) I update a document (using an update handler)
2) Couch starts an index update
3) I request a view
4) I get the "old" view results
5) Couch finishes its re-indexing
6) I request the same view again
7) I get the new view results

I understand why this is happening, but in my use case I would prefer couch
to delay the answer to the view request until the index is rebuilt.
Is it possible to tell couch to behave this way?
I could not find any setting for this in the config file.

Thanks in advance!
Omer Yousaf | 17 Apr 16:21 2014

Installing CouchDB from Source: Problem Installing SpiderMonkey1.8.5

Hi,
While installing CouchDB from source I am stuck at the SpiderMonkey 1.8.5
installation step. In order to run the ./configure command for couchdb
I must specify the paths "--with-js-lib" and "--with-js-include", but failing to
install SpiderMonkey prevents me from doing so. I have tried to install
SpiderMonkey with the following steps:
1) unpack js185-1.0.0.tar.gz and libmozjs185-devel-1.0.0-3.tar.bz2 packages
2) cd to js-1.8.5\js\src folder
3) run the command: ./configure
    or: ./configure \
        --with-js-lib=/cygdrive/c/cygwin/home/omer.yousaf/src/js-1.8.5/js/src/usr/lib \
        --with-js-include=/cygdrive/c/cygwin/home/omer.yousaf/src/js-1.8.5/js/src/usr/include
I get the error "configure: error: installation or configuration problem:
C++ compiler cannot create executables".
Can you guide me as to how to get SpiderMonkey installed? Just FYI, I am
using this page for instructions on how to build CouchDB from source:
https://github.com/apache/couchdb/blob/master/INSTALL.Windows.
Best Regards,
Omer Yousaf
NorthBay Solutions, Lahore
