Herman Chan | 23 Oct 23:49 2014

Finding long-running requests within CouchDB

Hi all,

From the logs of our API server, we noticed that some requests take a
long time to return from our CouchDB server.  A quick look at /_stats
tells us there are definitely some offending requests taking place
(the max response time being 900 seconds).

We can set up monitoring, but it won't help us pinpoint the exact
request(s) that take a long time to run.  Any hints on where to start
looking into this problem?

Any help is appreciated.

Herman
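
[Editor's note] Since /_stats already aggregates response times, one starting point is to sample couchdb.request_time periodically and watch when the max jumps. A minimal sketch, assuming the CouchDB 1.x /_stats layout; the sample numbers below are invented:

```javascript
// Extract the response-time aggregates from a parsed /_stats body.
// Assumes the CouchDB 1.x layout, where couchdb.request_time holds
// min/mean/max in milliseconds.
function requestTimeSummary(stats) {
  var rt = stats.couchdb.request_time;
  return { min: rt.min, mean: rt.mean, max: rt.max };
}

// Sample /_stats body (abridged; values invented for illustration):
var sample = {
  couchdb: {
    request_time: { min: 1.5, mean: 42.0, max: 900000.0, stddev: 3.1 }
  }
};

console.log(requestTimeSummary(sample).max); // 900000
```

From there, raising the log level (e.g. PUT /_config/log/level with body "debug") makes individual requests visible in couch.log, which can then be correlated with the API server's timestamps.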

Lena Reinhard | 23 Oct 19:01 2014

[BLOG] The CouchDB Weekly News is out

Hi folks,

this week’s CouchDB Weekly News is out:

http://blog.couchdb.org/2014/10/23/couchdb-weekly-news-october-23-2014/

Highlights:
- video: "A CouchDB replication endpoint in PHP"
- Registration for CouchDB Day 2015 is open
- various new job opportunities for people with CouchDB skills
… as well as the regular Q&A, “get involved”, job opportunities and freshly
added “time to relax!”-content

Thanks to Alex for submitting a link!

We want to ask you to help us promote the News; this is also a way to
contribute to the project –
Twitter: https://twitter.com/CouchDB/status/525327013163175937
Reddit:
http://www.reddit.com/r/CouchDB/comments/2k3ymg/the_couchdb_weekly_news_october_23_2014/
Linkedin:
https://www.linkedin.com/company/5242010/comments?topic=5931094874072850432&type=U&scope=5242010&stype=C&a=VMC7&goback=.bzo_*1_*1_*1_*1_*1_*1_*1_apache*5couchdb
G+:
https://plus.google.com/u/1/b/109226482722655790973/+CouchDB/posts/56sTFyHuRVz
Facebook:
https://www.facebook.com/permalink.php?story_fbid=586621048036790&id=507603582605204

Thank you & best regards

Lena
(Continue reading)

Martin Monperrus | 22 Oct 21:13 2014

OS process timed out

Hi,

I'm suffering from an intermittent but severe CouchDB error.

The error is of the form "OS process timed out.", as shown in
http://pastebin.ubuntu.com/8629425/

My troubleshooting so far:
- There are no big documents, no heavy load on the server and few requests
to CouchDB.
- Setting "query_server_config/os_process_limit" and
"couchdb/os_process_timeout" does not seem to help.
- The bug happened in version 1.2; I migrated to 1.6 and it is still there.
- Restarting does not solve the error (at least not always)

Any clue on how to diagnose and fix the problem?

Best regards,

--Martin
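
[Editor's note] For reference, the two settings named above live in local.ini; a sketch with illustrative values (the timeout is in milliseconds — check default.ini for the actual defaults on your build):

```ini
[query_server_config]
; cap on concurrent couchjs view-server processes
os_process_limit = 25

[couchdb]
; how long CouchDB waits on the view server before "OS process timed out"
os_process_timeout = 5000
```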


Mat Degerholm | 22 Oct 06:25 2014

Assistance in tuning CouchDB heap and/or replicator config

Hi guys,

Just looking for a little assistance in tuning the CouchDB heap and/or replicator config.

When performing an initial/full replication to an empty 2nd node using the _replicator database, the 2nd
node crashes with the following error:

hend=0x00007f280d67f628
stop=0x00007f280d67f418
htop=0x00007f280d6a27b0
heap=0x00007f280d592028
beam/erl_gc.c, line 427: <0.194.0>: Overrun stack and heap
heart: Tue Oct 21 13:29:55 2014: Erlang has closed.
heart: Tue Oct 21 13:29:56 2014: Executed "/opt/couchdb/bin/couchdb -k" -> 0. Terminating.

As my replication source node has a sizeable data set (16,000 docs, 180,000 updates, total size
around 1 GB), I found I could prevent the crash by reducing the replication worker batch size.  But I had
to drop it by two orders of magnitude (from the default of 500 down to 5) before the problem disappeared!

[replicator]
; With lower batch sizes checkpoints are done more frequently. Lower batch sizes
; also reduce the total amount of used RAM memory.
worker_batch_size = 5

I'm nervous about tweaking the replicator setup so far from the defaults, so is there any way I can tune the
stack/heap instead?  I have included my ulimit output in case you can recommend any changes (note that my stack
size and max memory are unlimited).

Thanks in advance,
Mat
(Continue reading)
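
[Editor's note] Beyond worker_batch_size, the 1.x replicator exposes a few related knobs that also bound memory use on the target; a sketch with illustrative values (verify the defaults against your default.ini):

```ini
[replicator]
; fewer concurrent workers per replication lowers peak memory
worker_processes = 1
; smaller batches checkpoint more often and hold less in RAM
worker_batch_size = 100
; cap on concurrent HTTP connections per replication
http_connections = 10
```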

Andy Wenk | 21 Oct 20:56 2014

CouchDB Day 2015 - update and new sponsor

Hi everyone,

first of all, we are very excited to announce another Sponsor for the
CouchDB Day 2015 in Hamburg. It is SinnerSchrader (
http://www.sinnerschrader.com/). Yay.

http://day.couchdb.org/#sponsors

And here is a quick update.

Our search for a cool location seems to have been successful. This means we
will be able to announce the date and the location for the CouchDB Day
within the next few days. Can't wait to write that email :)

And finally, the interest in the CouchDB Day is really awesome. Many
people have already pre-registered at
https://ti.to/andywenk/couchdbday-hamburg-2015/. Remember that this
is a free event but seats are limited. So ... :)

That's it for now! Stay tuned :)

All the best and Cheers

Andy and Robert

-- 
Andy Wenk
Hamburg - Germany
RockIt!

(Continue reading)

Lena Reinhard | 16 Oct 18:10 2014

[BLOG] The CouchDB Weekly News is out

Hi everyone,

this week’s CouchDB Weekly News is out:

http://blog.couchdb.org/2014/10/16/couchdb-weekly-news-october-16-2014/

Highlights:
- many new releases in the CouchDB universe
- upcoming events and announcement of CouchDB Day in Hamburg
- Ben Bastian has been elected as a CouchDB Committer
… as well as the regular Q&A, discussions, “get involved”, job opportunities and “time to relax!”-content

Thanks to Dave, Andy and Alex for submitting links!

We want to ask you to help us promote the News; this is also a way to contribute to the project –
Twitter: https://twitter.com/CouchDB/status/522780426130423808
Reddit: http://www.reddit.com/r/CouchDB/comments/2jff1j/couchdb_weekly_news_october_16_2014/
Linkedin: https://www.linkedin.com/company/5242010/comments?topic=5928545314414825472&type=U&scope=5242010&stype=C&a=Nks_&goback=.bzo_*1_*1_*1_*1_*1_*1_*1_apache*5couchdb
G+: https://plus.google.com/b/109226482722655790973/+CouchDB/posts/Un27dSnZhRb
… and on Facebook (http://facebook.com/couchdb), which is down at the moment but might be back someday.

Thank you!

With best regards

Lena

Andy Wenk | 14 Oct 22:59 2014

Announcing CouchDB Day Hamburg 2015

Hi dear CouchDB community,

we are delighted to announce the CouchDB Day 2015 in Hamburg. We are
looking forward to creating a day for all people interested in CouchDB.
Whether you are interested in core development in Erlang, frontend
development in Fauxton, community management, CouchDB client creation or
simply creating awesome stuff with CouchDB, this event is for you.

http://day.couchdb.org/

We do not have a date yet, but we will announce it in the next few days. It
will most likely be a Saturday between January 10th and February 14th. If
you already know that you "have" to attend the event, please show your
interest at our ticket site (powered by Tito):

https://ti.to/andywenk/couchdbday-hamburg-2015/ (please use the form at the
bottom)

We are really looking forward to meeting you in Hamburg. Please spread the
word - a lot :)

All the best from Hamburg

Andy and Robert

P.S.: Relax!

-- 
Andy Wenk
Hamburg - Germany
(Continue reading)

Ingo Radatz | 14 Oct 15:04 2014

CouchDB server responds with 204 for requests that trigger indexing

I use CouchDB 1.6.0 behind an HAProxy. The CouchDB receives large batches of docs via bulk uploads.

After an upload, the next request triggers indexing as expected. Now to my problem:

When the indexing takes too long, CouchDB seems to return a 204 (seen in haproxy.log) to the
transparent HAProxy (which translates that into a 502 Bad Gateway).

Is there a timeout setting for the mochiweb server which can be increased?

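[Editor's note] On the proxy side, the usual companion fix is to raise HAProxy's server-side timeout so it outlasts the index build; a sketch (the value is illustrative — size it to your longest expected build):

```haproxy
defaults
    mode http
    # allow the slow first-read-after-bulk-load request to finish
    timeout server 600s
```
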
Gijs Nelissen | 10 Oct 16:37 2014

Email statistics: using reduce for uniques

Hi,

I have a couchDB view with about 20 million very simple events:

key: [1,1,1,1,'deliver'] { email: "john@...", ip: "..."}
key: [1,1,1,1,'open'] { email: "john@...", ip: "..."}
key: [1,1,1,1,'click'] { email: "john@...", ip: "..."}
key: [1,1,1,2,'deliver'] { email: "john@...", ip: "..."}
key: [1,1,1,2,'open'] { email: "john@...", ip: "..."}
key: [1,1,1,2,'open'] { email: "john@...", ip: "..."} <- second open by user
key: [1,1,1,2,'open'] { email: "john@...", ip: "..."} <- third open by user

Now I want to build a MailChimp/Campaign Monitor-style summary per campaign
(key[3]) that shows the number of unique delivers, unique opens, and unique
clicks.

I have been trying different approaches to achieve this by using a custom
map and reduce function.

//map
function(doc) {
   emit([doc.license.id,10, doc.release.id, doc.email.id, doc.contact.id,
doc.type], null);
}

//reduce
function(keys, values, rereduce){
    if (rereduce){
(Continue reading)
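
[Editor's note] One common workaround (a sketch of the general technique, not the poster's truncated code): end the map key with the contact id and use the built-in "_count" reduce, so a grouped query collapses every distinct (campaign, type, contact) triple into one row; counting uniques then reduces to counting rows client-side. The row data below is invented for illustration:

```javascript
// Counts distinct contacts per event type from grouped view rows.
// Assumes a map like: emit([campaign, doc.type, doc.contact.id], null)
// with reduce = "_count", queried with ?group=true, so each returned
// row is one distinct (campaign, type, contact) regardless of repeats.
function uniquesByType(rows) {
  var counts = {};
  rows.forEach(function (row) {
    var type = row.key[1]; // key = [campaign, type, contact]
    counts[type] = (counts[type] || 0) + 1;
  });
  return counts;
}

// Invented grouped-view output: contact c1 opened twice, so those two
// events collapse into a single row with value 2.
var rows = [
  { key: [2, "deliver", "c1"], value: 1 },
  { key: [2, "deliver", "c2"], value: 1 },
  { key: [2, "open", "c1"], value: 2 },
  { key: [2, "open", "c2"], value: 1 },
  { key: [2, "click", "c1"], value: 1 }
];
console.log(uniquesByType(rows)); // { deliver: 2, open: 2, click: 1 }
```

The trade-off is that the distinct count is computed by the client from the number of rows, not by the reduce itself; a reduce that accumulates sets of emails tends to blow up on 20 million events.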


