Lena Reinhard | 16 Oct 18:10 2014

[BLOG] The CouchDB Weekly News is out

Hi everyone,

This week’s CouchDB Weekly News is out:

http://blog.couchdb.org/2014/10/16/couchdb-weekly-news-october-16-2014/

Highlights:
- many new releases in the CouchDB universe
- upcoming events and the announcement of CouchDB Day in Hamburg
- Ben Bastian has been elected as a CouchDB Committer
… as well as the regular Q&A, discussions, “get involved”, job opportunities, and “Time to relax!” content

Thanks to Dave, Andy and Alex for submitting links!

We want to ask you to help us promote the News; this is also a way to contribute to the project –
Twitter: https://twitter.com/CouchDB/status/522780426130423808
Reddit: http://www.reddit.com/r/CouchDB/comments/2jff1j/couchdb_weekly_news_october_16_2014/
Linkedin: https://www.linkedin.com/company/5242010/comments?topic=5928545314414825472&type=U&scope=5242010&stype=C&a=Nks_&goback=.bzo_*1_*1_*1_*1_*1_*1_*1_apache*5couchdb
G+: https://plus.google.com/b/109226482722655790973/+CouchDB/posts/Un27dSnZhRb
… and on Facebook (http://facebook.com/couchdb), which is down at the moment but might be back someday.

Thank you!

With best regards

Lena

Andy Wenk | 14 Oct 22:59 2014

Announcing CouchDB Day Hamburg 2015

Dear CouchDB community,

We are delighted to announce the CouchDB Day 2015 in Hamburg. We are
looking forward to creating a day for all people interested in CouchDB.
Whether you are interested in core development in Erlang, frontend
development in Fauxton, community management, CouchDB client creation, or
simply creating awesome stuff with CouchDB, this event is for you.

http://day.couchdb.org/

We do not have a date yet, but we will announce it in the next few days. It
will most likely be a Saturday between January 10th and February 14th. If
you already know that you "have" to attend the event, please show your
interest at our ticket site (by Tito):

https://ti.to/andywenk/couchdbday-hamburg-2015/ (please use the form at the bottom)

We are really looking forward to meeting you in Hamburg. Please spread the
word - a lot :)

All the best from Hamburg

Andy and Robert

P.S.: Relax!

-- 
Andy Wenk
Hamburg - Germany

Ingo Radatz | 14 Oct 15:04 2014

CouchDB server responds with 204 for requests that trigger indexing

I use CouchDB 1.6.0 behind an HAProxy. The CouchDB instance receives large batches of docs via bulk uploads.

After an upload, the next request triggers indexing as expected. Now my problem:

When the indexing takes too long, CouchDB seems to return a 204 (seen in the haproxy.log) to the
transparent HAProxy (which itself translates that to a 502 Bad Gateway).

Is there a timeout setting for the MochiWeb server which can be increased?
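
For reference, the 502 in a setup like this is usually produced by HAProxy itself when the backend does not answer within its "timeout server" window, so the proxy timeouts are worth checking alongside any CouchDB setting. A minimal haproxy.cfg sketch (the values are assumptions, not recommendations):

# haproxy.cfg (sketch only; timeout values are assumptions)
defaults
    mode http
    timeout connect 5s
    timeout client  60s
    # raise this if a request can block on view indexing for minutes
    timeout server  600s
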
Gijs Nelissen | 10 Oct 16:37 2014

Email statistics: using reduce for uniques

Hi,

I have a CouchDB view with about 20 million very simple events:

key: [1,1,1,1,'deliver'] { email: "john@...", ip: "..."}
key: [1,1,1,1,'open']    { email: "john@...", ip: "..."}
key: [1,1,1,1,'click']   { email: "john@...", ip: "..."}
key: [1,1,1,2,'deliver'] { email: "john@...", ip: "..."}
key: [1,1,1,2,'open']    { email: "john@...", ip: "..."}
key: [1,1,1,2,'open']    { email: "john@...", ip: "..."}  <- second open by user
key: [1,1,1,2,'open']    { email: "john@...", ip: "..."}  <- third open by user

Now I want to produce a MailChimp/Campaign Monitor-style summary per campaign
(key[3]) that shows the number of unique delivers, unique opens, and unique
clicks.

I have been trying different approaches to achieve this by using a custom
map and reduce function.

//map
function(doc) {
  emit([doc.license.id, 10, doc.release.id, doc.email.id, doc.contact.id, doc.type], null);
}

//reduce
function(keys, values, rereduce){
    if (rereduce){
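One common way to get unique counts out of a view like this (a sketch, not the approach from the truncated reduce above) is to put the deduplicating dimension last in the key and let grouping collapse the duplicates. Assuming the same field names as in the map above:

//map (sketch) — one row per (license, release, email, type, contact)
function(doc) {
  emit([doc.license.id, doc.release.id, doc.email.id, doc.type, doc.contact.id], null);
}

//reduce — the built-in
_count

Querying with ?group_level=5 then returns one row per unique contact per event type, because repeated opens by the same contact share a key and collapse into a single row; counting the returned rows per [license, release, email, type] prefix on the client gives the unique delivers/opens/clicks.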

Lena Reinhard | 9 Oct 20:52 2014

[BLOG] The CouchDB Weekly News is out

Hi everyone,

This week’s CouchDB Weekly News is out:

http://blog.couchdb.org/2014/10/09/couchdb-weekly-news-october-09-2014/

Highlights:
- summary of the IRC Meeting yesterday
- many new events
- content about self-care and community care in the "Time to relax!" section
… as well as the regular Q&A, discussions, “get involved”, job opportunities, and the "… and also in the news" content.

Thanks a lot to Andy for submitting links!

We want to ask you to help us promote the News; this is also a way to contribute to the project –
Twitter: https://twitter.com/CouchDB/status/520285086047301634
Reddit: http://www.reddit.com/r/CouchDB/comments/2is6ty/couchdb_weekly_news_october_09_2014/
Linkedin: https://www.linkedin.com/company/5242010/comments?topic=5926051110136610816&type=U&scope=5242010&stype=C&a=72NV&goback=.bzo_*1_*1_*1_*1_*1_*1_*1_apache*5couchdb
G+: https://plus.google.com/b/109226482722655790973/+CouchDB/posts/C5YtD16pPQp
Facebook: https://www.facebook.com/permalink.php?story_fbid=582320425133519&id=507603582605204

Thank you.

Best regards

Lena

Nathan Vander Wilt | 9 Oct 19:33 2014

Trying to get to the bottom of another CouchDB crash scenario

Any idea what might have caused the second crash, at the bottom of this email? Yesterday the same CouchDB server
went down like this and didn't come back up:

-- first crash
    heart: Wed Oct  8 10:31:25 2014: Erlang has closed.
    Segmentation fault (core dumped)
    sh: echo: I/O error
    heart: Wed Oct  8 10:31:26 2014: Executed "/home/natevw/bc16/build/bin/couchdb -k" -> 256. Terminating.

…which may have been because I was just starting it from crontab and hoping the `-b -r 5` options would
actually work. As of today I've got the daemonization set up more properly, using upstart and its respawn option.
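
For illustration, a minimal upstart job of the kind described (the binary path is taken from the crash log above; the service user and respawn limits are assumptions):

# /etc/init/couchdb.conf (sketch)
description "CouchDB"
start on runlevel [2345]
stop on runlevel [016]
respawn
# stop trying if it respawns more than 5 times in 30 seconds
respawn limit 5 30
setuid couchdb
exec /home/natevw/bc16/build/bin/couchdb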

No big outage today; however, I did notice another crash in the logs. I'd like to avoid the daemon
restarting at all in routine use, if possible. I don't see anything particularly useful/interesting as to
the cause of the crash … does the backtrace below imply anything in particular?

The main difference in the last two days is that this system is now back under some load (maybe 50 users, up from
maybe one or two in the preceding weeks). Right now (under "higher" load) the server is showing a "0.00, 0.01,
0.05" load average and 2.6 of 3.7 GB of memory free, so offhand it doesn't seem like we're pushing the system too
hard. Besides basic reads/writes/view stuff, we still haven't migrated off our use of per-user filtered
changes, which is the only thing I can think of that might lead to a load-related problem.

thanks,
-natevw

-- second crash

[Thu, 09 Oct 2014 15:23:24 GMT] [info] [<0.21979.2>] 127.0.0.1 - - GET
/production-db/org.couchdb.user%3Au123456 200
[Thu, 09 Oct 2014 15:23:26 GMT] [error] [<0.108.0>] {error_report,<0.31.0>,

Lena Reinhard | 8 Oct 16:24 2014

[NEWS] Your links for the CouchDB Weekly News

Hi everyone,

if you want to submit links for tomorrow's CouchDB Weekly News, please send them to this thread by
tomorrow, October 9, 2014, 12pm UTC+2.

Thanks for your support, and thanks to everyone who already submitted a link!

Best

Lena

Boaz Citrin | 5 Oct 01:24 2014

View question

Hello,

My documents contain two fields to maintain group associations: say "group"
holds the group document id, and "associated" holds the date this document
was added to the group.
Now I want to be able to know how many documents were added to a given
group (or groups) between two given dates.
The challenge is that to filter by dates, I need the date as the first part
of the key. But I also need the group as the first key part in order to
aggregate the number of group associations.

So I see two options here:

1.
Map: associated, {"group": group}
Reduce: a function that aggregates all values by group, which I assume is
fine as I know the number of groups is relatively small.
(plus configuring reduce_limit=false ...)

2.
Map: [group,associated], 1
Reduce: sum(values)
Here I cannot retrieve multiple groups at once, so I use a request per
desired group.
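
As a concrete sketch of option 2 (field names assumed from the description above, and assuming "associated" is stored in a lexicographically sortable format such as ISO 8601):

//map (sketch)
function(doc) {
  if (doc.group && doc.associated) {
    emit([doc.group, doc.associated], 1);
  }
}

//reduce — the built-in
_sum

queried once per group, e.g.
?startkey=["<group-id>","2014-09-01"]&endkey=["<group-id>","2014-10-01"]&reduce=true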

I tried the two approaches, and the first one gives a faster response. Which
leads me to two questions:
1. Is there any risk in a reduce function that produces a potentially long
string?

Sundar Sankarnarayanan | 3 Oct 23:45 2014

Re: Error: not found

Hi Dave,

curl and dig looked OK to me. Here are the results; the curl result is this:

* Adding handle: conn: 0x7f93eb803000
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7f93eb803000) send_pipe: 1, recv_pipe: 0
* About to connect() to proxy proxy-***** port ** (#0)
*   Trying 148.XXX.XXX.XXX...
* Connected to proxy-****** (148.XXX.XXX.XXX) port ** (#0)
* Establish HTTP proxy tunnel to skimdb.npmjs.com:443
* Proxy auth using Basic with user 'sundars user ID'
> CONNECT skimdb.npmjs.com:443 HTTP/1.1
> Host: skimdb.npmjs.com:443
> Proxy-Authorization: Basic c3NhbmsxMzpwdXRodXN1LTM=
> User-Agent: curl/7.30.0
> Proxy-Connection: Keep-Alive
>
< HTTP/1.0 200 Connection established
<
* Proxy replied OK to CONNECT request
* TLS 1.2 connection using TLS_RSA_WITH_AES_256_CBC_SHA256
* Server certificate: skimdb.npmjs.com
* Server certificate: RapidSSL CA
* Server certificate: GeoTrust Global CA
* Server certificate: Equifax Secure Certificate Authority

