Giovanni Lenzi | 18 Apr 11:35 2015

Hi from Smileupps Team

Hi everybody,

Thanks for sharing our tutorial link.

I just wanted to introduce myself and the Smileupps Team in a more formal
way. Over the last few months we have put a lot of effort into our project,
so we didn't find much time for social media and PR.

As some of you may know, we are an Italian team, without any external
financing, interested in CouchDB's growth. We started the www.couchappy.com
project in 2013 as plain CouchDB hosting, with the mission of making
developers' lives easier when creating great apps. We thought CouchDB was
great for this, and we still think so.

Two months ago, we released our 2.0 version and, in agreement with the PMC,
we renamed the project to www.smileupps.com to leave the word "Couch" out
of it, thus preventing any confusion in the CouchDB community and ecosystem.

With 2.0 we finally reached what was our target from the beginning: an app
store for CLOUD web applications... an app store which, being based on
cloud apps, can really be more business-oriented, and where customers can
finally find something useful for their daily job, besides the latest
puzzle game or notepad.

We wanted to let developers focus only on building apps and providing
support, instead of worrying about side activities like hosting, server
administration, security and so on. To this end, we introduced a new
cost-effective pricing model, to clearly separate app costs from hosting
costs. However, we firmly don't want to lock developers in with us, so we
support only open-source CouchDB versions, to give them the option to
leave us at any time.

Lena Reinhard | 17 Apr 17:20 2015
Giovanni Lenzi | 17 Apr 15:46 2015

New Couchapp Tutorial available @ smileupps

Hi guys,

We just published a tutorial on "Couchapps: How to write a secure web
application using CouchDB only as a 3-Tier Single Server"

You can find it at:
https://www.smileupps.com/couchapp-tutorial-chatty

Will you include it in your CouchDB weekly newsletter?

Thanks,
--------
Giovanni Lenzi,
Smileupps Cloud App Store
https://www.smileupps.com
ken tashiro | 15 Apr 11:55 2015

how to use an included document's key?

Hi,

I have a question. Suppose documents are
[
{ "_id": "11111" },
{ "_id": "22222", "ancestors": ["11111"], "value": "hello" },
{ "_id": "33333", "ancestors": ["22222","11111"], "value": "world" }
]
as in the linked documents example at
http://wiki.apache.org/couchdb/Introduction_to_CouchDB_views#Linked_documents
queried with "include_docs=true".
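For readers unfamiliar with that page: the linked-documents pattern works by emitting a value whose `_id` names another document; when the view is queried with `include_docs=true`, CouchDB returns the named document as the row's `doc`. A minimal sketch, with `emit` stubbed locally just so the rows can be shown standalone:

```javascript
// Stub of CouchDB's emit() so this sketch runs standalone;
// inside CouchDB, emit() is provided by the view server.
var rows = [];
function emit(key, value) { rows.push({ key: key, value: value }); }

// Linked-documents pattern: emitting {_id: ...} as the value makes a
// query with include_docs=true fetch that document as the row's doc.
function map(doc) {
  if (doc.ancestors) {
    for (var i = 0; i < doc.ancestors.length; i++) {
      emit(doc._id, { _id: doc.ancestors[i] });
    }
  }
}

map({ _id: "33333", ancestors: ["22222", "11111"], value: "world" });
// rows: [{key:"33333", value:{_id:"22222"}}, {key:"33333", value:{_id:"11111"}}]
```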

I want to write a map function along these lines (pseudocode; the
ancestor-value check is the part I don't know how to express):

function(doc) {
    if (doc.ancestors) {
        for (var i = 0; i < doc.ancestors.length; i++) {
            // pseudocode: how do I read the ancestor's value here?
            if (doc.value === 'world' && valueOfAncestorIs(doc.ancestors[i], 'hello')) {
                emit(doc._id, doc.ancestors[i]);
            }
        }
    }
}

and get {"key":"33333","value":"22222"}.

My question is: how can I use an included document from the map function
and read the ancestor's "value"?

Thank you.

ken tashiro

Chris Thro | 13 Apr 19:27 2015

can replication be set up between different versions?

Hi,

Can CouchDB replication be set up between 1.2 and the newest version (I think it is 1.6)?
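For reference, a replication job between the two servers is just a document in the `_replicator` database (available since CouchDB 1.1), so the shape is the same on either side; the database name and hostnames below are hypothetical:

```json
{
  "_id": "mydb-12-to-16",
  "source": "http://couch-1-2.example.com:5984/mydb",
  "target": "http://couch-1-6.example.com:5984/mydb",
  "continuous": true
}
```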

 

Thank you

Chris Thro

Senior Database Administrator, Operations

T: +1 805-690-7925 | M: +1 707-364-0682
chris.thro-Sxgqhf6Nn4DQT0dZR+AlfA@public.gmane.org
Christopher D. Malon | 10 Apr 21:39 2015

no speed-up on GET with horizontal scaling

[cross-post from Server Fault, where apparently nobody looked at it]

Everyone raves about CouchDB's horizontal scaling, but I must be doing something wrong, because my simple
test isn't getting faster performance with more servers.

My backend lives in an EC2 VPC, so I'm in admin party mode in a private subnet, using plain HTTP without
authorization.  Each of the N backend instances has (N-1) `_replicator` entries per table, continuously
pulling from the (N-1) peers.  The architecture looks like

    [M x m1.small] REST client -> [1 x m1.small] HaProxy -> [N x m1.medium] CouchDB

Because M is small, I've set up HaProxy with `balance roundrobin`; otherwise the requests end up going to a
single instance.
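For context, that corresponds to an haproxy.cfg backend along these lines (server names and addresses are illustrative; a matching frontend binding a port would also be needed):

```
backend couchdb_nodes
    balance roundrobin
    server couch1 10.0.0.11:5984 check
    server couch2 10.0.0.12:5984 check
```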

I test by (manually) launching a script on each of the M clients, just a split-second apart, to do the following:

- Each client forks into 30 processes before connecting, so that roughly 30 * M requests can be simulated. 
Each client will establish its own keep-alive HTTP connection to the load balancer.

- Each forked process creates 100 tiny randomly named records and PUTs them in a single table.  A GET is done
before each PUT to make sure there is no previous revision (but with random names, there never is).  I
measure the wallclock time before all processes finish on each of the M clients.

- About thirty seconds after all the PUTs finish, I do the same thing with GETs.  Each forked child GETs the
records that it just created.  I measure wallclock time on each of the M clients again.

I find that

- the PUT job gets slower as N increases (2:21 for N=1, 3:43 for N=2)

- the GET job takes the same amount of time for N=1,2,3 (0:16)

I'm not surprised that PUT is slower, because each write now has to be sent N places instead of one.  However,
I'm surprised that GET stays constant.  My post-facto guess at an explanation is:

- No time is saved on HTTP requests per machine, because the bottleneck would be at the load balancer.  (And
according to [AWS
documentation](http://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-load.html),
"one small instance [of HaProxy] is usually sufficient to handle all application server traffic"; under
what assumptions, I don't know.)

- No time is saved on disk access because everything is still hot in the disk cache.

How can I make this a realistic test of the number of clients and requests per second I can serve with a given
setup?  Should I fill the disk with trivial records in order to make cache hits less likely?  Or can I already
conclude that there's no benefit to horizontal scaling (and the only way to do better is to buy provisioned IOPS)?

Thanks in advance for your help!

Chris Thro | 9 Apr 19:34 2015

replication continuously restarts

We are constantly seeing the following in the log:
[Thu, 09 Apr 2015 11:03:30 GMT] [error] [<0.105.0>] Error in replication
`64f5d1684f0e22273f165de8b44893fc+continuous` (triggered by document
`docs-g2w_couchdb3_las`): {checkpoint_commit_failure,<<"Error updating the source checkpoint
document: conflict">>}
Restarting replication in 5 seconds.
[Thu, 09 Apr 2015 11:03:30 GMT] [error] [<0.25603.1698>] ** Generic server <0.25603.1698> terminating
** Last message in was {'$gen_cast',checkpoint}
** When Server state == {rep_state,

Replication is set up the following way:
1.dc1 <-> 1.dc2
2.dc1 <-> 2.dc2
1.dc1 <-> 2.dc1
1.dc2 <-> 2.dc2

We have 7 dbs replicating this way but only 5 of them are showing this problem.
Any help solving this would be greatly appreciated.

Thank you

Jerry | 9 Apr 08:34 2015

CouchDB 1.6.1 crashes when compacting 5GB database

Hi

When compacting a 5GB database, compaction proceeded for a while and then
exited. There was no error in the log.

And I have no views in my database.

Apache CouchDB 1.6.1 (LogLevel=info) is starting.
Apache CouchDB has started. Time to relax.
[info] [<0.32.0>] Apache CouchDB has started on http://0.0.0.0:5984/
[info] [<0.105.0>] 192.168.1.2 - - GET 
/cms/_changes?limit=10&include_docs=true&feed=longpoll&timeout=180000&since=11799 
200
(the same longpoll _changes request is repeated many more times)

Br

Lena Reinhard | 8 Apr 21:12 2015

[BLOG] The CouchDB Weekly News is out

Hi CouchDB Community,

this week’s CouchDB Weekly News is out:

http://blog.couchdb.org/2015/04/08/couchdb-weekly-news-april-08-2015/

If you want to help us promote the News, please share them, e.g. in these networks:
Twitter: https://twitter.com/CouchDB/status/585882218314260480
Reddit: http://www.reddit.com/r/CouchDB/comments/31wn2g/vote_on_the_new_couchdb_logo_releases_use_cases/
G+: https://plus.google.com/b/109226482722655790973/+CouchDB/posts
Facebook: https://www.facebook.com/permalink.php?story_fbid=639448319420729&id=507603582605204
LinkedIn: https://www.linkedin.com/company/5242010/comments?topic=5991647619770777601&type=U&scope=5242010&stype=C&a=45d6&goback=%2Ebzo_*1_*1_*1_*1_*1_*1_*1_*1_apache*5couchdb

In case you haven’t joined the CouchDB Advocate yet: we want to invite you to join us there and help us
spread the word about CouchDB: https://couchdb.influitive.com/ Also, if you know people who may be
interested in getting into Open Source through CouchDB and are not developers – this can be a good place
for them to get started as well.

Have a good week! 

Best,
Lena
Raj Singh | 8 Apr 17:32 2015

Boston CouchDB enthusiasts meetup -- April 30th

For those of you in the Boston area, we're organizing a meetup on April
30th. Many CouchDB committers from all over the globe will be in town for
an IBM Cloudant company meeting, and with version 2 coming out soon (with
many new features contributed by Cloudant) it's a perfect time to get
together.

RSVP here:
http://www.meetup.com/CouchDB-Boston/events/221693785/

-----
Raj R. Singh
IBM Cloudant Developer Advocate
linkedin: http://www.linkedin.com/in/rajrsingh/
twitter: rajrsingh
Tito Ciuro | 4 Apr 21:22 2015

Unable to open CouchDb database

Hello,

We’ve had our database working for a few months without an incident. I just started seeing our app fail due
to CouchDB. The CouchDB log states the following:

> SyntaxError: JSON.parse: unterminated string
> Stacktrace:
> 	() @ share/couchdb/server/main.js:1556
> 	 @ share/couchdb/server/main.js:1573
> 	 @ :0
> Failed to execute script.
> Apache CouchDB 1.6.1 (LogLevel=warning) is starting.
> Apache CouchDB has started. Time to relax.
> [error] [<0.148.0>] Could not open file /Library/Developer/Database/xcs.couch: file already exists
> [Sat, 04 Apr 2015 19:00:37 GMT] [error] [<0.148.0>] Could not open file
/Library/Developer/Database/xcs.couch: file already exists
> [error] [emulator] Error in process <0.277.0> with exit value:
{function_clause,[{couch_compress,decompress,[<<0
bytes>>],[{file,"/SourceCache/XCSCouchDB/XCSCouchDB-2/dependencies/couchdb/src/couchdb/couch_compress.erl"},{line,67}]},{couch_file,pread_term,2,[{file,"/SourceCache/XCSCo… 

Why is it stating "Could not open file <…> file already exists"? If the database exists (expected), why
can't it be opened? And why is share/couchdb/server/main.js giving an error? AFAIK, our CouchDB
installation hasn't been touched in months. Any ideas?

Thanks,

— Tito
