Conor Mac Aoidh | 26 Nov 15:38 2014

Re: An elegant way around purging documents?

Hi Stefan,

I have a similar setup: individual user databases which are filter-replicated
to a main database and also to PouchDB on the client.

I also ran into a similar issue of keeping the user databases (and Pouch
DBs) clean. The server-side logic consists of tracking user sessions
with a daemon, and then, when a user is known to be inactive, running a
compaction on their DB. In Pouch, instead of cleaning the DB and
creating a new one, I opted as much as possible to use API calls (to the
server application, not Couch) to fetch data that would likely expire.
This does, however, deviate from a pure-Couch way of doing things. I would
be interested to know if you come up with other alternatives!
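
For reference, a minimal sketch of the compact-when-idle part, assuming user
databases named userdb-<name> (the naming and wiring are illustrative, not my
exact code):

----------------
# Sketch: trigger compaction on one user's database once the session
# daemon reports them idle. Compaction drops old revisions and deleted
# document bodies; only the tombstones remain.
import requests

COUCH = "http://localhost:5984"  # hypothetical server address

def compact_user_db(username):
    """POST /{db}/_compact for the given user's database."""
    resp = requests.post("%s/userdb-%s/_compact" % (COUCH, username),
                         headers={"Content-Type": "application/json"})
    resp.raise_for_status()
----------------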

Thanks

Conor

On 25/11/14 20:51, Sebastian Rothbucher wrote:
> Hi Stefan,
>
> as you delete per user DB only, can you also CREATE per user DB only? This
> way you'd lose replication as a feature, but why not use a different
> doc ID and store the original doc ID in some field (so _id=123 on the
> central DB would become _id=345, originalid=123 in the user's DB)? With a
> custom index, you can retrieve it again. You'd never replicate the
> user-specific image documents, and you could do whatever you like. I don't
> know how probable it is for an image to get deleted and then re-added, but
> it can hardly be more probable than getting another image altogether.
>
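
A minimal sketch of this re-keying idea, assuming a hypothetical design
document and view (the names and database are illustrative):

----------------
# Sketch: documents live in the user DB under a new _id, keep the
# original id in "originalid", and a view indexes it for lookups.
import requests

USER_DB = "http://localhost:5984/userdb-stefan"  # hypothetical database

design = {
    "views": {
        "by_originalid": {
            # CouchDB views are JavaScript; this map emits the original id.
            "map": "function(doc) {"
                   "  if (doc.originalid) emit(doc.originalid, null);"
                   "}"
        }
    }
}
requests.put(USER_DB + "/_design/lookup", json=design).raise_for_status()

def fetch_by_original_id(original_id):
    """Look a document up by the _id it has in the central DB."""
    resp = requests.get(USER_DB + "/_design/lookup/_view/by_originalid",
                        params={"key": '"%s"' % original_id,
                                "include_docs": "true"})
    resp.raise_for_status()
    return [row["doc"] for row in resp.json()["rows"]]
----------------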

Stefan Klein | 25 Nov 15:18 2014

An elegant way around purging documents?

Hi couch users,

I have a main database and a database per user.
A user's DB is replicated to his device (mobile phone); the idea is that as
much functionality as possible should be available offline.

To simplify a bit, let's say I have documents of type "item" and of type
"image"; each "item" may reference multiple "images" and each "image" may
be referenced by multiple "items".
When a user gets a new "item" from the main database, a daemon checks if
all referenced images are available for that user and triggers replication
of the missing images.
If a user still has images which aren't needed on his device anymore, I
want to delete them; why waste space on his device?

Here comes my problem:
Say he has the document "image1" at revision "1-abc" and doesn't need it
anymore, so I delete it, which creates {_id: 'image1', _rev: '2-cde',
_deleted: true}. If for some reason he needs "image1" again, because it is
also referenced by a different "item", I replicate {_id: 'image1', _rev:
'1-abc', /* ... */} from the main database to the user's database again. But
the user's database "says": I already know an ancestor of that document
(revision "2-cde" in this example), and the document "image1" will not show
up again.

One way to solve this is by attaching the images directly to the "items", but
we have ~7 million items sharing ~4,000 images, so that would increase the DB
size considerably.

Another way is using purge to delete an image so it can be replicated
again, but it seems wrong to use purge for general application logic,

Alexander Harm | 23 Nov 21:40 2014

Offline replication/synchronisation

Hello,

I work at a large humanitarian organisation and I would like to create a new app to manage our local staff
(roughly 9,000 people in more than 30 countries). I was looking into CouchDB as a possible backend, and it meets many
of my requirements. However, there is one little issue I could find nothing about on the web. Although
probably all of our projects have some sort of Internet connection (DSL, V-Sat, Thuraya), it still happens
that online synchronisation is simply impossible due to miserable connection quality, and not just
somewhere in the bush.

I was wondering if it is possible to create some sort of file with the delta update that could be transported
to a place with better Internet access, or to the next hub (e.g. country coordination), where it would be imported
into the instance running there. Does anyone know if something like that would be feasible?
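
One plausible approach is to dump the output of _changes to a file and load
it into the other instance via _bulk_docs with new_edits=false, which
preserves revisions the way replication does. A rough sketch, assuming a
database named "staff" and hypothetical hostnames (it only carries leaf
revisions and does not handle attachments):

----------------
# Sketch of "sneakernet" replication using the _changes and _bulk_docs
# endpoints manually; not an official CouchDB feature.
import json
import requests

SOURCE = "http://field-office:5984/staff"   # hypothetical hosts
TARGET = "http://country-hub:5984/staff"

def export_delta(since, path):
    """Dump all changes since a known sequence number to a file."""
    resp = requests.get(SOURCE + "/_changes",
                        params={"include_docs": "true", "since": since})
    resp.raise_for_status()
    changes = resp.json()
    with open(path, "w") as f:
        json.dump(changes, f)
    return changes["last_seq"]  # remember this for the next export

def import_delta(path):
    """Load the dumped docs into the target, preserving revisions."""
    with open(path) as f:
        changes = json.load(f)
    docs = [row["doc"] for row in changes["results"] if "doc" in row]
    # new_edits=false keeps the original _rev values, like replication does.
    resp = requests.post(TARGET + "/_bulk_docs",
                         json={"docs": docs, "new_edits": False})
    resp.raise_for_status()
----------------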

Regards,

alh
Lena Reinhard | 20 Nov 19:52 2014

[BLOG] The CouchDB Weekly News is out

Hi everyone,

this week’s CouchDB Weekly News is out:

http://blog.couchdb.org/2014/11/20/couchdb-weekly-news-november-20-2014/

Highlights:
- CouchDB 2.0 Developer Preview is out
- the ASF adopted CouchDB's Code of Conduct
- many talks about CouchDB, e.g. during ApacheCon Europe
- a guide on events about CouchDB (WIP)
… as well as the regular Q&A, discussions, "get involved", job opportunities and "time to relax!" content

Thanks to Joan, Lynnette and Andy for submitting links!

We'd like to ask you to help us promote the News; this is also a way to contribute to the project:
Twitter:  https://twitter.com/CouchDB/status/535505454022266880
Reddit: http://www.reddit.com/r/CouchDB/comments/2mwfcg/couchdb_weekly_news_november_20_2014/
Linkedin: https://www.linkedin.com/company/5242010/comments?topic=5941270150983479297&type=U&scope=5242010&stype=C&a=Ey1D&goback=.bzo_*1_*1_*1_*1_*1_*1_*1_apache*5couchdb
G+: https://plus.google.com/u/1/b/109226482722655790973/+CouchDB/posts/TKwYeYvrqt7
Facebook: https://www.facebook.com/permalink.php?story_fbid=595645760467652&id=507603582605204

Best,
Lena

Jeldrik | 19 Nov 16:38 2014

Keep a Replication when moving the CouchDB

Hi there,

I already asked this question on #couchdb, but I'm not really satisfied
with the answers I got, since some open questions were left unanswered
in IRC. I thought it would be a good idea to open the question up to a
wider group. I will paste both my original question and the answers I got
in #couchdb below.

Many thanks for your help,
Jeldrik

==

This was the question (I just added some information):

We are moving a CouchDB to new hardware, but we have a pull replication
(couch_backup.example.com) which we want to keep. Our planned steps are as
follows (see the sketch after the list for steps 3 and 7):
1. rsync the db files from couch_live.example.com to couch_new.example.com
2. compact the dbs on couch_new (this is necessary because compression was
turned off on couch_live and we want it turned on now)
# Meanwhile couch_live is still live: data is pushed to it from
clients and pulled by the couch_backup replication
3. start a pull replication on couch_new with source couch_live and target
couch_new for all dbs
4. once all dbs are nearly in sync, have a short downtime until the data is
fully in sync, then cut over to couch_new
5. shut down couch_live and the replication to couch_backup
6. new data comes in to couch_new
7. start a pull replication on couch_backup with source couch_new
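
A sketch of steps 3 and 7, assuming the hostnames from the plan above and
using the _replicate endpoint (a persistent document in _replicator would
also work); the database names are placeholders:

----------------
# Sketch: start continuous pull replications on the pulling node.
import requests

def pull(source_host, target_host, db):
    """Ask target_host to continuously pull `db` from source_host.
    Assumes the target database already exists."""
    resp = requests.post("http://%s:5984/_replicate" % target_host,
                         json={"source": "http://%s:5984/%s" % (source_host, db),
                               "target": db,
                               "continuous": True})
    resp.raise_for_status()
    return resp.json()

DBS = ("db1", "db2")  # hypothetical database names

# Step 3: couch_new pulls every db from couch_live while it is still live.
for db in DBS:
    pull("couch_live.example.com", "couch_new.example.com", db)

# Step 7: after cut-over, couch_backup pulls from couch_new instead.
for db in DBS:
    pull("couch_new.example.com", "couch_backup.example.com", db)
----------------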

Eric Ahn | 19 Nov 10:07 2014

Where can I find the progress on i18n?

I'm Eric, and I attended the "Contributing To CouchDB" seminar by Jan
Lehnardt at ApacheCon Europe.
I'm going to review the i18n project.
If there are critical problems or issues, please let me know; it would be a
great help to me.
-- 
Eric Ahn
Mobile: +82-10-9536-3338
nino martinez wael | 19 Nov 00:25 2014

CouchDB Guice client?

After listening to Joan and Jan talking at ApacheCon, I've become
interested.

Would a client integration with Guice, à la
http://www.baeldung.com/2011/12/22/the-persistence-layer-with-spring-data-jpa/#overview

be interesting?

While sticking to CouchDB concepts and keeping it simple, this should
make adoption easy for Java people.

Wdyt? Looking forward to hearing comments.

If I begin, I'll put it in Onami's sandbox.
Lena Reinhard | 18 Nov 19:09 2014

[NEWS] Your links for the CouchDB Weekly News

Hi everyone,

after all the great news from ApacheCon over the past two days, there'll be a lot of content for this week's CouchDB
Weekly News.
If you want to submit a link to share with the community, please don't hesitate to send it to this thread by
Thursday, 12pm UTC+1.

Best regards,

Lena

Dirkjan Ochtman | 16 Nov 10:03 2014

[ANN] couchdb-python 1.0 released

Hello all,

I just pushed out a 1.0 release, containing these changes:

* Many smaller Python 3 compatibility issues have been fixed
* Improve handling of binary attachments in the ``couchdb-dump`` tool
* Added testing via tox and support for Travis CI

I feel that this code base is now mature enough that it deserves the
1.0 version.
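
For anyone new to the library, a minimal usage sketch (the database and
document contents here are illustrative, assuming a local CouchDB on the
default port):

----------------
# Sketch: basic couchdb-python usage against a local server.
import couchdb

server = couchdb.Server("http://localhost:5984/")
db = server.create("playground")        # or server["playground"] if it exists

doc_id, doc_rev = db.save({"type": "item", "name": "example"})
doc = db[doc_id]                        # fetch it back
doc["name"] = "renamed"
db.save(doc)                            # update in place

# Iterate over all documents via the built-in _all_docs view.
for row in db.view("_all_docs", include_docs=True):
    print(row.id, row.doc)
----------------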

The release can be downloaded from PyPI, here:

https://pypi.python.org/pypi/CouchDB/1.0

Any feedback is welcome on GitHub:

https://github.com/djc/couchdb-python

Many thanks to all contributors, especially new contributors Rémy
Hubscher, Mathieu Agopian, Raman Barkholenka and Thomas Jost.

Cheers,

Dirkjan

Jerry.Wang | 15 Nov 08:45 2014

group=true prevents rereduce?

Hi

I am not sure whether rereduce happens when group is set to true; someone
raised this question before, but there was no definitive answer.

>> group=true is the conceptual equivalent of group_level=exact, so CouchDB runs a reduce per unique key in the map row set.
This is how the documentation on grouping explains it (http://wiki.apache.org/couchdb/Introduction_to_CouchDB_views#Grouping).
It sounds like CouchDB collects all the values for the same key and only calls reduce once per distinct key.

But another article says: "This reduce result will then merge with existing rereduce result to compute the final reduce result of this key".


In my project there are many documents, but after grouping there are at most 2 values with the same key for each distinct key.
Will rereduce happen if I only run one server?
Will rereduce happen if I have multiple master servers?
Best Regards
J.W
Sanjaya Amarasinghe | 14 Nov 11:02 2014

Getting proper JSON using an Erlang list function

Hi,

Using an Erlang list function, I do the following:

Send(list_to_binary(io_lib:format("~p~n", [TheObj]))),

A sample of what I get as the result is as follows:

----------------
{{[{<<"_id">>,<<"54cc5f3f6db028666fdcb4b75ca0712f">>},
  {<<"_rev">>,<<"3-c477c16c92cdfc94ea5c619c7363650e">>},
  {<<"attrib1">>,true},
  {<<"complexAttrib">>,
   {[{<<"attrib2">>,<<"54cc5f3f6db028666fdcb4b75ca0712f">>},
     {<<"attrib3">>,<<"abc123">>}
     }]}},
  {<<"attrib4">>,<<"qwertyuiop">>},
  {<<"attrib5">>,true}]}
}
----------------

I need the "TheObj" to get printed in the proper JSON format in the
result.  I tried with ejeson:decode() and didn't get any success..

Can somebody help me with this ?

Thank you.
Regards,
Sanjaya
