Re: Why do CouchDB reduce functions have to be commutative?
Thanks very much for the reply. That makes sense.
I gather this means that if I'm running a single server, at least with
today's code, commutativity isn't required? If so, is that something I can
count on? For example, if I know my application is quite small and will
never be sharded, is it safe for me to use a non-commutative reduce, like
the sketch below?
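For concreteness, here's the kind of thing I have in mind: a reduce that
joins values into a string, where the answer depends on the order the
values arrive in (a made-up example, not my real code):

  // Hypothetical non-commutative reduce: joins values in whatever order
  // CouchDB hands them over. join() is associative but not commutative,
  // so "a,b" vs. "b,a" depends entirely on input order.
  function (keys, values, rereduce) {
    return values.join(",");
  }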
On Tue, Dec 3, 2013 at 9:57 AM, Oliver Dain <oliver@...> wrote:
> Because the order in which we pass keys and values to the reduce function
> is not defined. In sharded situations (like BigCouch, which is being
> merged) an intermediate reduce value on an effectively random subset
> of keys/values is generated at each node and a final rereduce is done
> on all the intermediates. The constraints on reduce functions exist in
> anticipation of clustering.
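(To check my understanding, the clustered case would go roughly like this;
the sum reduce and the two-shard split are my own illustration, not actual
CouchDB internals:)

  // A commutative, associative reduce: a simple sum.
  function reduce(keys, values, rereduce) {
    return values.reduce(function (a, b) { return a + b; }, 0);
  }

  // Each node reduces an effectively random subset of the rows...
  var shard1 = reduce([["a"], ["d"]], [1, 4], false); // 5
  var shard2 = reduce([["b"], ["c"]], [2, 3], false); // 5

  // ...and a final rereduce runs over the intermediate values
  // (keys is null on rereduce).
  var total = reduce(null, [shard1, shard2], true); // 10

  // Since the grouping and order of rows are undefined, the reduce must
  // return the same total no matter how the rows are split or ordered.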
> On 1 December 2013 21:45, Oliver Dain <opublic@...> wrote:
> > Hey CouchDB users,
> > I've just started messing around with CouchDB and I understand why CouchDB
> > reduce functions need to be associative, but I don't understand why they
> > also have to be commutative. I posted a much more detailed version of this