Matthew Brender | 17 Apr 21:03 2015

Riak Recap - April 17, 2015

Welcome back to The Recap. Here is a summary of what's come over our
user list of late.

## Code drops
There have been a number of big releases recently!

* Riak 2.1 is available for download [1]
* 2.x compliant .NET client [2]
* 2.x compliant Node.js client [3]
* 2.x compliant PHP client [4]

## Recently answered
Here are some interesting topics that came up in the last few weeks.

* A sneaky problem with PackageCloud was resolved thanks to Greg [5]
* Shawn helps out a newer community member by clarifying that Solr is
not a prerequisite for Riak KV [6]
* Zeeshan answered a question about Solr tagging [7] and then wrote up
an example using protocol buffers [8]
* Zeeshan confirms we use distributed Solr in Riak Search and are
impacted by this known limitation around distributed joins [9]
* Luke explains how Riak Objects are returned in the Node.js client [10]
* Ciprian outlines inter-node port requirements for clustering [11]
* A known error was confirmed in which connections drop when using
protocol buffers across a load balancer [12]
* Bryan reminds us what behavior to expect when drives are 100% full [13]
* Henning solved his own problem AND posted it to Stack Overflow [14]
* Shawn discovered he had an issue with his nodename suffix and resolved
it over IRC [15]


Luke Bakken | 17 Apr 19:15 2015

Node.js client addIndex() and map/reduce questions (Was: Object metadata via NodeJS client)

> 1 - when I have several indices to add to an object can I just chain them to the riakObject, ie, call
successive addIndex methods to a riakObject?

Source code is available as part of the API docs. Note that "this" is
returned, which allows chaining:
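The pattern can be sketched with a minimal stand-in class (hypothetical, not the client's actual RiakObject; the method name is assumed) to show why returning `this` makes successive calls chain:

```javascript
// Minimal stand-in class (not the real RiakObject) showing why
// returning `this` from each call makes chaining work.
class FluentObject {
  constructor() {
    this.indexes = {};
  }
  addToIndex(name, value) {
    // accumulate values under the index name
    (this.indexes[name] = this.indexes[name] || []).push(value);
    return this; // returning `this` enables the chain
  }
}

const obj = new FluentObject()
  .addToIndex('email_bin', 'user@example.com')
  .addToIndex('age_int', 42);
// obj.indexes now holds both secondary indexes
```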

> 2 - when I am doing a map-reduce,
>       a - can I use key filters? What's the syntax?
>       b - how can I use map reduce to walk links? In riak-js I used to specify the links in the inputs. What is the
syntax with this client?

I am not an expert with map/reduce, but I did find the following
documents (note the deprecation notice):

Luke Bakken
lbakken <at>
Christopher Mancini | 17 Apr 16:16 2015

PHP Client Release

Greetings Riak Users!

Yesterday we released the long-awaited rewrite of the official PHP client for Riak, supporting version 2 features (bucket types, CRDTs, and user authentication over TLS). The library uses the HTTP interface to communicate with Riak and requires PHP 5.4 or newer, as well as the JSON and cURL PHP extensions.

It's available via Composer:

Simply add the following to your composer.json file within your project:

"require": { "basho/riak": "2.0.*" },

The GitHub repo can be found here:

API docs are published here:

Christopher Mancini
riak-users mailing list
riak-users <at>
Stanislav Vlasov | 17 Apr 13:57 2015

riak 2.0 + riak-cs 2.0 trouble

I'm having trouble setting up a test Riak node for Riak CS. Here's how
to reproduce my problem:

1) install Riak 2.0.5 and Riak CS 2.0.0 on Debian 7 from the apt
repository as in
2) create advanced.config in /etc/riak as in

After that, I get an error parsing advanced.config.

last lines of 'riak config generate -l debug':
10:54:47.488 [info] /etc/riak/advanced.config detected, overlaying proplists
10:54:47.488 [error] Error parsing /etc/riak/advanced.config: 17:
syntax error before: ']'

If I remove the last comma in advanced.config, I get another error:

10:58:21.398 [info] /etc/riak/advanced.config detected, overlaying proplists
10:58:21.399 [error] Error parsing /etc/riak/advanced.config: 17:
syntax error before:

I think it is a bug either in the documentation or in the config generator.
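For anyone comparing against their own file: advanced.config must be a
single Erlang term, a list of {App, Props} tuples terminated by "].",
with no comma after the last element of any list. A minimal well-formed
sketch (the riak_cs ebin path is illustrative):

```erlang
%% /etc/riak/advanced.config -- note: no trailing comma before a
%% closing bracket, and the whole term ends with "]."
[
 {riak_kv, [
            {add_paths, ["/usr/lib/riak-cs/lib/riak_cs-2.0.0/ebin"]}
           ]}
].
```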


Matthew Brender | 16 Apr 22:40 2015

[Announcement] Riak 2.1 - Features & Release Notes

Riak 2.1 is available [1]! Let’s start with the most fun part.

## New Feature
Riak 2.1 introduces the concept of “write once” buckets: buckets whose
entries are intended to be written exactly once and never updated or
overwritten. The write_once property is applied to a bucket type and
may only be set at bucket creation time. This allows Riak to avoid a
"read before write" for write_once buckets only. More information, as
always, is available in the docs [2].
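Since the property can only be set at creation time, creating such a
bucket type should follow the usual bucket-type workflow (the type name
`w1` below is illustrative):

```shell
riak-admin bucket-type create w1 '{"props": {"write_once": true}}'
riak-admin bucket-type activate w1
```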

## Other updates
There are a number of GitHub Issues closed with the 2.1 release. Some
noteworthy updates:

* A nice solution for a corner case that could result in data loss [3]
* A public API related to riak_core_ring_manager thanks to Darach Ennis! [4]
* A JSON writer for a number of riak-admin commands - see commit for details [5]
* Updates to Yokozuna (Riak’s Solr integration) that include
additional metrics thanks to Jon Anderson! [6]

Be sure to see the full Release Notes here [7] and the Product Advisories [8].

## Upgrading
Be sure to review the documentation [7] before upgrading. It’s worth
noting that all nodes in a cluster must be at 2.1 before you set the
write_once property on a bucket.

Note also that there is a known issue with Yokozuna that causes entry
loss on AAE activity [9]. Please keep this in mind before upgrading.

## Feedback please
Do you have a use case where write_once could be helpful? Please reply
to me directly! I would love to learn about your environment and be
able to share more details with you.

Developer Advocate


Gustavo Gonzalez | 14 Apr 18:54 2015

Android client, Riak server


Please excuse me if I am asking a silly question.

Is the Riak Java client also usable in an Android application?

That is, if I want to develop an Android app that connects to a Riak 
server, do I use this riak-java-client?

Thanks for your reply.

Shawn Debnath | 16 Apr 01:11 2015

Need help getting riak started

Hi there,

I'm building out a new cluster (for the first time) and PackageCloud pushed down 2.1.0-1. I have gone through and installed all the necessary packages, configured Riak through riak.conf, and am attempting to start the first node in the cluster. Unfortunately, even though the processes are running, riak-admin reports that no nodes are running.

root <at> riak-01:/etc/riak# riak-admin diag
Node is not running!
root <at> riak-01:/etc/riak# riak-admin status
Node is not running!

12538 ?        S      0:00 /usr/lib/riak/erts-5.10.3/bin/epmd -daemon
13799 ?        S      0:00 /usr/lib/riak/erts-5.10.3/bin/run_erl -daemon /tmp/riak// /var/log/riak exec /usr/sbin/riak console
13802 pts/2    Ssl+   0:26 /usr/lib/riak/erts-5.10.3/bin/beam.smp -scl false -sfwi 500 -P 256000 -e 256000 -Q 65536 -A 64 -K true -W w -zdbbl 32768 -- -root /usr/lib/riak -progname riak -- -home /var/lib/riak -- -boot /usr/lib/riak/releases/2.1.0/riak -config /var/lib/riak/generated.configs/app.2015. -setcookie riak -name riak-01 <at> -smp enable -vm_args /var/lib/riak/generated.configs/vm.2015. -pa /usr/lib/riak/lib/basho-patches -- console
14047 ?        Ss     0:00 sh -s disksup
14050 ?        Ss     0:00 /usr/lib/riak/lib/os_mon-2.2.13/priv/bin/memsup
14052 ?        Ss     0:00 /usr/lib/riak/lib/os_mon-2.2.13/priv/bin/cpu_sup

The config states that it should be listening on:

listener.http.internal =
##listener.protobuf.internal = 10.IP.IP.IP:8087
listener.protobuf.internal =

I initially had it listening on our internal network IP, but as part of testing switched to localhost to see if that would resolve it, but alas, it did not.

In the log directory, files crash.log and error.log are empty and the console.log reports things are ok:

2015-04-15 22:17:03.386 [info] <0.7.0> Application riak_kv started on node 'riak-01 <at>'
2015-04-15 22:17:03.402 [info] <0.7.0> Application merge_index started on node 'riak-01 <at>'
2015-04-15 22:17:03.406 [info] <0.7.0> Application riak_search started on node 'riak-01 <at>'
2015-04-15 22:17:03.421 [info] <0.7.0> Application ibrowse started on node 'riak-01 <at>'
2015-04-15 22:17:03.429 [info] <0.7.0> Application yokozuna started on node 'riak-01 <at>'
2015-04-15 22:17:03.434 [info] <0.7.0> Application cluster_info started on node 'riak-01 <at>'
2015-04-15 22:17:03.443 [info] <0.192.0> <at> riak_core_capability:process_capability_changes:555 New capability: {riak_control,member_info_version} = v1
2015-04-15 22:17:03.461 [info] <0.7.0> Application riak_control started on node 'riak-01 <at>'
2015-04-15 22:17:03.462 [info] <0.7.0> Application erlydtl started on node 'riak-01 <at>'
2015-04-15 22:17:03.485 [info] <0.7.0> Application riak_auth_mods started on node 'riak-01 <at>'
2015-04-15 22:17:19.618 [info] <0.369.0> <at> riak_kv_entropy_manager:perhaps_log_throttle_change:853 Changing AAE throttle from undefined -> 0 msec/key, based on maximum vnode mailbox size 0 from 'riak-01 <at>'
2015-04-15 22:17:19.672 [info] <0.352.0> <at> riak_core:wait_for_service:483 Wait complete for service riak_kv (16 seconds)

Any help will be greatly appreciated.
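(Per the recap, this was resolved over IRC: the nodename suffix was the
issue. riak-admin locates the node using the nodename set in riak.conf,
so the host part must resolve and match what the running node registered.
An illustrative fragment, with a hypothetical address:)

```
## riak.conf -- the host suffix must resolve and match the running node,
## or riak-admin reports "Node is not running!"
nodename = riak@10.0.0.1
```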


Henning Verbeek | 13 Apr 10:49 2015

Java client: ConflictResolver for RiakObject, how to get the key?

I'm in the process of migrating my code from Riak 1.4 to Riak 2.0.

In Riak 2.0, I'm storing binary data as a RiakObject:

RiakObject obj = new RiakObject();
StoreValue op = new StoreValue.Builder(obj)
   .withLocation(new Location(ns, keyOfObject))
   .withOption(StoreValue.Option.RETURN_BODY, false)
   .build();

A siphash-digest is computed over the byte-array beforehand, and is
stored in a separate object in Riak (I call it 'manifest').

When fetching the binary data, I want to provide a custom
ConflictResolver. This resolver shall fetch the manifest for the binary
data, where it can look up the expected digest. This can then be used
for identifying and eliminating bad siblings. It can use the object's
key to identify the corresponding manifest.

My problem is: how does the conflict resolver know the key?

In Riak 1.4, I used IRiakObject to transport the data. The key was
available right on the IRiakObject:
public IRiakObject resolve(Collection<IRiakObject> siblings) {
    String key = siblings.iterator().next().getKey();
    // ... resolve using the key ...
}

In Riak 2.0, the RiakObject does not expose this method. Is it
available, maybe in the RiakUserMetadata?

As an alternative, should I maybe create a POJO to encapsulate both the
key (annotated with @RiakKey?) and the byte[] data? I guess I'd need a
custom converter for that, right?
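That POJO approach could look roughly like this (a sketch; the class and
field names are hypothetical, and it assumes the client's @RiakKey
annotation populates the field on fetch):

```java
// Hypothetical POJO pairing the key with the raw bytes, so a
// ConflictResolver for this type can read the key during resolution.
public class BlobWithKey {
    @RiakKey               // populated by the client on fetch
    public String key;

    public byte[] data;    // the binary payload
}
```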



My other signature is a regular expression.
Jason Greathouse | 10 Apr 00:16 2015

Riak, firewalls and inter-node communication.

I'm working in an environment where the servers don't have access to each other by default, so we have to set up network ACLs. For most of the ports this is pretty straightforward, but I can't find a good explanation of the inter-Erlang communication ports.

I've read through this document:

I see that it's possible to limit the port range to a specific range through riak.conf:
erlang.distribution.port_range.minimum = 6000
erlang.distribution.port_range.maximum = 7999

What I'm looking for is: what is the trade-off of limiting the port range?
Is 2000 ports enough? Can I limit it to 5 (one per cluster node)? How about just one port?
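For what it's worth, each node's Erlang distribution listener binds a
single port from this range (plus epmd on port 4369 for port discovery),
so pinning the range to one port is possible in principle; an
illustrative riak.conf fragment:

```
## riak.conf -- pin the distribution listener to a single port;
## port 4369 (epmd) must also remain open between nodes
erlang.distribution.port_range.minimum = 6001
erlang.distribution.port_range.maximum = 6001
```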


Jason Greathouse
Sr. Systems Engineer

Alex De la rosa | 9 Apr 15:10 2015

nodes with 100% HD usage

Hi there,

One theoretical question: what happens when a node (or more) hits 100% HD usage?

Riak can easily scale horizontally by adding new nodes to the cluster, but what if one of them is full? Will the system have trouble? Will this node be used only for reading, with new items saved on the other nodes? Will the data rebalance onto newly added servers, freeing some space on the full node?

Alex De la rosa | 9 Apr 11:11 2015

object sizes

Hi there,

(By the way, I'm using the Python client.)

obj = RIAK.bucket('my_bucket').get('my_key')

Is there any way to know the actual size of an object stored in Riak? I'd like to make sure something mutable (like a set) hasn't grown to more than 1 MB in storage size.
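One rough client-side check (a sketch assuming JSON-encoded values; it
ignores Riak's per-object overhead such as the vclock and metadata, so
treat it as a lower bound):

```python
import json

def approx_stored_size(value):
    """Rough size estimate: bytes of the JSON encoding of `value`.

    Ignores Riak's per-object overhead (vclock, metadata, indexes),
    so treat it as a lower bound on the true stored size.
    """
    return len(json.dumps(value).encode("utf-8"))

# Warn before a growing structure passes 1 MB
payload = {"members": ["alice", "bob", "carol"]}
if approx_stored_size(payload) > 1024 * 1024:
    print("object is getting large; consider splitting it")
```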
