Cosmin Marginean | 27 Jan 12:29 2015

Riak Java Client and Links

I am implementing a custom way to handle Riak links using the Java client. Looking at the available samples (which are outdated), it seems that it's not entirely straightforward to use RiakLinks with POJOs and automatic conversion. More importantly, when one wants to use RiakLinks, they have to use RiakObject and manually serialise the object.

I'd like to know if I'm missing something in the docs, or if there are alternative practices for this use case (POJO + "manually" handled links).
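For what it's worth, one "manual" pattern is to keep the links inside the serialised value itself rather than in object metadata, so they survive any store/fetch path. A minimal sketch in Python for brevity (the `_links` field name and tuple convention are my own assumptions, not a client API):

```python
import json

def serialize_with_links(fields, links):
    """Embed link metadata in the stored JSON value.
    `links` is a list of (bucket, key, tag) tuples -- a made-up convention."""
    doc = dict(fields)
    doc["_links"] = [{"bucket": b, "key": k, "tag": t} for (b, k, t) in links]
    return json.dumps(doc)

def deserialize_with_links(raw):
    """Split the stored value back into the plain fields and the links."""
    doc = json.loads(raw)
    links = [(l["bucket"], l["key"], l["tag"]) for l in doc.pop("_links", [])]
    return doc, links
```

The obvious trade-off is that walking such links is a client-side concern; Riak's link-walking machinery won't see them.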

Thank you
riak-users mailing list
riak-users <at>
Sorin Manole | 27 Jan 12:21 2015

Rebuilding Solr indexes

Hi all,

We are using Solr indexes on top of Riak keys to make search easier.

One question regarding Solr is how to rebuild indexes. We may need to do this when adding a new field to the schema, which would mean dropping and recreating the index.

As far as we know, after dropping and recreating a Solr index, the entries for keys that already exist in Riak are not rebuilt. Is there a way to re-index existing keys when recreating a Solr index?

Also, if that's possible, any idea how long it would take? Say, rebuilding the index for 100 million Riak keys.

Another concern is corruption of Solr indexes: how likely is it? And, as before, if an index becomes corrupted, how easy would it be to re-index 100 million keys?
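As far as I know, Riak Search only indexes an object when it is written, so one hedged approach after recreating an index is simply to re-write every key. A sketch with the official `riak` Python client (error handling and batching are omitted; how long it takes is dominated by the read/write throughput of the cluster):

```python
def reindex_bucket(bucket):
    """Force re-indexing by fetching and re-storing every object in a bucket.
    `bucket.stream_keys()` yields lists of keys in the riak Python client."""
    count = 0
    for keys in bucket.stream_keys():
        for key in keys:
            obj = bucket.get(key)
            obj.store()  # the re-write makes Riak Search index the object again
            count += 1
    return count
```

For 100 million keys this is essentially a full scan plus a full re-write, so it is worth throttling and running off-peak.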

Thanks in advance.

Sorin Manole
Senior Software Engineer, Trustev
m: +353 86 051 2658 | e: sorin.manole <at> | w: www.trustev.com | a: Trustev Ltd, 2100 Airport Business Park, Cork, Ireland.

This message is for the named person's use only. If you received this message in error, please immediately delete it and all copies and notify the sender. You must not, directly or indirectly, use, disclose, distribute, print, or copy any part of this message if you are not the intended recipient. Any views expressed in this message are those of the individual sender and not Trustev Ltd. Trustev is registered in Ireland No. 516425 and trades from 2100 Cork Airport Business Park, Cork, Ireland.

Damien Krotkine | 26 Jan 10:08 2015

measuring gossip bandwidth


I'd like to know, at least approximately, how much network bandwidth is used by gossip (and other traffic that is not strictly related to data copying).

Is there any information I can look up in the logs or via the console, or 
even an experiment I can run to measure it?

I am operating clusters in a bandwidth-constrained environment, and I'd 
like to keep bandwidth usage to a minimum.
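I don't believe the built-in stats break traffic out by purpose, so one rough approach is external accounting: add iptables rules matching epmd (4369) and the Erlang distribution port range used for inter-node traffic, snapshot the byte counters twice, and diff them. The arithmetic step, sketched in Python (where the counters come from is an assumption):

```python
def rates_per_port(before, after, seconds):
    """Turn two snapshots of per-port byte counters (e.g. parsed from
    `iptables -nvxL` output) into bytes/sec per port."""
    return {port: (after.get(port, 0) - before[port]) / seconds
            for port in before}
```

Note this measures all inter-node traffic (gossip plus handoff together), which may be enough for an approximate answer.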

Santi Kumar | 26 Jan 09:45 2015

@RiakIndex-annotated field is not stored as part of the value

I'm defining a field of an object as a 2i index with @RiakIndex. When creating the object, I initialise both the regular fields and the index field, then store the object, so it is written both under its key and under the index.

In a separate thread, another field of the object is updated, and there I don't have a handle on the 2i field. So I simply update that field and store the object back. Since the 2i-indexed field is not part of the object, it is written back as null and the 2i index entry is removed from the object.

After this, any search on the original index key fails to return the object. Is there a way to access the 2i index entries corresponding to an object's key, or to store the 2i index field as part of the object's value?

When I fetch the object by its key, I cannot see the field that is defined as the 2i field.

Please give me some clues on that.

Ed | 24 Jan 21:37 2015

Re: Adding nodes to cluster

Hi everyone!

I have a riak cluster, working in production for about one year, with the following characteristics:
- Version 1.4.8
- 6 nodes
- leveldb backend
- replication (n) = 3
- ~3 billion keys

My SSDs are reaching 85% of capacity, and we have decided to buy 6 more nodes to expand the cluster.

Have you got any kind of advice on executing this operation or should I just follow the documentation on
adding new nodes to a cluster?
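For reference, the documented staged-clustering flow is the right tool; on 1.4 the commands look like this (node names are placeholders):

```shell
# On each new node, stage a join to any existing member (not yet applied):
riak-admin cluster join riak@existing-node1.example.com

# Review the planned ring transition, then apply all staged changes at once:
riak-admin cluster plan
riak-admin cluster commit
```

Staging all six joins and committing them in a single plan triggers one ownership handoff rather than six separate ones; you can watch progress afterwards with `riak-admin transfers`.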

Best regards!

Implementing a "subscribe" capability

I'm using Riak for online processing of incoming data and have run into a problem.
I'm using the riak-erlang-client. It has a function that returns all the keys in a bucket, but I'd like to receive data continuously as it arrives (that is, to implement a subscription).
I asked this question on Stack Overflow and was told that the standard tools will not do this.
I'd still like to implement this capability. Is it possible to do so by modifying the riak-erlang-client?
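Lacking a push mechanism, one workaround (my own assumption, not a standard feature) is to have writers maintain an integer timestamp secondary index on each object and poll a range on it; sketched with the Python client for brevity, though the same 2i range query exists in the Erlang client:

```python
def keys_since(bucket, last_ts, now_ts):
    """One polling step: fetch keys whose 'ts_int' secondary index entry
    (assumed to be written by the producers) falls in (last_ts, now_ts]."""
    return list(bucket.get_index("ts_int", last_ts + 1, now_ts))
```

Calling this in a loop approximates a subscription. The server-side alternative would be a postcommit hook that forwards each written object to your process, which avoids polling but requires deploying Erlang code on the cluster.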
Matthew Brender | 23 Jan 18:39 2015

Riak Recap - Jan 23 2015

Good morning, afternoon and evening everyone.

I'm Matt Brender, new to Basho and focused on efforts out here in the open source portion of the Riak world. Tyler Hannan recommended that the very first task I take on is bringing back the Recap and Mark confirmed it was valuable to others. I hope you enjoy it.

State of the Union

Since we haven't had one of these in a while, here is an update on the latest software we have available.


  • I (Matt) have joined as a Developer Advocate. If you're wondering who I am, you can check out this blog post or ping me on Twitter
  • The Riak Recap is back up and running. We're starting with the goal of every other week and will adjust based on feedback
  • There's an open TODO of tracking unanswered questions coming across IRC & email. More to come on that
  • Feedback is not just welcome, but is necessary. More ways of working on prioritization are in the plans. Send emails, tweets or otherwise to me in the meantime

Event news

  • We're co-hosting the New Stack Meetup on March 3rd in Seattle
  • We have multiple Basho engineers speaking at Erlang Factory on 26-27 March
  • We'll announce Meetup events in San Francisco and New York City in the next few weeks. Join any of these groups to keep up with the news

Dev news

Study material

That's it for this round. Thanks for reading.


Matt Brender | Developer Advocacy Lead
Basho Technologies
Hristo Asenov | 23 Jan 17:32 2015

using g-counter for sequence number synchronization

Hello everyone,

I have noticed that in the latest documentation on Riak data types, it is not recommended to use Counters for ordered IDs (UUIDs). Can I implement a g-counter (as described in the CRDT paper) using Riak's Set datatype? I am wondering whether that will work out well for my use case.

What I would like to do is synchronise sequence numbers between multiple source processes that send their inputs to a single centralised process. I want all the sequence numbers to be unique so that the centralised process can order the input messages by sequence number. Thus I would have an integer entry for each source process in the set, and the sum of all the integers would be my unique ID. After a source process writes a value to the DHT for its corresponding entry, would it then have to read the value back from the DHT to make sure it was committed without conflicts?
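The g-counter semantics from the CRDT paper are easy to state concretely: one entry per actor, increments only touch your own entry, the value is the sum, and concurrent states merge by per-actor maximum (so no read-back is needed for convergence, only for observing others' increments). A sketch of the state logic, independent of how the entries are stored in Riak (encoding them as, say, "actor:count" strings in a Set is one possibility, not an official pattern):

```python
def increment(state, actor, by=1):
    """Only `actor` ever bumps its own entry; state is {actor: count}."""
    s = dict(state)
    s[actor] = s.get(actor, 0) + by
    return s

def merge(a, b):
    """Join two replicas: per-actor elementwise max (commutative, idempotent)."""
    out = dict(a)
    for actor, n in b.items():
        out[actor] = max(out.get(actor, 0), n)
    return out

def value(state):
    """The counter's value is the sum over all actors."""
    return sum(state.values())
```

One caveat for your use case: a g-counter converges without conflicts, but its value is only unique per actor at the moment of the increment; using (actor, own-count) pairs as IDs is what actually guarantees uniqueness.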

- Hristo
Santi Kumar | 23 Jan 07:39 2015

Having trouble connecting to Riak on EC2 from another EC2 instance

I have installed Riak on EC2 and kept it wide open for inbound connections. On the source instance, we have allowed outbound traffic on ports 8087 and 8098 to the public IP of the Riak instance.

Are any other ports required? I'm using the Python and curl clients, but still cannot connect on either port.

Please let me know if I'm missing something.
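Besides the security-group rules, it is worth checking that Riak itself is listening on a non-loopback interface; by default the listeners may be bound to only. On Riak 2.x this lives in riak.conf (older releases set the same thing in app.config):

```
## riak.conf -- bind the HTTP and protocol-buffers listeners to all
## interfaces (or to the instance's private IP) so remote clients can connect:
listener.http.internal = 0.0.0.0:8098
listener.protobuf.internal = 0.0.0.0:8087
```

After restarting, `curl http://<riak-ip>:8098/ping` from the client instance should return `OK`. Also note that it is the inbound rules on the Riak instance's security group that matter most, and if both instances are in the same VPC you should use the private IP.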

Seema Jethani | 23 Jan 02:21 2015

Basho Product Alert Update re DataTypes Disk Format Incompatibility

After further testing, we have found that keys storing Maps in general, and not just the nested Maps created by Riak 2.0.0 to 2.0.2, are unreadable after upgrading to 2.0.4.

Thus, users will be affected if they:
- Use Riak DataTypes to store Maps
- Have already upgraded to Riak 2.0.4

If you do not use Riak DataTypes (aka CRDTs) you are not affected and may upgrade to 2.0.4 as normal.

Note that customers using multi-datacenter replication (MDC) between sites with 2.0.0-2 and 2.0.4 will also experience failures.

Basho is preparing two fixes for the issue. The first fix is to enable reading data in the old and new formats, and the second fix is to add controls for which version is written for rolling upgrade and multi-datacenter configurations. Both are targeted for the 2.0.5 release.

Do let us know if you have any questions or concerns. Thank you for your patience.

Seema Jethani
Director of Product Management, Basho
4083455739 | <at> seemaj
改变自己 | 22 Jan 02:47 2015

How to upload a file to a Riak CS server?

Dear friends,
I recently installed a Riak CS server myself, and then tried to use the AWS S3 interface to upload a file to it.

The client-side code:

var bucket = new AWS.S3({
    // credentials and transport options go at the top level, not inside `params`
    accessKeyId: 'DVJDPQGX8QG3QLR_T-J2',
    secretAccessKey: '6cThOUXOmxHNAlsjyJzE03-Ph2caqIlAiaG7oQ==',
    sslEnabled: false,
    httpOptions: {proxy: '', xhrWithCredentials: true},
    // endpoint: ''
    params: {Bucket: 'test-bucket'}
});

var params = {Key:, ContentType: file.type, Body: file}; // Key assumed to be the file's name
bucket.upload(params, function (err, data) {
    results.innerHTML = err ? 'ERROR!' : 'UPLOADED.';
});

As a result it failed: there is no response from the server and the file is not uploaded.

Now I suspect it may be caused by the same-origin policy, so I would like to know whether there is a way to configure CORS on the server side, or another way to solve my problem.

If the fault is in my client-side code, please correct me!

Best wishes!

Comment:

I used the AWS S3 JavaScript interface on the client side.
