Matthew Brender | 31 Aug 21:44 2015

Riak Recap - August 31, 2015

We're starting the week with a Recap!

The most common questions recently have been about Riak Search (the Apache Solr integration) in Riak KV. Here are some highlights on that topic and more.

## Announcements
* The Riak KV Go (golang) client is out of beta! Version 1.1 is the latest release [0]
* The Riak Mesos framework is in beta and welcomes your feedback [1]

## Community Update
* Our community repository received a major update this past week. We retired the Release Notes concept and added a set of labels to track projects and priorities [2]. See how you can get involved by checking out the open issues [3]

## Recently Answered
* Zeeshan continues to explain how Riak Search (Solr) works while helping Hao with his configuration [4]
* Dmitri explains getting started with Riak S2 (aka Riak CS) [5]
* Zeeshan confirms that write_once buckets are not (yet) indexed in Riak Search [6]
* Dmitri and Zeeshan give examples of custom indexing in Riak Search [7]
* There's a great thread on monitoring protobufs [8]
* We learn that the HTTP listening port is identical on all Riak KV nodes by default [9]
* Toby found a way to get the FQDN recognized in a new cluster [10]
* Dmitri explains that stale pending transfers shown in riak-admin can be dismissed with force commands [11]
* Magnus links to helpful resources on deep pagination using Riak Search (Solr) [12]

## Open Questions - Need Your Help
* Timur is writing a Scala wrapper project and has some further questions on CRDT logic [13]
* Pete is running into an issue using Riak KV in Docker on Windows and he's due to send out his Vagrantfile [14]

Have a great week,
Matt Brender
Developer Advocate @ Basho
@mjbrender

[0] https://github.com/basho/riak-go-client
[1] https://github.com/basho-labs/riak-mesos#riak-mesos-framework-in-beta
[2] https://github.com/basho-labs/the-basho-community#labels
[3] https://github.com/basho-labs/the-basho-community/issues
[4] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2015-August/017451.html
[5] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2015-August/017454.html
[6] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2015-August/017457.html
[7] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2015-August/017466.html
[8] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2015-August/017445.html
[9] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2015-August/017474.html
[10] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2015-August/017478.html
[11] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2015-August/017489.html
[12] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2015-August/017515.html
[13] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2015-August/017502.html
[14] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2015-August/017527.html
Matt Cochran | 30 Aug 19:45 2015

Creating Specific Users

Hello,

I'm trying to use Riak CS as a proxy for S3, and I need to support a specific user/key/secret instead of having Riak CS create the key/secret for me. Does anyone have tips on how to do this?

Thanks,

Matt

Bryan Fink | 29 Aug 00:04 2015

Re: worker_limit reached for map reduce jobs

On Thu, 27 Aug 2015 at 18:32:31 +0000 Girish Shankarraman <gshankarraman@vmware.com> wrote:

Hello,

Currently using Riak 2.1.1. I have 7 nodes in my cluster.
I am looking for some information on how these worker_limits are configured.

I have 50 Linux (client) hosts trying to run MapReduce jobs on Riak. I am getting the error below, where some of the hosts complain about the worker_limit being reached.

Looking for some insight on whether I can tune the system to avoid this error. I couldn't find much documentation around the worker_limit.

{"phase":0,"error":"[worker_limit_reached]","input":"{<<\"provisionentry\">>,<<\"R89Okhz49SDje0y0qvcnkK7xLH0\">>}","type":"result","stack":"[]"} with query MapReduce(path='/mapred', reply_headers={'content-length': '144', 'access-control-allow-headers': 'Content-Type', 'server': 'MochiWeb/1.1 WebMachine/1.10.8 (that head fake, tho)', 'connection': 'close', 'date': 'Thu, 27 Aug 2015 00:32:22 GMT', 'access-control-allow-origin': '*', 'access-control-allow-methods': 'POST, GET, OPTIONS', 'content-type': 'application/json'}, verb='POST', headers={'Content-Type': 'application/json'}

Thanks,

- Girish Shankarraman
 
Hi, Girish. The `worker_limit` is a riak_pipe tunable. It's there to keep you from trying to do "too much" at once. Your MR jobs spin up riak_pipe workers, and the number of them per vnode is what this tunable limits.

It has been a while since I last recorded this, but to calculate the number of workers per vnode that an MR job spins up, use roughly this formula:

   (using key-listing/2i/search ? 1 : 0) +
   (number of map stages) +
   (number of map stages with pre-reduce enabled) +
   (number of reduce stages)

Multiply that sum by the number of MR jobs you want to have in-flight concurrently. That will give you an overestimate of the worker_limit to use (because not all stages use a worker on every vnode). The default is a somewhat arbitrary 50.
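
For example, a hypothetical job that takes its input from a 2i query, runs one map phase with pre-reduce enabled, and finishes with a single reduce phase needs roughly:

   1 (2i input) + 1 (map) + 1 (pre-reduce) + 1 (reduce) = 4 workers per vnode

Keeping 50 such jobs in flight concurrently would then suggest a worker_limit of about 4 * 50 = 200 (an overestimate, per the caveat above).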

Note that you'll need to re-examine the other limits in the system as well (like the number of JS VMs if your MR jobs use Javascript).

Back in the day, you'd set this by adding a line like so to riak's app.config:

   {riak_pipe, [{worker_limit, 50}]}

That might be something like this in riak.conf, but I'm not sure:

   riak_pipe.worker_limit = 50

-Bryan

Pete Slater | 28 Aug 12:49 2015

A problem running Riak in a Docker image under Vagrant on Windows 7, 8 and 10

I am having a problem with a Docker image running Riak which is causing me concern.

I have a Docker image which is run inside a Vagrant VM. The Docker image runs Riak on a CentOS 7 guest OS and is configured to start Riak via a supervisor script.

The issue I am having is that when I run the Vagrant VM on a MacOS host, Riak in the Docker image starts fine.

However, when I run the same Vagrantfile on a Windows host, Riak fails to start with the error ‘Protocol: ~tp: the name riak@127.0.0.1 seems to be in use by another erlang node’.

I have tried changing the node name but the error persists.

The strange thing is that Riak seems to be running, as we can query it via the REST API. But when we log into the Docker container and try to communicate with it using something like ‘riak ping’, we get: Node ‘riak@127.0.0.1’ not responding to pings.

Any help would be appreciated. If you need further information then please ask.

Alex Moore | 28 Aug 06:21 2015

Riak Go Client 1.1 is Available!

Hi All,

The Go client for Riak is out of beta, so Go get it!

To install it in your Go project, just run `go get github.com/basho/riak-go-client`.

You can find more information on the project page (https://github.com/basho/riak-go-client), and in the API docs (https://godoc.org/github.com/basho/riak-go-client).
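
For anyone who wants a quick feel for the client, here is a minimal sketch of storing and fetching a value. It assumes a single local node on the default protocol buffers port (8087); the bucket and key names are made up, and the exact builder/response names are best double-checked against the API docs above.

   package main

   import (
       "fmt"

       riak "github.com/basho/riak-go-client"
   )

   func main() {
       // Build a one-node cluster pointing at a local Riak KV node (PB port 8087).
       node, err := riak.NewNode(&riak.NodeOptions{RemoteAddress: "127.0.0.1:8087"})
       if err != nil {
           panic(err)
       }
       cluster, err := riak.NewCluster(&riak.ClusterOptions{Nodes: []*riak.Node{node}})
       if err != nil {
           panic(err)
       }
       if err = cluster.Start(); err != nil {
           panic(err)
       }
       defer cluster.Stop()

       // Store a plain-text value under test/hello.
       store, err := riak.NewStoreValueCommandBuilder().
           WithBucket("test").
           WithKey("hello").
           WithContent(&riak.Object{ContentType: "text/plain", Value: []byte("world")}).
           Build()
       if err != nil {
           panic(err)
       }
       if err = cluster.Execute(store); err != nil {
           panic(err)
       }

       // Fetch it back and print the stored value.
       fetch, err := riak.NewFetchValueCommandBuilder().
           WithBucket("test").
           WithKey("hello").
           Build()
       if err != nil {
           panic(err)
       }
       if err = cluster.Execute(fetch); err != nil {
           panic(err)
       }
       rsp := fetch.(*riak.FetchValueCommand).Response
       fmt.Println(string(rsp.Values[0].Value))
   }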

Special thanks to Luke Bakken & Chris Mancini for their work on the client, Timo Gatsonides for his inspiring work on goriakpbc, and Sergio Arteaga for submitting GitHub issues.

Thanks,
Alex
Bora Kou | 28 Aug 02:05 2015

indexes_not_supported,riak_kv_bitcask_backend error on multi backend with leveldb

Hi,

I am testing out the Riak KV multi backend with LevelDB and Bitcask. I configured Bitcask as the default backend and LevelDB via a bucket type. I am using Riak KV version 2.0.5-1.

I am able to do an HTTP PUT with an index, but when I try to GET by the index, it gives me "{error,{error,{indexes_not_supported,riak_kv_bitcask_backend}}}"

Am I missing something?  Is this type of setting supported?

Here is the information:

## PUT (with index works fine)
curl -v -XPUT localhost:8098/types/xfiles/buckets/xfiles_master_aws_pod98/keys/john \
  -H "x-riak-index-email_bin: john@example.com" \
  -d 'John Testing'

* About to connect() to localhost port 8098 (#0)
*   Trying 127.0.0.1... connected
> PUT /types/xfiles/buckets/xfiles_master_aws_pod98/keys/john HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: localhost:8098
> Accept: */*
> x-riak-index-email_bin: john@example.com
> Content-Length: 12
> Content-Type: application/x-www-form-urlencoded
* upload completely sent off: 12 out of 12 bytes
< HTTP/1.1 204 No Content
< Vary: Accept-Encoding
< Server: MochiWeb/1.1 WebMachine/1.10.5 (jokes are better explained)
< Date: Thu, 27 Aug 2015 23:32:20 GMT
< Content-Type: application/x-www-form-urlencoded
< Content-Length: 0
* Connection #0 to host localhost left intact
* Closing connection #0

## GET INDEX (got error)
curl -v localhost:8098/types/xfiles/buckets/xfiles_master_aws_pod98/index/email_bin/john@example.com

* About to connect() to localhost port 8098 (#0)
*   Trying 127.0.0.1... connected
> GET /types/xfiles/buckets/xfiles_master_aws_pod98/index/email_bin/john@example.com HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: localhost:8098
> Accept: */*
< HTTP/1.1 500 Internal Server Error
< Vary: Accept-Encoding
< Server: MochiWeb/1.1 WebMachine/1.10.5 (jokes are better explained)
< Date: Thu, 27 Aug 2015 23:41:52 GMT
< Content-Type: text/html
< Content-Length: 305
* Connection #0 to host localhost left intact
* Closing connection #0
<html><head><title>500 Internal Server Error</title></head><body><h1>Internal Server Error</h1>The server encountered an error while processing this request:<br><pre>{error,{error,{indexes_not_supported,riak_kv_bitcask_backend}}}</pre><P><HR><ADDRESS>mochiweb+webmachine web server</ADDRESS></body></html>

## GET THE KEY (works fine)
curl localhost:8098/types/xfiles/buckets/xfiles_master_aws_pod98/keys/john
John Testing


## BUCKET-TYPE
riak-admin bucket-type list
default (active)
xfiles (active)

riak-admin bucket-type status xfiles
xfiles is active

active: true
allow_mult: true
backend: <<"leveldb_multi">>
basic_quorum: false
big_vclock: 50
chash_keyfun: {riak_core_util,chash_std_keyfun}
dvv_enabled: true
dw: quorum
last_write_wins: false
linkfun: {modfun,riak_kv_wm_link_walker,mapreduce_linkfun}
n_val: 3
notfound_ok: true
old_vclock: 86400
postcommit: []
pr: 0
precommit: []
pw: 0
r: quorum
rw: quorum
small_vclock: 50
w: quorum
young_vclock: 20
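
(For reference, a type with these properties would typically have been created and activated along these lines; this is a reconstruction from the status output above, not a transcript of what was actually run:)

riak-admin bucket-type create xfiles '{"props":{"backend":"leveldb_multi"}}'
riak-admin bucket-type activate xfiles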


## MY CONFIGURATION (6 nodes)
cat /etc/riak/riak.conf | grep multi
storage_backend = multi
multi_backend.bitcask_multi.storage_backend = bitcask
multi_backend.bitcask_multi.bitcask.data_root = /var/lib/riak/bitcask
multi_backend.leveldb_multi.storage_backend = leveldb
multi_backend.leveldb_multi.leveldb.data_root = /var/lib/riak/leveldb
multi_backend.leveldb_multi.leveldb.maximum_memory.percent = 30
multi_backend.leveldb_multi.leveldb.compaction.trigger.tombstone_count = 1000
multi_backend.default = bitcask_multi

cat /var/lib/riak/generated.configs/* | grep multi
      {storage_backend,riak_kv_multi_backend},
      {multi_backend_default,<<"bitcask_multi">>},
      {multi_backend,
          [{<<"bitcask_multi">>,riak_kv_bitcask_backend,
           {<<"leveldb_multi">>,riak_kv_eleveldb_backend,




Thank you,


-Bora


Girish Shankarraman | 27 Aug 20:32 2015

worker_limit reached for map reduce jobs

Hello,

Currently using Riak 2.1.1. I have 7 nodes in my cluster.
I am looking for some information on how these worker_limits are configured.

I have 50 Linux (client) hosts trying to run MapReduce jobs on Riak. I am getting the error below, where some of the hosts complain about the worker_limit being reached.

Looking for some insight on whether I can tune the system to avoid this error. I couldn't find much documentation around the worker_limit.

{"phase":0,"error":"[worker_limit_reached]","input":"{<<\"provisionentry\">>,<<\"R89Okhz49SDje0y0qvcnkK7xLH0\">>}","type":"result","stack":"[]"} with query MapReduce(path='/mapred', reply_headers={'content-length': '144', 'access-control-allow-headers': 'Content-Type', 'server': 'MochiWeb/1.1 WebMachine/1.10.8 (that head fake, tho)', 'connection': 'close', 'date': 'Thu, 27 Aug 2015 00:32:22 GMT', 'access-control-allow-origin': '*', 'access-control-allow-methods': 'POST, GET, OPTIONS', 'content-type': 'application/json'}, verb='POST', headers={'Content-Type': 'application/json'}


Thanks,

— Girish Shankarraman

Timur Fayruzov | 20 Aug 02:59 2015

Default values vs non-existing keys for CRDTs

Hello,

It seems that the Riak Datatypes API does not allow me to distinguish between non-existing keys and default values. For example, if I query for a non-existing key as follows:

val fetchOp = new FetchCounter.Builder(key).build()
val c = client.execute(fetchOp).getDatatype

I'll get a counter that holds 0. Now, if I put a counter with value 0 at this key and run the query, I get the same result. Is there any way to distinguish between these two different states?

Note: when working with sets there is a context that I can check. If I fetch a set and the context is null, it means that the set does not exist under this key. This trick does not work for counters though, as they do not maintain context. Is it a valid trick to use though?

I posted this to StackOverflow a while ago, but no luck: http://stackoverflow.com/questions/31845164/riak-dataypes-default-values-vs-non-existing-keys

Thanks,

Timur

changmao wang | 19 Aug 03:01 2015

s3cmd error: access to bucket was denied

Matthew,

I used s3cmd --configure to generate the ".s3cfg" config file and then accessed the Riak CS service with s3cmd.
The access_key and secret_key in ".s3cfg" are the same as the admin_key and admin_secret in "/etc/riak-cs/app.config".

However, I got the error below when using s3cmd to access one bucket.

root@cluster-s3-hd1:~# s3cmd -c /root/.s3cfg ls s3://pipeline/article/111.pdf
ERROR: Access to bucket 'pipeline' was denied

By the way, I am using Riak and Riak CS 1.4.2 on Ubuntu. The current production cluster is a legacy system without documentation for co-workers.

The attached file is the ".s3cfg" generated by "s3cmd --configure".
--
Amao Wang
Best & Regards
Attachment (.s3cfg): application/octet-stream, 1608 bytes
Dennis Nicolay | 18 Aug 23:03 2015

Fastest Method for Importing Into Riak.

Hi,

What is the fastest way to import data from a delimited file into Riak using the .NET RiakClient?

Is there a bulk insert in the other Riak clients?

Thanks in advance,

Dennis

Brant Fitzsimmons | 18 Aug 01:20 2015

Search limitations

Hello all,

Are the search suggestions on http://docs.basho.com/riak/latest/dev/using/application-guide/#Search still valid?

Specifically, is it still advisable to use 2i when deep pagination is required, and if the cluster is going to be larger than 8-10 nodes, should I still use something other than Riak Search?

Brant Fitzsimmons