Patrice Bruno | 19 Sep 12:02 2014

If I go to RICON 2014

Hi,
(my first post)
if I go to RICON, can I hope to meet people who can answer my questions
about our deployment and our usage of Riak? I do not want to come and
just stay in my corner.

Best Regards
Patrice Bruno
Kuantic
Berend Ozceri | 19 Sep 02:38 2014

Rolling upgrade from 1.2 to 2.0

Could someone please point me toward some authoritative documentation on whether a Riak cluster running
version 1.2 can be upgraded in a rolling fashion to 2.0 directly, without going through an intermediate
release? We generally held back from 1.4 due to some of the growing pains we perceived around that release,
but are interested in moving to 2.0 in a timely fashion.
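
For reference, my understanding of the documented per-node rolling-upgrade loop is roughly the following (the package file and node name here are placeholders); my question is whether this loop is safe going from 1.2 straight to 2.0:

# on each node in turn, once the previous node is healthy again
riak stop
sudo dpkg -i riak_2.0.0-1_amd64.deb    # or your platform's package upgrade
riak start
riak-admin wait-for-service riak_kv riak@node1.example.com
riak-admin transfers                   # proceed when no handoffs remain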

Thanks,

Berend
anandm | 17 Sep 06:37 2014

Yokozuna Scale

I started checking out Riak as an alternative for one of my projects. We are
currently using SolrCloud 4.8 (36 shards + 3 replicas each), with all fields
stored (64 fields per doc, each doc about 6 KB on average).
I want to migrate this: push all the data out of Solr and store it in a KV
store like Riak, but keep the indexes going in Solr (as I have a lot of code
written already around Solr).

I came across Yokozuna today, and it sounds like it's going to be a perfect
match for my requirements...

Just a couple of questions; I tried searching online for answers but
couldn't find references to large-scale Yokozuna deployments.

1. I have over 250M documents indexed and stored (that's very bad) in the
current SolrCloud deployment. With a replication factor of 3, the total Solr
index + data size is about 4.5 TB, spread across 6 servers (12 cores / 24
threads, 96 GB RAM each).
    Index search performance and write performance are good enough with 36
shards and composite-ID routing. I want to migrate this straight to Riak
with Yokozuna enabled.
    I'll be deploying a 5-6 node Riak cluster; that would mean roughly
50M docs stored on each node, with Yokozuna indexing them locally in each
node's Solr too (only the indexed fields).
           a. Will this Solr instance have just one core to index the data?
(As of now I plan to have just one bucket; see the setup sketch at the end
of this message.)
           b. Would it be able to handle the load of searching through 50M
docs with just one core? I think RAM won't be an issue, but I have not seen
a single Solr instance serving 50M docs, so I'm a bit worried about that.

2. Every time I query the Solr instance via Riak's /search handler, the
(Continue reading)
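
For reference, the minimal Yokozuna setup sketch mentioned in question 1a, based on my reading of the Riak 2.0 search docs (index and bucket names here are just examples; as I understand it, each search index becomes one Solr core on every node):

# create a search index (Yokozuna creates one Solr core for it on each node)
curl -XPUT http://localhost:8098/search/index/docs_index

# associate the bucket with the index so writes get indexed
curl -XPUT http://localhost:8098/buckets/docs/props \
     -H 'Content-Type: application/json' \
     -d '{"props":{"search_index":"docs_index"}}'

# query via Riak's Solr interface
curl 'http://localhost:8098/search/query/docs_index?q=field_s:value&wt=json'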

Ammar Ahmed | 16 Sep 18:23 2014

Getting: curl: (56) Recv failure: Connection reset by peer while trying to create admin user for Riak-CS on AWS

I am getting

curl: (56) Recv failure: Connection reset by peer

error when trying to create the admin user for Riak CS on AWS.

Details:

A 6-node Riak CS cluster is up and running:

Using Riak-1.4.10

Riak-CS-1.5.0

Stanchion-1.5.0

OS:

Distributor ID: Ubuntu

Description:    Ubuntu 12.04.5 LTS

Release:        12.04

Codename:       precise

Also, we have added {anonymous_user_creation, true} to the Riak CS config.
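
For completeness, this is the request being issued, per the Riak CS docs (the email and name are examples, and port 8080 is our configured CS listener):

curl -XPOST http://localhost:8080/riak-cs/user \
     -H 'Content-Type: application/json' \
     --data '{"email":"admin@example.com", "name":"admin"}'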


Regards,


Dr Ammar Ahmed | Enterprise Architect

CCC Information Services Inc.              

222 Merchandise Mart Plaza, Suite 900                          

Chicago, IL 60654                                                            

W: 312-229-3655

M: 773-782-6768
Vision for the road ahead™

Jorge Garrido gomez | 15 Sep 17:22 2014

Error on crash.log

Hello

My team detected the following error on our Riak cluster:

2014-09-15 11:09:14 =SUPERVISOR REPORT====
     Supervisor: {local,riak_pipe_fitting_sup}
     Context:    shutdown_error
     Reason:     noproc
     Offender:   [{pid,<0.12743.378>},{name,undefined},{mfargs,{riak_pipe_fitting,start_link,[]}},{restart_type,temporary},{shutdown,2000},{child_type,worker}]

We don't know what it is, but sometimes our local application gets disconnected from Riak's protocol buffers interface with a reqpb_timeout (we use the Erlang client; a minimal sketch of our call shape follows the list below). This is the configuration of our cluster:

Riak version 1.4.2.
Cluster with 4 nodes.
Enable search on all nodes.
All default settings on vm.args and app.config
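
In case it helps, this is a minimal sketch of the shape of our calls (host, port, bucket, key, and the explicit 60-second timeout here are placeholders):

%% riak-erlang-client, with an explicit per-request timeout
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
{ok, Obj} = riakc_pb_socket:get(Pid, <<"bucket">>, <<"key">>, [], 60000).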

I hope you can help us,


bryan hunt | 15 Sep 10:06 2014

Re: Generic server memsup terminating Mountain Lion

Hi Spiro,

What version are you running, if you don't mind me asking?

Bryan

On 14 Sep 2014, at 21:34, Spiro N <spiro <at> greenvirtualsolutions.com> wrote:

Hi Bryan, thanks for your response. I had used the same commands with the same settings you used before I raised it, to no avail. After inspecting the process I noticed active anti-entropy was the culprit. Since I am running only a single node, I ended up disabling active anti-entropy, and the problem disappeared after that. If I had multiple nodes it would be a concern, but for now it's a quick fix. I wonder if 2.0 will give me the same errors.
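
For anyone else who hits this, the change was the anti_entropy setting in the riak_kv section of app.config. A minimal sketch, with the rest of the section elided ({on, []} is the default I switched away from):

{riak_kv, [
    {anti_entropy, {off, []}}   %% default is {anti_entropy, {on, []}}
    %% ...rest of the riak_kv section unchanged...
]}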

On Sep 14, 2014 11:57 AM, "Bryan Hunt" <bhunt <at> basho.com> wrote:
Spiro,

I am somewhat clueless on OS X, but I use the following commands when starting Riak, and they seem to work for me:

sudo launchctl limit maxfiles 65536 65536
ulimit -n 65536

Bryan

On Wed, Sep 10, 2014 at 1:54 AM, Toby Corkindale <toby <at> dryft.net> wrote:
Are you trying to use Riak CS for file storage, or are you just using Riak and storing 20 MB against a single key?
It's not clear from your email.

I ask because if you're in the latter case, it's just not going to work -- I believe the maximum per key is around a single megabyte.

On 10 September 2014 07:30, Spiro N <spiro <at> greenvirtualsolutions.com> wrote:
Sorry, I am sure you have posted in regards to this topic before, but I am at a standstill. It just started after doing a "get"; the video was about 20 MB. beam.smp spikes at 100% and Riak crashes. I have done everything the docs ask for, and I have provided everything I feel may be relevant below. However, I don't know what I don't know and could use some help. Mountain Lion does not let you set the ulimit to unlimited. Thanks in advance for anything at all that may help.

Spiro

[snip: launchctl limits, bitcask listing, crash.log, error.log, and erlang.log output, quoted in full in the original message below]

--
Turning and turning in the widening gyre
The falcon cannot hear the falconer
Things fall apart; the center cannot hold
Mere anarchy is loosed upon the world

Tom Santero | 11 Sep 15:32 2014

Re: Single node, nval = 1

(bringing this back to the list) 

So the largest challenge you face is that when you attempt to optimize Riak for a single-node installation, you end up introducing new issues down the line in the event that you need to scale out. For example, should you reduce the n_val to 1 and drop the ring_size to something like 8 (in order to keep the number of databases running on that single VM low), you're going to face a few issues (a sketch of the tuning itself follows the list).

1) As John Daily already mentioned, changing the replication factor after the fact will almost certainly lead to odd and unexpected behavior. 

2) Ring resizing is Riak's dirty little secret, and pretty expensive. 

Should you do these hacks in an attempt to reduce the cost of maintaining two separate codebases, the most reasonable approach when you need to scale would be to perform a data migration from the hacky 1-node cluster to a cluster optimized for a multi-node environment and availability. Note that it's much easier for me to type "do a data migration" than to actually perform one, so take these recommendations with a grain of salt.
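
For concreteness, a sketch of that single-node tuning (the bucket name is an example, and the ring size has to be set before the node's very first start):

%% app.config, riak_core section (riak.conf in 2.0: ring_size = 8)
{riak_core, [
    {ring_creation_size, 8}
]}

# per-bucket replication factor, via the HTTP API
curl -XPUT http://localhost:8098/buckets/blobs/props \
     -H 'Content-Type: application/json' \
     -d '{"props":{"n_val":1}}'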

Regards,
Tom


On Thu, Sep 11, 2014 at 8:41 AM, Henning Verbeek <hankipanky <at> gmail.com> wrote:
On Thu, Sep 11, 2014 at 1:38 PM, Tom Santero <tsantero <at> gmail.com> wrote:
> Running a singleton instance of Riak throws away all of the advantages it
> aims to provide.

I understand that. But are there any penalties? Does Riak as a
single-node instance run worse than in a full cluster?

> If you need a single database server, why not use something
> designed for that end and rock solid, like PostgreSQL?

My problem is "scaling down". We have developed something very similar
to Riak CS (implementing our special business logic) where we store
arbitrarily large data in Riak (in chunks, in a multi-node cluster,
with eventual consistency, nval=3, etc.), and thanks to HAProxy and
Riak, we can scale up as much as we want.

But, our application is sometimes installed on customer premises,
scaled down, on a single VM. They don't *need* scalability (yet), and they
often don't need (read: want to pay for) higher availability. We don't
want to have to maintain two versions of our application, one that
uses Riak, another that uses ... say a local filesystem for the
chunks. Two versions would mean more code to maintain, test,
package...

Hence the idea of using Riak in a single-node setup with nval=1 (so as
not to consume triple the disk space).

Cheers,
Henning

Henning Verbeek | 11 Sep 13:27 2014

Single node, nval = 1

I'd like to run Riak in a single-node installation as a datastore for
my application when it runs in small, single-machine environments.
Is there anything that speaks against doing so? I'm thinking that in
such a setup, n_val should be set to 1; everything else makes no
sense, right? Would you also set the ring size to 1?

Thanks for your views and comments,
Henning
Bozhidar Bozhanov | 10 Sep 09:10 2014

Riak responds with 204 even if returnbody is true

Hello,

We are running load tests, and Riak sometimes responds with 204 No Content when we do a PUT operation, even though we've passed returnbody=true and the body we PUT is not empty (this answer implies this is not expected).

It happens rarely, so what might be the possible reasons?

Our Riak cluster has 9 nodes, with w=4, n_val=5, r=2.
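
A minimal reproduction of the kind of request (bucket and key are examples); per the HTTP API docs, a PUT with returnbody=true should come back as 200 OK with the stored body, so the sporadic 204 is the anomaly:

curl -v -XPUT 'http://localhost:8098/buckets/test/keys/k1?returnbody=true' \
     -H 'Content-Type: text/plain' \
     -d 'hello'
# expected: HTTP/1.1 200 OK with "hello" echoed back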



Best,

Bozhidar

Spiro N | 9 Sep 23:30 2014

Generic server memsup terminating Mountain Lion

Sorry, I am sure you have posted in regards to this topic before, but I am at a standstill. It just started after doing a "get"; the video was about 20 MB. beam.smp spikes at 100% and Riak crashes. I have done everything the docs ask for, and I have provided everything I feel may be relevant below. However, I don't know what I don't know and could use some help. Mountain Lion does not let you set the ulimit to unlimited. Thanks in advance for anything at all that may help.

Spiro


This is my limit output; I am running Mountain Lion 10.8.5

server:riak gvs$ launchctl limit
    cpu         unlimited      unlimited     
    filesize    unlimited      unlimited     
    data        unlimited      unlimited     
    stack       8388608        67104768      
    core        0              unlimited     
    rss         unlimited      unlimited     
    memlock     unlimited      unlimited     
    maxproc     709            1064          
    maxfiles    65336          1000000   
----------------------------------------------------------
This is my Bitcask content

server:lib gvs$ cd /usr/local/var/lib/riak/
server:riak gvs$ ls bitcask/*/* |wc -l
     206
--------------------------------------------------------------
This is the crash.log message


2014-09-09 14:34:51 =ERROR REPORT====
** Generic server memsup terminating
** Last message in was {'EXIT',<0.20807.0>,{emfile,[{erlang,open_port,[{spawn,"/bin/sh -s unix:cmd 2>&1"},[stream]],[]},{os,start_port_srv_handle,1,[{file,"os.erl"},{line,254}]},{os,start_port_srv_loop,0,[{file,"os.erl"},{line,270}]}]}}
** When Server state == {state,{unix,darwin},false,undefined,undefined,false,60000,30000,0.8,0.05,<0.20807.0>,#Ref<0.0.0.120573>,undefined,[reg],[]}
** Reason for termination ==
** {emfile,[{erlang,open_port,[{spawn,"/bin/sh -s unix:cmd 2>&1"},[stream]],[]},{os,start_port_srv_handle,1,[{file,"os.erl"},{line,254}]},{os,start_port_srv_loop,0,[{file,"os.erl"},{line,270}]}]}
2014-09-09 14:34:51 =CRASH REPORT====
  crasher:
    initial call: memsup:init/1
    pid: <0.20806.0>
    registered_name: memsup
    exception exit: {{emfile,[{erlang,open_port,[{spawn,"/bin/sh -s unix:cmd 2>&1"},[stream]],[]},{os,start_port_srv_handle,1,[{file,"os.erl"},{line,254}]},{os,start_port_srv_loop,0,[{file,"os.erl"},{line,270}]}]},[{gen_server,terminate,6,[{file,"gen_server.erl"},{line,747}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}
    ancestors: [os_mon_sup,<0.96.0>]
    messages: []
    links: [<0.97.0>]
    dictionary: []
    trap_exit: true
    status: running
    heap_size: 377
    stack_size: 24
    reductions: 204
  neighbours:
2014-09-09 14:34:51 =SUPERVISOR REPORT====
     Supervisor: {local,os_mon_sup}
     Context:    child_terminated
     Reason:     {emfile,[{erlang,open_port,[{spawn,"/bin/sh -s unix:cmd 2>&1"},[stream]],[]},{os,start_port_srv_handle,1,[{file,"os.erl"},{line,254}]},{os,start_port_srv_loop,0,[{file,"os.erl"},{line,270}]}]}
     Offender:   [{pid,<0.20806.0>},{name,memsup},{mfargs,{memsup,start_link,[]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]

2014-09-09 14:34:51 =SUPERVISOR REPORT====
     Supervisor: {local,os_mon_sup}
     Context:    shutdown
     Reason:     reached_max_restart_intensity
     Offender:   [{pid,<0.20806.0>},{name,memsup},{mfargs,{memsup,st
------------------------------------------------------------------------------------------------
This is the error.log message

server:riak gvs$ tail error.log
2014-09-09 17:00:25.907 [error] <0.439.1> gen_server memsup terminated with reason: maximum number of file descriptors exhausted, check ulimit -n
2014-09-09 17:00:25.908 [error] <0.439.1> CRASH REPORT Process memsup with 0 neighbours exited with reason: maximum number of file descriptors exhausted, check ulimit -n in gen_server:terminate/6 line 747
2014-09-09 17:00:25.908 [error] <0.97.0> Supervisor os_mon_sup had child memsup started with memsup:start_link() at <0.439.1> exit with reason maximum number of file descriptors exhausted, check ulimit -n in context child_terminated
2014-09-09 17:00:25.908 [error] <0.442.1> gen_server memsup terminated with reason: maximum number of file descriptors exhausted, check ulimit -n
2014-09-09 17:00:25.908 [error] <0.442.1> CRASH REPORT Process memsup with 0 neighbours exited with reason: maximum number of file descriptors exhausted, check ulimit -n in gen_server:terminate/6 line 747
2014-09-09 17:00:25.909 [error] <0.97.0> Supervisor os_mon_sup had child memsup started with memsup:start_link() at <0.442.1> exit with reason maximum number of file descriptors exhausted, check ulimit -n in context child_terminated
2014-09-09 17:00:25.909 [error] <0.445.1> gen_server memsup terminated with reason: maximum number of file descriptors exhausted, check ulimit -n
2014-09-09 17:00:25.909 [error] <0.445.1> CRASH REPORT Process memsup with 0 neighbours exited with reason: maximum number of file descriptors exhausted, check ulimit -n in gen_server:terminate/6 line 747
2014-09-09 17:00:25.909 [error] <0.97.0> Supervisor os_mon_sup had child memsup started with memsup:start_link() at <0.445.1> exit with reason maximum number of file descriptors exhausted, check ulimit -n in context child_terminated
2014-09-09 17:00:25.910 [error] <0.97.0> Supervisor os_mon_sup had child memsup started with memsup:start_link() at <0.445.1> exit with reason reached_max_restart_intensity in context shutdown
--------------------------------------------------------------------------------------------------
This is the erlang.log

server:riak gvs$ tail erlang.log.1
Erlang R15B01 (erts-5.9.1) [source] [64-bit] [smp:4:4] [async-threads:64] [kernel-poll:true]

Eshell V5.9.1  (abort with ^G)
(riak <at> 172.16.205.254)1>
===== ALIVE Tue Sep  9 16:29:28 EDT 2014

===== ALIVE Tue Sep  9 16:44:28 EDT 2014

===== ALIVE Tue Sep  9 16:59:28 EDT 2014
{"Kernel pid terminated",application_controller,"{application_terminated,os_mon,shutdown}"}
server:riak gvs$

Mohan Radhakrishnan | 9 Sep 08:28 2014

Resources on Caching?

Hi,

      Are there more such resources on caching? Who uses Riak for caching?

Thanks,
Mohan
