Hristo Asenov | 29 Oct 18:28 2014

how to run riak_core (and consequently riak pg)

Hi everyone,

I stumbled upon an application, Riak PG (https://github.com/basho/webmachine.git), that I want to run on my boxes. Its main dependency is riak_core, which needs to be started before riak_pg runs. When I make a release with "./rebar generate" and then try to run "./riak_pg console" in the rel folder, I get the following error output when I try to create a new group:

riak_pg hctrl-hsa01% rel/riak_pg/bin/riak_pg console -smp enable
Exec: /home/hasenov/riak_pg/rel/riak_pg/erts-5.10.4/bin/erlexec -boot /home/hasenov/riak_pg/rel/riak_pg/releases/1/riak_pg -embedded -config /home/hasenov/riak_pg/rel/riak_pg/etc/app.config -args_file /home/hasenov/riak_pg/rel/riak_pg/etc/vm.args -- console -smp enable
Root: /home/hasenov/riak_pg/rel/riak_pg
Erlang R16B03 (erts-5.10.4) [source] [64-bit] [smp:1:1] [async-threads:5] [kernel-poll:true]

13:15:11.333 [info] Application lager started on node 'riak_pg <at> 127.0.0.1'
13:15:11.334 [info] Application crypto started on node 'riak_pg <at> 127.0.0.1'
13:15:11.339 [info] Application riak_sysmon started on node 'riak_pg <at> 127.0.0.1'
13:15:11.348 [info] Application os_mon started on node 'riak_pg <at> 127.0.0.1'
13:15:11.350 [info] Application basho_stats started on node 'riak_pg <at> 127.0.0.1'
13:15:11.350 [info] Application eleveldb started on node 'riak_pg <at> 127.0.0.1'
13:15:11.351 [info] alarm_handler: {set,{system_memory_high_watermark,[]}}
13:15:11.351 [info] Application pbkdf2 started on node 'riak_pg <at> 127.0.0.1'
13:15:11.351 [info] Application poolboy started on node 'riak_pg <at> 127.0.0.1'
13:15:11.581 [info] New capability: {riak_core,vnode_routing} = proxy
13:15:11.584 [info] New capability: {riak_core,staged_joins} = true
13:15:11.589 [info] New capability: {riak_core,resizable_ring} = true
13:15:11.591 [info] New capability: {riak_core,fold_req_version} = v2
13:15:11.593 [info] New capability: {riak_core,security} = true
13:15:11.595 [info] New capability: {riak_core,bucket_types} = true
13:15:11.597 [info] New capability: {riak_core,net_ticktime} = true
Eshell V5.10.4  (abort with ^G)
(riak_pg <at> 127.0.0.1)1> 13:15:21.525 [info] Waiting for application riak_pg_memberships to start (0 seconds).

(riak_pg <at> 127.0.0.1)1> riak_pg:create(newpg).
13:16:15.299 [error] riak_pg_memberships_vnode command failed {undef,[{riak_dt_vvorset,new,[],[]},{riak_pg_memberships_vnode,handle_command,3,[{file,"src/riak_pg_memberships_vnode.erl"},{line,143}]},{riak_core_vnode,vnode_command,3,[{file,"src/riak_core_vnode.erl"},{line,345}]},{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,505}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
13:16:15.299 [error] riak_pg_memberships_vnode command failed {undef,[{riak_dt_vvorset,new,[],[]},{riak_pg_memberships_vnode,handle_command,3,[{file,"src/riak_pg_memberships_vnode.erl"},{line,143}]},{riak_core_vnode,vnode_command,3,[{file,"src/riak_core_vnode.erl"},{line,345}]},{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,505}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
13:16:15.299 [error] riak_pg_memberships_vnode command failed {undef,[{riak_dt_vvorset,new,[],[]},{riak_pg_memberships_vnode,handle_command,3,[{file,"src/riak_pg_memberships_vnode.erl"},{line,143}]},{riak_core_vnode,vnode_command,3,[{file,"src/riak_core_vnode.erl"},{line,345}]},{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,505}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
13:16:15.299 [error] gen_fsm <0.637.0> in state waiting terminated with reason: no function clause matching riak_pg_create_fsm:waiting({vnode_error,{undef,[{riak_dt_vvorset,new,[],[]},{riak_pg_memberships_vnode,handle_command,3,[{...},...]},...]}}, {state,[{388211372416021087647853783690262677096107081728,'riak_pg <at> 127.0.0.1'},{411047335499316445744786359201454599278231027712,...},...],...}) line 119
13:16:15.300 [error] CRASH REPORT Process <0.637.0> with 0 neighbours exited with reason: no function clause matching riak_pg_create_fsm:waiting({vnode_error,{undef,[{riak_dt_vvorset,new,[],[]},{riak_pg_memberships_vnode,handle_command,3,[{...},...]},...]}}, {state,[{388211372416021087647853783690262677096107081728,'riak_pg <at> 127.0.0.1'},{411047335499316445744786359201454599278231027712,...},...],...}) line 119 in gen_fsm:terminate/7 line 622
13:16:15.300 [error] Supervisor riak_pg_create_fsm_sup had child riak_pg_create_fsm started with {riak_pg_create_fsm,start_link,undefined} at <0.637.0> exit with reason no function clause matching riak_pg_create_fsm:waiting({vnode_error,{undef,[{riak_dt_vvorset,new,[],[]},{riak_pg_memberships_vnode,handle_command,3,[{...},...]},...]}}, {state,[{388211372416021087647853783690262677096107081728,'riak_pg <at> 127.0.0.1'},{411047335499316445744786359201454599278231027712,...},...],...}) line 119 in context child_terminated
13:16:15.300 [error] gen_fsm <0.487.0> in state active terminated with reason: {'module could not be loaded',[{riak_dt_vvorset,new,[],[]},{riak_pg_memberships_vnode,handle_command,3,[{file,"src/riak_pg_memberships_vnode.erl"},{line,143}]},{riak_core_vnode,vnode_command,3,[{file,"src/riak_core_vnode.erl"},{line,345}]},{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,505}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
13:16:15.301 [error] CRASH REPORT Process <0.487.0> with 0 neighbours exited with reason: call to undefined function riak_dt_vvorset:new() in gen_fsm:terminate/7 line 622
13:16:15.301 [error] gen_fsm <0.489.0> in state active terminated with reason: {'module could not be loaded',[{riak_dt_vvorset,new,[],[]},{riak_pg_memberships_vnode,handle_command,3,[{file,"src/riak_pg_memberships_vnode.erl"},{line,143}]},{riak_core_vnode,vnode_command,3,[{file,"src/riak_core_vnode.erl"},{line,345}]},{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,505}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
13:16:15.301 [error] CRASH REPORT Process <0.489.0> with 0 neighbours exited with reason: call to undefined function riak_dt_vvorset:new() in gen_fsm:terminate/7 line 622
13:16:15.302 [error] gen_fsm <0.491.0> in state active terminated with reason: {'module could not be loaded',[{riak_dt_vvorset,new,[],[]},{riak_pg_memberships_vnode,handle_command,3,[{file,"src/riak_pg_memberships_vnode.erl"},{line,143}]},{riak_core_vnode,vnode_command,3,[{file,"src/riak_core_vnode.erl"},{line,345}]},{gen_fsm,handle_msg,7,[{file,"gen_fsm.erl"},{line,505}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
13:16:15.302 [error] CRASH REPORT Process <0.491.0> with 0 neighbours exited with reason: call to undefined function riak_dt_vvorset:new() in gen_fsm:terminate/7 line 622
13:16:15.302 [error] Supervisor riak_core_vnode_sup had child undefined started with {riak_core_vnode,start_link,undefined} at <0.487.0> exit with reason call to undefined function riak_dt_vvorset:new() in context child_terminated
13:16:15.302 [error] Supervisor riak_core_vnode_sup had child undefined started with {riak_core_vnode,start_link,undefined} at <0.489.0> exit with reason call to undefined function riak_dt_vvorset:new() in context child_terminated
13:16:15.302 [error] Supervisor riak_core_vnode_sup had child undefined started with {riak_core_vnode,start_link,undefined} at <0.491.0> exit with reason call to undefined function riak_dt_vvorset:new() in context child_terminated
{error,timeout}

It looks like riak_dt is not loaded, even though it is in the deps folder. I tried passing "-pa deps/riak_dt/ebin" on the command line, but that has no effect. Does anyone know how to get this working, or have an idea where else I could post my problem?
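A hedged note on the usual cause (my reading, not confirmed anywhere in this thread): in an embedded release, only applications packaged into the release are loadable, so -pa flags at the console have no effect. It is also worth checking that deps/riak_dt/ebin actually contains riak_dt_vvorset.beam — later riak_dt versions dropped that module, which would produce exactly this undef even with the app on the path. A sketch of the reltool.config change, with the app list assumed from a typical rebar layout:

```erlang
%% rel/reltool.config -- hedged sketch; the surrounding options are
%% placeholders, only the riak_dt entries are the point here.
{sys, [
  {lib_dirs, ["../deps"]},
  {rel, "riak_pg", "1",
   [kernel, stdlib, sasl,
    lager, riak_core,
    riak_dt,            %% the application must be part of the release...
    riak_pg]},
  {app, riak_dt, [{incl_cond, include}]}  %% ...and explicitly included
]}.
```

After editing, re-run "./rebar generate" so the release is rebuilt with riak_dt on its code path.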

- Hristo
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Ebbinge | 29 Oct 08:20 2014

Can't set long node name. Ubuntu 14.04

Hi, I just finished setting up a Riak installation on an Ubuntu 14.04 64-bit
virtual machine (VMware Fusion for Mac OS X). After installing Erlang and all
the other dependencies, the riak.conf file can be found in /etc/riak. I opened
and edited that file as root from the terminal using diakonos. I changed the
Riak node name from nodename = riak@127.0.0.1 to nodename = riak@198.162.0.41,
which is my static IP address. I also changed the IP for the HTTP internal
listener, the HTTPS internal listener, and the protocol buffers listener. Once
everything was saved, I ran "riak start" with sudo... and it doesn't start;
the riak console says "Can't set long node name". I don't know what else to
do, I've tried everything, and this is the cleanest installation I have
managed so far :(

PS: Installation was through apt-get install riak, and riak 2.0.0 was
installed.
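"Can't set long node name" comes from the Erlang VM's distribution startup: once the host part of nodename is an IP address or a dotted hostname, Riak boots in long-name mode and the VM must be able to use that host part. Two things worth checking (hedged suggestions, not from the post): the address must actually be bound on an interface of the VM, and if the node ever started as riak@127.0.0.1, the stored ring still records the old name. A sketch of the relevant riak.conf lines plus the fresh-node rename step:

```
## /etc/riak/riak.conf -- sketch; the address is taken from the post and
## is assumed to actually be bound on this machine (check with `ip addr`).
nodename = riak@198.162.0.41
listener.http.internal     = 198.162.0.41:8098
listener.protobuf.internal = 198.162.0.41:8087

## If the node previously ran as riak@127.0.0.1, then on a *fresh, empty*
## node: stop Riak, delete /var/lib/riak/ring/*, and start again so the
## ring is rebuilt under the new name.
```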

--
View this message in context: http://riak-users.197444.n3.nabble.com/Can-t-set-long-node-name-Ubuntu-14-04-tp4031989.html
Sent from the Riak Users mailing list archive at Nabble.com.
Lukas Welte | 26 Oct 14:34 2014

Riak CS Authorization Problems

Hey folks,

I wanted to play around with Riak CS to build my own S3-style store.
The Riak cluster is running perfectly (saving and retrieving data and so on), and Riak CS also seems to be up, since I was able to create a user.
I then set this user as the admin user in the config and configured s3cmd to use it.
s3cmd ls and retrieving the admin user work fine, but I am not able to create a new bucket.

How is this possible, and where should I start debugging?
I have no real idea where the problem could be, as Riak is up and Riak CS is able to communicate with Riak.
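One classic gotcha (an assumption on my part, not something confirmed in this message): s3cmd only talks to Riak CS if it is pointed there via its proxy settings, and the key pair it signs with must exactly match the admin credentials in the Riak CS config. A sketch of the relevant ~/.s3cfg entries for a default install — ports and placeholders are assumptions:

```
# ~/.s3cfg -- hedged sketch; ports assume a default Riak CS install.
access_key = <key_id returned when the admin user was created>
secret_key = <key_secret for that user>
proxy_host = localhost            # address of the Riak CS node
proxy_port = 8080                 # default Riak CS listener port
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
signature_v2 = True               # Riak CS expects AWS signature v2
```

If ls works but mb fails, comparing the request in the Riak CS console.log is a good next step; a 403 there usually means the admin key/secret in the Riak CS config don't match what s3cmd is using.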

Would love to hear some ideas what I can try.

Best regards,
Lukas

Andrew Zeneski | 26 Oct 03:53 2014

Riak Search Schema Question

Hi All, I'm trying to determine whether a use case is supported by Yokozuna or not.

With a stored value that looks like:

{"id": 1, "stats": [{"name": "stat1", "value": 1}, {"name": "stat2", "value": 5}]}
{"id": 2, "stats": [{"name": "stat3", "value": 2}, {"name": "stat1", "value": 3}]}
{"id": 3, "stats": [{"name": "stat2", "value": 3}, {"name": "stat3", "value": 1}]}

I want to find the records that have stat1 > 2. The result should be id 2.

I believe Solr 4.8 would handle this as a block-join, but I don't think Yokozuna supports this feature. My first thought was a custom extractor; are there other options I'm not seeing? 
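One workaround worth considering (my suggestion, not anything confirmed in this thread; the field names are illustrative) is to denormalize each stat into a top-level field before writing, so an ordinary Solr range query such as stat1_value:[3 TO *] finds the right document without block joins. A sketch of the idea:

```python
# Flatten nested "stats" entries into top-level fields so a plain Solr
# schema (e.g. a *_value dynamic field) can index them; the resulting
# field names here are illustrative, not from any existing schema.
def flatten_stats(doc):
    flat = {"id": doc["id"]}
    for stat in doc.get("stats", []):
        # e.g. {"name": "stat1", "value": 3} -> {"stat1_value": 3}
        flat[f'{stat["name"]}_value'] = stat["value"]
    return flat

doc = {"id": 2, "stats": [{"name": "stat3", "value": 2},
                          {"name": "stat1", "value": 3}]}
print(flatten_stats(doc))
# → {'id': 2, 'stat3_value': 2, 'stat1_value': 3}
```

The trade-off is that the flattening has to happen on the client at write time, but it keeps the query side to stock Solr syntax.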

Thanks!

Andrew
Oleksiy Krivoshey | 25 Oct 23:51 2014

riak_pipe_fitting error

Hi!

Every few days I get the following error in my Riak cluster:

2014-10-25 03:01:23.731 [error] <0.221.0> Supervisor riak_pipe_fitting_sup had child undefined started with riak_pipe_fitting:start_link() at <0.22692.2455> exit with reason noproc in context shutdown_error
2014-10-25 05:00:09.896 [error] <0.221.0> Supervisor riak_pipe_fitting_sup had child undefined started with riak_pipe_fitting:start_link() at <0.27111.2457> exit with reason noproc in context shutdown_error

The client application exits with a connection timeout error when it hits this problem.

Could you suggest what this means and how to fix it?

Thanks!

--
Oleksiy
David James | 24 Oct 05:48 2014

Adjust ulimit, as needed, start Riak on Mac OS X

I recently learned a nice way to bump maxfiles on Mac OS X. This works for me on Yosemite.

#!/usr/bin/env bash
MAXFILES=$(launchctl limit maxfiles | sed -E 's/[[:space:]]+/ /g' | xargs)
if [ "$MAXFILES" != "maxfiles 65536 65536" ]; then
  set -x
  sudo launchctl limit maxfiles 65536 65536
fi
set -x
ulimit -n 65536
riak start

I save the file to ~/bin/start-riak and use it instead of `riak start`.

Gist here:

Alexander Popov | 22 Oct 20:18 2014

[SOLR] different number of result on same query

RIAK 2.0.1, 5 nodes on different hosts

query: comments_index?q=owner:6d87f18a3dca4a60b0fc385b1f46c165%20AND%20target:35975db44af44b2494751abddfcfe466&fl=id&wt=json&rows=15

RESULT1:
{
responseHeader: {
status: 0,
QTime: 3,
params: {
10.0.0.150:8093: "_yz_pn:56 OR _yz_pn:41 OR _yz_pn:26 OR _yz_pn:11",
fl: "id",
10.0.0.152:8093: "_yz_pn:63 OR _yz_pn:53 OR _yz_pn:38 OR _yz_pn:23 OR _yz_pn:8",
q: "owner:6d87f18a3dca4a60b0fc385b1f46c165 AND target:35975db44af44b2494751abddfcfe466",
10.0.0.218:8093: "(_yz_pn:60 AND (_yz_fpn:60)) OR _yz_pn:50 OR _yz_pn:35 OR _yz_pn:20 OR _yz_pn:5",
wt: "json",
10.0.0.153:8093: "_yz_pn:59 OR _yz_pn:44 OR _yz_pn:29 OR _yz_pn:14",
10.0.0.151:8093: "_yz_pn:47 OR _yz_pn:32 OR _yz_pn:17 OR _yz_pn:2",
rows: "15"
}
},
response: {
numFound: 12,
start: 0,
maxScore: 6.72534,
docs: [
.....


RESULT2:
{
responseHeader: {
status: 0,
QTime: 3,
params: {
10.0.0.150:8093: "_yz_pn:61 OR _yz_pn:46 OR _yz_pn:31 OR _yz_pn:16 OR _yz_pn:1",
fl: "id",
10.0.0.152:8093: "_yz_pn:58 OR _yz_pn:43 OR _yz_pn:28 OR _yz_pn:13",
q: "owner:6d87f18a3dca4a60b0fc385b1f46c165 AND target:35975db44af44b2494751abddfcfe466",
10.0.0.218:8093: "_yz_pn:55 OR _yz_pn:40 OR _yz_pn:25 OR _yz_pn:10",
wt: "json",
10.0.0.153:8093: "_yz_pn:49 OR _yz_pn:34 OR _yz_pn:19 OR _yz_pn:4",
10.0.0.151:8093: "(_yz_pn:62 AND (_yz_fpn:62)) OR _yz_pn:52 OR _yz_pn:37 OR _yz_pn:22 OR _yz_pn:7",
rows: "15"
}
},
response: {
numFound: 11,
start: 0,
maxScore: 6.72534,
docs: [
....


Is it not guaranteed that all records will be found each time?
Also, it seems it is not a random record that goes missing; it is the same specific one each time.
If I run a query for that specific record with q=id:50473c1239934ef29f24b87f5a6d1ca2,
it randomly returns or does not return the record.

Seema Jethani | 22 Oct 07:56 2014

[ANN] Riak CS 1.5.2

Riak Users,

Riak CS 1.5.2 has been released and can be downloaded from the downloads page [1]. This release contains a fix for a protocol buffers connection pool leak and minor updates that improve logging and repair invalid garbage collection manifests. Please review the Release Notes [2] for more details before upgrading.
Seema Jethani
Director of Product Management, Basho

Simon Rodriguez | 21 Oct 10:47 2014

Disk space in riak

Hi all,

I've read several threads about this question but haven't found a clear answer yet, so here it is:

I'm working on a POC to store images in Riak. In my test I've uploaded ~50K images so far. On NTFS they take ~1GB of disk space, but Riak goes up to 70GB!

If I use the standard disk-space calculation, storage space in Riak should be 1GB x 3 (n_val) = 3GB, so there is a factor of ~25 I can't understand.

My Riak is a vanilla v2.0.0 on Debian 3.2.51 with the default dev configuration (1 cluster, 5 nodes on a single box, n_val=3) and the bitcask storage backend.

Thanks,
Simon
Naveen Tamanam | 20 Oct 20:59 2014

Running MapReduce when deleting record is causing bad_utf8_character_code error

Hi,

Running a map-reduce job while deleting records causes a bad_utf8_character_code
error. I mean, doing both things simultaneously causes the error.

Here is the traceback:
Error running MapReduce operation. Headers: {'content-length': '1312', 'server': 'MochiWeb/1.1 WebMachine/1.10.0 (never breaks eye contact)', 'connection': 'close', 'date': 'Mon, 20 Oct 2014 18:23:08 GMT', 'content-type': 'text/html', 'http_code': 500} Body:
<html><head><title>500 Internal Server Error</title></head><body><h1>Internal Server Error</h1>The server encountered an error while processing this request:<br><pre>{error,{exit,{ucs,{bad_utf8_character_code}},
  [{xmerl_ucs,from_utf8,1,[{file,"xmerl_ucs.erl"},{line,185}]},
   {mochijson2,json_encode_string,2,[{file,"src/mochijson2.erl"},{line,186}]},
   {mochijson2,'-json_encode_proplist/2-fun-0-',3,[{file,"src/mochijson2.erl"},{line,167}]},
   {lists,foldl,3,[{file,"lists.erl"},{line,1197}]},
   {mochijson2,json_encode_proplist,2,[{file,"src/mochijson2.erl"},{line,170}]},
   {riak_kv_wm_mapred,send_error,2,[{file,"src/riak_kv_wm_mapred.erl"},{line,70}]},
   {riak_kv_wm_mapred,pipe_mapred_nonchunked,3,[{file,"src/riak_kv_wm_mapred.erl"},{line,214}]},
   {webmachine_resource,resource_call,3,[{file,"src/webmachine_resource.erl"},{line,186}]}]}}</pre><P><HR><ADDRESS>mochiweb+webmachine web server</ADDRESS></body></html>

I have observed that every time I do both things simultaneously it causes the error, but after a while the map-reduce query works fine again.
Does this mean I can't run map-reduce queries while another client is deleting keys?


--
Thanks & Regards,
Naveen Tamanam
Lucas Grijander | 20 Oct 10:56 2014

Re: Memory-backend TTL

Hi Luke,

Indeed, once the thousands of requests were removed, memory use stabilized. However, memory consumption is still very high:

riak-admin status |grep memory
memory_total : 18494760128
memory_processes : 145363184
memory_processes_used : 142886424
memory_system : 18349396944
memory_atom : 561761
memory_atom_used : 554496
memory_binary : 7108243240
memory_code : 13917820
memory_ets : 11200328880

I have also tested with Riak 1.4.10 and the behaviour is the same.

Is it normal that memory_ets is more than 10GB when we have a ring_size of 16 and max_memory_per_vnode = 250MB?
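A back-of-the-envelope check of that number (my own arithmetic, using the vnode-status figures quoted later in this thread; the only outside fact assumed is that ETS reports table memory in machine words, 8 bytes each on a 64-bit VM):

```python
# ETS table sizes in riak-admin vnode-status are in machine words.
WORD = 8                       # bytes per word on a 64-bit Erlang VM
time_table_words = 75_968_936  # time_table_status 'memory' for one vnode
vnodes = 16                    # ring_size of the single-node test

per_vnode_bytes = time_table_words * WORD
total_bytes = per_vnode_bytes * vnodes
print(f"{per_vnode_bytes / 2**20:.0f} MB per vnode, "
      f"{total_bytes / 2**30:.1f} GB across {vnodes} vnodes")
# → 580 MB per vnode, 9.1 GB across 16 vnodes
```

That is in the same ballpark as the ~10.4GB memory_ets above, which suggests most of the ETS memory sits in the TTL bookkeeping (time) tables rather than in the data tables that max_memory_per_vnode appears to bound.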

2014-10-15 20:50 GMT+02:00 Lucas Grijander <lucasgrinjander69@gmail.com>:
Hi Luke.

About the first issue:

- From the beginning, the servers are all running ntpd. They are Ubuntu 14.04 and the ntpd service is installed and running by default. 
- Anti-entropy was also disabled from the beginning:

{anti_entropy,{off,[]}},


About the second issue, I am perplexed: after two restarts of the Riak server, memory consumption is now high but it is no longer growing like in the previous days. The only change was removing this code (it was executed thousands of times per second). It was a possible workaround for the earlier TTL problem, but it is useless now because the TTL works fine with this node alone:

self.db.delete(key)
self.db.get(key, r=1)


# riak-admin status|grep memory
memory_total : 18617871264
memory_processes : 224480232
memory_processes_used : 222700176
memory_system : 18393391032
memory_atom : 561761
memory_atom_used : 552862
memory_binary : 7135206080
memory_code : 13779729
memory_ets : 11209256232

The problem is that I don't remember whether the code change happened before or after the second restart. I am going to restart the Riak server again and will report back on whether the possible memory leak reappears.

This is the props of the bucket:
{"props":{"allow_mult":false,"backend":"ttl_stg","basic_quorum":false,"big_vclock":50,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"dvv_enabled":false,"dw":"quorum","last_write_wins":true,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"n_val":1,"name":"ttl_stg","notfound_ok":true,"old_vclock":86400,"postcommit":[],"pr":0,"precommit":[],"pw":0,"r":1,"rw":"quorum","small_vclock":50,"w":1,"young_vclock":20}}

The data we put into the bucket all follow this schema:

KEY: Alphanumeric with a length of 47
DATA: Long integer.

# riak-admin status|grep puts
vnode_puts : 84708
vnode_puts_total : 123127430
node_puts : 83169
node_puts_total : 123128062

# riak-admin status|grep gets
vnode_gets : 162314
vnode_gets_total : 240433213
node_gets : 162317
node_gets_total : 240433216

2014-10-14 16:26 GMT+02:00 Luke Bakken <lbakken@basho.com>:
Hi Lucas,

With regard to the mysterious key deletion / resurrection, please do
the following:

* Ensure your servers are all running ntpd and have their time
synchronized as closely as possible.
* Disable anti-entropy. I suspect this is causing the strange behavior
you're seeing with keys.

Your single node cluster memory consumption issue is a bit of a
puzzler. I'm assuming you're using default bucket settings and not
using bucket types based on your previous emails, and that allow_mult
is still false for your ttl_stg bucket. Can you tell me more about the
data you're putting into that bucket for testing? I'll try and
reproduce it with my single node cluster.

--
Luke Bakken
Engineer / CSE
lbakken@basho.com


On Mon, Oct 13, 2014 at 5:02 PM, Lucas Grijander
<lucasgrinjander69@gmail.com> wrote:
> Hi Luke.
>
> I really appreciate your efforts to attempt to reproduce the problem. I
> think that the configs are right. I have been doing also a lot of tests and
> with 1 server/node, the memory bucket works flawlessly, as your test. The
> Riak cluster where we have the problem has a multi_backend with 1 memory
> backend, 2 bitcask backends and 2 leveldb backends. I have only changed the
> parameter connection of the memory backend in our production code to another
> new "cluster" with only 1 node, with the same config of Riak but with only 1
> memory backend under the multi configuration and, as I said, all fine, the
> problem vanished. I deduce that the problem appears only with more than 1
> node and with a lot of requests.
>
> In my tests with the production cluster with the problem ( 4 nodes), finally
> I realized that the TTL is working but, randomly and suddenly, KEYS already
> deleted appear, and KEYS with correct TTL disappear :-? (Maybe something
> related with the some ETS internal table? ) This is the moment when I can
> obtain KEYS already expired.
>
> In summary:
>
> - With cluster with 4 nodes (config below): All OK for a while and suddenly
> we lost the last 20 seconds approx. of keys and OLD keys appear in the list:
> curl -X GET http://localhost:8098/buckets/ttl_stg/keys?keys=true
>
> buckets.default.last_write_wins = true
> bitcask.io_mode = erlang
> multi_backend.ttl_stg.storage_backend = memory
> multi_backend.ttl_stg.memory_backend.ttl = 90s
> multi_backend.ttl_stg.memory_backend.max_memory_per_vnode = 25MB
> anti_entropy = passive
> ring_size = 256
>
> - With 1 node: All OK
>
> buckets.default.n_val = 1
> buckets.default.last_write_wins = true
> buckets.default.r = 1
> buckets.default.w = 1
> multi_backend. ttl_stg.storage_backend = memory
> multi_backend. ttl_stg.memory_backend.ttl = 90s
> multi_backend. ttl_stg.memory_backend.max_memory_per_vnode = 250MB
> ring_size = 16
>
>
>
> Another note: With this 1 node (32GB RAM) and only activated the memory
> backend I have realized than the memory consumption grows without control:
>
>
> # riak-admin  status|grep memory
> memory_total : 17323130960
> memory_processes : 235043016
> memory_processes_used : 233078456
> memory_system : 17088087944
> memory_atom : 561761
> memory_atom_used : 561127
> memory_binary : 6737787976
> memory_code : 14370908
> memory_ets : 10295224544
>
> # # riak-admin diag -d debug
> [debug] Local RPC: os:getpid([]) [5000]
> [debug] Running shell command: ps -o pmem,rss -p 17521
> [debug] Shell command output:
> %MEM   RSS
> 60.5 19863800
>
> Wow, 18.9GB when max_memory_per_vnode = 250MB. That is far from the expected
> value, 250MB * 16 vnodes = 4000MB. Is that correct?
>
> This is the riak-admin vnode-status of 1 vnode, the other 15 are with
> similar data:
>
> VNode: 1370157784997721485815954530671515330927436759040
> Backend: riak_kv_multi_backend
> Status:
> [{<<"ttl_stg">>,
>   [{mod,riak_kv_memory_backend},
>    {data_table_status,[{compressed,false},
>                        {memory,1156673},
>                        {owner,<8343.9466.104>},
>                        {heir,none},
>
> {name,riak_kv_1370157784997721485815954530671515330927436759040},
>                        {size,29656},
>                        {node,'riak <at> xxxxxxxx'},
>                        {named_table,false},
>                        {type,ordered_set},
>                        {keypos,1},
>                        {protection,protected}]},
>    {index_table_status,[{compressed,false},
>                         {memory,89},
>                         {owner,<8343.9466.104>},
>                         {heir,none},
>
> {name,riak_kv_1370157784997721485815954530671515330927436759040_i},
>                         {size,0},
>                         {node,'riak <at> xxxxxxxxx'},
>                         {named_table,false},
>                         {type,ordered_set},
>                         {keypos,1},
>                         {protection,protected}]},
>    {time_table_status,[{compressed,false},
>                        {memory,75968936},
>                        {owner,<8343.9466.104>},
>                        {heir,none},
>
> {name,riak_kv_1370157784997721485815954530671515330927436759040_t},
>                        {size,2813661},
>                        {node,'riak <at> xxxxxxxxx'},
>                        {named_table,false},
>                        {type,ordered_set},
>                        {keypos,1},
>                        {protection,protected}]}]}]
>
> Thanks!
>
> 2014-10-13 22:30 GMT+02:00 Luke Bakken <lbakken <at> basho.com>:
>>
>> Hi Lucas,
>>
>> I've tried reproducing this using a local Riak 2.0.1 node, however TTL
>> is working as expected.
>>
>> Here is the configuration I have in /etc/riak/riak.conf:
>>
>> storage_backend = multi
>> multi_backend.default = bc_default
>>
>> multi_backend.ttl_stg.storage_backend = memory
>> multi_backend.ttl_stg.memory_backend.ttl = 90s
>> multi_backend.ttl_stg.memory_backend.max_memory_per_vnode = 4MB
>>
>> multi_backend.bc_default.storage_backend = bitcask
>> multi_backend.bc_default.bitcask.data_root = /var/lib/riak/bc_default
>> multi_backend.bc_default.bitcask.io_mode = erlang
>>
>> This translates to the following in
>> /var/lib/riak/generated.configs/app.2014.10.13.13.13.29.config:
>>
>> {multi_backend_default,<<"bc_default">>},
>> {multi_backend,
>>     [{<<"ttl_stg">>,riak_kv_memory_backend,[{ttl,90},{max_memory,4}]},
>>     {<<"bc_default">>,riak_kv_bitcask_backend,
>>     [{io_mode,erlang},
>>         {expiry_grace_time,0},
>>         {small_file_threshold,10485760},
>>         {dead_bytes_threshold,134217728},
>>         {frag_threshold,40},
>>         {dead_bytes_merge_trigger,536870912},
>>         {frag_merge_trigger,60},
>>         {max_file_size,2147483648},
>>         {open_timeout,4},
>>         {data_root,"/var/lib/riak/bc_default"},
>>         {sync_strategy,none},
>>         {merge_window,always},
>>         {max_fold_age,-1},
>>         {max_fold_puts,0},
>>         {expiry_secs,-1},
>>         {require_hint_crc,true}]}]}]},
>>
>> I set the bucket properties to use the ttl_stg backend:
>>
>> root@UBUNTU-12-1:~# cat ttl_stg-props.json
>> {"props":{"name":"ttl_stg","backend":"ttl_stg"}}
>>
>> root@UBUNTU-12-1:~# curl -XPUT -H'Content-type: application/json' localhost:8098/buckets/ttl_stg/props --data-ascii @ttl_stg-props.json
>>
>> root@UBUNTU-12-1:~# curl -XGET localhost:8098/buckets/ttl_stg/props
>>
>> {"props":{"allow_mult":false,"backend":"ttl_stg","basic_quorum":false,"big_vclock":50,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"dvv_enabled":false,"dw":"quorum","last_write_wins":false,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"n_val":3,"name":"ttl_stg","notfound_ok":true,"old_vclock":86400,"postcommit":[],"pr":0,"precommit":[],"pw":0,"r":"quorum","rw":"quorum","small_vclock":50,"w":"quorum","young_vclock":20}}
>>
>>
>> And used the following statement to PUT test data:
>>
>> curl -XPUT localhost:8098/buckets/ttl_stg/keys/1 -d "TEST $(date)"
>>
>> After 90 seconds, this is the response I get from Riak:
>>
>> root@UBUNTU-12-1:~# curl -XGET localhost:8098/buckets/ttl_stg/keys/1
>> not found
>>
>> I would carefully check all of the app.config / riak.conf files in
>> your cluster, the output of "riak config effective" and the bucket
>> properties for those buckets you expect to be using the memory backend
>> with TTL. I also recommend using the localhost:8098/buckets/ endpoint
>> instead of the deprecated riak/ endpoint.
>>
>> Please let me know if you have additional questions.
>> --
>> Luke Bakken
>> Engineer / CSE
>> lbakken@basho.com
>>
>>
>> On Fri, Oct 3, 2014 at 11:32 AM, Lucas Grijander
>> <lucasgrinjander69@gmail.com> wrote:
>> > Hello,
>> >
>> > I have a memory backend in production with Riak 2.0.1, 4 servers and 256
>> > vnodes. The servers have the same date and time.
>> >
>> > I have seen an odd performance with the ttl.
>> >
>> > This is the config:
>> >
>> >            {<<"ttl_stg">>,riak_kv_memory_backend,
>> >             [{ttl,90},{max_memory,25}]},
>> >
>> > For example, see this GET response in one of the riak servers:
>> >
>> > < HTTP/1.1 200 OK
>> > < X-Riak-Vclock: a85hYGBgzGDKBVIc4otdfgR/7bfIYEpkzGNlKI1efJYvCwA=
>> > < Vary: Accept-Encoding
>> > * Server MochiWeb/1.1 WebMachine/1.10.5 (jokes are better explained) is
>> > not
>> > blacklisted
>> > < Server: MochiWeb/1.1 WebMachine/1.10.5 (jokes are better explained)
>> > < Link: </riak/ttl_stg>; rel="up"
>> > < Last-Modified: Fri, 03 Oct 2014 17:40:05 GMT
>> > < ETag: "3c8bGoifWcOCSVn0otD5nI"
>> > < Date: Fri, 03 Oct 2014 17:47:50 GMT
>> > < Content-Type: application/json
>> > < Content-Length: 17
>> >
>> > If the TTL is 90 seconds, why doesn't the GET return "not found" when the
>> > difference between "Last-Modified" and "Date" (of the curl request) is
>> > greater than the TTL?
>> >
>> > Thanks in advance!
>> >
>> >
>
>
>


