David James | 24 Oct 05:48 2014

Adjust ulimit as needed, then start Riak on Mac OS X

I recently learned a nice way to bump maxfiles on Mac OS X. This works for me on Yosemite.

#!/usr/bin/env bash
# Normalize whitespace in the `launchctl limit maxfiles` output before
# comparing (-E enables the + quantifier; plain BRE sed treats + literally).
MAXFILES=$(launchctl limit maxfiles | sed -E 's/[[:space:]]+/ /g' | xargs)
if [ "$MAXFILES" != "maxfiles 65536 65536" ]; then
  # Raise the system-wide limit (requires sudo) only when it isn't set yet.
  set -x
  sudo launchctl limit maxfiles 65536 65536
fi
set -x
ulimit -n 65536
riak start

I save the file to ~/bin/start-riak and use it instead of `riak start`.
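I make it executable and run it directly (this assumes ~/bin is on your PATH):

chmod +x ~/bin/start-riak
start-riak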

Gist here:

Alexander Popov | 22 Oct 20:18 2014

[SOLR] different number of results for the same query

Riak 2.0.1, 5 nodes on different hosts

query: comments_index?q=owner:6d87f18a3dca4a60b0fc385b1f46c165%20AND%20target:35975db44af44b2494751abddfcfe466&fl=id&wt=json&rows=15

RESULT1:
{
  responseHeader: {
    status: 0,
    QTime: 3,
    params: {
      10.0.0.150:8093: "_yz_pn:56 OR _yz_pn:41 OR _yz_pn:26 OR _yz_pn:11",
      fl: "id",
      10.0.0.152:8093: "_yz_pn:63 OR _yz_pn:53 OR _yz_pn:38 OR _yz_pn:23 OR _yz_pn:8",
      q: "owner:6d87f18a3dca4a60b0fc385b1f46c165 AND target:35975db44af44b2494751abddfcfe466",
      10.0.0.218:8093: "(_yz_pn:60 AND (_yz_fpn:60)) OR _yz_pn:50 OR _yz_pn:35 OR _yz_pn:20 OR _yz_pn:5",
      wt: "json",
      10.0.0.153:8093: "_yz_pn:59 OR _yz_pn:44 OR _yz_pn:29 OR _yz_pn:14",
      10.0.0.151:8093: "_yz_pn:47 OR _yz_pn:32 OR _yz_pn:17 OR _yz_pn:2",
      rows: "15"
    }
  },
  response: {
    numFound: 12,
    start: 0,
    maxScore: 6.72534,
    docs: [
      .....


RESULT2:
{
  responseHeader: {
    status: 0,
    QTime: 3,
    params: {
      10.0.0.150:8093: "_yz_pn:61 OR _yz_pn:46 OR _yz_pn:31 OR _yz_pn:16 OR _yz_pn:1",
      fl: "id",
      10.0.0.152:8093: "_yz_pn:58 OR _yz_pn:43 OR _yz_pn:28 OR _yz_pn:13",
      q: "owner:6d87f18a3dca4a60b0fc385b1f46c165 AND target:35975db44af44b2494751abddfcfe466",
      10.0.0.218:8093: "_yz_pn:55 OR _yz_pn:40 OR _yz_pn:25 OR _yz_pn:10",
      wt: "json",
      10.0.0.153:8093: "_yz_pn:49 OR _yz_pn:34 OR _yz_pn:19 OR _yz_pn:4",
      10.0.0.151:8093: "(_yz_pn:62 AND (_yz_fpn:62)) OR _yz_pn:52 OR _yz_pn:37 OR _yz_pn:22 OR _yz_pn:7",
      rows: "15"
    }
  },
  response: {
    numFound: 11,
    start: 0,
    maxScore: 6.72534,
    docs: [
      ....


Is it not guaranteed that all records will be found each time?
Also, it seems it is not a random record that goes missing; each time it is a specific one.
If I run a query for this specific record with q=id:50473c1239934ef29f24b87f5a6d1ca2,
it randomly either returns or does not return this record.
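One thing I can try to narrow it down (this assumes Yokozuna's internal Solr endpoint, /internal_solr, on the search port) is to ask each node's Solr directly whether it has the document:

curl 'http://10.0.0.150:8093/internal_solr/comments_index/select?q=id:50473c1239934ef29f24b87f5a6d1ca2&wt=json'

If one replica consistently reports numFound: 0 for that id, the index copy on that partition is missing the entry.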

Seema Jethani | 22 Oct 07:56 2014

[ANN] Riak CS 1.5.2

Riak Users,

Riak CS 1.5.2 has been released and can be downloaded from the downloads page [1]. This release contains a fix for a protocol buffers connection pool leak, along with minor updates that improve logging and repair invalid garbage collection manifests. Please review the release notes [2] for more details before upgrading.
Seema Jethani
Director of Product Management, Basho

Simon Rodriguez | 21 Oct 10:47 2014

Disk space in riak

Hi all,

I've read several threads about this question, but I haven't found a clear answer yet, so here it is:

I'm working on a POC to store images in Riak. In my test I've uploaded ~50K images so far. On NTFS they take ~1GB of disk space, but Riak grows to 70GB!

If I use the standard disk space calculation, storage space in Riak should be 1GB x 3 (n_val) = 3GB, so there is a factor of ~25 I can't account for.

My Riak is a vanilla v2.0.0 on Debian 3.2.51 with the default dev conf (1 cluster, 5 nodes on a single box, n_val=3) and the bitcask storage backend.
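For reference, I'm measuring Riak's side with du over the backend data directories (the paths assume the devrel layout; a package install would keep them under /var/lib/riak):

du -sh dev/dev*/data/bitcask        # live data plus dead bytes awaiting merge
du -sh dev/dev*/data/anti_entropy   # AAE hashtrees can also take real space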

Thanks,
Simon
Naveen Tamanam | 20 Oct 20:59 2014

Running MapReduce while deleting records causes a bad_utf8_character_code error

Hi,

Running a MapReduce job while deleting records causes a bad_utf8_character_code
error. I mean that doing both things simultaneously causes the error.

Here is the traceback:

Error running MapReduce operation.
Headers: {'content-length': '1312', 'server': 'MochiWeb/1.1 WebMachine/1.10.0 (never breaks eye contact)', 'connection': 'close', 'date': 'Mon, 20 Oct 2014 18:23:08 GMT', 'content-type': 'text/html', 'http_code': 500}
Body: 500 Internal Server Error. The server encountered an error while processing this request:

{error,{exit,{ucs,{bad_utf8_character_code}},
  [{xmerl_ucs,from_utf8,1,[{file,"xmerl_ucs.erl"},{line,185}]},
   {mochijson2,json_encode_string,2,[{file,"src/mochijson2.erl"},{line,186}]},
   {mochijson2,'-json_encode_proplist/2-fun-0-',3,[{file,"src/mochijson2.erl"},{line,167}]},
   {lists,foldl,3,[{file,"lists.erl"},{line,1197}]},
   {mochijson2,json_encode_proplist,2,[{file,"src/mochijson2.erl"},{line,170}]},
   {riak_kv_wm_mapred,send_error,2,[{file,"src/riak_kv_wm_mapred.erl"},{line,70}]},
   {riak_kv_wm_mapred,pipe_mapred_nonchunked,3,[{file,"src/riak_kv_wm_mapred.erl"},{line,214}]},
   {webmachine_resource,resource_call,3,[{file,"src/webmachine_resource.erl"},{line,186}]}]}}

I have observed that every time I do both things simultaneously, the error occurs, but after a while the MapReduce query works fine again.
Does this mean I can't run MapReduce queries while another client is deleting keys?
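Would guarding the map phase against tombstones help? A rough sketch of what I mean (bucket name hypothetical; this assumes the JavaScript map API and the X-Riak-Deleted metadata flag on deleted values):

curl -XPOST http://localhost:8098/mapred \
  -H 'Content-Type: application/json' \
  -d '{"inputs": "mybucket",
       "query": [{"map": {"language": "javascript", "source":
         "function(v) { if (v.values[0].metadata[\"X-Riak-Deleted\"]) { return []; } return [[v.bucket, v.key]]; }"}}]}'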


--
Thanks & Regards,
Naveen Tamanam
Lucas Grijander | 20 Oct 10:56 2014

Re: Memory-backend TTL

Hi Luke,

Indeed, once the thousands of requests were removed, the memory stabilized. However, the memory consumption is still very high:

riak-admin status |grep memory
memory_total : 18494760128
memory_processes : 145363184
memory_processes_used : 142886424
memory_system : 18349396944
memory_atom : 561761
memory_atom_used : 554496
memory_binary : 7108243240
memory_code : 13917820
memory_ets : 11200328880

I have also tested with Riak 1.4.10 and the behavior is the same.

Is it normal that the "memory_ets" has more than 10GB when we have a "ring_size" of 16 and a max_memory_per_vnode = 250MB?
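One conversion that may explain part of it, assuming the {memory,N} figures reported by riak-admin vnode-status are ETS machine words (8 bytes each on a 64-bit VM): the TTL time table quoted further down reports {memory,75968936}, i.e.

echo $(( 75968936 * 8 / 1024 / 1024 ))   # => 579 (MiB) for a single vnode's time table

so sixteen vnodes in that state would already account for most of a 10GB memory_ets.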

2014-10-15 20:50 GMT+02:00 Lucas Grijander <lucasgrinjander69 <at> gmail.com>:
Hi Luke.

About the first issue:

- From the beginning, the servers are all running ntpd. They are Ubuntu 14.04 and the ntpd service is installed and running by default. 
- Anti-entropy was also disabled from the beginning:

{anti_entropy,{off,[]}},


About the second issue, I am perplexed because, after 2 restarts of the Riak server, there is now a big memory consumption but it is not growing like in the previous days. The only change was to remove this code (it was called thousands of times/s). It was a possible workaround for the earlier problem with the TTL, but this code is now useless because the TTL works fine with this node alone:

self.db.delete(key)
self.db.get(key, r=1)


# riak-admin status|grep memory
memory_total : 18617871264
memory_processes : 224480232
memory_processes_used : 222700176
memory_system : 18393391032
memory_atom : 561761
memory_atom_used : 552862
memory_binary : 7135206080
memory_code : 13779729
memory_ets : 11209256232

The problem is that I don't remember whether the code change was before or after the second restart. I am going to restart the Riak server again and will report back on whether the "possible memory leak" reappears.

This is the props of the bucket:
{"props":{"allow_mult":false,"backend":"ttl_stg","basic_quorum":false,"big_vclock":50,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"dvv_enabled":false,"dw":"quorum","last_write_wins":true,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"n_val":1,"name":"ttl_stg","notfound_ok":true,"old_vclock":86400,"postcommit":[],"pr":0,"precommit":[],"pw":0,"r":1,"rw":"quorum","small_vclock":50,"w":1,"young_vclock":20}}

The data we put into the bucket all follows this schema:

KEY: Alphanumeric with a length of 47
DATA: Long integer.

# riak-admin status|grep puts
vnode_puts : 84708
vnode_puts_total : 123127430
node_puts : 83169
node_puts_total : 123128062

# riak-admin status|grep gets
vnode_gets : 162314
vnode_gets_total : 240433213
node_gets : 162317
node_gets_total : 240433216

2014-10-14 16:26 GMT+02:00 Luke Bakken <lbakken <at> basho.com>:
Hi Lucas,

With regard to the mysterious key deletion / resurrection, please do
the following:

* Ensure your servers are all running ntpd and have their time
synchronized as closely as possible.
* Disable anti-entropy. I suspect this is causing the strange behavior
you're seeing with keys.
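Both forms of that setting appear elsewhere in this thread; for reference, the riak.conf spelling is

anti_entropy = passive

and the app.config spelling is

{anti_entropy,{off,[]}}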

Your single node cluster memory consumption issue is a bit of a
puzzler. I'm assuming you're using default bucket settings and not
using bucket types based on your previous emails, and that allow_mult
is still false for your ttl_stg bucket. Can you tell me more about the
data you're putting into that bucket for testing? I'll try and
reproduce it with my single node cluster.

--
Luke Bakken
Engineer / CSE
lbakken <at> basho.com


On Mon, Oct 13, 2014 at 5:02 PM, Lucas Grijander
<lucasgrinjander69 <at> gmail.com> wrote:
> Hi Luke.
>
> I really appreciate your efforts to reproduce the problem. I think the
> configs are right. I have also been doing a lot of tests, and with 1
> server/node the memory bucket works flawlessly, as in your test. The
> Riak cluster where we have the problem has a multi_backend with 1 memory
> backend, 2 bitcask backends and 2 leveldb backends. I only changed the
> connection parameter of the memory backend in our production code to point
> to a new "cluster" with only 1 node, with the same Riak config but with
> only 1 memory backend under the multi configuration and, as I said, all
> was fine; the problem vanished. I deduce that the problem appears only
> with more than 1 node and a lot of requests.
>
> In my tests with the production cluster that has the problem (4 nodes), I
> finally realized that the TTL is working but, randomly and suddenly, KEYS
> already deleted reappear and KEYS within their TTL disappear :-? (Maybe
> something related to some internal ETS table?) This is the moment when I
> can obtain KEYS that have already expired.
>
> In summary:
>
> - With a cluster of 4 nodes (config below): all OK for a while, then suddenly
> we lose the last ~20 seconds of keys and OLD keys appear in the list:
> curl -X GET http://localhost:8098/buckets/ttl_stg/keys?keys=true
>
> buckets.default.last_write_wins = true
> bitcask.io_mode = erlang
> multi_backend.ttl_stg.storage_backend = memory
> multi_backend.ttl_stg.memory_backend.ttl = 90s
> multi_backend.ttl_stg.memory_backend.max_memory_per_vnode = 25MB
> anti_entropy = passive
> ring_size = 256
>
> - With 1 node: All OK
>
> buckets.default.n_val = 1
> buckets.default.last_write_wins = true
> buckets.default.r = 1
> buckets.default.w = 1
> multi_backend.ttl_stg.storage_backend = memory
> multi_backend.ttl_stg.memory_backend.ttl = 90s
> multi_backend.ttl_stg.memory_backend.max_memory_per_vnode = 250MB
> ring_size = 16
>
>
>
> Another note: with this 1 node (32GB RAM) and only the memory backend
> activated, I have noticed that the memory consumption grows without control:
>
>
> # riak-admin  status|grep memory
> memory_total : 17323130960
> memory_processes : 235043016
> memory_processes_used : 233078456
> memory_system : 17088087944
> memory_atom : 561761
> memory_atom_used : 561127
> memory_binary : 6737787976
> memory_code : 14370908
> memory_ets : 10295224544
>
> # riak-admin diag -d debug
> [debug] Local RPC: os:getpid([]) [5000]
> [debug] Running shell command: ps -o pmem,rss -p 17521
> [debug] Shell command output:
> %MEM   RSS
> 60.5 19863800
>
> Wow, 18.9GB when max_memory_per_vnode = 250MB. That is far from the expected
> value, 250MB * 16 vnodes = 4000MB. Is that correct?
>
> This is the riak-admin vnode-status of 1 vnode, the other 15 are with
> similar data:
>
> VNode: 1370157784997721485815954530671515330927436759040
> Backend: riak_kv_multi_backend
> Status:
> [{<<"ttl_stg">>,
>   [{mod,riak_kv_memory_backend},
>    {data_table_status,[{compressed,false},
>                        {memory,1156673},
>                        {owner,<8343.9466.104>},
>                        {heir,none},
>
> {name,riak_kv_1370157784997721485815954530671515330927436759040},
>                        {size,29656},
>                        {node,'riak <at> xxxxxxxx'},
>                        {named_table,false},
>                        {type,ordered_set},
>                        {keypos,1},
>                        {protection,protected}]},
>    {index_table_status,[{compressed,false},
>                         {memory,89},
>                         {owner,<8343.9466.104>},
>                         {heir,none},
>
> {name,riak_kv_1370157784997721485815954530671515330927436759040_i},
>                         {size,0},
>                         {node,'riak <at> xxxxxxxxx'},
>                         {named_table,false},
>                         {type,ordered_set},
>                         {keypos,1},
>                         {protection,protected}]},
>    {time_table_status,[{compressed,false},
>                        {memory,75968936},
>                        {owner,<8343.9466.104>},
>                        {heir,none},
>
> {name,riak_kv_1370157784997721485815954530671515330927436759040_t},
>                        {size,2813661},
>                        {node,'riak <at> xxxxxxxxx'},
>                        {named_table,false},
>                        {type,ordered_set},
>                        {keypos,1},
>                        {protection,protected}]}]}]
>
> Thanks!
>
> 2014-10-13 22:30 GMT+02:00 Luke Bakken <lbakken <at> basho.com>:
>>
>> Hi Lucas,
>>
>> I've tried reproducing this using a local Riak 2.0.1 node, however TTL
>> is working as expected.
>>
>> Here is the configuration I have in /etc/riak/riak.conf:
>>
>> storage_backend = multi
>> multi_backend.default = bc_default
>>
>> multi_backend.ttl_stg.storage_backend = memory
>> multi_backend.ttl_stg.memory_backend.ttl = 90s
>> multi_backend.ttl_stg.memory_backend.max_memory_per_vnode = 4MB
>>
>> multi_backend.bc_default.storage_backend = bitcask
>> multi_backend.bc_default.bitcask.data_root = /var/lib/riak/bc_default
>> multi_backend.bc_default.bitcask.io_mode = erlang
>>
>> This translates to the following in
>> /var/lib/riak/generated.configs/app.2014.10.13.13.13.29.config:
>>
>> {multi_backend_default,<<"bc_default">>},
>> {multi_backend,
>>     [{<<"ttl_stg">>,riak_kv_memory_backend,[{ttl,90},{max_memory,4}]},
>>     {<<"bc_default">>,riak_kv_bitcask_backend,
>>     [{io_mode,erlang},
>>         {expiry_grace_time,0},
>>         {small_file_threshold,10485760},
>>         {dead_bytes_threshold,134217728},
>>         {frag_threshold,40},
>>         {dead_bytes_merge_trigger,536870912},
>>         {frag_merge_trigger,60},
>>         {max_file_size,2147483648},
>>         {open_timeout,4},
>>         {data_root,"/var/lib/riak/bc_default"},
>>         {sync_strategy,none},
>>         {merge_window,always},
>>         {max_fold_age,-1},
>>         {max_fold_puts,0},
>>         {expiry_secs,-1},
>>         {require_hint_crc,true}]}]}]},
>>
>> I set the bucket properties to use the ttl_stg backend:
>>
>> root <at> UBUNTU-12-1:~# cat ttl_stg-props.json
>> {"props":{"name":"ttl_stg","backend":"ttl_stg"}}
>>
>> root <at> UBUNTU-12-1:~# curl -XPUT -H'Content-type: application/json'
>> localhost:8098/buckets/ttl_stg/props --data-ascii <at> ttl_stg-props.json
>>
>> root <at> UBUNTU-12-1:~# curl -XGET localhost:8098/buckets/ttl_stg/props
>>
>> {"props":{"allow_mult":false,"backend":"ttl_stg","basic_quorum":false,"big_vclock":50,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"dvv_enabled":false,"dw":"quorum","last_write_wins":false,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"n_val":3,"name":"ttl_stg","notfound_ok":true,"old_vclock":86400,"postcommit":[],"pr":0,"precommit":[],"pw":0,"r":"quorum","rw":"quorum","small_vclock":50,"w":"quorum","young_vclock":20}}
>>
>>
>> And used the following statement to PUT test data:
>>
>> curl -XPUT localhost:8098/buckets/ttl_stg/keys/1 -d "TEST $(date)"
>>
>> After 90 seconds, this is the response I get from Riak:
>>
>> root <at> UBUNTU-12-1:~# curl -XGET localhost:8098/buckets/ttl_stg/keys/1
>> not found
>>
>> I would carefully check all of the app.config / riak.conf files in
>> your cluster, the output of "riak config effective" and the bucket
>> properties for those buckets you expect to be using the memory backend
>> with TTL. I also recommend using the localhost:8098/buckets/ endpoint
>> instead of the deprecated riak/ endpoint.
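>>
>> For example (the grep pattern is only a suggestion):
>>
>> riak config effective | grep -E 'multi_backend|ttl'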
>>
>> Please let me know if you have additional questions.
>> --
>> Luke Bakken
>> Engineer / CSE
>> lbakken <at> basho.com
>>
>>
>> On Fri, Oct 3, 2014 at 11:32 AM, Lucas Grijander
>> <lucasgrinjander69 <at> gmail.com> wrote:
>> > Hello,
>> >
>> > I have a memory backend in production with Riak 2.0.1, 4 servers and 256
>> > vnodes. The servers have the same date and time.
>> >
>> > I have seen an odd performance with the ttl.
>> >
>> > This is the config:
>> >
>> >            {<<"ttl_stg">>,riak_kv_memory_backend,
>> >             [{ttl,90},{max_memory,25}]},
>> >
>> > For example, see this GET response in one of the riak servers:
>> >
>> > < HTTP/1.1 200 OK
>> > < X-Riak-Vclock: a85hYGBgzGDKBVIc4otdfgR/7bfIYEpkzGNlKI1efJYvCwA=
>> > < Vary: Accept-Encoding
>> > * Server MochiWeb/1.1 WebMachine/1.10.5 (jokes are better explained) is
>> > not
>> > blacklisted
>> > < Server: MochiWeb/1.1 WebMachine/1.10.5 (jokes are better explained)
>> > < Link: </riak/ttl_stg>; rel="up"
>> > < Last-Modified: Fri, 03 Oct 2014 17:40:05 GMT
>> > < ETag: "3c8bGoifWcOCSVn0otD5nI"
>> > < Date: Fri, 03 Oct 2014 17:47:50 GMT
>> > < Content-Type: application/json
>> > < Content-Length: 17
>> >
>> > If the TTL is 90 seconds, why doesn't the GET return "not found" when the
>> > difference between "Last-Modified" and "Date" (of the curl request) is
>> > greater than the TTL?
>> >
>> > Thanks in advance!
>> >
>> >


Seth Thomas | 18 Oct 17:43 2014

[ANN] Riak Chef Cookbook for Riak 2.0!

It's been a long time coming, but we've finally released the Chef cookbook [1] with support for Riak 2.0 (2.0.1). The cookbook can be grabbed via its git tag [2] or from Chef's Supermarket site [3].

If you're still on Riak 1.x, we also have an update [4] and have cut a branch for any continuing 1.4.x based releases [5].

Feedback, bug reports, and pull requests are always welcome.

Cheers,

Seth Thomas

Seth Thomas | 18 Oct 17:42 2014

[ANN] Riak CS Chef Cookbook Updated (1.5.1 support)

An update for the Riak CS Chef cookbook has been released, bringing support for Riak CS 1.5.1 along with dependency updates and some bug fixes.

You can grab it from GitHub [1] or the Chef Supermarket site [2].


Mike Carlson | 17 Oct 00:39 2014

FreeBSD 10 packages?

I've found the Riak 2.0.1 (pkgng-based package) for FreeBSD 10, but I was wondering if the other Riak tools will also be available?

Riak-CS, Stanchion, and Riak-CS-Control all have 9.x packages, and it would be excellent if 10.x packages were published.

Also, the Riak 2.0.1 package seems to require the FreeBSD 9 compat package; however, I wonder if it actually needs gcc (4.8 seemed to work fine).

If it uses libstdc++ from the compat package, Riak 2.0 will not run at all; it must use a gcc-provided libstdc++.
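One way to check (the beam path below is a guess for wherever the package installs the Erlang runtime):

ldd /usr/local/lib/riak/erts-*/bin/beam.smp | grep libstdc++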

I can open up issues on GitHub if that is helpful; I just don't know which project to open the issue with.

Thank you very much!

Mike C

ayush mishra | 16 Oct 08:26 2014

Re: Not able to store data in original riak bucket

Hi Luke,

Sorry I misunderstood what you were saying.

You can reproduce this by following the steps below:

* Install Riak 1.4 (with Riak Search enabled).
* Put some data into this environment.
* Then upgrade to 2.0.1; you will see the search hook being added to buckets.
 
Today I deployed code on Codeship and got the same issue. I can't set up my temporary fix on Codeship.

Could you please suggest a permanent fix?
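Would clearing the legacy hook from the bucket props be the right permanent fix? Something like this (bucket name hypothetical; it assumes resetting the legacy search property and the precommit list is enough):

curl -XPUT -H 'Content-Type: application/json' \
  localhost:8098/buckets/mybucket/props \
  -d '{"props":{"search":false,"precommit":[]}}'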


Regards,
Ayush



On Wed, Oct 15, 2014 at 1:29 AM, Luke Bakken <lbakken <at> basho.com> wrote:
I haven't reproduced it yet, but I'll let you know when I do :-)
Thanks for providing all this information.
--
Luke Bakken
Engineer / CSE
lbakken <at> basho.com


On Tue, Oct 14, 2014 at 10:10 AM, ayush mishra
<ayushmishra2005 <at> gmail.com> wrote:
> Ohhh, this is the reason. Thanks, Luke, for finding the root cause. I will
> take care of this in the future. Since I am new to Riak, I was not aware of
> this.
>
> Regards,
> Ayush
>
> On Tue, Oct 14, 2014 at 10:23 PM, Luke Bakken <lbakken <at> basho.com> wrote:
>>
>> Hi Ayush,
>>
>> Just so I'm clear and can reproduce this:
>>
>> * You were using Riak 1.4 and had Riak Search enabled.
>> * You put some data into this environment.
>> * You then upgraded to 2.0.1 and saw the search hook being added to
>> buckets.
>>
>> Thanks!
>>
>> --
>> Luke Bakken
>> Engineer / CSE
>> lbakken <at> basho.com
>>
>>
>> On Tue, Oct 14, 2014 at 9:45 AM, ayush mishra <ayushmishra2005 <at> gmail.com>
>> wrote:
>> > Hi Luke,
>> >
>> > You are right. This is just a workaround not a fix. I had some urgency
>> > in my
>> > project. That's why I followed this approach. I updated it from previous
>> > version to 2.0.1.
>> >
>> > Regards,
>> > Ayush
>> >
>> >
>> > On Tue, Oct 14, 2014 at 8:05 PM, Luke Bakken <lbakken <at> basho.com> wrote:
>> >>
>> >> Hi Ayush,
>> >>
>> >> By making that change, you are enabling Riak Search 1.0, which I
>> >> assume you don't want to do. This is a "workaround" but not a fix.
>> >>
>> >> * Did you upgrade from a previous version of Riak to 2.0.1?
>> >>
>> >> * What client library and version of that library are you using?
>> >>
>> >> * Can you run this command and send me the output? tar czf
>> >> /tmp/riak-config-$(hostname).tgz /var/lib/riak/generated.configs
>> >>
>> >> Thanks!
>> >>
>> >> --
>> >> Luke Bakken
>> >> Engineer / CSE
>> >> lbakken <at> basho.com
>> >>
>> >>
>> >> On Tue, Oct 14, 2014 at 12:52 AM, ayush mishra
>> >> <ayushmishra2005 <at> gmail.com> wrote:
>> >> >
>> >> >
>> >> > http://www.dzone.com/links/r/solution_for_riak_500_internal_server_error.html
>
>

Luke Bakken | 14 Oct 18:53 2014

Re: Not able to store data in original riak bucket

Hi Ayush,

Just so I'm clear and can reproduce this:

* You were using Riak 1.4 and had Riak Search enabled.
* You put some data into this environment.
* You then upgraded to 2.0.1 and saw the search hook being added to buckets.

Thanks!

--
Luke Bakken
Engineer / CSE
lbakken <at> basho.com

On Tue, Oct 14, 2014 at 9:45 AM, ayush mishra <ayushmishra2005 <at> gmail.com> wrote:
> Hi Luke,
>
> You are right. This is just a workaround not a fix. I had some urgency in my
> project. That's why I followed this approach. I updated it from previous
> version to 2.0.1.
>
> Regards,
> Ayush
>
>
> On Tue, Oct 14, 2014 at 8:05 PM, Luke Bakken <lbakken <at> basho.com> wrote:
>>
>> Hi Ayush,
>>
>> By making that change, you are enabling Riak Search 1.0, which I
>> assume you don't want to do. This is a "workaround" but not a fix.
>>
>> * Did you upgrade from a previous version of Riak to 2.0.1?
>>
>> * What client library and version of that library are you using?
>>
>> * Can you run this command and send me the output? tar czf
>> /tmp/riak-config-$(hostname).tgz /var/lib/riak/generated.configs
>>
>> Thanks!
>>
>> --
>> Luke Bakken
>> Engineer / CSE
>> lbakken <at> basho.com
>>
>>
>> On Tue, Oct 14, 2014 at 12:52 AM, ayush mishra
>> <ayushmishra2005 <at> gmail.com> wrote:
>> >
>> > http://www.dzone.com/links/r/solution_for_riak_500_internal_server_error.html
