Alfonso Hooker | 23 Jul 21:21 2014

Riak backup and restore for ubuntu

I am looking for some information on backing up and restoring a Riak instance. I am currently running riak_2.0.0beta1-1, and whenever we run the riak-admin backup command I receive the error below. If this is not the proper way to back up an instance, then I need to know the process for restoring the data once I have a full backup of the leveldb, ring, and configuration files.

Command syntax:

riak-admin backup riak@10.XXX.XXX.XXX cookiename /tmp/riak-bkup.bkup all

Error: 

Riak backup error

...from ['riak@10.XXX.XXX.XXX']

{"init terminating in do_boot",{{badmatch,{error,{not_a_log_file,"/tmp/riak-bkup.bkup"}}},[{riak_kv_backup,backup,3,[{file,"src/riak_kv_backup.erl"},{line,62}]},{erl_eval,do_apply,6,[{file,"erl_eval.erl"},{line,569}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}

Crash dump was written to: erl_crash.dump

init terminating in do_boot ()
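To make the question concrete, this is the kind of file-system-level backup/restore I have in mind, sketched here with throwaway directories standing in for the real paths (I am assuming /var/lib/riak for data and /etc/riak for config on the Ubuntu package, which should be verified; on a real node the service would be stopped first with `riak stop` so the LevelDB files are quiescent):

```shell
# Sketch of a file-system-level backup/restore. Throwaway directories
# stand in for the real Riak paths (assumed: /var/lib/riak for data,
# /etc/riak for config). On a real node, run `riak stop` first.

WORK=$(mktemp -d)
DATA_DIR="$WORK/var/lib/riak"     # stand-in for the leveldb + ring dirs
CONF_DIR="$WORK/etc/riak"         # stand-in for the configuration dir
BACKUP="$WORK/riak-backup.tar.gz"

mkdir -p "$DATA_DIR/leveldb" "$DATA_DIR/ring" "$CONF_DIR"
echo 'sst'  > "$DATA_DIR/leveldb/000001.sst"
echo 'ring' > "$DATA_DIR/ring/riak_core_ring.default"
echo 'cfg'  > "$CONF_DIR/app.config"

# Backup: archive data (leveldb + ring) and configuration together.
tar czf "$BACKUP" -C "$WORK" var/lib/riak etc/riak

# Restore: unpack into place on the (stopped) target node, then start it.
RESTORE=$(mktemp -d)
tar xzf "$BACKUP" -C "$RESTORE"
```

What I am unsure about is the restore side: presumably the node has to come back up with the same node name, or be renamed afterwards, and that process is exactly what I am asking about.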




-Alfonso


_______________________________________________
riak-users mailing list
riak-users <at> lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
David James | 23 Jul 20:31 2014

[ANN] Kria 0.1.14, an async Clojure Client for Riak 2.0.0.rc1

The latest version of Kria, 0.1.14, supports Riak 2.0.0rc1. Kria (a right rotation of "Riak") is an open source asynchronous Clojure driver for Riak 2.0 built on top of Java 7's NIO.2. It uses the Riak protocol buffer interface.


There are, of course, several Riak drivers for Java and Clojure. I hope some people find this one useful. I have a section in the README about why I made it. To summarize, I wanted async support, and the Java driver wasn't quite what I wanted.

Please kick the tires.

In my work projects, Clojure's core.async works great as a layer on top of Kria. Just create a core.async channel in advance and have the callback put the desired return value in the core.async channel. (You could also use Clojure atoms or promises; Kria doesn't care.)

-David
Charles Bijon | 23 Jul 11:55 2014

riak: random blocks missing on GET after some hours

Hello,

We have had a problem for the last four days.

The story :

We migrated our 38 old machines to 45 new servers to increase our 
production capacity. As part of this, we upgraded Riak (1.4.8 -> 
1.4.9 -> 1.4.10). Today we are running Riak 1.4.10, Riak CS 1.4.5, and 
Stanchion 1.4.3.

Now something strange has appeared in our storage: new data put on the 
Riak cluster becomes corrupted over time (AAE is enabled, and was 
enabled during the migration).

We see this message in the log:

[error] <0.13320.0>@riak_cs_get_fsm:waiting_chunks:311 riak_cs_get_fsm: Cannot get S3 <<"blabla">> <<"blabla/blabla/blabla/blabla.foo">> block# {<<94,144,214,192,123,131,68,132,142,55,30,108,189,81,242,106>>,0}: {error,notfound}

Yesterday I disabled AAE to test whether the problem continues, and we 
put the data in again to partially rebuild the storage.

riak-admin diag reports OK, and so does riak-admin ring-status.

Has anyone had this trouble before?

Is it advisable to go back to version 1.4.8?

Should I re-enable AAE, and if so, under what conditions?

If I should go back to Riak 1.4.8, how do I do that without losing 
anything? Is that the right approach? Will the corrupted data become 
readable again after rolling back?

It is a bit hellish right now; I really need a helping hand.

Regards,

Charles
Alexander Popov | 23 Jul 11:22 2014

Yokozuna search

Will queries support leading wildcards and one-character wildcards, like *lala and a*?

Charles Bijon | 22 Jul 14:48 2014

block not found

For the last four days we have had an issue on our cluster, and we 
don't know how to correct it.

The problem appeared after migrating from 38 old nodes to a new 45-node cluster.

Does anyone have an idea about this?

2014-07-22 14:41:11 =CRASH REPORT====
   crasher:
     initial call: mochiweb_acceptor:init/3
     pid: <0.1957.2>
     registered_name: []
     exception exit: 
{{normal,{gen_fsm,sync_send_event,[<0.1998.2>,get_next_chunk,infinity]}},[{gen_fsm,sync_send_event,3,[{file,"gen_fsm.erl"},{line,214}]},{riak_cs_wm_utils,streaming_get,4,[{file,"src/riak_cs_wm_utils.erl"},{line,272}]},{webmachine_decision_core,'-make_encoder_stream/3-fun-0-',3,[{file,"src/webmachine_decision_core.erl"},{line,667}]},{webmachine_request,send_stream_body_no_chunk,2,[{file,"src/webmachine_request.erl"},{line,334}]},{webmachine_request,send_response,3,[{file,"src/webmachine_request.erl"},{line,398}]},{webmachine_request,call,2,[{file,"src/webmachine_request.erl"},{line,251}]},{webmachine_decision_core,wrcall,1,[{file,"src/webmachine_decision_core.erl"},{line,42}]},{webmachine_decision_core,finish_response,3,[{file,"src/webmachine_decision_core.erl"},{line,92}]}]}
     ancestors: [object_web_mochiweb,riak_cs_sup,<0.141.0>]

.....

,{moss_bucket_v1,"les-publications-conde-nast-sa",created,"2013-11-05T15:46:29.000Z",{1383,666389,334639},undefined},{moss_bucket_v1,"les-quatre-chemins",created,"2014-03-10T22:30:54.000Z",{1394,490654,643513},undefined},{moss_bucket_v1,...},...],...},...},...}}]
     trap_exit: false
     status: running
     heap_size: 317811
     stack_size: 24
     reductions: 472810
   neighbours:
2014-07-22 14:41:43 =CRASH REPORT====

2014-07-22 14:45:31.290 [error] <0.6203.2>@riak_cs_get_fsm:waiting_chunks:311 riak_cs_get_fsm: Cannot get S3 <<"daily-news">> <<"daily-news/bw/2014-07-22/release/x_ml_pdf/merged/download.complete">> block# {<<0,46,70,239,21,65,71,20,134,105,153,120,80,55,165,110>>,0}: {error,notfound}
2014-07-22 14:45:31.292 [error] <0.6182.2> CRASH REPORT Process <0.6182.2> with 0 neighbours exited with reason: {normal,{gen_fsm,sync_send_event,[<0.6203.2>,get_next_chunk,infinity]}} in gen_fsm:sync_send_event/3 line 214
2014-07-22 14:45:37.557 [error] <0.6270.2>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user record for s3 failed. Reason: no_user_key
2014-07-22 14:45:42.327 [error] <0.6280.2>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user record for s3 failed. Reason: no_user_key
2014-07-22 14:45:43.948 [error] <0.6367.2>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user record for s3 failed. Reason: no_user_key
2014-07-22 14:45:43.971 [error] <0.6395.2>@riak_cs_get_fsm:waiting_chunks:311 riak_cs_get_fsm: Cannot get S3 <<"groupe-nice-matin">> <<"corse-matin/corse-matin/2014-07-22/release/reader/pages/jpeg/ld/0016.d04e5b20-706e-4f6c-a0fd-469447d5ac33.jpeg">> block# {<<5,216,166,27,75,67,78,254,180,245,135,148,87,243,135,231>>,0}: {error,notfound}
2014-07-22 14:45:43.973 [error] <0.6367.2> CRASH REPORT Process <0.6367.2> with 0 neighbours exited with reason: {normal,{gen_fsm,sync_send_event,[<0.6395.2>,get_next_chunk,infinity]}} in gen_fsm:sync_send_event/3 line 214

Regards,

Charles
Charles Bijon | 22 Jul 10:57 2014

can't retrieve from riakcs (503)

Hi,

We have a 45-node cluster, and since our migration to the new servers 
we have had this problem:

2014-07-22 10:51:09.017 [error] <0.25603.28>@riak_cs_get_fsm:waiting_chunks:311 riak_cs_get_fsm: Cannot get S3 <<"l-humanite">> <<"l-humanite/l-humanite/2014-07-22/cover/cover.ppm74,86,159,38,110,43,45,185,98,160>>,0}: {error,notfound}
2014-07-22 10:51:09.020 [error] <0.25556.28> CRASH REPORT Process <0.25556.28> with 0 neighbours exited with reason: {normal,{gen_fsm,sync_send_event,[<0.25603.28>,get_next_chunk,ent/3 line 214
2014-07-22 10:51:24.593 [error] <0.25820.28>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user record for s3 failed. Reason: no_user_key
2014-07-22 10:51:24.619 [error] <0.25861.28>@riak_cs_get_fsm:waiting_chunks:311 riak_cs_get_fsm: Cannot get S3 <<"le-parisien">> <<"aujourd-hui-en-france/aujourd-hui-en-france/201d/0001.2c9c2f56-e156-43d9-862e-63322a1364d8.jpeg">> block# {<<76,199,23,132,221,93,79,48,190,129,151,207,78,236,111,29>>,0}: {error,notfound}
2014-07-22 10:51:24.621 [error] <0.25820.28> CRASH REPORT Process <0.25820.28> with 0 neighbours exited with reason: {normal,{gen_fsm,sync_send_event,[<0.25861.28>,get_next_chunk,ent/3 line 214

When we request a file, the wget fails and we cannot get the entire 
file, and we see this issue in the riak-cs error log.

Regards,

Charles
Alex De la rosa | 22 Jul 00:03 2014

Re: Riak 2.0.0 RC1

Cool! Thanks, I can't wait to have the updated Python library running :) We have a new project incoming at work, and it would be awesome to be able to start development with 2.0 and the Python client.

Thanks,
Alex


On Mon, Jul 21, 2014 at 11:58 PM, Jared Morrow <jared <at> basho.com> wrote:
The clients are still being finalized.  Word on the street is the Java client RC will be very soon, and the other clients will follow.  The person who would know best has today off, but stay tuned.

Thanks,
Jared


On Mon, Jul 21, 2014 at 3:38 PM, Alex De la rosa <alex.rosa.box <at> gmail.com> wrote:
Awesome! Can't wait to try it! What about the Python client for Riak 2.0? I remember testing the current version and still hitting many issues and non-working new features (like counters, sets, maps, etc.).

Thanks!
Alex


On Mon, Jul 21, 2014 at 11:34 PM, Jared Morrow <jared <at> basho.com> wrote:

Riak-Users,

Everyone at Basho is extremely happy to announce the public availability of Riak 2.0.0 RC. As a release candidate, we do not recommend using it in production, but it should be considered feature and API complete for Riak 2.0 final. We reserve the right to make changes for any data-loss or severe performance issues found post-RC, but these will not affect the API.

This is our most ambitious release, with major new features throughout the product. With that in mind, please take plenty of time to read through the Release Notes. There are some important considerations regarding upgrades, known issues, and packaging. The Release Notes are still a work-in-progress and will be continually updated as we march towards 2.0 final. We have made a lot of progress on our docs site for 2.0, but still have work to do before final release. We appreciate your patience.

Thanks to everyone in our community,
-Team Basho


Jared Morrow | 21 Jul 23:34 2014

Riak 2.0.0 RC1

Riak-Users,

Everyone at Basho is extremely happy to announce the public availability of Riak 2.0.0 RC. As a release candidate, we do not recommend using it in production, but it should be considered feature and API complete for Riak 2.0 final. We reserve the right to make changes for any data-loss or severe performance issues found post-RC, but these will not affect the API.

This is our most ambitious release, with major new features throughout the product. With that in mind, please take plenty of time to read through the Release Notes. There are some important considerations regarding upgrades, known issues, and packaging. The Release Notes are still a work-in-progress and will be continually updated as we march towards 2.0 final. We have made a lot of progress on our docs site for 2.0, but still have work to do before final release. We appreciate your patience.

Thanks to everyone in our community,
-Team Basho

Luke Bakken | 21 Jul 18:40 2014

Re: Indexing using the Solr Interface, add after delete is not applied on cluster

Hi Alexander -

Can you give the *exact* commands you're running to allow me to reproduce this?

Thanks

--
Luke Bakken
CSE
lbakken <at> basho.com


On Wed, May 21, 2014 at 9:48 AM, Alexander Popov <mogadanez <at> gmail.com> wrote:
I deleted an index using 
http://docs.basho.com/riak/latest/dev/references/search-indexing/#Deleting-using-the-Solr-Interface
and then immediately re-added it via 
http://docs.basho.com/riak/latest/dev/references/search-indexing/#Indexing-using-the-Solr-Interface

The item then does not appear in search results in a cluster setup.
On a single node I have not seen this issue.


 

Matthew Eernisse | 21 Jul 15:48 2014

ring_creation_size with `riak-admin status`

Riak folks,

Hi, I'm new to Riak, and confused by something I've seen after setting up a cluster. (Currently a 4-machine test cluster in EC2.)

After installing Riak on Ubuntu via the riak_1.4.9-1_amd64.deb, I very meticulously set ring_creation_size to 256 in each app.config before starting the service and adding it to the cluster.
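In case the placement matters, this is roughly what each node's app.config looks like (trimmed to the relevant part; the setting sits inside the riak_core section):

```erlang
%% /etc/riak/app.config (excerpt) -- ring_creation_size goes inside the
%% riak_core section and must be set before the node first joins a cluster.
{riak_core, [
    {ring_creation_size, 256}
]}
```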

But when I do `riak-admin status`, it displays a ring_creation_size value of 64. Any idea what's going on here?

Thanks in advance for the help.


Matthew
Mohan Radhakrishnan | 20 Jul 06:04 2014

Fwd: Riak CS

Thanks Kelly. Could you point out some practical use cases that Riak supports in production systems? I have viewed webinars and podcasts that describe distributed computing patterns in detail. 


Mohan


On Wed, Jul 16, 2014 at 3:00 AM, Kelly McLaughlin <kelly <at> basho.com> wrote:
Mohan,

I might be a bit confused on what your intent is, but it sounds like your task is to download a large group of files from S3 for processing and you are considering Riak CS for that processing work. If that is the case I am not sure Riak CS is the right fit for that job. Riak itself has a map reduce system built into it, but that is not exposed by Riak CS. Currently, Riak CS is strictly for storage and not data processing. If that is your need you might be better off looking at tools built for data processing like Spark or Hadoop. 

Kelly

On July 11, 2014 at 10:13:35 PM, Mohan Radhakrishnan (radhakrishnan.mohan <at> gmail.com) wrote:

I thought the general idea floating in my ignorant mind could help. We have other storage systems apart from S3, like FTPS, HTTPS, etc. I thought the task of crawling remote storage systems and processing files naturally lent itself to distributed MR jobs and a DFS.

That is when I came across Riak CS.

Thanks,
Mohan


On Sat, Jul 12, 2014 at 9:32 AM, Mohan Radhakrishnan <radhakrishnan.mohan <at> gmail.com> wrote:
Hi,
             I came across the storage system discussion thread. We have a requirement to download thousands of files from S3 for processing. Ours is not a cloud storage system but a cloud access system.

Are there qualities of Riak CS that can help us? We want to download part of a huge file and checkpoint if the connection breaks. File downloads should be fault-tolerant when nodes go down.

Please bear with me if my question is basic. I don't work with distributed cloud storage systems.

Thanks,
Mohan
