Jhonny Everson | 5 May 04:34 2016

Riak-cs / stanchion won't find credentials

Hi,

I am setting up a new cluster. I followed all the setup instructions (I think): I created the admin user as the docs describe, then updated riak-cs.conf and stanchion.conf with the generated keys. I get the following when starting:

2016-05-05 01:15:01.167 [error] <0.149.0>@riak_cs_app:fetch_and_cache_admin_creds:96 Couldn't get admin user (LMTLWU8QZ_UZZJ4Y541) record: {error,notfound}
2016-05-05 01:15:01.199 [error] <0.149.0>@riak_cs_app:sanity_check:129 Admin credentials are not properly set: notfound.

If I revert to the default ('admin.key = admin-key'), it starts OK. If I try to create the user again, it says the email already exists. So the user record is there.

I looked at the logs and didn't find anything that seems relevant other than the entries I just posted. Can someone please help me dig into this issue?
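
For reference, a minimal sketch of how I understood the docs (the key below is the one from the error above; I'm assuming stanchion.conf is meant to mirror riak-cs.conf):

    # riak-cs.conf
    admin.key = LMTLWU8QZ_UZZJ4Y541

    # stanchion.conf
    admin.key = LMTLWU8QZ_UZZJ4Y541

I restarted both services after editing the files.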

--
Jhonny Everson
Ricardo Mayerhofer | 2 May 16:25 2016

Riak server crashing

Hi all,
I have a Riak server that was running fine for a month; now it keeps crashing. Restarting has no effect. Any ideas?

Riak version 2.13

error.log

2016-05-02 14:07:54.871 [error] <0.170.0> Supervisor riak_core_vnode_sup had child undefined started with riak_core_vnode:start_link() at undefined exit with reason {timeout,{gen_server,call,[<0.2386.190>,stop]}} in context shutdown_error

[the same shutdown_error timeout is logged for many more vnode processes: <0.19017.322>, <0.18706.190>, <0.19822.324>, <0.2857.0>, <0.2786.0>, <0.2626.0>, <0.5208.180>, <0.2562.0>]

2016-05-02 14:07:54.872 [error] <0.170.0> Supervisor riak_core_vnode_sup had child undefined started with riak_core_vnode:start_link() at undefined exit with reason bad argument in call to ets:lookup(riak_core_node_watcher, {by_node,'riak@127.0.0.1'}) in riak_core_node_watcher:internal_get_services/1 line 548 in context shutdown_error

2016-05-02 14:17:27.739 [error] <0.4518.0> application: mochiweb, "Accept failed error", "{error,emfile}"

2016-05-02 14:17:27.739 [error] <0.4518.0> CRASH REPORT Process <0.4518.0> with 0 neighbours exited with reason: {error,accept_failed} in mochiweb_acceptor:init/3 line 33

2016-05-02 14:17:27.739 [error] <0.308.0> {mochiweb_socket_server,320,{acceptor_error,{error,accept_failed}}}

[the same emfile accept failure and crash report repeat for a dozen more acceptor processes]
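
The {error,emfile} lines look like the node is running out of file descriptors. A sketch of how I'm checking the limits (commands assume Linux; the pid lookup is a guess at what matches the Riak beam process):

    # limit of the running Riak VM
    riak_pid=$(pgrep -f beam.smp)
    grep 'open files' /proc/$riak_pid/limits

    # shell limit in effect when riak starts
    ulimit -n

Basho's tuning docs recommend raising the open-files limit well above the usual 1024 default (e.g. 65536 via /etc/security/limits.conf).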




--
Ricardo Mayerhofer
Guillaume Boddaert | 2 May 13:45 2016

Very slow acquisition time (99 percentile) while fast median times

Hi,

I'm trying to set up a production environment with Riak as the backend.
Unfortunately, I have very slow write times that bottleneck my whole system.

Here is a sample of one of my node (riak-admin status | grep -e 
'^node_put_fsm_time'):
node_put_fsm_time_100 : 3305516
node_put_fsm_time_95 : 230589
node_put_fsm_time_99 : 1694593
node_put_fsm_time_mean : 79864
node_put_fsm_time_median : 14973

As you can see, times are really good for most of my writes, yet the mean is poor because a few writes take very long; these stats are in microseconds, so the 100th percentile above is about 3.3 seconds.
How can I get rid of those slow inserts? Is that intended/normal?

My setup is the following:
5 hosts (2 CPUs each; %Cpu(s): 47,1 us, 1,3 sy, 0,0 ni, 51,3 id, 0,0 wa, 0,0 hi, 0,2 si, 0,0 st), ring_size: 128, AAE disabled.
Writes are w=1, dw=0.
Each host has 32 GB of RAM, almost all of it used for system caching.
My data is stored on an OpenStack volume that supports up to 3000 IOPS.

Here is an iostat sample for 1 minute:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           38.00    0.00    1.81    0.03    0.08   60.07

Device:            tps    kB_read/s    kB_wrtn/s    kB_read kB_wrtn
vda               0.37         0.00         2.27          0 136
vdb               9.60         0.00       294.53          0 17672
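
For readability, a quick sketch of how I convert those microsecond stats to milliseconds (plain awk over the 'name : value' output shown above):

    riak-admin status | grep '^node_put_fsm_time' \
      | awk '{printf "%-28s %10.1f ms\n", $1, $3/1000}'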

Thanks,

Guillaume BODDAERT
Sanket Agrawal | 28 Apr 18:02 2016

Getting key of the map in erlang

Not sure if this has been asked before: given a map, how does one go about retrieving the key under which the map is stored?

For example, in the Riak docs' example for maps, a map is created with the key "ahmed_info".

If we were to write a commit hook in Erlang that takes some action based on the key, it would be helpful to have a way to extract that key.

I looked in the Basho Erlang client documentation for maps, but I don't see any function to extract the key. Perhaps we have to pattern match to extract it?

I also see Erlang libraries under the Riak installation (one of them, "riak_object", is called in the "commit hook" example in the documentation); I can check there as well if there is online documentation for them somewhere.

I am thinking of storing user info as immutable maps, keyed something like <username>_<info>_<timestamp>, with an Erlang commit hook that updates the <username>_<info>_<latest> map with the latest entry. For that, we need to extract the map key.
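
To frame the question, a minimal sketch of the kind of hook I mean (assuming the riak_object accessors used in the docs' commit hook example):

    %% precommit hook: receives the riak_object being written
    copy_to_latest(Object) ->
        Bucket = riak_object:bucket(Object),  %% {Type, Bucket} for typed buckets
        Key    = riak_object:key(Object),     %% e.g. <<"bob_info_1461854520">>
        %% ... derive <username>_<info>_<latest> from Key and update it ...
        Object.

Is riak_object:key/1 the right way to get at this inside a hook?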

Russell Brown | 28 Apr 09:55 2016

Riak DT refresh

Hi,
Riak DT [1] is in need of some love. I know that some of you on this list (Sargun, are you here? Heinz?) have expressed opinions on the work that needs doing. Here is my short list; I would love to hear opinions on priority, and any additions to this list:

1. merge the smaller map branch
2. deltas
3. new data types (we have a range register and a multi-value register to add; any more?)
4. Internal state as records or maps (but not these messy tuples)
5. update to rebar3
6. update to latest erlang

I’m pretty sure there is plenty more. Would greatly appreciate your feedback.

Many thanks

Russell

[1] https://github.com/basho/riak_dt/
Alexander Popov | 26 Apr 22:36 2016

Solr http endpoint and POST

Is this possible?
If yes, which encoding: form-data or multipart?

If not, is it possible to increase the allowed GET query length? (The maximum looks to be around 6500.)
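
For context, this is the kind of query I'm sending today as a GET against the Riak search endpoint (index and field names are made up):

    curl 'http://localhost:8098/search/query/myindex?wt=json&q=name_s:Alice*'

The q parameter is what grows past the limit.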

Fasil K | 26 Apr 17:57 2016

solrconfig.xml in riak

Hello Everyone,

Does anyone know how to update the solrconfig.xml file for Solr in Riak search?

The issue is that I need to change a date format from 'YYYY-MM-DD' to 'YYYY-MM-DDThh:mm:ssZ' for indexing. (The 'YYYY-MM-DD' format can't be indexed directly into a solr.TrieDateField.)

I am planning to use ParseDateFieldUpdateProcessorFactory to convert the format, but I have to change the solrconfig.xml file for that.
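
For reference, a sketch of the processor chain I have in mind (standard Solr configuration; the chain name and format list are mine):

    <updateRequestProcessorChain name="parse-date">
      <processor class="solr.ParseDateFieldUpdateProcessorFactory">
        <arr name="format">
          <str>yyyy-MM-dd</str>
        </arr>
      </processor>
      <processor class="solr.LogUpdateProcessorFactory"/>
      <processor class="solr.RunUpdateProcessorFactory"/>
    </updateRequestProcessorChain>

The open question is how to get this into the solrconfig.xml that Riak generates for an index.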

Please help.

Thanks in advance,

Fasil K
psterk | 23 Apr 00:13 2016

Unable to use hadoop distcp with Riak

Hi all,

I am trying to copy a file out of Riak to HDFS using the S3 protocol. I created the following file: /etc/hadoop/conf/jets3t.properties

s3service.s3-endpoint=myhost
s3service.s3-endpoint-http-port=8080
s3service.disable-dns-buckets=true
s3service.s3-endpoint-virtual-path=/

s3service.max-thread-count=10
threaded-service.max-thread-count=10
s3service.https-only=false
httpclient.proxy-autodetect=false
httpclient.proxy-host=myhost
httpclient.proxy-port=8080
httpclient.retry-max=11

hadoop distcp s3://<access key>:<secret key>@test/test hdfs://localhost/tmp/test

I get this stack trace:

org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException:
Request Error. -- ResponseCode: 404, ResponseStatus: Object Not Found
	at
org.apache.hadoop.fs.s3.Jets3tFileSystemStore.get(Jets3tFileSystemStore.java:175)
	at
org.apache.hadoop.fs.s3.Jets3tFileSystemStore.retrieveINode(Jets3tFileSystemStore.java:221)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
	at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy25.retrieveINode(Unknown Source)
	at
org.apache.hadoop.fs.s3.S3FileSystem.getFileStatus(S3FileSystem.java:340)
	at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
	at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
	at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1655)
	at
org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
	at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
	at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:382)
	at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:181)
	at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
	at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.tools.DistCp.main(DistCp.java:430)
Caused by: org.jets3t.service.S3ServiceException: Request Error. --
ResponseCode: 404, ResponseStatus: Object Not Found
	at org.jets3t.service.S3Service.getObject(S3Service.java:1379)
	at
org.apache.hadoop.fs.s3.Jets3tFileSystemStore.get(Jets3tFileSystemStore.java:163)
	... 20 more
Caused by: org.jets3t.service.impl.rest.HttpException
	at
org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:519)
	at
org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:281)
	at
org.jets3t.service.impl.rest.httpclient.RestStorageService.performRestGet(RestStorageService.java:981)
	at
org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectImpl(RestStorageService.java:2150)
	at
org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectImpl(RestStorageService.java:2087)
	at org.jets3t.service.StorageService.getObject(StorageService.java:1140)
	at org.jets3t.service.S3Service.getObject(S3Service.java:2583)
	at org.jets3t.service.S3Service.getObject(S3Service.java:84)
	at org.jets3t.service.StorageService.getObject(StorageService.java:525)
	at org.jets3t.service.S3Service.getObject(S3Service.java:1377)

However, with a local .s3cfg file that points to a Riak cluster, I can do
this:

[hdfs@dsg01 ~]$ s3cmd ls s3://test
                       DIR   s3://test/home/
                       DIR   s3://test/setup/
                       DIR   s3://test/test/
                       DIR   s3://test/tmp/

So, s3://test/test does exist and is in Riak, not AWS.

Now, if I comment out s3service.s3-endpoint-virtual-path and run:

hadoop distcp s3://<access key>:<secret key>@test/test hdfs://localhost/tmp/test

I see:

java.io.IOException: /test doesn't exist
	at
org.apache.hadoop.fs.s3.Jets3tFileSystemStore.get(Jets3tFileSystemStore.java:170)
	at
org.apache.hadoop.fs.s3.Jets3tFileSystemStore.retrieveINode(Jets3tFileSystemStore.java:221)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
	at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy25.retrieveINode(Unknown Source)
	at
org.apache.hadoop.fs.s3.S3FileSystem.getFileStatus(S3FileSystem.java:340)
	at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
	at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
	at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1655)
	at
org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
	at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
	at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:382)
	at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:181)
	at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
	at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.tools.DistCp.main(DistCp.java:430)

Using @test/test/ produces the same exception as above.

Using: hadoop distcp s3://<access key>:<secret key>@test hdfs://localhost/tmp/test

java.io.IOException: /user/hdfs doesn't exist
	at
org.apache.hadoop.fs.s3.Jets3tFileSystemStore.get(Jets3tFileSystemStore.java:170)
	at
org.apache.hadoop.fs.s3.Jets3tFileSystemStore.retrieveINode(Jets3tFileSystemStore.java:221)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
	at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy25.retrieveINode(Unknown Source)
	at
org.apache.hadoop.fs.s3.S3FileSystem.getFileStatus(S3FileSystem.java:340)
	at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
	at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
	at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1655)
	at
org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
	at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
	at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:382)
	at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:181)
	at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
	at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.tools.DistCp.main(DistCp.java:430)

I am the user 'hdfs'.

If I comment out these properties

#s3service.s3-endpoint=myhost
#s3service.s3-endpoint-http-port=8080
#s3service.disable-dns-buckets=true
#s3service.s3-endpoint-virtual-path=/

and run: hadoop distcp s3://<access key>:<secret key>@test/test hdfs://localhost/tmp/test

I get a fresh, new exception:

16/04/22 21:53:34 ERROR tools.DistCp: Exception encountered
org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException:
S3 Error Message. -- ResponseCode: 403, ResponseStatus: Forbidden, XML Error
Message: <?xml version="1.0"
encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Access
Denied</Message><Resource>/%2Ftest</Resource><RequestId></RequestId></Error>
	at
org.apache.hadoop.fs.s3.Jets3tFileSystemStore.get(Jets3tFileSystemStore.java:175)
	at
org.apache.hadoop.fs.s3.Jets3tFileSystemStore.retrieveINode(Jets3tFileSystemStore.java:221)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
	at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy25.retrieveINode(Unknown Source)
	at
org.apache.hadoop.fs.s3.S3FileSystem.getFileStatus(S3FileSystem.java:340)
	at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
	at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
	at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1655)
	at
org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
	at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
	at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:382)
	at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:181)
	at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
	at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.tools.DistCp.main(DistCp.java:430)
Caused by: org.jets3t.service.S3ServiceException: S3 Error Message. --
ResponseCode: 403, ResponseStatus: Forbidden, XML Error Message: <?xml
version="1.0"
encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Access
Denied</Message><Resource>/%2Ftest</Resource><RequestId></RequestId></Error>
	at org.jets3t.service.S3Service.getObject(S3Service.java:1379)
	at
org.apache.hadoop.fs.s3.Jets3tFileSystemStore.get(Jets3tFileSystemStore.java:163)
	... 20 more

It's odd to see "/%2Ftest", where %2F is the URL encoding of '/'. Why is that there?

Note: 'myhost' is just a placeholder for the actual hostname which does
resolve.

What am I missing?  
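
One more data point from writing this up: every stack trace goes through org.apache.hadoop.fs.s3.Jets3tFileSystemStore.retrieveINode, which is Hadoop's block-based s3:// filesystem; it expects buckets laid out with its own INode metadata rather than plain objects. If the data in Riak CS is ordinary objects, would the native s3n:// scheme be the right one to use? (A guess on my part; same placeholders as above.)

    hadoop distcp s3n://<access key>:<secret key>@test/test hdfs://localhost/tmp/test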

David Byron | 22 Apr 20:28 2016

how to determine riak version

I'm pondering upgrading Riak from 2.1.3 to 2.1.4, and I got myself confused while confirming that I really am running 2.1.3 at the moment.

I installed riak from here: 
https://packagecloud.io/basho/riak/packages/ubuntu/trusty/riak_2.1.3-1_amd64.deb.

and all of this looks promising:

$ riak version
2.1.3

$ ls /usr/lib/riak/releases/
2.1.3  RELEASES  start_erl.data

$ dpkg -l | grep riak
ii  riak 2.1.3-1      amd64        Riak is a distributed data store

but then there's also this:

$ sudo riak-admin status | grep riak_kv_version
riak_kv_version : <<"2.1.2-0-gf969bba">>

I really wanted riak_kv_version to say 2.1.3-<something>.

I'm clearly paranoid, but can someone help me feel better about this?
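
In case it's useful, another place I thought to cross-check which application versions shipped (path assumes the Ubuntu package layout; Erlang releases keep each app in its own lib/<app>-<version> directory):

    $ ls /usr/lib/riak/lib | grep riak_kv

If that also shows riak_kv-2.1.2-<something>, then the question is really whether the 2.1.3 release is expected to ship riak_kv 2.1.2.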

Thanks much.

-DB
Alexander Sicular | 22 Apr 19:01 2016

Riak Recap April 22nd, 2016

Hello All,


Here’s what’s been going on over the last few weeks. A bug fix release, a new product release, a number of talks, a bunch of questions and an open position.


Hey you! Ya, you! We want to hear from you. Are you working on or speaking about something Riak related and would like to be highlighted? Send me a note, let me know what you’re up to and I’ll get your talk or blog post in the next Riak Recap.



## Announcements


  • Riak KV 2.1.4 has been released and is available for download [1]. This release contains fixes for the Riak init file Product Advisory and the leveldb segfault Product Advisory. Please review the Release Notes [2] for more details before upgrading.

  • Redis Add-on 1.1 is now available for Riak Enterprise [3]. Redis Add-on allows you to integrate Redis as a read-through or write-around cache into your data pipeline.


## Community Events


  • Seema Jethani, Director of Product, @seemaj, will be speaking at Data By the Bay in San Francisco on May 19th on working with time series data from the London Air Quality Network in Riak TS and other projects [4].

  • Yours truly (me!) will be speaking in Dallas on May 5th about Riak TS architecture at the North Texas DAMA meetup [5]. Come say hello.

  • Basho Engineer Jason Voegele, @jvoegele, will be giving a talk at LambdaConf in Boulder, CO entitled “Dialyzer: Optimistic Type Checking for Erlang and Elixir” [6].


## Recently Answered


  • Saran was able to fix his Solr startup issue [7] by correcting the internal IP address in the configuration file [8].

  • Because Riak is written in Erlang, you are able to modify a number of config parameters without bouncing the VM. Nevertheless, there are a few parameters that cannot be changed without a restart. Luke confirms Edgar’s concerns that, at the moment, changing an IP address requires a reboot of the Erlang virtual machine [9].

  • Fasil is looking for ways to remotely configure bucket types [10]. Vitaly confirms that certain operations are only available via the command line and not in the API [11].

  • Jim is experiencing a particular flavor of Solr timeouts [12], which Fred pinpoints to certain areas in the code, recommending changes to hard-coded timeouts [13]. Jim verifies the issue but is unhappy about making those changes; testing continues with a new JVM version [14].

  • Alex was having some issues testing against his single node cluster [15]. Luke verified that the ring file needed to be reset [16].

  • In case you were wondering, Riak S2 is the same as Riak CS [17]. Riak CS has been rebranded Riak S2.

  • Luke helps Satish work through some ulimit issues [18]. Riak noms fd’s… feed it!

  • Surajit found a broken link in the docs [19]. The documentation team always welcomes community feedback. If you see something - say something! [20].

  • Jared is looking for enhancements to the .NET client [21]. Luke drops him an example and lets us know that enhancements are on the way in the next version [22].

  • Shifeng is looking for details on how to delete and reclaim used disk in Riak [23]. Alexander (me!) answers with additional food for thought [24].

  • Harjot has some requirements for his time series related project and is looking for some feedback on the Riak TS product roadmap [25]. Seema, Director of Product at Basho lets us know what’s coming up next [26]. If you have a project that needs certain features - let us know!


## Open Discussions


  • In this long running thread Fred drops some Erlang which gets the build times of individual partition hash trees to help debug Oleksiy’s inconsistent Solr search results [27].

  • Alex is looking for advice on whether Riak TS would be better suited than Riak KV for certain use cases in his social network project [28].

  • Fred asks Anil for more details on his solr duplicate records issue [29].

  • Luke is looking for more information from Joe on his indexing design question [30].

  • Michael is looking for some guidance on properly sizing a Riak S2 cluster [31].


## Jobs at Basho


Interested in working on distributed computing related problems? Perhaps these open positions at Basho may be of interest:


  • Client Services Engineer (USA) [32]




Till next time,


-Alexander Sicular

Solution Architect, Basho

 <at> siculars
@siculars



[1] http://docs.basho.com/riak/kv/2.1.4/downloads/

[2] http://docs.basho.com/riak/kv/2.1.4/release-notes/

[3] http://docs.basho.com/riak/kv/2.1.4/add-ons/redis/redis-add-on-features/

[4] https://databythebay2016.sched.org/event/6ERC/know-the-air-you-are-breathing

[5] https://www.eventbrite.com/e/north-texas-dama-chapter-meeting-may-2016-tickets-24381313164

[6] http://lambdaconf.us/#schedule

[7] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018250.html

[8] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018259.html

[9] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018271.html

[10] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018270.html

[11] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018269.html

[12] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018272.html

[13] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018273.html

[14] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018273.html

[15] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018277.html

[16] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018293.html

[17] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018289.html

[18] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018297.html

[19] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018299.html

[20] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018300.html

[21] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018301.html

[22] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018302.html

[23] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018304.html

[24] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018307.html

[25] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018306.html

[26] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018314.html

[27] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018261.html

[28] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-April/018279.html

[29] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-March/018230.html

[30] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-March/018241.html

[31] http://lists.basho.com/pipermail/riak-users_lists.basho.com/2016-March/018242.html

[32] http://bashojobs.theresumator.com/apply/0CTNKU/Client-Services-Engineer-Remote


Anil Chandgude (HO) | 22 Apr 07:48 2016

How to index List<Class> in Riak so that it will be available for Solr



Hi all,

We use *_s for String fields, *_i for Integers, and so on for single-value fields, and *_ss for List<String>, *_ls for List<Long>, and so on for multi-valued fields.
Now I have a case where I want to store a list of a class, like List<SampleClass>. How do I do this?
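
For illustration, the only approach I can think of is flattening each scalar member of SampleClass into its own multi-valued dynamic field before writing (field names below are made up):

    {
      "samples_name_ss":  ["first", "second"],
      "samples_count_ls": [10, 20]
    }

Is there a better way than flattening?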
