Matthew Brender | 27 Feb 18:26 2015

Riak Recap - Feb 27, 2015

Hi Riakators,

Here's your regular Recap of what's been happening over the last two to three weeks.

Fresh release

Other code

  • The Elixir client now includes connection pooling! Thanks Drew Kerrigan!
  • While still in POC, Jon Glick has a native OS X App for Riak in the works
  • We're glad to see a project called nkbase is leveraging riak_core

Recently answered

Still open

Event news

Meetups are back in season!

  • Seattle: Mar 3 New Stack Meetup has great speakers set up
  • Denver: Mar 4 at Distributed Computing Denver, our own Sargun Dhillon is speaking
  • Boston: Mar 5 we're talking Intro to Coding with Riak
  • NYC: Mar 19 we're also talking Intro to Coding with Riak (and it's the same week as GigaOM)

For the weekend

Your turn

  • What did I miss? What are you up to? Sharing is caring and you can always share here

Thanks for reading,

Matt 

Matt Brender | Developer Advocacy Lead
Basho Technologies

_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Alexander Popov | 27 Feb 11:46 2015

2.0.5 compiled from source SOLR failed to start

2015-02-27 10:38:27.960 [info] <0.563.0>@yz_solr_proc:ensure_data_dir:219 No solr config found, creating a new one
2015-02-27 10:38:27.961 [info] <0.563.0>@yz_solr_proc:init:96 Starting solr: "/usr/bin/java" ["-Djava.awt.headless=true","-Djetty.home=/var/riak-dist/riak1/bin/../lib/yokozuna-2.0.0-34-g122659d/priv/solr","
-Djetty.port=18093","-Dsolr.solr.home=/var/lib/riak1/yz","-DhostContext=/internal_solr","-cp","/var/riak-dist/riak1/bin/../lib/yokozuna-2.0.0-34-g122659d/priv/solr/start.jar","-Dlog4j.configuration=file:///
var/riak-dist/riak1/etc/solr-log4j.properties","-Dyz.lib.dir=/var/riak-dist/riak1/bin/../lib/yokozuna-2.0.0-34-g122659d/priv/java_lib","-d64","-Xms1g","-Xmx3g","-XX:+UseStringCache","-XX:+UseCompressedOops"
,"-Dcom.sun.management.jmxremote.port=18985","-Dcom.sun.management.jmxremote.authenticate=false","-Dcom.sun.management.jmxremote.ssl=false","org.eclipse.jetty.start.Main"]
2015-02-27 10:38:28.004 [info] <0.7.0> Application yokozuna started on node 'riak1@10.0.0.133'
2015-02-27 10:38:28.007 [info] <0.7.0> Application cluster_info started on node 'riak1@10.0.0.133'
2015-02-27 10:38:28.033 [info] <0.198.0>@riak_core_capability:process_capability_changes:555 New capability: {riak_control,member_info_version} = v1
2015-02-27 10:38:28.035 [info] <0.7.0> Application riak_control started on node 'riak1@10.0.0.133'
2015-02-27 10:38:28.035 [info] <0.7.0> Application erlydtl started on node 'riak1@10.0.0.133'
2015-02-27 10:38:28.043 [info] <0.7.0> Application riak_auth_mods started on node 'riak1@10.0.0.133'
2015-02-27 10:38:28.307 [info] <0.563.0>@yz_solr_proc:handle_info:135 solr stdout/err: java.io.FileNotFoundException: No XML configuration files specified in start.config or command line.
        
2015-02-27 10:38:28.308 [info] <0.563.0>@yz_solr_proc:handle_info:135 solr stdout/err:  at org.eclipse.jetty.start.Main.start(Main.java:502)
        at org.eclipse.jetty.start.Main.main(Main.java:96)
       
Usage: java -jar start.jar [options] [properties] [configs]
       java -jar start.jar --help  # for more information
2015-02-27 10:38:28.625 [error] <0.563.0> gen_server yz_solr_proc terminated with reason: {"solr OS process exited",251}
Matt Brooks | 26 Feb 23:41 2015

Using Riak for Data with many Entities and Relationships

I am designing a web application that, for the purpose of this conversation, deals with three main entities: 
  • Users
  • Groups
  • Tasks
Users are members of groups, and tasks belong to groups. 

Early in the development of the application, Neo4j was used to store the data. Users would have a MEMBER_OF relationship to a group, and tasks would have a BELONGS_TO relationship to a group. Neo4j was nice for access control because I could add permissions to the MEMBER_OF relationship. It was also nice for the simple BELONGS_TO relationship. Neo4j separates entities and relationships nicely. 

After reading about Riak and reminiscing about my use of MongoDB in the past, I began to think about using Riak to store my data instead of Neo4j. Storing the users, groups, and tasks seems trivial enough. But storing the relationships seems a bit tougher.

 I am planning on storing the entities in three buckets:
  • user
  • group
  • task
...where each of the buckets has the entity's ID as the key and a map of the relevant information as the value. 

What I am struggling with now is modeling the relationships I so easily modeled in Neo4j, in Riak. I have a few ideas:
  1. Store both user IDs and task IDs in lists inside of the group information. The user ID list would also include permissions for the users. 
  2. Store group IDs in a list inside of the user information and task IDs in a list inside of the group information.
  3. Use a user-group bucket and a group-task bucket. The user-group bucket will have user IDs as the keys and a list of maps as the value. The maps in question would hold a group ID and permission information for the group. The group-task bucket would be similar to the user-group bucket, but instead of a list of maps, it would simply have a list of task IDs.
  4. Use Riak's links for both user membership and tasks belonging to groups. A given user would have member links to groups, and a given group would have task links to tasks. Permissions for a given user ID would be stored in the group somewhere.
None of the four entirely satisfies me. 

Number one makes it really hard and inefficient to ask the DB for the groups that a user is a member of (I would have to go through every single group and check if the user ID is in the member list). The same issue occurs with tasks.

Number two makes it really easy to go from user to group to tasks, but makes it difficult to go from group back to users. What if I wanted to ask "which users are members of group X?" 

Number three works in a way similar to relational databases, and does a good job of separating relationships from entities. This has the same issues mentioned in number two.

Number four seems to be the one that might be considered idiomatic Riak usage, but it completely separates permissions from the member relationship a user has with a group, since links don't support complex properties. 

What do you think about the 4 models mentioned? Any ideas about how I can model this data in Riak effectively?
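For what it's worth, option 3's "group back to users" problem goes away if the relationship is denormalized in both directions, at the cost of a second write. A library-agnostic sketch, with plain dicts standing in for the Riak buckets (all names here are illustrative, not from any client API):

```python
# Sketch: store each membership in both directions so either lookup is
# a single key fetch. Each dict entry would be one Riak object.
user_groups = {}   # user_id  -> {group_id: permissions}
group_users = {}   # group_id -> {user_id: permissions}
group_tasks = {}   # group_id -> [task_id, ...]

def add_member(user_id, group_id, permissions):
    # Two writes; in Riak these are two puts that the application
    # must keep in sync (they are not atomic across objects).
    user_groups.setdefault(user_id, {})[group_id] = permissions
    group_users.setdefault(group_id, {})[user_id] = permissions

def add_task(group_id, task_id):
    group_tasks.setdefault(group_id, []).append(task_id)

def groups_of(user_id):
    return user_groups.get(user_id, {})

def members_of(group_id):
    return group_users.get(group_id, {})
```

The trade-off is that the two directions can drift if one write fails, so the application needs to tolerate or repair that inconsistency.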

--
Matt Brooks
Jason Greathouse | 26 Feb 22:09 2015

repair-2i fails: Error: index_scan_timeout

We have a 5 node cluster running 2.0.0beta1. Our 2i indexes seem to return different responses depending on which node you hit, so I'm trying to rebuild them.

I'm trying to run riak-admin repair-2i, and it fails with Error: index_scan_timeout.

An example:
# riak-admin repair-2i 0
Will repair 2i on these partitions:
0
Watch the logs for 2i repair progress reports

console.log:
2015-02-26 14:55:32.099 [info] <0.7489.15>@riak_kv_2i_aae:init:139 Starting 2i repair at speed 100 for partitions [0]
2015-02-26 14:55:32.100 [info] <0.7491.15>@riak_kv_2i_aae:repair_partition:259 Acquired lock on partition 0
2015-02-26 14:55:32.100 [info] <0.7491.15>@riak_kv_2i_aae:repair_partition:261 Repairing indexes in partition 0
2015-02-26 14:55:32.100 [info] <0.7491.15>@riak_kv_2i_aae:create_index_data_db:326 Creating temporary database of 2i data in /data/riak/anti_entropy/2i/tmp_db
2015-02-26 14:55:32.114 [info] <0.7491.15>@riak_kv_2i_aae:create_index_data_db:363 Grabbing all index data for partition 0
2015-02-26 15:00:32.118 [error] <0.2701.0> gen_server <0.2701.0> terminated with reason: bad argument in call to eleveldb:async_get(#Ref<0.0.77.168699>, <<>>, <<131,104,2,109,0,0,0,12,80,68,45,101,118,101,110,116,98,97,115,101,109,0,0,0,22,48,48,48,48,48,...>>, []) in eleveldb:get/3 line 150
2015-02-26 15:00:32.119 [error] <0.2701.0> CRASH REPORT Process <0.2701.0> with 0 neighbours exited with reason: bad argument in call to eleveldb:async_get(#Ref<0.0.77.168699>, <<>>, <<131,104,2,109,0,0,0,12,80,68,45,101,118,101,110,116,98,97,115,101,109,0,0,0,22,48,48,48,48,48,...>>, []) in eleveldb:get/3 line 150 in gen_server:terminate/6 line 744
2015-02-26 15:00:32.121 [error] <0.2696.0> Supervisor {<0.2696.0>,poolboy_sup} had child riak_core_vnode_worker started with {riak_core_vnode_worker,start_link,undefined} at <0.2701.0> exit with reason bad argument in call to eleveldb:async_get(#Ref<0.0.77.168699>, <<>>, <<131,104,2,109,0,0,0,12,80,68,45,101,118,101,110,116,98,97,115,101,109,0,0,0,22,48,48,48,48,48,...>>, []) in eleveldb:get/3 line 150 in context child_terminated
2015-02-26 15:00:32.237 [info] <0.7489.15>@riak_kv_2i_aae:next_partition:160 Finished 2i repair:
Total partitions: 1
Finished partitions: 1
Speed: 100
Total 2i items scanned: 0
Total tree objects: 0
Total objects fixed: 0
With errors:
Partition: 0
Error: index_scan_timeout

I found a previous post about this:

I tried setting aae_2i_batch_size to 10 in advanced.config; the server started, but I'm not sure how to confirm the setting took effect.
If it did, it didn't seem to help.
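For reference, a sketch of where that setting would live in advanced.config (Erlang terms; placing it under the riak_kv section is an assumption based on the riak_kv_2i_aae module name in the logs):

```erlang
%% advanced.config -- a list of {App, [{Key, Value}]} tuples; note the trailing dot.
[
 {riak_kv, [
   %% Assumed setting name from the earlier post; verify before relying on it.
   {aae_2i_batch_size, 10}
 ]}
].
```

One way to check whether it took is from an attached console (`riak attach`): `application:get_env(riak_kv, aae_2i_batch_size).`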

Any other suggestions? 
Let me know if you need more details.

Thanks,

Jason Greathouse
Sr. Systems Engineer

Patrick F. Marques | 26 Feb 00:34 2015

Riak-CS Node.js client

Hi everyone,

I'm trying to use the AWS SDK as an S3 client for Riak CS to upload large objects whose size I usually don't know in advance. For that purpose I'm trying to use multipart upload, as in the SDK example https://github.com/aws/aws-sdk-js/blob/master/doc-src/guide/node-examples.md#amazon-s3-uploading-an-arbitrarily-sized-stream-upload.
The problem is that I'm always getting Access Denied.

I've been trying some other clients but also without success.

Best regards,
Patrick Marques


Corentin Jechoux | 25 Feb 23:43 2015

Riak on Heroku

Hello,

I have set up a project to deploy one small Riak instance to the Heroku platform with one click.

You can see more details here:



Regards

Corentin
Ricardo Mayerhofer | 25 Feb 22:54 2015

NoNodesAvailableException

Hi all,
We're deploying a new application that uses Riak to store the user's cart during the purchase flow.

The application runs fine at first; however, after a few hours every Riak operation fails on the client side, even though the cluster is up and running.

The full stack trace is pasted at the end of this e-mail (com.basho.riak.client.core.NoNodesAvailableException).

If the application is restarted, it starts working again. 

We're using Riak Client 2.0 along with Riak 1.4.10, speaking protocol buffers through a TCP load balancer in front of the Riak cluster.

The load balancer has an idle timeout, so it closes connections after 60 seconds of inactivity.

It seems like some sort of connection leak.

Any help is appreciated. Thanks 

<25-02-2015 19:12:09> <Thread : http-nio-8080-exec-150> <tid:CKT-66147652-c34e-44f8-858a-cc67bf183293> <customerid: > <[ERROR] [com.b2winc.cart.riak.ShoppingCartRiakRepository] [Error] 
---Stack : com.basho.riak.client.core.NoNodesAvailableException : java.util.concurrent.ExecutionException at com.basho.riak.client.core.FutureOperation.get(FutureOperation.java:260)
  at com.basho.riak.client.api.commands.CoreFutureAdapter.get(CoreFutureAdapter.java:52)
  at com.basho.riak.client.api.RiakCommand.execute(RiakCommand.java:89)
  at com.basho.riak.client.api.RiakClient.execute(RiakClient.java:293)
  at com.b2winc.cart.riak.ShoppingCartRiakRepository.isDependencyWorking(ShoppingCartRiakRepository.java:123)
  at com.b2winc.cart.health.HealthService.getDependencies(HealthService.java:21)
  at com.b2winc.cart.controller.HealthController.health(HealthController.java:24)
  at sun.reflect.GeneratedMethodAccessor474.invoke(Unknown Source)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:483)
  at org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:215)
  at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:132)
  at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104)
  at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:749)
  at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:689)
  at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:83)
  at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:938)
  at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:870)
  at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:961)
  at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:852)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:618)
  at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:837)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:725)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:291)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
  at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
  at com.ocpsoft.pretty.PrettyFilter.doFilter(PrettyFilter.java:145)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
  at com.b2winc.checkout.web.ServerErrorFilter.doFilter(ServerErrorFilter.java:24)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
  at com.b2winc.checkout.web.TransactionIdFilter.doFilter(TransactionIdFilter.java:27)
  at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:344)
  at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:261)
  at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
  at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
  at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:219)
  at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106)
  at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:505)
  at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:142)
  at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
  at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:676)
  at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:610)
  at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
  at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:534)
  at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1081)
  at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:658)
  at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:222)
  at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1566)
  at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1523)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
  at java.lang.Thread.run(Thread.java:745)
---Stack :  : com.basho.riak.client.core.NoNodesAvailableException  at com.basho.riak.client.core.DefaultNodeManager.executeOnNode(DefaultNodeManager.java:95)
  at com.basho.riak.client.core.RiakCluster.execute(RiakCluster.java:197)
  at com.basho.riak.client.core.RiakCluster.retryOperation(RiakCluster.java:328)
  at com.basho.riak.client.core.RiakCluster.access$800(RiakCluster.java:44)
  at com.basho.riak.client.core.RiakCluster$RetryTask.run(RiakCluster.java:340)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
  at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)


Michael Martin | 24 Feb 15:35 2015

YZ search schema question

Hi all,

I have a need to index on two separate fields in a JSON document - rather, I need to concatenate the two and index on the result.

Short of duplicating both into a single new field in my JSON document, how would I go about doing that?

Example:

Given this JSON:

{ "parent": "/path/to/parent",
  "self": "myname"
}

How would I build a schema that would search on "/path/to/parent/myname" without doing something like:

{ "parent": "/path/to/parent",
  "self": "myname",
  "fullpath": "/path/to/parent/myname"
}
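Absent a Solr-side answer, one workaround is essentially the duplication the question hopes to avoid, but automated at write time: derive the combined field in the client before storing. A sketch (the function name is hypothetical):

```python
import posixpath

def with_fullpath(doc):
    # Derive the concatenated field from "parent" and "self" so the
    # search schema only needs to index a single "fullpath" field.
    enriched = dict(doc)
    enriched["fullpath"] = posixpath.join(doc["parent"], doc["self"])
    return enriched

doc = {"parent": "/path/to/parent", "self": "myname"}
# with_fullpath(doc)["fullpath"] == "/path/to/parent/myname"
```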

Thanks,
Michael Martin
Shawn Debnath | 23 Feb 21:41 2015

ACLs not being set correctly for riak-cs

Hi there,

I can't seem to get ACLs set properly on newly created buckets in Riak CS. I am using s3curl to push the payload via PUT /?acl, and it returns 200 OK. However, a GET /?acl returns an XML payload with missing IDs. Without manually pushing new ACLs, the default ACL correctly gives access to the owner, but as soon as I push a custom ACL set, it screws up the grants for both the owner and the other users.

NOTE: The keys below are for a private test environment so substitute your values accordingly.

Any help appreciated on pointing me to the right direction!

Thanks,
Shawn



Here are the three user IDs, keys and secrets. I want the owner to retain full control while I want to grant WRITE privileges to publisher and READ privileges to reader.


    admin_id: feab26c2fec623a34e7d60e620b42a7786eca3223b5e2faebc5d248a34f3239e
    admin_key: 1049V_JJHPH7TO_QPWVC
    admin_secret: lMQsnn3Cukk1UR28FAtoZiap9KEOjBRgYKiVVg==
    publisher_id: 5efc8fb59754a6d11eb1a36c501a8ef7b1be44b0300fbe3df354423b7a115ac5
    publisher_key: D-YBO-QHCHU9MEHNZR1D
    publisher_secret: nin5LA4WHEuJeTuzN-qCWBXsOvTyUbdPuDQ3eg==
    reader_id: de6831d6da88df325d474f7f6c1f708596998c54fc0817685f8c67f1d8cab239
    reader_key: _QOKYEHYM6S-YDDHGSYF
    reader_secret: sFc1HBhjQzfr70Yda-ke257LHkVCPNAN0chs9A==

<!-- 
  INPUT ACL XML
-->
<AccessControlPolicy xmlns="http://data.basho.com/doc/2012-04-05/">
  <Owner>
    <ID>feab26c2fec623a34e7d60e620b42a7786eca3223b5e2faebc5d248a34f3239e</ID>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>feab26c2fec623a34e7d60e620b42a7786eca3223b5e2faebc5d248a34f3239e</ID>
     </Grantee>
     <Permission>FULL_CONTROL</Permission>
    </Grant>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>5efc8fb59754a6d11eb1a36c501a8ef7b1be44b0300fbe3df354423b7a115ac5</ID>
     </Grantee>
     <Permission>WRITE</Permission>
    </Grant>
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
        <ID>de6831d6da88df325d474f7f6c1f708596998c54fc0817685f8c67f1d8cab239</ID>
     </Grantee>
     <Permission>READ</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>
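For reference, a policy document like the one above can be generated rather than hand-edited, which rules out typos in the grant structure. A sketch using only the standard library (the function name is hypothetical; the namespaces are taken from the XML above):

```python
import xml.etree.ElementTree as ET

NS = "http://data.basho.com/doc/2012-04-05/"
XSI = "http://www.w3.org/2001/XMLSchema-instance"

def acl_policy(owner_id, grants):
    """Build an AccessControlPolicy document.

    grants: list of (canonical_user_id, permission) pairs.
    """
    policy = ET.Element("AccessControlPolicy", {"xmlns": NS})
    owner = ET.SubElement(policy, "Owner")
    ET.SubElement(owner, "ID").text = owner_id
    acl = ET.SubElement(policy, "AccessControlList")
    for user_id, perm in grants:
        grant = ET.SubElement(acl, "Grant")
        grantee = ET.SubElement(grant, "Grantee", {
            "xmlns:xsi": XSI,
            "xsi:type": "CanonicalUser",
        })
        ET.SubElement(grantee, "ID").text = user_id
        ET.SubElement(grant, "Permission").text = perm
    return ET.tostring(policy, encoding="unicode")
```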

<!--
  CREATE BUCKET social-media VIA s3curl
  
  NOTE
  NOTE If you are using non-standard domains, as in the case below, edit the s3curl.pl file and modify the @endpoints list to contain the correct set of domains
  NOTE
-->
$ bin/s3curl.pl --debug --id ${RIAK_ADMIN_KEY} --key ${RIAK_ADMIN_SECRET} --acl private -- -s -v -x localhost:50201 -X PUT http://social-media.cs.domain.com/

s3curl: Found the url: host=social-media.cs.domain.com; port=; uri=/; query=;
s3curl: vanity endpoint signing case
s3curl: StringToSign='PUT\n\n\nMon, 23 Feb 2015 20:03:15 +0000\nx-amz-acl:private\n/social-media/'
s3curl: signature='v48ovqQBnqfEcBZ7kPedpbs1Xt4='
s3curl: exec curl -H Date: Mon, 23 Feb 2015 20:03:15 +0000 -H Authorization: AWS 1049V_JJHPH7TO_QPWVC:v48ovqQBnqfEcBZ7kPedpbs1Xt4= -H x-amz-acl: private -L -s -v -x localhost:50201 -X PUT http://social-media.cs.domain.com/
* Hostname was NOT found in DNS cache
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 50201 (#0)
> PUT http://social-media.cs.domain.com/ HTTP/1.1
> User-Agent: curl/7.37.1
> Host: social-media.cs.domain.com
> Accept: */*
> Proxy-Connection: Keep-Alive
> Date: Mon, 23 Feb 2015 20:03:15 +0000
> Authorization: AWS 1049V_JJHPH7TO_QPWVC:v48ovqQBnqfEcBZ7kPedpbs1Xt4=
> x-amz-acl: private
< HTTP/1.1 200 OK
* Server Riak CS is not blacklisted
< Server: Riak CS
< Date: Mon, 23 Feb 2015 20:03:16 GMT
< Content-Type: application/xml
< Content-Length: 0
* Connection #0 to host localhost left intact


<!--
  SET ACLs ON BUCKET social-media VIA s3curl
  
  NOTE
  NOTE If you are using non-standard domains, as in the case below, edit the s3curl.pl file and modify the @endpoints list to contain the correct set of domains
  NOTE
-->
$  bin/s3curl.pl --debug --id ${RIAK_ADMIN_KEY} --key ${RIAK_ADMIN_SECRET} --put /tmp/riak-cs-bucket-policy.xml -- -s -v -x localhost:50201 -X PUT http://social-media.cs.domain.com/?acl

s3curl: Found the url: host=social-media.cs.domain.com; port=; uri=/; query=acl;
s3curl: vanity endpoint signing case
s3curl: StringToSign='PUT\n\n\nMon, 23 Feb 2015 20:03:21 +0000\n/social-media/?acl'
s3curl: signature='QAcPGgB1tZO2+U4M0TvP4Q4uyxQ='
s3curl: exec curl -H Date: Mon, 23 Feb 2015 20:03:21 +0000 -H Authorization: AWS 1049V_JJHPH7TO_QPWVC:QAcPGgB1tZO2+U4M0TvP4Q4uyxQ= -L -T /tmp/riak-cs-bucket-policy.xml -s -v -x localhost:50201 -X PUT http://social-media.cs.domain.com/?acl
* Hostname was NOT found in DNS cache
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 50201 (#0)
> PUT http://social-media.cs.domain.com/?acl HTTP/1.1
> User-Agent: curl/7.37.1
> Host: social-media.cs.domain.com
> Accept: */*
> Proxy-Connection: Keep-Alive
> Date: Mon, 23 Feb 2015 20:03:21 +0000
> Authorization: AWS 1049V_JJHPH7TO_QPWVC:QAcPGgB1tZO2+U4M0TvP4Q4uyxQ=
> Content-Length: 1003
> Expect: 100-continue
< HTTP/1.1 100 Continue
* We are completely uploaded and fine
< HTTP/1.1 200 OK
* Server Riak CS is not blacklisted
< Server: Riak CS
< Date: Mon, 23 Feb 2015 20:03:21 GMT
< Content-Type: application/xml
< Content-Length: 0
* Connection #0 to host localhost left intact


<!--
  VERIFY ACLs USING ADMIN KEY/SECRET

  As you can see, IDs in the grants are missing, and even the owner now cannot put/get files.
-->
bin/s3curl.pl --debug --id ${RIAK_ADMIN_KEY} --key ${RIAK_ADMIN_SECRET}  -- -s -v -x localhost:50201 -X GET http://social-media.cs.domain.com/?acl

<?xml version="1.0" encoding="UTF-8"?>
<AccessControlPolicy>
    <Owner>
        <ID>feab26c2fec623a34e7d60e620b42a7786eca3223b5e2faebc5d248a34f3239e</ID>
        <DisplayName>riak-cs-admin</DisplayName>
    </Owner>
    <AccessControlList>
        <Grant>
            <Grantee
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
                <ID></ID>
                <DisplayName></DisplayName>
            </Grantee>
            <Permission>FULL_CONTROL</Permission>
        </Grant>
        <Grant>
            <Grantee
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
                <ID></ID>
                <DisplayName></DisplayName>
            </Grantee>
            <Permission>READ</Permission>
        </Grant>
        <Grant>
            <Grantee
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
                <ID></ID>
                <DisplayName></DisplayName>
            </Grantee>
            <Permission>WRITE</Permission>
        </Grant>
    </AccessControlList>
</AccessControlPolicy>

<!-- 
  DUMP USERS TO VERIFY
-->
s3curl: Found the url: host=riak-cs.cs.domain.com; port=; uri=/users; query=;
s3curl: vanity endpoint signing case
s3curl: StringToSign='GET\n\n\nMon, 23 Feb 2015 20:30:30 +0000\n/riak-cs/users'
s3curl: signature='mOcYNLzS/3PFkXhU8tnM14HQVoI='
s3curl: exec curl -H Date: Mon, 23 Feb 2015 20:30:30 +0000 -H Authorization: AWS 1049V_JJHPH7TO_QPWVC:mOcYNLzS/3PFkXhU8tnM14HQVoI= -L -s -v -x localhost:50201 -X GET http://riak-cs.cs.domain.com/users
* Hostname was NOT found in DNS cache
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 50201 (#0)
> GET http://riak-cs.cs.domain.com/users HTTP/1.1
> User-Agent: curl/7.37.1
> Host: riak-cs.cs.domain.com
> Accept: */*
> Proxy-Connection: Keep-Alive
> Date: Mon, 23 Feb 2015 20:30:30 +0000
> Authorization: AWS 1049V_JJHPH7TO_QPWVC:mOcYNLzS/3PFkXhU8tnM14HQVoI=
< HTTP/1.1 200 OK
< Vary: Accept
< Transfer-Encoding: chunked
* Server Riak CS is not blacklisted
< Server: Riak CS
< Date: Mon, 23 Feb 2015 20:30:30 GMT
< Content-Type: multipart/mixed; boundary=TCW5KE8FRZPTJ9HK2PL896Q8A5V2F9O
--TCW5KE8FRZPTJ9HK2PL896Q8A5V2F9O
Content-Type: application/xml


<?xml version="1.0" encoding="UTF-8"?>
<Users>
    <User>
        <Email>riak-cs-publisher <at> domain.com</Email>
        <DisplayName>riak-cs-publisher</DisplayName>
        <Name>publisher</Name>
        <KeyId>D-YBO-QHCHU9MEHNZR1D</KeyId>
        <KeySecret>nin5LA4WHEuJeTuzN-qCWBXsOvTyUbdPuDQ3eg==</KeySecret>
        <Id>5efc8fb59754a6d11eb1a36c501a8ef7b1be44b0300fbe3df354423b7a115ac5</Id>
        <Status>enabled</Status>
    </User>
    <User>
        <Email>riak-cs-reader <at> domain.com</Email>
        <DisplayName>riak-cs-reader</DisplayName>
        <Name>reader</Name>
        <KeyId>_QOKYEHYM6S-YDDHGSYF</KeyId>
        <KeySecret>sFc1HBhjQzfr70Yda-ke257LHkVCPNAN0chs9A==</KeySecret>
        <Id>de6831d6da88df325d474f7f6c1f708596998c54fc0817685f8c67f1d8cab239</Id>
        <Status>enabled</Status>
    </User>
</Users>
--TCW5KE8FRZPTJ9HK2PL896Q8A5V2F9O
Content-Type: application/xml


<?xml version="1.0" encoding="UTF-8"?>
<Users>
    <User>
        <Email>riak-cs-admin <at> domain.com</Email>
        <DisplayName>riak-cs-admin</DisplayName>
        <Name>admin</Name>
        <KeyId>1049V_JJHPH7TO_QPWVC</KeyId>
        <KeySecret>lMQsnn3Cukk1UR28FAtoZiap9KEOjBRgYKiVVg==</KeySecret>
        <Id>feab26c2fec623a34e7d60e620b42a7786eca3223b5e2faebc5d248a34f3239e</Id>
        <Status>enabled</Status>
    </User>
</Users>
--TCW5KE8FRZPTJ9HK2PL896Q8A5V2F9O
Content-Type: application/xml


<?xml version="1.0" encoding="UTF-8"?>
<Users/>
* Connection #0 to host localhost left intact
--TCW5KE8FRZPTJ9HK2PL896Q8A5V2F9O--

Karsten Hauser | 23 Feb 11:12 2015

Starting riak with init.d-script on Debian 8 fails

Hi all,

 

When I try to start my Riak installation with the init script, I run into the following error message:

 

root@unity-backend-dev:~# /etc/init.d/riak start
[....] Starting riak (via systemctl): riak.service
Failed to start riak.service: Unit riak.service failed to load: No such file or directory.
failed!

 

So “riak.service” seems to be missing, but I don’t know where.

 

My system is “Debian GNU/Linux 8” and I have installed “riak_2.0.4-1_amd64.deb”.

 

"riak start" without the init.d script just works fine.
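Since Debian 8 manages services through systemd, one workaround people use when the package only ships a SysV init script is to add a native unit file. A minimal sketch (the paths, user, and fork behaviour are assumptions based on a default .deb install; adjust to your layout) saved as /etc/systemd/system/riak.service:

```ini
# /etc/systemd/system/riak.service -- hand-written sketch, not from the package.
[Unit]
Description=Riak distributed key/value store
After=network.target

[Service]
Type=forking
User=riak
ExecStart=/usr/sbin/riak start
ExecStop=/usr/sbin/riak stop
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```

After adding it, run `systemctl daemon-reload` and then `systemctl start riak`.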

 

Can somebody please help me with this?

 

Regards

Karsten

AM | 20 Feb 23:35 2015

Data modelling questions

Hi All.

I am currently looking at using Riak as a data store for time series 
data. Currently we get about 1.5T of data in JSON format that I intend 
to persist in Riak. I am having some difficulty figuring out how to 
model it such that I can fulfill the use cases I have been handed.

The data is provided in several types of log formats with some common 
fields:

- timestamp
- geo
- s/w build #
- location #

- .... whole bunch of other key value pairs.

For the most part I will need to provide aggregated views based on geo. 
There are some views based on s/w build # and location #. The 
aggregation will be on an hourly basis.

The model that I came up with:

<log-format-type>[<hour>][<timestamp>-<msg-id>]: <json-body>

with indices on geo, s/w build # and location #.
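A quick sketch of that key scheme (hour prefix first, then timestamp and message ID), so that all keys for one hour sort together and an hourly aggregation becomes a contiguous range; the function name and separator are illustrative:

```python
from datetime import datetime, timezone

def make_key(timestamp, msg_id):
    # Hour bucket prefix first so keys for the same hour sort together,
    # then the full timestamp and message ID for uniqueness.
    hour = timestamp.strftime("%Y%m%d%H")
    return "{}/{}-{}".format(hour, timestamp.isoformat(), msg_id)

ts = datetime(2015, 2, 20, 22, 35, tzinfo=timezone.utc)
key = make_key(ts, "msg-001")
# key == "2015022022/2015-02-20T22:35:00+00:00-msg-001"
```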

I /think/ this will satisfy most of what I want to do, but I was 
wondering if someone else has had to solve this sort of a problem and 
what their solution was?

I would also be interested in hearing about alternate structures or bad 
assumptions I am making here.

Thanks.
AM
