Buri Arslon | 20 Apr 15:57 2014

How to set eDisMax? Solr start and rows not working properly

Hi guys!

I searched the docs and the source code but wasn't able to find any info about using edismax.

I have 2 questions:

1. How to set edismax parser?

2. Why are "start" and "rows" not working properly?

I have 4 records in a Riak bucket:

   B
   C
   A
   D


{sort, <<"field asc">>} returns correctly sorted records (A,B,C,D).

But [{sort, <<"field asc">>},{start,0},{rows,2}] is returning (C,D) while I'm expecting (A,B).

Maybe I'm doing something wrong. Any hints?
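Not a definitive answer, but one thing worth checking: a paged query is only correct if the slice is applied to the globally sorted result. A toy Python model of the two orderings (this is an illustration, not Riak's actual internals) shows how start/rows can return the wrong page whenever any stage slices before the final merge-sort:

```python
def paginate_after_sort(partitions, start, rows):
    # Correct: merge and sort everything, then take the page.
    merged = sorted(v for part in partitions for v in part)
    return merged[start:start + rows]

def paginate_before_sort(partitions, start, rows):
    # Wrong: each partition slices its own (unsorted) view first.
    sliced = [part[start:start + rows] for part in partitions]
    return sorted(v for part in sliced for v in part)[:rows]

# Hypothetical layout of the four records across two partitions.
partitions = [["B", "C", "A"], ["D"]]
assert paginate_after_sort(partitions, 0, 2) == ["A", "B"]
assert paginate_before_sort(partitions, 0, 2) == ["B", "C"]
```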


Thanks,
-- Buriwoy
_______________________________________________
riak-users mailing list
riak-users <at> lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Adam Leko | 18 Apr 18:22 2014

Riak HTTPS listener hangs

We have a 5-node Riak cluster and we're having problems keeping the HTTPS listener running properly. The problem typically manifests itself a few hours after Riak is started. When it happens, the HTTPS listener on a Riak node will accept new connections but will never respond to them. Connections made via curl or OpenSSL's s_client show the client sending the SSL hello but never getting a response. When this happens, the OS does show pending data for the socket that isn't being processed (trimmed output):

# ss -lt
State      Recv-Q Send-Q    Local Address:Port    Peer Address:Port  
LISTEN     129    128        1.2.3.4:8098         *:*       

One of the times the Erlang VM was in this state I grabbed a crash dump via SIGUSR1. The Mochiweb process shows up in a "Waiting" state:

=proc:<0.190.0>
State: Waiting
Name: 'https_1.2.3.4:8098_mochiweb'
Spawned as: proc_lib:init_p/5
Spawned by: <0.150.0>
Started: Thu Apr 17 21:01:12 2014
Message queue length: 0
Number of heap fragments: 0
Heap fragment data: 0
Link list: [#Port<0.3873>, <0.4203.0>, <0.150.0>, <0.5788.48>, <0.12559.49>, <0.4819.45>, <0.17031.51>, <0.19186.51>, <0.18428.51>, <0.25106.51>, <0.20568.51>, <0.16399.51>, <0.25307.51>, <0.25382.51>, <0.31884.51>, <0.30289.51>, <0.29247.51>, <0.25168.51>]
Reductions: 50203
Stack+heap: 1597
OldHeap: 0
Heap unused: 495
OldHeap unused: 0
Program counter: 0x00007fb03fea4de8 (gen_server:loop/6 + 264)
CP: 0x0000000000000000 (invalid)
arity = 0

All the processes linked from the main Mochiweb process are also in a "Waiting" state. If I connect to the riak console and manually kill the mochiweb process (via exit(pid(…), kill)), its supervisor restarts it and the node starts servicing HTTPS requests again.

We do have the Erlang cluster behind haproxy, but the SSL connections hang even if you try to connect locally from the machine running the Riak service. We're using a lightly modified version of the config suggested in the docs (http://docs.basho.com/riak/1.3.1/cookbooks/Load-Balancing-and-Proxy-Configuration/) with a much lower max connections setting. When the hangs happen, netstat shows only a handful of open connections to the haproxy front end.

It's also worth pointing out that when the hangs happen, no messages show up in the log files indicating any errors. The rest of the services on the Riak node don't appear to be affected either - we still get periodic anti-entropy exchange log messages, and all the usual suspects in riak-admin status check out.

We are using a pretty standard OS configuration - Ubuntu 12.04 LTS with the Basho apt repo, riak 1.4.8-1, erts-5.9.1 that comes bundled with the Riak packages.

Are there any known issues with accessing Riak over its HTTPS interface or any known problems with erts' SSL implementation? As of now we're forced to use periodic rolling restarts on the nodes in our production cluster to keep the HTTPS listeners functional, which is a pretty disgusting workaround.
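Until the root cause is found, a monitoring probe that distinguishes "TCP accepts but TLS never answers" from a healthy listener can at least drive the restart automatically instead of on a timer. A standard-library sketch (host and port are placeholders; a hung listener accepts the connection but never answers the ClientHello, so the handshake times out and the probe reports the listener dead):

```python
import socket
import ssl

def https_listener_alive(host, port, timeout=5.0):
    """Return True only if the listener completes a TLS handshake in time."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # probing by IP, cert doesn't matter
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            # wrap_socket performs the handshake; a silent listener
            # makes this time out, which we treat as "dead".
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (OSError, ssl.SSLError):
        return False
```

A cron job or monitoring hook could call this and trigger the supervisor-restart workaround only when the probe actually fails.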

Thanks for taking the time to read this. I'd appreciate any insight or guidance on how to address/track down this problem.

-Adam Leko
chuang | 18 Apr 10:06 2014

Maybe a bug of riak_core?

In the source of riak_core_gossip (version 1.4.8) there is the following function clause:

451 attempt_simple_transfer(Seed, Ring, [{_, N}|Rest], TargetN, Exit, Idx, Last) ->
452     %% just keep track of seeing this node
453     attempt_simple_transfer(Seed, Ring, Rest, TargetN, Exit, Idx+1,
454                             lists:keyreplace(N, 1, Last, {N, Idx}));

I think the clause head
451 attempt_simple_transfer(Seed, Ring, [{_, N}|Rest], TargetN, Exit, Idx, Last) ->

should be:
451 attempt_simple_transfer(Seed, Ring, [{N, _}|Rest], TargetN, Exit, Idx, Last) ->

since on line 454 the lists:keyreplace call matches the key at position 1.
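For readers less familiar with Erlang: lists:keyreplace(Key, N, TupleList, NewTuple) replaces the first tuple whose N-th element equals Key, and silently returns the list unchanged when nothing matches. So whether this clause is a bug depends on whether the list being walked holds {Idx, Node} or {Node, Idx} pairs relative to what Last holds. A minimal Python model of the semantics makes the two cases concrete (tuple contents are illustrative):

```python
def keyreplace(key, pos, tuples, replacement):
    """Minimal model of Erlang's lists:keyreplace/4 (pos is 1-based)."""
    out = list(tuples)
    for i, t in enumerate(out):
        if t[pos - 1] == key:
            out[i] = replacement
            return out
    return out  # no match: list returned unchanged, no error raised

# Suppose Last holds {Node, Idx} pairs, i.e. the node name is element 1.
last = [("n1", 0), ("n2", 3)]

# If the bound variable is actually an index (element 2 of the other
# list), keyreplace on position 1 silently matches nothing:
assert keyreplace(3, 1, last, (3, 7)) == last

# If the bound variable is the node name, the update takes effect:
assert keyreplace("n2", 1, last, ("n2", 7)) == [("n1", 0), ("n2", 7)]
```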
Alexander Popov | 17 Apr 11:29 2014

Riak.filterNotFound and Riak.reduceSort in the reduce phase dramatically decrease performance

query1.sh:
=====
curl -w %{time_connect}:%{time_starttransfer}:%{time_total}   -X POST \
-H "content-type: application/json" \
-d @- \
<<EOF
{"inputs": {
                       "bucket": "timeline",
                       "index": "owner_bin",
                       "key": "6d87f18a3dca4a60b0fc385b1f46c165"
                   },
"query": [
 {"map": {"language": "javascript", "name": "Riak.mapValuesJson" }},
{"reduce": {"language": "javascript", "name": "Riak.filterNotFound" }},
{"reduce": {"language": "javascript", "name": "Riak.reduceSlice",  "arg":[0,5] }}
            ]}
EOF
======
=> 0.005:1.468:1.468


query2.sh:
========
curl -w %{time_connect}:%{time_starttransfer}:%{time_total} -X POST \
-H "content-type: application/json" \
-d @- \
<<EOF
{"inputs": {
                       "bucket": "timeline",
                       "index": "owner_bin",
                       "key": "6d87f18a3dca4a60b0fc385b1f46c165"
                   },
"query": [
 {"map": {"language": "javascript", "name": "Riak.mapValuesJson" }},
 {"reduce": {"language": "javascript", "name": "Riak.reduceSort", "arg": "function(a,b){ return a.event_timestamp-b.event_timestamp }" }},
{"reduce": {"language": "javascript", "name": "Riak.reduceSlice",  "arg":[0,5] }}
            ]}
EOF
========
=> 0.005:1.439:1.439



query3.sh:
========
curl -w %{time_connect}:%{time_starttransfer}:%{time_total} -X POST \
-H "content-type: application/json" \
-d @- \
<<EOF
{"inputs": {
                       "bucket": "timeline",
                       "index": "owner_bin",
                       "key": "6d87f18a3dca4a60b0fc385b1f46c165"
                   },
"query": [
 {"map": {"language": "javascript", "name": "Riak.mapValuesJson" }},
{"reduce": {"language": "javascript", "name": "Riak.reduceSlice",  "arg":[0,5] }}
            ]}
EOF
========
=> 0.005:0.218:0.218

Even this is not very fast, because the query is currently very simple; if I add more logic, or the users' data grows, everything will get much slower.



Total number of keys in bucket: 3703
Keys matched with index: 299 
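One plausible reading of the numbers: each additional JavaScript reduce phase has to marshal the accumulated values into the JS VM, and reduce phases can be re-invoked on partial results, so phase count and serialization dominate the cost rather than the 299 values themselves. Logically, the three built-ins do no more than the following pure-Python models (based on their documented behavior, not Riak's JS source), which is why doing the filter/sort/slice client-side after a plain 2i fetch may well be faster:

```python
def filter_not_found(values):
    # Model of Riak.filterNotFound: drop {"not_found": ...} markers.
    return [v for v in values
            if not (isinstance(v, dict) and "not_found" in v)]

def reduce_sort(values, key):
    # Model of Riak.reduceSort with a comparator on event_timestamp.
    return sorted(values, key=key)

def reduce_slice(values, bounds):
    # Model of Riak.reduceSlice with arg [start, end].
    start, end = bounds
    return values[start:end]

events = [{"event_timestamp": t} for t in (5, 1, 4, 2, 3)]
events.append({"not_found": {"bucket": "timeline"}})  # simulated tombstone
page = reduce_slice(
    reduce_sort(filter_not_found(events), key=lambda v: v["event_timestamp"]),
    (0, 5),
)
assert [v["event_timestamp"] for v in page] == [1, 2, 3, 4, 5]
```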
 
Environment: cluster of 3 ec2 c3.xlarge instances(debian).
storage_backend : riak_kv_eleveldb_backend
erlydtl_version : <<"0.7.0">>
riak_control_version : <<"1.4.4-0-g9a74e57">>
cluster_info_version : <<"1.2.4">>
riak_search_version : <<"1.4.8-0-gbe6e4ed">>
merge_index_version : <<"1.3.2-0-gcb38ee7">>
riak_kv_version : <<"1.4.8-0-g7545390">>
sidejob_version : <<"0.2.0">>
riak_api_version : <<"1.4.4-0-g395e6fd">>
riak_pipe_version : <<"1.4.4-0-g7f390f3">>
riak_core_version : <<"1.4.4">>
bitcask_version : <<"1.6.6-0-g230b6d6">>
basho_stats_version : <<"1.0.3">>
webmachine_version : <<"1.10.4-0-gfcff795">>
mochiweb_version : <<"1.5.1p6">>
inets_version : <<"5.9">>
erlang_js_version : <<"1.2.2">>
runtime_tools_version : <<"1.8.8">>
os_mon_version : <<"2.2.9">>
riak_sysmon_version : <<"1.1.3">>
ssl_version : <<"5.0.1">>
public_key_version : <<"0.15">>
crypto_version : <<"2.1">>
sasl_version : <<"2.2.1">>
lager_version : <<"2.0.1">>
goldrush_version : <<"0.1.5">>
compiler_version : <<"4.8.1">>
syntax_tools_version : <<"1.6.8">>
stdlib_version : <<"1.18.1">>
kernel_version : <<"2.15.1">>






Jason Wang | 16 Apr 01:34 2014

Socket returned short packet length 0 - expected 4

Hi all,

In production, we are experiencing "Socket returned short packet length 0 - expected 4" exceptions whenever we try to store an object larger than 20K. In addition, the exception typically takes over 60 seconds to manifest. The content of each object is a bytearray.

Any idea what could be causing this exception? 

Other details: 
Library: Python
Version:  riak==2.0.1, riak-pb==1.4.1.1
Protocol: pbc
Steps to reproduce: N/A. This only happens in production, not on dev machines.
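For context on what the error means (a sketch of the framing, not the client's actual code): the Riak PB protocol prefixes every message with a 4-byte big-endian length. "Short packet length 0 - expected 4" is what you get when a read returns 0 bytes, i.e., the peer closed the connection while the client was waiting for that header. Combined with the ~60-second delay, an idle or request timeout on a load balancer or firewall sitting between the production clients and the cluster is worth ruling out first.

```python
import struct
from io import BytesIO

def read_pb_frame(stream):
    """Read one length-prefixed frame: 4-byte big-endian length, then payload."""
    header = stream.read(4)
    if len(header) != 4:
        # A connection closed mid-wait yields a short header; this is the
        # condition reported as "short packet length 0 - expected 4".
        raise IOError(
            "Socket returned short packet length %d - expected 4" % len(header))
    (length,) = struct.unpack("!I", header)
    return stream.read(length)

# A well-formed frame parses; an empty stream (closed socket) raises.
assert read_pb_frame(BytesIO(struct.pack("!I", 3) + b"abc")) == b"abc"
```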

Thanks in advance,

Jason
Oleksiy Krivoshey | 16 Apr 21:03 2014

ykGetIndex: Encountered unknown message tag

Hi!

I'm trying to update my Yokozuna code (JavaScript, protocol buffers), which worked with pre11, to work with pre20, and I'm getting the following response when issuing RpbYokozunaIndexGetReq:

error: undefined
reply: { index: [ [Error: Encountered unknown message tag] ] }

The first problem is that an error is returned in place of 'index'; the second is the error itself. What does it mean?
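Hedged guess at the meaning: "unknown message tag" usually indicates the protobuf decoder hit a field number that isn't in the .proto schema it was compiled from, i.e., the client's riak_pb message definitions are older than what the pre20 server sends. Regenerating the client messages from the pre20 riak_pb would be the first thing to try. For reference, the "tag" is just the field-number half of each protobuf key byte:

```python
def decode_protobuf_key(byte):
    """Split a protobuf key byte into (field_number, wire_type).

    A decoder compiled from an older .proto reports an unknown tag when
    field_number is absent from its schema.
    """
    return byte >> 3, byte & 0x07

# 0x0A is the classic "field 1, length-delimited" key byte.
assert decode_protobuf_key(0x0A) == (1, 2)
```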


--
Oleksiy
Tom Santero | 16 Apr 18:51 2014

[ANN] Riak 2.0.0beta1

Hello Riak Users,

The Basho Team is happy to release Riak 2.0.0-beta1. The source tarball and pre-built packages are available for download [0]. Note that we've added support for FreeBSD 10, and removed support for SmartOS 1.6.

By calling this version "beta" we consider the release feature-frozen, as opposed to the previous pre-release builds, which were still moving targets. Like the pre-releases, 2.0.0beta1 is not intended for production use and may contain bugs. Over the coming weeks the Basho engineering team will continue testing Riak rigorously to identify and squash any bugs that may have been introduced along with the many features we're excited to deliver.

As always, feedback from the community is invaluable. We strongly encourage you to download beta1, test it for yourself and report any issues you encounter via the mailing list or on GitHub. 

You'll find a getting-started guide for Riak 2.0 in the docs [1]. If you're completely new to 2.0 and the features we've added--such as bucket types, Solr, strong consistency and data types--you'll find links to the relevant documentation, supplementary reading materials and conference videos there.

Also note that as we head toward a GA release of Riak 2.0 the docs site is being updated on an almost-daily basis. Some features may not yet be fully documented. If you come across anything in the documentation that is incorrect, or even poorly worded, please feel free to open an issue or submit a pull request [2].

Regards,
Tom

Oleksiy Krivoshey | 16 Apr 15:52 2014

Bitcask: Hintfile is invalid

Hi!

I'm getting quite a lot of errors like this:
2014-04-16 16:45:46.838 [error] <0.2110.0> Hintfile './data/fs_chunks/1370157784997721485815954530671515330927436759040/3.bitcask.hint' invalid

running riak_2.0.0_pre20

What could be the reason, and does it mean my data is corrupted?

Allen Landsidel | 15 Apr 18:28 2014

Re: reversing node removal?

Luke,

I already use nagios for that, but the disk space was fine before I 
told one of the nodes to leave the cluster.  That's my problem -- there 
was not enough free space in the cluster for it to move all that node's 
data.  It accepted the leave and then ran me out of disk space on all 
the other nodes, with no way to abort or recover.

My only option was to add more space to the other nodes (as you said, 
adding new nodes will not work until the leave is done), which is easy 
enough in a virtualized environment but requires downtime.  In a bare 
metal environment, it could be catastrophic to the cluster.

On 4/15/2014 12:19, Luke Bakken wrote:
> Hi Allen,
>
> Cluster leave does not check for disk space and in general, Riak is not
> aware of how much space it has available to itself (most db systems
> don't monitor disk space I think). I'll send a note to product
> management about this. We recommend using a monitoring solution (like
> collectd + graphite) to keep an eye on available disk space.
>
>
> --
> Luke Bakken
> CSE
> lbakken <at> basho.com <mailto:lbakken <at> basho.com>
>
>
> On Mon, Apr 14, 2014 at 10:12 AM, Allen Landsidel
> <landsidel.allen <at> gmail.com <mailto:landsidel.allen <at> gmail.com>> wrote:
>
>     Luke,
>
>     As I said in the private email, I ended up doing just that.  The
>     cluster is virtualized (I am aware of the potential performance
>     issues) so I just shut it all down, grew the drive allocated to
>     riak's data dir, and brought them back up.  The extra space (or
>     something?) caused them to start going heavily into swap, killing
>     performance, so I shut down again and gave them more memory.
>
>     For now though the cluster remains off.  While it was on, our SAN
>     performance was getting murdered.  I'm having problems with one of
>     the arrays and I'm dealing with that right now; when it's fixed, I
>     can go back to figuring out how to fix the issue with the riak
>     cluster.  I don't know right now if it was riak or the array issues
>     that killed the SAN performance.
>
>     I do have a few more questions though.
>
>     1. Is the cluster leave supposed to check that the remaining nodes
>     in the cluster have enough space to move all the data to?  If not,
>     that's something that would be nice to have in a future version.
>
>     2. Can I tell it through the config files which filesystem(s) to
>     check for available space?  Being FreeBSD, I have the normal mounts
>     (/, /usr, /var, /tmp) as well as one dedicated to riak data.  If
>     it's just checking the space on the server as a whole, it will get a
>     false sense of how much space is available for it.
>
>
>     On 4/14/2014 12:28, Luke Bakken wrote:
>
>         Hi Allen,
>
>         There's no way to abort a cluster operation that is in progress. In
>         addition, data won't transfer to the node you added until the
>         previous
>         cluster transition completes.
>
>         Is it possible to add disk space to your three running nodes?
>         --
>         Luke Bakken
>         CSE
>         lbakken <at> basho.com <mailto:lbakken <at> basho.com>
>
>
>         On Fri, Apr 11, 2014 at 4:48 AM, Allen Landsidel
>         <landsidel.allen <at> gmail.com <mailto:landsidel.allen <at> gmail.com>>
>         wrote:
>
>             I have a 5-node cluster (riak 1.4.0, freebsd9) that is being
>             used in
>             production and miscalculated the disk space being used by
>             the cluster as a
>             whole.  Yesterday I told the cluster to remove two nodes,
>             leaving just
>             three, but I need four active to cover the usage.
>
>             One node left successfully before I became aware of the
>             problem, and disk
>             filled up completely on the other three.  I added the one
>             that left back to
>             the cluster, but data is not being moved to it.
>
>             Is there any way to 'abort' the cluster leave issued to the
>             node that is
>             still trying to leave, or some other way to straighten this
>             out without
>             losing (much) data?
>
>
>
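On question 2 above: there is no such config knob that I know of, but a preflight check is easy to script per node, and Python's shutil.disk_usage (statvfs underneath) reports the filesystem that a given path lives on, which handles the FreeBSD multiple-mounts concern directly. A rough sketch, assuming you can run it on (or ssh to) each surviving node; the function and its arguments are illustrative, not a Riak feature:

```python
import shutil

def can_absorb(data_dirs, bytes_to_move, headroom=0.8):
    """Rough preflight: could the surviving nodes absorb `bytes_to_move`
    without exceeding `headroom` utilization?

    `data_dirs` maps node name -> path of that node's Riak data directory;
    disk_usage() measures the filesystem that path is mounted on, not the
    whole server, so /, /usr, /var etc. don't skew the answer.
    """
    free = sum(shutil.disk_usage(d).free for d in data_dirs.values())
    total = sum(shutil.disk_usage(d).total for d in data_dirs.values())
    used = total - free
    return used + bytes_to_move <= headroom * total
```

Running something like this before issuing a cluster leave would have flagged the situation in this thread before any transfers started.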
Web Developer | 15 Apr 15:05 2014

Batch Insert

Hello,

How can we do a batch insert using Riak and the PHP client?
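As far as I know Riak has no server-side bulk-write API in any client, PHP included, so a "batch insert" is a client-side loop over store operations, optionally chunked (and parallelized) to bound memory and let you report progress. Sketching the pattern in Python with a stand-in bucket object; the PHP client's store calls map onto it one-for-one:

```python
def chunks(items, size):
    """Yield successive lists of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def batch_insert(bucket, pairs, chunk_size=100):
    """Store (key, value) pairs in chunks.

    `bucket` is any object with a store(key, value) method -- a stand-in
    for a real client bucket, since Riak itself offers no bulk write.
    Returns the number of objects stored.
    """
    stored = 0
    for chunk in chunks(pairs, chunk_size):
        for key, value in chunk:
            bucket.store(key, value)
            stored += 1
    return stored
```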

Thanks in advance
Jie Lu | 15 Apr 04:10 2014

Fwd: Performance slows down with write heavy use


I also have a problem with a performance test of a Riak cluster.
~~~~~~~~~~~~
Riak version: 1.4.7
OS: openSUSE 11.3
RAM: 4G
ring size is 64
backend: leveldb
Nodes in cluster: 6 nodes

~~~~~~~~~~~~~

I write key/value pairs with 1 KB values, using 25 concurrent threads on one client node. The test shows only about 20 ops/s.

Is there any performance benchmark to compare with?






On Tue, Apr 15, 2014 at 6:26 AM, Luke Bakken <lbakken <at> basho.com> wrote:
Hi Matthew -

Some suggestions:

* Upgrade to Riak 1.4.8

* Test with a ring size of 64

* Use staggered merge windows in your cluster (http://docs.basho.com/riak/latest/ops/advanced/backends/bitcask/)

* Since you're on dedicated hardware RAID, use the noop scheduler for your Riak data volumes:

cat /sys/block/sd*/queue/scheduler
noop anticipatory deadline [cfq]

* Increase +zdbbl in /etc/riak/vm.args to 96000
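On the scheduler suggestion: the bracketed name in /sys/block/<dev>/queue/scheduler is the one currently active, so the sample output above shows cfq active rather than noop. A tiny check along these lines (a sketch; feed it each device's scheduler line) makes the audit scriptable:

```python
import re

def active_scheduler(line):
    """Return the I/O scheduler marked active (bracketed) in a
    /sys/block/<dev>/queue/scheduler line, or None if none is marked."""
    m = re.search(r"\[(\w+)\]", line)
    return m.group(1) if m else None

assert active_scheduler("noop anticipatory deadline [cfq]") == "cfq"
```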

Thanks
--
Luke Bakken
CSE
lbakken <at> basho.com


On Mon, Apr 14, 2014 at 2:33 PM, Matthew MacClary <macclary <at> lifetime.oregonstate.edu> wrote:
I have a persistent issue I am trying to diagnose. In our use of Riak we have multiple data creators writing into a 7 node cluster. The value size is a bit large at around 2MB. The behavior I am seeing is that if I delete all data out of bitcask, then test performance I get fast writes. As I keep doing the same work of writing to the cluster, then the Riak write times will start tailing off and getting really bad.

Initial write times seen by my application: 0.5 seconds for 100MB worth of values (~200MB/s)
Subsequent write times: 11 seconds for 100MB worth of values (~9MB/s)

This slow down can happen over roughly 20-40 minutes of writing or about 200GB worth of key/value pairs written.

I can reset the cluster to get fast performance again by stopping Riak and deleting the bitcask directories, then starting Riak again. This step is not feasible for production, but during testing at least the write speed goes up by 20x.

Watching iostat I see that every few seconds the disk I/O jumps to ~11%. It doesn't seem that heavily loaded from my cursory look. Watching top I see that beam.smp runs at around 100% CPU or less when heavily loaded. I am not sure how to tell what it is doing though :-)

Thanks for any suggestions!!

-Matt



================
System Description


avg value size = 2MB
Riak version = 1.4.1
n_val = 2
client threads total = 105
backend = bitcask
ring_creation_size = 128
node count = 7
node OS = RHEL 6.2
server RAM = 128GB
RAID = RAID0 across 8 SAS drives
FS = ext4
FS options = /dev/mapper/vg0-lv0 / ext4 rw,noatime,barrier=0,stripe=512,data=ordered 0 0
bitcask size on one server = 133GB
AAE = off
interface = protobuf
client library = riak java client
file-max = 65536








--
Best Regards.
Lu Jie
