Mohammad Kermani | 30 May 13:41 2016

Cassandra data modeling for a social network

We are using Cassandra for our social network, and we are designing/data modeling the tables we need. It is confusing for us: we don't know how to design some of the tables, and we have a few small problems.

As we understand it, in Cassandra we have to have a different table for every query. For example, suppose user A is following users B and C.

Now, in Cassandra we have a table called posts_by_user:

user_id | post_id | text | created_on | deleted | view_count | likes_count | comments_count | user_full_name

We also have a table driven by users' followers: for each follower we insert the post's info into a table called user_timeline, so that when a follower visits the first web page we read their feed from the user_timeline table.

And here is the user_timeline table:
follower_id | post_id | user_id (who posted) | likes_count | comments_count | location_name | user_full_name
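
In CQL terms, the two tables look roughly like this (a sketch only; the key and type choices here are illustrative, not our exact DDL):

CREATE TABLE posts_by_user (
    user_id uuid,
    post_id timeuuid,
    text text,
    created_on timestamp,
    deleted boolean,
    view_count int,
    likes_count int,
    comments_count int,
    user_full_name text,
    PRIMARY KEY (user_id, post_id)
);

CREATE TABLE user_timeline (
    follower_id uuid,
    post_id timeuuid,
    user_id uuid,            -- who posted
    likes_count int,
    comments_count int,
    location_name text,
    user_full_name text,
    PRIMARY KEY (follower_id, post_id)
);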

First: is this data modeling correct for a follow-based (follower/following) social network?

Now we want to count the likes of a post. As you can see, we keep the number of likes in both tables (user_timeline, posts_by_user). Imagine one user has 1000 followers: on each like action we would have to update all 1000 rows in user_timeline plus 1 row in posts_by_user, and that is not reasonable!

So my second question is: how should it be done? That is, how should the like (favorite) table look?
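
For example, one commonly suggested shape (a sketch; the table and column names are hypothetical) is to store the count exactly once in a counter table keyed by post, instead of denormalizing it into every follower's timeline row:

CREATE TABLE likes_by_post (
    post_id timeuuid PRIMARY KEY,
    likes counter
);

-- one like action touches exactly one row, no matter how many followers
UPDATE likes_by_post SET likes = likes + 1 WHERE post_id = ?;

The timeline pages would then read the counter at display time rather than keeping likes_count in each user_timeline row.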

Thank you
I hope I can get an answer.
Bhuvan Rawal | 29 May 12:18 2016

Node Stuck while restarting

Hi,

We are running a 6-node cluster on DSC 3.0.3, spread across 2 DCs with 3 nodes each. One of the nodes was showing as UNREACHABLE in nodetool describecluster on the other nodes, and on that node all the others showed as UNREACHABLE, so as a measure we restarted the node.

But on doing that, the restart appears to be stuck, with these messages in system.log:

DEBUG [SlabPoolCleaner] 2016-05-29 14:07:28,156 ColumnFamilyStore.java:829 - Enqueuing flush of batches: 226784704 (11%) on-heap, 0 (0%) off-heap
DEBUG [main] 2016-05-29 14:07:28,576 CommitLogReplayer.java:415 - Replaying /commitlog/data/CommitLog-6-1464508993391.log (CL version 6, messaging version 10, compression null)
DEBUG [main] 2016-05-29 14:07:28,781 ColumnFamilyStore.java:829 - Enqueuing flush of batches: 207333510 (10%) on-heap, 0 (0%) off-heap

It is stuck at the MemtablePostFlush / MemtableFlushWriter stages with pending messages.
This has been their status in nodetool tpstats for a long time:

Pool Name            Active    Pending    Completed
MemtablePostFlush    1         52         16
MemtableFlushWriter  2         13         15


We restarted the node again with the log level set to TRACE, but in vain. What could be a possible contingency plan in such a scenario?

Best Regards,
Bhuvan

Anuj Wadehra | 28 May 21:24 2016

Evict Tombstones with STCS

Hi,

We are using C* 2.0.x. What options are available when the disk is too full to compact the huge sstables formed by STCS (created long ago but not getting compacted because min_compaction_threshold is 4)?

We suspect that a huge amount of space will be released when the 2 largest sstables get compacted together, such that tombstone eviction is possible. But there is not enough space to compact them together, assuming that the compaction would need at least free disk = size of sstable1 + size of sstable2.

I read the STCS code, and if no sstables are eligible for compaction together, it should pick individual sstables for compaction. But somehow the huge sstables are not participating in these single-sstable compactions. Is it due to the default 20% tombstone threshold? And if so, forceUserDefinedCompaction or setting unchecked_tombstone_compaction to true won't help either, as tombstones are less than 20% and not much disk would be recovered.
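
For reference, both of these knobs are per-table compaction subproperties. A sketch of what tuning them would look like (keyspace/table names hypothetical, values illustrative only):

ALTER TABLE myks.mytable WITH compaction = {
    'class': 'SizeTieredCompactionStrategy',
    'tombstone_threshold': '0.05',
    'unchecked_tombstone_compaction': 'true'
};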

It is not possible to add additional disks too.

We see a huge difference in disk utilization between nodes. Maybe some nodes were able to evict their tombstones while others didn't manage to.


Would be good to know more alternatives from community.


Thanks
Anuj

Max C | 28 May 03:55 2016

NPE during schema upgrade from 2.2.6 -> 3.0.6

Hi Everyone,

I’m getting a NullPointerException when I start up 3.0.6 for the first time with data from 2.2.6.  Any
ideas for how to fix this, or other troubleshooting strategies?  

This is just a single-node development box.  

Originally I tried upgrading from 2.1.13 to 3.0.6, but I ran into the same error; so I then went from 2.1.13 to
2.2.6, ran "nodetool upgradesstables" (which succeeded without issue) and then to 3.0.6.

Thanks for any assistance.  :-)

- Max

INFO  [main] 2016-05-27 16:02:01,169 ColumnFamilyStore.java:381 - Initializing system.peers
INFO  [main] 2016-05-27 16:02:01,178 ColumnFamilyStore.java:381 - Initializing system.peer_events
INFO  [main] 2016-05-27 16:02:01,182 ColumnFamilyStore.java:381 - Initializing system.range_xfers
INFO  [main] 2016-05-27 16:02:01,188 ColumnFamilyStore.java:381 - Initializing system.compaction_history
INFO  [main] 2016-05-27 16:02:01,200 ColumnFamilyStore.java:381 - Initializing system.sstable_activity
INFO  [main] 2016-05-27 16:02:01,210 ColumnFamilyStore.java:381 - Initializing system.size_estimates
INFO  [main] 2016-05-27 16:02:01,218 ColumnFamilyStore.java:381 - Initializing system.available_ranges
INFO  [main] 2016-05-27 16:02:01,223 ColumnFamilyStore.java:381 - Initializing system.views_builds_in_progress
INFO  [main] 2016-05-27 16:02:01,227 ColumnFamilyStore.java:381 - Initializing system.built_views
INFO  [main] 2016-05-27 16:02:01,230 ColumnFamilyStore.java:381 - Initializing system.hints
INFO  [main] 2016-05-27 16:02:01,234 ColumnFamilyStore.java:381 - Initializing system.batchlog
INFO  [main] 2016-05-27 16:02:01,238 ColumnFamilyStore.java:381 - Initializing system.schema_keyspaces
INFO  [main] 2016-05-27 16:02:01,245 ColumnFamilyStore.java:381 - Initializing system.schema_columnfamilies
INFO  [main] 2016-05-27 16:02:01,252 ColumnFamilyStore.java:381 - Initializing system.schema_columns
INFO  [main] 2016-05-27 16:02:01,260 ColumnFamilyStore.java:381 - Initializing system.schema_triggers
INFO  [main] 2016-05-27 16:02:01,268 ColumnFamilyStore.java:381 - Initializing system.schema_usertypes
INFO  [main] 2016-05-27 16:02:01,275 ColumnFamilyStore.java:381 - Initializing system.schema_functions
INFO  [main] 2016-05-27 16:02:01,282 ColumnFamilyStore.java:381 - Initializing system.schema_aggregates
INFO  [main] 2016-05-27 16:02:01,421 SystemKeyspace.java:1284 - Detected version upgrade from 2.2.6 to
3.0.6, snapshotting system keyspace
WARN  [main] 2016-05-27 16:02:01,711 CompressionParams.java:382 - The sstable_compression option has
been deprecated. You should use class instead
ERROR [main] 2016-05-27 16:02:01,833 CassandraDaemon.java:692 - Exception encountered during startup
java.lang.NullPointerException: null
    at org.apache.cassandra.utils.ByteBufferUtil.string(ByteBufferUtil.java:156) ~[apache-cassandra-3.0.6.jar:3.0.6]
    at
org.apache.cassandra.serializers.AbstractTextSerializer.deserialize(AbstractTextSerializer.java:41) ~[apache-cassandra-3.0.6.jar:3.0.6]
    at
org.apache.cassandra.serializers.AbstractTextSerializer.deserialize(AbstractTextSerializer.java:28) ~[apache-cassandra-3.0.6.jar:3.0.6]
    at org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:114) ~[apache-cassandra-3.0.6.jar:3.0.6]
    at org.apache.cassandra.cql3.UntypedResultSet$Row.getString(UntypedResultSet.java:267) ~[apache-cassandra-3.0.6.jar:3.0.6]
    at
org.apache.cassandra.schema.LegacySchemaMigrator.isEmptyCompactValueColumn(LegacySchemaMigrator.java:553) ~[apache-cassandra-3.0.6.jar:3.0.6]
    at
org.apache.cassandra.schema.LegacySchemaMigrator.createColumnsFromColumnRows(LegacySchemaMigrator.java:638) ~[apache-cassandra-3.0.6.jar:3.0.6]
    at
org.apache.cassandra.schema.LegacySchemaMigrator.decodeTableMetadata(LegacySchemaMigrator.java:316) ~[apache-cassandra-3.0.6.jar:3.0.6]
    at
org.apache.cassandra.schema.LegacySchemaMigrator.readTableMetadata(LegacySchemaMigrator.java:273) ~[apache-cassandra-3.0.6.jar:3.0.6]
    at
org.apache.cassandra.schema.LegacySchemaMigrator.readTable(LegacySchemaMigrator.java:244) ~[apache-cassandra-3.0.6.jar:3.0.6]
    at
org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readTables$243(LegacySchemaMigrator.java:237) ~[apache-cassandra-3.0.6.jar:3.0.6]
    at java.util.ArrayList.forEach(ArrayList.java:1249) ~[na:1.8.0_74]
    at
org.apache.cassandra.schema.LegacySchemaMigrator.readTables(LegacySchemaMigrator.java:237) ~[apache-cassandra-3.0.6.jar:3.0.6]
    at
org.apache.cassandra.schema.LegacySchemaMigrator.readKeyspace(LegacySchemaMigrator.java:186) ~[apache-cassandra-3.0.6.jar:3.0.6]
    at
org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readSchema$240(LegacySchemaMigrator.java:177) ~[apache-cassandra-3.0.6.jar:3.0.6]
    at java.util.ArrayList.forEach(ArrayList.java:1249) ~[na:1.8.0_74]
    at
org.apache.cassandra.schema.LegacySchemaMigrator.readSchema(LegacySchemaMigrator.java:177) ~[apache-cassandra-3.0.6.jar:3.0.6]
    at org.apache.cassandra.schema.LegacySchemaMigrator.migrate(LegacySchemaMigrator.java:77) ~[apache-cassandra-3.0.6.jar:3.0.6]
    at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:223) [apache-cassandra-3.0.6.jar:3.0.6]
    at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551) [apache-cassandra-3.0.6.jar:3.0.6]
    at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679) [apache-cassandra-3.0.6.jar:3.0.6]

Drew Davis | 27 May 23:30 2016

C* working with Open JDK but not Oracle JDK

I am running C* 2.1.4 on an Ubuntu Vagrant VM. When I use OpenJDK 8, C* works fine. However, using Oracle JDK 8, I get the errors listed below. I have read that it is not recommended to use OpenJDK, so I would like to be able to use the Oracle JDK. I am stumped on this issue and haven't been able to find much help online. If anyone could point me in the right direction, it would help a lot.

vagrant <at> vagrant-ubuntu-wily-64:/specx$ rake cassandra:create
/usr/local/rvm/gems/ruby-2.2.1/gems/activesupport-4.1.0/lib/active_support/values/time_zone.rb:285: warning: circular argument reference - now
/usr/local/rvm/gems/ruby-2.2.1/gems/activerecord-4.1.0/lib/active_record/associations/has_many_association.rb:74: warning: circular argument reference - reflection
/usr/local/rvm/gems/ruby-2.2.1/gems/activerecord-4.1.0/lib/active_record/associations/has_many_association.rb:78: warning: circular argument reference - reflection
/usr/local/rvm/gems/ruby-2.2.1/gems/activerecord-4.1.0/lib/active_record/associations/has_many_association.rb:82: warning: circular argument reference - reflection
/usr/local/rvm/gems/ruby-2.2.1/gems/activerecord-4.1.0/lib/active_record/associations/has_many_association.rb:101: warning: circular argument reference - reflection
Keyspace 'specx_dev' does not exist : Keyspace 'specx_dev' does not exist
rake aborted!
CassandraMigrations::Errors::UnexistingKeyspaceError: Keyspace specx_dev does not exist. Run rake cassandra:create. 
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/cassandra/keyspace_operations.rb:30:in `rescue in drop_keyspace!'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/cassandra/keyspace_operations.rb:27:in `drop_keyspace!'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/cassandra/keyspace_operations.rb:20:in `rescue in create_keyspace!'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/cassandra/keyspace_operations.rb:10:in `create_keyspace!'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/railtie/tasks.rake:15:in `rescue in block (2 levels) in <top (required)>'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/railtie/tasks.rake:11:in `block (2 levels) in <top (required)>'
/usr/local/rvm/gems/ruby-2.2.1/bin/ruby_executable_hooks:15:in `eval'
/usr/local/rvm/gems/ruby-2.2.1/bin/ruby_executable_hooks:15:in `<main>'
Cassandra::Errors::ConfigurationError: Cannot drop non existing keyspace 'specx_dev'.
/usr/local/rvm/gems/ruby-2.2.1/gems/cassandra-driver-2.1.4/lib/cassandra/future.rb:570:in `get'
/usr/local/rvm/gems/ruby-2.2.1/gems/cassandra-driver-2.1.4/lib/cassandra/future.rb:363:in `get'
/usr/local/rvm/gems/ruby-2.2.1/gems/cassandra-driver-2.1.4/lib/cassandra/session.rb:118:in `execute'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/cql-rb-wrapper.rb:69:in `execute'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/cassandra.rb:63:in `execute'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/cassandra/keyspace_operations.rb:28:in `drop_keyspace!'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/cassandra/keyspace_operations.rb:20:in `rescue in create_keyspace!'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/cassandra/keyspace_operations.rb:10:in `create_keyspace!'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/railtie/tasks.rake:15:in `rescue in block (2 levels) in <top (required)>'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/railtie/tasks.rake:11:in `block (2 levels) in <top (required)>'
/usr/local/rvm/gems/ruby-2.2.1/bin/ruby_executable_hooks:15:in `eval'
/usr/local/rvm/gems/ruby-2.2.1/bin/ruby_executable_hooks:15:in `<main>'
Cassandra::Errors::NoHostsAvailable: All attempted hosts failed: 127.0.0.1 (Cassandra::Errors::ServerError: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.AssertionError)
/usr/local/rvm/gems/ruby-2.2.1/gems/cassandra-driver-2.1.4/lib/cassandra/future.rb:570:in `get'
/usr/local/rvm/gems/ruby-2.2.1/gems/cassandra-driver-2.1.4/lib/cassandra/future.rb:363:in `get'
/usr/local/rvm/gems/ruby-2.2.1/gems/cassandra-driver-2.1.4/lib/cassandra/session.rb:118:in `execute'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/cql-rb-wrapper.rb:69:in `execute'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/cassandra.rb:63:in `execute'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/cassandra/keyspace_operations.rb:11:in `create_keyspace!'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/railtie/tasks.rake:15:in `rescue in block (2 levels) in <top (required)>'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/railtie/tasks.rake:11:in `block (2 levels) in <top (required)>'
/usr/local/rvm/gems/ruby-2.2.1/bin/ruby_executable_hooks:15:in `eval'
/usr/local/rvm/gems/ruby-2.2.1/bin/ruby_executable_hooks:15:in `<main>'
CassandraMigrations::Errors::UnexistingKeyspaceError: Keyspace specx_dev does not exist. Run rake cassandra:create. 
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/cassandra.rb:56:in `rescue in use'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/cassandra.rb:52:in `use'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/cassandra.rb:20:in `start!'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/railtie/tasks.rake:12:in `block (2 levels) in <top (required)>'
/usr/local/rvm/gems/ruby-2.2.1/bin/ruby_executable_hooks:15:in `eval'
/usr/local/rvm/gems/ruby-2.2.1/bin/ruby_executable_hooks:15:in `<main>'
Cassandra::Errors::InvalidError: Keyspace 'specx_dev' does not exist
/usr/local/rvm/gems/ruby-2.2.1/gems/cassandra-driver-2.1.4/lib/cassandra/future.rb:570:in `get'
/usr/local/rvm/gems/ruby-2.2.1/gems/cassandra-driver-2.1.4/lib/cassandra/future.rb:363:in `get'
/usr/local/rvm/gems/ruby-2.2.1/gems/cassandra-driver-2.1.4/lib/cassandra/cluster.rb:204:in `connect'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/cql-rb-wrapper.rb:51:in `use'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/cassandra.rb:53:in `use'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/cassandra.rb:20:in `start!'
/usr/local/rvm/gems/ruby-2.2.1/bundler/gems/cassandra_migrations-f5f35feeb972/lib/cassandra_migrations/railtie/tasks.rake:12:in `block (2 levels) in <top (required)>'
/usr/local/rvm/gems/ruby-2.2.1/bin/ruby_executable_hooks:15:in `eval'
/usr/local/rvm/gems/ruby-2.2.1/bin/ruby_executable_hooks:15:in `<main>'
Tasks: TOP => cassandra:create
(See full trace by running task with --trace)
vagrant <at> vagrant-ubuntu-wily-64:/specx
Anshu Vajpayee | 27 May 21:52 2016

Per node limit for Disk Space

Hi All,
I have a question regarding the maximum disk space limit on a node.

As per DataStax, we can have at most 1 TB of disk space for rotational disks and up to 5 TB for SSDs on a node.

Could you please suggest, based on your experience, what the limit for disk space on a single node would be without causing too much stress on the node?

Thanks,


Paulo Motta | 27 May 16:56 2016

Re: Error while rebuilding a node: Stream failed

I'm afraid raising streaming_socket_timeout_in_ms won't help much in this case, because the incoming connection on the source node is timing out at the network layer, while streaming_socket_timeout_in_ms controls the socket timeout at the application layer and throws SocketTimeoutException (not java.io.IOException: Connection timed out). So you should probably use more aggressive TCP keep-alive settings (net.ipv4.tcp_keepalive_*) on both hosts; did you try tuning those? Even that might not be sufficient, as some routers tend to ignore TCP keep-alives and just kill idle connections.
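
For example, a sketch of more aggressive keep-alive settings (the values are illustrative only; tune them for your network):

# send a probe after 60s idle, then every 10s, drop the connection after 5 failed probes
sudo sysctl -w net.ipv4.tcp_keepalive_time=60
sudo sysctl -w net.ipv4.tcp_keepalive_intvl=10
sudo sysctl -w net.ipv4.tcp_keepalive_probes=5

(Persist them in /etc/sysctl.conf so they survive reboots.)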

As said before, this will ultimately be fixed by adding keep-alive to the app layer on CASSANDRA-11841. If tuning tcp keep-alives does not help, one extreme approach would be to backport this to 2.1 (unless some experienced operator out there has a more creative approach).

<at> eevans, I'm not sure he is using a mixed-version cluster; it seems he finished the upgrade from 2.1.13 to 2.1.14 before performing the rebuild.

2016-05-27 11:39 GMT-03:00 Eric Evans <john.eric.evans <at> gmail.com>:
From the various stacktraces in this thread, it's obvious you are
mixing versions 2.1.13 and 2.1.14.  Topology changes like this aren't
supported with mixed Cassandra versions.  Sometimes it will work,
sometimes it won't (and it will definitely not work in this instance).

You should either upgrade your 2.1.13 nodes to 2.1.14 first, or add
the new nodes using 2.1.13, and upgrade after.

On Fri, May 27, 2016 at 8:41 AM, George Sigletos <sigletos <at> textkernel.nl> wrote:

>>>> ERROR [STREAM-IN-/192.168.1.141] 2016-05-26 09:08:05,027
>>>> StreamSession.java:505 - [Stream #74c57bc0-231a-11e6-a698-1b05ac77baf9]
>>>> Streaming error occurred
>>>> java.lang.RuntimeException: Outgoing stream handler has been closed
>>>>         at
>>>> org.apache.cassandra.streaming.ConnectionHandler.sendMessage(ConnectionHandler.java:138)
>>>> ~[apache-cassandra-2.1.14.jar:2.1.14]
>>>>         at
>>>> org.apache.cassandra.streaming.StreamSession.receive(StreamSession.java:568)
>>>> ~[apache-cassandra-2.1.14.jar:2.1.14]
>>>>         at
>>>> org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:457)
>>>> ~[apache-cassandra-2.1.14.jar:2.1.14]
>>>>         at
>>>> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:263)
>>>> ~[apache-cassandra-2.1.14.jar:2.1.14]
>>>>         at java.lang.Thread.run(Unknown Source) [na:1.7.0_79]
>>>>
>>>> And this is from the source node:
>>>>
>>>> ERROR [STREAM-OUT-/172.31.22.104] 2016-05-26 11:08:05,097
>>>> StreamSession.java:505 - [Stream #74c57bc0-231a-11e6-a698-1b05ac77baf9]
>>>> Streaming error occurred
>>>> java.io.IOException: Broken pipe
>>>>         at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
>>>> ~[na:1.7.0_79]
>>>>         at sun.nio.ch.FileChannelImpl.transferToDirectly(Unknown Source)
>>>> ~[na:1.7.0_79]
>>>>         at sun.nio.ch.FileChannelImpl.transferTo(Unknown Source)
>>>> ~[na:1.7.0_79]
>>>>         at
>>>> org.apache.cassandra.streaming.compress.CompressedStreamWriter.write(CompressedStreamWriter.java:84)
>>>> ~[apache-cassandra-2.1.14.jar:2.1.14]
>>>>         at
>>>> org.apache.cassandra.streaming.messages.OutgoingFileMessage.serialize(OutgoingFileMessage.java:88)
>>>> ~[apache-cassandra-2.1.14.jar:2.1.14]
>>>>         at
>>>> org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:49)
>>>> ~[apache-cassandra-2.1.14.jar:2.1.14]
>>>>         at
>>>> org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:41)
>>>> ~[apache-cassandra-2.1.14.jar:2.1.14]
>>>>         at
>>>> org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:45)
>>>> ~[apache-cassandra-2.1.14.jar:2.1.14]
>>>>         at
>>>> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:358)
>>>> [apache-cassandra-2.1.14.jar:2.1.14]
>>>>         at
>>>> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:330)
>>>> [apache-cassandra-2.1.14.jar:2.1.14]


>>>>>>>>>>> ERROR [STREAM-IN-/192.168.1.140] 2016-05-24 22:44:57,704
>>>>>>>>>>> StreamSession.java:620 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
>>>>>>>>>>> Remote peer 192.168.1.140 failed stream session.
>>>>>>>>>>> ERROR [STREAM-OUT-/192.168.1.140] 2016-05-24 22:44:57,705
>>>>>>>>>>> StreamSession.java:505 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
>>>>>>>>>>> Streaming error occurred
>>>>>>>>>>> java.io.IOException: Connection timed out
>>>>>>>>>>>         at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>>>>>>>>>>> ~[na:1.7.0_79]
>>>>>>>>>>>         at sun.nio.ch.SocketDispatcher.write(Unknown Source)
>>>>>>>>>>> ~[na:1.7.0_79]
>>>>>>>>>>>         at sun.nio.ch.IOUtil.writeFromNativeBuffer(Unknown
>>>>>>>>>>> Source) ~[na:1.7.0_79]
>>>>>>>>>>>         at sun.nio.ch.IOUtil.write(Unknown Source) ~[na:1.7.0_79]
>>>>>>>>>>>         at sun.nio.ch.SocketChannelImpl.write(Unknown Source)
>>>>>>>>>>> ~[na:1.7.0_79]
>>>>>>>>>>>         at
>>>>>>>>>>> org.apache.cassandra.io.util.DataOutputStreamAndChannel.write(DataOutputStreamAndChannel.java:48)
>>>>>>>>>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>>>>>>>>>>         at
>>>>>>>>>>> org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
>>>>>>>>>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>>>>>>>>>>         at
>>>>>>>>>>> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:351)
>>>>>>>>>>> [apache-cassandra-2.1.13.jar:2.1.13]
>>>>>>>>>>>         at
>>>>>>>>>>> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:323)
>>>>>>>>>>> [apache-cassandra-2.1.13.jar:2.1.13]
>>>>>>>>>>>         at java.lang.Thread.run(Unknown Source) [na:1.7.0_79]
>>>>>>>>>>> INFO  [STREAM-IN-/192.168.1.140] 2016-05-24 22:44:58,625
>>>>>>>>>>> StreamResultFuture.java:180 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
>>>>>>>>>>> Session with /192.168.1.140 is complete
>>>>>>>>>>> WARN  [STREAM-IN-/192.168.1.140] 2016-05-24 22:44:58,627
>>>>>>>>>>> StreamResultFuture.java:207 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
>>>>>>>>>>> Stream failed
>>>>>>>>>>> ERROR [RMI TCP Connection(24)-127.0.0.1] 2016-05-24 22:44:58,628
>>>>>>>>>>> StorageService.java:1075 - Error while rebuilding node
>>>>>>>>>>> org.apache.cassandra.streaming.StreamException: Stream failed
>>>>>>>>>>>         at
>>>>>>>>>>> org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:85)
>>>>>>>>>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>>>>>>>>>>         at
>>>>>>>>>>> com.google.common.util.concurrent.Futures$4.run(Futures.java:1172)
>>>>>>>>>>> ~[guava-16.0.jar:na]
>>>>>>>>>>>         at
>>>>>>>>>>> com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
>>>>>>>>>>> ~[guava-16.0.jar:na]
>>>>>>>>>>>         at
>>>>>>>>>>> com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
>>>>>>>>>>> ~[guava-16.0.jar:na]
>>>>>>>>>>>         at
>>>>>>>>>>> com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145)
>>>>>>>>>>> ~[guava-16.0.jar:na]
>>>>>>>>>>>         at
>>>>>>>>>>> com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202)
>>>>>>>>>>> ~[guava-16.0.jar:na]
>>>>>>>>>>>         at
>>>>>>>>>>> org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:208)
>>>>>>>>>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>>>>>>>>>>         at
>>>>>>>>>>> org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:184)
>>>>>>>>>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>>>>>>>>>>         at
>>>>>>>>>>> org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:415)
>>>>>>>>>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>>>>>>>>>>         at
>>>>>>>>>>> org.apache.cassandra.streaming.StreamSession.sessionFailed(StreamSession.java:621)
>>>>>>>>>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>>>>>>>>>>         at
>>>>>>>>>>> org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:475)
>>>>>>>>>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>>>>>>>>>>         at
>>>>>>>>>>> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:256)
>>>>>>>>>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>>>>>>>>>>         at java.lang.Thread.run(Unknown Source) ~[na:1.7.0_79]
>>>>>>>>>>> ERROR [STREAM-OUT-/192.168.1.140] 2016-05-24 22:44:58,629
>>>>>>>>>>> StreamSession.java:505 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
>>>>>>>>>>> Streaming error occurred
>>>>>>>>>>> java.io.IOException: Broken pipe
>>>>>>>>>>>         at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>>>>>>>>>>> ~[na:1.7.0_79]
>>>>>>>>>>>         at sun.nio.ch.SocketDispatcher.write(Unknown Source)
>>>>>>>>>>> ~[na:1.7.0_79]
>>>>>>>>>>>         at sun.nio.ch.IOUtil.writeFromNativeBuffer(Unknown
>>>>>>>>>>> Source) ~[na:1.7.0_79]
>>>>>>>>>>>         at sun.nio.ch.IOUtil.write(Unknown Source) ~[na:1.7.0_79]
>>>>>>>>>>>         at sun.nio.ch.SocketChannelImpl.write(Unknown Source)
>>>>>>>>>>> ~[na:1.7.0_79]
>>>>>>>>>>>         at
>>>>>>>>>>> org.apache.cassandra.io.util.DataOutputStreamAndChannel.write(DataOutputStreamAndChannel.java:48)
>>>>>>>>>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>>>>>>>>>>         at
>>>>>>>>>>> org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
>>>>>>>>>>> ~[apache-cassandra-2.1.13.jar:2.1.13]
>>>>>>>>>>>         at
>>>>>>>>>>> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:351)
>>>>>>>>>>> [apache-cassandra-2.1.13.jar:2.1.13]
>>>>>>>>>>>         at
>>>>>>>>>>> org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:331)
>>>>>>>>>>> [apache-cassandra-2.1.13.jar:2.1.13]
>>>>>>>>>>>         at java.lang.Thread.run(Unknown Source) [na:1.7.0_79]



--
Eric Evans
john.eric.evans <at> gmail.com

George Sigletos | 27 May 16:44 2016

Re: Error while rebuilding a node: Stream failed

Hello,

No, there is no version mix. The first stack traces were indeed from 2.1.13; then I upgraded all nodes to 2.1.14 and I am still getting the same errors.


On Fri, May 27, 2016 at 4:39 PM, Eric Evans <john.eric.evans <at> gmail.com> wrote:
From the various stacktraces in this thread, it's obvious you are
mixing versions 2.1.13 and 2.1.14.  Topology changes like this aren't
supported with mixed Cassandra versions.  Sometimes it will work,
sometimes it won't (and it will definitely not work in this instance).

You should either upgrade your 2.1.13 nodes to 2.1.14 first, or add
the new nodes using 2.1.13, and upgrade after.

On Fri, May 27, 2016 at 8:41 AM, George Sigletos <sigletos <at> textkernel.nl> wrote:

> [stack traces snipped -- identical to those quoted in the previous message]

--
Eric Evans
john.eric.evans <at> gmail.com

Paolo Crosato | 26 May 18:39 2016

Out of memory issues

Hi,

we are running a cluster of 4 nodes, each one with the same sizing: 2 cores, 16 GB RAM and 1 TB of disk space.

On every node we are running Cassandra 2.0.17, Oracle Java 1.7.0_45, and CentOS 6 with kernel 2.6.32-431.17.1.el6.x86_64.

Two nodes are running just fine, the other two have started to go OOM at every start.

This is the error we get:

INFO [ScheduledTasks:1] 2016-05-26 18:15:58,460 StatusLogger.java (line 70) ReadRepairStage                   0         0            116         0                 0
 INFO [ScheduledTasks:1] 2016-05-26 18:15:58,462 StatusLogger.java (line 70) MutationStage                    31      1369          20526         0                 0
 INFO [ScheduledTasks:1] 2016-05-26 18:15:58,590 StatusLogger.java (line 70) ReplicateOnWriteStage             0         0              0         0                 0
 INFO [ScheduledTasks:1] 2016-05-26 18:15:58,591 StatusLogger.java (line 70) GossipStage                       0         0            335         0                 0
 INFO [ScheduledTasks:1] 2016-05-26 18:16:04,195 StatusLogger.java (line 70) CacheCleanupExecutor              0         0              0         0                 0
 INFO [ScheduledTasks:1] 2016-05-26 18:16:06,526 StatusLogger.java (line 70) MigrationStage                    0         0              0         0                 0
 INFO [ScheduledTasks:1] 2016-05-26 18:16:06,527 StatusLogger.java (line 70) MemoryMeter                       1         4             26         0                 0
 INFO [ScheduledTasks:1] 2016-05-26 18:16:06,527 StatusLogger.java (line 70) ValidationExecutor                0         0              0         0                 0
DEBUG [MessagingService-Outgoing-/10.255.235.19] 2016-05-26 18:16:06,518 OutboundTcpConnection.java (line 290) attempting to connect to /10.255.235.19
 INFO [GossipTasks:1] 2016-05-26 18:16:22,912 Gossiper.java (line 992) InetAddress /10.255.235.28 is now DOWN
 INFO [ScheduledTasks:1] 2016-05-26 18:16:22,952 StatusLogger.java (line 70) FlushWriter                       1         5             47         0                25
 INFO [ScheduledTasks:1] 2016-05-26 18:16:22,953 StatusLogger.java (line 70) InternalResponseStage             0         0              0         0                 0
ERROR [ReadStage:27] 2016-05-26 18:16:29,250 CassandraDaemon.java (line 258) Exception in thread Thread[ReadStage:27,5,main]
java.lang.OutOfMemoryError: Java heap space
    at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:347)
    at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
    at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:355)
    at org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:124)
    at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:85)
    at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
    at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
    at com.google.common.collect.AbstractIterator.next(AbstractIterator.java:153)
    at org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.getNextBlock(IndexedSliceReader.java:434)
    at org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.fetchMoreData(IndexedSliceReader.java:387)
    at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:145)
    at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:45)
    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
    at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
    at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
    at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
    at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
    at org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:87)
    at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
    at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
    at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
    at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
    at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
    at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
    at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1619)
    at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1438)
    at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:340)
    at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:89)
    at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47)
ERROR [ReadStage:32] 2016-05-26 18:16:29,357 CassandraDaemon.java (line 258) Exception in thread Thread[ReadStage:32,5,main]
java.lang.OutOfMemoryError: Java heap space
    at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:347)
    at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
    at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:355)
    at org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:124)
    at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:85)
    at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
    at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
    at com.google.common.collect.AbstractIterator.next(AbstractIterator.java:153)
    at org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.getNextBlock(IndexedSliceReader.java:434)
    at org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.fetchMoreData(IndexedSliceReader.java:387)
    at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:145)
    at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:45)
    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
    at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
    at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
    at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
    at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
    at org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:87)
    at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
    at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
    at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
    at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
    at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
    at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
    at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1619)
    at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1438)
    at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:340)
    at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:89)
    at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47)

We are observing that heap usage is never reclaimed: it keeps increasing until it reaches the limit, then the OOM errors appear and after a short while the node crashes.

These are the relevant settings in cassandra-env.sh for one of the crashing nodes:

MAX_HEAP_SIZE="6G"
HEAP_NEWSIZE="200M"

This is the complete error log http://pastebin.com/QGaACyhR

This is cassandra-env.sh http://pastebin.com/6SLeVmtv

This is cassandra.yaml http://pastebin.com/wb1axHtV

Can anyone help?

Regards,

Paolo Crosato
--
Paolo Crosato
Software engineer/Custom Solutions
e-mail: paolo.crosato <at> targaubiest.com
Siddharth Verma | 26 May 08:34 2016

Get clustering column in Custom cassandra trigger

Hi,
I am creating a trigger in Cassandra:
-----------------------------------------------------------------------------------------------------------------------
import java.text.SimpleDateFormat;
import java.util.Collection;
import java.util.Collections;
import java.util.Date;
import java.util.UUID;

import org.apache.cassandra.config.Schema;
import org.apache.cassandra.db.Mutation;
import org.apache.cassandra.db.RowUpdateBuilder;
import org.apache.cassandra.db.partitions.Partition;
import org.apache.cassandra.db.rows.UnfilteredRowIterator;
import org.apache.cassandra.triggers.ITrigger;
import org.apache.cassandra.utils.FBUtilities;
import org.apache.cassandra.utils.UUIDGen;

public class GenericAuditTrigger implements ITrigger
{
    private static final SimpleDateFormat dateFormatter = new SimpleDateFormat("yyyy/MM/dd");

    public Collection<Mutation> augment(Partition update)
    {
        String auditKeyspace = "test";
        String auditTable = "audit";

        // One audit row per mutation, clustered by (date, keyspace, table, uuid)
        RowUpdateBuilder audit = new RowUpdateBuilder(Schema.instance.getCFMetaData(auditKeyspace, auditTable),
                FBUtilities.timestampMicros(),
                UUIDGen.getTimeUUID())
                .clustering(dateFormatter.format(new Date()),
                            update.metadata().ksName,
                            update.metadata().cfName,
                            UUID.randomUUID());

        // Render the partition key of the updated row as a string
        audit.add("primary_key", update.metadata().getKeyValidator().getString(update.partitionKey().getKey()));

        // Concatenate the updated rows, separated by \001
        UnfilteredRowIterator unfilteredRowIterator = update.unfilteredIterator();
        StringBuilder next = new StringBuilder();
        while (unfilteredRowIterator.hasNext())
            next.append(unfilteredRowIterator.next().toString()).append('\001');

        audit.add("values", next.length() == 0
                            ? null
                            : next.deleteCharAt(next.length() - 1).toString() + ";" + update.columns().toString());

        return Collections.singletonList(audit.build());
    }
}

-----------------------------------------------------------------------------------------------------------------------
CREATE TABLE test.test (pk1 text, pk2 text, ck1 text, ck2 text, v1 text, v2 text, PRIMARY KEY ((pk1, pk2), ck1, ck2));
-----------------------------------------------------------------------------------------------------------------------
CREATE TABLE test.audit (
    timeuuid timeuuid,
    date text,
    keyspace_name text,
    table_name text,
    uuid UUID,
    primary_key text,
    values text,
    PRIMARY KEY (timeuuid, date, keyspace_name, table_name, uuid));
-----------------------------------------------------------------------------------------------------------------------

How do I get the clustering column values in the trigger?

insert into test(pk1 , pk2 , ck1 , ck2 , v1 , v2 ) VALUES ('pk1','pk2','ck1','ck2_del','v1','v2');

select * from audit;

timeuuid              | 0d117390-227e-11e6-9d80-dd871f2f22d2
date                    | 2016/05/25
keyspace_name  | test
table_name         | test
uuid                    | df274fc0-4362-42b1-a3bf-0030f8d2062f
primary_key        | pk1:pk2
values                | [[v1=v1 ts=1464184100769315], [v2=v2 ts=1464184100769315]]


How can I audit ck1 and ck2 as well?
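
For context, this is the kind of thing I am considering inside augment() (a rough, untested sketch; I am not sure these are the right APIs, and it assumes the update contains only regular rows, no range tombstone markers):

UnfilteredRowIterator it = update.unfilteredIterator();
while (it.hasNext())
{
    Unfiltered unfiltered = it.next();
    if (unfiltered.kind() == Unfiltered.Kind.ROW)
    {
        Clustering clustering = ((Row) unfiltered).clustering();
        for (int i = 0; i < clustering.size(); i++)
        {
            // clustering column name, e.g. "ck1"
            String name = update.metadata().clusteringColumns().get(i).name.toString();
            // clustering value rendered as a string, e.g. "ck1" or "ck2_del"
            String value = update.metadata().comparator.subtype(i).getString(clustering.get(i));
            // append name=value to the audit "values" column here
        }
    }
}

(Clustering, Row and Unfiltered come from org.apache.cassandra.db and org.apache.cassandra.db.rows.)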

Thanks,
Siddharth Verma
Tony Anecito | 26 May 04:26 2016

Latest driver and netty issues...

Hi All,

I downloaded the latest Cassandra driver, but when it is used I get a runtime error about class io.netty.util.Timer (netty-3.9.0.Final) not being found. If I instead use the latest netty-all-4.0.46.Final.jar at runtime, I get an exception about not having a java.security.cert.X509Certificate class.

So what to do?

Thanks!
