Michał Łowicki | 30 Jan 09:36 2015

Timeouts but returned consistency level is invalid

Hi,

We're using C* 2.1.2 with django-cassandra-engine, which in turn uses cqlengine. LOCAL_QUORUM is set as the default consistency level. From time to time we get timeouts while talking to the database, but strangely the consistency level reported in the error is not always LOCAL_QUORUM:

code=1200 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 3 responses." info={'received_responses': 3, 'required_responses': 4, 'consistency': 'ALL'}

code=1200 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 1 responses." info={'received_responses': 1, 'required_responses': 2, 'consistency': 'LOCAL_QUORUM'}

code=1100 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}

Any idea why it might happen?

--
BR,
Michał Łowicki
PRANEESH KUMAR | 30 Jan 06:59 2015

Bootstrap stopped due to Corrupt (negative) value length in ColumnIterator.deserializeNext

Hi,

We have Cassandra 1.2.16. While adding a new node, bootstrap encountered java.io.IOException: Corrupt (negative) value length in ColumnSortedMap. After this error, streaming stopped and the node is still in joining mode.

Has anyone come across a similar kind of error? Any help is appreciated.
 
This is the complete trace:

ERROR [Thread-256] 2015-01-29 08:27:38,677 CassandraDaemon.java (line 191) Exception in thread Thread[Thread-256,5,main]
java.io.IOError: java.io.IOException: Corrupt (negative) value length encountered
        at org.apache.cassandra.io.util.ColumnIterator.deserializeNext(ColumnSortedMap.java:255)
        at org.apache.cassandra.io.util.ColumnIterator.next(ColumnSortedMap.java:271)
        at org.apache.cassandra.io.util.ColumnIterator.next(ColumnSortedMap.java:228)
        at edu.stanford.ppl.concurrent.SnapTreeMap.<init>(SnapTreeMap.java:453)
        at org.apache.cassandra.db.AtomicSortedColumns$Holder.<init>(AtomicSortedColumns.java:331)
        at org.apache.cassandra.db.AtomicSortedColumns.<init>(AtomicSortedColumns.java:79)
        at org.apache.cassandra.db.AtomicSortedColumns.<init>(AtomicSortedColumns.java:50)
        at org.apache.cassandra.db.AtomicSortedColumns$1.fromSorted(AtomicSortedColumns.java:63)
        at org.apache.cassandra.db.SuperColumnSerializer.deserialize(SuperColumn.java:423)
        at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:80)
        at org.apache.cassandra.io.sstable.SSTableWriter.appendFromStream(SSTableWriter.java:250)
        at org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:185)
        at org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:122)
        at org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:243)
        at org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:183)
        at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:79)
Caused by: java.io.IOException: Corrupt (negative) value length encountered
        at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:352)
        at org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:108)
        at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:85)
        at org.apache.cassandra.io.util.ColumnIterator.deserializeNext(ColumnSortedMap.java:251)
        ... 15 more



Thanks,
Praneesh
Jan | 30 Jan 00:27 2015

Syntax for using JMX term to connect to Cassandra

HI Folks; 

I am trying to use JMXterm, a command-line tool, to script and monitor a C* cluster.
Would anyone on this forum know the exact syntax to connect to the Cassandra domain using JMXterm?
Please give me an example.

I do *not* intend to use OpsCenter or any other UI-based tool.
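
For reference, from the JMXterm docs I would expect a session against a node with the default JMX port 7199 to look roughly like this (the jar file name is illustrative):

    java -jar jmxterm-1.0-alpha-4-uber.jar
    $> open localhost:7199
    $> domains
    $> domain org.apache.cassandra.db
    $> beans
    $> bean org.apache.cassandra.db:type=StorageService
    $> get LiveNodes
    $> close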

thanks
Jan
 
Batranut Bogdan | 29 Jan 17:11 2015

Opscenter served reads / second

Hello,

Is there a metric that shows how many reads per second C* serves? Read Requests shows how many requests are issued to Cassandra, but I want to know how many the cluster can actually serve.
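
For what it's worth, a per-node served read rate appears to be exposed over JMX in C* 2.x (the attribute names come from the underlying Metrics library, so treat this as an assumption), though it measures what is being served right now, not the maximum the cluster could serve:

    org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency
    (the Count attribute is cumulative; OneMinuteRate approximates reads/second)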
Paul Nickerson | 29 Jan 17:03 2015

Repairing OpsCenter rollups60 Results in Snapshot Errors

I am running a 6 node cluster using Apache Cassandra 2.1.2 with DataStax OpsCenter 5.0.2 from the AWS EC2 AMI "DataStax Auto-Clustering AMI 2.5.1-hvm" (DataStax Community AMI). When I try to run a repair on the rollups60 column family in the OpsCenter keyspace, I get errors about failed snapshot creation in the Cassandra system log. The repair seems to continue, and then finishes with errors.

I am wondering whether this is making the repair ineffectual.

I am running the command

    nodetool repair OpsCenter rollups60

on one of the nodes (10.63.74.70). From the command, I get this output:

    [2015-01-23 19:36:06,261] Starting repair command #9, repairing 511 ranges for keyspace OpsCenter (seq=true, full=true)
    [2015-01-23 21:08:16,242] Repair session 67772db0-a337-11e4-9e78-37e5027a626b for range (5848435723460298978,5868916338423419522] failed with error java.io.IOException: Failed during snapshot creation.

The error is repeated many times, and they all appear right at the end. Here is an example of what I see in the log on that same system (the one that I'm running the command from, and the one that's trying to snapshot):

    INFO  [AntiEntropyStage:1] 2015-01-23 19:38:28,235 RepairSession.java:171 - [repair #138b42e0-a337-11e4-9e78-37e5027a626b] Received merkle tree for rollups60 from /10.63.74.70
    INFO  [AntiEntropySessions:9] 2015-01-23 19:38:28,236 RepairSession.java:260 - [repair #67772db0-a337-11e4-9e78-37e5027a626b] new session: will sync /10.63.74.70, /10.51.180.16 on range (5848435723460298978,5868916338423419522] for OpsCenter.[rollups60]
    INFO  [RepairJobTask:3] 2015-01-23 19:38:28,237 Differencer.java:74 - [repair #138b42e0-a337-11e4-9e78-37e5027a626b] Endpoints /10.13.157.190 and /10.63.74.70 have 1 range(s) out of sync for rollups60
    INFO  [AntiEntropyStage:1] 2015-01-23 19:38:28,237 ColumnFamilyStore.java:840 - Enqueuing flush of rollups60: 465365 (0%) on-heap, 0 (0%) off-heap
    INFO  [MemtableFlushWriter:25] 2015-01-23 19:38:28,238 Memtable.java:325 - Writing Memtable-rollups60 <at> 204861223(51960 serialized bytes, 1395 ops, 0%/0% of on/off-heap limit)
    INFO  [RepairJobTask:3] 2015-01-23 19:38:28,239 StreamingRepairTask.java:68 - [streaming task #138b42e0-a337-11e4-9e78-37e5027a626b] Performing streaming repair of 1 ranges with /10.13.157.190
    INFO  [MemtableFlushWriter:25] 2015-01-23 19:38:28,262 Memtable.java:364 - Completed flushing /raid0/cassandra/data/OpsCenter/rollups60-445613507ca411e4bd3f1927a2a71193/OpsCenter-rollups60-ka-331933-Data.db (29998 bytes) for commitlog position ReplayPosition(segmentId=1422038939094, position=31047766)
    ERROR [RepairJobTask:2] 2015-01-23 19:38:39,067 RepairJob.java:127 - Error occurred during snapshot phase
    java.lang.RuntimeException: Could not create snapshot at /10.63.74.70
            at org.apache.cassandra.repair.SnapshotTask$SnapshotCallback.onFailure(SnapshotTask.java:77) ~[apache-cassandra-2.1.2.jar:2.1.2]
            at org.apache.cassandra.net.MessagingService$5$1.run(MessagingService.java:347) ~[apache-cassandra-2.1.2.jar:2.1.2]
            at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_51]
            at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_51]
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_51]
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_51]
            at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
    INFO  [AntiEntropySessions:10] 2015-01-23 19:38:39,068 RepairSession.java:260 - [repair #6dec29c0-a337-11e4-9e78-37e5027a626b] new session: will sync /10.63.74.70, /10.51.180.16 on range (-6918744323658665195,-6916171087863528821] for OpsCenter.[rollups60]
    ERROR [AntiEntropySessions:9] 2015-01-23 19:38:39,068 RepairSession.java:303 - [repair #67772db0-a337-11e4-9e78-37e5027a626b] session completed with the following error
    java.io.IOException: Failed during snapshot creation.
            at org.apache.cassandra.repair.RepairSession.failedSnapshot(RepairSession.java:344) ~[apache-cassandra-2.1.2.jar:2.1.2]
            at org.apache.cassandra.repair.RepairJob$2.onFailure(RepairJob.java:128) ~[apache-cassandra-2.1.2.jar:2.1.2]
            at com.google.common.util.concurrent.Futures$4.run(Futures.java:1172) ~[guava-16.0.jar:na]
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_51]
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_51]
            at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
    ERROR [AntiEntropySessions:9] 2015-01-23 19:38:39,070 CassandraDaemon.java:153 - Exception in thread Thread[AntiEntropySessions:9,5,RMI Runtime]
    java.lang.RuntimeException: java.io.IOException: Failed during snapshot creation.
            at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.jar:na]
            at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) ~[apache-cassandra-2.1.2.jar:2.1.2]
            at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_51]
            at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_51]
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_51]
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_51]
            at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
    Caused by: java.io.IOException: Failed during snapshot creation.
            at org.apache.cassandra.repair.RepairSession.failedSnapshot(RepairSession.java:344) ~[apache-cassandra-2.1.2.jar:2.1.2]
            at org.apache.cassandra.repair.RepairJob$2.onFailure(RepairJob.java:128) ~[apache-cassandra-2.1.2.jar:2.1.2]
            at com.google.common.util.concurrent.Futures$4.run(Futures.java:1172) ~[guava-16.0.jar:na]
            ... 3 common frames omitted

The errors are repeated many times. The IP address 10.63.74.70 in the log is the node I'm running the repair from. I am able to repair the rest of the OpsCenter column families, and they complete quickly without error.

I have tried creating my own snapshot, and it completes successfully with nothing logged.

    nodetool snapshot OpsCenter
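
Snapshotting just the problem column family directly might also be a useful data point (assuming nodetool's -cf flag in 2.1):

    nodetool snapshot -t repairtest -cf rollups60 OpsCenter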

The disk has plenty of space left. Are these errors problematic? Should I just let the repair process continue for however long it takes? The cluster is currently not in use by any application, yet it shows some load while running this repair, so it's not sitting idle (it has no load when I'm not repairing).

Thanks for any help.

And if this is the wrong place to ask about a DataStax Community thing, could someone point me in the right direction?

 ~ Paul Nickerson
Sibbald, Charles | 29 Jan 15:15 2015

Upgrading from Cassandra 1.2.14 to Cassandra 2.1

Hi All,

I am looking into the possibility of upgrading from Cassandra 1.2.14 to Cassandra 2.1 in the following manner.

I have a large Cassandra cluster with dozens of nodes, and would like to build new instances at version 2.1 to join the cluster; once they have successfully joined the ring, they would then stream data in.

Once they have fully joined the cluster I would like to decommission a single Cassandra 1.2.14 instance, and repeat.

Because our 2.1 installations have a different directory layout, we would like to go with this ‘streaming’ option for the upgrade rather than an in-place upgrade.

Does anyone foresee any issues with this?

Thanks in advance.

Regards

Charles
José Guilherme Vanz | 29 Jan 11:31 2015

Database schema migration

Hello

I have been studying Cassandra for a while, and to practice the libraries and concepts I am going to implement a simple Cassandra client. During my research a question came up about schema migrations: what is the common/best practice in production clusters? I mean, who actually performs the schema migration? Does the application do it, or does the cluster manager have to update the schema before updating the application?

All the best
Vanz

Ajay | 29 Jan 07:50 2015

Performance difference between Regular Statement Vs PreparedStatement

Hi All,

I tried both insert and select queries (built with QueryBuilder), as regular statements and as PreparedStatements, in multithreaded code that runs each query 10k to 50k times. But I don't see any visible improvement from using PreparedStatement. What could be the reason?

Note: I am using the same Session object in multiple threads.

Cassandra version : 2.0.11
Driver version : 2.1.4
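
For reference, a minimal sketch of the prepare-once pattern being compared (Java driver; the contact point and table schema are illustrative, not my actual code):

    import com.datastax.driver.core.*;

    public class PreparedInsertSketch {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("ks");
            // Prepared once, outside the loop; re-preparing per query would
            // add a round-trip each time and negate any benefit.
            PreparedStatement ps = session.prepare(
                    "INSERT INTO users (id, name) VALUES (?, ?)");
            for (int i = 0; i < 50000; i++) {
                session.execute(ps.bind(i, "user-" + i));
            }
            cluster.close();
        }
    }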

Thanks
Ajay
Roland Etzenhammer | 29 Jan 07:44 2015

incremential repairs - again

Hi,

a short question about the new incremental repairs again. I am running
2.1.2 (for testing). Marcus pointed out that 2.1.2 should do incremental
repairs automatically, so I rolled back all the steps I had taken. I expect
routine repair times to decrease when I am not putting much new data on
the cluster.

But they don't - they are constant at about 1000 minutes per node, so I
extracted all "Repaired at" values with sstablemetadata and I can't see any
recent date. I have put several GB of data into the cluster in 2015 and I run
"nodetool repair -pr" on every node regularly.

Am I still missing something? Or is this one of the issues with 2.1.2 
(CASSANDRA-8316)?

Thanks for hints,
Jan

Rahul Bhardwaj | 29 Jan 06:28 2015

error while bulk loading using copy command

Hi All,

We need to upload 1.8 million (18 lakh) rows into a table that contains columns of data type "counter".

On uploading with the COPY command, we get the error below:

Bad Request: INSERT statement are not allowed on counter tables, use UPDATE instead

We need the counter data type because, after loading this data, we want to use counter functionality.

Kindly help: is there any way to do this?
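
From the error message itself, counters can only be written with UPDATE increments, along these lines (table and column names are illustrative):

    UPDATE page_views SET views = views + 42 WHERE page_id = 'home';

which is presumably why COPY, which generates INSERT statements, fails on a counter table.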


Regards:
Rahul Bhardwaj 


Saurabh Sethi | 29 Jan 03:19 2015

Unable to create a keyspace

I have a 3-node Cassandra 2.1.0 cluster, and from my unit test I am using the DataStax 2.1.4 driver to create a keyspace and then a column family within that keyspace.

But I do not see the keyspace getting created, and the code for creating the column family fails because it cannot find the keyspace. I see the following in the system.log file:

INFO  [SharedPool-Worker-1] 2015-01-28 17:59:08,472 MigrationManager.java:229 - Create new Keyspace: KSMetaData{name=testmaxcolumnskeyspace, strategyClass=SimpleStrategy, strategyOptions={replication_factor=1}, cfMetaData={}, durableWrites=true, userTypes=org.apache.cassandra.config.UTMetaData <at> 370ad1d3}
INFO  [MigrationStage:1] 2015-01-28 17:59:08,476 ColumnFamilyStore.java:856 - Enqueuing flush of schema_keyspaces: 512 (0%) on-heap, 0 (0%) off-heap
INFO  [MemtableFlushWriter:22] 2015-01-28 17:59:08,477 Memtable.java:326 - Writing Memtable-schema_keyspaces <at> 1664717092(138 serialized bytes, 3 ops, 0%/0% of on/off-heap limit)
INFO  [MemtableFlushWriter:22] 2015-01-28 17:59:08,486 Memtable.java:360 - Completed flushing /usr/share/apache-cassandra-2.1.0/bin/../data/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-118-Data.db (175 bytes) for commitlog position ReplayPosition(segmentId=1422485457803, position=10514)

This issue doesn't always happen. My test sometimes runs fine, but once it gets into this state it remains there for a while, and I can reproduce it consistently.
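
For reference, a minimal sketch of the sequence the test performs (Java driver; the contact point and column family definition are illustrative):

    import com.datastax.driver.core.*;

    public class KeyspaceCreateSketch {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect();
            session.execute("CREATE KEYSPACE testmaxcolumnskeyspace WITH replication = "
                    + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
            // This is the step that fails intermittently, reporting the keyspace missing:
            session.execute("CREATE TABLE testmaxcolumnskeyspace.t1 "
                    + "(id int PRIMARY KEY, v text)");
            cluster.close();
        }
    }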

Also, when this issue happens for the first time, I see the following error message in the system.log file:

ERROR [SharedPool-Worker-1] 2015-01-28 15:08:24,286 ErrorMessage.java:218 - Unexpected exception during request
java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.8.0_05]
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[na:1.8.0_05]
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.8.0_05]
        at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[na:1.8.0_05]
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:375) ~[na:1.8.0_05]
        at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:311) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
        at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:878) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
        at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:225) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:114) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:507) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:464) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:378) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:350) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) ~[netty-all-4.0.20.Final.jar:4.0.20.Final]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_05]

Does anyone have any idea what might be going on here?

Thanks,
Saurabh
