鄢来琼 | 31 Jul 03:33 2015

Re: query statement returns empty

After rewriting the test case in Java/C#, the results are consistent.

Is there a problem with the Python driver?

 

From: 鄢来琼
Sent: Friday, July 31, 2015 9:03 AM
To: 'user <at> cassandra.apache.org'
Subject: query statement returns empty

 

Hi ALL

 

The result of the "select * from t_test where id = 1" statement is not consistent across runs.

Could you tell me why?

 

Test case:

    i = 0
    while i < 5:
        result = cassandra_session.execute("select ratio from t_test where id = 1")
        print result
        i += 1

 

testing result:

[Row(ratio=Decimal('0.000'))]

[]

[Row(ratio=Decimal('0.000'))]

[Row(ratio=Decimal('0.000'))]

[Row(ratio=Decimal('0.000'))]

 

Cassandra cluster:

My Cassandra version is 2.12,

the Cassandra cluster has 9 nodes.

The Python driver version is 2.6.

 

I have tested both the AsyncoreConnection and the LibevConnection; the results are inconsistent in both cases.
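For what it's worth, intermittently empty results are exactly what a read at a low consistency level produces when one replica missed the write. The following is a toy model in plain Python (hypothetical node names, round-robin routing for determinism; not the driver's actual logic) of a coordinator reading from a single replica per request:

```python
import itertools

# Toy model: at consistency ONE the coordinator consults a single
# replica per request, so a replica that missed the write returns no row.
replicas = {
    "node1": {1: "0.000"},  # has the row
    "node2": {},            # missed the write (e.g. a dropped mutation)
    "node3": {1: "0.000"},
}
replica_cycle = itertools.cycle(["node1", "node2", "node3"])

def read_one(key):
    # Each call hits the "next" replica, like round-robin load balancing.
    return replicas[next(replica_cycle)].get(key)

results = [read_one(1) for _ in range(5)]
print(results)  # → ['0.000', None, '0.000', '0.000', None]
```

If this is the cause, raising the consistency level (e.g. to QUORUM) or repairing the table should make the empty results disappear regardless of driver language.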

 

Thanks a lot.

 

Peter

鄢来琼 | 31 Jul 03:02 2015

query statement returns empty


noah chanmala | 30 Jul 19:05 2015

Timeout/Crash when insert more than 500 bytes chunks

All,

Could you please point me to where I can adjust the configuration so that I can insert chunks of more than 500 bytes into a blob field without the cluster crashing on me?

I read on the user forum that people were able to insert 50 MB chunks. There must be a setting somewhere that I can adjust.
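One common approach is to chunk large blobs client-side and store one row per chunk, so no single mutation ever approaches the write-path limits (in the 2.x line a single mutation larger than roughly half of commitlog_segment_size_in_mb is rejected). A minimal sketch in plain Python (chunk_blob and the 512 KB default are illustrative choices, not a Cassandra API):

```python
def chunk_blob(data, chunk_size=512 * 1024):
    """Split a blob into fixed-size chunks, each small enough for one INSERT."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

# Each chunk would then be inserted as its own row, e.g. with a schema like
#   INSERT INTO blobs (blob_id, chunk_no, chunk) VALUES (?, ?, ?)
chunks = chunk_blob(b"x" * (3 * 512 * 1024 + 10))
print(len(chunks))  # → 4
```

Reassembly is just concatenating the chunks back in chunk_no order on read.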

Thanks,

Noah
James Vanns | 30 Jul 18:23 2015

AWS multi-region DCs fail to rebuild

Hi. First, some details;

* Ubuntu 14.04.2 LTS.
* Oracle Java 8
* Cassandra 2.2 (from datastax repo)
* AWS VPC - two regions (Oregon, Ireland)
* A pair of 3 node DCs in a single cluster - 1 DC per region as above
* Ec2Snitch (NOT the Ec2MultiRegionSnitch - does not work in a VPC environment)

In following this documentation;


The rebuild (final) stage fails with this message;

error: Error while rebuilding node: Stream failed
-- StackTrace --
java.lang.RuntimeException: Error while rebuilding node: Stream failed
        at org.apache.cassandra.service.StorageService.rebuild(StorageService.java:1109)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
....

Obviously there is a much larger stack trace.

This happens repeatedly when attempting to run the rebuild on just a single node
in the US DC (pointing at the EU DC). I have not yet tried any other node from the
US DC.

Is this a bug or a configuration error perhaps? I know people out there are using
AWS for Cassandra - how are you replicating across regions? Here are two 
values I've tried modifying to no avail;

streaming_socket_timeout_in_ms
phi_convict_threshold

As both were referenced in various AWS related Cassandra sources on the web ;)

The amount of data being replicated is tiny - we're testing a TitanDB graph of no more than 100 edges and 100 nodes.

Can anyone point me in the direction of a correct solution and explanation?

Cheers,

Jim

--
Senior Code Pig
Industrial Light & Magic
aeljami.ext | 29 Jul 17:04 2015

problem with write_survey

Hello,

 

I start a node with write_survey = true:

 

-Dcassandra.write_survey=true

 

Log:

 

INFO [main] 2015-07-29 15:29:35,697 StorageService.java (line 853) Startup complete, but write survey mode is active, not becoming an active ring member. Use JMX (StorageService->joinRing()) to finalize ring joining.

 

but the node allows reads:

 

cqlsh> select * from myTable  where id = 1745;

 

id   | fname | lname

------+-------+-------

1745 |  john | smith

 

(Key 1745 exists on the node with write_survey = true.)

Then, when attempting to transition the node from write survey to normal mode, the "nodetool join" invocation fails because the node is already "joined".

 

Do you have any idea ?

 

Cassandra Version: 2.0.10

 

 

Frisch, Michael | 29 Jul 15:56 2015

Leak detected during repair (v2.1.8)

ERROR [Reference-Reaper:1] 2015-07-29 12:34:45,941 Ref.java:179 - LEAK DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State <at> 52628658) to class org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier <at> 1113728425:/data/cassandra/data/KSName/CFName/snapshots/8179b820-35ec-11e5-a75f-ed9e19db78e9/KSName-CFName-ka-5544 was not released before the reference was garbage collected

ERROR [Reference-Reaper:1] 2015-07-29 12:34:45,948 Ref.java:179 - LEAK DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State <at> 4eef1d76) to class org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier <at> 880236150:/data/cassandra/data/KSName/CFName/snapshots/8179b820-35ec-11e5-a75f-ed9e19db78e9/KSName-CFName-ka-5543 was not released before the reference was garbage collected


I was just wondering if this is a known issue or if others are experiencing this with C* v2.1.8.  I couldn't find any open tickets in the Jira about these leaks occurring during repairs.  I've only seen this during repairs and the only commonality that I could find between the column families that these leaks have been repeated for is that they contain very little data.  Could there be a race condition that's only hit when repairing very small datasets?  Turning on debug logging didn't produce any more context like I had hoped for.

Regards,
- Mike
Tzach Livyatan | 28 Jul 16:56 2015

cassandra-stress: Not enough replica available for query at consistency LOCAL_ONE (1 required but only 0 alive)

I'm running a benchmark on a 2-node C* 2.1.8 cluster using cassandra-stress,
with the default of CL=ONE.
Stress runs fine for some time, and then starts throwing:

java.io.IOException: Operation x10 on key(s) [36333635504d4b343130]: Error executing: (UnavailableException): Not enough replica available for query at consistency LOCAL_ONE (1 required
 but only 0 alive)

at org.apache.cassandra.stress.Operation.error(Operation.java:216)
at org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:188)
at org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:99)
at org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:107)
at org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:259)
at org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:309)

The problem disappears when I decrease the number of client threads, but my goal is to test maximum performance, so lowering the bar defeats my purpose.

Is this normal server push-back under too much pressure?
Shouldn't the stress client slow down before this happens?
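As I understand it, cassandra-stress deliberately retries a fixed number of times and then reports the error rather than backing off, since throttling would hide the saturation point it is trying to find. A client under your own control could back off instead; here is a sketch of hypothetical retry-with-backoff logic (not part of cassandra-stress, and IOError stands in for the driver's unavailable/timeout exceptions):

```python
import time

def execute_with_backoff(op, max_retries=5, base_delay=0.1):
    """Run op(); on failure, sleep exponentially longer before each retry."""
    for attempt in range(max_retries):
        try:
            return op()
        except IOError:
            # Back off 0.1s, 0.2s, 0.4s, ... to give the cluster room to recover.
            time.sleep(base_delay * (2 ** attempt))
    raise IOError("still failing after %d retries" % max_retries)
```

With pushback like UnavailableException, an exponential backoff usually lets the cluster drain its pending writes instead of compounding the overload.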

Thanks
Tzach
Yang | 28 Jul 10:31 2015

Re: question about bootstrapping sequence

I'm wondering how the Cassandra protocol brings a newly bootstrapped node "up to speed".

for ease of illustration, let's say we just have one key, K, and the value is continually updated: 1,2 ,3 ,4 ....

originally we have 1 node, A, now node B joins, and needs to bootstrap and get its newly assigned range (just "K") from A.

now let's say A has seen updates 1, 2, 3 up to this point. According to the StreamingRequestVerbHandler, A flushes its memtable, then streams out the new sstables.


But what if, while the newly flushed sstable is being streamed out from A and before B has fully received it, A gets more updates: 4, 5, 6...?

Now B gets the streamed range, happily declares itself ready, and joins the ring. But it's actually not "up to speed" with the old members, because A now has K=6 while B has K=3.


Of course, when clients query now, A's and B's results are reconciled, so the client gets the latest result. But would B stay "not up to speed" forever? How can we bring it up to speed? Although the following is a very hypothetical scenario, it would lead to lost writes: say B is still in the "not up to date" state, then another node is removed and a new node is added; after more such cycles, all the "up to date" nodes are gone, and we essentially lose the latest writes.
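For context on the reconciliation step above: it is timestamp-based last-write-wins, and B does not stay behind forever, because writes arriving during bootstrap are also sent to the joining node, and read repair, hinted handoff, and anti-entropy repair close any remaining gap. A toy sketch of the read-path merge (plain Python; the (timestamp, value) pairs are hypothetical):

```python
def reconcile(replica_responses):
    """Merge divergent replica reads: the cell with the highest write timestamp wins."""
    return max(replica_responses, key=lambda r: r[0])[1]

# A has applied update 6 (newer timestamp); B only streamed up to update 3.
a_response = (103, 6)
b_response = (100, 3)
print(reconcile([a_response, b_response]))  # → 6
```

With read repair enabled, the losing replica (B here) is also written back the winning cell, which is what gradually brings it up to speed.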
rock zhang | 25 Jul 21:28 2015

if seed is diff on diff nodes, any problem ?

Hi All,

I have 6 nodes; most of them are using node1 as the seed, but I just found out that 2 nodes are using node3 as the seed, and
everything looks fine. Does that mean the seed node does not have to be the same on all nodes?
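Seeds are only used for gossip discovery, so the lists do not have to be identical for the cluster to function; the usual recommendation is still to give every node the same small list of seeds (two or three per DC) to keep gossip convergence predictable. In cassandra.yaml that looks like this (IPs hypothetical):

```yaml
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.0.0.1,10.0.0.3"
```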

Thanks
Rock
Andreas Schlüter | 25 Jul 18:22 2015

AssertionError on PasswordAuthenticator

Hi,

 

I am starting to set up Usergrid on Cassandra, but I have run into an issue that I debugged and that does not seem to be related to Usergrid or my setup: I hit an AssertionError which, according to the comment in the Cassandra code, should never happen, and which I don't see how to avoid from the client side.

When I do a login from Usergrid via Thrift, I get the following stack trace (the Cassandra version is 2.08; I used the standard username/password cassandra/cassandra to exclude errors there):

 

ERROR [Thrift:16] 2015-07-25 15:02:32,480 CassandraDaemon.java (line 199) Exception in thread Thread[Thrift:16,5,main]
java.lang.AssertionError: org.apache.cassandra.exceptions.InvalidRequestException: Key may not be empty
        at org.apache.cassandra.auth.PasswordAuthenticator.authenticate(PasswordAuthenticator.java:117)
        at org.apache.cassandra.thrift.CassandraServer.login(CassandraServer.java:1471)
        at org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3505)
        at org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3489)
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
        at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:201)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.cassandra.exceptions.InvalidRequestException: Key may not be empty
        at org.apache.cassandra.cql3.QueryProcessor.validateKey(QueryProcessor.java:120)
        at org.apache.cassandra.cql3.statements.SelectStatement.getSliceCommands(SelectStatement.java:344)
        at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:206)
        at org.apache.cassandra.auth.PasswordAuthenticator.authenticate(PasswordAuthenticator.java:110)

 

Any ideas what might be wrong or which prerequisites need to be met? This is the first request for a connection.

 

Help would be greatly appreciated, I tried everything I could come up with supported by Google…

 

Thanks in advance,

Andreas

rock zhang | 25 Jul 00:55 2015

Nodetool cleanup takes long time and no progress

Hi All,

After I added a node, I ran nodetool cleanup on the old nodes, but it takes forever, with no error message, and I
don't see space being freed.

What should I do? Repair first?

Thanks
Rock 
