Jual Mahal | 12 Jul 05:33 2014

[jgroups-users] Lightweight payload for running JGroups TCP session in a bandwidth-limited custom Ethernet network

Hi, guys.

I have created a few Windows and Android apps that use JGroups 3.4.3.Final, and I need to set up communication sessions between them over a bandwidth-limited custom network.

I say custom because this network uses a modified TDMA waveform with Ethernet-frame-based data slots, has an effective bandwidth of only 70-90 kbps, and can split and merge automatically.

JGroups is therefore a good fit for this kind of split-merge network. I just need a better JGroups configuration that can support many TCP clients without losing time when merging or electing a coordinator.

The following setup has only been tested between two clients, but I would like to support more than 10 clients without losing much time on merges (quick merge) under the 70-90 kbps bandwidth constraint.

<config xmlns="urn:org:jgroups"
        xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.3.xsd">
    <TCP bind_port="7800"
         loopback="false"
         recv_buf_size="${tcp.recv_buf_size:5M}"
         send_buf_size="${tcp.send_buf_size:640K}"
         max_bundle_size="64K"
         max_bundle_timeout="30"
         use_send_queues="true"
         sock_conn_timeout="300"

         timer_type="new3"
         timer.min_threads="4"
         timer.max_threads="10"
         timer.keep_alive_time="3000"
         timer.queue_max_size="500"
         
         thread_pool.enabled="true"
         thread_pool.min_threads="1"
         thread_pool.max_threads="10"
         thread_pool.keep_alive_time="5000"
         thread_pool.queue_enabled="false"
         thread_pool.queue_max_size="100"
         thread_pool.rejection_policy="discard"

         oob_thread_pool.enabled="true"
         oob_thread_pool.min_threads="1"
         oob_thread_pool.max_threads="8"
         oob_thread_pool.keep_alive_time="5000"
         oob_thread_pool.queue_enabled="false"
         oob_thread_pool.queue_max_size="100"
         oob_thread_pool.rejection_policy="discard"/>
                         
    <TCPPING timeout="3000"
             initial_hosts="192.168.0.200[7800],192.168.0.200[7801],192.168.0.201[7800],192.168.0.201[7801]"
             port_range="1"
             num_initial_members="4"/>
    <MERGE2  min_interval="10000"
             max_interval="30000"/>
    <FD_SOCK/>
    <FD timeout="3000" max_tries="3" />
    <VERIFY_SUSPECT timeout="1500"  />
    <BARRIER />
    <pbcast.NAKACK2 use_mcast_xmit="false"
                   discard_delivered_msgs="true"/>
    <UNICAST3 />
    <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
                   max_bytes="4M"/>
    <pbcast.GMS print_local_addr="true" join_timeout="3000"
                view_bundling="true"/>
    <MFC max_credits="2M"
         min_threshold="0.4"/>
    <FRAG2 frag_size="60K"  />
    <!--RSVP resend_interval="2000" timeout="10000"/-->
    <pbcast.STATE_TRANSFER/>
</config>
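For reference, a minimal sketch (not from the original post) of how such a channel would be created from the stack above, assuming the XML is saved on the classpath as bandwidth-limited.xml and the cluster name is hypothetical:

    // needs: import org.jgroups.JChannel;
    JChannel ch = new JChannel("bandwidth-limited.xml");
    ch.connect("low-bandwidth-cluster");
    // ... application traffic ...
    ch.close();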
------------------------------------------------------------------------------
Mark Lewis | 11 Jul 18:42 2014

[jgroups-users] Android support - How to Run the Demo Program

I'm trying to run the JGroups demo 'Draw' program from a simple Android 
app wrapper.

Please provide some instructions for running a JGroups sample on Android.
I get the error:
     FATAL EXCEPTION: java.lang.NoClassDefFoundError: org.jgroups.demos.Draw

Here is what I tried:

From a Windows command prompt, I run 2 instances of demos.Draw on Windows, 
which will interact with the Android instance:
    java -cp ./jgroups-3.4.4.Final.jar -Djava.net.preferIPv4Stack=true org.jgroups.demos.Draw

In the Android ADT, create a new Android Application Project 'jgroupDraw'
In Package Explorer, in the 'jGroupDraw' project, make a folder 'libs'
From outside Eclipse, download jgroups-3.4.4.Final.jar and copy it into 
the 'libs' folder
Inside Eclipse, press 'F5' or right-click 'Refresh'
In Package Explorer, open 'libs', and see the jgroups jar file
Right-click the jar file, do Build-Path, Add to Build-Path
In the main layout, add a button 'Draw'
Edit the main activity source file:

At the end of the 'import' lines, add a line:
         'import org.jgroups.demos.Draw;'

Just after  the 'public class ....' line, add the line:

     Button drawButton;

At the end of the 'onCreate()' method, add the lines:

         drawButton = (Button) findViewById(R.id.button1);

         drawButton.setOnClickListener(
                 new OnClickListener() {
                     @Override
                     public void onClick(View arg0) {
                         // note: this auto-completes for you as you type
                         org.jgroups.demos.Draw.main(null);
                     }
                 }
         );

 From Eclipse, run this jgroupsDraw app, and touch the 'Draw' button.
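As a side note (an assumption on my part, not part of the original steps): since org.jgroups.demos.Draw is a Swing/AWT application, a bare JChannel opened on a background thread is usually a simpler first check that JGroups itself loads on Android:

    // needs: import org.jgroups.JChannel; import android.util.Log;
    new Thread(new Runnable() {
        public void run() {
            try {
                JChannel ch = new JChannel();      // default stack
                ch.connect("draw-test");           // hypothetical cluster name
                Log.i("jgroupDraw", "connected, view=" + ch.getView());
            } catch (Throwable t) {
                Log.e("jgroupDraw", "JGroups failed to start", t);
            }
        }
    }).start();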

Can someone provide help with this?

------------------------------------------------------------------------------
Aloka Munasinghe | 25 Jun 07:09 2014

[jgroups-users] NAKACK producing continuous warn messages

Hi,

In one of our clusters in the production system the following message is
logged continuously.

2014-06-20 09:32:55 WARN [tid=] [OOB-1,Agent,X]
org.jgroups.protocols.pbcast.NAKACK - (requester=Y, local_addr=X) message
X::1 not found in retransmission table of X:
2014-06-20 09:32:55 WARN [tid=] [OOB-1,Agent,X]
org.jgroups.protocols.pbcast.NAKACK - (requester=Y, local_addr=X) message
X::2 not found in retransmission table of X:

It seems that member Y re-requests messages from member X, but member X is
missing its own messages. According to the logs this message was logged
around 3000 times within a minute, and messages up to sequence number 23349
were requested within that minute.
As a result the hard disk fills up, and to recover we had to restart all the
nodes.

Is there a way to recover from this problem without restarting the nodes,
and what could be the root cause of the issue?

In the production system we use JGroups 2.6.5.GA. Is there a way to change
the logging frequency in this version?
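Not part of the original post, but to illustrate the last question: assuming log4j 1.x is the logging backend (common with JGroups 2.x), the warning could be silenced at runtime by raising the level of the NAKACK logger:

    // raises org.jgroups.protocols.pbcast.NAKACK to ERROR so the retransmission
    // warnings no longer fill the disk (assumes log4j 1.x on the classpath)
    org.apache.log4j.Logger.getLogger("org.jgroups.protocols.pbcast.NAKACK")
                           .setLevel(org.apache.log4j.Level.ERROR);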

Below is the config file we have been using.

<config>
    <UDP
         mcast_addr="${jgroups.udp.mcast_addr:228.10.10.10}"
         mcast_port="${jgroups.udp.mcast_port:45588}"
         tos="8"
         ucast_recv_buf_size="20000000"
         ucast_send_buf_size="640000"
         mcast_recv_buf_size="80000"
         mcast_send_buf_size="150000"
         loopback="false"
         discard_incompatible_packets="true"
         ip_ttl="${jgroups.udp.ip_ttl:2}"
         thread_naming_pattern="cl"

         thread_pool.enabled="true"
         thread_pool.min_threads="2"
         thread_pool.max_threads="8"
         thread_pool.keep_alive_time="5000"
         thread_pool.queue_enabled="true"
         thread_pool.queue_max_size="1000"
         thread_pool.rejection_policy="Run"

         oob_thread_pool.enabled="true"
         oob_thread_pool.min_threads="1"
         oob_thread_pool.max_threads="8"
         oob_thread_pool.keep_alive_time="5000"
         oob_thread_pool.queue_enabled="false"
         oob_thread_pool.queue_max_size="100"
         oob_thread_pool.rejection_policy="Run"/>

    <PING timeout="2000"
            num_initial_members="3"/>
    <MERGE2 max_interval="10000"
            min_interval="5000"/>
    <FD_SOCK/>
    <FD timeout="1000" max_tries="5"   shun="false"/>
    <VERIFY_SUSPECT timeout="1500"  />
    <BARRIER />
    <pbcast.NAKACK gc_lag="50"
                   retransmit_timeout="300,600,1200,2400,4800"
                   />
    <UNICAST timeout="5000"/>
    <pbcast.STABLE desired_avg_gossip="20000"/>
    <VIEW_SYNC avg_send_interval="60000"   />
    <pbcast.GMS print_local_addr="false" join_timeout="5000"
                shun="false"
                view_bundling="true"/>
    <FC max_credits="500000"
                    min_threshold="0.20"/>
    <FRAG2 frag_size="4096"  />

    <pbcast.STATE_TRANSFER  />

</config>

Any help in recovering from this problem would be really appreciated.

Thanks
Aloka


------------------------------------------------------------------------------
Ron Gonzalez | 19 Jun 22:12 2014

[jgroups-users] JGRP-1755 fixes Unicast 3 but what about TCP?

We see the fix for the JGRP-1755 issue in UNICAST3 on GitHub, but we use 
pbcast.GMS and pbcast.NAKACK and do not have UNICAST3 configured in our 
3.3.0 setup. We are using TCP.

Yet our cluster member cannot rejoin; ultimately we end up with the same member 
joined 3 or 4 times (IPs redacted, but we see the IP for node C listed multiple 
times).

Hundreds of these messages in logs:
INFO  [.jgroups.MuxRpcDispatcherMgr] suspect:1d8a6d91-2833-7238-22e0-72856922c78a
  WARN  [org.jgroups.protocols.TCP] nodeA:7600: no physical address for 
1d8a6d91-2833-7238-22e0-72856922c78a, dropping message
WARN  [org.jgroups.protocols.TCP] nodeA:7600: logical address cache didn't 
contain all physical address, sending up a discovery request

Then we see the same node re-added multiple times into the view:
view [nodeA:7600|] after 5000ms, missing ACKs from [nodeA:7600, nodeB:7600, 
nodeC:7600, nodeC:7600, nodeC:7600, nodeC:7600]

Any suggestions?

Here is our config:

<config>
     <TCP
             recv_buf_size="20000000"
             send_buf_size="640000"
             loopback="true"
             max_bundle_size="64000"
             max_bundle_timeout="30"
             bind_port="${cluster.bind.port}"
             use_send_queues="true"
             sock_conn_timeout="300"

             thread_pool.enabled="true"
             thread_pool.min_threads="4"
             thread_pool.max_threads="16"
             thread_pool.keep_alive_time="8000"
             thread_pool.queue_enabled="false"
             thread_pool.queue_max_size="100"
             thread_pool.rejection_policy="run"

             oob_thread_pool.enabled="true"
             oob_thread_pool.min_threads="4"
             oob_thread_pool.max_threads="16"
             oob_thread_pool.keep_alive_time="8000"
             oob_thread_pool.queue_enabled="false"
             oob_thread_pool.queue_max_size="100"
             oob_thread_pool.rejection_policy="run"
             ${BIND_ADDRESS_DIRECTIVE}/>
     <TCPPING timeout="3000"
              initial_hosts="${cluster.tcp.discovery.initial.hosts}"
              port_range="0"
              num_initial_members="2"/>
     <MERGE2 max_interval="100000" min_interval="20000"/>
     <FD_SOCK start_port="${cluster.failure.detection.bind.port}" 
${BIND_ADDRESS_DIRECTIVE}/>
     <FD timeout="10000" max_tries="5"/>
     <VERIFY_SUSPECT timeout="1500"/>
     <BARRIER/>
     <pbcast.NAKACK
                    use_mcast_xmit="false"
                    retransmit_timeout="300,600,1200,2400,4800"
                    discard_delivered_msgs="false"/>
     <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"  
max_bytes="400000"/>
     ${ENCRYPT_TAG}
     <AUTH auth_class="org.jgroups.auth.MD5Token" 
auth_value="${cluster.auth.pwd}" token_hash="MD5"/>
     <pbcast.GMS print_local_addr="true" join_timeout="3000"
                 view_bundling="true" view_ack_collection_timeout="5000"/>
     <FRAG2 frag_size="60000"/>
     <pbcast.STATE_TRANSFER/>
</config>
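As a side note (a suggestion, not from the original post), the transport's logical-to-physical address cache can be dumped from the running channel, which helps to see which UUIDs the "no physical address" warnings refer to:

    // needs: import org.jgroups.JChannel; import org.jgroups.protocols.TP;
    // channel is the application's existing JChannel
    TP transport = channel.getProtocolStack().getTransport();
    System.out.println(transport.printLogicalAddressCache());   // UUID -> physical address mappings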

------------------------------------------------------------------------------
"Kędziora, Adam" | 17 Jun 15:44 2014

[jgroups-users] Connecting dynamic clients from behind the NAT

Let's say I have the following network topology:

NAT0[computer0,computer1]--(ROUTER0 /*router of NAT0*/)--INTERNET[computer2 /*publicly
available machine*/,...]--(ROUTER1 /*router of NAT1*/)--NAT1[computer3,computer4]

From each of the computers I can see computer2 - let's say 4.4.4.4 - but computer2 can't see any of
[computer0,computer1,computer3,computer4] as they are behind NAT.

I want to create a cluster where computer[0..4] can communicate freely, but I don't want to configure
external_addr because I don't necessarily know it (as an app developer I won't even see the machine the
app is deployed on), and it can change over time (say computer0, 1, 3 and 4 are laptops that can be plugged
into either the NAT0, NAT1 or INTERNET network).
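A common approach in this situation (noted here as an assumption, not something from the original post) is to run a GossipRouter on the publicly reachable computer2 and let all members use TUNNEL, so that no member needs external_addr. A minimal sketch, assuming a hypothetical tunnel.xml stack whose TUNNEL points at the router on 4.4.4.4:

    // on computer2 (public host), start the router, e.g.:
    //   java -cp jgroups.jar org.jgroups.stack.GossipRouter -port 12001
    // on each member (computer0..4):
    System.setProperty("jgroups.tunnel.gossip_router_hosts", "4.4.4.4[12001]");
    JChannel ch = new JChannel("tunnel.xml");   // tunnel.xml: a TUNNEL-based stack (assumption)
    ch.connect("nat-cluster");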

Adam Kędziora
Software developer / Projektant-programista

PSI Polska Sp. z o.o.
ul. Towarowa 35
61-896 Poznań
Polska

Tel. / Phone: 
Fax: +48 61 6556-555
akedziora@psi.pl
www.psi.pl

Entered in the Register of Entrepreneurs of the National Court Register at the District Court in Poznań,
8th Commercial Division of the National Court Register, under KRS number 0000216571. President of the board:
Arkadiusz Niemira. Share capital: PLN 2,000,000.00. Tax ID (NIP): 778-14-20-509

The information contained in this message is confidential or protected by law. If you are not the intended
recipient, please contact the sender and delete this message. Any unauthorised copying of this message
or unauthorised distribution of the information contained herein is prohibited.

------------------------------------------------------------------------------
tdumidu | 13 Jun 11:46 2014

[jgroups-users] Causal Order Broadcasting with Vector Clocks

Hi,

Does JGroups support causal ordering of messages? Is it a built-in feature
of JGroups? How can I use it? Are there any tutorials for it?

Regards,
Thisara


------------------------------------------------------------------------------
pw | 13 Jun 11:46 2014

[jgroups-users] Using Gossip Router but all 4 nodes do not communicate with each other

Hi,

I have 4 nodes in the cluster, named A, B, C and H, and each node runs on a
separate machine. I use a GossipRouter for connecting, since two machines are on
one subdomain and the other two are on another subdomain.
A GossipRouter is started on each machine; below is the gossiprouter.xml:

<config xmlns="urn:org:jgroups"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:org:jgroups
http://www.jgroups.org/schema/JGroups-3.3.xsd">

    <TUNNEL
        gossip_router_hosts="${jgroups.tunnel.gossip_router_hosts:ltlnxp1u.nroot.com[12003],ltlnxp2u.nroot.com[12003],inflmp3d.test.nroot.net[12003],inflmp5d.test.nroot.net[12003]}"/>
    <PING num_initial_members="1" num_initial_srv_members="2"
force_sending_discovery_rsps="true" timeout="6000"/>
    <MERGE2/>
    <FD/>
    <VERIFY_SUSPECT/>
    <pbcast.NAKACK2 use_mcast_xmit="false"/>
    <UNICAST3/>
    <pbcast.STABLE/>
    <pbcast.GMS/>
    <UFC/>
    <MFC/>
    <FRAG2/>
    <pbcast.STATE_TRANSFER/>
    <pbcast.FLUSH timeout="2000"/>
</config>

The 4 nodes are able to form a cluster, but when I check the JGroups debug
logs, it appears that a node is communicating with only 2 other nodes
instead of 3.
Below are the JGroups logs:
Cluster is formed: [LDP-C, LDP-A, LDP-B, LDP-H]

Node A:
[TUNNEL::OOB-1,LOCALTEST_MyCluster,LDP-A] - sent a message to LDP-C, GR used
ltlnxp2u.nroot.com/182.124.212.169:12003
[FD::Timer-4,LOCALTEST_MyCluster,LDP-A] - LDP-A: sending are-you-alive msg
to LDP-B
[TUNNEL::Timer-4,LOCALTEST_MyCluster,LDP-A] - sent a message to LDP-B, GR
used ltlnxp1u.nroot.com/182.124.212.170:12003

Node B:
[FD::Timer-2,LOCALTEST_MyCluster,LDP-B] - LDP-B: sending are-you-alive msg
to LDP-H
[TUNNEL::Timer-2,LOCALTEST_MyCluster,LDP-B] - sent a message to LDP-H, GR
used inflmp3d.test.nroot.net/189.187.177.58:12003
[TUNNEL::OOB-2,LOCALTEST_MyCluster,LDP-B] - sent a message to LDP-A, GR used
inflmp5d.test.nroot.net/189.187.177.60:12003

Node C:
[FD::Timer-2,LOCALTEST_MyCluster,LDP-C] - LDP-C: sending are-you-alive msg
to LDP-A
[TUNNEL::Timer-2,LOCALTEST_MyCluster,LDP-C] - sent a message to LDP-A, GR
used ltlnxp2u.nroot.com/182.124.212.169:12003
[TUNNEL::OOB-1,LOCALTEST_MyCluster,LDP-C] - sent a message to LDP-H, GR used
inflmp3d.test.nroot.net/189.187.177.58:12003

Node H:
[FD::Timer-2,LOCALTEST_MyCluster,LDP-H] - LDP-H: sending are-you-alive msg
to LDP-C
[TUNNEL::Timer-2,LOCALTEST_MyCluster,LDP-H] - sent a message to LDP-C, GR
used inflmp3d.test.nroot.net/189.187.177.58:12003
[TUNNEL::OOB-1,LOCALTEST_MyCluster,LDP-H] - sent a message to LDP-B, GR used
inflmp3d.test.nroot.net/189.187.177.58:12003

What is the reason for this? How can this be fixed?

I also did a small test by running the 4 nodes on one machine with a
single GossipRouter on that machine. Once again I observed that only 3
nodes communicated with each other.

All help will be appreciated.


------------------------------------------------------------------------------
Jim Thomas | 6 Jun 03:35 2014

[jgroups-users] Odd RPC behavior

I'm using a muxed RPC on Android with JGroups 3.4.4, presently with two nodes.  I'm doing a 30 second periodic callRemoteMethodsWithFuture(null ...)  from node 1 and occasionally the call does not go through on node 2 until the next (of the same) call is sent.  So what I see is:

T    N1                N2
0    rpc1 fc1          rpc1
30   rpc2              nothing received
60   rpc3 fc2,fc3      rpc2 rpc3  (receives one call right after the other)
90   rpc4 fc4          rpc4

The future callbacks always show success=true and suspected=false.  On the call options I set the timeout to 1000 (1 sec right?) but I don't get any timeout behavior as far as I can tell.
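For reference, a hedged sketch of the call pattern described above (the method name "ping" and the dispatcher variable are hypothetical); the RequestOptions timeout is indeed in milliseconds, so 1000 is one second:

    // needs: org.jgroups.blocks.{RequestOptions, ResponseMode, MethodCall},
    //         org.jgroups.util.{NotifyingFuture, RspList}
    RequestOptions opts = new RequestOptions(ResponseMode.GET_ALL, 1000);  // 1000 ms timeout
    NotifyingFuture<RspList<Object>> future =
            dispatcher.callRemoteMethodsWithFuture(null, new MethodCall("ping"), opts);
    RspList<Object> rsps = future.get();  // each Rsp reports wasReceived()/wasSuspected()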

The channels are carrying frequent unreliable traffic and infrequent rpc traffic but the rpc calls of other methods seem to be going through reliably.  

I was getting similar behavior of missed calls on the remote node when I was using callRemoteMethods with GET_NONE.

This is over wifi so I can see that maybe a message could be lost but this seems more frequent than I'd expect.  But I would expect the message to be resent long before the next RPC call.  

I do have rpc calls back and forth but I thought I had avoided deadlock.  It seems to me that if this were the case I'd see the same problem on the local as well as the remote node and it would happen most of the time.  I'd also expect it to not happen here since this is the first message in the chain of activity.

Here is my config:

    xmlns="urn:org:jgroups"
    xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.3.xsd" >

    <UDP
        enable_diagnostics="true"
        ip_mcast="true"
        ip_ttl="${jgroups.udp.ip_ttl:8}"
        loopback="true"
        max_bundle_size="1400"
        max_bundle_timeout="5"
        mcast_port="${jgroups.udp.mcast_port:45588}"
        mcast_recv_buf_size="200K"
        mcast_send_buf_size="200K"
        oob_thread_pool.enabled="true"
        oob_thread_pool.keep_alive_time="5000"
        oob_thread_pool.max_threads="8"
        oob_thread_pool.min_threads="1"
        oob_thread_pool.queue_enabled="false"
        oob_thread_pool.queue_max_size="100"
        oob_thread_pool.rejection_policy="discard"
        thread_naming_pattern="cl"
        thread_pool.enabled="true"
        thread_pool.keep_alive_time="5000"
        thread_pool.max_threads="8"
        thread_pool.min_threads="2"
        thread_pool.queue_enabled="true"
        thread_pool.queue_max_size="10000"
        thread_pool.rejection_policy="discard"
        timer.keep_alive_time="3000"
        timer.max_threads="10"
        timer.min_threads="4"
        timer.queue_max_size="500"
        timer_type="new3"
        tos="8"
        ucast_recv_buf_size="200K"
        ucast_send_buf_size="200K" />

    <PING />

    <MERGE2
        max_interval="30000"
        min_interval="10000" />

    <FD_SOCK />

    <FD_ALL />

    <VERIFY_SUSPECT timeout="1500" />

    <BARRIER />

    <pbcast.NAKACK2
        discard_delivered_msgs="true"
        max_msg_batch_size="500"
        use_mcast_xmit="false"
        xmit_interval="500"
        xmit_table_max_compaction_time="30000"
        xmit_table_msgs_per_row="2000"
        xmit_table_num_rows="100" />

    <UNICAST3
        conn_expiry_timeout="0"
        max_msg_batch_size="500"
        xmit_interval="500"
        xmit_table_max_compaction_time="60000"
        xmit_table_msgs_per_row="2000"
        xmit_table_num_rows="100" />

    <pbcast.STABLE
        desired_avg_gossip="50000"
        max_bytes="4M"
        stability_delay="1000" />

    <pbcast.GMS
        join_timeout="3000"
        print_local_addr="true"
        view_bundling="true" />

    <FRAG frag_size="1000" />

    <pbcast.STATE_TRANSFER />

    <CENTRAL_LOCK num_backups="2" />

</config>

Any ideas?

Thanks,

JT
------------------------------------------------------------------------------
Paul Illingworth | 2 Jun 14:01 2014

[jgroups-users] NPE logged by FORK protocol when fork_stack_id not found on local node.

Dear all,

I am getting an error logged when using the FORK protocol. I am using jgroups-3.4.4.Final.

ERROR [Incoming-2,shared=udp] (FORK.java:111) - failed passing up batch
java.lang.NullPointerException
        at org.jgroups.protocols.FORK.up(FORK.java:108)
        at org.jgroups.protocols.FRAG2.up(FRAG2.java:182)
        at org.jgroups.protocols.FlowControl.up(FlowControl.java:434)
        at org.jgroups.protocols.FlowControl.up(FlowControl.java:434)
        at org.jgroups.stack.Protocol.up(Protocol.java:409)
        at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:294)
        at org.jgroups.protocols.UNICAST2.removeAndPassUp(UNICAST2.java:919)
        at org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:800)
        at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:415)
        at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:600)
        at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:147)
        at org.jgroups.protocols.FD.up(FD.java:255)
        at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:301)
        at org.jgroups.protocols.MERGE2.up(MERGE2.java:209)
        at org.jgroups.protocols.Discovery.up(Discovery.java:379)
        at org.jgroups.protocols.TP$ProtocolAdapter.up(TP.java:2615)
        at org.jgroups.protocols.TP.passMessageUp(TP.java:1405)
        at org.jgroups.protocols.TP$MyHandler.run(TP.java:1591)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)

In my case I am configuring Infinispan with a stack that contains the FORK protocol, but with no fork stacks defined. For additional JChannels I am hijacking the Infinispan channel and passing in the fork_stack_id at that point.

The error is logged when the FORK protocol receives a message from a fork_stack_id that the local node does not currently have. I would have thought that in this case this should be handled quietly with a simple null check and no exception logged (or maybe logged at a much lower level than error).

            Protocol bottom_prot=get(fork_stack_id);
            MessageBatch mb=new MessageBatch(batch.dest(), batch.sender(), batch.clusterName(), batch.multicast(), list);
            try {
// CHECK FOR NULL HERE PERHAPS?
                bottom_prot.up(mb);
            }
            catch(Throwable t) {
                log.error("failed passing up batch", t);
            }
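For what it's worth, a sketch of the null check being suggested (my wording, not the actual FORK code):

            Protocol bottom_prot=get(fork_stack_id);
            if(bottom_prot != null) {
                MessageBatch mb=new MessageBatch(batch.dest(), batch.sender(), batch.clusterName(), batch.multicast(), list);
                try {
                    bottom_prot.up(mb);
                }
                catch(Throwable t) {
                    log.error("failed passing up batch", t);
                }
            }
            else if(log.isTraceEnabled())
                log.trace("no fork-stack found for id " + fork_stack_id + "; dropping messages");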

Other than noise in my log file it is not causing any issues.

Is this a real issue or is it down to me not configuring or using FORK in the way it was intended? Am I supposed to be able to create forked stacks dynamically like this?

Paul I.
------------------------------------------------------------------------------
thisara | 1 Jun 21:14 2014

[jgroups-users] JGroup Simplechat application for Android Phones

Hi,

I'm new to development. I want to build the JGroups SimpleChat application on
Android, using the Android Developer Toolkit (ADT).
Below are the two classes created in the project: one is the main activity class
and the other is the simple chat program. I tried to run this application in two
Android virtual devices, but it is not working. I would highly appreciate it if
someone could help me get this program working.

package com.example.jgroupan;

import android.os.Bundle;
import android.app.Activity;
import android.view.Menu;

public class MainActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        new Thread(new Runnable() {

            @Override
            public void run() {
                try {
                    new SimpleChat().start();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }).start();   // note: the thread has to be started, otherwise SimpleChat never runs
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        // Inflate the menu; this adds items to the action bar if it is present.
        getMenuInflater().inflate(R.menu.main, menu);
        return true;
    }

}

And the other class is:

package com.example.jgroupan;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;
import org.jgroups.View;

public class SimpleChat extends ReceiverAdapter {

    JChannel channel;
    String user_name = System.getProperty("user.name", "n/a");

    public void start() throws Exception {
        System.setProperty("java.net.preferIPv4Stack", "true");
        channel = new JChannel();
        channel.setReceiver(this);
        channel.connect("chatCluster");
        System.out.println("address");
        eventLoop();
        channel.close();
    }

    private void eventLoop() {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        while (true) {
            try {
                System.out.print("> ");
                System.out.flush();
                String line = in.readLine().toLowerCase();
                if (line.startsWith("quit") || line.startsWith("exit")) {
                    break;
                }
                line = "[" + user_name + "] " + line;
                Message msg = new Message(null, null, line);
                channel.send(msg);
            } catch (Exception e) {
            }
        }
    }

    public void viewAccepted(View new_view) {
        System.out.println("** view: " + new_view);
    }

    public void receive(Message msg) {
        System.out.println(msg.getSrc() + ": " + msg.getObject());
    }

}
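One more observation, offered as an assumption rather than a fix: on Android there is no interactive console, so the System.in based eventLoop() above never receives input. A hedged alternative is to keep the channel open and send from the UI instead, e.g. with a method like this on SimpleChat (the method name is hypothetical):

    public void sendChat(String text) throws Exception {
        // no console on Android: messages come from the UI instead of System.in
        Message msg = new Message(null, null, "[" + user_name + "] " + text);
        channel.send(msg);
    }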


------------------------------------------------------------------------------
Bela Ban | 28 May 13:47 2014

[jgroups-users] JGroups 3.5.0.Beta7

FYI,

I just released Beta7; it contains Tristan's SASL-related changes and my 
monster commit for JGRP-1826 [1].

JGRP-1826 contains a lot of changes to the discovery protocols and forms 
the basis for making discovery with file-based (FILE_PING) or cloud-based 
stores (S3_PING, GOOGLE_PING) much faster, especially for large clusters.

However, these optimizations won't come into play until JGRP-1841 [2].

Nevertheless I wanted to merge this monster commit and tackle the 
remaining work in new branches.

It also removes dependencies between merge and discovery protocols; the 
code is much cleaner now.

I tested the following discovery protocols with UDP(ip_mcast=false) and 
TCP for discovery and merging (MERGE3):
- PING
- MPING
- TCPPING
- TCPGOSSIP
- FILE_PING
- GOOGLE_PING
- JDBC_PING

*Not* tested were S3_PING, SWIFT_PING and RACKSPACE_PING.

S3_PING *should* work, as it contains almost all functionality of 
GOOGLE_PING (which extends it), but it would nevertheless be good if we 
could test this.

Volunteers ?

Cheers,

[1] https://issues.jboss.org/browse/JGRP-1826
[2] https://issues.jboss.org/browse/JGRP-1841
-- 
Bela Ban, JGroups lead (http://www.jgroups.org)

------------------------------------------------------------------------------
