g4 g | 9 May 22:32 2014

[jgroups-users] RELAY2 cannot be cast to org.jgroups.protocols.TP

Hi,
I'm trying out RELAY2 with JGroups 3.2.10 and am getting this error:

java.lang.ClassCastException: org.jgroups.protocols.relay.RELAY2 cannot be cast to org.jgroups.protocols.TP
    at org.jgroups.stack.Protocol.getTransport(Protocol.java:156)
    at org.jgroups.protocols.relay.RELAY2.init(RELAY2.java:149)
    at org.jgroups.stack.ProtocolStack.initProtocolStack(ProtocolStack.java:857)
    at org.jgroups.stack.ProtocolStack.setup(ProtocolStack.java:469)
    at org.jgroups.JChannel.init(JChannel.java:786)
    at org.jgroups.JChannel.<init>(JChannel.java:162)
    at org.jgroups.JChannel.<init>(JChannel.java:142)


My JChannel is being fed the following file, per the docs: "To use RELAY2, it has to be placed at the top of the configuration, e.g.:"

    <relay.RELAY2 site="site2" config="/path/to/my/relay2.xml" relay_multicasts="true" />       
    <FORWARD_TO_COORD />
    <UDP bind_addr="192.168.0.5" mcast_port="7600" />
.....
.....
.....
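
For comparison, a minimal sketch (an assumption on my part, not the
poster's actual file) of the conventional ordering: in the XML file the
transport is listed first and the protocols at the top of the stack come
last, so RELAY2 would normally appear at the end rather than before UDP.
Attribute values and paths below are only placeholders.

    <config xmlns="urn:org:jgroups">
        <UDP bind_addr="192.168.0.5" mcast_port="7600" />
        <!-- ... discovery, failure detection, NAKACK2, GMS, etc. ... -->
        <FORWARD_TO_COORD />
        <relay.RELAY2 site="site2" config="/path/to/my/relay2.xml" relay_multicasts="true" />
    </config>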
------------------------------------------------------------------------------
Nickqwer | 9 May 14:13 2014

[jgroups-users] Address Uniqueness

Hi, is it true that since an address == local_host_name + random_number,
two nodes started on the same machine could end up with the same address?


------------------------------------------------------------------------------
Bela Ban | 9 May 13:55 2014

[jgroups-users] Emails from belaban <at> yahoo.com getting dropped?

Test. People told me that my emails are getting dropped due to some new 
policy implemented by Yahoo...

-- 
Bela Ban, JGroups lead (http://www.jgroups.org)

------------------------------------------------------------------------------
Marilen Corciovei | 8 May 15:28 2014

[jgroups-users] Questions about the RPCDispatcher

Hello,

After JGroups saved our ehcache distributed infrastructure we decided to
use it for all our cluster communication, initially with simple messages
and now with RPC. During design and testing I realized I am not fully
aware of the threads involved, hence this mail asking for more info.
Let's assume I have two VMs communicating via synchronous RPC (blocking,
no timeout set): can something like the following be valid, or is it a
certain deadlock?

A ----- calls -----> B
B ----- calls -----> A
A -- responds --> B
B -- responds --> A

In fact, B waits for another response from A before responding itself.
Of course this can be considered bad practice, but if JGroups handles
incoming calls from other parties on a thread different from the one
that issued the initial call, it might work. A rough sketch of this
call pattern follows below.
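
To make the scenario concrete, here is a minimal sketch (my assumption of
the setup, not Len's actual code; class and method names are hypothetical)
of a blocking RPC with no timeout between two members using RpcDispatcher:

import org.jgroups.Address;
import org.jgroups.JChannel;
import org.jgroups.blocks.MethodCall;
import org.jgroups.blocks.RequestOptions;
import org.jgroups.blocks.RpcDispatcher;

public class PingPong {
    JChannel ch;
    RpcDispatcher disp;

    // Invoked remotely. If this handler in turn issued a blocking RPC back
    // to the caller, both sides would depend on the receiver having a free
    // thread to serve the nested call.
    public String hello(String from) {
        return "hello " + from + ", this is " + ch.getAddress();
    }

    void start() throws Exception {
        ch = new JChannel();                 // default stack (udp.xml)
        disp = new RpcDispatcher(ch, this);  // 'this' serves incoming calls
        ch.connect("rpc-demo");

        Address target = ch.getView().getMembers().get(0); // pick a member
        MethodCall call = new MethodCall("hello",
                new Object[]{ch.getName()}, new Class[]{String.class});
        // Blocking call, timeout 0 = wait forever, as in the scenario above
        String rsp = disp.callRemoteMethod(target, call,
                RequestOptions.SYNC().setTimeout(0));
        System.out.println("response: " + rsp);
    }
}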

What about A receiving calls from B and C: does JGroups create a thread
per call, or are the calls handled in sequence?

Obviously a timeout should be set, but I hope that with a better idea of
which threads are created I can make better decisions.

Thank you,
Len

------------------------------------------------------------------------------
Nickqwer | 5 May 14:27 2014

[jgroups-users] Wasteful creation of JChannels

Hi.
How critical is it to have a lot of JChannels from the viewpoint of
network load? If I create a new JChannel for each message type, how bad
would that be for small applications and for big applications? By a big
application I mean one that is used by a lot of users from the outside,
like an online market.
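
For concreteness, a minimal sketch (my reading of the question, not a
recommendation; channel and cluster names are hypothetical) of what "a
JChannel per message type" would mean, i.e. a full protocol stack per
type:

import org.jgroups.JChannel;

public class PerTypeChannels {
    public static void main(String[] args) throws Exception {
        // One channel (and thus one protocol stack) per message type:
        JChannel orders = new JChannel();   // default udp.xml stack
        JChannel events = new JChannel();
        orders.connect("orders-cluster");
        events.connect("events-cluster");
        // Each channel now runs its own discovery, failure detection and
        // retransmission tables independently of the others.
    }
}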


------------------------------------------------------------------------------
amangoyal007 | 21 Apr 23:42 2014

[jgroups-users] Not able to use JGroups

Hi,

I tried using JGroups for clustering with GlassFish and am getting the
error below:

failed to join /224.0.0.75:7500 on net5: java.net.SocketException:
Unrecognized Windows Sockets error: 0: no Inet4Address associated with
interface


------------------------------------------------------------------------------
Bela Ban | 17 Apr 17:27 2014

[jgroups-users] API change in MessageDispatcher, RpcDispatcher, MuxMessageDispatcher and MuxRpcDispatcher

I wanted to make an incompatible API change in 3.5 [1].

Usually, I only do this in major versions, but this is a bug and folks 
using the methods below will encounter a ClassCastException *if they 
access the result of the future in the FutureListener*.

The affected methods are:
* MessageDispatcher.castMessageWithFuture(Collection<Address>, 
MethodCall, RequestOptions, FutureListener<T>)
* RpcDispatcher.callRemoteMethodsWithFuture(Collection<Address>, 
MethodCall, RequestOptions, FutureListener<T>)

The CCE only occurs if the result of the future is accessed in the
*FutureListener*, e.g.

FutureListener<String> listener=new FutureListener<String>() {
    public void futureDone(Future<String> future) {
        try {
            String val=future.get(); // <----- CCE, as get() really returns RspList<String> !
        }
        catch(Exception e) {
            e.printStackTrace();
        }
    }
};

The signature of the above 2 methods therefore needs to be changed from 
FutureListener<T> to FutureListener<RspList<T>>.
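
For illustration, after such a change the listener would presumably be
typed against the response list instead (a sketch based on my reading of
the proposal, not code taken from the branch):

FutureListener<RspList<String>> listener=new FutureListener<RspList<String>>() {
    public void futureDone(Future<RspList<String>> future) {
        try {
            RspList<String> rsps=future.get(); // type matches, no more CCE
            for(Rsp<String> rsp: rsps.values())
                System.out.println(rsp.getValue());
        }
        catch(Exception e) {
            e.printStackTrace();
        }
    }
};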

Is (1) anyone using this, (2) providing a non-null FutureListener and 
(3) calling get() on the future received in futureDone() ?

If so, would this change cause you any problems ? Let me know as soon as 
possible, so I can either go ahead and make this change or postpone it 
until 4.0.

The change is in branch JGRP-1687 if you want to try this out.

[1] https://issues.jboss.org/browse/JGRP-1687

-- 
Bela Ban, JGroups lead (http://www.jgroups.org)

------------------------------------------------------------------------------
Jim Thomas | 10 Apr 03:28 2014

[jgroups-users] Implementing Multiple ReplicatedHashMap Instances on a Cluster

I need multiple ReplicatedHashMaps in what would otherwise be a single-channel cluster (all nodes need the exact same interface and have a fixed configuration for the duration of their connection to the cluster). Thus far I have used multiple channels on a shared transport to accomplish this successfully. It is not clear to me how much extra overhead I'm suffering due to having multiple channels for this vs a single channel (discovery, membership, etc. are repeated, right?), but the allure of an even lighter-weight implementation using forked channels led me to investigate that possibility. Very quickly I discovered that forked channels don't implement getState (if I understand the reasoning behind that correctly, it is partially why a forked channel is attractive for my topology), but ReplicatedHashMap uses getState to synchronize with the cluster when it first joins.

My application runs over wifi, so I'm very sensitive to extra network traffic. I've been contemplating modifying ReplicatedHashMap to either work with multiple instances on a single channel or, alternatively, to work with forked channels. For a single channel I think I can make it work by switching to MuxRpcDispatcher and assigning a map number to each ReplicatedHashMap, but I have not figured out how to fix getState / setState (can those be muxed?). For forked channels I'd have to add getState / setState to the RPC interface and call that from start() instead of using state transfer for synchronization. Do you have any opinions on which way makes more sense? Or are multiple channels on a shared transport not as bad as I'm thinking? A rough sketch of the fork-channel variant follows below.
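
For reference, a minimal sketch (my assumption of the fork-channel setup
being considered, not Jim's code; ids and file names are hypothetical, and
it assumes the FORK protocol is available in the main stack) of hanging
several fork channels off one main channel:

import org.jgroups.JChannel;
import org.jgroups.fork.ForkChannel;

public class ForkSetup {
    public static void main(String[] args) throws Exception {
        JChannel main = new JChannel("main-stack.xml"); // existing main stack
        // Fork channels share the main channel's transport, discovery and
        // membership; each adds only a lightweight fork stack on top.
        ForkChannel map1 = new ForkChannel(main, "rhm-fork", "map-1");
        ForkChannel map2 = new ForkChannel(main, "rhm-fork", "map-2");

        main.connect("cluster");
        map1.connect("ignored"); // cluster name is ignored for fork channels
        map2.connect("ignored");
    }
}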


I think I've also found a bug. When setting up a fork channel I get an error if I don't have a udp.xml, but when I don't use a fork channel things work fine with my main-stack.xml protocol file. Literally, if I just copy my main-stack.xml to udp.xml and change nothing else, it will not crash. Here is the exception:
04-09 17:33:38.270: W/System.err(5478): java.io.FileNotFoundException: JGRP000003: file "udp.xml" not found
04-09 17:33:38.270: W/System.err(5478):  at org.jgroups.conf.ConfiguratorFactory.getXmlConfigurator(ConfiguratorFactory.java:211)
04-09 17:33:38.270: W/System.err(5478):  at org.jgroups.conf.ConfiguratorFactory.getStackConfigurator(ConfiguratorFactory.java:102)
04-09 17:33:38.270: W/System.err(5478):  at org.jgroups.JChannel.<init>(JChannel.java:172)
04-09 17:33:38.270: W/System.err(5478):  at org.jgroups.JChannel.<init>(JChannel.java:123)
04-09 17:33:38.270: W/System.err(5478):  at org.jgroups.fork.ForkChannel.<init>(ForkChannel.java:75)
04-09 17:33:38.270: W/System.err(5478):  at org.jgroups.fork.ForkChannel.<init>(ForkChannel.java:118)
04-09 17:33:38.270: W/System.err(5478):  at com.novawurks.jgroupstest.JGroupsTestActivity$JGroupsSetupThread.run(JGroupsTestActivity.java:119)

Obviously I can just use udp.xml for my file name but I thought I'd pass this along.

I'm running 3.4.1 ported to Android.

Thanks,

JT
------------------------------------------------------------------------------
Bela Ban | 7 Apr 18:21 2014

[jgroups-users] Running a JGroups cluster on Google Compute Engine

FYI: 
http://belaban.blogspot.ch/2014/04/running-jgroups-on-google-compute-engine.html
-- 
Bela Ban, JGroups lead (http://www.jgroups.org)

------------------------------------------------------------------------------
ChunWei Ho | 7 Apr 07:48 2014

[jgroups-users] JGroups (received X identical messages from non-member)

Hi,

I am a new JGroups user. I am using JGroups 3.2.10.Final.

We maintain a cluster of 4 nodes using TCP/TCPPING (multicasting is not used since the C and K machines cannot multicast to each other).

Our configuration looks like:
TCP(bind_addr=localhost;bind_port=25000)
:TCPPING(initial_hosts=C03[25000],C10[25000],K03[25000],K04[25000];port_range=1;timeout=2000;num_initial_members=4)
:MERGE3(min_interval=10000;max_interval=30000)
:FD_SOCK:FD_ALL:VERIFY_SUSPECT(timeout=1500)
:pbcast.NAKACK2(xmit_interval=1000;xmit_table_num_rows=100;xmit_table_msgs_per_row=2000;xmit_table_max_compaction_time=30000;max_msg_batch_size=500;use_mcast_xmit=false;discard_delivered_msgs=true)
:pbcast.STABLE(stability_delay=1000;desired_avg_gossip=50000;max_bytes=4M)
:pbcast.GMS(join_timeout=3000;print_local_addr=true;view_bundling=true)
:UNICAST2(max_bytes=10M;xmit_table_num_rows=100;xmit_table_msgs_per_row=2000;xmit_table_max_compaction_time=60000;max_msg_batch_size=500)
:FRAG2(frag_size=61440):UFC(max_credits=2M;min_threshold=0.4)
:RSVP(resend_interval=2000;timeout=10000)

Twice the cluster has suddenly gone into a mode where it effectively disbands, but the nodes keep sending messages to each other (which are dropped because the senders are no longer members of the receivers' views). Just a snippet of the logs:

K04

04:26:34,379 WARN  [NAKACK2:?] [JGRP00011] K04-49310: dropped message 15,224,123 from non-member C10-55424 (view=MergeView::[K03-44134|1977] [K03-44134, C04-44519, K04-49310, C06-43970, C03-26935], subgroups=[K03-44134|1976] [K03-44134, C04-44519, C06-43970, C03-26935], [K04-49310|1953] [K04-49310]) (received 223 identical messages from C10-55424 in the last 107,372 ms)
04:28:20,370 WARN  [NAKACK2:?] [JGRP00011] K04-49310: dropped message 15,264,086 from non-member C10-55424 (view=MergeView::[K03-44134|1977] [K03-44134, C04-44519, K04-49310, C06-43970, C03-26935], subgroups=[K03-44134|1976] [K03-44134, C04-44519, C06-43970, C03-26935], [K04-49310|1953] [K04-49310]) (received 39,706 identical messages from C10-55424 in the last 105,992 ms)
04:30:06,488 WARN  [NAKACK2:?] [JGRP00011] K04-49310: dropped message 15,284,996 from non-member C10-55424 (view=MergeView::[K03-44134|1977] [K03-44134, C04-44519, K04-49310, C06-43970, C03-26935], subgroups=[K03-44134|1976] [K03-44134, C04-44519, C06-43970, C03-26935], [K04-49310|1953] [K04-49310]) (received 20,465 identical messages from C10-55424 in the last 106,158 ms)
04:31:54,737 WARN  [NAKACK2:?] [JGRP00011] K04-49310: dropped message 15,298,484 from non-member C10-55424 (view=MergeView::[K03-44134|1977] [K03-44134, C04-44519, K04-49310, C06-43970, C03-26935], subgroups=[K03-44134|1976] [K03-44134, C04-44519, C06-43970, C03-26935], [K04-49310|1953] [K04-49310]) (received 13,483 identical messages from C10-55424 in the last 108,249 ms)
04:31:54,752 WARN  [NAKACK2:?] [JGRP00011] K04-49310: dropped message 15,298,477 from non-member C10-55424 (view=MergeView::[K03-44134|1977] [K03-44134, C04-44519, K04-49310, C06-43970, C03-26935], subgroups=[K03-44134|1976] [K03-44134, C04-44519, C06-43970, C03-26935], [K04-49310|1953] [K04-49310]) (received 13,483 identical messages from C10-55424 in the last 108,265 ms)

K03

04:30:06,503 WARN  [NAKACK2:?] [JGRP00011] K03-44134: dropped message 4,273,272 from non-member K04-49310 (view=[K03-44134|1978] [K03-44134, C04-44519, C06-43970, C03-26935]) (received 1,127 identical messages from K04-49310 in the last 106,139 ms)
04:31:32,403 WARN  [NAKACK2:?] [JGRP00011] K03-44134: dropped message 15,298,859 from non-member C10-55424 (view=[K03-44134|1980] [K03-44134, C04-44519, C06-43970, C03-26935]) (received 9,532 identical messages from C10-55424 in the last 98,639 ms)
04:33:39,915 WARN  [NAKACK2:?] [JGRP00011] K03-44134: dropped message 4,274,558 from non-member K04-49310 (view=[K03-44134|1980] [K03-44134, C04-44519, C06-43970, C03-26935]) (received 1,287 identical messages from K04-49310 in the last 213,413 ms)

C10

04:28:20,365 WARN  [NAKACK2:?] [JGRP00011] C10-55424: dropped message 4,272,146 from non-member K04-49310 (view=MergeView::[K03-44134|1975] [K03-44134, C04-44519, C06-43970, C03-26935, C10-55424], subgroups=[K03-44134|1974] [K03-44134, C04-44519, C06-43970, C03-26935], [K03-44134|1972] [C10-55424]) (received 442 identical messages from K04-49310 in the last 101,745 ms)
04:30:06,509 WARN  [NAKACK2:?] [JGRP00011] C10-55424: dropped message 4,273,272 from non-member K04-49310 (view=MergeView::[K03-44134|1975] [K03-44134, C04-44519, C06-43970, C03-26935, C10-55424], subgroups=[K03-44134|1974] [K03-44134, C04-44519, C06-43970, C03-26935], [K03-44134|1972] [C10-55424]) (received 1,127 identical messages from K04-49310 in the last 106,143 ms)
04:31:57,752 WARN  [GMS:?] C10-55424: not member of view [K03-44134|1980]; discarding it

C03

04:28:20,363 WARN  [NAKACK2:?] [JGRP00011] C03-26935: dropped message 4,272,146 from non-member K04-49310 (view=[K03-44134|1978] [K03-44134, C04-44519, C06-43970, C03-26935]) (received 701 identical messages from K04-49310 in the last 105,990 ms)
04:29:53,762 WARN  [NAKACK2:?] [JGRP00011] C03-26935: dropped message 15,288,960 from non-member C10-55424 (view=[K03-44134|1978] [K03-44134, C04-44519, C06-43970, C03-26935]) (received 10,724 identical messages from C10-55424 in the last 99,376 ms)
04:30:06,504 WARN  [NAKACK2:?] [JGRP00011] C03-26935: dropped message 4,273,272 from non-member K04-49310 (view=[K03-44134|1978] [K03-44134, C04-44519, C06-43970, C03-26935]) (received 1,127 identical messages from K04-49310 in the last 106,140 ms)
04:31:32,403 WARN  [NAKACK2:?] [JGRP00011] C03-26935: dropped message 15,298,859 from non-member C10-55424 (view=[K03-44134|1980] [K03-44134, C04-44519, C06-43970, C03-26935]) (received 9,541 identical messages from C10-55424 in the last 98,641 ms)
04:31:46,708 WARN  [TCP:?] C03-26935: no physical address for C10-55424, dropping message
04:31:56,800 WARN  [NAKACK2:?] [JGRP00011] C03-26935: dropped message 4,274,558 from non-member K04-49310 (view=[K03-44134|1980] [K03-44134, C04-44519, C06-43970, C03-26935]) (received 1,287 identical messages from K04-49310 in the last 110,297 ms)


This pattern persists for several hours until all the nodes are restarted. The number of messages retransmitted seems rather significant, and we suspect the message log has also caused the application heap to fill up. It also seems strange that the nodes continue to send messages to each other when they are already aware that the destination is a non-member. Could anyone have a quick look at the configuration/logs and suggest where I should start to fix this?

Thanks and cheers,
Chun
------------------------------------------------------------------------------
Bela Ban | 27 Mar 18:17 2014

[jgroups-users] JGroups 3.5.0.Beta2

Released. Major new features compared to Beta1:

- Messages are now always looped back (mcasts are discarded when copies 
are received)
https://issues.jboss.org/browse/JGRP-1765

- DONT_LOOPBACK flag: messages can now be dropped at the transport level
  (a sketch of its use follows after this list)
https://issues.jboss.org/browse/JGRP-1816

- Removed bottleneck when sending a message to a non-existent dest
https://issues.jboss.org/browse/JGRP-1815

- UNICAST3: connections to non-members are now closed more quickly
https://issues.jboss.org/browse/JGRP-1814

- SHARED_LOOPBACK_PING: new discovery protocol which always discovers 
members in the same JVM. Used by unit tests.
https://issues.jboss.org/browse/JGRP-1809

- Hardened code for generating seqnos in UNICAST3 and NAKACK2
https://issues.jboss.org/browse/JGRP-1807
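
As an illustration of the DONT_LOOPBACK flag (a sketch, assuming the flag
is set as a transient message flag; not taken from the release notes):

import org.jgroups.JChannel;
import org.jgroups.Message;

public class DontLoopbackDemo {
    public static void main(String[] args) throws Exception {
        JChannel ch = new JChannel();   // default stack
        ch.connect("demo");
        Message msg = new Message(null, "hello".getBytes()); // null dest = multicast
        msg.setTransientFlag(Message.TransientFlag.DONT_LOOPBACK);
        ch.send(msg); // the sender itself will not receive this multicast
        ch.close();
    }
}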

-- 
Bela Ban, JGroups lead (http://www.jgroups.org)

------------------------------------------------------------------------------
