Picon
Picon

[jgroups-users] FILE_PING with custom AddressGenerator?

In https://github.com/belaban/JGroups/blob/master/doc/design/CloudBasedDiscovery.txt it recommends to use a custom AddressGenerator. As far as I could understand, it is to avoid repeating UUIDs. Is this done automatically or is a logic to be implemented ad-hoc?

Some testing shows that, if a node is shutdown and started up again, it will "repeat" itself with a different UUID. Shouldn't it take its UUID from the FILE_PING file? I think this question is related to a previous post.
This would avoid increasing the file size with each node restart and would also give some stability to node IDs. Is there any reason not to do this?

Thank you for your feedback!
Kind regards,
Matías

------------------------------------------------------------------------------
_______________________________________________
javagroups-users mailing list
javagroups-users <at> lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/javagroups-users
Picon
Picon

[jgroups-users] Guaranties given by RELAY and RELAY2

Reading about RELAY, I have the following doubts about what should I expect when using one of this layers.
RELAY.txt documentation says:
"The data centers in NYC and SFO are *completely autonomous local clusters*. There are no stability, flow control or
retransmission messages exchanged between NYC and SFO. This is critical because we don't want the SFO cluster to block
for example on waiting for credits from a node in the NYC cluster!"

But in RELAY2.txt it is said that message routing across sites is performed (and, I think, is the point of having RELAY). So, which guaranties are given across sites?

Also, it says:
"A relay member has a UDP stack which additionally contains a protocol RELAY at the top (shown in the bottom part
of the figure). RELAY has a JChannel which connects to the TCP group, but *only* when it is (or becomes) coordinator
of the local cluster. The configuration of the TCP channel is done via a property in RELAY."

That means that every member of each site has to be able to connect to the "global" cluster?
Strictly speaking (I think), every potential coordinator needs to be able to connect to the "global" cluster. Right? I guess I could handle it by specifying a coordinator selection algorithm.
Anyway, I _must_ have the RELAY setting in every node of the site (despite the fact I would force it to never become a coordinator), right? Or relaying is only important/active for the current coordinator?

Finally, in RELAY.txt says:
"Todo:
#3 Handling temp coordinator outage - how do we prevent message loss?
#4 State transfer - replication across clusters, to bootstrap initial coords in a local cluster"

Are these issues supported in RELAY2?
Messages can be lost across sites? FORWARD_TO_COORD fixes this issue?
State transfer isn't supported across sites?

Sorry for the long mail and many questions!
Kind regards,
Matías
------------------------------------------------------------------------------
_______________________________________________
javagroups-users mailing list
javagroups-users <at> lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/javagroups-users
Picon
Picon

Re: [jgroups-users] RELAY2 Draw example not working

Done https://github.com/belaban/JGroups/pull/254

Kind regards,
Matías

--------------------------------------------
El sáb 12-dic-15, Questions/problems related to using JGroups
<javagroups-users <at> lists.sourceforge.net> escribió:

 Asunto: Re: [jgroups-users] RELAY2 Draw example not working
 Para: javagroups-users <at> lists.sourceforge.net
 Fecha: sábado, 12 de diciembre de 2015, 5:10

 Yes, go ahead and create
 a PR to apply to the docs, thx !

 On 11/12/15 23:53, Questions/problems related
 to using JGroups wrote:
 > Thank you,
 changing the bind address to 127.0.0.1 worked!
 > I think it would be better if the
 documentation example is changed to
 > the
 following commands:
 > java
 -Djgroups.bind_addr=127.0.0.1
 -Djava.net.preferIPv4Stack=true
 >
 org.jgroups.demos.Draw -props ./sfo.xml -name sfo1
 > java -Djgroups.bind_addr=127.0.0.1
 -Djava.net.preferIPv4Stack=true
 >
 org.jgroups.demos.Draw -props ./sfo.xml -name sfo2
 > java -Djgroups.bind_addr=127.0.0.1
 -Djava.net.preferIPv4Stack=true
 >
 org.jgroups.demos.Draw -props ./lon.xml -name lon1
 > java -Djgroups.bind_addr=127.0.0.1
 -Djava.net.preferIPv4Stack=true
 >
 org.jgroups.demos.Draw -props ./lon.xml -name lon2
 > java -Djgroups.bind_addr=127.0.0.1
 -Djava.net.preferIPv4Stack=true
 >
 org.jgroups.demos.Draw -props ./nyc.xml -name nyc1
 > java -Djgroups.bind_addr=127.0.0.1
 -Djava.net.preferIPv4Stack=true
 >
 org.jgroups.demos.Draw -props ./nyc.xml -name nyc2
 > Or at least advice to change the
 initial_hosts configuration in global.xml.
 >
 > Other thing that
 caught my attention was that the TCP.bind_port setting.
 > I couldn't find where it is stated
 that this is taken as an _initial_
 >
 port. Used to the fact that if the bind port is taken,
 binding will
 > fail, it surprised me that
 in this case the next port is taken (which is
 > really useful in the case of RELAY2).
 >
 > If you want I can
 create a PR to update
 > http://www.jgroups.org/manual/index.html#Relay2Advanced
 and
 > http://www.jgroups.org/manual/index.html#TCP
 with these changes.
 >
 > Thank you again for your help!
 > Kind regards,
 >
 Matías
 >
 >
 >
 > El Viernes, 11 de
 diciembre, 2015 4:09:13, Questions/problems related to
 > using JGroups <javagroups-users <at> lists.sourceforge.net>
 escribió:
 >
 >
 >
 >     Try
 running 3 instances off of global only, e.g. Draw -props
 >     global.xml.
 >     If they don't find
 each other, the relayers won't be able to relay
 >     messages across data
 centers either.
 >
 > 
    I suggest replace localhost in
 TCPPING.initial_hosts with the actual
 > 
    bind address (bind_addr in TCP).
 Alternatively, use TCPGOSSIP
 >
 >     On 11/12/15 01:18,
 Questions/problems related to using JGroups wrote:
 >      > I'm following the
 documentation
 >      > <http://www.jgroups.org/manual/index.html#Relay2Advanced>
 to try to
 >      > execute the
 6-instance-3-sites RELAY2 Draw example, however, I'm
 not
 >      > being successful in
 it.
 >      > I created the
 following settings:
 >      >
 >      >  * lon.xml (http://pastebin.com/1utbEMuF)
 >      >  * nyc.xml (http://pastebin.com/3z1rDs8Z)
 >      >  * sfo.xml (http://pastebin.com/LxbTm7za)
 >      >  * relay2.xml (http://pastebin.com/s64ScyPt)
 >      >  * global.xml (http://pastebin.com/66FczbMF)
 >      >
 >     
 >
 >      > All lon/nyc/sfo are
 based on udp.xml. Just added RELAY2 (with
 >      >
 relay_multicasts="true") and FORWARD_TO_COORD to
 the stack and
 > 
    changed
 >      > the
 mcast_port for each one (as recommended in the
 documentation).
 >      > global.xml
 is based on tcp.xml (as recommended in the
 documentation).
 >      > relay2.xml
 is as defined in documentation (only changed
 >     global.xml paths).
 >      >
 >     
 > When I execute the 6 instances of Draw with:
 >      > java -cp './*'
 -Djava.net.preferIPv4Stack=true org.jgroups.demos.Draw
 >      > -props ./nyc.xml -name
 nyc2
 >      > java -cp
 './*' -Djava.net.preferIPv4Stack=true
 org.jgroups.demos.Draw
 >      >
 -props ./nyc.xml -name nyc1
 >     
 > java -cp './*' -Djava.net.preferIPv4Stack=true
 org.jgroups.demos.Draw
 >      >
 -props ./lon.xml -name lon2
 >     
 > java -cp './*' -Djava.net.preferIPv4Stack=true
 org.jgroups.demos.Draw
 >      >
 -props ./lon.xml -name lon1
 >     
 > java -cp '../*'
 -Djava.net.preferIPv4Stack=true
 > 
    org.jgroups.demos.Draw
 > 
     > -props ./sfo.xml -name sfo2
 >      > java -cp '../*'
 -Djava.net.preferIPv4Stack=true
 > 
    org.jgroups.demos.Draw
 > 
     > -props ./sfo.xml -name sfo1
 >      >
 >     
 > All same-site instances communicate with each other,
 but drawing
 >     is not
 >      > correctly multicasted. The
 first instance of each site correctly
 > 
    states
 >      > that
 it is connecting to "global", though.
 >      >
 >     
 > I'm probably making some mistake in the global.xml
 configuration, but
 >      > from
 the documentation and mail group archive I can't find
 any
 >      > information about
 where I'm messing it up.
 >     
 >
 >      > Any pointer is really
 welcome!
 >      > Thank you for
 your time!
 >      > Kind
 regards,
 >      > Matías
 >      >
 >     
 >
 >      >
 > 
    ------------------------------------------------------------------------------
 >      >
 >     
 >
 >      >
 >      >
 _______________________________________________
 >      > javagroups-users mailing
 list
 >      > javagroups-users <at> lists.sourceforge.net
 >     <mailto:javagroups-users <at> lists.sourceforge.net>
 >      > https://lists.sourceforge.net/lists/listinfo/javagroups-users
 >      >
 >
 >     --
 > 
    Bela Ban, JGroups lead (http://www.jgroups.org
 >     <http://www.jgroups.org/>)
 >
 >
 >
 > 
    ------------------------------------------------------------------------------
 > 
    _______________________________________________
 >     javagroups-users mailing
 list
 >     javagroups-users <at> lists.sourceforge.net
 >     <mailto:javagroups-users <at> lists.sourceforge.net>
 >     https://lists.sourceforge.net/lists/listinfo/javagroups-users
 >
 >
 >
 >
 >
 ------------------------------------------------------------------------------
 >
 >
 >
 >
 _______________________________________________
 > javagroups-users mailing list
 > javagroups-users <at> lists.sourceforge.net
 > https://lists.sourceforge.net/lists/listinfo/javagroups-users
 >

 -- 
 Bela Ban, JGroups lead (http://www.jgroups.org)

 
 ------------------------------------------------------------------------------
 _______________________________________________
 javagroups-users mailing list
 javagroups-users <at> lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/javagroups-users

------------------------------------------------------------------------------
_______________________________________________
javagroups-users mailing list
javagroups-users <at> lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/javagroups-users
Picon
Picon

[jgroups-users] small question: converting from FD to FD_ALL

Hi,

Spinoff from a separate thread. If I wanted to switch from FD to FD_ALL in our stack, where we currently pass in these values:

            .addProtocol(new FD()
                .setValue("max_tries", <get from input properties>)
                .setValue("timeout", <get from input properties>))

...what would be the equivalent for me to pass in. Simply "timeout" instead, which would be approx max_tries * timeout that we're passing into FD? Our defaults for the above are 8 max tries and 5000ms, and the end user can change those.

Thanks,
Bobby

------------------------------------------------------------------------------
_______________________________________________
javagroups-users mailing list
javagroups-users <at> lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/javagroups-users
Picon
Picon

[jgroups-users] RELAY2 Draw example not working

I'm following the documentation to try to execute the 6-instance-3-sites RELAY2 Draw example, however, I'm not being successful in it.
I created the following settings:
  • lon.xml (http://pastebin.com/1utbEMuF)
  • nyc.xml (http://pastebin.com/3z1rDs8Z)
  • sfo.xml (http://pastebin.com/LxbTm7za)
  • relay2.xml (http://pastebin.com/s64ScyPt)
  • global.xml (http://pastebin.com/66FczbMF)

All lon/nyc/sfo are based on udp.xml. Just added RELAY2 (with relay_multicasts="true") and FORWARD_TO_COORD to the stack and changed the mcast_port for each one (as recommended in the documentation).
global.xml is based on tcp.xml (as recommended in the documentation).
relay2.xml is as defined in documentation (only changed global.xml paths).

When I execute the 6 instances of Draw with:
java -cp './*' -Djava.net.preferIPv4Stack=true org.jgroups.demos.Draw -props ./nyc.xml -name nyc2
java -cp './*' -Djava.net.preferIPv4Stack=true org.jgroups.demos.Draw -props ./nyc.xml -name nyc1
java -cp './*' -Djava.net.preferIPv4Stack=true org.jgroups.demos.Draw -props ./lon.xml -name lon2
java -cp './*' -Djava.net.preferIPv4Stack=true org.jgroups.demos.Draw -props ./lon.xml -name lon1
java -cp '../*' -Djava.net.preferIPv4Stack=true org.jgroups.demos.Draw -props ./sfo.xml -name sfo2
java -cp '../*' -Djava.net.preferIPv4Stack=true org.jgroups.demos.Draw -props ./sfo.xml -name sfo1

All same-site instances communicate with each other, but drawing is not correctly multicasted. The first instance of each site correctly states that it is connecting to "global", though.

I'm probably making some mistake in the global.xml configuration, but from the documentation and mail group archive I can't find any information about where I'm messing it up.

Any pointer is really welcome!
Thank you for your time!
Kind regards,
Matías
------------------------------------------------------------------------------
_______________________________________________
javagroups-users mailing list
javagroups-users <at> lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/javagroups-users
Picon
Picon

Re: [jgroups-users] Atomic/reliable multicast to subgroup

This would be FLUSH, but be warned that it employs the stop-the-world 
model. Also, it doesn't do subsets, it only knows messages to the entire 
cluster.

On 10/12/15 18:03, Questions/problems related to using JGroups wrote:
> Thank you.
>
> Now, one more question about something I'm sure JGroups can do. Suppose
> I want to send a message to a strict subset of the cluster and ensure it
> arrives at all nodes uniformally (unless they fail) without rollback.
> I.e. all non-suspect nodes in the subset will receive the message even
> if the sender fails mid-operation. What's the best way to achieve that
> with JGroups?
>
> Ron
>
>
> ------------------------------------------------------------------------------
>
>
>
> _______________________________________________
> javagroups-users mailing list
> javagroups-users <at> lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/javagroups-users
>

--

-- 
Bela Ban, JGroups lead (http://www.jgroups.org)

------------------------------------------------------------------------------
Picon
Picon

[jgroups-users] isolated node getting multiple view changes

Hi,

This is with version 3.4.7.Final (I'll upgrade in our next product version if I ever get there!), using the tcp-based stack below. A customer is trying to test how node isolation affects out clusters; e.g. if a master database node becomes isolated from the rest of the cluster it needs to detect that and take appropriate action to prevent split-brain issues.

Given {A - B - C} running on AWS, they're simulating B/C in one zone becoming isolated from A in another zone by shutting off the network on A: 'service network stop', wait, then start. When this happens, B and C see A leave the view properly and do the right thing, which is replace the master database.

On A, though, B and C are leaving in two separate view changes back to back, 1-2 seconds apart. This causes problems because A doesn't see itself suddenly isolated from a majority of the cluster, but instead thinks that two other nodes failed separately, and so it doesn't do the right thing.

Is there a setting I can change so that B/C both leave the view at the same time on A, or is there some other recommended way to handle this, for instance based on timing of view changes? This works properly in our testing for the case of physically pulling a network cable; bringing the network service down causes this other behavior that I need to get fixed.

Thanks,
Bobby

        JChannel channel = new JChannel(false);
        ProtocolStack stack = new ProtocolStack();
        channel.setProtocolStack(stack);
        stack.addProtocol(new TCP()
            .setValue("oob_thread_pool_keep_alive_time", 5000)
            .setValue("timer_keep_alive_time", 3000)
            .setValue("bind_addr", InetAddress.getByName(<get address>))
            .setValue("bind_port", bindingPort)
            .setValue("thread_pool_min_threads", 1)
            .setValue("thread_pool_keep_alive_time", 5000)
            .setValue("send_buf_size", 640000)
            .setValue("oob_thread_pool_queue_max_size", 100)
            .setValue("oob_thread_pool_max_threads", 8)
            .setValue("thread_pool_queue_enabled", false)
            .setValue("sock_conn_timeout", 300)
            .setValue("oob_thread_pool_min_threads", 1)
            .setValue("loopback", false)
            .setValue("oob_thread_pool_queue_enabled", false)
            .setValue("max_bundle_timeout", 30)
            .setValue("thread_pool_queue_max_size", 100)
            .setValue("recv_buf_size", 5000000))
            .addProtocol(new TCPPING()
                .setValue("initial_hosts", <get hosts>)
                .setValue("num_initial_members", 3)) // default: 10
            .addProtocol(new MERGE2()
                .setValue("min_interval", 10000)
                .setValue("max_interval", 30000))
            .addProtocol(new FD_SOCK())
            .addProtocol(new FD()
                .setValue("max_tries", <our default is 8>)
                .setValue("timeout", <our default is 5000>))
            .addProtocol(new VERIFY_SUSPECT()
                .setValue("timeout", 1500))
            .addProtocol(new BARRIER())
            .addProtocol(new NAKACK2()
                .setValue("use_mcast_xmit", false))
            .addProtocol(new UNICAST3()
                .setValue("conn_close_timeout", 5000L))
            .addProtocol(new STABLE()
                .setValue("desired_avg_gossip", 50000)
                .setValue("max_bytes", 4000000)
                .setValue("stability_delay", 1000))
            .addProtocol(<our own auth protocol>)
            .addProtocol(new GMS()
                .setValue("join_timeout", 3000)
                .setValue("print_local_addr", false))
            .addProtocol(new MFC()
                .setValue("max_credits", 2000000)
                .setValue("min_credits", 800000))
            .addProtocol(new FRAG2())
            .addProtocol(new STATE_TRANSFER());

------------------------------------------------------------------------------
_______________________________________________
javagroups-users mailing list
javagroups-users <at> lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/javagroups-users
Picon
Picon

[jgroups-users] Atomic/reliable multicast to subgroup

Hi.

I have the following problem:
Suppose my cluster is {A, B, C, D}, and I'm node A. I need to send a message to {B,C} atomically -- i.e. either both receive it or neither. If during the process one of them fails (and is suspected by fault-detection, so it falls out of view), I don't want the other to receive it (maybe I'll choose to resend when the new view is installed and maybe not). Finally, I want to know the result of the operation, namely if both had the message delivered or neither. I have no requirement of total order (but it's OK to have it).

The docs suggest that the TOA protocol might be appropriate, but its semantics aren't clear. Also, it is unclear to me how the sender can be notified of the result.

How can I do this with JGroups?

Ron
------------------------------------------------------------------------------
_______________________________________________
javagroups-users mailing list
javagroups-users <at> lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/javagroups-users
Picon
Picon

[jgroups-users] GridFilesystem

Hello,

I need to build a cluster which would cache data, but not in memory but on the file system.

I've found ReplCache, and thought I could follow the code as an example or make adjustments to store the data as files.

But l discovered that there is something like this already called GridFilesystem.

I couldn't find any example of how to use it or any documentation, so first of all, I wanted to ask if my assumption is correct, and these classes are for the case I'm talking about? 

And second question, I noticed that some comments says this is experimental feature, so how far the development moved? Could it be used on production with some monitoring, or this is not ready at all?

And any info about GridFilesystem or any example is greatly appreciated. I'm first time JGroups user, so any pointers or suggestions would be helpful.

Thank you,

Eugene

------------------------------------------------------------------------------
_______________________________________________
javagroups-users mailing list
javagroups-users <at> lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/javagroups-users
Picon
Picon

[jgroups-users] S3_PING - Extra hosts show up in members file

I am JGroups 3.6.6.Final on a two-node cluster (for XWiki)

When I delete the members file in my S3 bucket and restart XWiki on both hosts, I notice that after the XWiki application starts up, the members (JGroups hosts) file in S3 contains multiple entries instead of just two:

wiki-gamma-2b-i-bdcdb578-57943 9c77acca-248d-6f81-e920-53404c41c112 172.22.163.161:7800 F wiki-gamma-2c-i-43890a98-6186 214cb3e2-e740-2f14-754e-b8bc6bc6536d 172.23.183.179:7800 F wiki-gamma-2c-i-43890a98-20775 80d26ae9-9738-aac7-ba38-5cf994f6db77 172.23.183.179:7800 F wiki-gamma-2b-i-bdcdb578-55801 f02a05ca-5201-89ed-f651-ce694f85ce77 172.22.163.161:7800 F wiki-gamma-2c-i-780385a3-9285 a2b05292-aeb3-6e2e-70b7-fbd5698e321a 172.23.153.66:7800 F wiki-gamma-2b-i-bdcdb578-28740 2b059b54-48dc-a58e-b8f7-8eb293b08bfb 172.22.163.161:7800 F wiki-gamma-2b-i-87add242-21142 91b3f796-9002-c447-4ace-ccef0fd6c0b9 172.22.133.102:7800 F wiki-gamma-2c-i-43890a98-7350 2e9d96a5-27c1-ac95-409d-fe08ce148bbc 172.23.183.179:7800 T wiki-gamma-2b-i-bdcdb578-46182 4aeb385a-fd17-9e87-27f9-1f1406abb0b6 172.22.163.161:7800 F wiki-gamma-2b-i-bdcdb578-47213 70176220-3337-3767-f1ca-1e598b09b3af 172.22.163.161:7800 F wiki-gamma-2c-i-43890a98-30807 df42bb9b-5ec1-7592-804e-989d7f4d5341 172.23.183.179:7800 F wiki-gamma-2b-i-bdcdb578-54816 977ef0f2-4279-438f-a238-7357ab5434dd 172.22.163.161:7800 F wiki-gamma-2c-i-43890a98-37675 3608cf76-8a81-4149-f117-612b6fc77dd0 172.23.183.179:7800 F wiki-gamma-2c-i-43890a98-26930 52760af1-75b6-548b-338f-670a1e6714a6 172.23.183.179:7800 F

I would expect to see only two hosts listed in the above file (no other files get created). 

Any pointers as to what might be causing this? 

Do the hosts in a JGroups cluster need to be started in a particular order?

—Debajit
------------------------------------------------------------------------------
Go from Idea to Many App Stores Faster with Intel(R) XDK
Give your users amazing mobile app experiences with Intel(R) XDK.
Use one codebase in this all-in-one HTML5 development environment.
Design, debug & build mobile apps & 2D/3D high-impact games for multiple OSs.
http://pubads.g.doubleclick.net/gampad/clk?id=254741911&iu=/4140
_______________________________________________
javagroups-users mailing list
javagroups-users <at> lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/javagroups-users
Picon
Picon

[jgroups-users] FILE_PING in cloud environments

Hello,

I am using a FILE_PING discovery protocol(NATIVE_S3[1]) in my Wildfly 
cluster and noticed that the discovery files or entries in the discovery 
files aren't removed when a node shuts down gracefully. Is that supposed 
to be like that or am I missing a configuration?
This will become a big problem when the cluster is long running and very 
dynamic because the files are getting bigger and files of already 
stopped cluster nodes aren't removed. This leads to gradually increasing 
startup times of new nodes because these will try to join all other 
nodes which they will fail to do, since they shut down.
In the end, a socket startup timeout in Wildfly will be hit and the 
cluster becomes unusable. The main problem is, that I can't even purge 
the list manually even if I deleted all files because each node keeps a 
list of all the other nodes that they know of. Sooner or later these 
still running nodes will write out a file again containing all nodes and 
when a new node joins the cluster, it gets the whole list again.

I am using Wildfly 10.0.0.CR4 with JGroups 3.6.7-SNAPSHOT built from 
this commit: 
https://github.com/belaban/JGroups/commit/dea68562a80ec4ad493af3668ce7be711eacb7c5

Thanks for any help in advance!

Regards,
Christian

[1] https://github.com/Sweazer/jgroups-native-s3-ping/tree/wildfly10

dea68562a80ec4ad493af3668ce7be711eacb7c5dea68562a80ec4ad493af3668ce7be711eacb7c5

dea68562a80ec4ad493af3668ce7be711eacb7c5dea68562a80ec4ad493af3668ce7be711eacb7c5

dea68562a80ec4ad493af3668ce7be711eacb7c5

------------------------------------------------------------------------------
Go from Idea to Many App Stores Faster with Intel(R) XDK
Give your users amazing mobile app experiences with Intel(R) XDK.
Use one codebase in this all-in-one HTML5 development environment.
Design, debug & build mobile apps & 2D/3D high-impact games for multiple OSs.
http://pubads.g.doubleclick.net/gampad/clk?id=254741911&iu=/4140

Gmane