Klemen Pogacnik | 13 Nov 14:19 2014

Using TIPC in geographically distributed cluster

We’ve been using the TIPC protocol in our cluster systems for some time and
have had good experience with it. The nodes in our cluster sit in the same
rack and are connected to each other by fast, reliable network links. We use
TIPC both as the communication protocol for messages between nodes and,
through its topology service, to track the state of individual nodes.
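
For reference, a minimal sketch of the kind of topology-service subscription
being described (the service type 4711 is hypothetical; this is not the
poster's code):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/tipc.h>

int main(void)
{
    struct sockaddr_tipc topsrv;
    struct tipc_subscr sub;
    struct tipc_event evt;
    int sd = socket(AF_TIPC, SOCK_SEQPACKET, 0);

    if (sd < 0)
        return 1;

    /* Connect to the node-local topology server */
    memset(&topsrv, 0, sizeof(topsrv));
    topsrv.family = AF_TIPC;
    topsrv.addrtype = TIPC_ADDR_NAME;
    topsrv.addr.name.name.type = TIPC_TOP_SRV;
    topsrv.addr.name.name.instance = TIPC_TOP_SRV;
    if (connect(sd, (struct sockaddr *)&topsrv, sizeof(topsrv)) < 0)
        return 1;

    /* Ask to be told whenever service type 4711 appears or disappears */
    memset(&sub, 0, sizeof(sub));
    sub.seq.type = 4711;
    sub.seq.lower = 0;
    sub.seq.upper = ~0;
    sub.timeout = TIPC_WAIT_FOREVER;
    sub.filter = TIPC_SUB_SERVICE;
    if (send(sd, &sub, sizeof(sub), 0) != sizeof(sub))
        return 1;

    /* One tipc_event per publication/withdrawal in the subscribed range */
    while (recv(sd, &evt, sizeof(evt), 0) == sizeof(evt))
        printf("type %u [%u,%u] is %s\n", evt.s.seq.type,
               evt.found_lower, evt.found_upper,
               evt.event == TIPC_PUBLISHED ? "up" : "down");
    close(sd);
    return 0;
}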

In the next phase we are considering distributing the nodes geographically, to
build a geographically distributed cluster. No L2 connectivity between the
nodes will be provided. If I understand correctly, no TIPC connection is
possible between such nodes.

How can this be solved? I’m thinking of using another bearer service (some
transport-layer protocol such as SCTP, UDP or TCP) to carry TIPC between the
geographically separated nodes. What’s your opinion of this idea? Has anybody
used such bearers yet?
Thanks a lot for your answers!  Klemen
Jon Maloy | 5 Nov 11:54 2014

[PATCH net-next v3 0/6] tipc: resolve message disordering problem

When TIPC receives messages from multi-threaded device drivers it may
occasionally deliver messages to their destination sockets in the wrong
order. This happens despite correct resequencing at the link layer,
because the upcall path from link to socket is done without any locks
held.

The commits in this series solve the problem by introducing an
'input' message queue in each link, through which messages must
be delivered to the upper layers.
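
A rough sketch of the idea (illustrative names only, not the actual patch
code): the link files deliverable messages on a per-link sk_buff_head while
its lock is held, and the caller drains that queue and makes the socket
upcalls only after the lock is dropped, so per-link ordering survives the
unlocked upcall path.

#include <linux/skbuff.h>
#include <linux/spinlock.h>
#include <linux/printk.h>

struct demo_link {
    spinlock_t lock;
    struct sk_buff_head inputq;    /* messages ready for the socket layer */
};

static void demo_link_rcv(struct demo_link *l, struct sk_buff *skb)
{
    spin_lock_bh(&l->lock);
    /* ...link-level resequencing and ack handling would happen here... */
    __skb_queue_tail(&l->inputq, skb);    /* filed in arrival order */
    spin_unlock_bh(&l->lock);
}

static void demo_deliver(struct demo_link *l)
{
    struct sk_buff_head tmp;
    struct sk_buff *skb;

    __skb_queue_head_init(&tmp);
    spin_lock_bh(&l->lock);
    skb_queue_splice_init(&l->inputq, &tmp);    /* grab everything at once */
    spin_unlock_bh(&l->lock);

    /* upcall without the link lock: ordering is already fixed by the queue */
    while ((skb = __skb_dequeue(&tmp)) != NULL)
        pr_info("deliver skb, len %u\n", skb->len);    /* tipc_sk_rcv() in reality */
}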

Note that although the first two commits may look unrelated to the
above, they constitute a preparation step for the later commits.

v3:
   - Commit #1:   Changed names on some stack variables, as suggested by
                  Ying
   - Commit #2/3: Moved the introduction of the function tipc_sk_enqueue_msg()
                  to #2, to make it cleaner. I did NOT re-introduce the
                  'exit' label in tipc_sk_rcv(), because it would end up
                  inside a loop, something I consider an abomination.
   - Commit #6:   Realizing that we may have to kick several input
                  queues on the same reception occasion, I introduced a
                  new bitmap 'inputq_map' in node::bclink, indicating
                  which input queues need to be delivered. This is
                  because messages extracted from the same bundle may
                  have different originating ports, and will end up
                  in different queues (a rough sketch of the bitmap
                  idea follows below).
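
A rough sketch of the bitmap idea from the commit #6 note (names are
illustrative, not the actual patch):

#include <linux/bitops.h>
#include <linux/skbuff.h>

#define DEMO_NR_INPUTQS    32

struct demo_node {
    DECLARE_BITMAP(inputq_map, DEMO_NR_INPUTQS);    /* queues holding new messages */
    struct sk_buff_head inputq[DEMO_NR_INPUTQS];
};

/* While unbundling, remember every queue that received at least one message */
static void demo_file_msg(struct demo_node *n, unsigned int q, struct sk_buff *skb)
{
    __skb_queue_tail(&n->inputq[q], skb);
    set_bit(q, n->inputq_map);
}

/* After the whole bundle is processed, kick only the queues that were touched */
static void demo_kick(struct demo_node *n)
{
    unsigned int q;

    for_each_set_bit(q, n->inputq_map, DEMO_NR_INPUTQS) {
        clear_bit(q, n->inputq_map);
        /* deliver n->inputq[q] to its destination socket here */
    }
}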

Jon Maloy (6):
  tipc: fix bug in socket message reception
(Continue reading)

Jon Maloy | 4 Nov 19:26 2014

[PATCH net-next v2 0/6] tipc: resolve message disordering problem

When TIPC receives messages from multi-threaded device drivers it may
occasionally deliver messages to their destination sockets in the wrong
order. This happens despite correct resequencing at the link layer,
because the upcall path from link to socket is done without any locks
held.

The commits in this series solve the problem by introducing an
'input' message queue in each link, through which messages must
be delivered to the upper layers.

Note that although the first two commits may look unrelated to the
others, they constitute a preparation step for the latter.

Jon Maloy (6):
  tipc: fix bug in socket message reception
  tipc: simplify message reception function in socket
  tipc: resolve race problem during unicast message reception
  tipc: simplify connection abort notifications when links break
  tipc: simplify socket multicast reception
  tipc: eliminate race condition at multicast msg reception

 net/tipc/bcast.c      |  99 ++++++----------------
 net/tipc/bcast.h      |  18 ----
 net/tipc/link.c       | 231 ++++++++++++++++++++++----------------------------
 net/tipc/link.h       |   9 +-
 net/tipc/msg.c        |  79 +++++++++++++++--
 net/tipc/msg.h        |   5 +-
 net/tipc/name_distr.c |  24 ++++--
 net/tipc/name_distr.h |   2 +-
 net/tipc/name_table.c |  42 ++++++++-
(Continue reading)

Ying Xue | 4 Nov 09:33 2014

[PATCH net-next 00/15] tipc: prepare namespace support

This patchset aims to add net namespace support to the TIPC stack.

Currently the TIPC module maintains the following global resources:
- TIPC network identification
- TIPC node table
- TIPC bearer list table
- TIPC broadcast link
- TIPC socket reference table
- TIPC name service table
- TIPC node address
- TIPC service subscriber server
- TIPC random value
- TIPC netlink

For TIPC to support namespaces, each of the resources above must be
allocated per namespace. The patchset therefore makes the changes step
by step, giving each namespace its own private copy of these global
resources. Before those changes can be made, some preparation work is
needed: cleaning up core.c and core.h, removing unnecessary wrapper
functions around the kernel timer APIs, and so on.
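
A rough sketch of the per-namespace pattern this works towards (illustrative
structure and field names, not the actual TIPC code): each formerly global
resource becomes a member of a private per-netns area allocated through the
pernet infrastructure.

#include <linux/list.h>
#include <linux/spinlock.h>
#include <net/net_namespace.h>
#include <net/netns/generic.h>

static int demo_net_id __read_mostly;

struct demo_net {
    u32 own_addr;                  /* was: global TIPC node address */
    spinlock_t node_list_lock;     /* was: global node table lock */
    struct list_head node_list;    /* was: global node table */
};

static int __net_init demo_init_net(struct net *net)
{
    struct demo_net *dn = net_generic(net, demo_net_id);

    dn->own_addr = 0;
    spin_lock_init(&dn->node_list_lock);
    INIT_LIST_HEAD(&dn->node_list);
    return 0;
}

static void __net_exit demo_exit_net(struct net *net)
{
    /* tear down this namespace's private copy here */
}

static struct pernet_operations demo_net_ops = {
    .init = demo_init_net,
    .exit = demo_exit_net,
    .id   = &demo_net_id,
    .size = sizeof(struct demo_net),
};

/* registered once at module init: register_pernet_subsys(&demo_net_ops); */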

Ying Xue (15):
  tipc: remove tipc_core_start/stop routines
  tipc: remove unnecessary wrapper functions of kernel timer APIs
  tipc: cleanup core.c and core.h
  tipc: feed tipc sock pointer to tipc_sk_timeout routine
  tipc: remove unused tipc_link_get_max_pkt routine
  tipc: involve namespace infrastructure
  tipc: make tipc node table per net namespace
(Continue reading)

O'Brien, Chris | 31 Oct 18:05 2014

soft lockup on bearer blocked

I know the bearer area is changing a lot lately, but I wanted to share an observation I made using TIPC
2.0 on kernel 3.10. Maybe someone else will find the info useful.

I ran into the soft lockup below on our node while disabling the bearer. While digging I found a
controversial patch that addresses the issue but doesn't appear to have made it into our load. The patch
was titled "[PATCH net-next v3 01/12] tipc: convert 'blocked' flag in struct tipc_bearer to atomic_t".
Applying the patch resolves the problem.

Chris

sh-4.2# ifconfig eth0 down

tipc: Blocking bearer <eth:eth0.16>

tipc: Lost link <1.1.25:eth0.16-1.1.16:eth0.16> on network plane B

tipc: Lost contact with <1.1.16>

tipc: Lost link <1.1.25:eth0.16-1.1.12:eth0.16> on network plane B

tipc: Lost contact with <1.1.12>

tipc: Lost link <1.1.25:eth0.16-1.1.13:eth0.16> on network plane B

tipc: Lost contact with <1.1.13>

tipc: Lost link <1.1.25:eth0.16-1.1.6:eth0.16> on network plane B

tipc: Lost contact with <1.1.6>

(Continue reading)

Ying Xue | 31 Oct 08:43 2014

[PATCH net-next 0/9] Convert name table read-write lock to RCU

The TIPC name table is currently statically allocated and protected with a
read-write lock. To improve the performance of name table lookups, we are
going to protect the table with RCU instead. In particular, after the
conversion, concurrent lookups on the read side are completely lockless;
a sketch of the resulting pattern follows the list below. Before the
conversion can happen, however, the following changes must be made first:

 - change the allocation of the name table from static to dynamic
 - fix several incorrect locking policy issues
 - lastly, convert the read-write lock to RCU
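
Illustrative only (not the actual name table code): writers still serialize
on a lock, while readers walk the list under rcu_read_lock() alone.

#include <linux/rculist.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct demo_entry {
    u32 type;
    u32 value;
    struct list_head list;
    struct rcu_head rcu;
};

static LIST_HEAD(demo_table);
static DEFINE_SPINLOCK(demo_table_lock);    /* writers only */

/* Read side: lockless, safe against concurrent add/remove */
static bool demo_lookup(u32 type, u32 *value)
{
    struct demo_entry *e;
    bool found = false;

    rcu_read_lock();
    list_for_each_entry_rcu(e, &demo_table, list) {
        if (e->type == type) {
            *value = e->value;
            found = true;
            break;
        }
    }
    rcu_read_unlock();
    return found;
}

/* Write side: still serialized; freeing is deferred past a grace period */
static void demo_remove(struct demo_entry *e)
{
    spin_lock_bh(&demo_table_lock);
    list_del_rcu(&e->list);
    spin_unlock_bh(&demo_table_lock);
    kfree_rcu(e, rcu);
}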

Ying Xue (9):
  tipc: remove size variable from publ_list struct
  tipc: make name table allocated dynamically
  tipc: ensure all name sequences are released when name table is
    stopped
  tipc: ensure all name sequences protected with its lock
  tipc: any name table member must be protected under name table lock
  tipc: simplfy relationship between name table lock and node lock
  tipc: fix incorrect locking policy for publication list of socket
  tipc: remove unnecessary INIT_LIST_HEAD
  tipc: convert name table read-write lock to RCU

 include/linux/rculist.h |    9 ++
 net/tipc/name_distr.c   |   81 +++++-----------
 net/tipc/name_table.c   |  242 ++++++++++++++++++++++++++---------------------
 net/tipc/name_table.h   |   23 ++++-
 net/tipc/socket.c       |   52 ++++++----
 net/tipc/subscr.c       |    1 -
 6 files changed, 222 insertions(+), 186 deletions(-)
(Continue reading)

O'Brien, Chris | 21 Oct 15:52 2014

Fragmented multicast SOCK_DGRAM and retransmission

Hi,

I have a scenario where one of our nodes is rate limiting broadcast packets and drops around 50% of the
packets. This causes havoc with reliable multicast (SOCK_RDM) and we end up getting congestion on the
sender. I assumed I could move to an unreliable, connectionless SOCK_DGRAM socket to bypass this node until
we address the rate limiting.

Even when I configure the socket as unreliable and connectionless, I still see the congestion and
retransmissions occurring.
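
For context, a minimal sketch of the kind of unreliable, connectionless
multicast send being described (the service type 1000 is hypothetical, not
the poster's application). Note that fragmentation, and the broadcast link's
own acknowledgements and retransmissions, happen below the socket layer,
which may be why switching the socket type alone does not make them go away.

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/tipc.h>

static ssize_t demo_mcast_send(const void *buf, size_t len)
{
    struct sockaddr_tipc dst;
    ssize_t ret;
    int sd = socket(AF_TIPC, SOCK_DGRAM, 0);

    if (sd < 0)
        return -1;

    /* One sendto() to a name sequence reaches every socket bound in that range */
    memset(&dst, 0, sizeof(dst));
    dst.family = AF_TIPC;
    dst.addrtype = TIPC_ADDR_MCAST;
    dst.addr.nameseq.type = 1000;    /* hypothetical service type */
    dst.addr.nameseq.lower = 0;
    dst.addr.nameseq.upper = 99;

    ret = sendto(sd, buf, len, 0, (struct sockaddr *)&dst, sizeof(dst));
    close(sd);
    return ret;
}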

I was scanning through historical messages that appear to be related, dating back to June 2007. http://sourceforge.net/p/tipc/mailman/message/8455616/

I am using fragmented messages. I would like to continue sending messages, so that the misbehaving node can
be flagged by the application and dealt with without affecting all the other nodes. My kernels are based on
3.4.34/2.6.27 and I am in a mixed 2.0/1.7 environment. Any suggestions for how I can work around this node?

Thanks,
Chris

(Continue reading)

Ying Xue | 20 Oct 09:17 2014

[PATCH net-next 00/10] standardize SKB queue operations

On the TIPC packet transmission path, the link layer maintains a singly
linked list as its outbound queue, with an sk_buff pointer recording the
queue's head. As a result, when packets built up as a doubly linked list are
sent out through a link, the doubly linked list has to be converted to a
singly linked list before it is joined to the link's outbound queue. The
conversion relies on the assumption that sk_buff keeps its next and prev
pointers at the beginning of the struct. Not only may this assumption not
hold, it may also stand in the way of improvements to the generic SKB list
management in the networking stack.

To address this, we have decided to manage all SKB queues in the entire
TIPC stack with the standard SKB list APIs built around struct sk_buff_head,
making the relevant code cleaner. But before doing so, we need to remove the
following redundant functionality:

- remove node subscribe infrastructure
- remove protocol message queue
- remove retransmission queue
- clean up process of pushing packets in link layer

After that, the following SKB queues are managed with the standard SKB list
APIs (illustrated briefly after this list):

- link outqueue
- link deferred queue
- link receive queue
- link transmission queue
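
For reference, a short illustration of the standard struct sk_buff_head API
the series moves to (generic example, not TIPC code):

#include <linux/skbuff.h>
#include <linux/printk.h>

static void demo_queue_usage(struct sk_buff *skb1, struct sk_buff *skb2)
{
    struct sk_buff_head q;
    struct sk_buff *skb;

    skb_queue_head_init(&q);    /* doubly linked list plus its own lock */
    skb_queue_tail(&q, skb1);   /* append under the queue lock */
    skb_queue_tail(&q, skb2);

    skb = skb_peek(&q);         /* inspect the head without removing it */
    if (skb)
        pr_info("head len %u, queue len %u\n", skb->len, skb_queue_len(&q));

    while ((skb = skb_dequeue(&q)) != NULL)    /* drain in FIFO order */
        kfree_skb(skb);
}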

The series is based on Richard's latest netlink patchset.

Ying Xue (10):
(Continue reading)

erik.hugne | 17 Oct 08:45 2014

[PATCH] tipc: reduce amount of duplicate nak messages

From: Erik Hugne <erik.hugne <at> ericsson.com>

If an out-of-sequence packet is received and it is not a duplicate,
it will be placed on the deferred queue and the peer notified that
there is a sequence gap. However, if the received packet fills a
hole in the defer queue, or is placed at its end, a duplicate NAK
message will be sent out. This in turn will generate duplicate
retransmissions and degrade link performance.
We fix this by only sending out a new NAK when the received packet
is placed at the head of the defer queue.

Signed-off-by: Erik Hugne <erik.hugne <at> ericsson.com>
---

This should be applied on top of Ying's skblist patchset.

 net/tipc/link.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/tipc/link.c b/net/tipc/link.c
index a74be61..45f94b7 100644
--- a/net/tipc/link.c
+++ b/net/tipc/link.c
@@ -1395,7 +1395,7 @@ static void link_handle_out_of_seq_msg(struct tipc_link *l_ptr,
 	if (tipc_link_defer_pkt(&l_ptr->deferred_queue, buf)) {
 		l_ptr->stats.deferred_recv++;
 		TIPC_SKB_CB(buf)->deferred = true;
-		if ((skb_queue_len(&l_ptr->deferred_queue) % 16) == 1)
+		if (skb_peek(&l_ptr->deferred_queue) == buf)
 			tipc_link_proto_xmit(l_ptr, STATE_MSG, 0, 0, 0, 0, 0);
(Continue reading)

Matthew Clark | 16 Oct 19:13 2014

TIPC 2.0.0 packets not being transmitted

I'm trying to set up a TIPC cluster using a variety of ARM-based processors,
but I'm having issues with some zc706 Zynq boards running a Yocto-built
3.14.2 kernel. Some nodes see their neighbors perfectly well, but the zc706
boards can't be seen by anyone and think everyone else is down. I ran
Wireshark and, from what I can tell, the zc706 boards simply aren't
broadcasting any packets. I see TIPC packets flying around from the Overos
and the Zedboard, but nothing from the Zynqs.

Can anyone help me debug this? I'm at a bit of a loss to explain the
behavior. Thanks!

Matt

---

Linux overo 3.5.7 #1 PREEMPT Tue Mar 11 09:06:14 EDT 2014 armv7l GNU/Linux
<1.1.1>
<1.1.2>
<1.1.15>
<1.1.16>

Linux zedboard 3.8.0-xilinx #4 SMP PREEMPT Thu Jul 10 15:13:36 EDT 2014
armv7l GNU/Linux
<1.1.3>

Linux zc706 3.14.2-xilinx #2 SMP PREEMPT Thu Oct 2 14:53:07 EDT 2014 armv7l
GNU/Linux
<1.1.4>
<1.1.5>
<1.1.6>
(Continue reading)

Ying Xue | 15 Oct 09:27 2014

[PATCH] tipc: fix lockdep warning when intra-node messages are delivered

When running the tipcTC & tipcTS test suites, lockdep reports the unsafe
locking scenario below:

[ 1109.997854]
[ 1109.997988] =================================
[ 1109.998290] [ INFO: inconsistent lock state ]
[ 1109.998575] 3.17.0-rc1+ #113 Not tainted
[ 1109.998762] ---------------------------------
[ 1109.998762] inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
[ 1109.998762] swapper/7/0 [HC0[0]:SC1[1]:HE1:SE0] takes:
[ 1109.998762]  (slock-AF_TIPC){+.?...}, at: [<ffffffffa0011969>] tipc_sk_rcv+0x49/0x2b0 [tipc]
[ 1109.998762] {SOFTIRQ-ON-W} state was registered at:
[ 1109.998762]   [<ffffffff810a4770>] __lock_acquire+0x6a0/0x1d80
[ 1109.998762]   [<ffffffff810a6555>] lock_acquire+0x95/0x1e0
[ 1109.998762]   [<ffffffff81a2d1ce>] _raw_spin_lock+0x3e/0x80
[ 1109.998762]   [<ffffffffa0011969>] tipc_sk_rcv+0x49/0x2b0 [tipc]
[ 1109.998762]   [<ffffffffa0004fe8>] tipc_link_xmit+0xa8/0xc0 [tipc]
[ 1109.998762]   [<ffffffffa000ec6f>] tipc_sendmsg+0x15f/0x550 [tipc]
[ 1109.998762]   [<ffffffffa000f165>] tipc_connect+0x105/0x140 [tipc]
[ 1109.998762]   [<ffffffff817676ee>] SYSC_connect+0xae/0xc0
[ 1109.998762]   [<ffffffff81767b7e>] SyS_connect+0xe/0x10
[ 1109.998762]   [<ffffffff817a9788>] compat_SyS_socketcall+0xb8/0x200
[ 1109.998762]   [<ffffffff81a306e5>] sysenter_dispatch+0x7/0x1f
[ 1109.998762] irq event stamp: 241060
[ 1109.998762] hardirqs last  enabled at (241060): [<ffffffff8105a4ad>] __local_bh_enable_ip+0x6d/0xd0
[ 1109.998762] hardirqs last disabled at (241059): [<ffffffff8105a46f>] __local_bh_enable_ip+0x2f/0xd0
[ 1109.998762] softirqs last  enabled at (241020): [<ffffffff81059a52>] _local_bh_enable+0x22/0x50
[ 1109.998762] softirqs last disabled at (241021): [<ffffffff8105a626>] irq_exit+0x96/0xc0
[ 1109.998762]
[ 1109.998762] other info that might help us debug this:
(Continue reading)

