Karen Xie | 1 Sep 01:15 2015

Q: where to put common code of iscsi initiator and target drivers

Christoph/Nicholas,

Chelsio has an offload driver for the open-iscsi initiator, "cxgbi", already in the kernel, and we
are currently working on offloading the LIO iSCSI target on our T5 10G/40G adapters.

Obviously, there is some common code that both the initiator and target can share. We are thinking
of putting it with the Chelsio network driver, drivers/net/ethernet/chelsio/cxgb4, since LIO is
under drivers/target and open-iscsi is under drivers/scsi.

We are wondering if you have any preference/suggestion?

Thanks,
Karen
김준성 | 28 Aug 14:26 2015

How to set up multiple iSCSI targets that span different nodes?

Hi.

I want to set up iSCSI targets that have different IP addresses.
More specifically, I want to make a specific node handle all
discovery requests from clients.
My scenario is as follows.

(1) The storage server has multiple storage nodes and each node has a
different IP address. Each storage node exports many iSCSI targets.
      <node1>: 10.0.0.11
           1st_target_in_node_1, (portal: 10.0.0.11:3333) <for client 1>
           2nd_target_in_node_1, (portal: 10.0.0.11:3334) <for client 2>
           3rd_target_in_node_1, (portal: 10.0.0.11:3335) <for client 3>
      <node2>: 10.0.0.12
           1st_target_in_node_2, (portal: 10.0.0.12:3333) <for client 1>
      <node3>: 10.0.0.13
           1st_target_in_node_3, (portal: 10.0.0.13:3333) <for client 2>
           2nd_target_in_node_3, (portal: 10.0.0.13:3334) <for client 3>
      ...

(2) Multiple clients try to discover the available iSCSI targets using
a single portal.
 10.0.0.10 is an alias IP address. Let's assume <node1>:10.0.0.11 has
this alias IP address.
  [Client1] iscsiadm -m discovery -t st -p 10.0.0.10
           1st_target_in_node_1, (portal: 10.0.0.11:3333) <for client 1>
           1st_target_in_node_2, (portal: 10.0.0.12:3333) <for client 1>
  [Client2] iscsiadm -m discovery -t st -p 10.0.0.10
           2nd_target_in_node_1, (portal: 10.0.0.11:3334) <for client 2>
           1st_target_in_node_3, (portal: 10.0.0.13:3333) <for client 2>
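A minimal client-side sketch of the flow above (the IQN below is a placeholder; targets are assumed already exported):

```shell
# Discover targets via the shared alias portal (SendTargets discovery)
iscsiadm -m discovery -t sendtargets -p 10.0.0.10

# Then log in to one of the returned targets on its real node portal
iscsiadm -m node -T iqn.2015-08.example:1st_target_in_node_1 \
         -p 10.0.0.11:3333 --login
```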
(Continue reading)

Andy Grover | 24 Aug 19:26 2015
Picon

[PATCH 0/4] Removal of textual IP usage

Based on hch's suggestion, move away from storing textual
representations of IP and port in favor of storing struct
sockaddr_storage, which can then be printed as needed using %pISc or
%pISpc printk formats.

The first patch fixes the double-brackets issue; I tested it
separately and recommend @stable.

The following patches replace other spots in a similar manner, and
then switch all usage of __kernel_sockaddr_storage to the shorter
equivalent, sockaddr_storage.

Andy Grover (4):
  target/iscsi: Fix np_ip bracket issue by removing np_ip
  target/iscsi: Keep local_ip as the actual sockaddr
  target/iscsi: Replace conn->login_ip with login_sockaddr
  target/iscsi: Replace __kernel_sockaddr_storage with sockaddr_storage

 drivers/infiniband/ulp/isert/ib_isert.c      | 29 ++---------
 drivers/target/iscsi/iscsi_target.c          | 49 +++++++-----------
 drivers/target/iscsi/iscsi_target.h          |  6 +--
 drivers/target/iscsi/iscsi_target_configfs.c | 22 ++++----
 drivers/target/iscsi/iscsi_target_login.c    | 76 ++++++++++++++--------------
 drivers/target/iscsi/iscsi_target_login.h    |  4 +-
 drivers/target/iscsi/iscsi_target_stat.c     |  2 +-
 drivers/target/iscsi/iscsi_target_tpg.c      | 19 ++++---
 drivers/target/iscsi/iscsi_target_tpg.h      |  2 +-
 drivers/target/iscsi/iscsi_target_util.c     | 32 ++++++++++--
 include/target/iscsi/iscsi_target_core.h     | 10 ++--
 include/target/iscsi/iscsi_target_stat.h     |  2 +-
(Continue reading)

Etzion Bar-Noy | 22 Aug 00:33 2015

Non-persistent SCSI serial (word 83)

Hi. I have been looking for a solution for a while now and have found
none, so this post is my last attempt to solve the LIO issue I am
encountering before I give up and move to some other solution.
Description:
OS: CentOS 7.1, latest updates (correct for Aug 2015).
targetcli version: rpm -qa | grep targetcli
targetcli-2.1.fb37-3.el7.noarch
Kernel: uname -a
Linux controller1 3.10.0-229.11.1.el7.x86_64 #1 SMP Thu Aug 6 01:06:18
UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

If there's anything missing, let me know.

Problem summary: in a PCS-based HA cluster, when failing over the LUN,
the LUN serial changes, and this causes multipath clients to misbehave
(especially after an iSCSI client reboot).
Some more detail on the setup: it uses two nodes in a PCS-based
cluster. The cluster setup is a modified follow-up of this
site: https://bm-stor.com/index.php/blog/Linux-cluster-with-ZFS-on-Cluster-in-a-Box/
, except that I use multipathing rather than network teaming.
iSCSI layout:
targetcli ls
o- / ................................................................... [...]
  o- backstores ....................................................... [...]
  | o- block ......................................... [Storage Objects: 2]
  | | o- lun2-tier2
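One common workaround for a changing serial (a sketch; the backstore name, configfs path, and serial value are illustrative) is to pin the same VPD unit serial on both cluster nodes so it survives failover:

```shell
# On each node, set an identical unit serial on the backstore
# before exporting it (configfs path is illustrative).
echo "f0e1d2c3-0000-4000-8000-00000000beef" \
    > /sys/kernel/config/target/core/iblock_0/lun2-tier2/wwn/vpd_unit_serial

# Or persist it via targetcli and copy the saved configuration
# (/etc/target/saveconfig.json) between the nodes.
```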
(Continue reading)

Zhu Lingshan | 20 Aug 07:27 2015

Feature: Reloading target after reboot

Hi,
I am developing a feature to reload the target configuration after
reboot. Do you have any comments on that?

Thanks
BR
Zhu Lingshan
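For comparison, the existing userspace tooling already offers something in this direction (a sketch; the path shown is the default used by targetcli-fb):

```shell
# Save the running configuration...
targetcli saveconfig

# ...and restore it after a reboot (the target.service unit
# normally runs this at boot)
targetctl restore /etc/target/saveconfig.json
```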
Andy Grover | 20 Aug 03:22 2015

[PATCH] target/iscsi: Fix sendtargets_response() for ipv6, again

Revert commit 1997e6259, which just papers over the issue and causes
double brackets on the IPv6 INADDR_ANY address.

Now, both addresses via conn->local_ip and np->np_ip for ipv6 are kept
in bracketed format.

Change lio_target_call_addnptotpg() to pass bracketed ipv6 address to
iscsit_tpg_add_network_portal. Alas, we need to rebuild the bracketed
string at the end because we need unbracketed for in6_pton().

Fix and extend some comments earlier in the function.

Tested to work for :: and ::1 via iscsiadm, previously :: failed, see
https://bugzilla.redhat.com/show_bug.cgi?id=1249107 .

Cc: stable@vger.kernel.org
Signed-off-by: Andy Grover <agrover@redhat.com>
---
 drivers/target/iscsi/iscsi_target.c          |  9 ++-------
 drivers/target/iscsi/iscsi_target_configfs.c | 13 +++++++++++--
 2 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
index cd77a06..8051a8b 100644
--- a/drivers/target/iscsi/iscsi_target.c
+++ b/drivers/target/iscsi/iscsi_target.c
@@ -3464,7 +3464,6 @@ iscsit_build_sendtargets_response(struct iscsi_cmd *cmd,
 						tpg_np_list) {
 				struct iscsi_np *np = tpg_np->tpg_np;
 				bool inaddr_any = iscsit_check_inaddr_any(np);
(Continue reading)

Alex Gorbachev | 19 Aug 18:16 2015

Re: ESXi + LIO + Ceph RBD problem

I have to say that changing default_cmdsn_depth did not help us with
the abnormal timeouts, i.e. OSD failing or some other abrupt event.
When that happens, we detect the event via ABORT_TASK, and if the event
is transient, usually nothing happens.  Anything longer than a few
seconds will usually result in Ceph recovery, but ESXi gets stuck and
never comes out of APD.  It looks like it tries to establish another
session by bombarding the target with retries and resets, and
ultimately gives up and goes to the PDL state.  Then the only option is
a reboot.

So to be clear, we have moved on from a discussion about slow storage
to a discussion about what happens during unexpected and abnormal
timeouts.  Anecdotal evidence suggests that SCST-based systems will
allow ESXi to recover from this condition, while ESXi does not play as
well with LIO-based systems in those situations.

What is the difference, and is there willingness to allow LIO to be
modified to work with this ESXi behavior?  Or should we ask VMware to
do something for ESXi to play better with LIO?  I cannot fix the code,
but would be happy to be the voice of the issue via any available
channels.

Best regards,
Alex

On Wed, Aug 19, 2015 at 4:22 AM, Steve Beaudry
<Steve.Beaudry <at> royalroads.ca> wrote:
> Thanks Nicolas,
>
>    I'll modify the resource agent script so that the default_cmdsn_depth can
(Continue reading)

Martin Svec | 19 Aug 13:02 2015

Re: ESXi + LIO + Ceph RBD problem

Hi all,

thank you for sharing all the interesting tips and ideas. I agree with Steve that there are two
different issues. It makes sense to reduce default_cmdsn_depth if the backend storage is overloaded
and cannot handle more outstanding I/O in a timely manner. However, this doesn't help in the case of
temporary backend outages, like a RAID disk or Ceph node failure, where we know we will definitely
exceed the 5 second timeout and want to reset the sessions. ESXi does quite well when recovering
from APD conditions, but this does not seem to be one of those situations.

Steve, I was testing the same Pacemaker+DRBD setup as you in 2011 and decided to rewrite the target
resource agent from scratch. The original one was too unreliable and slow. (Sorry, I cannot provide
it to the public.) Note that I never saw ABORT_TASKs when running this setup on our Dell hardware.

Martin
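For reference, a sketch of setting the attribute under discussion (the IQN and configfs path are placeholders):

```shell
# Set the per-TPG queue depth with targetcli
targetcli /iscsi/iqn.2003-01.org.example:target1/tpg1 \
    set attribute default_cmdsn_depth=16

# Or write it directly via configfs
echo 16 > /sys/kernel/config/target/iscsi/<iqn>/tpgt_1/attrib/default_cmdsn_depth
```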

On 19.8.2015 at 10:22, Steve Beaudry wrote:
> Thanks Nicolas,
>
>    I'll modify the resource agent script so that the default_cmdsn_depth can be set, and reduce
> the value to 16, based on your recommendation, and see what impact it has.
>
>    I do still believe that we're talking about two different problems, one being performance and
> outstanding I/Os timing out, and the other being a seeming incompatibility between LIO and ESX
> with regard to handling sessions when VMware decides to restart a session, which it does for a
> number of reasons (really, in response to any number of SCSI errors that pop up).
>
> ...Steve...
>
>
> -------- Original message --------
(Continue reading)

Nicholas A. Bellinger | 19 Aug 07:48 2015

[GIT PULL] target fixes for v4.2-rc7

Hi Linus,

Here are the outstanding target-pending fixes for v4.2-rc7 code.

Please pull from:

  git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending.git master

It contains a v4.2-rc specific RCU module unload regression bug-fix, a
long-standing iscsi-target bug-fix for duplicate target_xfer_tags during
NOP processing from Alexei, and two more small REPORT_LUNs emulation
related patches to make Solaris FC host LUN scanning happy from Roland.

There is also one patch not included that allows target-core to limit
the number of fabric driver SGLs per I/O request using residuals, that
is currently required as a work-around for FC hosts which don't honor
EVPD block-limits settings.  At this point, it will most likely become
for-next material.

Thank you,

--nab

Alexei Potashnik (1):
  target/iscsi: Fix double free of a TUR followed by a solicited NOPOUT

Nicholas Bellinger (1):
  target: Perform RCU callback barrier before backend/fabric unload

Roland Dreier (2):
(Continue reading)

John Sullivan | 15 Aug 20:02 2015

Debugging QLogic 8152 target and initiator, no link.


Hi,

I am looking for general help to debug a non-working FC/FCoE setup.

[ if there is a better mailing list for this, please let me know. ]

I have two systems with QLE8152 cards.  The cards are connected with an
Intel twinax SFP+ cable.  LAN networking between the systems appears to
work fine, but I can't get a working target/initiator connection.

The QLE8152 is a converged networking card with dual ethernet and fibre
channel functionality.  The LIO fibre channel page claims the QLE81xxx
cards are supported as of Linux 3.9.  I have tested with a variety of
kernel versions from 3.18.17 to 4.1.3.

The initiator system comes up and recognizes the card as an HBA.  The
target system (with qla2xxx.qlini_mode = disabled) comes up and I can
configure the target and luns with targetcli.

However, the port state of the cards never proceeds past "Linkdown".

At the time the target is configured in targetcli, I see the
kernel message:

   > qla2xxx [0000:02:00.2]-00af:0: Performing ISP error recovery - ha=ffff880414078000.
   > qla2xxx [0000:02:00.2]-8038:0: Cable is unplugged...

The QLogic console CLI tool reports the HBA status on both systems as
"Loop Down, Diagnostic Mode".  At boot this starts at "Link Down".
(Continue reading)

