salim | 17 Nov 22:17 2014

Re:

Good day. This email is a sequel to an earlier message to which you have
not responded. I have a personal charity project which I would like you to
execute on my behalf. Please kindly get back to me with this code:
MHR/3910/2014. You can reach me at mrsalimqadri@gmail.com.

Thank you

Salim Qadri
Roland Dreier | 14 Nov 21:54 2014

[PATCH 1/2] target: Fix target_core_register_fabric() for built-in fabric modules

From: Roland Dreier <roland@purestorage.com>

If we try to create a fabric directory in configfs for one of the
default hard-coded fabric modules (iscsi and loopback), and that
fabric is actually built into the kernel, then the operation will
spuriously fail because request_module() (for the code that's already
linked into the kernel) fails.

Fix this by running the autoprobing code only if an initial
target_core_get_fabric() fails.
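
For illustration, here is a minimal sketch of the try-then-autoload
pattern the fix describes. The helper name is hypothetical, and the
module-name format is illustrative only (the real code uses hardcoded
request_module() calls for iscsi and loopback, as the removed comment
in the diff below notes):

	static struct target_fabric_configfs *tf_lookup_or_load(const char *name)
	{
		struct target_fabric_configfs *tf;

		/* First attempt: succeeds immediately for built-in fabrics. */
		tf = target_core_get_fabric(name);
		if (tf)
			return tf;

		/*
		 * Only when the lookup fails do we autoload; a failing
		 * request_module() for code already linked into the
		 * kernel no longer aborts the whole operation.
		 */
		request_module("target_core_%s", name); /* illustrative format */
		return target_core_get_fabric(name);
	}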

Signed-off-by: Roland Dreier <roland@purestorage.com>
---
 drivers/target/target_core_configfs.c | 75 ++++++++++++++++++++---------------
 1 file changed, 42 insertions(+), 33 deletions(-)

diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c
index bf55c5a..85d1e63 100644
--- a/drivers/target/target_core_configfs.c
+++ b/drivers/target/target_core_configfs.c
@@ -126,48 +126,57 @@ static struct config_group *target_core_register_fabric(

 	pr_debug("Target_Core_ConfigFS: REGISTER -> group: %p name:"
 			" %s\n", group, name);
-	/*
-	 * Below are some hardcoded request_module() calls to automatically
-	 * load fabric modules when the following is called:
-	 *
-	 * mkdir -p /sys/kernel/config/target/$MODULE_NAME
-	 *
(Continue reading)

Andy Grover | 13 Nov 21:50 2014

[PATCH 1/3] target: Remove TRANSPORT_LUNFLAGS_READ_WRITE

LUNFLAGS_READ_WRITE is always the inverse of LUNFLAGS_READ_ONLY.

Removing this enum value leaves some spots where a parameter's value
can only be true or false, which we can represent with a bool instead of
a u32. Change it to a bool named "lun_access_ro".
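
As a hedged illustration (simplified from the diff below; the call site
is illustrative, not a line from the patch), the parameter goes from a
bit-tested u32 flags word to a self-documenting bool:

	/* Before: a u32 flags word tested bit-wise by callers. */
	void core_update_device_list_access(u32 mapped_lun, u32 lun_access,
					    struct se_node_acl *nacl);

	/* After: a bool that can only express read-only vs. read-write. */
	void core_update_device_list_access(u32 mapped_lun, bool lun_access_ro,
					    struct se_node_acl *nacl);

	/* Call sites now read naturally: */
	core_update_device_list_access(mapped_lun, true /* read-only */, nacl);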

Signed-off-by: Andy Grover <agrover@redhat.com>
---
 drivers/target/target_core_device.c          | 46 ++++++++++------------------
 drivers/target/target_core_fabric_configfs.c | 18 +++++------
 drivers/target/target_core_internal.h        |  6 ++--
 drivers/target/target_core_tpg.c             | 16 +++++-----
 include/target/target_core_base.h            |  1 -
 5 files changed, 34 insertions(+), 53 deletions(-)

diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
index c45f9e9..f32a02d 100644
--- a/drivers/target/target_core_device.c
+++ b/drivers/target/target_core_device.c
@@ -275,20 +275,17 @@ int core_free_device_list_for_node(

 void core_update_device_list_access(
 	u32 mapped_lun,
-	u32 lun_access,
+	bool lun_access_ro,
 	struct se_node_acl *nacl)
 {
 	struct se_dev_entry *deve;

 	spin_lock_irq(&nacl->device_list_lock);
(Continue reading)

Lance Gropper | 12 Nov 22:52 2014

CentOS 7.0 - Qlogic in target mode - targetcli not seeing it

Hello:

I have the Qlogic 2562 in target mode - i.e.:

[root@KBNI /]# cd /sys/module/qla2xxx/parameters/
[root@KBNI parameters]# cat qlini_mode
disabled
[root@KBNI parameters]#
But targetcli won't see it:

[root@KBNI parameters]# targetcli
targetcli shell version 2.1.fb34
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> ls
o- / ..................................................................... [...]
   o- backstores .......................................................... [...]
   | o- block .............................................. [Storage Objects: 0]
   | o- fileio ............................................. [Storage Objects: 0]
   | o- pscsi .............................................. [Storage Objects: 0]
   | o- ramdisk ............................................ [Storage Objects: 0]
   o- iscsi ........................................................ 
(Continue reading)

Sagi Grimberg | 11 Nov 17:17 2014

[PATCH 00/21] iser target fixes for 3.19

Hey Nic,

This series mainly consists of error flow fixes:
Patches 1-2: Some logging refactoring, which makes it much easier to
	instruct a user to increase the log level (a sketch of the
	module-parameter pattern follows this list).
Patches 3-10: Some error flow fixes for live target stack shutdown
	and cable pull with stress IO scenarios.
Patches 11-15: Remove t10_pi attribute and fix a crash due to a bad
	dereference.
Patches 16-19: Fixes in the area of bond failover scenarios.
Patch 20: Workaround for live target stack unload in the presence
	of many (50+) active sessions.
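
For illustration, a minimal sketch of the module-parameter logging
pattern referenced in patches 1-2 (all names and levels here are
illustrative, not the patch's actual code):

	/* Runtime-adjustable via /sys/module/<mod>/parameters/debug_level. */
	static unsigned int debug_level = 1;
	module_param(debug_level, uint, 0644);
	MODULE_PARM_DESC(debug_level, "debug level (default: 1)");

	/* Gate verbose prints behind the parameter instead of rebuilding. */
	#define isert_dbg(fmt, ...)					\
		do {							\
			if (debug_level >= 2)				\
				pr_info("isert: " fmt, ##__VA_ARGS__);	\
		} while (0)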

While this set makes things better, there is still some work
left to do especially in the area of multi-session error flows.

The set applies cleanly on the master branch (with or without the isert
patch from Chris Moore).

Sagi Grimberg (20):
  iser-target: Use debug_level parameter to control logging level
  iser-target: Adjust log levels and prettify some prints
  iscsi-target: Add call to wait_conn in establishment error flow
  iser-target: Destroy the connection when getting a connect error
    event
  iser-target: Initiate connection termination only once
  iser-target: Don't defer disconnected handler to a work
  iser-target: Reject connect request in failure path
  iser-target: Introduce state ISER_CONN_FULL_FEATURE
  iser-target: Parallelize CM connection establishment
(Continue reading)

Suresh Babu Kandukuru | 10 Nov 16:12 2014

RE: Is having a QLogic HBA as initiator + qla2xxx LIO Target mode possible?


Hi All, how do I run the qla drivers in initiator mode and target mode at the same time on the same
system, on different FC ports? The SCST stack supports qla2x00tgt, and alongside it we can run qla2xxx in initiator mode.

So far I have not found anything like this in the LIO target stack. Any pointers would be appreciated. Thanks in advance.

/Suresh
Turbo Fredriksson | 8 Nov 16:14 2014

LIO target stop responding on 3.18-rc3

As Subj… I can do a discovery on the targets just fine from another host for 'a while'
after the target server has started. But after 'a while' it just stops responding:

Negotia:/etc/iscsi# iscsiadm -m discovery -t st -p celia:3260     
iscsiadm: connect to 192.168.69.8 timed out
iscsiadm: connect to 192.168.69.8 timed out
iscsiadm: connect to 192.168.69.8 timed out
iscsiadm: connect to 192.168.69.8 timed out
iscsiadm: connect to 192.168.69.8 timed out
iscsiadm: connect to 192.168.69.8 timed out
iscsiadm: connection login retries (reopen_max) 5 exceeded
iscsiadm: Could not perform SendTargets discovery: encountered connection failure

And yet, it still … "supplies" (?) the already logged-in targets.

Reboot 'celia', and all is well. For 'a while' (we're talking minutes).

Is there some config option I've missed that hasn't been documented?

celia# lsmod | grep target
target_core_user        8782  0
uio                     7072  1 target_core_user
target_core_pscsi       7901  0
target_core_file        7831  0
target_core_iblock      7155  82
iscsi_target_mod      178301  657
target_core_mod       215674  998 target_core_iblock,tcm_qla2xxx,target_core_pscsi,iscsi_target_mod,tcm_fc,ib_srpt,target_core_file,target_core_user,tcm_loop
configfs               20185  5 tcm_qla2xxx,iscsi_target_mod,target_core_mod,netconsole
scsi_mod              174355  16 sg,qla2xxx,scsi_transport_fc,scsi_transport_sas,libfc,mvsas,usb_storage,tcm_qla2xxx,target_core_pscsi,libata,libsas,iscsi_target_mod,target_core_mod,sd_mod,tcm_fc,tcm_loop

(Continue reading)

Steve Beaudry | 6 Nov 18:05 2014

tcm_node difficulties when used with a Pacemaker cluster, and suggested patches.

Good day folks,

   First off, please let me thank you for the fine work that you've done with the Linux-IO Target
project.  We have used it as the foundation for our central SAN storage at Royal Roads University,
supporting our VMWare ESX virtual infrastructure.  We make use of LIO for its iSCSI target, in
combination with DRBD, LVM and Pacemaker, to create the redundant, replicated iSCSI stack described in
http://www.linbit.com/en/downloads/tech-guides?download=9:highly-available-iscsi-with-drbd-and-pacemaker
(I'm sure you're well familiar with it).

  We've gone well beyond the basic setup described in that document, and using LIO iSCSI in the cluster
the way we are, we've encountered a few issues along the way. A couple of them seemed worth
sending back to you in the form of suggested patches.

   Both patches relate to issues caused when migrating iSCSI resources in the cluster.  I'm fairly sure
that without using a cluster manager to control the iSCSI targets/LUNs, these things would go
unnoticed.  Still, when used within the cluster, the issues result in significant difficulty
migrating resources from one host to another.

   The first issue has to do with starting multiple iSCSI targets/LUNs simultaneously.  In our
configuration, we have up to four targets on a host (each with its own distinct IP address and IQN).  Each
target has only a single LUN presented.  With multiple targets/LUNs running in the same cluster (only
two hosts in the cluster), if one node fails, it forces the simultaneous startup of multiple targets/LUNs
on the other node.  Because the cluster manager simply executes multiple instances of 'tcm_node' to
accomplish the work, multi-threaded-type problems can occur; one way to serialize such invocations is
sketched after the log excerpt below.  We're seeing one of these problems very consistently, and it
manifests itself in the following log entry:

2014-11-05T15:15:37.509489-08:00 capacity-3 iSCSILogicalUnit(lun0_vmCapacity-f)[7421]:
[7717]: ERROR: Traceback (most recent call last):
                File "/usr/sbin/tcm_node", line 754, in <module>
                  main()
                File "/usr/sbin/tcm_node", line 746, in main
                  (options, args) = parser.parse_args()
(Continue reading)
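
(The suggested patch itself is truncated above.) Purely as an
illustration of one conventional fix for this class of race, and not
necessarily what Steve's patch does, concurrent invocations of a
configuration tool can be serialized with an advisory file lock; the
lock path below is hypothetical:

	#include <fcntl.h>
	#include <sys/file.h>
	#include <unistd.h>

	#define TCM_LOCK_PATH "/var/lock/tcm_node.lock"	/* hypothetical */

	/* Returns an fd holding an exclusive advisory lock, or -1 on error. */
	static int tcm_serialize(void)
	{
		int fd = open(TCM_LOCK_PATH, O_CREAT | O_RDWR, 0644);

		if (fd < 0)
			return -1;
		/* Blocks until any concurrent instance releases the lock. */
		if (flock(fd, LOCK_EX) < 0) {
			close(fd);
			return -1;
		}
		return fd;	/* keep open for the duration of the change */
	}

Each instance would take the lock before touching configfs and close the
fd (releasing the lock) when done, turning the simultaneous startups
into a deterministic sequence.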

Benjamin ESTRABAUD | 6 Nov 17:30 2014

LIO iSER small random IO performance.

Hi All,

When using LIO iSER over RoCE, we see variations in 8K read IOPS 
performance depending on the backend storage.

If using a ramdisk backend storage (loop device created atop a 20G tmpfs 
RAM filesystem, or ramdisk_mcp which yields more or less the same 
performance) we get 2.5x less IOPS than when running "fio" locally.

If using a "real" block backend (MD or LV interleaved (RAID 0 or 
interleaved volume) built on top of six Crucial M50 1TB SSDs) we get 3.4x 
less IOPS than when running "fio" locally.

While we expected a small performance degradation between "local" IOs 
and iSER ones, we did not expect to see a gap of 2.5x or 3.5x less IOPS.

Is this expected? It's hard to find proper, unbiased benchmarks that 
compare local IOPS vs iSER IOPS. We don't see this issue when running 
large sequential IOs, where our local bandwidth is equivalent to 
our remote one. We were wondering if there is anything obvious we 
might have overlooked in our configuration. Any ideas would be greatly 
appreciated.

The system configuration is as follows:

Target node (Running LIO):

* "Homemade" buildroot based distribution, Linux 3.10.35 x86_64 (SMP), 
stock Infiniband drivers (*NO* OFED drivers).
* Running on a Xeon E5-2695v2 (2.40Ghz, 12 physical cores, 24 logical 
(Continue reading)

Turbo Fredriksson | 5 Nov 22:01 2014

[PATCH 1/1] Conform to RFC 3720.


RFC 3720 states:
      c) iSCSI names are composed only of displayable characters.  iSCSI
         names allow the use of international character sets but are not
         case sensitive.  No whitespace characters are used in iSCSI
         names.
But lio_node effectively nullifies the "case sensitivity" clause by running
'lower()' on the values.

Remove these, to conform to the RFC.
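
As a hedged illustration of the distinction the RFC draws (this is not
code from lio_node; the struct field is hypothetical and strcasecmp()
covers only ASCII), a conforming implementation stores the name exactly
as entered and folds case only when comparing:

	#include <string.h>
	#include <strings.h>

	/* Store the IQN exactly as the user supplied it... */
	strncpy(tpg->iqn, user_iqn, sizeof(tpg->iqn) - 1);

	/* ...and ignore case only at comparison time. */
	if (strcasecmp(tpg->iqn, other_iqn) == 0) {
		/* same iSCSI name */
	}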

Signed-off-by: Turbo Fredriksson <turbo@bayour.com>
---
 lio-py/lio_node.py |   53 ----------------------------------------------------
 1 file changed, 53 deletions(-)

Chris Moore | 4 Nov 17:28 2014

[PATCH] IB/isert: Adjust CQ size to HW limits


This is the isert version of the patch that Minh Tran submitted for iser.
isert has the same issue of trying to create a CQ with more CQEs than are supported by the hardware.
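
For illustration, the general pattern (visible in the diff below) is to
query the device's limits and clamp the requested CQE count before
creating the CQ. A minimal sketch using the 3.x-era ib_query_device()
and ib_create_cq() interfaces; the surrounding variables are assumed
from context:

	struct ib_device_attr dev_attr;
	int cqe;

	if (ib_query_device(ib_dev, &dev_attr))
		return -ENODEV;

	/* Never request more CQEs than the HCA supports. */
	cqe = min(ISER_MAX_RX_CQ_LEN, dev_attr.max_cqe);
	cq = ib_create_cq(ib_dev, comp_handler, event_handler, ctx, cqe, i);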

Signed-off-by: Chris Moore <chris.moore@emulex.com>

---

diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
index 3effa93..f837a23 100644
--- a/drivers/infiniband/ulp/isert/ib_isert.c
+++ b/drivers/infiniband/ulp/isert/ib_isert.c
@@ -225,12 +225,16 @@ isert_create_device_ib_res(struct isert_device *device)
 	struct isert_cq_desc *cq_desc;
 	struct ib_device_attr *dev_attr;
 	int ret = 0, i, j;
+	int max_rx_cqe, max_tx_cqe;

 	dev_attr = &device->dev_attr;
 	ret = isert_query_device(ib_dev, dev_attr);
 	if (ret)
 		return ret;

+	max_rx_cqe = min(ISER_MAX_RX_CQ_LEN, dev_attr->max_cqe);
+	max_tx_cqe = min(ISER_MAX_TX_CQ_LEN, dev_attr->max_cqe);
+
 	/* asign function handlers */
 	if (dev_attr->device_cap_flags & IB_DEVICE_MEM_MGT_EXTENSIONS &&
 	    dev_attr->device_cap_flags & IB_DEVICE_SIGNATURE_HANDOVER) {
@@ -272,7 +276,7 @@ isert_create_device_ib_res(struct isert_device *device)
(Continue reading)

