Mike Christie | 17 Apr 19:53 2014

Re: Target reboot -> iscsiadm rescan Stuck

Thanks. This makes sense and is more like what I was expecting.

iscsiadm is stuck waiting for the scan to complete. It might have sent some 
inquiry and maybe report luns commands (can't tell from the trace below how far 
we got). While this scan is going on, a couple of mutexes are held. This is why, 
if you did try a logout command later, we would see the other hang you sent.
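In the meantime, the in-kernel stack of the stuck rescan process itself can 
usually be read directly; a minimal sketch (assumes /proc/<pid>/stack is 
available and only one matching process):

# print the kernel stack of the hung iscsiadm rescan
cat /proc/$(pgrep -f 'iscsiadm.*rescan')/stack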

To see why we are hanging in the scan, could you enable iscsi logging 
and send the output from when you run the iscsiadm rescan command:

echo 1 > /sys/module/libiscsi/parameters/debug_libiscsi_conn
echo 1 > /sys/module/libiscsi/parameters/debug_libiscsi_session
echo 1 > /sys/module/libiscsi/parameters/debug_libiscsi_eh
echo 1 > /sys/module/libiscsi_tcp/parameters/debug_libiscsi_tcp
echo 1 > /sys/module/iscsi_tcp/parameters/debug_iscsi_tcp

Could you also enable scsi scan logging? If you do not know how to do 
that, I attached the scsi_logging_level script from the s390 tools 
(works on any arch). Just run

scsi_logging_level --scan 7 -s
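Equivalently, if the script is not handy, the same thing can be set through 
sysctl; a sketch, assuming the usual 3-bit SCAN field at shift 6 in the scsi 
logging bitmask (7 << 6 = 448):

# set the SCSI scan logging level to 7 via the logging_level bitmask
sysctl -w dev.scsi.logging_level=448
# or, equivalently:
echo 448 > /proc/sys/dev/scsi/logging_level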

On 4/17/14, 6:14 AM, Cale, Yonatan wrote:
> Hi Mike,
>
> Regarding the analysis of waiting-tasks.txt, I'd like to add some information about the issue:
>
> After running the "rescan" that gets stuck, any other iscsiadm command we try running gets stuck too. I'm
> guessing that this is because the first command locks some mutex that all other commands wait on. (I didn't
> verify this, it's mainly a guess.)

Mike Christie | 14 Apr 19:07 2014

Re: Target reboot -> iscsiadm rescan Stuck

On 04/13/2014 07:29 AM, Cale, Yonatan wrote:

> # cat waiting-tasks.txt
> SysRq : Show Blocked State
>   task                        PC stack   pid father
> iscsid          D 0000000000000000     0  2842      1 0x00000000
>  ffff880137f83918 0000000000000086 ffff88010e3be2d0 ffff880137f82010
>  0000000000004000 ffff88013b2b8c40 ffff880137f83fd8 ffff880137f83fd8
>  0000000000000000 ffff88013b2b8c40 ffffffff81a0b020 ffff88013b2b8ed0
> Call Trace:
>  [<ffffffff81273a97>] ? kobject_put+0x27/0x60
>  [<ffffffff812dc447>] ? put_device+0x17/0x20
>  [<ffffffff8103418f>] ? complete+0x4f/0x60
>  [<ffffffff815ed07f>] schedule+0x3f/0x60
>  [<ffffffff815eda02>] __mutex_lock_slowpath+0x102/0x180
>  [<ffffffff815edf0b>] mutex_lock+0x2b/0x50
>  [<ffffffffa0011a97>] __iscsi_unbind_session+0x67/0x160 [scsi_transport_iscsi]
>  [<ffffffffa0011ca1>] iscsi_remove_session+0x111/0x1f0 [scsi_transport_iscsi]
>  [<ffffffffa0011d96>] iscsi_destroy_session+0x16/0x60 [scsi_transport_iscsi]
>  [<ffffffffa002573d>] iscsi_session_teardown+0x9d/0xd0 [libiscsi]
>  [<ffffffffa0032300>] iscsi_sw_tcp_session_destroy+0x50/0x70 [iscsi_tcp]
>  [<ffffffffa0012c9d>] iscsi_if_rx+0x7dd/0xaa0 [scsi_transport_iscsi]
>  [<ffffffff814f50ee>] netlink_unicast+0x2ae/0x2c0
>  [<ffffffff814d11dc>] ? memcpy_fromiovec+0x7c/0xa0
>  [<ffffffff814f5aae>] netlink_sendmsg+0x33e/0x380
>  [<ffffffff814c55f8>] sock_sendmsg+0xe8/0x120
>  [<ffffffff811078bf>] ? do_lookup+0xcf/0x360
>  [<ffffffff8111ad6f>] ? mntput+0x1f/0x40
>  [<ffffffff81107012>] ? path_put+0x22/0x30
>  [<ffffffff814c512b>] ? move_addr_to_kernel+0x6b/0x70

Mike Christie | 11 Apr 18:56 2014

Re: Very strange behavior - 4 devices but only one nic getting traffic?

On 04/11/2014 08:04 AM, Eric Raskin wrote:
> Thanks.  I see that I missed that part of the setup. I guess I thought
> that the name of the connection was what matched it to the device.
> 
> I assume that "netdev" is the ifconfig device name (eth0, eth1, etc.),
> right?
> 

Yes.
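For reference, a minimal sketch of binding one iface per NIC and logging in 
through each of them (the iface names and portal address are taken from this 
thread and are only illustrative):

# create an iface record per NIC and bind it to the network device
iscsiadm -m iface -I eth0 -o new
iscsiadm -m iface -I eth0 -o update -n iface.net_ifacename -v eth0
# repeat the two commands above for eth1, eth2 and eth3, then rediscover and
# log in through every bound iface
iscsiadm -m discovery -t st -p 192.168.100.2:3260 -I eth0 -I eth1 -I eth2 -I eth3
iscsiadm -m node -T iqn.2001-05.com.equallogic:0-8a0906-988411502-b4727367bde4852c-ovsdata -l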


Eric Raskin | 5 Apr 23:59 2014

Very strange behavior - 4 devices but only one nic getting traffic?

I have an Equallogic SAN and an Oracle VM 2.2 server (Redhat 2.6.18-128 kernel).  It is an old box, running iscsi-initiator-utils-6.2.0.872-6.0.2.el5.  Configuration is controlled by Oracle.

I have a disk configured with 4 nics:

# iscsiadm -m node -P 1
Target: iqn.2001-05.com.equallogic:0-8a0906-988411502-b4727367bde4852c-ovsdata
Portal: 192.168.100.2:3260,1
Iface Name: eth2
Iface Name: eth1
Iface Name: eth3
Iface Name: eth0

# iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-871
version 2.0-872
Target: iqn.2001-05.com.equallogic:0-8a0906-988411502-b4727367bde4852c-ovsdata
Current Portal: 192.168.100.24:3260,1
Persistent Portal: 192.168.100.2:3260,1
**********
Interface:
**********
Iface Name: eth1
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:e8df89ac4ae1
Iface IPaddress: 192.168.100.71
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 65536
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: No
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 4 State: running
scsi4 Channel 00 Id 0 Lun: 0
Attached scsi disk sdb State: running

**********
Interface:
**********
Iface Name: eth0
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:e8df89ac4ae1
Iface IPaddress: 192.168.100.71
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 3
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 65536
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: No
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 6 State: running
scsi6 Channel 00 Id 0 Lun: 0
Attached scsi disk sdc State: running
Current Portal: 192.168.100.23:3260,1
Persistent Portal: 192.168.100.2:3260,1
**********
Interface:
**********
Iface Name: eth3
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:e8df89ac4ae1
Iface IPaddress: 192.168.100.71
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 65536
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: No
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 5 State: running
scsi5 Channel 00 Id 0 Lun: 0
Attached scsi disk sdd State: running
Current Portal: 192.168.100.21:3260,1
Persistent Portal: 192.168.100.2:3260,1
**********
Interface:
**********
Iface Name: eth2
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:e8df89ac4ae1
Iface IPaddress: 192.168.100.71
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 4
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 65536
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: No
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 7 State: running
scsi7 Channel 00 Id 0 Lun: 0
Attached scsi disk sde State: running

I have multipath configured for round-robin:

# multipath -ll
OVSData (36090a028501184982c85e4bd677372b4) dm-0 EQLOGIC,100E-00
[size=1.5T][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=4][active]
 \_ 4:0:0:0 sdb 8:16  [active][ready]
 \_ 6:0:0:0 sdc 8:32  [active][ready]
 \_ 5:0:0:0 sdd 8:48  [active][ready]
 \_ 7:0:0:0 sde 8:64  [active][ready]

I have an OCFS2 fsck running at the moment.  Traffic is going over the 4 disk devices:

# iostat -d 2 2
Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               0.00         0.00         0.00          0          0
sda1              0.00         0.00         0.00          0          0
sda2              0.00         0.00         0.00          0          0
sda3              0.00         0.00         0.00          0          0
sda4              0.00         0.00         0.00          0          0
sda5              0.00         0.00         0.00          0          0
sdb             535.00         0.00      4280.00          0       4280
sdc             528.00         0.00      4224.00          0       4224
sdd             540.00         2.00      4305.00          2       4305
dm-0           2148.00         2.00     17169.00          2      17169
sde             544.00         0.00      4352.00          0       4352

When I check traffic on the individual nics, the traffic is only going out on 1 nic (eth0)!  How can this be???  Isn't each nic attached to one of the disk devices???
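For what it's worth, with Iface Netdev left empty the routing table presumably 
picks the egress interface on its own; which interface it picks per portal can 
be checked with something like this (a sketch, portal addresses taken from the 
session output above):

# ask the routing table which interface and source it would use for each portal
ip route get 192.168.100.21
ip route get 192.168.100.23
ip route get 192.168.100.24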

myselfandfredy | 3 Apr 10:42 2014

Target reboot -> iscsiadm rescan Stuck

Hi,

In a scenario where we reboot a remote target and then run "iscsiadm -m session --rescan" on the initiator,
iscsiadm starts taking around 85% CPU. Any other iscsiadm command we send afterwards also gets stuck (probably waiting on a mutex), and the "rescan" iscsiadm process can't be killed (neither with kill -9 nor -11), so it's probably stuck in the kernel.
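For reference, the blocked-task state showing where the rescan is stuck in the 
kernel can be captured with sysrq; a minimal sketch (assumes CONFIG_MAGIC_SYSRQ 
and a writable /proc/sysrq-trigger):

# allow all sysrq functions, then dump blocked (D state) tasks to the kernel log
echo 1 > /proc/sys/kernel/sysrq
echo w > /proc/sysrq-trigger
dmesg > waiting-tasks.txt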

Is this a known/fixed issue?

Logs: Just ask. We have plenty.

open-iscsi version: 2.0.873 (from Debian)

Have a nice day
:)

Rushikesh Jadhav | 3 Apr 16:32 2014

iSCSI Driver hangup for IBM V3700

Hello Forum,
 
We have an IBM V3700 and 4 XenServer 6.2 hosts. We are having trouble adding an SR from it, and below are the logs generated in kern.log:
 
n2 kernel: [429671.643262]  connection19:0: pdu (op 0x2c itt 0x1) rejected. Reason code 0x7
n2 kernel: [429671.643302]  connection19:0: pdu (op 0x2a itt 0x1) rejected. Reason code 0x4
n2 kernel: [429671.643354]  connection19:0: pdu (op 0x2b itt 0x1) rejected. Reason code 0x4
n2 kernel: [429671.643524]  connection19:0: pdu (op 0x2f itt 0x1) rejected. Reason code 0x4
n2 kernel: [429671.643554]  connection19:0: pdu (op 0x2e itt 0x1) rejected. Reason code 0x4
n2 kernel: [429671.643703]  connection19:0: pdu (op 0x2d itt 0x1) rejected. Reason code 0x4
n2 kernel: [429671.643750]  connection19:0: pdu (op 0x33 itt 0x1) rejected. Reason code 0x4
n2 kernel: [429671.643846]  connection19:0: pdu (op 0x35 itt 0x1) rejected. Reason code 0x7
 
Unfortunately, VMs running from existing iSCSI LUNs see IO timeouts because the LUNs on the host hang. This hang occurs as soon as the new LUN is being added.

Existing LUNs can be recovered only if we restart iscsid, and we have no success in adding additional LUNs.

The iscsid is running with default parameters.

The iscsi module loses access to all LUNs, even non-IBM ones, as soon as it tries to mount a LUN from the IBM V3700. The iscsi rpm is open-iscsi-2.0.871-0.20.3.xs1120.
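For reference, more detail about the rejected PDUs can sometimes be obtained by 
turning on initiator-side libiscsi debugging before reproducing the hang; a 
sketch using the same knobs listed elsewhere in this digest (parameter names may 
differ on older XenServer kernels):

echo 1 > /sys/module/libiscsi/parameters/debug_libiscsi_conn
echo 1 > /sys/module/libiscsi/parameters/debug_libiscsi_session
echo 1 > /sys/module/libiscsi/parameters/debug_libiscsi_eh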


Regards,
Rishi

Or Gerlitz | 6 Apr 15:11 2014

few data-len issues for processing of scsi command responses

Hi Mike,

The kernel code path iscsi_complete_pdu --> __iscsi_complete_pdu --> iscsi_scsi_cmd_rsp
uses a "datalen" value which should account for the length of the data 
received from the target.

In iser/iscsi_iser_recv() we don't skip the AHS in case the target sent 
it, and I'd like to fix that.
I wasn't fully sure what is done today in iscsi_tcp... I see a few hits in 
libiscsi_tcp.c for ahslen but didn't manage to spot the place where the 
AHS is being skipped. Is it skipped at all?

Another related issue is with this code in iscsi_scsi_cmd_rsp():

invalid_datalen:
			iscsi_conn_printk(KERN_ERR, conn,
					  "Got CHECK_CONDITION but invalid data "
					  "buffer size of %d\n", datalen);
			sc->result = DID_BAD_TARGET << 16;
			goto out;
		}

		senselen = get_unaligned_be16(data);
		if (datalen < senselen)
			goto invalid_datalen;

Partial sense can come into play, e.g. with a target (e.g. a non-Linux one) 
sending sense data which is bigger than SCSI_SENSE_BUFFERSIZE, or because the 
initiator's RDSL (MaxRecvDataSegmentLength) doesn't let it send the whole 
sense, etc.

This code isn't willing to provide the SCSI midlayer with partial sense; why 
not allow that?

Or.


陈江宏 | 27 Mar 04:11 2014

Is it possible to adjust the target connect priority

I have a Synology data station which has two Ethernet ports: eth0's IP is 172.18.133.210 and eth1's IP is 172.18.133.218.
eth0 is connected to the network, but eth1 is not. The following is the information:

# iscsiadm -m node 
172.18.133.218:3260,0 iqn.2010-10.synology-iscsi:virtualdisk.whxg-mail
172.18.133.210:3260,0 iqn.2010-10.synology-iscsi:virtualdisk.whxg-mail

# iscsiadm -m node -T iqn.2010-10.synology-iscsi:virtualdisk.whxg-mail 172.18.133.210:3260 -l
Logging in to [iface: default, target: iqn.2010-10.synology-iscsi:virtualdisk.whxg-mail, portal: 172.18.133.218,3260] (multiple)
Logging in to [iface: default, target: iqn.2010-10.synology-iscsi:virtualdisk.whxg-mail, portal: 172.18.133.210,3260] (multiple)
Login to [iface: default, target: iqn.2010-10.synology-iscsi:virtualdisk.whxg-mail, portal: 172.18.133.218,3260] successful.
iscsiadm: Could not login to [iface: default, target: iqn.2010-10.synology-iscsi:virtualdisk.whxg-mail, portal: 172.18.133.210,3260].
iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
iscsiadm: Could not log into all portals

Is it possible to adjust the login order, so that the 172.18.133.210 portal is tried before the 172.18.133.218 one?

Although I can use the following command to delete the target record for the 172.18.133.218 portal:
#iscsiadm -m node -o delete -T iqn.2010-10.synology-iscsi:virtualdisk.whxg-mail -p 172.18.133.218:3260

when I create a virtual storage pool in virsh, the system automatically connects to the target at 172.18.133.218.
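Two things that might narrow this down, sketched here but not tested against 
this Synology box: log in with an explicit -p so only the reachable portal is 
used, and mark the unreachable portal's record as manual so automatic logins at 
startup skip it:

# log in to the reachable portal only
iscsiadm -m node -T iqn.2010-10.synology-iscsi:virtualdisk.whxg-mail -p 172.18.133.210:3260 -l
# keep the other record but exclude it from automatic startup logins
iscsiadm -m node -T iqn.2010-10.synology-iscsi:virtualdisk.whxg-mail -p 172.18.133.218:3260 -o update -n node.startup -v manual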

John Soni Jose | 21 Mar 07:20 2014

[PATCH 0/2] iscsi tools: Fixes for be2iscsi and StatSN

 Following are the patches, which fix:
  - be2iscsi MaxXmitDataLength when target returns MRDSL=0
  - Fix StatSN data type

John Soni Jose (2):
  be2iscsi: Fix MaxXmitDataLenght of the driver.
  Fix StatSN in Open-iSCSI Stack.

 usr/be2iscsi.c         |    4 ----
 usr/initiator_common.c |    2 +-
 usr/iscsi_ipc.h        |    1 +
 usr/netlink.c          |    3 +++
 4 files changed, 5 insertions(+), 5 deletions(-)

-- 
1.7.4.1


Sony John-N | 17 Mar 08:10 2014

RE: Error on discovery using emulex be2iscsi

Hi guys,

 

I am hitting an issue where I cannot even discover targets using the be2iscsi driver.

So if someone is using Emulex on HP ProLiant servers and has any hint on what is going on, please drop me a line.

 

The following is the output I get when I load the kernel driver and try to discover against a target running ilo (tgt does the same).

 

My system 

 

Kernel : Linux 3.9.6-030906-generic #201306131535 SMP Thu Jun 13 19:35:54 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

Controller : Mass storage controller: Emulex Corporation OneConnect 10Gb iSCSI Initiator (rev 02)

Emulex driver: Emulex OneConnect Open-iSCSI Driver version 10.0.272.0

Distro : Ubuntu 12.10

 

Thanks in advance!

 

 

------

 

modprobe be2iscsi


[10835.048474] scsi17 : Emulex 10Gbe open-iscsi Initiator Driver
[10835.049145] be2iscsi 0000:05:00.2: irq 112 for MSI/MSI-X
[10835.049164] be2iscsi 0000:05:00.2: irq 113 for MSI/MSI-X
[10835.049188] be2iscsi 0000:05:00.2: irq 114 for MSI/MSI-X
[10835.049204] be2iscsi 0000:05:00.2: irq 115 for MSI/MSI-X
[10835.049220] be2iscsi 0000:05:00.2: irq 116 for MSI/MSI-X
[10835.049236] be2iscsi 0000:05:00.2: irq 117 for MSI/MSI-X
[10835.049251] be2iscsi 0000:05:00.2: irq 118 for MSI/MSI-X
[10835.049266] be2iscsi 0000:05:00.2: irq 119 for MSI/MSI-X
[10840.391427] scsi host17: BC_174 : MBX Cmd Completion timed out
[10840.402716] scsi host17: BG_1163 : MBX CMD get_boot_target Failed
[10840.413885] scsi host17: BM_3880 : No boot session
[10840.436963] scsi host17: BC_174 : MBX Cmd Completion timed out
[10840.447943] scsi host17: BG_751 : mgmt_exec_nonemb_cmd Failed status
[10840.470781] scsi host17: BC_174 : MBX Cmd Completion timed out
[10840.481637] scsi host17: BG_751 : mgmt_exec_nonemb_cmd Failed status

iscsiadm -m discovery -I be2iscsi.38:ea:a7:a5:62:89.ipv4.0 -p 10.70.3.250 -t st

Mar 13 17:49:56 e31-host02 kernel: [10946.559435] scsi host15: BC_206
: MBX Cmd Failed for Subsys : 6 Opcode : 52 with Status : 32 and
Extd_Status : 1
Mar 13 17:49:56 e31-host02 kernel: [10946.569905] scsi host15: BS_1073
: mgmt_open_connection Failed
Mar 13 17:49:56 e31-host02 kernel: [10946.580209] scsi host15: BS_1141
: Failed in beiscsi_open_conn
iscsiadm: Received iferror -16: Device or resource busy.
iscsiadm: Connection to discovery portal 10.70.3.250 failed:
encountered connection failure
Mar 13 17:49:57 e31-host02 kernel: [10947.615583] scsi host15: BC_206
: MBX Cmd Failed for Subsys : 6 Opcode : 52 with Status : 32 and
Extd_Status : 1
Mar 13 17:49:57 e31-host02 kernel: [10947.626074] scsi host15: BS_1073
: mgmt_open_connection Failed
Mar 13 17:49:57 e31-host02 kernel: [10947.636385] scsi host15: BS_1141
: Failed in beiscsi_open_conn
iscsiadm: Received iferror -16: Device or resource busy.
iscsiadm: Connection to discovery portal 10.70.3.250 failed:
encountered connection failure
Mar 13 17:49:58 e31-host02 kernel: [10948.672143] scsi host15: BC_206
: MBX Cmd Failed for Subsys : 6 Opcode : 52 with Status : 32 and
Extd_Status : 1
Mar 13 17:49:58 e31-host02 kernel: [10948.682591] scsi host15: BS_1073
: mgmt_open_connection Failed
Mar 13 17:49:58 e31-host02 kernel: [10948.692899] scsi host15: BS_1141
: Failed in beiscsi_open_conn
iscsiadm: Received iferror -16: Device or resource busy.
iscsiadm: Connection to discovery portal 10.70.3.250 failed:
encountered connection failure
Mar 13 17:49:59 e31-host02 kernel: [10949.730847] scsi host15: BC_206
: MBX Cmd Failed for Subsys : 6 Opcode : 52 with Status : 32 and
Extd_Status : 1
Mar 13 17:49:59 e31-host02 kernel: [10949.741342] scsi host15: BS_1073
: mgmt_open_connection Failed
Mar 13 17:49:59 e31-host02 kernel: [10949.751655] scsi host15: BS_1141
: Failed in beiscsi_open_conn
iscsiadm: Received iferror -16: Device or resource busy.
iscsiadm: Connection to discovery portal 10.70.3.250 failed:
encountered connection failure
Mar 13 17:50:00 e31-host02 snmpd[10412]: error on subcontainer
'ia_addr' insert (-1)
Mar 13 17:50:00  snmpd[10412]: last message repeated 2 times
Mar 13 17:50:00 e31-host02 kernel: [10950.787615] scsi host15: BC_206
: MBX Cmd Failed for Subsys : 6 Opcode : 52 with Status : 32 and
Extd_Status : 1
Mar 13 17:50:00 e31-host02 kernel: [10950.798721] scsi host15: BS_1073
: mgmt_open_connection Failed
Mar 13 17:50:00 e31-host02 kernel: [10950.809107] scsi host15: BS_1141
: Failed in beiscsi_open_conn
iscsiadm: Received iferror -16: Device or resource busy.
iscsiadm: Connection to discovery portal 10.70.3.250 failed:
encountered connection failure
Mar 13 17:50:01 e31-host02 kernel: [10951.845008] scsi host15: BC_206
: MBX Cmd Failed for Subsys : 6 Opcode : 52 with Status : 32 and
Extd_Status : 1
Mar 13 17:50:01 e31-host02 kernel: [10951.855527] scsi host15: BS_1073
: mgmt_open_connection Failed
Mar 13 17:50:01 e31-host02 kernel: [10951.865858] scsi host15: BS_1141
: Failed in beiscsi_open_conn
iscsiadm: Received iferror -16: Device or resource busy.
iscsiadm: Connection to discovery portal 10.70.3.250 failed:
encountered connection failure
iscsiadm: connection login retries (reopen_max) 5 exceeded
iscsiadm: No portals found

 

 

--

Hi

 

Looks like the initiator is not able to establish a TCP connection with the target.

Can the wire trace taken during the discovery command be captured and provided for debugging the issue?

I had a few queries regarding the setup:

- Is the initiator name set under /etc/iscsi?

- Is the IP address set on the initiator port? If it is, can a ping from the target machine to the initiator IP be tried out?

- Please provide the wire trace from when the discovery command was issued (see the sketch below).
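A rough sketch of capturing that trace on the initiator side (the interface name 
is an assumption; with a full-offload HBA like the OneConnect the discovery 
connection may not pass through the host network stack, in which case a 
mirror/span port on the switch is needed instead):

# capture the discovery exchange with the portal while re-running iscsiadm
tcpdump -i ethX -s 0 -w be2iscsi-discovery.pcap host 10.70.3.250 and tcp port 3260
# in another shell:
iscsiadm -m discovery -I be2iscsi.38:ea:a7:a5:62:89.ipv4.0 -p 10.70.3.250 -t st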

 

-Sony

 


frank.vandamme | 19 Mar 15:55 2014

connection back, disk gone

Hi list,

I had some trouble with an iscsi disk that lost its connection to the target. I am running xfs on lvm on iscsi (on Debian 7.0 amd64). The connection came back up OK after 10 minutes or so:

iscsid: connection1:0 is operational after recovery (31 attempts)

but the disk only came back after I restarted open-iscsid using the init script. After that, the disk showed up in my kernel log, LVM got back on its feet and the file system was mounted again.

I'm interested to know why, when the connection is restored, the session associated with it isn't restored as well? After all, what you want after a couple of minutes of network failure is just to have your disk back, correct?
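For reference, a minimal sketch (the device name and timeout value are 
assumptions) of what can be checked before resorting to a service restart:

# compare the session state with the attached device state
iscsiadm -m session -P 3 | grep -iE 'state|attached'
# the SCSI device may have been taken offline during the outage
cat /sys/block/sdX/device/state
echo running > /sys/block/sdX/device/state
# optionally allow more time for recovery before I/O is failed up the stack
iscsiadm -m node -T <target-iqn> -o update -n node.session.timeo.replacement_timeout -v 86400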

Greetings,

--
Frank Van Damme

