loody | 1 Feb 07:44 2012

Re: Kernel coredump about kblockd deadlock


2012/1/16 Jun'ichi Nomura <j-nomura <at>>:
> Hi,
> On 01/15/12 23:45, loody wrote:
>> hi all:
>> 2012/1/15 loody <miloody <at>>:
>>> Dear all:
>>> I am running the kernel on a MIPS CPU.
>>> When I plug in/out a USB stick several times I get the warning
>>> messages appended at the end of this mail.
>>> Is it possibly correlated to the issue below?
>> I want to reproduce the issue from the thread above on my platform,
>> but I have no idea about:
>> 1. dm-multipath
>> 2. dm-round-robin
>> Are they additional modules I need to download, or are they already
>> included in the kernel source?
>> If the latter, which config should I select?
> They are in the kernel source (in drivers/md); set CONFIG_DM_MULTIPATH
> to enable them.
> To try the reproducer in the thread, you also need the "dmsetup"
> command, which is part of the device-mapper utilities.
(Continue reading)
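A minimal sketch of how one might check for and load those device-mapper pieces, assuming the config symbol CONFIG_DM_MULTIPATH from drivers/md/Kconfig (which builds both dm-multipath and the dm-round-robin path selector); the /boot config path is illustrative and may differ on an embedded MIPS target:

```shell
# Check whether dm-multipath support is enabled in the running kernel's config:
grep 'CONFIG_DM_MULTIPATH' /boot/config-"$(uname -r)"

# If built as modules, load them:
modprobe dm-multipath
modprobe dm-round-robin

# dmsetup ships with the device-mapper userspace utilities; verify it works:
dmsetup version
dmsetup targets    # "multipath" should be listed once the module is loaded
```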

Shaohua Li | 1 Feb 08:03 2012

Re: [LSF/MM TOPIC][ATTEND]IOPS based ioscheduler

On Tue, 2012-01-31 at 13:12 -0500, Jeff Moyer wrote:
> Shaohua Li < <at>> writes:
> > Flash-based storage has its own characteristics. CFQ has some
> > optimizations for it, but not enough. The big problem is that CFQ
> > doesn't drive deep queue depths, which causes poor performance in some
> > workloads. CFQ also isn't quite fair for fast storage (or it further
> > sacrifices performance to get fairness) because it uses time-based
> > accounting. This isn't good for block cgroups. We need something
> > different to make both performance and fairness good.
> >
> > A recent attempt is an IOPS-based I/O scheduler for flash-based
> > storage. It is expected to drive deep queue depths (so, better
> > performance) and to be fairer (IOPS-based accounting instead of
> > time-based).
> >
> > I'd like to discuss:
> >  - Do we really need it? Or, put differently: do popular real
> > workloads drive deep I/O depths?
> >  - Should we have a separate I/O scheduler for this, or merge it
> > into CFQ?
> >  - Other implementation questions, such as differentiating
> > read/write requests and request sizes. Unlike rotating storage, on
> > flash the cost of a read vs. a write, and of different request
> > sizes, usually differs.
> I think you need to define a couple things to really gain traction.
> First, what is the target?  Flash storage comes in many varieties, from
> really poor performance to really, really fast.  Are you aiming to
> address all of them?  If so, then let's see some numbers that prove that
> you're basing your scheduling decisions on the right metrics for the
> target storage device types.
(Continue reading)
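For context, the scheduler choice and queue-depth knobs being debated are all visible through the block layer's sysfs interface; the device name below is illustrative:

```shell
# Inspect and switch the I/O scheduler for a device:
cat /sys/block/sda/queue/scheduler        # e.g. "noop deadline [cfq]"
echo deadline > /sys/block/sda/queue/scheduler

# Knobs relevant to the "deep queue depth" discussion:
cat /sys/block/sda/queue/nr_requests      # block-layer request pool size
cat /sys/block/sda/device/queue_depth     # depth driven to a SCSI device
```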

Pekka Enberg | 1 Feb 08:31 2012

Re: [PATCH v4 0/3] virtio-scsi driver

Hello Paolo,

On Mon, Jan 30, 2012 at 10:48 AM, Paolo Bonzini <pbonzini <at>> wrote:
> On 01/20/2012 05:45 PM, Paolo Bonzini wrote:
>> This is the first implementation of the virtio-scsi driver, a virtual
>> HBA that will be supported by KVM.  It implements a subset of the spec,
>> in particular it does not implement asynchronous notifications for either
>> LUN reset/removal/addition or CD-ROM media events, but it is already
>> functional and usable.
>> Other matching bits:
>> - spec at
>> - QEMU implementation at git://,
>>   branch virtio-scsi
>> Please review.  I would like this to be included in 3.3, since the
>> possibility of regressions is obviously zero.
>> Paolo Bonzini (3):
>>   virtio-scsi: first version
>>   virtio-scsi: add error handling
>>   virtio-scsi: add power management support
>>     fixed 32-bit compilation; added power management support;
>>     adjusted calls to virtqueue_add_buf
(Continue reading)
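For anyone wanting to try the series in a guest, a rough sketch (module name assumed from the patch titles; the host must expose a virtio-scsi device):

```shell
# Load the guest driver (if built as a module) and confirm it probed:
modprobe virtio_scsi
dmesg | grep -i virtio

# virtio-scsi LUNs then appear as ordinary SCSI disks:
ls /sys/class/scsi_host/
cat /proc/scsi/scsi
```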

Paolo Bonzini | 1 Feb 09:13 2012

Re: [PATCH v4 0/3] virtio-scsi driver

On 02/01/2012 08:31 AM, Pekka Enberg wrote:
> What's the benefit of virtio-scsi over virtio-blk?

Most of this is in the spec or the KVM Forum 2011 presentation.

1) scalability limitations: virtio-blk-over-PCI puts a strong upper
limit on the number of devices that can be added to a guest. Common
configurations have a limit of ~30 devices. While this can be worked
around by implementing a PCI-to-PCI bridge, or by using multifunction
virtio-blk devices, these solutions either have not been implemented
yet, or introduce management restrictions.

2) limited flexibility: virtio-blk does not support all possible storage
scenarios.  For example, persistent reservations require you to pass a
whole LUN to the guest, they do not work with images.  In principle,
virtio-scsi provides anything that the underlying SCSI target supports.
The SCSI target can also be the in-kernel LIO target, which can
talk to virtio-scsi via vhost.

3) limited extensibility: over time, many features have been added
to virtio-blk. Each such change requires modifications to the virtio
specification, to the guest drivers, and to the device model in the
host. The virtio-scsi spec has been written to follow SAM conventions,
and exposing new features to the guest will only require changes to the
host's SCSI target implementation.

> Are we going to support both or eventually phase out virtio-blk?

Certainly older guests will have no virtio-scsi support, so it's going
to stay with us for a long time.
(Continue reading)
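As an illustration of point 2 above, the two devices are wired up quite differently on the host side. This is a sketch against QEMU's -device syntax of the era and may not match every version; image and /dev/sg paths are hypothetical:

```shell
# virtio-blk: one PCI device (and one PCI slot) per disk image
qemu-system-x86_64 \
  -drive file=disk0.img,if=none,id=d0 \
  -device virtio-blk-pci,drive=d0

# virtio-scsi: one PCI device exposes a whole SCSI bus, so many LUNs fit
# behind one slot, and real SCSI devices can be passed through (which is
# what makes things like persistent reservations workable)
qemu-system-x86_64 \
  -device virtio-scsi-pci,id=scsi0 \
  -drive file=disk0.img,if=none,id=d0 \
  -device scsi-hd,bus=scsi0.0,drive=d0 \
  -drive file=/dev/sg1,if=none,id=pt0 \
  -device scsi-generic,bus=scsi0.0,drive=pt0
```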

Pekka Enberg | 1 Feb 09:18 2012

Re: [PATCH v4 0/3] virtio-scsi driver

On Wed, 1 Feb 2012, Paolo Bonzini wrote:
>> Have the virtio specification changes been reviewed? Can we guarantee
>> stable ABI for the virtio-scsi driver?
> Of course.  I would have proposed it for staging otherwise.

I'm not a SCSI expert but FWIW I went through the patches and I think 
they're definitely worthwhile having in mainline for v3.4:

Acked-by: Pekka Enberg <penberg <at>>

Paolo Bonzini | 1 Feb 09:21 2012

Re: [PATCH v4 0/3] virtio-scsi driver

On 02/01/2012 09:18 AM, Pekka Enberg wrote:
>> Of course.  I would have proposed it for staging otherwise.
> I'm not a SCSI expert but FWIW I went through the patches and I think
> they're definitely worthwhile having in mainline for v3.4:
> Acked-by: Pekka Enberg <penberg <at>>

Thanks Pekka!

Actually we had a conflict on the virtio id, so I'll have to respin with 
the id changed to 8.

Simon McNair | 1 Feb 10:01 2012

Re: mvsas with 3.1

Hi Thomas,
I've been trying to get an answer to this same question (I have a
supermicro AOC-SASLP-MV8 too).  See my post at  .

My only suggestion is to get a kernel that's pre-2.6.30 (my Thecus
5200 Marvell-based unit is on 2.6.13 and it seems okay).  If the data
is mission critical I certainly wouldn't be using the latest kernels
in production until they'd had 6+ months in a stress-test environment
(of course I'm an obsessive apt-get upgrader, which makes me a hypocrite).

I can see from searching for mvsas that there is a lot of
code going into this driver, and unless you're a Linux guru who can
fix these problems, I can only suggest you keep anything important on a
<= 2.6.13 kernel and monitor the situation.

Oh, and keep backups ;-) (again, here's me being the hypocrite).

That's my plan :-(


On 13 January 2012 18:44, Thomas Fjellstrom <thomas <at>> wrote:
> Is there any chance this driver will ever be stable? After more than two
> years I'm starting to get extremely frustrated. I don't really have the
> option at the moment to get a new card, otherwise I would, likely a
> non-Marvell-based device.
> It actually managed to last 10 days this time, though, which is a record. The
(Continue reading)

Hannes Reinecke | 1 Feb 10:27 2012


On 01/31/2012 08:02 PM, Chad Dupuis wrote:
> With more and more storage controller hardware supporting SR-IOV in the
> next couple of years, it seems to make sense at this point to discuss,
> from a storage stack perspective, how we instantiate and manage
> virtual functions (VFs).  Currently, for hardware that does support it,
> the management of that functionality is handled entirely by the hardware
> device driver.  However, more dynamic management of how VFs are created
> and destroyed (assuming the hardware supports it) may be desirable,
> since the most common use case, assigning VFs to virtual guests, also
> tends to be very dynamic and fluid.  We also need to consider how VFs
> interact with complementary technology such as NPIV in Fibre Channel.
> I'd like to propose that we discuss the following issues to see if a
> consensus can be reached about how to deal with them:
> * VF instantiation
> * VF/NPIV port pairing
> * Namespace management
> * LUN presentation
Actually, SR-IOV on FC should be easy to handle; it maps quite
easily on NPIV (which probably was the intention :-).

However, for SAS things get really tricky, as we don't have virtual
SAS IDs. Some vendor was actually planning on doing ACLs in firmware
(Continue reading)
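For the VF-instantiation point, the PCI core later gained a generic sysfs interface that does essentially what is being proposed here; a sketch, with a hypothetical PCI address:

```shell
# How many VFs this physical function supports:
cat /sys/bus/pci/devices/0000:03:00.0/sriov_totalvfs

# Instantiate 4 virtual functions (write 0 to destroy them again):
echo 4 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs

# The VFs then appear as ordinary PCI devices that can be handed to guests:
lspci | grep -i "virtual function"
```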

To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo <at>
More majordomo info at