James Smart [Fri, 3 Nov 2017 16:33:30 +0000 (09:33 -0700)]
lpfc: tie in to new dev_loss_tmo interface in nvme transport
This patch calls the new nvme transport routine for dev_loss_tmo
whenever the SCSI FC transport calls the lldd to make a dynamic
change to a remote port's dev_loss_tmo.
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
James Smart [Fri, 3 Nov 2017 15:13:17 +0000 (08:13 -0700)]
nvme-fc: decouple ns references from lldd references
In the lldd api, a lldd may unregister a remoteport (loss of connectivity
or driver unload) or a localport (driver unload). The lldd must wait for the
remoteport_delete or localport_delete callback before completing its
post-unregister actions. These xxx_delete callbacks currently occur only
when the xxxport structure is fully freed after all references are removed.
Thus the lldd may be held hostage until an app or in-kernel entity that has
a namespace open finally closes, so that the namespace can be removed, then
the controller, then the transport objects, and finally the lldd.
This patch decouples the transport and os-facing objects from the lldd
and the remoteport and localport. There is a point in all deletions
where the transport will no longer interact with the lldd on behalf of
a controller. That point centers around the association established
with the target/subsystem. It will access the lldd whenever it attempts
to create an association and while the association is active. New
associations may only be created if the remoteport is live (thus the
localport is live). It will not access the lldd after deleting the
association.
Therefore, the patch tracks the count of active controllers - those with
associations being created or that are active - on a remoteport. It also
tracks the number of remoteports that have active controllers on a
localport. When a remoteport is unregistered, as soon as there are no
active controllers, the lldd's remoteport_delete may be called and the
lldd may continue. Similarly, when a localport is unregistered, as soon
as there are no remoteports with active controllers, the localport_delete
callback may be made. This significantly speeds up unregistration with
the lldd.
The transport objects continue in suspended status with reconnect timers
running, and upon expiration, normal ref-counting will occur and the
objects will be freed. The transport object may still be held hostage
by the application/kernel module, but that is acceptable.
With this change, the lldd may be fully unloaded and reloaded, and
if registrations occur prior to the timeouts, the nvme controller and
namespaces will resume normally, as if a link bounce had occurred.
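A rough sketch of the counting described above (field names per the nvme-fc
lldd api of that era; treat the details as illustrative, not as the exact
in-tree code):

    /* Sketch: dropping the last active controller on an unregistered
     * remoteport lets the lldd's remoteport_delete fire immediately,
     * instead of waiting for all transport references to drain. */
    if (atomic_dec_and_test(&rport->act_ctrl_cnt) &&
        rport->remoteport.port_state == FC_OBJSTATE_DELETED)
            rport->lport->ops->remoteport_delete(&rport->remoteport);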
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
James Smart [Fri, 3 Nov 2017 15:13:16 +0000 (08:13 -0700)]
nvme-fc: fix localport resume using stale values
The localport resume was not updating the lldd ops structure. If the
lldd is unloaded and reloaded, the ops pointers will differ.
Additionally, as there are device references taken by the localport,
ensure that resume only resumes if the device matches as well.
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Keith Busch [Tue, 7 Nov 2017 17:28:32 +0000 (10:28 -0700)]
nvme: check admin passthru command effects
The NVMe standard provides a command effects log page so the host can
be aware of special actions it may need to take for a particular
command. For example, the command may need to run with IO quiesced to
prevent timeouts or undefined behavior, or it may change the logical block
formats that determine how the host needs to construct future commands.
This patch saves the nvme command effects log page if the controller
supports it, and performs the appropriate actions before and after an admin
passthrough command is completed. If the controller does not support the
command effects log page, the driver will define the effects for known
opcodes. The nvme format and sanitize commands are the only ones in this
patch with known effects.
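As a self-contained illustration of the decision (the macro names and the
fallback table here are invented; the real driver uses the spec's
effects-log bit definitions):

    /* Sketch: derive effects either from the saved log page or, absent
     * controller support, from a built-in table of known opcodes. */
    #define EFF_LBCC (1u << 1)    /* may change logical block content/format */
    #define EFF_CSE  (1u << 16)   /* wants quiesced/exclusive execution      */

    static unsigned int effects_for_opcode(const unsigned int *effects_log,
                                           unsigned char opcode)
    {
            if (effects_log)                /* controller reported a log page */
                    return effects_log[opcode];
            switch (opcode) {               /* driver-defined known effects   */
            case 0x80:                      /* Format NVM */
            case 0x84:                      /* Sanitize   */
                    return EFF_LBCC | EFF_CSE;
            default:
                    return 0;
            }
    }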
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Keith Busch [Tue, 7 Nov 2017 17:28:31 +0000 (10:28 -0700)]
nvme: factor get log into a helper
And fix the warning on a successful firmware log.
Reviewed-by: Javier González <javier@cnexlabs.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Tue, 7 Nov 2017 16:27:34 +0000 (17:27 +0100)]
nvme: fix and clarify the check for missing metadata
Update the check in nvme_setup_rw for missing metadata so that it sits
together with the other metadata handling, does not contain impossible
to reach conditions, and warns if we get an impossible request for
a (non-PI) metadata-enabled namespace when CONFIG_BLK_DEV_INTEGRITY
is not set.
Also add a little helper that checks if a given metadata configuration
contains protection information.
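A sketch of such a helper (field names as used by struct nvme_ns in this
era's driver; details may differ from the in-tree version):

    static inline bool nvme_ns_has_pi(struct nvme_ns *ns)
    {
            /* PI is only valid when the metadata is exactly one T10 PI tuple */
            return ns->pi_type && ns->ms == sizeof(struct t10_pi_tuple);
    }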
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Javier González <jg@lightnvm.io>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 2 Nov 2017 18:28:56 +0000 (21:28 +0300)]
nvme: split __nvme_revalidate_disk
Split out the code that applies the calculated values to a given disk/queue
into a new helper that can be reused by the multipath code.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 2 Nov 2017 18:28:55 +0000 (21:28 +0300)]
nvme: set the chunk size before freezing the queue
We don't need a frozen queue to update the chunk_size, which is just a
hint, and moving it a little earlier will allow for some better code
reuse with the multipath code.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 2 Nov 2017 18:28:54 +0000 (21:28 +0300)]
nvme: don't pass struct nvme_ns to nvme_config_discard
To allow reusing this function for the multipath node.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 2 Nov 2017 18:28:53 +0000 (21:28 +0300)]
nvme: don't pass struct nvme_ns to nvme_init_integrity
To allow reusing this function for the multipath node.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 2 Nov 2017 18:28:52 +0000 (21:28 +0300)]
nvme: always unregister the integrity profile in __nvme_revalidate_disk
This is safe because the queue is always frozen when we revalidate, and
it simplifies both the existing code as well as the multipath
implementation.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 2 Nov 2017 18:28:51 +0000 (21:28 +0300)]
nvme: move the dying queue check from cancel to completion
With multipath we don't want a hard DNR bit on a request that is cancelled
by a controller reset, but instead want to be able to retry it on another
path. To achieve this, don't always set the DNR bit when the queue is
dying in nvme_cancel_request, but defer that decision to
nvme_req_needs_retry. Note that it then applies to any command there and not
just cancelled commands, but once the queue is dying that is the right
thing to do anyway.
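The deferred decision then looks roughly like this (simplified sketch; the
in-tree nvme_req_needs_retry() may differ in detail):

    static bool nvme_req_needs_retry(struct request *req)
    {
            if (blk_noretry_request(req))
                    return false;
            if (nvme_req(req)->status & NVME_SC_DNR)
                    return false;
            if (nvme_req(req)->retries >= nvme_max_retries)
                    return false;
            if (blk_queue_dying(req->q))    /* moved here from nvme_cancel_request */
                    return false;
            return true;
    }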
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Sun, 5 Nov 2017 16:16:09 +0000 (09:16 -0700)]
blktrace: fix unlocked registration of tracepoints
We need to ensure that tracepoints are registered and unregistered
with the users of them. The existing atomic count isn't enough for
that. Add a lock around the tracepoints, so we serialize access
to them.
This fixes cases where we have multiple users setting up and
tearing down tracepoints, like this:
CPU: 0 PID: 2995 Comm: syzkaller857118 Not tainted 4.14.0-rc5-next-20171018+ #36
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:16 [inline]
dump_stack+0x194/0x257 lib/dump_stack.c:52
panic+0x1e4/0x41c kernel/panic.c:183
__warn+0x1c4/0x1e0 kernel/panic.c:546
report_bug+0x211/0x2d0 lib/bug.c:183
fixup_bug+0x40/0x90 arch/x86/kernel/traps.c:177
do_trap_no_signal arch/x86/kernel/traps.c:211 [inline]
do_trap+0x260/0x390 arch/x86/kernel/traps.c:260
do_error_trap+0x120/0x390 arch/x86/kernel/traps.c:297
do_invalid_op+0x1b/0x20 arch/x86/kernel/traps.c:310
invalid_op+0x18/0x20 arch/x86/entry/entry_64.S:905
RIP: 0010:tracepoint_add_func kernel/tracepoint.c:210 [inline]
RIP: 0010:tracepoint_probe_register_prio+0x397/0x9a0 kernel/tracepoint.c:283
RSP: 0018:ffff8801d1d1f6c0 EFLAGS: 00010293
RAX: ffff8801d22e8540 RBX: 00000000ffffffef RCX: ffffffff81710f07
RDX: 0000000000000000 RSI: ffffffff85b679c0 RDI: ffff8801d5f19818
RBP: ffff8801d1d1f7c8 R08: ffffffff81710c10 R09: 0000000000000004
R10: ffff8801d1d1f6b0 R11: 0000000000000003 R12: ffffffff817597f0
R13: 0000000000000000 R14: 00000000ffffffff R15: ffff8801d1d1f7a0
tracepoint_probe_register+0x2a/0x40 kernel/tracepoint.c:304
register_trace_block_rq_insert include/trace/events/block.h:191 [inline]
blk_register_tracepoints+0x1e/0x2f0 kernel/trace/blktrace.c:1043
do_blk_trace_setup+0xa10/0xcf0 kernel/trace/blktrace.c:542
blk_trace_setup+0xbd/0x180 kernel/trace/blktrace.c:564
sg_ioctl+0xc71/0x2d90 drivers/scsi/sg.c:1089
vfs_ioctl fs/ioctl.c:45 [inline]
do_vfs_ioctl+0x1b1/0x1520 fs/ioctl.c:685
SYSC_ioctl fs/ioctl.c:700 [inline]
SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691
entry_SYSCALL_64_fastpath+0x1f/0xbe
RIP: 0033:0x444339
RSP: 002b:00007ffe05bb5b18 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00000000006d66c0 RCX: 0000000000444339
RDX: 000000002084cf90 RSI: 00000000c0481273 RDI: 0000000000000009
RBP: 0000000000000082 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000206 R12: ffffffffffffffff
R13: 00000000c0481273 R14: 0000000000000000 R15: 0000000000000000
since we can now run these in parallel. Ensure that the exported helpers
for doing this are grabbing the queue trace mutex.
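The serialization boils down to a pattern like this (a sketch; blktrace
keeps its own mutex and registered-users count):

    static DEFINE_MUTEX(blk_probe_mutex);
    static int blk_probes_ref;

    static void get_probe_ref(void)
    {
            mutex_lock(&blk_probe_mutex);
            if (++blk_probes_ref == 1)
                    blk_register_tracepoints();     /* first user registers */
            mutex_unlock(&blk_probe_mutex);
    }

    static void put_probe_ref(void)
    {
            mutex_lock(&blk_probe_mutex);
            if (!--blk_probes_ref)
                    blk_unregister_tracepoints();   /* last user unregisters */
            mutex_unlock(&blk_probe_mutex);
    }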
Reported-by: Steven Rostedt <rostedt@goodmis.org>
Tested-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Sun, 5 Nov 2017 16:13:48 +0000 (09:13 -0700)]
blktrace: fix unlocked access to init/start-stop/teardown
sg.c calls into the blktrace functions without holding the proper queue
mutex for doing setup, start/stop, or teardown.
Add internal unlocked variants, and export the ones that do the proper
locking.
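The split follows the usual locked/__unlocked convention; a sketch of the
exported wrapper, assuming the queue's blk_trace_mutex:

    int blk_trace_setup(struct request_queue *q, char *name, dev_t dev,
                        struct block_device *bdev, char __user *arg)
    {
            int ret;

            mutex_lock(&q->blk_trace_mutex);
            ret = __blk_trace_setup(q, name, dev, bdev, arg);  /* unlocked variant */
            mutex_unlock(&q->blk_trace_mutex);
            return ret;
    }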
Fixes: 6da127ad0918 ("blktrace: Add blktrace ioctls to SCSI generic devices")
Tested-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Minwoo Im [Tue, 7 Nov 2017 16:25:37 +0000 (09:25 -0700)]
null_blk: add usage documentation for shared tags
Add a usage explanation for shared_tags, introduced by commit:
82f402fefa50 ("null_blk: add support for shared tags")
Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
Reworded slightly.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Minwoo Im [Tue, 7 Nov 2017 13:23:01 +0000 (22:23 +0900)]
null_blk: fix default values in documentation
null_blk.c initializes:
(1) nr_devices to 1.
(2) completion_nsec to 10,000ns, not 10.000ns.
The documentation should be updated to reflect these defaults.
Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Josef Bacik [Mon, 6 Nov 2017 21:11:58 +0000 (16:11 -0500)]
nbd: don't start req until after the dead connection logic
We can end up sleeping for a while waiting for the dead timeout, which
means we could get the per-request timer to fire. We did handle this
case, but if the dead timeout happened right after we submitted, we'd
either tear down the connection or possibly requeue while handling an
error, racing with the endio, which can lead to panics and other
hilarity.
Fixes: 560bc4b39952 ("nbd: handle dead connections")
Cc: stable@vger.kernel.org
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Josef Bacik [Mon, 6 Nov 2017 21:11:57 +0000 (16:11 -0500)]
nbd: wait uninterruptible for the dead timeout
If we have a pending signal or the user kills their application then
it'll bring down the whole device, which is less than awesome. Instead
wait uninterruptible for the dead timeout so we're sure we gave it our
best shot.
Fixes: 560bc4b39952 ("nbd: handle dead connections")
Cc: stable@vger.kernel.org
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Thu, 2 Nov 2017 15:24:38 +0000 (23:24 +0800)]
blk-mq: don't allocate driver tag upfront for flush rq
The idea behind it is simple:
1) for the none scheduler, the driver tag has to be borrowed for the flush
rq, otherwise we may run out of tags and cause an IO hang. And
get/put driver tag is actually a noop for none, so reordering tags
isn't necessary at all.
2) for a real I/O scheduler, we need not allocate a driver tag upfront
for the flush rq. It works just fine to follow the same approach as
normal requests: allocate the driver tag for each rq just before calling
->queue_rq().
One driver visible change is that the driver tag isn't shared in the
flush request sequence. That won't be a problem, since we always do that
in the legacy path.
The flush rq then need not be treated specially wrt. get/put driver tag.
This cleans up the code - for instance, reorder_tags_to_front() can be
removed, and we needn't worry about request ordering in the dispatch list
for avoiding I/O deadlock.
Also we have to put the driver tag before requeueing.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Sat, 4 Nov 2017 18:39:57 +0000 (12:39 -0600)]
blk-mq: move blk_mq_put_driver_tag*() into blk-mq.h
We need this helper to put the driver tag for the flush rq, since the
following patch will stop sharing the tag in the flush request sequence
when an I/O scheduler is applied.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Thu, 2 Nov 2017 15:24:36 +0000 (23:24 +0800)]
blk-mq-sched: decide how to handle flush rq via RQF_FLUSH_SEQ
When an I/O scheduler is in use, we always pre-allocate one driver tag
before calling blk_insert_flush(), and the flush request is marked
RQF_FLUSH_SEQ once it is in the flush machinery.
So if RQF_FLUSH_SEQ isn't set, we call blk_insert_flush() to handle
the request, otherwise the flush request is dispatched to the ->dispatch
list directly.
This is a preparation patch for not preallocating a driver tag for flush
requests, and for not treating flush requests as a special case. This is
similar to what the legacy path does.
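A simplified sketch of the insert-time decision (the function name is
invented; compare blk-mq-sched's insert path):

    static void sched_insert_flush_rq(struct blk_mq_hw_ctx *hctx,
                                      struct request *rq)
    {
            if (!(rq->rq_flags & RQF_FLUSH_SEQ)) {
                    blk_insert_flush(rq);   /* let the flush machinery sequence it */
                    return;
            }
            /* already sequenced by the flush machinery: straight to ->dispatch */
            spin_lock(&hctx->lock);
            list_add(&rq->queuelist, &hctx->dispatch);
            spin_unlock(&hctx->lock);
    }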
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Thu, 2 Nov 2017 15:24:35 +0000 (23:24 +0800)]
blk-flush: use blk_mq_request_bypass_insert()
In the following patch, we will use RQF_FLUSH_SEQ to decide:
1) if the flag isn't set, the flush rq need to be inserted via
blk_insert_flush()
2) otherwise, the flush rq need to be dispatched directly since
it is in flush machinery now.
So we use blk_mq_request_bypass_insert() for requests of bypassing
flush machinery, just like the legacy path did.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Thu, 2 Nov 2017 15:24:34 +0000 (23:24 +0800)]
block: pass 'run_queue' to blk_mq_request_bypass_insert
The block flush code needs this function without running the queue, so add
a parameter controlling whether we run it or not.
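The reworked signature, per the description:

    void blk_mq_request_bypass_insert(struct request *rq, bool run_queue);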
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Thu, 2 Nov 2017 15:24:33 +0000 (23:24 +0800)]
blk-flush: don't run queue for requests bypassing flush
blk_insert_flush() should only insert the request, since a queue run
always follows it.
In the case of bypassing flush, we don't need to run the queue because
every blk_insert_flush() is followed by one queue run.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jianchao Wang [Thu, 2 Nov 2017 15:24:32 +0000 (23:24 +0800)]
blk-mq: put the driver tag of nxt rq before first one is requeued
When freeing the driver tag of the next rq with an I/O scheduler
configured, we get the first entry of the list. However, this can
race with requeue of a request, and we end up getting the wrong request
from the head of the list. Free the driver tag of next rq before the
failed one is requeued in the failure branch of queue_rq callback.
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
weiping zhang [Tue, 17 Oct 2017 15:56:21 +0000 (23:56 +0800)]
blkcg: add sanity check for blkcg policy operations
A blkcg policy should keep cpd/pd's alloc_fn and free_fn in pairs,
otherwise policy registration will fail.
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: weiping zhang <zhangweiping@didichuxing.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Sat, 4 Nov 2017 18:21:12 +0000 (02:21 +0800)]
blk-mq: don't handle failure in .get_budget
It is enough to just check if we can get the budget via .get_budget(),
and we don't need to deal with device state changes in .get_budget().
For SCSI, one issue to be fixed is that we have to call
scsi_mq_uninit_cmd() to free allocated resources if the SCSI device fails
to handle the request. And it isn't enough to simply call
blk_mq_end_request() to do that if this request is marked as
RQF_DONTPREP.
Fixes: 0df21c86bdbf ("scsi: implement .get_budget and .put_budget for blk-mq")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Sat, 4 Nov 2017 01:55:34 +0000 (09:55 +0800)]
SCSI: don't get target/host busy_count in scsi_mq_get_budget()
It is very expensive to atomic_inc/atomic_dec the host wide counter of
host->busy_count, and it should have been avoided via blk-mq's mechanism
of getting a driver tag, which uses the more efficient sbitmap queue.
Also we don't check atomic_read(&sdev->device_busy) in scsi_mq_get_budget()
and don't run the queue if the counter becomes zero, so an IO hang may be
caused if all requests are completed just before the current SCSI device
is added to shost->starved_list.
Fixes: 0df21c86bdbf ("scsi: implement .get_budget and .put_budget for blk-mq")
Reported-by: Bart Van Assche <bart.vanassche@wdc.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Fri, 20 Oct 2017 14:45:23 +0000 (16:45 +0200)]
block: fix peeking requests during PM
We need to look for an active PM request until the next softbarrier
instead of looking for the first non-PM request. Otherwise any cause
of request reordering might starve the PM request(s).
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Bart Van Assche [Mon, 16 Oct 2017 23:32:26 +0000 (16:32 -0700)]
blk-mq: Make blk_mq_get_request() error path less confusing
blk_mq_get_tag() can modify data->ctx. This means that in the
error path of blk_mq_get_request(), data->ctx should be passed to
blk_mq_put_ctx() instead of local_ctx. Note: since blk_mq_put_ctx()
ignores its argument, this patch does not change any functionality.
References: commit 1ad43c0078b7 ("blk-mq: don't leak preempt counter/q_usage_counter when allocating rq failed")
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
weiping zhang [Fri, 22 Sep 2017 15:36:28 +0000 (23:36 +0800)]
blk-mq: fix wrong nr_requests value when modifying it from sysfs
If blk-mq uses the "none" io scheduler, nr_requests gets a wrong value when
a number > tag_set->queue_depth is written. blk_mq_tag_update_depth will
use the smaller one, min(nr, set->queue_depth), but q->nr_requests then
holds the wrong (unclamped) value.
Reproduce:
echo none > /sys/block/nvme0n1/queue/scheduler
echo 1000000 > /sys/block/nvme0n1/queue/nr_requests
cat /sys/block/nvme0n1/queue/nr_requests
1000000
Signed-off-by: weiping zhang <zhangweiping@didichuxing.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Fri, 3 Nov 2017 17:00:03 +0000 (11:00 -0600)]
cdrom: hide CONFIG_CDROM menu selection
We don't need to expose this. The point is that drivers select
the uniform CDROM layer if they need it; the user should not
have to make a conscious decision on whether to include this
separately or not.
Fixes: 2a750166a5be ("block: Rework drivers/cdrom/Makefile")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 2 Nov 2017 18:29:54 +0000 (21:29 +0300)]
block: add a poll_fn callback to struct request_queue
That way we can also poll non-blk-mq queues. Mostly needed for
the NVMe multipath code, but it could also be useful elsewhere.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 2 Nov 2017 18:29:53 +0000 (21:29 +0300)]
block: introduce GENHD_FL_HIDDEN
With this flag a driver can create a gendisk that can be used for I/O
submission inside the kernel, but which is not registered as a user
facing block device. This will be useful for the NVMe multipath
implementation.
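Hypothetical driver usage (a sketch): the disk can service bios submitted
inside the kernel but never shows up as a /dev node:

    disk->flags |= GENHD_FL_HIDDEN;   /* no user facing block device */
    add_disk(disk);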
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 2 Nov 2017 18:29:52 +0000 (21:29 +0300)]
block: don't look at the struct device dev_t in disk_devt
The hidden gendisks introduced in the next patch need to keep the dev
field in their struct device empty so that udev won't try to create
block device nodes for them. To support that, rewrite disk_devt to
look at the major and first_minor fields in the gendisk itself instead
of looking into the struct device.
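The rewritten helper, per the description above (sketch):

    static inline dev_t disk_devt(struct gendisk *disk)
    {
            return MKDEV(disk->major, disk->first_minor);
    }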
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 2 Nov 2017 18:29:51 +0000 (21:29 +0300)]
block: add a blk_steal_bios helper
This helper allows stealing the uncompleted bios from a request so
that they can be reissued on another path.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 2 Nov 2017 18:29:50 +0000 (21:29 +0300)]
block: provide a direct_make_request helper
This helper allows reinserting a bio into a new queue without much
overhead, but requires all queue limits to be the same for the upper
and lower queues, and it does not provide any recursion preventions.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Javier González <javier@cnexlabs.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 2 Nov 2017 18:29:49 +0000 (21:29 +0300)]
block: add REQ_DRV bit
Set aside a bit in the request/bio flags for driver use.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 2 Nov 2017 18:29:48 +0000 (21:29 +0300)]
block: move REQ_NOWAIT
This flag should be before the operation-specific REQ_NOUNMAP bit.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Fri, 3 Nov 2017 16:28:51 +0000 (10:28 -0600)]
Merge branch 'nvme-4.15' of git://git.infradead.org/nvme into for-4.15/block
Pull NVMe changes from Christoph:
"Below are the currently queued nvme updates for Linux 4.15. There are
a few more things that could make it for this merge window, but I'd
like to get things into linux-next, especially for the unlikely case
that Linus decides to cut an -rc8.
Highlights:
- support for SGLs in the PCIe driver (Chaitanya Kulkarni)
- disable I/O schedulers for the admin queue (Israel Rukshin)
- various Fibre Channel fixes and enhancements (James Smart)
- various refactoring for better code sharing between transports
(Sagi Grimberg and me)
as well as lots of little bits from various contributors."
Minwoo Im [Thu, 2 Nov 2017 09:07:44 +0000 (18:07 +0900)]
nvme: comment typo fixed in clearing AER
Fix a tiny typo in a comment.
Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Arnd Bergmann [Thu, 2 Nov 2017 11:42:00 +0000 (12:42 +0100)]
skd: use ktime_get_real_seconds()
Like many storage drivers, skd uses an unsigned 32-bit number for
interchanging the current time with the firmware. This will overflow in
y2106 and is otherwise safe.
However, the get_seconds() function is generally considered deprecated
since the behavior is different between 32-bit and 64-bit architectures,
and using it may indicate a bigger problem.
To annotate that we've thought about this, let's add a comment here
and migrate to the ktime_get_real_seconds() function that consistently
returns a 64-bit number.
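The conversion pattern (variable name illustrative):

    /* get_seconds() returns unsigned long; ktime_get_real_seconds() always
     * returns a 64-bit time64_t. The explicit cast documents that the
     * interchange format with the firmware stays 32-bit and wraps in y2106. */
    u32 fw_time = (u32)ktime_get_real_seconds();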
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Arnd Bergmann [Thu, 2 Nov 2017 11:19:32 +0000 (12:19 +0100)]
block: fix CDROM dependency on BLK_DEV
After the cdrom cleanup, I get randconfig warnings for some configurations:
warning: (BLK_DEV_IDECD && BLK_DEV_SR) selects CDROM which has unmet direct dependencies (BLK_DEV)
This adds an explicit BLK_DEV dependency for both drivers. The other
drivers that select 'CDROM' already have this and don't need a change.
Fixes: 2a750166a5be ("block: Rework drivers/cdrom/Makefile")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Keith Busch [Fri, 20 Oct 2017 22:18:03 +0000 (16:18 -0600)]
nvme: Remove unused headers
Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
James Smart [Fri, 27 Oct 2017 20:12:49 +0000 (13:12 -0700)]
nvmet: fix fatal_err_work deadlock
Below is a stack trace for an issue that was reported.
What's happening is that the nvmet layer had its controller kato
timeout fire, which causes it to schedule its fatal error handler
via the fatal_err_work element. The error handler is invoked, which
calls the transport delete_ctrl() entry point, and as the transport
tears down the controller, nvmet_sq_destroy ends up doing the final
put on the ctrl, causing it to enter its free routine. The ctrl free
routine does a cancel_work_sync() on the fatal_err_work element, which
then does a flush_work and wait_for_completion. But, as the wait is
in the context of the work element being flushed, it's in a catch-22
and the thread hangs.
[ 326.903131] nvmet: ctrl 1 keep-alive timer (15 seconds) expired!
[ 326.909832] nvmet: ctrl 1 fatal error occurred!
[ 327.643100] lpfc 0000:04:00.0: 0:6313 NVMET Defer ctx release xri
x114 flg x2
[ 494.582064] INFO: task kworker/0:2:243 blocked for more than 120
seconds.
[ 494.589638] Not tainted 4.14.0-rc1.James+ #1
[ 494.594986] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[ 494.603718] kworker/0:2 D 0 243 2 0x80000000
[ 494.609839] Workqueue: events nvmet_fatal_error_handler [nvmet]
[ 494.616447] Call Trace:
[ 494.619177] __schedule+0x28d/0x890
[ 494.623070] schedule+0x36/0x80
[ 494.626571] schedule_timeout+0x1dd/0x300
[ 494.631044] ? dequeue_task_fair+0x592/0x840
[ 494.635810] ? pick_next_task_fair+0x23b/0x5c0
[ 494.640756] wait_for_completion+0x121/0x180
[ 494.645521] ? wake_up_q+0x80/0x80
[ 494.649315] flush_work+0x11d/0x1a0
[ 494.653206] ? wake_up_worker+0x30/0x30
[ 494.657484] __cancel_work_timer+0x10b/0x190
[ 494.662249] cancel_work_sync+0x10/0x20
[ 494.666525] nvmet_ctrl_put+0xa3/0x100 [nvmet]
[ 494.676540] nvmet_sq_destroy+0x64/0xd0 [nvmet]
[ 494.676540] nvmet_fc_delete_target_queue+0x202/0x220 [nvmet_fc]
[ 494.683245] nvmet_fc_delete_target_assoc+0x6d/0xc0 [nvmet_fc]
[ 494.689743] nvmet_fc_delete_ctrl+0x137/0x1a0 [nvmet_fc]
[ 494.695673] nvmet_fatal_error_handler+0x30/0x40 [nvmet]
[ 494.701589] process_one_work+0x149/0x360
[ 494.706064] worker_thread+0x4d/0x3c0
[ 494.710148] kthread+0x109/0x140
[ 494.713751] ? rescuer_thread+0x380/0x380
[ 494.718214] ? kthread_park+0x60/0x60
Correct this by having the fc transport switch to a different workqueue
context for the actual controller teardown, which may call cancel_work_sync.
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
James Smart [Wed, 25 Oct 2017 23:43:17 +0000 (16:43 -0700)]
nvme-fc: add dev_loss_tmo timeout and remoteport resume support
When a remoteport is unregistered (connectivity lost), the following
actions are taken:
- the remoteport is marked DELETED
- the time when dev_loss_tmo would expire is set in the remoteport
- all controllers on the remoteport are reset.
After a controller resets, it will stall in a RECONNECTING state waiting
for one of the following:
- the controller will continue to attempt reconnect per max_retries and
reconnect_delay. As there is no remoteport connectivity, the reconnect
attempt will immediately fail. If max reconnects has not been reached, a
new reconnect_delay timer will be scheduled. If the current time plus
another reconnect_delay exceeds when dev_loss_tmo expires on the remote
port, then the reconnect_delay will be shortened to schedule no later
than when dev_loss_tmo expires (a sketch of this capping follows the
list). If max reconnect attempts are reached (e.g. ctrl_loss_tmo reached)
or dev_loss_tmo is exceeded without connectivity, the controller is
deleted.
- the remoteport is re-registered prior to dev_loss_tmo expiring.
The resume of the remoteport will immediately attempt to reconnect
each of its suspended controllers.
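A sketch of the delay capping mentioned in the first bullet (names
illustrative; the transport's own bookkeeping differs in detail):

    unsigned long delay = ctrl->reconnect_delay * HZ;

    /* never schedule a reconnect attempt past the point where
     * dev_loss_tmo would delete the controller anyway */
    if (time_after(jiffies + delay, rport->dev_loss_end))
            delay = rport->dev_loss_end - jiffies;
    queue_delayed_work(nvme_wq, &ctrl->connect_work, delay);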
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
[hch: updated to use nvme_delete_ctrl]
Signed-off-by: Christoph Hellwig <hch@lst.de>
James Smart [Wed, 25 Oct 2017 23:43:13 +0000 (16:43 -0700)]
nvme: allow controller RESETTING to RECONNECTING transition
Transport will typically transition from LIVE to RESETTING when initially
performing a reset or recovering from an error. Adding this transition
allows a transport to transition to RECONNECTING when it checks/waits for
connectivity then creates new transport connections and reinits the
controller.
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
James Smart [Wed, 25 Oct 2017 23:43:16 +0000 (16:43 -0700)]
nvme-fc: check connectivity before initiating reconnects
Check remoteport connectivity before initiating reconnects
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
James Smart [Wed, 25 Oct 2017 23:43:15 +0000 (16:43 -0700)]
nvme-fc: add a dev_loss_tmo field to the remoteport
Add a dev_loss_tmo value, paralleling the SCSI FC transport, for device
connectivity loss.
The transport initializes the value in the nvme_fc_register_remoteport()
call. If the value is not set, a default of 60s is set.
Add a new routine to the api, nvme_fc_set_remoteport_devloss(), which
allows the lldd to dynamically update the value on an existing
remoteport.
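Hypothetical lldd usage of the new api call (sketch):

    /* after the fc transport reports a changed port dev_loss_tmo: */
    if (nvme_fc_set_remoteport_devloss(remoteport, new_devloss_secs))
            pr_warn("devloss update rejected\n");  /* e.g. port already deleted */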
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
James Smart [Wed, 25 Oct 2017 23:43:14 +0000 (16:43 -0700)]
nvme-fc: change ctlr state assignments during reset/reconnect
Clean up some of the controller state checks and add the
RESETTING->RECONNECTING state transition.
Specifically:
- the movement of the RESETTING state change and schedule of reset_work
to core doesn't work with nvme_fc_error_recovery setting state to
RECONNECTING before attempting to reset. Remove the state change, as
the reset request does it.
- In the rare cases where an error occurs right as we're transitioning
to LIVE, defer the controller start actions.
- In error handling on teardown of associations while performing initial
controller creation - avoid quiesce calls on the admin_q. They are
unneeded.
- Add the RESETTING->RECONNECTING transition in the reset handler.
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Sagi Grimberg [Sun, 29 Oct 2017 12:21:02 +0000 (14:21 +0200)]
nvme: flush reset_work before safely continuing with delete operation
Prevent racing controller reset and delete flows. reset_work must not
ever self-requeue so flushing it suffices.
Reported-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Sagi Grimberg [Sun, 29 Oct 2017 12:21:01 +0000 (14:21 +0200)]
nvme-rdma: reuse nvme_delete_ctrl when reconnect attempts expire
instead of just queueing delete work
Reported-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Christoph Hellwig [Sun, 29 Oct 2017 08:44:31 +0000 (10:44 +0200)]
nvme: consolidate common code from ->reset_work
No change in behavior except that the FC code cancels two work items a
little later now.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Christoph Hellwig [Sun, 29 Oct 2017 08:44:30 +0000 (10:44 +0200)]
nvme-rdma: remove nvme_rdma_remove_ctrl
It is only used in two places, and some of the work done by it will
be taken into common code soon.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Christoph Hellwig [Sun, 29 Oct 2017 08:44:29 +0000 (10:44 +0200)]
nvme: move controller deletion to common code
Move the ->delete_work and the associated helpers to common code instead
of duplicating them in every driver. This also adds the missing reference
get/put for the loop driver.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Christoph Hellwig [Sun, 29 Oct 2017 08:44:28 +0000 (10:44 +0200)]
nvme-fc: merge __nvme_fc_schedule_delete_work into __nvme_fc_del_ctrl
No need to have two functions doing the same thing.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
James Smart [Thu, 28 Sep 2017 16:56:31 +0000 (09:56 -0700)]
nvme-fc: avoid workqueue flush stalls
There's no need to wait for the full nvme_wq, which is now shared,
to flush. Flush only the delete_work item.
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Bart Van Assche [Mon, 30 Oct 2017 16:02:19 +0000 (09:02 -0700)]
block: Rework drivers/cdrom/Makefile
Instead of referring from inside drivers/cdrom/Makefile to all the
drivers that use this driver, let these drivers select the cdrom
driver. This change makes the cdrom build code follow the approach
that is used for most other drivers, namely refer from the higher
layers to the lower layer instead of from the lower layer to the
higher layers.
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Fri, 27 Oct 2017 04:43:30 +0000 (12:43 +0800)]
blk-mq: don't restart queue when .get_budget returns BLK_STS_RESOURCE
SCSI restarts its queue in scsi_end_request() automatically, so we don't
need to handle this case in blk-mq.
Especailly any request won't be dequeued in this case, we needn't to
worry about IO hang caused by restart vs. dispatch.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Fri, 27 Oct 2017 04:43:29 +0000 (12:43 +0800)]
blk-mq: don't handle TAG_SHARED in restart
Now restart is used in the following cases, and TAG_SHARED is for
SCSI only.
1) .get_budget() returns BLK_STS_RESOURCE
- if a resource at the target/host level isn't satisfied, this SCSI device
will be added to shost->starved_list, and the whole queue will be rerun
(via SCSI's built-in RESTART) in scsi_end_request() after any request
initiated from this host/target is completed. Note that host level
resources can't be an issue for blk-mq at all.
- the same is true if a resource at the queue level isn't satisfied.
- if there is no outstanding request on this queue, then SCSI's RESTART
can't work (blk-mq's can't work either), the queue will be run after
SCSI_QUEUE_DELAY, and finally all starved sdevs will be handled by SCSI's
RESTART when this request is finished.
2) scsi_dispatch_cmd() returns BLK_STS_RESOURCE
- if there is no in-progress request on this queue, the queue
will be run after SCSI_QUEUE_DELAY
- otherwise, SCSI's RESTART covers the rerun.
3) blk_mq_get_driver_tag() failed
- BLK_MQ_S_TAG_WAITING covers the cross-queue RESTART for driver tag
allocation.
In one word, SCSI's built-in RESTART is enough to cover the queue rerun,
and we don't need to pay special attention to TAG_SHARED wrt. restart.
In my test on scsi_debug (8 luns), this patch improves IOPS by 20% ~ 30%
when running I/O on these 8 luns concurrently.
Also, Roman Pen reported that the current RESTART is very expensive,
especially when there are lots of LUNs attached to one host; in his
test, RESTART caused half of the IOPS to be cut.
Fixes: https://marc.info/?l=linux-kernel&m=150832216727524&w=2
Fixes: 6d8c6c0f97ad ("blk-mq: Restart a single queue if tag sets are shared")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Sat, 14 Oct 2017 09:22:32 +0000 (17:22 +0800)]
scsi: implement .get_budget and .put_budget for blk-mq
We need to tell blk-mq to reserve resources before queuing one request,
so implement these two callbacks. Then blk-mq can avoid dequeuing
requests too early, and IO merging can be improved a lot.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Sat, 14 Oct 2017 09:22:31 +0000 (17:22 +0800)]
scsi: allow passing in null rq to scsi_prep_state_check()
In the following patch, we will implement scsi_get_budget(),
which needs to call scsi_prep_state_check() when the rq isn't
dequeued yet.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Sat, 14 Oct 2017 09:22:30 +0000 (17:22 +0800)]
blk-mq-sched: improve dispatching from sw queue
SCSI devices use a host-wide tagset, and the shared driver tag space is
often quite big. However, there is also a queue depth for each lun
(.cmd_per_lun), which is often small; for example, on both lpfc and
qla2xxx, .cmd_per_lun is just 3.
So lots of requests may stay in the sw queue, and we always flush all
of those belonging to the same hw queue and dispatch them all to the
driver. Unfortunately it is easy to make the queue busy because of the
small .cmd_per_lun. Once these requests are flushed out, they have to stay
in hctx->dispatch, no bio merge can happen on these requests, and
sequential IO performance is harmed.
This patch introduces blk_mq_dequeue_from_ctx for dequeuing a request
from a sw queue, so that we can dispatch them in scheduler's way. We can
then avoid dequeueing too many requests from sw queue, since we don't
flush ->dispatch completely.
This patch improves dispatching from sw queue by using the .get_budget
and .put_budget callbacks.
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Sat, 14 Oct 2017 09:22:29 +0000 (17:22 +0800)]
blk-mq: introduce .get_budget and .put_budget in blk_mq_ops
For SCSI devices, there is often a per-request-queue depth, which needs
to be respected before queuing one request.
Currently blk-mq always dequeues the request first, then calls
.queue_rq() to dispatch the request to the lld. One obvious issue with
this approach is that I/O merging may not be successful, because when the
per-request-queue depth can't be respected, .queue_rq() has to return
BLK_STS_RESOURCE, and then this request has to stay on the hctx->dispatch
list. This means it never gets a chance to be merged with other IO.
This patch introduces .get_budget and .put_budget callback in blk_mq_ops,
then we can try to get reserved budget first before dequeuing request.
If the budget for queueing I/O can't be satisfied, we don't need to
dequeue request at all. Hence the request can be left in the IO
scheduler queue, for more merging opportunities.
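The new hooks added to blk_mq_ops (sketch, as introduced here; note that a
later patch in this series, "blk-mq: don't handle failure in .get_budget",
simplifies the get_budget return handling):

    /* reserve a per-queue resource before dequeuing a request */
    blk_status_t (*get_budget)(struct blk_mq_hw_ctx *);
    /* release it if the request didn't get dispatched after all */
    void (*put_budget)(struct blk_mq_hw_ctx *);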
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Sat, 14 Oct 2017 09:22:28 +0000 (17:22 +0800)]
block: kyber: check if there are requests in ctx in kyber_has_work()
There may be requests in the sw queue that have not been fetched to the
domain queue yet, so check for them in kyber_has_work().
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Sat, 14 Oct 2017 09:22:27 +0000 (17:22 +0800)]
sbitmap: introduce __sbitmap_for_each_set()
For blk-mq, we need to be able to iterate software queues starting
from any queue in a round robin fashion, so introduce this helper.
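Sketch of iterating from an arbitrary start index (signature per
lib/sbitmap; the callback name is illustrative):

    /* visit every set bit in hctx->ctx_map, wrapping around, starting
     * at the position after the last serviced sw queue */
    __sbitmap_for_each_set(&hctx->ctx_map, start, dispatch_one_ctx, hctx);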
Reviewed-by: Omar Sandoval <osandov@fb.com>
Cc: Omar Sandoval <osandov@fb.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Sat, 14 Oct 2017 09:22:26 +0000 (17:22 +0800)]
blk-mq-sched: move actual dispatching into one helper
So that it becomes easy to support dispatching from the sw queue in the
following patch.
No functional change.
Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Suggested-by: Christoph Hellwig <hch@lst.de> # for simplifying dispatch logic
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Sat, 14 Oct 2017 09:22:25 +0000 (17:22 +0800)]
blk-mq-sched: dispatch from scheduler IFF progress is made in ->dispatch
When the hw queue is busy, we shouldn't take requests from the scheduler
queue any more, otherwise it is difficult to do IO merging.
This patch fixes the awful IO performance on some SCSI devices (lpfc,
qla2xxx, ...) when mq-deadline/kyber is used, by not taking requests if
the hw queue is busy.
Reviewed-by: Omar Sandoval <osandov@fb.com>
Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Scott Bauer [Tue, 31 Oct 2017 20:15:02 +0000 (14:15 -0600)]
MAINTAINERS: Remove Rafael from Opal maintainers.
He is no longer working on storage.
Signed-off-by: Scott Bauer <scott.bauer@intel.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Liang Chen [Mon, 30 Oct 2017 21:46:35 +0000 (14:46 -0700)]
bcache: explicitly destroy mutex while exiting
mutex_destroy does nothing most of time, but it's better to call
it to make the code future proof and it also has some meaning
for like mutex debug.
As Coly pointed out in a previous review, bcache_exit() may not be
able to handle all the references properly if userspace registers
cache and backing devices right before bch_debug_init runs and
bch_debug_init failes later. So not exposing userspace interface
until everything is ready to avoid that issue.
Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Reviewed-by: Coly Li <colyli@suse.de>
Reviewed-by: Eric Wheeler <bcache@linux.ewheeler.net>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
tang.junhui [Mon, 30 Oct 2017 21:46:34 +0000 (14:46 -0700)]
bcache: fix wrong cache_misses statistics
Currently, cache missed IOs are identified by s->cache_miss, but actually
there are many situations where missed IOs are not assigned a value for
s->cache_miss in cached_dev_cache_miss(), for example, a bypassed IO
(s->iop.bypass = 1), or when the cache_bio allocation failed. In these
situations, the code goes to out_put or out_submit with s->cache_miss
NULL, which leads bch_mark_cache_accounting() to treat the IO as a hit.
[ML: applied by 3-way merge]
Signed-off-by: tang.junhui <tang.junhui@zte.com.cn>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Reviewed-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Tang Junhui [Mon, 30 Oct 2017 21:46:33 +0000 (14:46 -0700)]
bcache: update bucket_in_use in real time
bucket_in_use is updated in the gc thread, which is triggered by
invalidating or writing sectors_to_gc dirty data. That is a long interval.
Therefore, when we use it to compare with the threshold, it is often not
timely, which leads to inaccurate judgment and often results in bucket
depletion.
We sent a patch before that updated bucket_in_use periodically in the gc
thread, which Coly thought would lead to high latency. In this patch, we
add avail_nbuckets to record the count of available buckets, and we
calculate bucket_in_use when allocating or freeing a bucket, in real time.
[edited by ML: eliminated some whitespace errors]
Signed-off-by: Tang Junhui <tang.junhui@zte.com.cn>
Signed-off-by: Michael Lyle <mlyle@lyle.org>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Reviewed-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Elena Reshetova [Mon, 30 Oct 2017 21:46:32 +0000 (14:46 -0700)]
bcache: convert cached_dev.count from atomic_t to refcount_t
atomic_t variables are currently used to implement reference
counters with the following properties:
- counter is initialized to 1 using atomic_set()
- a resource is freed upon counter reaching zero
- once counter reaches zero, its further
increments aren't allowed
- counter schema uses basic atomic operations
(set, inc, inc_not_zero, dec_and_test, etc.)
Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situation and be exploitable.
The variable cached_dev.count is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.
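The conversion pattern (illustrative; refcount_t saturates instead of
wrapping on overflow):

    refcount_set(&dc->count, 1);              /* was atomic_set(..., 1)    */
    refcount_inc(&dc->count);                 /* was atomic_inc()          */
    if (refcount_dec_and_test(&dc->count))    /* was atomic_dec_and_test() */
            release(dc);                      /* freed on the last put     */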
Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Coly Li [Mon, 30 Oct 2017 21:46:31 +0000 (14:46 -0700)]
bcache: only permit to recovery read error when cache device is clean
When bcache does read I/Os, for example in writeback or writethrough mode,
if a read request on the cache device fails, bcache will try to recover
the request by reading from the cached device. If the data on the cached
device is not synced with the cache device, then the requester will get
stale data.
For a critical storage system like a database, providing stale data from
recovery may result in application level data corruption, which is
unacceptable.
With this patch, for a failed read request in writeback or writethrough
mode, recovering a recoverable read request only happens when the cache
device is clean. That is to say, all data on the cached device is up to
date.
For other cache modes in bcache, read requests will never hit
cached_dev_read_error(), so they don't need this patch.
Please note, because the cache mode can be switched arbitrarily at run
time, a writethrough mode might have been switched from a writeback mode.
Therefore checking dc->has_dirty in writethrough mode still makes sense.
Changelog:
V4: Fix parens error pointed out by Michael Lyle.
v3: Per response from Kent Overstreet, he thinks recovering stale data is
a bug to fix, and an option to permit it is unnecessary. So in this
version the sysfs file is removed.
v2: rename sysfs entry from allow_stale_data_on_failure to
allow_stale_data_on_failure, and fix the confusing commit log.
v1: initial patch posted.
[small change to patch comment spelling by mlyle]
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Michael Lyle <mlyle@lyle.org>
Reported-by: Arne Wolf <awolf@lenovo.com>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Cc: Kent Overstreet <kent.overstreet@gmail.com>
Cc: Nix <nix@esperi.org.uk>
Cc: Kai Krakow <hurikhan77@gmail.com>
Cc: Eric Wheeler <bcache@lists.ewheeler.net>
Cc: Junhui Tang <tang.junhui@zte.com.cn>
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Bart Van Assche [Thu, 19 Oct 2017 17:00:48 +0000 (10:00 -0700)]
block: Fix a race between blk_cleanup_queue() and timeout handling
Make sure that if the timeout timer fires after a queue has been
marked "dying" that the affected requests are finished.
Reported-by: chenxiang (M) <chenxiang66@hisilicon.com>
Fixes: commit 287922eb0b18 ("block: defer timeouts to a workqueue")
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Tested-by: chenxiang (M) <chenxiang66@hisilicon.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
James Smart [Mon, 23 Oct 2017 22:11:36 +0000 (15:11 -0700)]
nvme-fc: remove NVME_FC_MAX_SEGMENTS
The define is an arbitrary limit to the I/O size on the initiator,
capping the I/O to 1MB-4KB.
Remove the define from the transport. I/O size will solely be limited
by the LLDD sg limits.
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
James Smart [Fri, 20 Oct 2017 23:17:08 +0000 (16:17 -0700)]
nvme-fc: add support for duplicate_connect option
Adds support for the duplicate_connect option. When set to true,
checks whether there's an existing controller via the same host port
and target port for the same host (hostnqn, hostid) to the same
subsystem. Fails the connection request if there is an existing
controller.
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
James Smart [Fri, 20 Oct 2017 23:17:09 +0000 (16:17 -0700)]
nvme-rdma: add support for duplicate_connect option
Adds support for the duplicate_connect option. When set to true,
checks whether there's an existing controller via the same target
address (traddr), target port (trsvcid), and if specified, host
address (host_traddr). Fails the connection request if there is
an existing controller.
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
James Smart [Fri, 20 Oct 2017 23:17:07 +0000 (16:17 -0700)]
nvme: add helper to compare options to controller
Adds a helper function that compares the host and subsystem
specified in a connect options list against a controller.
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
James Smart [Fri, 20 Oct 2017 23:17:06 +0000 (16:17 -0700)]
nvme: add duplicate_connect option
Add the "duplicate_connect" boolean option (presence means true).
Default is false.
When false, the transport should validate whether a new controller request
is targeted for the same host transport addressing and target transport
addressing as an existing controller. If so, the new controller request
should be rejected.
When true, the callee is explicitly requesting a duplicate controller
connection to be made and the new request should be attempted.
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Christoph Hellwig [Wed, 18 Oct 2017 15:09:31 +0000 (17:09 +0200)]
nvme: check for a live controller in nvme_dev_open
This is a much more sensible check than just the admin queue.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Christoph Hellwig [Wed, 18 Oct 2017 14:59:25 +0000 (16:59 +0200)]
nvme: get rid of nvme_ctrl_list
Use the core chrdev code to set up the link between the character device
and the nvme controller. This allows us to get rid of the global list
of all controllers, and also ensures that we have both a reference to
the controller and the transport module before the open method of the
character device is called.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Christoph Hellwig [Wed, 18 Oct 2017 11:25:42 +0000 (13:25 +0200)]
nvme: switch controller refcounting to use struct device
Instead of allocating a separate struct device for the character device,
embed it into struct nvme_ctrl and use it for the main controller
refcounting. This removes double refcounting and gets us an automatic
reference for the character device operations. We keep ctrl->device as a
pointer for now to avoid changing printks all over, but in the future we
could look into message printing helpers that take a controller structure
similar to what other subsystems do.
Note that the delete_ctrl operation always already has a reference (either
through sysfs due to this change, or because every open file on the
/dev/nvme-fabrics node has a reference) when it is entered now, so we
don't need to do the unless_zero variant there.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Christoph Hellwig [Wed, 18 Oct 2017 11:22:00 +0000 (13:22 +0200)]
nvme: simplify nvme_open
Now that we are protected against lookup vs free races for the namespace
by using kref_get_unless_zero we don't need the hack of NULLing out the
disk private data during removal.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Christoph Hellwig [Wed, 18 Oct 2017 11:20:01 +0000 (13:20 +0200)]
nvme: use kref_get_unless_zero in nvme_find_get_ns
For kref_get_unless_zero to protect against lookup vs free races we need
to use it in all places where we aren't guaranteed to already hold a
reference. There is no such guarantee in nvme_find_get_ns, so switch to
kref_get_unless_zero in this function.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Jens Axboe [Wed, 25 Oct 2017 15:47:20 +0000 (09:47 -0600)]
mq-deadline: add 'deadline' as a name alias
The scheduler framework now supports looking up the appropriate
scheduler with the {name,mq} tupple. We can register mq-deadline
with the alias of 'deadline', so that switching to 'deadline'
will do the right thing based on the type of driver attached to
it.
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Wed, 25 Oct 2017 18:35:02 +0000 (12:35 -0600)]
elevator: allow name aliases
Since we now lookup elevator types with the appropriate multiqueue
capability, allow schedulers to register with an alias alongside
the real name. This is in preparation for allowing 'mq-deadline'
to register an alias of 'deadline' as well.
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Wed, 25 Oct 2017 18:33:42 +0000 (12:33 -0600)]
elevator: lookup mq vs non-mq elevators
If an IO scheduler is selected via elevator= and it doesn't match
the driver in question wrt blk-mq support, then we fail to boot.
The elevator= parameter is deprecated and only supported for
non-mq devices. Augment the elevator lookup API to pass in whether
we're looking for an mq-capable scheduler, so that we only ever
return a valid type for the queue in question.
Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=196695
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
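The matching logic this implies can be sketched as follows; the helper
name and exact structure are illustrative:
    /*
     * Sketch: match on the real name or the alias, but only when
     * the scheduler's blk-mq capability agrees with the queue
     * being looked up.
     */
    static bool elevator_match(const struct elevator_type *e,
                               const char *name, bool mq)
    {
        if (e->uses_mq != mq)
            return false;
        if (!strcmp(e->elevator_name, name))
            return true;
        return e->elevator_alias && !strcmp(e->elevator_alias, name);
    }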
Ilya Dryomov [Mon, 16 Oct 2017 13:59:10 +0000 (15:59 +0200)]
block: cope with WRITE ZEROES failing in blkdev_issue_zeroout()
sd_config_write_same() ignores ->max_ws_blocks == 0 and resets it to
permit trying WRITE SAME on older SCSI devices, unless ->no_write_same
is set. Because REQ_OP_WRITE_ZEROES is implemented in terms of WRITE
SAME, blkdev_issue_zeroout() may fail with -EREMOTEIO:
$ fallocate -zn -l 1k /dev/sdg
fallocate: fallocate failed: Remote I/O error
$ fallocate -zn -l 1k /dev/sdg # OK
$ fallocate -zn -l 1k /dev/sdg # OK
The following calls succeed because sd_done() sets ->no_write_same in
response to a sense that would become BLK_STS_TARGET/-EREMOTEIO, causing
__blkdev_issue_zeroout() to fall back to generating ZERO_PAGE bios.
This means blkdev_issue_zeroout() must cope with WRITE ZEROES failing
and fall back to manually zeroing, unless BLKDEV_ZERO_NOFALLBACK is
specified. In the BLKDEV_ZERO_NOFALLBACK case, return -EOPNOTSUPP if
sd_done() has just set ->no_write_same, thus indicating lack of offload
support.
Fixes: c20cfc27a473 ("block: stop using blkdev_issue_write_same for zeroing")
Cc: Hannes Reinecke <hare@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
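Condensed, the resulting flow in blkdev_issue_zeroout() is roughly the
sketch below (the real function also re-checks the queue limits on each
pass; __blkdev_issue_zero_pages() is factored out in the next entry):
    /*
     * Sketch: build and submit WRITE ZEROES bios first; if the
     * submission fails (e.g. -EREMOTEIO from an old SCSI target),
     * retry the range with plain zero-page writes unless the
     * caller passed BLKDEV_ZERO_NOFALLBACK.
     */
    bool try_write_zeroes = !!bdev_write_zeroes_sectors(bdev);
    struct bio *bio;
    int ret;

    retry:
    bio = NULL;
    if (try_write_zeroes)
        ret = __blkdev_issue_write_zeroes(bdev, sector, nr_sects,
                                          gfp_mask, &bio, flags);
    else if (!(flags & BLKDEV_ZERO_NOFALLBACK))
        ret = __blkdev_issue_zero_pages(bdev, sector, nr_sects,
                                        gfp_mask, &bio);
    else
        ret = -EOPNOTSUPP;    /* offload gone, no fallback allowed */

    if (ret == 0 && bio) {
        ret = submit_bio_wait(bio);
        bio_put(bio);
    }
    if (ret && try_write_zeroes && !(flags & BLKDEV_ZERO_NOFALLBACK)) {
        try_write_zeroes = false;
        goto retry;           /* WRITE ZEROES failed: zero manually */
    }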
Ilya Dryomov [Mon, 16 Oct 2017 13:59:09 +0000 (15:59 +0200)]
block: factor out __blkdev_issue_zero_pages()
blkdev_issue_zeroout() will use this in !BLKDEV_ZERO_NOFALLBACK case.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ilya Dryomov [Wed, 18 Oct 2017 12:38:38 +0000 (14:38 +0200)]
block: move CAP_SYS_ADMIN check in blkdev_roset()
Check for CAP_SYS_ADMIN before calling into the driver, similar to
blkdev_flushbuf(). This is safer and can spare a check in the driver.
(Currently BLKROSET is overridden by md and rbd; rbd is missing the
check, and md has the check, but it covers a lot more than BLKROSET.)
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
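A minimal sketch of the reordering, simplified from block/ioctl.c:
    /*
     * Sketch: check the capability before handing BLKROSET to the
     * driver's ioctl method, so drivers that override it (md, rbd)
     * don't each need their own check.
     */
    static int blkdev_roset(struct block_device *bdev, fmode_t mode,
                            unsigned cmd, unsigned long arg)
    {
        int ret, n;

        if (!capable(CAP_SYS_ADMIN))
            return -EACCES;

        ret = __blkdev_driver_ioctl(bdev, mode, cmd, arg);
        if (ret != -EINVAL && ret != -ENOTTY)
            return ret;    /* driver handled it */
        if (get_user(n, (int __user *)arg))
            return -EFAULT;
        set_device_ro(bdev, n);
        return 0;
    }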
Dmitry Monakhov [Wed, 25 Oct 2017 00:44:57 +0000 (18:44 -0600)]
block: Invalidate cache on discard v2
It is reasonable to drop the page cache on discard; otherwise those
pages may be written out by writeback a second later, which will not
make thin-provisioned devices happy. It also looks like a security
leak in the secure discard case.
Also add a check for the queue discard flag at an early stage.
Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
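The idea can be sketched as below; the surrounding ioctl plumbing and
alignment checks are condensed, so treat the helper boundaries as
illustrative:
    /*
     * Sketch: drop the page cache over the range before issuing
     * the discard, so dirty pages cannot be written back over the
     * discarded (or securely erased) blocks a second later.
     */
    static int blk_ioctl_discard(struct block_device *bdev, fmode_t mode,
                                 uint64_t start, uint64_t len, int secure)
    {
        struct address_space *mapping = bdev->bd_inode->i_mapping;
        unsigned long flags = 0;

        /* early capability check on the queue's discard flag */
        if (!blk_queue_discard(bdev_get_queue(bdev)))
            return -EOPNOTSUPP;
        if (secure)
            flags |= BLKDEV_DISCARD_SECURE;

        /* invalidate the page cache for the affected byte range */
        truncate_inode_pages_range(mapping, start, start + len - 1);

        return blkdev_issue_discard(bdev, start >> 9, len >> 9,
                                    GFP_KERNEL, flags);
    }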
Javier González [Tue, 24 Oct 2017 13:56:13 +0000 (15:56 +0200)]
lightnvm: pblk: remove leftover testing function
A previous patch inadvertently left an unused test function in the
header; kill it.
Fixes: 8bd400204bd5 ("lightnvm: pblk: cleanup unused and static functions")
Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Nitzan Carmi [Sun, 22 Oct 2017 09:37:04 +0000 (09:37 +0000)]
nvme-rdma: add debug message when a request times out
Signed-off-by: Nitzan Carmi <nitzanc@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Max Gurtovoy [Mon, 23 Oct 2017 09:59:27 +0000 (12:59 +0300)]
nvme-rdma: align nvme_rdma_device structure
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
James Smart [Thu, 19 Oct 2017 23:11:39 +0000 (16:11 -0700)]
nvme-fc: correct io timeout behavior
The transport io timeout behavior wasn't quite correct. It ignored
that the io error handler is supposed to be synchronous, so it possibly
allowed the blk request to be restarted while the associated io was
still aborting. Timeouts on reserved commands, those used for
association create, were never timing out, thus they hung forever.
To correct:
If an io times out while a remoteport is not connected, just
restart the io timer. The lack of connectivity will simultaneously
be resetting the controller, so the reset path will abort and terminate
the io.
If an io times out while it was marked for transport abort, just
restart the io timer. The abort process is underway and will complete
the io.
Otherwise, if an io times out, abort the io. If the abort was
unsuccessful (unlikely), give up and return not handled.
If the abort was successful, as the abort process is underway it will
terminate the io, so rather than synchronously waiting, just restart
the io timer.
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
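In timeout-handler terms the rules above map to roughly this sketch;
the blk_eh return codes are the real blk-mq ones, while the helpers
around them are condensed:
    /* Sketch of the decision tree in the blk-mq timeout handler. */
    static enum blk_eh_timer_return
    nvme_fc_timeout(struct request *rq, bool reserved)
    {
        struct nvme_fc_fcp_op *op = blk_mq_rq_to_pdu(rq);
        struct nvme_fc_ctrl *ctrl = op->ctrl;

        /* no connectivity: the reset path will terminate the io */
        if (ctrl->rport->remoteport.port_state != FC_OBJSTATE_ONLINE)
            return BLK_EH_RESET_TIMER;

        /* already marked for transport abort: let that complete it */
        if (op->flags & FCOP_FLAGS_TERMIO)
            return BLK_EH_RESET_TIMER;

        /* otherwise start an abort; if that fails, give up */
        if (__nvme_fc_abort_op(ctrl, op))
            return BLK_EH_NOT_HANDLED;

        /* abort in flight; it will complete the io asynchronously */
        return BLK_EH_RESET_TIMER;
    }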
James Smart [Thu, 19 Oct 2017 23:11:38 +0000 (16:11 -0700)]
nvme-fc: correct io termination handling
The io completion handling for i/o's that are failing due to
a transport error or association termination had issues, causing
io failures (DNR set so retries didn't kick in) or long stalls.
Change the io completion handler for the following items:
When an io has been completed due to a transport abort (based on an
exchange error) or when marked as aborted as part of an association
termination (FCOP_FLAGS_TERMIO), set the NVME completion status to
NVME_SC_ABORTED. By default, do not set DNR on the status so that a
retry can be attempted after association recreate.
In cases where an io is failed (non-successful nvme status including
aborted), if the controller is being deleted (blk_queue_dying) or
the io was part of the ios used for association creation (ctrl state
is NEW or RECONNECTING), then additionally set the DNR bit so the io
will not be retried. If the failed io was part of association creation,
the failure will tear down the partially completed association and
typically restart a new reconnect attempt (another create association
later).
Also rearranged the code flow to remove a largely unneeded local
variable.
Signed-off-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
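An illustrative sketch of the status fixup, with constant and variable
names following the text above rather than the exact source:
    /* status fixup per the rules above (names illustrative) */
    if (transport_abort || (op->flags & FCOP_FLAGS_TERMIO))
        status = NVME_SC_ABORTED;    /* retryable by default: no DNR */

    if (status != NVME_SC_SUCCESS &&
        (blk_queue_dying(rq->q) ||
         ctrl->ctrl.state == NVME_CTRL_NEW ||
         ctrl->ctrl.state == NVME_CTRL_RECONNECTING))
        status |= NVME_SC_DNR;       /* do not retry this io */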
Chaitanya Kulkarni [Tue, 17 Oct 2017 01:24:20 +0000 (18:24 -0700)]
nvme-pci: add SGL support
This adds SGL support for the NVMe PCIe driver, based on an earlier patch
from Rajiv Shanmugam Madeswaran <smrajiv15 at gmail.com>. This patch
refactors the original code and adds a new module parameter, sgl_threshold,
to determine whether to use SGLs or PRPs for IOs.
The usage of SGLs is controlled by the sgl_threshold module parameter,
which allows SGLs to be used conditionally when the average request segment
size (avg_seg_size) is greater than sgl_threshold. In the original patch,
the decision to use SGLs depended only on the IO size; with the new
approach we consider not only the IO size but also the number of physical
segments present in the IO.
We calculate avg_seg_size based on the request payload bytes and the
number of physical segments present in the request. For example:
1. blk_rq_nr_phys_segments = 2, blk_rq_payload_bytes = 8k:
   avg_seg_size = 4k; use SGL if avg_seg_size >= sgl_threshold.
2. blk_rq_nr_phys_segments = 2, blk_rq_payload_bytes = 64k:
   avg_seg_size = 32k; use SGL if avg_seg_size >= sgl_threshold.
3. blk_rq_nr_phys_segments = 16, blk_rq_payload_bytes = 64k:
   avg_seg_size = 4k; use SGL if avg_seg_size >= sgl_threshold.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
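The decision boils down to roughly the following sketch, mirroring the
examples above; the controller-side SGL capability check is condensed:
    /* Sketch: choose SGL vs PRP from the average segment size. */
    static bool nvme_pci_use_sgls(struct nvme_dev *dev, struct request *req)
    {
        unsigned int avg_seg_size;

        avg_seg_size = DIV_ROUND_UP(blk_rq_payload_bytes(req),
                                    blk_rq_nr_phys_segments(req));

        /* the controller must advertise SGL support at all */
        if (!(dev->ctrl.sgls & ((1 << 0) | (1 << 1))))
            return false;
        if (!sgl_threshold || avg_seg_size < sgl_threshold)
            return false;
        return true;
    }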
Christoph Hellwig [Wed, 18 Oct 2017 11:10:01 +0000 (13:10 +0200)]
nvme: use ida_simple_{get,remove} for the controller instance
Switch to the ida_simple_* helpers instead of open-coding them.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
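For reference, the ida_simple_* pattern replaces the open-coded
ida_pre_get()/ida_get_new() loop with something like this sketch
(helper names illustrative):
    /* Sketch: instance numbers via the ida_simple_* helpers. */
    static DEFINE_IDA(nvme_instance_ida);

    static int nvme_alloc_instance(struct nvme_ctrl *ctrl)
    {
        int instance;

        instance = ida_simple_get(&nvme_instance_ida, 0, 0, GFP_KERNEL);
        if (instance < 0)
            return instance;    /* -ENOMEM or -ENOSPC */
        ctrl->instance = instance;
        return 0;
    }

    static void nvme_release_instance(struct nvme_ctrl *ctrl)
    {
        ida_simple_remove(&nvme_instance_ida, ctrl->instance);
    }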
Roy Shterman [Wed, 18 Oct 2017 10:46:07 +0000 (13:46 +0300)]
nvmet: Change max_nsid in subsystem due to ns_disable if needed
In case we disable a namespace whose nsid equals the subsystem's
max_nsid, we need to search for the next largest nsid in this
subsystem. If the subsystem has no more namespaces, we set max_nsid
to 0; otherwise, we take the nsid of the last namespace in the
namespaces list, because the list is kept sorted on insertion.
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Roy Shterman <roys@lightbitslabs.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
[hch: slight refactor]
Signed-off-by: Christoph Hellwig <hch@lst.de>
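A sketch of the recomputation, relying on the sorted namespaces list
(simplified; list and field names follow the nvmet core of this era):
    /*
     * Sketch: recompute max_nsid after a namespace is disabled,
     * relying on the namespaces list being kept sorted by nsid.
     */
    static u32 nvmet_max_nsid(struct nvmet_subsys *subsys)
    {
        struct nvmet_ns *ns;

        if (list_empty(&subsys->namespaces))
            return 0;    /* no namespaces left */

        /* the largest nsid sits at the tail of the sorted list */
        ns = list_last_entry(&subsys->namespaces, struct nvmet_ns,
                             dev_link);
        return ns->nsid;
    }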