Jianchao Wang [Fri, 14 Dec 2018 01:28:20 +0000 (09:28 +0800)]
blk-mq: replace and kill blk_mq_request_issue_directly
Replace blk_mq_request_issue_directly with blk_mq_try_issue_directly
in blk_insert_cloned_request and kill it as nobody uses it any more.
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jianchao Wang [Fri, 14 Dec 2018 01:28:19 +0000 (09:28 +0800)]
blk-mq: issue directly with bypass 'false' in blk_mq_sched_insert_requests
It is not necessary to issue requests directly with bypass 'true'
in blk_mq_sched_insert_requests and handle the non-issued requests
itself. Just set bypass to 'false' and let blk_mq_try_issue_directly
handle them completely. Remove the blk_rq_can_direct_dispatch check,
because blk_mq_try_issue_directly can handle it well. If a request
cannot be issued directly, insert the rest.
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jianchao Wang [Fri, 14 Dec 2018 01:28:18 +0000 (09:28 +0800)]
blk-mq: refactor the code of issue request directly
Merge blk_mq_try_issue_directly and __blk_mq_try_issue_directly
into one interface to unify the paths for issuing requests directly.
The merged interface takes over the requests completely: it can
insert, end or do nothing based on the return value of .queue_rq and
the 'bypass' parameter. The caller then needs no further handling
and the code can be cleaned up.
In addition, commit
c616cbee ( blk-mq: punt failed direct issue
to dispatch list ) always inserts requests into the hctx dispatch list
whenever it gets BLK_STS_RESOURCE or BLK_STS_DEV_RESOURCE. That is
overkill and will harm merging. We only need to do that for requests
that have been through .queue_rq. This patch also fixes that.
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 13 Dec 2018 20:32:14 +0000 (21:32 +0100)]
block: remove the bio_integrity_advance export
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 13 Dec 2018 20:32:13 +0000 (21:32 +0100)]
block: remove the bioset_integrity_free export
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 13 Dec 2018 20:32:12 +0000 (21:32 +0100)]
block: remove the unused bio_set_pages_dirty and bio_check_pages_dirty exports
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 13 Dec 2018 20:32:11 +0000 (21:32 +0100)]
block: remove the unused bio_iov_iter_get_pages export
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 13 Dec 2018 20:32:10 +0000 (21:32 +0100)]
block: remove the blk_recount_segments export
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Thu, 13 Dec 2018 20:32:09 +0000 (21:32 +0100)]
block: remove the bio_phys_segments export
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Sagi Grimberg [Thu, 13 Dec 2018 20:34:07 +0000 (12:34 -0800)]
nvme: fix kernel paging oops
Free the controller discard_page correctly.
Fixes: cb5b7262b011 ("nvme: provide fallback for discard alloc failure")
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Thu, 13 Dec 2018 19:59:37 +0000 (12:59 -0700)]
Merge branch 'nvme-4.21' of git://git.infradead.org/nvme into for-4.21/block
Pull NVMe updates from Christoph:
"Here is the second large chunk of nvme updates for 4.21:
- host and target support for NVMe over TCP (Sagi Grimberg,
Roy Shterman, Solganik Alexander)
- error log page support in target (Chaitanya Kulkarni)
plus small fixes and improvements from Jens Axboe and Chengguang Xu."
* 'nvme-4.21' of git://git.infradead.org/nvme: (33 commits)
nvme-rdma: support separate queue maps for read and write
nvme-tcp: support separate queue maps for read and write
nvme-fabrics: allow user to set nr_write_queues for separate queue maps
nvme-fabrics: add missing nvmf_ctrl_options documentation
blk-mq-rdma: pass in queue map to blk_mq_rdma_map_queues
nvmet: update smart log with num err log entries
nvmet: add error log page cmd handler
nvmet: add error log support for file backend
nvmet: add error log support for bdev backend
nvmet: add error log support for admin-cmd
nvmet: add error log support for rdma backend
nvmet: add error log support for fabrics-cmd
nvmet: add error log support in the core
nvmet: add interface to update error-log page
nvmet: add error-log definitions
nvme: add error log page slot definition
nvme: remove nvme_common command cdw10 array
nvmet: remove unused variable
nvme: provide fallback for discard alloc failure
nvme: add __exit annotation
...
Guoju Fang [Thu, 13 Dec 2018 14:53:57 +0000 (22:53 +0800)]
bcache: print number of keys in trace_bcache_journal_write
Sometimes journal flushes may be very frequent, so it is useful to dump
the number of keys every time the journal is written.
Signed-off-by: Guoju Fang <fangguoju@gmail.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Coly Li [Thu, 13 Dec 2018 14:53:56 +0000 (22:53 +0800)]
bcache: set writeback_percent in a flexible range
Because CUTOFF_WRITEBACK is defined as 40, before the changes for
dynamic cutoff writeback values, writeback_percent was limited to [0,
CUTOFF_WRITEBACK]. Any value larger than CUTOFF_WRITEBACK would be fixed
up to 40.
Now the cutoff writeback limit is a dynamic value, bch_cutoff_writeback,
so the range of writeback_percent can be the more flexible range [0,
bch_cutoff_writeback]. The flexibility is that it can be expanded to a
larger or smaller range than [0, 40], depending on how the value
bch_cutoff_writeback is specified.
The default value is still strongly recommended to most users for most
workloads. But people who want to do research on bcache writeback
performance tuning now have a chance to specify a more flexible
writeback_percent in the range [0, 70].
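A minimal sketch of what the store path looks like with the dynamic
limit (bcache's sysfs helpers are macro based; shown only to illustrate
the clamp, the actual code may differ slightly):
	/* in the writeback_percent store handler */
	sysfs_strtoul_clamp(writeback_percent, dc->writeback_percent,
			    0, bch_cutoff_writeback);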
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Coly Li [Thu, 13 Dec 2018 14:53:55 +0000 (22:53 +0800)]
bcache: make cutoff_writeback and cutoff_writeback_sync tunable
Currently the cutoff writeback and cutoff writeback sync thresholds are
defined by CUTOFF_WRITEBACK (40) and CUTOFF_WRITEBACK_SYNC (70) as
static values. Most of the time they work fine, but when people want
to do research on bcache writeback mode performance tuning, there is no
way to modify the soft and hard cutoff writeback values.
This patch introduces two module parameters, bch_cutoff_writeback_sync
and bch_cutoff_writeback, which permit people to tune the values when
loading bcache.ko. If they are not specified at module load time, the
current values CUTOFF_WRITEBACK_SYNC and CUTOFF_WRITEBACK will be used
as defaults and nothing changes.
When people want to tune these two values,
- cutoff_writeback can be set in range [1, 70]
- cutoff_writeback_sync can be set in range [1, 90]
- cutoff_writeback always <= cutoff_writeback_sync
The default values are strongly recommended to most users for most
workloads. Anyway, if people want to take their own risk and do research
on new writeback cutoff tuning for their own workload, now they can do
it.
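A minimal sketch of the parameter declarations this describes (names
mirror the changelog; the exact permissions and init-time clamping in
the real patch may differ):
	static unsigned int bch_cutoff_writeback = CUTOFF_WRITEBACK;
	static unsigned int bch_cutoff_writeback_sync = CUTOFF_WRITEBACK_SYNC;

	module_param(bch_cutoff_writeback, uint, 0444);
	MODULE_PARM_DESC(bch_cutoff_writeback, "threshold to cutoff writeback");

	module_param(bch_cutoff_writeback_sync, uint, 0444);
	MODULE_PARM_DESC(bch_cutoff_writeback_sync, "hard threshold to cutoff writeback");

	/* At module init the values would then be clamped to the documented
	 * ranges, e.g. clamp_val(bch_cutoff_writeback, 1, 70) and
	 * clamp_val(bch_cutoff_writeback_sync, 1, 90), keeping
	 * bch_cutoff_writeback <= bch_cutoff_writeback_sync. */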
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Coly Li [Thu, 13 Dec 2018 14:53:54 +0000 (22:53 +0800)]
bcache: add MODULE_DESCRIPTION information
This patch moves MODULE_AUTHOR and MODULE_LICENSE to the end of super.c,
and adds MODULE_DESCRIPTION("Bcache: a Linux block layer cache").
This is preparation for adding module parameters.
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Coly Li [Thu, 13 Dec 2018 14:53:53 +0000 (22:53 +0800)]
bcache: option to automatically run gc thread after writeback
The option gc_after_writeback is disabled by default, because garbage
collection will discard SSD data, which drops cached data.
Echoing 1 into /sys/fs/bcache/<UUID>/internal/gc_after_writeback will
enable this option, which wakes up the gc thread when writeback is
accomplished and all cached data is clean.
This option is helpful for people who care more about write performance.
In a heavy write workload, all cached data becoming clean only happens
when the writeback thread cleans all cached data during I/O idle time.
In such a situation a subsequent gc run may help to shrink the bcache
B+ tree and discard more clean data, which may be helpful for future
write requests.
If you are not sure whether this is helpful for your own workload,
please leave it disabled by default.
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Coly Li [Thu, 13 Dec 2018 14:53:52 +0000 (22:53 +0800)]
bcache: introduce force_wake_up_gc()
The garbage collection thread only starts to work when c->sectors_to_gc
is a negative value; otherwise nothing will happen even if the gc thread
is woken up by wake_up_gc().
force_wake_up_gc() sets c->sectors_to_gc to -1 before calling
wake_up_gc(), so the gc thread has a chance to run, provided no one else
sets c->sectors_to_gc to a positive value before gc_should_run().
This routine can be called wherever the gc thread needs to be woken up
and forced to run.
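A sketch of the helper as described above (assuming c->sectors_to_gc is
an atomic_t and wake_up_gc() is the existing bcache wakeup helper):
	static inline void force_wake_up_gc(struct cache_set *c)
	{
		/*
		 * The gc thread only runs when sectors_to_gc is negative,
		 * so force it below zero before waking the thread.
		 */
		atomic_set(&c->sectors_to_gc, -1);
		wake_up_gc(c);
	}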
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Shenghui Wang [Thu, 13 Dec 2018 14:53:51 +0000 (22:53 +0800)]
bcache: cannot set writeback_running via sysfs if no writeback kthread created
"echo 1 > writeback_running" marks writeback_running even if no
writeback kthread created as "d_strtoul(writeback_running)" will simply
set dc-> writeback_running without checking the existence of
dc->writeback_thread.
Add check for setting writeback_running via sysfs: if no writeback
kthread available, reject setting to 1.
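A sketch of the store-side check this adds (error code and message
wording are illustrative; the real bcache sysfs code uses its own
strtoul helpers):
	if (attr == &sysfs_writeback_running) {
		bool wb = strtoul_or_return(buf);

		if (wb && !dc->writeback_thread) {
			pr_err("%s: can't set writeback_running to 1 without a writeback thread",
			       dc->disk.disk->disk_name);
			return -EPERM;
		}
		dc->writeback_running = wb;
		return size;
	}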
v2 -> v3:
* Make message on wrong assignment more clear.
* Print name of bcache device instead of name of backing device.
Signed-off-by: Shenghui Wang <shhuiw@foxmail.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Shenghui Wang [Thu, 13 Dec 2018 14:53:50 +0000 (22:53 +0800)]
bcache: do not mark writeback_running too early
A fresh backing device is not attached to any cache_set, and
has no writeback kthread created until first attached to some
cache_set.
But bch_cached_dev_writeback_init runs
"
dc->writeback_running = true;
WARN_ON(test_and_clear_bit(BCACHE_DEV_WB_RUNNING,
&dc->disk.flags));
"
for any newly formatted backing devices.
For a fresh standalone backing device, we can get something like the
following even though no writeback kthread has been created:
------------------------
/sys/block/bcache0/bcache# cat writeback_running
1
/sys/block/bcache0/bcache# cat writeback_rate_debug
rate: 512.0k/sec
dirty: 0.0k
target: 0.0k
proportional: 0.0k
integral: 0.0k
change: 0.0k/sec
next io: -15427384ms
The non-zero fields are misleading, as there is no live writeback
kthread yet.
Set dc->writeback_running to false, as no writeback thread is created in
bch_cached_dev_writeback_init().
The writeback thread is created and woken up in
bch_cached_dev_writeback_start(). Set dc->writeback_running to true
before bch_writeback_queue() is called, as the writeback thread checks
whether dc->writeback_running is true before writing back dirty data,
and hangs if it detects false.
After the change, we can get the following output for a fresh standalone
backing device:
-----------------------
/sys/block/bcache0/bcache$ cat writeback_running
0
/sys/block/bcache0/bcache# cat writeback_rate_debug
rate: 0.0k/sec
dirty: 0.0k
target: 0.0k
proportional: 0.0k
integral: 0.0k
change: 0.0k/sec
next io: 0ms
v1 -> v2:
Set dc->writeback_running before bch_writeback_queue() is called.
Signed-off-by: Shenghui Wang <shhuiw@foxmail.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Shenghui Wang [Thu, 13 Dec 2018 14:53:49 +0000 (22:53 +0800)]
bcache: update comment in sysfs.c
We have struct cached_dev allocated by kzalloc in register_bcache(),
which initializes all the fields of cached_dev with 0s. And commit
ce4c3e19e520 ("bcache: Replace bch_read_string_list() by
__sysfs_match_string()") has removed the string "default".
Update the comment.
Signed-off-by: Shenghui Wang <shhuiw@foxmail.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Shenghui Wang [Thu, 13 Dec 2018 14:53:48 +0000 (22:53 +0800)]
bcache: update comment for bch_data_insert
commit
220bb38c21b8 ("bcache: Break up struct search") introduced
changes to struct search and s->iop. bypass/bio are fields of struct
data_insert_op now. Update the comment.
Signed-off-by: Shenghui Wang <shhuiw@foxmail.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Shenghui Wang [Thu, 13 Dec 2018 14:53:47 +0000 (22:53 +0800)]
bcache: do not check if debug dentry is ERR or NULL explicitly on remove
debugfs_remove and debugfs_remove_recursive will check if the dentry
pointer is NULL or ERR, and will do nothing in that case.
Remove the check in cache_set_free and bch_debug_init.
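For illustration, the guarded call simply collapses to the plain call,
since debugfs_remove() already tolerates NULL and ERR_PTR() dentries
(field name shown only as an example):
	/* before */
	if (!IS_ERR_OR_NULL(c->debug))
		debugfs_remove(c->debug);

	/* after */
	debugfs_remove(c->debug);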
Signed-off-by: Shenghui Wang <shhuiw@foxmail.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Shenghui Wang [Thu, 13 Dec 2018 14:53:46 +0000 (22:53 +0800)]
bcache: add comment for cache_set->fill_iter
We have the following define for btree iterator:
struct btree_iter {
        size_t size, used;
#ifdef CONFIG_BCACHE_DEBUG
        struct btree_keys *b;
#endif
        struct btree_iter_set {
                struct bkey *k, *end;
        } data[MAX_BSETS];
};
We can see that the length of data[] field is static MAX_BSETS, which is
defined as 4 currently.
But a btree node on disk could have too many bsets for an iterator to fit
on the stack - maybe far more than MAX_BSETS. We have to dynamically
allocate space to hold more btree_iter_sets.
bch_cache_set_alloc() will make sure the pool cache_set->fill_iter can
allocate an iterator equipped with enough room that can host
(sb.bucket_size / sb.block_size)
btree_iter_sets, which is more than static MAX_BSETS.
bch_btree_node_read_done() will use that pool to allocate one iterator, to
host many bsets in one btree node.
Add more comment around cache_set->fill_iter to make code less confusing.
Signed-off-by: Shenghui Wang <shhuiw@foxmail.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Sagi Grimberg [Wed, 12 Dec 2018 07:38:58 +0000 (23:38 -0800)]
nvme-rdma: support separate queue maps for read and write
Allow NVMF_OPT_NR_WRITE_QUEUES to describe additional write queues. In
addition, implement .map_queues that will apply 2 queue maps for read
and write queue sets.
Note that with the separate queue map, HCTX_TYPE_READ will always use
nr_io_queues and HCTX_TYPE_DEFAULT will use nr_write_queues.
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Sagi Grimberg [Wed, 12 Dec 2018 07:38:57 +0000 (23:38 -0800)]
nvme-tcp: support separate queue maps for read and write
Allow NVMF_OPT_NR_WRITE_QUEUES to describe additional write queues. In
addition, implement .map_queues that will apply 2 queue maps for read
and write queue sets.
Note that with the separate queue map, HCTX_TYPE_READ will always use
nr_io_queues and HCTX_TYPE_DEFAULT will use nr_write_queues.
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Sagi Grimberg [Wed, 12 Dec 2018 07:38:56 +0000 (23:38 -0800)]
nvme-fabrics: allow user to set nr_write_queues for separate queue maps
This argument specifies how many I/O queues will be connected in
create_ctrl in addition to nr_io_queues. With this configuration, I/O
that carries payload from the host to the target will use the default
hctx queue map, and I/O that involves target-to-host transfers will use
the read hctx queue map.
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Sagi Grimberg [Wed, 12 Dec 2018 07:38:55 +0000 (23:38 -0800)]
nvme-fabrics: add missing nvmf_ctrl_options documentation
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Sagi Grimberg [Wed, 12 Dec 2018 07:38:54 +0000 (23:38 -0800)]
blk-mq-rdma: pass in queue map to blk_mq_rdma_map_queues
Will be used by nvme-rdma for queue map separation support.
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Chaitanya Kulkarni [Wed, 12 Dec 2018 23:11:48 +0000 (15:11 -0800)]
nvmet: update smart log with num err log entries
Now that we have an error log page implementation, update the smart log
command handler to provide the number of error log entries in the
lifetime of the controller field.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Chaitanya Kulkarni [Wed, 12 Dec 2018 23:11:47 +0000 (15:11 -0800)]
nvmet: add error log page cmd handler
Now that we have support for all the major parts of the target, we add
an NVMe error log page handler so that the host can read the log page.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Chaitanya Kulkarni [Wed, 12 Dec 2018 23:11:43 +0000 (15:11 -0800)]
nvmet: add error log support for file backend
This patch adds support for the file backend to populate the
error log entries. Here we map the errno to the NVMe status codes.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Chaitanya Kulkarni [Wed, 12 Dec 2018 23:11:42 +0000 (15:11 -0800)]
nvmet: add error log support for bdev backend
This patch adds the support for the block device backend to populate the
error log entries. Here we map the blk_status_t to the NVMe status.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Chaitanya Kulkarni [Wed, 12 Dec 2018 23:11:46 +0000 (15:11 -0800)]
nvmet: add error log support for admin-cmd
This patch adds the support to maintain the error log page for admin
commands.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Chaitanya Kulkarni [Wed, 12 Dec 2018 23:11:45 +0000 (15:11 -0800)]
nvmet: add error log support for rdma backend
This patch adds the support to maintain the error log page for rdma
transport, we mainly focus here on the NVME_INVALID_FIELD errors.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Chaitanya Kulkarni [Wed, 12 Dec 2018 23:11:44 +0000 (15:11 -0800)]
nvmet: add error log support for fabrics-cmd
This patch adds the support to maintain the error log page for the
fabrics prop get, prop set, and admin connect commands. Here we also
update discovery.c, and update the set/get features and parse functions
to support the error log page.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Chaitanya Kulkarni [Wed, 12 Dec 2018 23:11:41 +0000 (15:11 -0800)]
nvmet: add error log support in the core
This patch adds the support to maintain error log page for the
nvmet-core.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Chaitanya Kulkarni [Wed, 12 Dec 2018 23:11:40 +0000 (15:11 -0800)]
nvmet: add interface to update error-log page
This patch adds nvmet_req based interface to the nvmet-core so that
we can update the error log page. We update error log page in
the request completion path when status is not set to NVME_SC_SUCCESS.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Chaitanya Kulkarni [Wed, 12 Dec 2018 23:11:39 +0000 (15:11 -0800)]
nvmet: add error-log definitions
This patch adds necessary fields in the target data structures to
support error log page. For a target controller, we add a new error log
field to maintain the error log, at any given point we maintain error
entries equal to NVMET_ERROR_LOG_SLOTS for each controller. In the
following patch, we also update the error log page entry in the I/O
completion path so we introduce a spinlock for synchronization of the
log.
For nvmet_req, we add a new field error_loc to hold the location of
the error in the command when the actual error occurs for each request
and a starting LBA if applicable.
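A conceptual sketch of the additions this describes (field names follow
the changelog, not necessarily the exact in-tree layout):
	/* per-controller error log state */
	struct nvmet_ctrl {
		/* ... existing fields ... */
		spinlock_t              error_lock;   /* serializes slot updates */
		u64                     err_counter;  /* lifetime error count */
		struct nvme_error_slot  slots[NVMET_ERROR_LOG_SLOTS];
	};

	/* per-request location of the error, filled in on completion */
	struct nvmet_req {
		/* ... existing fields ... */
		u16 error_loc;
		u64 error_slba;
	};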
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Chaitanya Kulkarni [Wed, 12 Dec 2018 23:11:38 +0000 (15:11 -0800)]
nvme: add error log page slot definition
This patch adds the NVMe error slot definition from the spec.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Chaitanya Kulkarni [Wed, 12 Dec 2018 23:11:37 +0000 (15:11 -0800)]
nvme: remove nvme_common command cdw10 array
This is a preparation patch which removes the nvme common command cdw10
array and replaces it with individual fields. This is needed so that the
nvmet error log page implementation can assign error log page entry
offsets more easily.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Sagi Grimberg [Thu, 13 Dec 2018 07:01:54 +0000 (23:01 -0800)]
nvmet: remove unused variable
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Jens Axboe [Wed, 12 Dec 2018 16:18:11 +0000 (09:18 -0700)]
nvme: provide fallback for discard alloc failure
When boxes are run near (or to) OOM, we have a problem with the discard
page allocation in nvme. If we fail allocating the special page, we
return busy, and it'll get retried. But since ordering is honored for
dispatch requests, we can keep retrying this same IO and failing. Behind
that IO could be requests that want to free memory, but they never get
the chance.
Allocate a fixed discard page per controller for a safe fallback, and use
that if the initial allocation fails.
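A sketch of the fallback in the discard setup path (modelled on the
description above; the discard_page/discard_page_busy field names are
assumptions):
	range = kmalloc_array(segments, sizeof(*range),
			      GFP_ATOMIC | __GFP_NOWARN);
	if (!range) {
		/*
		 * Fall back to the preallocated per-controller page, but
		 * only if no other discard request is currently using it.
		 */
		if (test_and_set_bit_lock(0, &ns->ctrl->discard_page_busy))
			return BLK_STS_RESOURCE;
		range = page_address(ns->ctrl->discard_page);
	}
On completion the bit would be released again with clear_bit_unlock()
if the fallback page was used.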
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Chengguang Xu [Mon, 10 Dec 2018 23:24:34 +0000 (07:24 +0800)]
nvme: add __exit annotation
Add __exit annotation to cleanup helper which is only
called once in the module.
Signed-off-by: Chengguang Xu <cgxu519@gmx.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Sagi Grimberg [Tue, 4 Dec 2018 01:52:17 +0000 (17:52 -0800)]
nvme-tcp: add NVMe over TCP host driver
This patch implements the NVMe over TCP host driver. It can be used to
connect to remote NVMe over Fabrics subsystems over good old TCP/IP.
The driver implements TP 8000, which defines how NVMe over Fabrics
capsules and data are encapsulated in nvme-tcp PDUs and exchanged on top
of a TCP byte stream. nvme-tcp header and data digests are supported as
well.
To connect to all NVMe over Fabrics controllers reachable on a given
target port over TCP, use the following command:
nvme connect-all -t tcp -a $IPADDR
This requires the latest version of nvme-cli with TCP support.
Signed-off-by: Sagi Grimberg <sagi@lightbitslabs.com>
Signed-off-by: Roy Shterman <roys@lightbitslabs.com>
Signed-off-by: Solganik Alexander <sashas@lightbitslabs.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Sagi Grimberg [Tue, 4 Dec 2018 01:52:16 +0000 (17:52 -0800)]
nvmet: allow configfs tcp trtype configuration
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Sagi Grimberg <sagi@lightbitslabs.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Sagi Grimberg [Tue, 4 Dec 2018 01:52:15 +0000 (17:52 -0800)]
nvmet-tcp: add NVMe over TCP target driver
This patch implements the TCP transport driver for the NVMe over Fabrics
target stack. This allows exporting NVMe over Fabrics functionality over
good old TCP/IP.
The driver implements TP 8000, which defines how NVMe over Fabrics
capsules and data are encapsulated in nvme-tcp PDUs and exchanged on top
of a TCP byte stream. nvme-tcp header and data digests are supported as
well.
Signed-off-by: Sagi Grimberg <sagi@lightbitslabs.com>
Signed-off-by: Roy Shterman <roys@lightbitslabs.com>
Signed-off-by: Solganik Alexander <sashas@lightbitslabs.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Sagi Grimberg [Tue, 4 Dec 2018 01:52:14 +0000 (17:52 -0800)]
nvme-tcp: Add protocol header
Signed-off-by: Sagi Grimberg <sagi@lightbitslabs.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Sagi Grimberg [Tue, 4 Dec 2018 01:52:13 +0000 (17:52 -0800)]
nvme-fabrics: allow user passing data digest
Data digest is an nvme-tcp specific feature, but nothing prevents other
transports from reusing the concept, so do not associate it with the tcp
transport exclusively.
Signed-off-by: Sagi Grimberg <sagi@lightbitslabs.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Sagi Grimberg [Tue, 4 Dec 2018 01:52:12 +0000 (17:52 -0800)]
nvme-fabrics: allow user passing header digest
Header digest is an nvme-tcp specific feature, but nothing prevents other
transports from reusing the concept, so do not associate it with the tcp
transport exclusively.
Signed-off-by: Sagi Grimberg <sagi@lightbitslabs.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Sagi Grimberg [Tue, 4 Dec 2018 01:52:11 +0000 (17:52 -0800)]
nvmet: Add install_queue callout
nvmet-tcp will implement it to allocate queue commands which
are only known at nvmf connect time (sq size).
Signed-off-by: Sagi Grimberg <sagi@lightbitslabs.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Sagi Grimberg [Tue, 4 Dec 2018 01:52:10 +0000 (17:52 -0800)]
datagram: introduce skb_copy_and_hash_datagram_iter helper
Introduce a helper that copies a datagram into an iovec iterator while
also updating a predefined hash. This is useful for consumers of
skb_copy_datagram_iter that also want to support an inflight data digest
without having to finish the copy and only then traverse the iovec to
calculate the digest hash.
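A sketch of how a consumer would use the new helper (the wrapper name is
made up for illustration):
	static int copy_and_digest(struct sk_buff *skb, int offset, int len,
				   struct iov_iter *iter,
				   struct ahash_request *hash)
	{
		/*
		 * Copy len bytes from the skb into iter, folding the copied
		 * bytes into the running digest in hash as we go.
		 */
		return skb_copy_and_hash_datagram_iter(skb, offset, iter,
						       len, hash);
	}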
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sagi Grimberg <sagi@lightbitslabs.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Sagi Grimberg [Tue, 4 Dec 2018 01:52:09 +0000 (17:52 -0800)]
iov_iter: introduce hash_and_copy_to_iter helper
Allow consumers that want to use the iov iterator helpers to also update
a predefined hash calculation online when copying data. This is useful
when copying incoming network buffers to a local iterator and calculating
a digest on the incoming stream. The nvme-tcp host driver that will be
introduced in the following patches is the first consumer, via
skb_copy_and_hash_datagram_iter.
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sagi Grimberg <sagi@lightbitslabs.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Sagi Grimberg [Tue, 4 Dec 2018 01:52:08 +0000 (17:52 -0800)]
datagram: consolidate datagram copy to iter helpers
skb_copy_datagram_iter and skb_copy_and_csum_datagram are essentially
the same, but with a couple of differences: the first is the copy
operation used, which is either a simple copy or a csum_and_copy, and the
second is the behavior on the "short copy" path, where a simple copy
needs to return the number of bytes successfully copied while
csum_and_copy needs to fault immediately as the checksum is partial.
Introduce __skb_datagram_iter that additionally accepts:
1. copy operation function pointer
2. private data that goes with the copy operation
3. fault_short flag to indicate the action on short copy
Suggested-by: David S. Miller <davem@davemloft.net>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sagi Grimberg <sagi@lightbitslabs.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Sagi Grimberg [Tue, 4 Dec 2018 01:52:07 +0000 (17:52 -0800)]
iov_iter: pass void csum pointer to csum_and_copy_to_iter
The single caller to csum_and_copy_to_iter is skb_copy_and_csum_datagram
and we are trying to unite its logic with skb_copy_datagram_iter by passing
a callback to the copy function that we want to apply. Thus, we need
to make the checksum pointer private to the function.
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sagi Grimberg <sagi@lightbitslabs.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Sagi Grimberg [Tue, 4 Dec 2018 01:52:06 +0000 (17:52 -0800)]
datagram: open-code copy_page_to_iter
This will be useful to consolidate skb_copy_and_hash_datagram_iter and
skb_copy_and_csum_datagram to a single code path.
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sagi Grimberg <sagi@lightbitslabs.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Sagi Grimberg [Tue, 4 Dec 2018 01:52:05 +0000 (17:52 -0800)]
ath6kl: add ath6kl_ prefix to crypto_type
Prevent a namespace conflict, as in the following patches skbuff.h will
include the crypto API.
Acked-by: David S. Miller <davem@davemloft.net>
Cc: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Sagi Grimberg <sagi@lightbitslabs.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Dennis Zhou [Tue, 11 Dec 2018 23:03:08 +0000 (18:03 -0500)]
blkcg: handle dying request_queue when associating a blkg
Between v3 [1] and v4 [2] of the blkg association series, the
association point moved from generic_make_request_checks(), which is
called after the request enters the queue, to bio_set_dev(), which is when
the bio is formed before submit_bio(). When the request_queue goes away,
the blkgs supporting the request_queue are destroyed and then the
q->root_blkg is set to %NULL.
This patch adds a %NULL check to blkg_tryget_closest() to prevent the
NPE caused by the above. It also adds a guard to see if the
request_queue is dying when creating a blkg to prevent creating a blkg
for a dead request_queue.
[1] https://lore.kernel.org/lkml/
20180911184137.35897-1-dennisszhou@gmail.com/
[2] https://lore.kernel.org/lkml/
20181126211946.77067-1-dennis@kernel.org/
Fixes: 5cdf2e3fea5e ("blkcg: associate blkg when associating a device")
Reported-and-tested-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Dennis Zhou <dennis@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Wed, 12 Dec 2018 11:44:34 +0000 (19:44 +0800)]
block: deactivate blk_stat timer in wbt_disable_default()
rwb_enabled() can't be changed when there is any inflight IO.
wbt_disable_default() may set rwb->wb_normal to zero; however, the
blk_stat timer may still be pending, and the timer function will update
rwb->wb_normal again.
This patch introduces blk_stat_deactivate() and applies it in
wbt_disable_default(), then the following IO hang triggered when running
parted & switching io scheduler can be fixed:
[ 369.937806] INFO: task parted:3645 blocked for more than 120 seconds.
[ 369.938941]       Not tainted 4.20.0-rc6-00284-g906c801e5248 #498
[ 369.939797] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 369.940768] parted D 0 3645 3239 0x00000000
[ 369.941500] Call Trace:
[ 369.941874] ? __schedule+0x6d9/0x74c
[ 369.942392] ? wbt_done+0x5e/0x5e
[ 369.942864] ? wbt_cleanup_cb+0x16/0x16
[ 369.943404] ? wbt_done+0x5e/0x5e
[ 369.943874] schedule+0x67/0x78
[ 369.944298] io_schedule+0x12/0x33
[ 369.944771] rq_qos_wait+0xb5/0x119
[ 369.945193] ? karma_partition+0x1c2/0x1c2
[ 369.945691] ? wbt_cleanup_cb+0x16/0x16
[ 369.946151] wbt_wait+0x85/0xb6
[ 369.946540] __rq_qos_throttle+0x23/0x2f
[ 369.947014] blk_mq_make_request+0xe6/0x40a
[ 369.947518] generic_make_request+0x192/0x2fe
[ 369.948042] ? submit_bio+0x103/0x11f
[ 369.948486] ? __radix_tree_lookup+0x35/0xb5
[ 369.949011] submit_bio+0x103/0x11f
[ 369.949436] ? blkg_lookup_slowpath+0x25/0x44
[ 369.949962] submit_bio_wait+0x53/0x7f
[ 369.950469] blkdev_issue_flush+0x8a/0xae
[ 369.951032] blkdev_fsync+0x2f/0x3a
[ 369.951502] do_fsync+0x2e/0x47
[ 369.951887] __x64_sys_fsync+0x10/0x13
[ 369.952374] do_syscall_64+0x89/0x149
[ 369.952819] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 369.953492] RIP: 0033:0x7f95a1e729d4
[ 369.953996] Code: Bad RIP value.
[ 369.954456] RSP: 002b:00007ffdb570dd48 EFLAGS: 00000246 ORIG_RAX: 000000000000004a
[ 369.955506] RAX: ffffffffffffffda RBX: 000055c2139c6be0 RCX: 00007f95a1e729d4
[ 369.956389] RDX: 0000000000000001 RSI: 0000000000001261 RDI: 0000000000000004
[ 369.957325] RBP: 0000000000000002 R08: 0000000000000000 R09: 000055c2139c6ce0
[ 369.958199] R10: 0000000000000000 R11: 0000000000000246 R12: 000055c2139c0380
[ 369.959143] R13: 0000000000000004 R14: 0000000000000100 R15: 0000000000000008
Cc: stable@vger.kernel.org
Cc: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Wed, 12 Dec 2018 01:39:41 +0000 (18:39 -0700)]
sbitmap: flush deferred clears for resize and shallow gets
We're missing a deferred clear off the shallow get, which can cause
a hang. Additionally, when we resize the sbitmap, we should also
flush deferred clears for good measure.
Ensure we have full coverage on batch clears, even for paths where
we would not be doing deferred clear. This makes it less error
prone for future additions.
Reported-by: Bart Van Assche <bvanassche@acm.org>
Tested-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Igor Konopko [Tue, 11 Dec 2018 19:16:27 +0000 (20:16 +0100)]
lightnvm: pblk: do not overwrite ppa list with meta list
When using pblk with 0 sized metadata, both the ppa list and the meta
list point to the same memory, since pblk_dma_meta_size() returns 0 in
that case.
This patch fixes that issue by ensuring that pblk_dma_meta_size()
always returns space equal to sizeof(struct pblk_sec_meta), so that the
ppa list and meta list point to different memory addresses.
Even though in that case the drive does not really care about the
meta_list pointer, this is the easiest way to fix the issue without
introducing changes in many places in the code just for the 0 sized
metadata case.
The same approach also needs to be applied to pblk_get_sec_meta(),
since we also cannot point to the same memory address in the meta buffer
when we are using it for the pblk recovery process.
Reported-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Tested-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Igor Konopko [Tue, 11 Dec 2018 19:16:26 +0000 (20:16 +0100)]
lightnvm: pblk: support packed metadata
pblk performs recovery of open lines by storing the LBA in the per-LBA
metadata field. Recovery therefore only works for drives that have this
field.
This patch adds support for packed metadata, which stores the l2p
mapping for open lines in the last sector of every write unit and
enables drives without per-IO metadata to recover open lines.
After this patch, drives with OOB size < 16B will use packed metadata,
and drives with metadata size larger than 16B will continue to use the
per-IO device metadata.
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Igor Konopko [Tue, 11 Dec 2018 19:16:25 +0000 (20:16 +0100)]
lightnvm: disable interleaved metadata
Currently pblk only checks the size of the I/O metadata and does not
take into account whether this metadata is in a separate buffer or
interleaved in a single metadata buffer.
In reality only the first scenario is supported, and the second mode
will break pblk functionality during any IO operation.
This patch prevents pblk from being instantiated in case the device only
supports interleaved metadata.
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Igor Konopko [Tue, 11 Dec 2018 19:16:24 +0000 (20:16 +0100)]
lightnvm: dynamic DMA pool entry size
Currently lightnvm and pblk use a single DMA pool, whose entry size is
always equal to PAGE_SIZE. The contents of each entry allocated from the
DMA pool consist of a PPA list (8 bytes * 64), leaving 56 bytes * 64 of
space for metadata. Since the metadata field can be bigger, such as
128 bytes, the static size does not cover this use-case.
This patch adds support for I/O metadata above 56 bytes by changing the
DMA pool size based on the device meta size and allows pblk to use OOB
metadata >= 16B.
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Igor Konopko [Tue, 11 Dec 2018 19:16:23 +0000 (20:16 +0100)]
lightnvm: pblk: add helpers for OOB metadata
pblk currently assumes that the size of the OOB metadata on the drive is
always equal to the size of struct pblk_sec_meta. This commit adds
helpers that will allow handling different sizes of OOB metadata on the
drive in the future.
After this patch only OOB metadata equal to 16 bytes is supported.
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Igor Konopko [Tue, 11 Dec 2018 19:16:22 +0000 (20:16 +0100)]
lightnvm: pblk: move lba list to partial read context
Currently DMA allocated memory is reused on partial read
for the lba_list_mem and lba_list_media arrays. In preparation
for dynamic DMA pool sizes we need to move these arrays
into the pblk_pr_ctx structure.
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Igor Konopko <igor.j.konopko@intel.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Javier González [Tue, 11 Dec 2018 19:16:21 +0000 (20:16 +0100)]
lightnvm: pblk: avoid ref warning on cache creation
The current kref implementation around pblk global caches triggers a
false positive on refcount_inc_checked() (when called), as the kref is
initialized to 0. Instead of incrementing a 0 reference, which is in
principle correct, use kref_init() to avoid the check. This is also
more explicit about what actually happens on cache creation.
In the process, do a small refactoring to use kref helpers.
Fixes: 1864de94ec9d6 "lightnvm: pblk: stop recreating global caches"
Signed-off-by: Javier González <javier@cnexlabs.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Matias Bjørling [Tue, 11 Dec 2018 19:16:20 +0000 (20:16 +0100)]
lightnvm: simplify geometry enumeration
Currently the geometry of an OCSSD is enumerated using a two step
approach:
First, nvm_register is called and the OCSSD identify command is issued;
second, the geometry sos and csecs values are read, either from the
OCSSD identify data if it is a 1.2 drive, or from the NVMe namespace
data structure if it is a 2.0 device.
This patch combines these into a single step, such that nvm_register can
use the csecs and sos fields independent of which version is used. This
enables one to dynamically size the lightnvm subsystem dma pool.
Reviewed-by: Igor Konopko <igor.j.konopko@intel.com>
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Javier González [Tue, 11 Dec 2018 19:16:19 +0000 (20:16 +0100)]
lightnvm: pblk: add comments wrt locking in recovery path
pblk's recovery path is single threaded and therefore a number of
assumptions regarding concurrency can be made. To avoid confusion, make
this explicit with a couple of comments in the code.
Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Hua Su [Tue, 11 Dec 2018 19:16:18 +0000 (20:16 +0100)]
lightnvm: pblk: add lock protection to list operations
Protect the list_add on the pblk_line_init_bb() error
path in case this code is used for some other purpose
in the future.
Signed-off-by: Hua Su <suhua.tanke@gmail.com>
Reviewed-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Hua Su [Tue, 11 Dec 2018 19:16:17 +0000 (20:16 +0100)]
lightnvm: pblk: fix spelling in comment
Signed-off-by: Hua Su <suhua.tanke@gmail.com>
Updated description.
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Hans Holmberg [Tue, 11 Dec 2018 19:16:16 +0000 (20:16 +0100)]
lightnvm: pblk: remove dead code in pblk_recov_l2p
Remove the call to pblk_line_replace_data as it returns
directly because we have not set l_mg->data_next yet.
Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Hans Holmberg [Tue, 11 Dec 2018 19:16:15 +0000 (20:16 +0100)]
lightnvm: pblk: fix pblk_lines_init error handling path
The chunk metadata is allocated with vmalloc, so we need to use
vfree to free it.
Fixes: 090ee26fd512 ("lightnvm: use internal allocation for chunk log page")
Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Hans Holmberg [Tue, 11 Dec 2018 19:16:14 +0000 (20:16 +0100)]
lightnvm: pblk: remove unused macro
ADDR_POOL_SIZE is not used anymore, so remove the macro.
Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Hans Holmberg [Tue, 11 Dec 2018 19:16:13 +0000 (20:16 +0100)]
lightnvm: pblk: set conservative threshold for user writes
In a worst-case scenario (random writes), OP% of sectors
in each line will be invalid, and we will then need
to move data out of 100/OP% lines to free a single line.
So, to prevent the possibility of running out of lines,
temporarily block user writes when there is less than
100/OP% free lines.
Also ensure that pblk creation does not produce instances
with insufficient over provisioning.
Insufficient over-provisioning is not a problem on real hardware,
but often an issue when running QEMU simulations (with few lines).
100 lines is enough to create a sane instance with the standard
(11%) over provisioning.
Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Hans Holmberg [Tue, 11 Dec 2018 19:16:12 +0000 (20:16 +0100)]
lightnvm: pblk: stop writes gracefully when running out of lines
If mapping fails (i.e. when running out of lines), handle the error
and stop writing.
Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Hans Holmberg [Tue, 11 Dec 2018 19:16:11 +0000 (20:16 +0100)]
lightnvm: pblk: account for write error sectors in emeta
Lines inflicted with write errors might be recovered
if they have not been recycled after write error garbage collection.
Ensure that the emeta accounting of valid lbas is correct
for such lines to avoid recovery inconsistencies.
Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Hans Holmberg [Tue, 11 Dec 2018 19:16:10 +0000 (20:16 +0100)]
lightnvm: pblk: fix resubmission of overwritten write err lbas
Make sure we only look up valid lba addresses on the resubmission path.
If an lba is invalidated in the write buffer, that sector will be
submitted to disk (as it is already mapped to a ppa), and that write
might fail, resulting in a crash when trying to look up the lba in the
mapping table (as the lba is marked as invalid).
Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Reviewed-by: Javier González <javier@javigon.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Hans Holmberg [Tue, 11 Dec 2018 19:16:09 +0000 (20:16 +0100)]
lightnvm: pblk: fix chunk close trace event check
The check for chunk closes suffers from an off-by-one issue, leading
to chunk close events not being traced.
Fixes: 4c44abf43d00 ("lightnvm: pblk: add trace events for chunk states")
Signed-off-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Geert Uytterhoeven [Tue, 11 Dec 2018 19:16:08 +0000 (20:16 +0100)]
lightnvm: Fix uninitialized return value in nvm_get_chunk_meta()
With gcc 4.1:
drivers/lightnvm/core.c: In function ‘nvm_get_bb_meta’:
drivers/lightnvm/core.c:977: warning: ‘ret’ may be used uninitialized in this function
and
drivers/nvme/host/lightnvm.c: In function ‘nvme_nvm_get_chk_meta’:
drivers/nvme/host/lightnvm.c:580: warning: ‘ret’ may be used uninitialized in this function
Indeed, if (for the former) the number of channels or LUNs is zero, or
(for both) the passed number of chunks is zero, ret will be returned
uninitialized.
Fix this by preinitializing ret to zero.
Fixes: aff3fb18f957de93 ("lightnvm: move bad block and chunk state logic to core")
Fixes: a294c199455187d1 ("lightnvm: implement get log report chunk helpers")
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Zhoujie Wu [Tue, 11 Dec 2018 19:16:07 +0000 (20:16 +0100)]
lightnvm: pblk: ignore the smeta oob area scan
The smeta area l2p mapping is empty, and the recovery procedure only
needs to restore the data sectors' l2p mapping. So ignore the smeta OOB
scan.
Signed-off-by: Zhoujie Wu <zjwu@marvell.com>
Reviewed-by: Javier González <javier@javigon.com>
Reviewed-by: Hans Holmberg <hans.holmberg@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Mike Snitzer [Tue, 11 Dec 2018 14:10:26 +0000 (09:10 -0500)]
dm: fix request-based dm's use of dm_wait_for_completion
The md->wait waitqueue is used by both bio-based and request-based DM.
Commit
dbd3bbd291 ("dm rq: leverage blk_mq_queue_busy() to check for
outstanding IO") lost sight of the requirement that
dm_wait_for_completion() must work with all types of DM devices.
Fix md_in_flight() to call the blk-mq or bio-based method accordingly.
Fixes: dbd3bbd291 ("dm rq: leverage blk_mq_queue_busy() to check for outstanding IO")
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Sun, 9 Dec 2018 18:21:45 +0000 (11:21 -0700)]
nvme: fix irq vs io_queue calculations
Guenter reported a boot hang issue on HPPA after we defaulted to 0 poll
queues. We have two issues in the queue count calculations:
1) We don't separate the poll queues from the read/write queues. This is
important, since the former doesn't need interrupts.
2) The adjust logic is broken.
Adjust the poll queue count before doing nvme_calc_io_queues(). The poll
queue count is only limited by the IO queue count we were able to get
from the controller, not failures in the IRQ allocation loop. This
leaves nvme_calc_io_queues() just adjusting the read/write queue map.
Reported-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Mon, 10 Dec 2018 22:45:53 +0000 (15:45 -0700)]
dm: fix inflight IO check
After switching to percpu inflight counters, the inflight check
is totally buggy. It's perfectly valid for some counters to be
non-zero while having a total inflight IO count of 0, that's how
these kinds of counters work (inc on one CPU, dec on another).
Fix the md_in_flight() check to sum all counters before potentially
returning a false positive.
While at it, remove the inflight read for IO completion. We don't
need it, just wake anyone that's waiting for the IO count to drop
to zero. The caller needs to re-check that value anyway when woken,
which it does.
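A sketch of the fixed check (the per-cpu accessor name is hypothetical;
the point is summing all per-cpu counters before deciding):
	static bool md_in_flight_sketch(struct mapped_device *md)
	{
		long sum = 0;
		int cpu;

		/*
		 * A single CPU's counter may be negative or positive on its
		 * own (inc on one CPU, dec on another); only the sum across
		 * all CPUs is meaningful.
		 */
		for_each_possible_cpu(cpu)
			sum += md_in_flight_cpu(md, cpu); /* hypothetical per-cpu read */

		return sum != 0;
	}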
Fixes: 6f75723190d8 ("dm: remove the pending IO accounting")
Acked-by: Mike Snitzer <snitzer@redhat.com>
Reported-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Mon, 10 Dec 2018 21:45:19 +0000 (14:45 -0700)]
mtip32xx: use BLK_STS_DEV_RESOURCE for device resources
For cases where we can only fail with IO in-flight, we should be using
BLK_STS_DEV_RESOURCE instead of BLK_STS_RESOURCE. The latter refers to
system wide resource constraints.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Arnd Bergmann [Mon, 10 Dec 2018 21:34:39 +0000 (22:34 +0100)]
mtip32xx: avoid using semaphores
The "cmd_slot_unal" semaphore is never used in a blocking way
but only as an atomic counter. Change the code to using
atomic_dec_if_positive() as a better API.
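A sketch of the conversion (slot counter and error code shown for
illustration only):
	/* was: down()/up() on a semaphore used purely as a counter */
	atomic_t cmd_slot_unal;

	/* take one unaligned-command slot without ever sleeping */
	if (atomic_dec_if_positive(&cmd_slot_unal) < 0)
		return BLK_STS_DEV_RESOURCE;	/* no slot free, back off */

	/* ... issue the command ... */

	atomic_inc(&cmd_slot_unal);		/* release the slot */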
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Mikulas Patocka [Thu, 6 Dec 2018 16:41:22 +0000 (11:41 -0500)]
dm: remove the pending IO accounting
Remove the "pending" atomic counters, that duplicate block-core's
in_flight counters, and update md_in_flight() to look at percpu
in_flight counters.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Mikulas Patocka [Thu, 6 Dec 2018 16:41:21 +0000 (11:41 -0500)]
block: return just one value from part_in_flight
The previous patches deleted all the code that needed the second value
returned from part_in_flight - now the kernel only uses the first value.
Consequently, part_in_flight (and blk_mq_in_flight) may be changed so that
it only returns one value.
This patch just refactors the code, there's no functional change.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Mikulas Patocka [Thu, 6 Dec 2018 16:41:20 +0000 (11:41 -0500)]
block: switch to per-cpu in-flight counters
Now that part_round_stats is gone, we can switch to per-cpu in-flight
counters.
We use the local-atomic type local_t, so that if part_inc_in_flight or
part_dec_in_flight is reentrantly called from an interrupt, the value will
be correct.
The other counters could be corrupted due to reentrant interrupt, but the
corruption only results in slight counter skew - the in_flight counter
must be exact, so it needs local_t.
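A minimal sketch of what the per-cpu in_flight accounting looks like
with local_t (structure and helper names are illustrative):
	#include <asm/local.h>

	struct disk_stats_sketch {
		local_t in_flight[2];	/* [READ], [WRITE] */
	};

	/*
	 * local_inc()/local_dec() stay correct even if the update is
	 * re-entered from an interrupt on the same CPU, which is exactly
	 * the guarantee the in_flight counter needs.
	 */
	static inline void part_inc_in_flight_sketch(
			struct disk_stats_sketch __percpu *stats, int rw)
	{
		local_inc(&this_cpu_ptr(stats)->in_flight[rw]);
	}

	static inline void part_dec_in_flight_sketch(
			struct disk_stats_sketch __percpu *stats, int rw)
	{
		local_dec(&this_cpu_ptr(stats)->in_flight[rw]);
	}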
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Mikulas Patocka [Thu, 6 Dec 2018 16:41:19 +0000 (11:41 -0500)]
block: delete part_round_stats and switch to less precise counting
We want to convert to per-cpu in_flight counters.
The function part_round_stats needs the in_flight counter every jiffy;
it would be too costly to sum all the percpu variables every jiffy, so
it must be deleted. part_round_stats is used to calculate two counters -
time_in_queue and io_ticks.
time_in_queue can be calculated without part_round_stats, by adding the
duration of the I/O when the I/O ends (the value is almost as exact as the
previously calculated value, except that time for in-progress I/Os is not
counted).
io_ticks can be approximated by increasing the value when I/O is started
or ended and the jiffies value has changed. If the I/Os take less than a
jiffy, the value is as exact as the previously calculated value. If the
I/Os take more than a jiffy, io_ticks can drift behind the previously
calculated value.
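A sketch of the io_ticks approximation (simplified, single-updater
version; the real code would have to cope with concurrent updaters,
e.g. via cmpxchg on the stamp):
	/* called whenever an I/O starts or ends */
	static void update_io_ticks_sketch(unsigned long *stamp,
					   unsigned long *io_ticks)
	{
		unsigned long now = jiffies;

		/* account at most one tick per jiffy, regardless of how
		 * many I/Os start or complete within it */
		if (*stamp != now) {
			*io_ticks += 1;
			*stamp = now;
		}
	}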
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Mike Snitzer [Thu, 6 Dec 2018 16:41:18 +0000 (11:41 -0500)]
block: stop passing 'cpu' to all percpu stats methods
All of the part_stat_* and related methods are used with preemption
disabled, so there is no need to pass cpu around to all of them. Just call
smp_processor_id() as needed.
Suggested-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Mike Snitzer [Thu, 6 Dec 2018 16:41:17 +0000 (11:41 -0500)]
dm rq: leverage blk_mq_queue_busy() to check for outstanding IO
Now that request-based dm-multipath only supports blk-mq, make use of
the newly introduced blk_mq_queue_busy() to check for outstanding IO --
rather than (ab)using the block core's in_flight counters.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Mikulas Patocka [Thu, 6 Dec 2018 16:41:16 +0000 (11:41 -0500)]
dm: dont rewrite dm_disk(md)->part0.in_flight
generic_start_io_acct and generic_end_io_acct already update the
in_flight variable using atomic operations, so we don't have to
overwrite it again.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Mon, 10 Dec 2018 00:45:40 +0000 (17:45 -0700)]
Merge tag 'v4.20-rc6' into for-4.21/block
Pull in v4.20-rc6 to resolve the conflict in NVMe, but also to get the
two corruption fixes. We're going to be overhauling the direct dispatch
path, and we need to do that on top of the changes we made for that
in mainline.
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Mon, 10 Dec 2018 00:43:20 +0000 (17:43 -0700)]
sbitmap: silence bogus lockdep IRQ warning
Ming reports that lockdep spews the following trace. What this
essentially says is that the sbitmap swap_lock was used inconsistently
in IRQ enabled and disabled context, and that is usually indicative of a
bug that will cause a deadlock.
For this case, it's a false positive. The swap_lock is used from process
context only, when we swap the bits in the word and cleared mask. We
also end up doing that when we are getting a driver tag, from the
blk_mq_mark_tag_wait(), and from there we hold the waitqueue lock with
IRQs disabled. However, this isn't from an actual IRQ, it's still
process context.
In lieu of a better way to fix this, simply always disable interrupts
when grabbing the swap_lock if lockdep is enabled.
[ 100.967642] ================start test sanity/001================
[ 101.238280] null: module loaded
[ 106.093735]
[ 106.094012] =====================================================
[ 106.094854] WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected
[ 106.095759] 4.20.0-rc3_5d2ee7122c73_for-next+ #1 Not tainted
[ 106.096551] -----------------------------------------------------
[ 106.097386] fio/1043 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
[ 106.098231] 000000004c43fa71 (&(&sb->map[i].swap_lock)->rlock){+.+.}, at: sbitmap_get+0xd5/0x22c
[ 106.099431]
[ 106.099431] and this task is already holding:
[ 106.100229] 000000007eec8b2f (&(&hctx->dispatch_wait_lock)->rlock){....}, at: blk_mq_dispatch_rq_list+0x4c1/0xd7c
[ 106.101630] which would create a new lock dependency:
[ 106.102326] (&(&hctx->dispatch_wait_lock)->rlock){....} -> (&(&sb->map[i].swap_lock)->rlock){+.+.}
[ 106.103553]
[ 106.103553] but this new dependency connects a SOFTIRQ-irq-safe lock:
[ 106.104580] (&sbq->ws[i].wait){..-.}
[ 106.104582]
[ 106.104582] ... which became SOFTIRQ-irq-safe at:
[ 106.105751] _raw_spin_lock_irqsave+0x4b/0x82
[ 106.106284] __wake_up_common_lock+0x119/0x1b9
[ 106.106825] sbitmap_queue_wake_up+0x33f/0x383
[ 106.107456] sbitmap_queue_clear+0x4c/0x9a
[ 106.108046] __blk_mq_free_request+0x188/0x1d3
[ 106.108581] blk_mq_free_request+0x23b/0x26b
[ 106.109102] scsi_end_request+0x345/0x5d7
[ 106.109587] scsi_io_completion+0x4b5/0x8f0
[ 106.110099] scsi_finish_command+0x412/0x456
[ 106.110615] scsi_softirq_done+0x23f/0x29b
[ 106.111115] blk_done_softirq+0x2a7/0x2e6
[ 106.111608] __do_softirq+0x360/0x6ad
[ 106.112062] run_ksoftirqd+0x2f/0x5b
[ 106.112499] smpboot_thread_fn+0x3a5/0x3db
[ 106.113000] kthread+0x1d4/0x1e4
[ 106.113457] ret_from_fork+0x3a/0x50
[ 106.113969]
[ 106.113969] to a SOFTIRQ-irq-unsafe lock:
[ 106.114672] (&(&sb->map[i].swap_lock)->rlock){+.+.}
[ 106.114674]
[ 106.114674] ... which became SOFTIRQ-irq-unsafe at:
[ 106.116000] ...
[ 106.116003] _raw_spin_lock+0x33/0x64
[ 106.116676] sbitmap_get+0xd5/0x22c
[ 106.117134] __sbitmap_queue_get+0xe8/0x177
[ 106.117731] __blk_mq_get_tag+0x1e6/0x22d
[ 106.118286] blk_mq_get_tag+0x1db/0x6e4
[ 106.118756] blk_mq_get_driver_tag+0x161/0x258
[ 106.119383] blk_mq_dispatch_rq_list+0x28e/0xd7c
[ 106.120043] blk_mq_do_dispatch_sched+0x23a/0x287
[ 106.120607] blk_mq_sched_dispatch_requests+0x379/0x3fc
[ 106.121234] __blk_mq_run_hw_queue+0x137/0x17e
[ 106.121781] __blk_mq_delay_run_hw_queue+0x80/0x25f
[ 106.122366] blk_mq_run_hw_queue+0x151/0x187
[ 106.122887] blk_mq_sched_insert_requests+0x13f/0x175
[ 106.123492] blk_mq_flush_plug_list+0x7d6/0x81b
[ 106.124042] blk_flush_plug_list+0x392/0x3d7
[ 106.124557] blk_finish_plug+0x37/0x4f
[ 106.125019] read_pages+0x3ef/0x430
[ 106.125446] __do_page_cache_readahead+0x18e/0x2fc
[ 106.126027] force_page_cache_readahead+0x121/0x133
[ 106.126621] page_cache_sync_readahead+0x35f/0x3bb
[ 106.127229] generic_file_buffered_read+0x410/0x1860
[ 106.127932] __vfs_read+0x319/0x38f
[ 106.128415] vfs_read+0xd2/0x19a
[ 106.128817] ksys_read+0xb9/0x135
[ 106.129225] do_syscall_64+0x140/0x385
[ 106.129684] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 106.130292]
[ 106.130292] other info that might help us debug this:
[ 106.130292]
[ 106.131226] Chain exists of:
[ 106.131226]   &sbq->ws[i].wait --> &(&hctx->dispatch_wait_lock)->rlock --> &(&sb->map[i].swap_lock)->rlock
[ 106.131226]
[ 106.132865] Possible interrupt unsafe locking scenario:
[ 106.132865]
[ 106.133659] CPU0 CPU1
[ 106.134194] ---- ----
[ 106.134733] lock(&(&sb->map[i].swap_lock)->rlock);
[ 106.135318] local_irq_disable();
[ 106.136014] lock(&sbq->ws[i].wait);
[ 106.136747] lock(&(&hctx->dispatch_wait_lock)->rlock);
[ 106.137742] <Interrupt>
[ 106.138110] lock(&sbq->ws[i].wait);
[ 106.138625]
[ 106.138625] *** DEADLOCK ***
[ 106.138625]
[ 106.139430] 3 locks held by fio/1043:
[ 106.139947] #0: 0000000076ff0fd9 (rcu_read_lock){....}, at: hctx_lock+0x29/0xe8
[ 106.140813] #1: 000000002feb1016 (&sbq->ws[i].wait){..-.}, at: blk_mq_dispatch_rq_list+0x4ad/0xd7c
[ 106.141877] #2: 000000007eec8b2f (&(&hctx->dispatch_wait_lock)->rlock){....}, at: blk_mq_dispatch_rq_list+0x4c1/0xd7c
[ 106.143267]
[ 106.143267] the dependencies between SOFTIRQ-irq-safe lock and the holding lock:
[ 106.144351] -> (&sbq->ws[i].wait){..-.} ops: 82 {
[ 106.144926] IN-SOFTIRQ-W at:
[ 106.145314] _raw_spin_lock_irqsave+0x4b/0x82
[ 106.146042] __wake_up_common_lock+0x119/0x1b9
[ 106.146785] sbitmap_queue_wake_up+0x33f/0x383
[ 106.147567] sbitmap_queue_clear+0x4c/0x9a
[ 106.148379] __blk_mq_free_request+0x188/0x1d3
[ 106.149148] blk_mq_free_request+0x23b/0x26b
[ 106.149864] scsi_end_request+0x345/0x5d7
[ 106.150546] scsi_io_completion+0x4b5/0x8f0
[ 106.151367] scsi_finish_command+0x412/0x456
[ 106.152157] scsi_softirq_done+0x23f/0x29b
[ 106.152855] blk_done_softirq+0x2a7/0x2e6
[ 106.153537] __do_softirq+0x360/0x6ad
[ 106.154280] run_ksoftirqd+0x2f/0x5b
[ 106.155020] smpboot_thread_fn+0x3a5/0x3db
[ 106.155828] kthread+0x1d4/0x1e4
[ 106.156526] ret_from_fork+0x3a/0x50
[ 106.157267] INITIAL USE at:
[ 106.157713] _raw_spin_lock_irqsave+0x4b/0x82
[ 106.158542] prepare_to_wait_exclusive+0xa8/0x215
[ 106.159421] blk_mq_get_tag+0x34f/0x6e4
[ 106.160186] blk_mq_get_request+0x48e/0xaef
[ 106.160997] blk_mq_make_request+0x27e/0xbd2
[ 106.161828] generic_make_request+0x4d1/0x873
[ 106.162661] submit_bio+0x20c/0x253
[ 106.163379] mpage_bio_submit+0x44/0x4b
[ 106.164142] mpage_readpages+0x3c2/0x407
[ 106.164919] read_pages+0x13a/0x430
[ 106.165633] __do_page_cache_readahead+0x18e/0x2fc
[ 106.166530] force_page_cache_readahead+0x121/0x133
[ 106.167439] page_cache_sync_readahead+0x35f/0x3bb
[ 106.168337] generic_file_buffered_read+0x410/0x1860
[ 106.169255] __vfs_read+0x319/0x38f
[ 106.169977] vfs_read+0xd2/0x19a
[ 106.170662] ksys_read+0xb9/0x135
[ 106.171356] do_syscall_64+0x140/0x385
[ 106.172120] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 106.173051] }
[ 106.173308] ... key at: [<ffffffff85094600>] __key.26481+0x0/0x40
[ 106.174219] ... acquired at:
[ 106.174646] _raw_spin_lock+0x33/0x64
[ 106.175183] blk_mq_dispatch_rq_list+0x4c1/0xd7c
[ 106.175843] blk_mq_do_dispatch_sched+0x23a/0x287
[ 106.176518] blk_mq_sched_dispatch_requests+0x379/0x3fc
[ 106.177262] __blk_mq_run_hw_queue+0x137/0x17e
[ 106.177900] __blk_mq_delay_run_hw_queue+0x80/0x25f
[ 106.178591] blk_mq_run_hw_queue+0x151/0x187
[ 106.179207] blk_mq_sched_insert_requests+0x13f/0x175
[ 106.179926] blk_mq_flush_plug_list+0x7d6/0x81b
[ 106.180571] blk_flush_plug_list+0x392/0x3d7
[ 106.181187] blk_finish_plug+0x37/0x4f
[ 106.181737] __se_sys_io_submit+0x171/0x304
[ 106.182346] do_syscall_64+0x140/0x385
[ 106.182895] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 106.183607]
[ 106.183830] -> (&(&hctx->dispatch_wait_lock)->rlock){....} ops: 1 {
[ 106.184691] INITIAL USE at:
[ 106.185119] _raw_spin_lock+0x33/0x64
[ 106.185838] blk_mq_dispatch_rq_list+0x4c1/0xd7c
[ 106.186697] blk_mq_do_dispatch_sched+0x23a/0x287
[ 106.187551] blk_mq_sched_dispatch_requests+0x379/0x3fc
[ 106.188481] __blk_mq_run_hw_queue+0x137/0x17e
[ 106.189307] __blk_mq_delay_run_hw_queue+0x80/0x25f
[ 106.190189] blk_mq_run_hw_queue+0x151/0x187
[ 106.190989] blk_mq_sched_insert_requests+0x13f/0x175
[ 106.191902] blk_mq_flush_plug_list+0x7d6/0x81b
[ 106.192739] blk_flush_plug_list+0x392/0x3d7
[ 106.193535] blk_finish_plug+0x37/0x4f
[ 106.194269] __se_sys_io_submit+0x171/0x304
[ 106.195059] do_syscall_64+0x140/0x385
[ 106.195794] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 106.196705] }
[ 106.196950] ... key at: [<ffffffff84880620>] __key.51231+0x0/0x40
[ 106.197853] ... acquired at:
[ 106.198270] lock_acquire+0x280/0x2f3
[ 106.198806] _raw_spin_lock+0x33/0x64
[ 106.199337] sbitmap_get+0xd5/0x22c
[ 106.199850] __sbitmap_queue_get+0xe8/0x177
[ 106.200450] __blk_mq_get_tag+0x1e6/0x22d
[ 106.201035] blk_mq_get_tag+0x1db/0x6e4
[ 106.201589] blk_mq_get_driver_tag+0x161/0x258
[ 106.202237] blk_mq_dispatch_rq_list+0x5b9/0xd7c
[ 106.202902] blk_mq_do_dispatch_sched+0x23a/0x287
[ 106.203572] blk_mq_sched_dispatch_requests+0x379/0x3fc
[ 106.204316] __blk_mq_run_hw_queue+0x137/0x17e
[ 106.204956] __blk_mq_delay_run_hw_queue+0x80/0x25f
[ 106.205649] blk_mq_run_hw_queue+0x151/0x187
[ 106.206269] blk_mq_sched_insert_requests+0x13f/0x175
[ 106.206997] blk_mq_flush_plug_list+0x7d6/0x81b
[ 106.207644] blk_flush_plug_list+0x392/0x3d7
[ 106.208264] blk_finish_plug+0x37/0x4f
[ 106.208814] __se_sys_io_submit+0x171/0x304
[ 106.209415] do_syscall_64+0x140/0x385
[ 106.209965] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 106.210684]
[ 106.210904]
[ 106.210904] the dependencies between the lock to be acquired
[ 106.210905] and SOFTIRQ-irq-unsafe lock:
[ 106.212541] -> (&(&sb->map[i].swap_lock)->rlock){+.+.} ops: 1969 {
[ 106.213393] HARDIRQ-ON-W at:
[ 106.213840] _raw_spin_lock+0x33/0x64
[ 106.214570] sbitmap_get+0xd5/0x22c
[ 106.215282] __sbitmap_queue_get+0xe8/0x177
[ 106.216086] __blk_mq_get_tag+0x1e6/0x22d
[ 106.216876] blk_mq_get_tag+0x1db/0x6e4
[ 106.217627] blk_mq_get_driver_tag+0x161/0x258
[ 106.218465] blk_mq_dispatch_rq_list+0x28e/0xd7c
[ 106.219326] blk_mq_do_dispatch_sched+0x23a/0x287
[ 106.220198] blk_mq_sched_dispatch_requests+0x379/0x3fc
[ 106.221138] __blk_mq_run_hw_queue+0x137/0x17e
[ 106.221975] __blk_mq_delay_run_hw_queue+0x80/0x25f
[ 106.222874] blk_mq_run_hw_queue+0x151/0x187
[ 106.223686] blk_mq_sched_insert_requests+0x13f/0x175
[ 106.224597] blk_mq_flush_plug_list+0x7d6/0x81b
[ 106.225444] blk_flush_plug_list+0x392/0x3d7
[ 106.226255] blk_finish_plug+0x37/0x4f
[ 106.227006] read_pages+0x3ef/0x430
[ 106.227717] __do_page_cache_readahead+0x18e/0x2fc
[ 106.228595] force_page_cache_readahead+0x121/0x133
[ 106.229491] page_cache_sync_readahead+0x35f/0x3bb
[ 106.230373] generic_file_buffered_read+0x410/0x1860
[ 106.231277] __vfs_read+0x319/0x38f
[ 106.231986] vfs_read+0xd2/0x19a
[ 106.232666] ksys_read+0xb9/0x135
[ 106.233350] do_syscall_64+0x140/0x385
[ 106.234097] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 106.235012] SOFTIRQ-ON-W at:
[ 106.235460] _raw_spin_lock+0x33/0x64
[ 106.236195] sbitmap_get+0xd5/0x22c
[ 106.236913] __sbitmap_queue_get+0xe8/0x177
[ 106.237715] __blk_mq_get_tag+0x1e6/0x22d
[ 106.238488] blk_mq_get_tag+0x1db/0x6e4
[ 106.239244] blk_mq_get_driver_tag+0x161/0x258
[ 106.240079] blk_mq_dispatch_rq_list+0x28e/0xd7c
[ 106.240937] blk_mq_do_dispatch_sched+0x23a/0x287
[ 106.241806] blk_mq_sched_dispatch_requests+0x379/0x3fc
[ 106.242751] __blk_mq_run_hw_queue+0x137/0x17e
[ 106.243579] __blk_mq_delay_run_hw_queue+0x80/0x25f
[ 106.244469] blk_mq_run_hw_queue+0x151/0x187
[ 106.245277] blk_mq_sched_insert_requests+0x13f/0x175
[ 106.246191] blk_mq_flush_plug_list+0x7d6/0x81b
[ 106.247044] blk_flush_plug_list+0x392/0x3d7
[ 106.247859] blk_finish_plug+0x37/0x4f
[ 106.248749] read_pages+0x3ef/0x430
[ 106.249463] __do_page_cache_readahead+0x18e/0x2fc
[ 106.250357] force_page_cache_readahead+0x121/0x133
[ 106.251263] page_cache_sync_readahead+0x35f/0x3bb
[ 106.252157] generic_file_buffered_read+0x410/0x1860
[ 106.253084] __vfs_read+0x319/0x38f
[ 106.253808] vfs_read+0xd2/0x19a
[ 106.254488] ksys_read+0xb9/0x135
[ 106.255186] do_syscall_64+0x140/0x385
[ 106.255943] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 106.256867] INITIAL USE at:
[ 106.257300] _raw_spin_lock+0x33/0x64
[ 106.258033] sbitmap_get+0xd5/0x22c
[ 106.258747] __sbitmap_queue_get+0xe8/0x177
[ 106.259542] __blk_mq_get_tag+0x1e6/0x22d
[ 106.260320] blk_mq_get_tag+0x1db/0x6e4
[ 106.261072] blk_mq_get_driver_tag+0x161/0x258
[ 106.261902] blk_mq_dispatch_rq_list+0x28e/0xd7c
[ 106.262762] blk_mq_do_dispatch_sched+0x23a/0x287
[ 106.263626] blk_mq_sched_dispatch_requests+0x379/0x3fc
[ 106.264571] __blk_mq_run_hw_queue+0x137/0x17e
[ 106.265409] __blk_mq_delay_run_hw_queue+0x80/0x25f
[ 106.266302] blk_mq_run_hw_queue+0x151/0x187
[ 106.267111] blk_mq_sched_insert_requests+0x13f/0x175
[ 106.268028] blk_mq_flush_plug_list+0x7d6/0x81b
[ 106.268878] blk_flush_plug_list+0x392/0x3d7
[ 106.269694] blk_finish_plug+0x37/0x4f
[ 106.270432] read_pages+0x3ef/0x430
[ 106.271139] __do_page_cache_readahead+0x18e/0x2fc
[ 106.272040] force_page_cache_readahead+0x121/0x133
[ 106.272932] page_cache_sync_readahead+0x35f/0x3bb
[ 106.273811] generic_file_buffered_read+0x410/0x1860
[ 106.274709] __vfs_read+0x319/0x38f
[ 106.275407] vfs_read+0xd2/0x19a
[ 106.276074] ksys_read+0xb9/0x135
[ 106.276764] do_syscall_64+0x140/0x385
[ 106.277500] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 106.278417] }
[ 106.278676] ... key at: [<ffffffff85094640>] __key.26212+0x0/0x40
[ 106.279586] ... acquired at:
[ 106.280026] lock_acquire+0x280/0x2f3
[ 106.280559] _raw_spin_lock+0x33/0x64
[ 106.281101] sbitmap_get+0xd5/0x22c
[ 106.281610] __sbitmap_queue_get+0xe8/0x177
[ 106.282221] __blk_mq_get_tag+0x1e6/0x22d
[ 106.282809] blk_mq_get_tag+0x1db/0x6e4
[ 106.283368] blk_mq_get_driver_tag+0x161/0x258
[ 106.284018] blk_mq_dispatch_rq_list+0x5b9/0xd7c
[ 106.284685] blk_mq_do_dispatch_sched+0x23a/0x287
[ 106.285371] blk_mq_sched_dispatch_requests+0x379/0x3fc
[ 106.286135] __blk_mq_run_hw_queue+0x137/0x17e
[ 106.286806] __blk_mq_delay_run_hw_queue+0x80/0x25f
[ 106.287515] blk_mq_run_hw_queue+0x151/0x187
[ 106.288149] blk_mq_sched_insert_requests+0x13f/0x175
[ 106.289041] blk_mq_flush_plug_list+0x7d6/0x81b
[ 106.289912] blk_flush_plug_list+0x392/0x3d7
[ 106.290590] blk_finish_plug+0x37/0x4f
[ 106.291238] __se_sys_io_submit+0x171/0x304
[ 106.291864] do_syscall_64+0x140/0x385
[ 106.292534] entry_SYSCALL_64_after_hwframe+0x49/0xbe
Reported-by: Ming Lei <ming.lei@redhat.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Linus Torvalds [Sun, 9 Dec 2018 23:31:00 +0000 (15:31 -0800)]
Linux 4.20-rc6
Linus Torvalds [Sun, 9 Dec 2018 23:12:33 +0000 (15:12 -0800)]
Merge git://git./linux/kernel/git/davem/net
Pull networking fixes from David Miller:
"A decent batch of fixes here. I'd say about half are for problems that
have existed for a while, and half are for new regressions added in
the 4.20 merge window.
1) Fix 10G SFP phy module detection in mvpp2, from Baruch Siach.
2) Revert bogus emac driver change, from Benjamin Herrenschmidt.
3) Handle BPF exported data structure with pointers when building
32-bit userland, from Daniel Borkmann.
4) Memory leak fix in act_police, from Davide Caratti.
5) Check RX checksum offload in RX descriptors properly in aquantia
driver, from Dmitry Bogdanov.
6) SKB unlink fix in various spots, from Edward Cree.
7) ndo_dflt_fdb_dump() only works with ethernet, enforce this, from
Eric Dumazet.
8) Fix FID leak in mlxsw driver, from Ido Schimmel.
9) IOTLB locking fix in vhost, from Jean-Philippe Brucker.
10) Fix SKB truesize accounting in ipv4/ipv6/netfilter frag memory
limits, otherwise namespace exit can hang. From Jiri Wiesner.
11) Address block parsing length fixes in x25 from Martin Schiller.
12) IRQ and ring accounting fixes in bnxt_en, from Michael Chan.
13) For tun interfaces, only iface delete works with rtnl ops, enforce
this by disallowing add. From Nicolas Dichtel.
14) Use after free in liquidio, from Pan Bian.
15) Fix SKB use after passing to netif_receive_skb(), from Prashant
Bhole.
16) Static key accounting and other fixes in XPS from Sabrina Dubroca.
17) Partially initialized flow key passed to ip6_route_output(), from
Shmulik Ladkani.
18) Fix RTNL deadlock during reset in ibmvnic driver, from Thomas
Falcon.
19) Several small TCP fixes (off-by-one on window probe abort, NULL
deref in tail loss probe, SNMP mis-estimations) from Yuchung
Cheng"
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (93 commits)
net/sched: cls_flower: Reject duplicated rules also under skip_sw
bnxt_en: Fix _bnxt_get_max_rings() for 57500 chips.
bnxt_en: Fix NQ/CP rings accounting on the new 57500 chips.
bnxt_en: Keep track of reserved IRQs.
bnxt_en: Fix CNP CoS queue regression.
net/mlx4_core: Correctly set PFC param if global pause is turned off.
Revert "net/ibm/emac: wrong bit is used for STA control"
neighbour: Avoid writing before skb->head in neigh_hh_output()
ipv6: Check available headroom in ip6_xmit() even without options
tcp: lack of available data can also cause TSO defer
ipv6: sr: properly initialize flowi6 prior passing to ip6_route_output
mlxsw: spectrum_switchdev: Fix VLAN device deletion via ioctl
mlxsw: spectrum_router: Relax GRE decap matching check
mlxsw: spectrum_switchdev: Avoid leaking FID's reference count
mlxsw: spectrum_nve: Remove easily triggerable warnings
ipv4: ipv6: netfilter: Adjust the frag mem limit when truesize changes
sctp: frag_point sanity check
tcp: fix NULL ref in tail loss probe
tcp: Do not underestimate rwnd_limited
net: use skb_list_del_init() to remove from RX sublists
...
Linus Torvalds [Sun, 9 Dec 2018 23:09:55 +0000 (15:09 -0800)]
Merge branch 'x86-urgent-for-linus' of git://git./linux/kernel/git/tip/tip
Pull x86 fixes from Ingo Molnar:
"Three fixes: a boot parameter re-(re-)fix, a retpoline build artifact
fix and an LLVM workaround"
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/vdso: Drop implicit common-page-size linker flag
x86/build: Fix compiler support check for CONFIG_RETPOLINE
x86/boot: Clear RSDP address in boot_params for broken loaders
Linus Torvalds [Sun, 9 Dec 2018 22:21:33 +0000 (14:21 -0800)]
Merge branch 'perf-urgent-for-linus' of git://git./linux/kernel/git/tip/tip
Pull kprobes fixes from Ingo Molnar:
"Two kprobes fixes: a blacklist fix and an instruction patching related
corruption fix"
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
kprobes/x86: Blacklist non-attachable interrupt functions
kprobes/x86: Fix instruction patching corruption when copying more than one RIP-relative instruction
Linus Torvalds [Sun, 9 Dec 2018 22:03:56 +0000 (14:03 -0800)]
Merge branch 'efi-urgent-for-linus' of git://git./linux/kernel/git/tip/tip
Pull EFI fixes from Ingo Molnar:
"Two fixes: a large-system fix and an earlyprintk fix with certain
resolutions"
* 'efi-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/earlyprintk/efi: Fix infinite loop on some screen widths
x86/efi: Allocate e820 buffer before calling efi_exit_boot_service
Or Gerlitz [Sun, 9 Dec 2018 16:10:24 +0000 (18:10 +0200)]
net/sched: cls_flower: Reject duplicated rules also under skip_sw
Currently, duplicated rules are rejected only for skip_hw or "none",
hence allowing users to push duplicates into HW for no reason.
Use the flower tables to protect against that.
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Paul Blakey <paulb@mellanox.com>
Reported-by: Chris Mi <chrism@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
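A hedged sketch of the idea; the names below are illustrative stand-ins
rather than the exact patch, which lives in fl_change() in
net/sched/cls_flower.c. The point is that the duplicate lookup against the
flower mask tables runs for every new rule, no longer only when software
insertion is requested.

/* Sketch only: fl_lookup_table() and fl_check_duplicate() are hypothetical. */
static int fl_check_duplicate(struct cls_fl_head *head,
			      struct cls_fl_filter *fnew,
			      struct cls_fl_filter *fold)
{
	/*
	 * Previously this lookup ran only when !tc_skip_sw(fnew->flags),
	 * so a skip_sw rule could be inserted twice and offloaded twice.
	 * Checking unconditionally rejects the duplicate in all cases.
	 */
	if (!fold && fl_lookup_table(head, fnew->mask, &fnew->mkey))
		return -EEXIST;

	return 0;
}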