memremap: add scheduling point to devm_memremap_pages
author Michal Hocko <mhocko@suse.com>
Tue, 3 Oct 2017 23:16:23 +0000 (16:16 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Wed, 4 Oct 2017 00:54:25 +0000 (17:54 -0700)
devm_memremap_pages initializes struct pages in for_each_device_pfn, and
that can take quite some time.  We have even seen a soft lockup triggering
on a non-preemptive kernel:

  NMI watchdog: BUG: soft lockup - CPU#61 stuck for 22s! [kworker/u641:11:1808]
  [...]
  RIP: 0010:[<ffffffff8118b6b7>]  [<ffffffff8118b6b7>] devm_memremap_pages+0x327/0x430
  [...]
  Call Trace:
    pmem_attach_disk+0x2fd/0x3f0 [nd_pmem]
    nvdimm_bus_probe+0x64/0x110 [libnvdimm]
    driver_probe_device+0x1f7/0x420
    bus_for_each_drv+0x52/0x80
    __device_attach+0xb0/0x130
    bus_probe_device+0x87/0xa0
    device_add+0x3fc/0x5f0
    nd_async_device_register+0xe/0x40 [libnvdimm]
    async_run_entry_fn+0x43/0x150
    process_one_work+0x14e/0x410
    worker_thread+0x116/0x490
    kthread+0xc7/0xe0
    ret_from_fork+0x3f/0x70

Fix this by adding a cond_resched() call every 1024 pages.
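
For reference, this is the general shape of the change, sketched in
isolation.  The names init_items/init_one_item and struct item below are
made up for illustration; the actual two-line change to the
for_each_device_pfn loop is in the hunk further down.

  #include <linux/sched.h>        /* cond_resched() */

  /*
   * A long, non-sleeping initialization loop: on a non-preemptive kernel
   * nothing else can run on this CPU until the loop finishes, so yield
   * voluntarily every so often.
   */
  static void init_items(struct item *items, unsigned long nr)
  {
          unsigned long i;

          for (i = 0; i < nr; i++) {
                  init_one_item(&items[i]);       /* cheap, but repeated many times */
                  if (!((i + 1) % 1024))          /* every 1024 iterations ...      */
                          cond_resched();         /* ... give the scheduler a turn  */
          }
  }

The modulus keeps the cond_resched() overhead negligible while bounding
the time between scheduling points well below the soft lockup threshold
(22s in the trace above).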

Link: http://lkml.kernel.org/r/20170918121410.24466-4-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Reported-by: Johannes Thumshirn <jthumshirn@suse.de>
Tested-by: Johannes Thumshirn <jthumshirn@suse.de>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
kernel/memremap.c

index 6bcbfbf1a8fdfd2f1008cde707db9a798a68cdc6..403ab9cdb949a0483bd82c811a3383eed77e5246 100644 (file)
@@ -350,7 +350,7 @@ void *devm_memremap_pages(struct device *dev, struct resource *res,
        pgprot_t pgprot = PAGE_KERNEL;
        struct dev_pagemap *pgmap;
        struct page_map *page_map;
-       int error, nid, is_ram;
+       int error, nid, is_ram, i = 0;
 
        align_start = res->start & ~(SECTION_SIZE - 1);
        align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE)
@@ -448,6 +448,8 @@ void *devm_memremap_pages(struct device *dev, struct resource *res,
                list_del(&page->lru);
                page->pgmap = pgmap;
                percpu_ref_get(ref);
+               if (!(++i % 1024))
+                       cond_resched();
        }
        devres_add(dev, page_map);
        return __va(res->start);