From: Hugh Dickins
Date: Fri, 6 Nov 2015 02:50:34 +0000 (-0800)
Subject: tmpfs: avoid a little creat and stat slowdown
X-Git-Url: http://git.cdn.openwrt.org/?a=commitdiff_plain;h=d0424c429f8e0555a337d71e0a13f2289c636ec9;p=openwrt%2Fstaging%2Fblogic.git

tmpfs: avoid a little creat and stat slowdown

LKP reports that v4.2 commit afa2db2fb6f1 ("tmpfs: truncate prealloc
blocks past i_size") causes a 14.5% slowdown in the AIM9 creat-clo
benchmark.

creat-clo does just what you'd expect from the name, and creat's O_TRUNC
on a 0-length file does indeed get into more overhead now that
shmem_setattr() tests "0 <= 0" instead of "0 < 0".

I'm not sure how much we care, but I think it would not be too VW-like to
add in a check for whether any pages (or swap) are allocated: if none are
allocated, there's none to remove from the radix_tree.  At first I thought
that check would be good enough for the unmaps too, but no: we should not
skip the unlikely case of unmapping pages beyond the new EOF, which were
COWed from holes which have now been reclaimed, leaving none.

This gives me an 8.5% speedup: on Haswell instead of LKP's Westmere, and
running a debug config before and after; I hope those differences account
for the lesser speedup.

And probably someone has a benchmark where a thousand threads keep on
stat'ing the same file repeatedly: forestall that report by adjusting
v4.3 commit 44a30220bc0a ("shmem: recalculate file inode when fstat")
not to take the spinlock in shmem_getattr() when there's no work to do.

Signed-off-by: Hugh Dickins
Reported-by: Ying Huang
Tested-by: Ying Huang
Cc: Josef Bacik
Cc: Yu Zhao
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---

diff --git a/mm/shmem.c b/mm/shmem.c
index 6529226b3814..3b8b73928398 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -548,12 +548,12 @@ static int shmem_getattr(struct vfsmount *mnt, struct dentry *dentry,
     struct inode *inode = dentry->d_inode;
     struct shmem_inode_info *info = SHMEM_I(inode);
 
-    spin_lock(&info->lock);
-    shmem_recalc_inode(inode);
-    spin_unlock(&info->lock);
-
+    if (info->alloced - info->swapped != inode->i_mapping->nrpages) {
+        spin_lock(&info->lock);
+        shmem_recalc_inode(inode);
+        spin_unlock(&info->lock);
+    }
     generic_fillattr(inode, stat);
-
     return 0;
 }
 
@@ -586,10 +586,16 @@ static int shmem_setattr(struct dentry *dentry, struct iattr *attr)
         }
         if (newsize <= oldsize) {
             loff_t holebegin = round_up(newsize, PAGE_SIZE);
-            unmap_mapping_range(inode->i_mapping, holebegin, 0, 1);
-            shmem_truncate_range(inode, newsize, (loff_t)-1);
+            if (oldsize > holebegin)
+                unmap_mapping_range(inode->i_mapping,
+                            holebegin, 0, 1);
+            if (info->alloced)
+                shmem_truncate_range(inode,
+                            newsize, (loff_t)-1);
             /* unmap again to remove racily COWed private pages */
-            unmap_mapping_range(inode->i_mapping, holebegin, 0, 1);
+            if (oldsize > holebegin)
+                unmap_mapping_range(inode->i_mapping,
+                            holebegin, 0, 1);
         }
     }
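
As an aside, here is a minimal user-space sketch (not part of the commit) of
the kind of create/close loop that AIM9's creat-clo test measures; the
filename "testfile", the 0644 mode, and the iteration count are arbitrary
choices for illustration. Run on a tmpfs mount, each iteration's implicit
O_TRUNC on an already 0-length file reaches shmem_setattr(), the path the
patch short-circuits when no pages or swap are allocated:

/* creat-clo style loop: repeatedly create (truncate) and close one file */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    for (int i = 0; i < 1000000; i++) {
        /* creat() is open(path, O_CREAT|O_WRONLY|O_TRUNC, mode) */
        int fd = creat("testfile", 0644);
        if (fd < 0) {
            perror("creat");
            return 1;
        }
        close(fd);
    }
    return 0;
}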