btrfs: only track ref_heads in delayed_ref_updates
author Josef Bacik <jbacik@fb.com>
Mon, 3 Dec 2018 15:20:32 +0000 (10:20 -0500)
committer David Sterba <dsterba@suse.com>
Mon, 17 Dec 2018 13:51:46 +0000 (14:51 +0100)
We use this number to figure out how many delayed refs to run, but
__btrfs_run_delayed_refs really only checks it each time we need a new
delayed ref head, so we always run at least one ref head completely,
regardless of how many refs are attached to it.  Fix the accounting so
the counter is only adjusted when we add or remove a ref head.

In addition to limiting the number of delayed refs run, a future patch
is also going to use this number to calculate the amount of space
required for the delayed refs space reservation.

Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
fs/btrfs/delayed-ref.c

index b3e4c9fcb664a7040d687e7e012195b28cbaff06..48725fa757a3e28bb085731823fec315cff1421f 100644
@@ -251,8 +251,6 @@ static inline void drop_delayed_ref(struct btrfs_trans_handle *trans,
        ref->in_tree = 0;
        btrfs_put_delayed_ref(ref);
        atomic_dec(&delayed_refs->num_entries);
-       if (trans->delayed_ref_updates)
-               trans->delayed_ref_updates--;
 }
 
 static bool merge_ref(struct btrfs_trans_handle *trans,
@@ -467,7 +465,6 @@ inserted:
        if (ref->action == BTRFS_ADD_DELAYED_REF)
                list_add_tail(&ref->add_list, &href->ref_add_list);
        atomic_inc(&root->num_entries);
-       trans->delayed_ref_updates++;
        spin_unlock(&href->lock);
        return ret;
 }