From: Tang Chen
Date: Fri, 18 Apr 2014 22:07:15 +0000 (-0700)
Subject: Documentation/vm/numa_memory_policy.txt: fix wrong document in numa_memory_policy.txt
X-Git-Url: http://git.cdn.openwrt.org/?a=commitdiff_plain;h=8f28ed92d9314b98dc2033df770f5e6b85c5ffb7;p=openwrt%2Fstaging%2Fblogic.git

Documentation/vm/numa_memory_policy.txt: fix wrong document in numa_memory_policy.txt

In numa_memory_policy.txt, the following example for the flag
MPOL_F_RELATIVE_NODES is incorrect:

	For example, consider a task that is attached to a cpuset with
	mems 2-5 that sets an Interleave policy over the same set with
	MPOL_F_RELATIVE_NODES.  If the cpuset's mems change to 3-7, the
	interleave now occurs over nodes 3,5-6.  If the cpuset's mems
	then change to 0,2-3,5, then the interleave occurs over nodes
	0,3,5.

According to the description of the patch that added the
MPOL_F_RELATIVE_NODES flag, the nodemask the user specifies should be
considered relative to the current task's mems_allowed.

  (https://lkml.org/lkml/2008/2/29/428)

And according to numa_memory_policy.txt, if the user's nodemask
includes nodes that are outside the range of the new set of allowed
nodes, then the remap wraps around to the beginning of the nodemask
and, if not already set, sets the node in the mempolicy nodemask.

So in the example, if the user specifies 2-5, then for a task whose
mems_allowed is 3-7 the nodemask should be remapped to the third,
fourth, fifth and sixth nodes of mems_allowed, like the following:

	mems_allowed:   3  4  5  6  7
	relative index: 0  1  2  3  4
	                5

So the nodemask should be remapped to 3,5-7, not 3,5-6.

And for a task whose mems_allowed is 0,2-3,5, the nodemask should be
remapped to 0,2-3,5, not 0,3,5:

	mems_allowed:   0  2  3  5
	relative index: 0  1  2  3
	                4  5

Signed-off-by: Tang Chen
Cc: Randy Dunlap
Acked-by: David Rientjes
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---

diff --git a/Documentation/vm/numa_memory_policy.txt b/Documentation/vm/numa_memory_policy.txt
index 4e7da6543424..badb0507608f 100644
--- a/Documentation/vm/numa_memory_policy.txt
+++ b/Documentation/vm/numa_memory_policy.txt
@@ -174,7 +174,6 @@ Components of Memory Policies
 	    allocation fails, the kernel will search other nodes, in order
 	    of increasing distance from the preferred node based on
 	    information provided by the platform firmware.
-	    containing the cpu where the allocation takes place.
 
 	    Internally, the Preferred policy uses a single node--the
 	    preferred_node member of struct mempolicy.  When the internal
@@ -275,9 +274,9 @@ Components of Memory Policies
 	For example, consider a task that is attached to a cpuset with
 	mems 2-5 that sets an Interleave policy over the same set with
 	MPOL_F_RELATIVE_NODES.  If the cpuset's mems change to 3-7, the
-	interleave now occurs over nodes 3,5-6.  If the cpuset's mems
+	interleave now occurs over nodes 3,5-7.  If the cpuset's mems
 	then change to 0,2-3,5, then the interleave occurs over nodes
-	0,3,5.
+	0,2-3,5.
 
 	Thanks to the consistent remapping, applications preparing
 	nodemasks to specify memory policies using this flag should
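
For readers who want to check the arithmetic, below is a minimal
userspace C sketch of the remap rule the commit message describes:
relative bit i selects the i-th node of mems_allowed, wrapping around
when i is not less than the number of allowed nodes.  It is only an
illustration of the documented semantics, not the kernel's
implementation (the real remap lives in mm/mempolicy.c); the
remap_relative() helper, the MAX_NODE limit and the hard-coded node
sets are assumptions made for this sketch.

	/*
	 * Sketch of the MPOL_F_RELATIVE_NODES remap rule described
	 * above.  Illustration only; not the kernel's code.
	 */
	#include <stdio.h>

	#define MAX_NODE 64	/* arbitrary limit for this sketch */

	static void remap_relative(const int *allowed, int n_allowed,
				   const int *relative, int n_relative)
	{
		int set[MAX_NODE] = { 0 };
		int i;

		/* relative bit i -> i-th allowed node, with wrap-around */
		for (i = 0; i < n_relative; i++)
			set[allowed[relative[i] % n_allowed]] = 1;

		for (i = 0; i < MAX_NODE; i++)
			if (set[i])
				printf(" %d", i);
		printf("\n");
	}

	int main(void)
	{
		int relative[] = { 2, 3, 4, 5 };    /* user asked for nodes 2-5 */
		int mems1[] = { 3, 4, 5, 6, 7 };    /* cpuset mems 3-7 */
		int mems2[] = { 0, 2, 3, 5 };       /* cpuset mems 0,2-3,5 */

		remap_relative(mems1, 5, relative, 4);	/* prints " 3 5 6 7" */
		remap_relative(mems2, 4, relative, 4);	/* prints " 0 2 3 5" */
		return 0;
	}

Both calls reproduce the corrected node sets in the hunk above (3,5-7
and 0,2-3,5), which is why the old 3,5-6 and 0,3,5 wording was wrong.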