The idle notifier, called by enter_idle(), enters an RCU read-side
critical section, but at that point we have already switched into
the RCU-idle window (rcu_idle_enter() has been called), and it is
illegal to use rcu_read_lock() in that state.
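
For context, the atomic notifier chain walks its callbacks inside an
RCU read-side critical section. A simplified sketch of what
__atomic_notifier_call_chain() does (paraphrased from kernel/notifier.c
of this era, not the verbatim source):

	static int __atomic_notifier_call_chain(struct atomic_notifier_head *nh,
						unsigned long val, void *v,
						int nr_to_call, int *nr_calls)
	{
		int ret;

		/*
		 * Illegal once rcu_idle_enter() has put the CPU into the
		 * RCU-idle window: with RCU debugging enabled, the checks
		 * behind rcu_read_lock()/rcu_dereference() fire here and
		 * produce the warning below.
		 */
		rcu_read_lock();
		ret = notifier_call_chain(&nh->head, val, v, nr_to_call, nr_calls);
		rcu_read_unlock();
		return ret;
	}
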
This results in RCU reporting its bad mood:
[ 1.275635] WARNING: at include/linux/rcupdate.h:194 __atomic_notifier_call_chain+0xd2/0x110()
[ 1.275635] Hardware name: AMD690VM-FMH
[ 1.275635] Modules linked in:
[ 1.275635] Pid: 0, comm: swapper Not tainted 3.0.0-rc6+ #252
[ 1.275635] Call Trace:
[ 1.275635]  [<ffffffff81051c8a>] warn_slowpath_common+0x7a/0xb0
[ 1.275635]  [<ffffffff81051cd5>] warn_slowpath_null+0x15/0x20
[ 1.275635]  [<ffffffff817d6f22>] __atomic_notifier_call_chain+0xd2/0x110
[ 1.275635]  [<ffffffff817d6f71>] atomic_notifier_call_chain+0x11/0x20
[ 1.275635]  [<ffffffff810018a0>] enter_idle+0x20/0x30
[ 1.275635]  [<ffffffff81001995>] cpu_idle+0xa5/0x110
[ 1.275635]  [<ffffffff817a7465>] rest_init+0xe5/0x140
[ 1.275635]  [<ffffffff817a73c8>] ? rest_init+0x48/0x140
[ 1.275635]  [<ffffffff81cc5ca3>] start_kernel+0x3d1/0x3dc
[ 1.275635]  [<ffffffff81cc5321>] x86_64_start_reservations+0x131/0x135
[ 1.275635]  [<ffffffff81cc5412>] x86_64_start_kernel+0xed/0xf4
[ 1.275635] ---[ end trace a22d306b065d4a66 ]---

Fix this by entering the RCU extended quiescent state later, just
before the CPU goes to sleep.
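
As an illustration of why enter_idle() must stay outside the RCU-idle
window, here is a hypothetical idle notifier as a user would register
it on x86-64 (idle_notifier_register() and the IDLE_START/IDLE_END
events come from <asm/idle.h> at this point in time; the
example_idle_* names are made up for this sketch). Its callback is
invoked from enter_idle()/exit_idle() through the atomic notifier
chain, i.e. under rcu_read_lock():

	#include <linux/init.h>
	#include <linux/notifier.h>
	#include <asm/idle.h>

	static int example_idle_notify(struct notifier_block *nb,
				       unsigned long action, void *data)
	{
		switch (action) {
		case IDLE_START:
			/* CPU is entering idle; RCU must still be watching here */
			break;
		case IDLE_END:
			/* CPU is leaving idle */
			break;
		}
		return NOTIFY_OK;
	}

	static struct notifier_block example_idle_nb = {
		.notifier_call = example_idle_notify,
	};

	static int __init example_idle_init(void)
	{
		idle_notifier_register(&example_idle_nb);
		return 0;
	}
	device_initcall(example_idle_init);
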
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>

 	/* endless idle loop with no priority at all */
 	while (1) {
-		tick_nohz_idle_enter_norcu();
+		tick_nohz_idle_enter();
 		while (!need_resched()) {
 			rmb();
 			enter_idle();
 			/* Don't trace irqs off for idle */
 			stop_critical_timings();
+
+			/* enter_idle() needs rcu for notifiers */
+			rcu_idle_enter();
+
 			if (cpuidle_idle_call())
 				pm_idle();
+
+			rcu_idle_exit();
 			start_critical_timings();
 			/* In many cases the interrupt that ended idle
 			   has already called exit_idle. But some idle
 			   loops can be woken up without interrupt. */
 			__exit_idle();
 		}
-		tick_nohz_idle_exit_norcu();
+		tick_nohz_idle_exit();
 		preempt_enable_no_resched();
 		schedule();
 		preempt_disable();