The original code in the Linux kernel is:
static inline void __raw_spin_lock_irq(raw_spinlock_t *lock)
{
        local_irq_disable();
        preempt_disable();
        spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
        LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
}
I think no execution path can preempt the current path after local IRQs are disabled. Because all common hard IRQs are disabled, no softirq should occur and there is no tick to drive the scheduler, so the current path looks safe to me. Why, then, is there a preempt_disable()?
As far as I can tell, preempt_disable() calls were added to quite a few locking primitives, including spin_lock_irq, by Dave Miller on December 4th, 2002, and released in 2.5.51. The commit message isn't helpful; it just says "[SPINLOCK]: Fix non-SMP nopping spin/rwlock macros."
I believe the Proper Locking Under a Preemptible Kernel documentation explains this well enough. The final section, titled "PREVENTING PREEMPTION USING INTERRUPT DISABLING", begins:

It is possible to prevent a preemption event using local_irq_disable and local_irq_save. Note, when doing so, you must be very careful ...
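
The gist of that warning is that code running with IRQs disabled can still do something that sets need_resched (a wakeup, for instance), and the deferred reschedule has to be checked for once the lock is dropped. Keeping the preempt count elevated across the critical section makes that check happen in the right place. For comparison, here is the matching unlock path as a rough sketch of how it looks in recent kernels (the exact spin_release() argument list has changed across versions): IRQs are re-enabled first, and only then does preempt_enable() run, which is where any pending reschedule is acted on.

static inline void __raw_spin_unlock_irq(raw_spinlock_t *lock)
{
        spin_release(&lock->dep_map, _RET_IP_);  /* lockdep bookkeeping */
        do_raw_spin_unlock(lock);                /* release the lock itself */
        local_irq_enable();                      /* hard IRQs back on */
        preempt_enable();                        /* drop the preempt count; if
                                                    need_resched was set inside the
                                                    critical section, the reschedule
                                                    is performed here, safely after
                                                    the lock has been released */
}

The design point is that the reschedule check is deferred until after the lock is released, rather than happening whenever IRQs happen to be re-enabled inside the critical section, and the preempt count stays balanced with the plain spin_lock()/spin_unlock() variants.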