linux-kernel – Understanding the link between CONFIG_SMP, spinlocks, and CONFIG_PREEMPT in recent (3.0.0 and above) Linux kernels

To give you the full context: my line of inquiry started with the observation that I am running SMP Linux (3.0.1-rt11) on an ARM Cortex-A8 based SoC, which is a uniprocessor. I wanted to know whether there would be any performance benefit from disabling SMP support, and if so, what impact that would have on my drivers and interrupt handlers.

I did some reading and came across two related topics: spinlocks and kernel preemption. I did some more googling and reading, but this time all I got were stale and contradictory answers, so I thought I would try Stack Overflow.

The origin of my doubts/questions is this paragraph from Chapter 5 of Linux Device Drivers, 3rd edition:

Spinlocks are, by their nature, intended for use on multiprocessor systems, although a uniprocessor workstation running a preemptive kernel behaves like SMP, as far as concurrency is concerned. If a nonpreemptive uniprocessor system ever went into a spin on a lock, it would spin forever; no other thread would ever be able to obtain the CPU to release the lock. For this reason, spinlock operations on uniprocessor systems without preemption enabled are optimized to do nothing, with the exception of the ones that change the IRQ masking status. Because of preemption, even if you never expect your code to run on an SMP system, you still need to implement proper locking.
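
As a concrete illustration of that last sentence, here is a minimal sketch, assuming an invented character-device driver (all names are made up), of why shared driver state still needs a lock on a single-core board once CONFIG_PREEMPT is enabled: one task can be preempted inside the driver by another task entering the same code.

#include <linux/fs.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(state_lock);
static unsigned int open_count;          /* shared between all openers */

static int demo_open(struct inode *inode, struct file *filp)
{
        spin_lock(&state_lock);          /* UP + CONFIG_PREEMPT: just preempt_disable() */
        open_count++;
        spin_unlock(&state_lock);
        return 0;
}

static int demo_release(struct inode *inode, struct file *filp)
{
        spin_lock(&state_lock);
        open_count--;
        spin_unlock(&state_lock);
        return 0;
}

On a non-preemptive uniprocessor build the same spin_lock/spin_unlock pair compiles down to essentially nothing, which is exactly the optimization the quoted paragraph describes.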

My doubts/questions are:

a) Is the Linux kernel preemptible in kernel space by default? If so, is this preemption limited to processes, or can interrupt handlers also be preempted?

b) Does the Linux kernel (on ARM) support nested interrupts? If so, does every interrupt handler (top half) get its own stack, or do they share the same 4k/8k kernel-mode stack?

c) If I disable SMP (CONFIG_SMP) and preemption (CONFIG_PREEMPT), do spinlocks in my drivers and interrupt handlers still make sense?

d) How does the kernel handle interrupts raised while a top half is executing, i.e. are they disabled or masked?

After some googling I found this:

For kernels compiled without CONFIG_SMP, and without CONFIG_PREEMPT spinlocks do not exist at all. This is an excellent design decision: when no-one else can run at the same time, there is no reason to have a lock.

If the kernel is compiled without CONFIG_SMP, but CONFIG_PREEMPT is set, then spinlocks simply disable preemption, which is sufficient to prevent any races. For most purposes, we can think of preemption as equivalent to SMP, and not worry about it separately.
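
That quoted behaviour can be summarized with a rough sketch of what a plain spin_lock boils down to under each configuration. This is a simplification for illustration only, not the actual kernel source (the real definitions live in include/linux/spinlock*.h and carry extra debugging hooks):

#if defined(CONFIG_SMP)
        /* a real lock: atomic test-and-set, spin until the owner releases it */
#elif defined(CONFIG_PREEMPT)
        #define spin_lock(lp)    preempt_disable()   /* no spinning, just forbid preemption */
        #define spin_unlock(lp)  preempt_enable()
#else
        #define spin_lock(lp)    do { } while (0)    /* compiles away entirely */
        #define spin_unlock(lp)  do { } while (0)
#endif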

But there is no kernel version or date on the source. Can anyone confirm whether it still holds for recent Linux kernels?

Solution

a) Whether Linux is preemptive depends on whether you configure it that way, with CONFIG_PREEMPT. There is no default; if you run make config, you will have to choose.
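
For reference, the choice surfaces as the standard preemption-model Kconfig symbols; a typical 3.x .config excerpt for a preemptible build looks roughly like this (the menu location varies by architecture):

# Preemption Model
# CONFIG_PREEMPT_NONE is not set
# CONFIG_PREEMPT_VOLUNTARY is not set
CONFIG_PREEMPT=y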

b) Interrupts nest on Linux; while an interrupt is being handled, other interrupts can come in and interrupt it. This is true on ARM and many other architectures. They all run on the same stack. Of course, the user-space stack is not used for interrupts!

c) If you disable SMP and preemption, spinlocks in your code reduce to no-ops if they are regular spinlocks, and the IRQ spinlocks (spin_lock_irqsave / spin_unlock_irqrestore) become interrupt disable/enable. So the latter are still essential: they prevent races between the task running your code and an interrupt running your code.
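
A minimal sketch of that pattern, with all names invented: data shared between a task-context read path and an interrupt handler is protected with the IRQ-disabling variant on the task side, which on a uniprocessor build reduces to local_irq_save/local_irq_restore but still closes the race against the ISR.

#include <linux/interrupt.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(count_lock);
static int event_count;                        /* touched by both contexts */

/* task context, e.g. called from a read() handler */
static int demo_fetch_count(void)
{
        unsigned long flags;
        int val;

        spin_lock_irqsave(&count_lock, flags); /* on UP: local_irq_save() */
        val = event_count;
        event_count = 0;
        spin_unlock_irqrestore(&count_lock, flags);
        return val;
}

/* interrupt context */
static irqreturn_t demo_isr(int irq, void *dev_id)
{
        spin_lock(&count_lock);                /* IRQs on this CPU are already off here */
        event_count++;
        spin_unlock(&count_lock);
        return IRQ_HANDLED;
}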

d) "Top half" traditionally refers to the interrupt service routine. The top-half code of a driver is run by the interrupt; the bottom half is the part called from tasks (to read or write data, and so on). The details of interrupt handling are architecture specific. See the sketch below for what that split can look like in practice.
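
Here is a hedged sketch of that split in the sense just described, with the IRQ handling and all names invented: the interrupt-driven part captures an event and wakes a wait queue, while the task-called part (a read() handler) sleeps until the interrupt has produced something.

#include <linux/fs.h>
#include <linux/interrupt.h>
#include <linux/sched.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(demo_wq);
static int data_ready;

/* top half: run by the interrupt, does the minimum and wakes the task side */
static irqreturn_t demo_isr(int irq, void *dev_id)
{
        /* ... read the device, stash the result somewhere ... */
        data_ready = 1;
        wake_up_interruptible(&demo_wq);
        return IRQ_HANDLED;
}

/* "bottom half" in the sense used above: driver code called from a task,
 * e.g. the read() path, which sleeps until the interrupt has produced data */
static ssize_t demo_read(struct file *filp, char __user *buf,
                         size_t count, loff_t *ppos)
{
        if (wait_event_interruptible(demo_wq, data_ready))
                return -ERESTARTSYS;
        data_ready = 0;
        /* ... copy_to_user(buf, ..., ...) ... */
        return 0;
}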

I most recently worked very closely with Linux interrupts on a particular MIPS architecture. On that particular board, there were 128 interrupt lines maskable via two 64 bit words. The kernel implemented a priority scheme on top of this, so before executing a handler for a given interrupt, the lower ones were masked via updates of those 2×64 bit registers. I implemented a modification so that the interrupt priorities could be set arbitrarily, rather than by hardware position, and dynamically by writing values into a /proc entry. Moreover, I put in a hack whereby a portion of the numeric IRQ priority overlapped with the real-time priority of tasks. So RT tasks (i.e. user space threads) assigned to a certain range of priority levels were able to implicitly suppress a certain range of interrupts while running. This was very useful in preventing badly-behaved interrupts from interfering with critical tasks (for instance, an interrupt service routine in the IDE driver code used for the compact flash, which executes busy loops due to a badly designed hardware interface, causing flash writes to become the de-facto highest priority activity in the system). So anyway, IRQ masking behavior isn't written in stone, if you're in control of the kernel used by customers.

The statement quoted in the question applies only to regular spinlocks (the spin_lock function/macro), not to the IRQ spinlocks (spin_lock_irqsave). In a preemptible kernel on a uniprocessor, spin_lock only needs to disable preemption, which is enough to keep all other tasks out of the kernel until spin_unlock. But spin_lock_irqsave must disable interrupts.
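
Put differently, on a uniprocessor kernel with CONFIG_PREEMPT=y the two flavours roughly reduce to the following (a simplified comparison, not the actual kernel implementation, which adds debugging and lockdep hooks):

/* Rough equivalences, UP + CONFIG_PREEMPT=y:
 *
 *   spin_lock(&lock);                      ~  preempt_disable();
 *   spin_unlock(&lock);                    ~  preempt_enable();
 *
 *   spin_lock_irqsave(&lock, flags);       ~  local_irq_save(flags); preempt_disable();
 *   spin_unlock_irqrestore(&lock, flags);  ~  local_irq_restore(flags); preempt_enable();
 */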
