int a; volatile int b;
If thread 1 executes:
a = 5; b = 6;
with a StoreStore barrier inserted between these two instructions, "a" is flushed back to main memory before the write to "b".
Now if thread 2 executes:
if(b == 6) a++;
with a LoadLoad barrier inserted in between, we are guaranteed that if the new value of "b" is visible, then the new value of "a" is visible as well. But how is that actually achieved? Does LoadLoad invalidate the CPU caches/registers? Or does it just instruct the CPU to fetch the values of variables read after the volatile read from memory again?
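The scenario above can be written out as a runnable sketch (class and field names are made up for illustration):

```java
// Minimal sketch of the writer/reader scenario described above.
public class VisibilityExample {
    static int a;            // plain field
    static volatile int b;   // volatile field

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            a = 5;  // plain store
            b = 6;  // volatile store: the store to "a" cannot be reordered
                    // after it (conceptually a StoreStore barrier in between)
        });
        Thread reader = new Thread(() -> {
            if (b == 6) {
                // The volatile load acts as an acquire (conceptually a
                // LoadLoad barrier): seeing b == 6 guarantees seeing a == 5.
                System.out.println("a = " + a);
            }
        });
        writer.start();
        reader.start();
        writer.join();
        reader.join();
    }
}
```

Note the reader may simply miss the window where b == 6; the guarantee is only one-directional: *if* the new "b" is seen, the new "a" is seen too.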
I have found this information about LoadLoad barriers (http://gee.cs.oswego.edu/dl/jmm/cookbook.html):
LoadLoad Barriers: The sequence Load1; LoadLoad; Load2 ensures that Load1's data are loaded before data accessed by Load2 and all subsequent load instructions are loaded. In general, explicit LoadLoad barriers are needed on processors that perform speculative loads and/or out-of-order processing in which waiting load instructions can bypass waiting stores. On processors that guarantee to always preserve load ordering, the barriers amount to no-ops.
But it doesn't really explain how this is achieved.
Solution
Doug lists the StoreStore, LoadLoad and LoadStore barriers as no-ops on x86, so essentially the only barrier needed is a StoreLoad for x86 architectures. How is that implemented at a low level?
Here is the blog excerpt for a volatile read:
nop                       ;*synchronization entry
mov    0x10(%rsi),%rax    ;*getfield x
And for a volatile write:
xchg   %ax,%ax
movq   $0xab,0x10(%rbx)
lock addl $0x0,(%rsp)     ;*putfield x
The lock instruction is the StoreLoad listed in Doug's cookbook. But the locked instruction also synchronizes all reads with other processors, as listed:
Locked instructions can be used to synchronize data written by one
processor and read by another processor.
This reduces the overhead of issuing LoadLoad and LoadStore barriers for volatile loads.
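The StoreLoad requirement can be seen in the classic store-then-load interleaving below, a hedged sketch with made-up names. Each thread does a volatile store followed by a load of the other variable; on x86 the JIT's lock-prefixed instruction after each volatile store is what keeps the store from being reordered past the subsequent load.

```java
// Sketch of the store/load pattern that the StoreLoad barrier exists for.
public class StoreLoadExample {
    static volatile int x, y;
    static int r1, r2;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { x = 1; r1 = y; });
        Thread t2 = new Thread(() -> { y = 1; r2 = x; });
        t1.start(); t2.start();
        t1.join();  t2.join();
        // Because x and y are volatile, the outcome r1 == 0 && r2 == 0 is
        // impossible: at least one thread's store must be visible to the
        // other thread's load. Without volatile, (0, 0) is allowed.
        System.out.println("r1=" + r1 + ", r2=" + r2);
    }
}
```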
With all that said, I'll reiterate what assylias pointed out. How this happens is not really important to a developer (it is another story if you are interested as a processor/compiler implementer). The volatile keyword is a kind of interface saying:

1. you will get the most recent update written by another thread
2. you won't get burned by JIT compiler optimizations