Write-combining optimization in x264


Write combining


Filed under: gcc, speed, ugly code, x264


Let’s say we need to copy a few variables from one array to another. The obvious way is something like this:

byte array1[4] = {1,2,3,4};
byte array2[4];
int i;
for(i = 0; i < 4; i++) array2[i] = array1[i];

But this is suboptimal for many reasons. For one, we’re doing 8-bit reads and writes, which on 32-bit systems may actually be slower than 32-bit reads and writes; i.e. a single 32-bit read/write may be faster than a single 8-bit read/write. But the main issue is that we could be doing this:

DECLARE_ALIGNED_4(byte array1[4] = {1,2,3,4});
DECLARE_ALIGNED_4(byte array2[4]);
*(uint32_t*)array2 = *(uint32_t*)array1;

In a single operation instead of 4, we just copied the whole array. Faster speed-wise and shorter code-wise, too. The alignment is to ensure that we don’t copy between unaligned arrays, which could crash on non-x86 architectures (e.g. PowerPC) and would also go slightly slower on x86 (but still faster than the uncombined write). But, one might ask, can’t the compiler do this? Well, there are many reasons it doesn’t happen. We’ll start from the easiest case and go to the hardest case.
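For reference, here is a minimal sketch of how an alignment macro along the lines of DECLARE_ALIGNED_4 could be defined with GCC; x264’s actual macro may differ in name and details:

#include <stdint.h>

/* Force 4-byte alignment on a declaration (GCC attribute syntax).
 * Variadic so that initializers with commas can be passed through. */
#define DECLARE_ALIGNED_4(...) __attribute__((aligned(4))) __VA_ARGS__

DECLARE_ALIGNED_4(uint8_t array1[4] = {1, 2, 3, 4});
DECLARE_ALIGNED_4(uint8_t array2[4]);

void copy4(void)
{
    /* One aligned 32-bit load/store replaces four 8-bit copies. */
    *(uint32_t*)array2 = *(uint32_t*)array1;
}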



The easiest case is a simple zeroing of a struct (say s = {a, b}, where a and b are 16-bit integers). The struct is likely to be aligned by the compiler to begin with, and writing zero to {a, b} is the same as writing a 32-bit zero to the whole struct. But GCC doesn’t even optimize this; it still assigns the zeroes separately! How stupid.
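As an illustration of this easiest case (the struct and function names below are invented for the example): the field-by-field version ends up as two 16-bit stores, while the combined version is a single 32-bit store:

#include <stdint.h>

typedef struct { int16_t a, b; } pair16_t;

void zero_separate(pair16_t *s)
{
    s->a = 0;   /* two 16-bit stores */
    s->b = 0;
}

void zero_combined(pair16_t *s)
{
    *(uint32_t*)s = 0;   /* one 32-bit store; assumes the struct is 4-byte aligned */
}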

The second-easiest case is the generalization of this: if you’re dealing with arrays that the function accesses directly (rather than pointers to arrays, which it might not know are aligned or not) and assigning zero or a constant value, write-combining is trivial. But again, GCC doesn’t do it.
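A sketch of that second case (the array and function names are invented for the example): a directly accessed, aligned array being cleared could be written either way, and the loop could in principle be collapsed into the single store:

#include <stdint.h>

static __attribute__((aligned(4))) uint8_t flags[4];

void clear_flags_loop(void)
{
    int i;
    for (i = 0; i < 4; i++)
        flags[i] = 0;        /* four 8-bit stores */
}

void clear_flags_combined(void)
{
    *(uint32_t*)flags = 0;   /* one 32-bit store */
}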


Now, we get to the harder stuff. What if we’re copying between two arrays, both of which are directly accessed? Now, we have to be able to detect this sequential copying and merge it. This is basically a simple form of autovectorization; it’s no surprise at all that GCC doesn’t do this.

The hardest, and in fact nearly impossible, case is the one in which we’re dealing with pointers to arrays as arguments; the compiler really has no reliable way of knowing that the pointers are aligned (though we as programmers might know that they always are). There are cases where it could make accurate derivations (by annotating pointers passed between functions) as to whether they are aligned or not, in which case it might be able to do write combining; this would of course be very difficult. Of course, on x86 it’s still worthwhile to combine even if there’s a misalignment risk, since it will only go slightly slower rather than crash.
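To make that hardest case concrete (the function and parameter names here are invented for the example), the compiler sees only pointers and cannot prove alignment, so the programmer has to take responsibility for the combined access:

#include <stdint.h>

/* Copy 4 bytes in one 32-bit access.  The caller, not the compiler,
 * guarantees that dst and src are 4-byte aligned; on x86 a misaligned
 * access would merely be slower, but on stricter architectures such as
 * PowerPC it could crash. */
static void copy4_combined(uint8_t *dst, const uint8_t *src)
{
    *(uint32_t*)dst = *(const uint32_t*)src;
}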


The end result of this kind of operation is a massive speed boost in such functions; for example, in the section where motion vectors are cached (in macroblock_cache_save) I got over double the speed by converting 16-bit copies to write-combined copies. This of course is only on a 32-bit system; on a 64-bit system we could do even better. The code of course uses 64-bit copies so that a 64-bit compiled binary will do it as best it can. The compiler is smart enough to split the copies on 32-bit systems, of course.
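A rough sketch of that idea (the function, sizes, and names below are invented, not x264’s actual cache layout): the copy is written in terms of 64-bit accesses, so a 64-bit build moves 8 bytes per operation, while a 32-bit build lets the compiler split each access into two 32-bit ones:

#include <stdint.h>

/* Copy 'rows' runs of 8 bytes from a strided source into a packed cache.
 * dst and src + i*stride are assumed to be suitably aligned. */
void cache_save_rows(uint8_t *dst, const uint8_t *src, int rows, int stride)
{
    int i;
    for (i = 0; i < rows; i++)
        *(uint64_t*)(dst + i*8) = *(const uint64_t*)(src + i*stride);
}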


We could actually do even better if we were willing to use MMX or SSE, since MMX could be used for 64-bit copies on 32-bit systems and SSE could be used for 128-bit copies. Unfortunately, this would completely sacrifice portability, and at this point the speed boost over the current merged copies would be pretty small.
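For completeness, this is roughly what the non-portable alternative would look like: a 128-bit copy written with SSE2 intrinsics (a sketch, not code from x264), which ties the source to x86 and requires 16-byte-aligned pointers:

#include <emmintrin.h>

/* Copy 16 bytes in a single 128-bit load/store.
 * dst and src must be 16-byte aligned. */
static void copy16_sse2(void *dst, const void *src)
{
    _mm_store_si128((__m128i*)dst, _mm_load_si128((const __m128i*)src));
}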

One of the big tricks currently is the ability to treat two motion vectors as one, and since all motion vectors come in pairs (X and Y, 16-bit signed integers each), it’s quite easy to manipulate them as pairs. This allowed me to drastically speed up a lot of the manipulation involved in motion vector prediction and general copying and storing. The result of all the issues described in the article is this massive diff.
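A minimal sketch of that pairing trick (the union and function names are invented for the example): the X and Y components live next to each other, so the pair can be copied or compared as a single 32-bit value:

#include <stdint.h>

typedef union {
    struct { int16_t x, y; } mv;
    uint32_t packed;
} mvpair_t;

void copy_mv(mvpair_t *dst, const mvpair_t *src)
{
    dst->packed = src->packed;       /* one 32-bit copy instead of two 16-bit ones */
}

int same_mv(const mvpair_t *a, const mvpair_t *b)
{
    return a->packed == b->packed;   /* compare both components at once */
}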


