For an input buffer such as {1,2,3,4,5,6}, I would expect the stereo reversal to be {2,1,4,3,6,5}.
My code does reverse the samples in this way, but all I hear is something like low-pass filtering, not a stereo reversal of the samples.
The first 4 values in my input buffer are:
0.000104
0.000101
0.000080
0.000113
and the output is:
0.000101
0.000104
0.000113
0.000080
Am I misunderstanding how the input/output buffers are structured?
void First::FirstKernel::Process(const Float32 *inSourceP, Float32 *inDestP,
                                 UInt32 inSamplesToProcess, UInt32 inNumChannels,
                                 bool &ioSilence)
{
    if (!ioSilence) {
        const Float32 *sourceP = inSourceP;
        Float32 *destP = inDestP;

        // Swap each pair of adjacent samples, treating the buffer as interleaved L/R frames.
        for (int i = inSamplesToProcess / 2; i > 0; --i) {
            *(destP + 1) = *sourceP;
            *destP = *(sourceP + 1);
            sourceP = sourceP + 2;
            destP = destP + 2;
        }
    }
}
Solution
In any case, you should subclass AUEffectBase and override the ProcessBufferLists() method. You will then get a proper AudioBufferList structure containing a non-interleaved buffer for each audio channel. Compared with using kernels, it also gives you much better control over the render process.
Edit: OK, it turns out that the kernel callback is always passed a single channel of audio. Also, overriding Render(), as I originally suggested, is not the best approach. According to the comments in the AUEffectBase.h source:
If your unit processes N to N channels, and there are no interactions between channels,
it can override NewKernel to create a mono processing object per channel. Otherwise,
don't override NewKernel, and instead, override ProcessBufferLists.
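For contrast, the per-channel kernel route described in that comment looks roughly like the sketch below (MyMonoKernel is a hypothetical name; the Process() signature is the same one already used in the question). Each kernel instance only ever sees one channel's samples, which is why a stereo swap can't be done this way:

class MyMonoKernel : public AUKernelBase {
public:
    MyMonoKernel(AUEffectBase *inAudioUnit) : AUKernelBase(inAudioUnit) {}

    // Called once per channel with a mono, non-interleaved buffer.
    virtual void Process(const Float32 *inSourceP, Float32 *inDestP,
                         UInt32 inFramesToProcess, UInt32 inNumChannels,
                         bool &ioSilence)
    {
        for (UInt32 i = 0; i < inFramesToProcess; ++i) {
            inDestP[i] = inSourceP[i] * 0.5f;   // e.g. a simple per-channel gain
        }
    }
};

// In the AUEffectBase subclass:
// virtual AUKernelBase *NewKernel() { return new MyMonoKernel(this); }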
Since AUEffectBase is not part of the "standard" AudioUnit code, you need to add the cpp/h files to your project. They can be found under the AudioUnit SDK root in the AudioUnits/AUPublic/OtherBases folder. So for your plugin it would look something like this:
MyEffect.h:
#include "AUEffectBase.h" class MyEffect : public AUEffectBase { public: // Constructor,other overridden methods,etc. virtual OSStatus ProcessBufferLists(AudioUnitRenderActionFlags &ioActionFlags,const AudioBufferList &inBuffer,AudioBufferList &outBuffer,UInt32 inFramesToProcess); private: // Private member variables,methods };
MyEffect.cpp:
// Other stuff ....

OSStatus MyEffect::ProcessBufferLists(AudioUnitRenderActionFlags &ioActionFlags,
                                      const AudioBufferList &inBuffer,
                                      AudioBufferList &outBuffer,
                                      UInt32 inFramesToProcess)
{
    // One non-interleaved buffer per channel: index 0 = left, index 1 = right.
    const Float32 *srcBufferL = (Float32 *)inBuffer.mBuffers[0].mData;
    const Float32 *srcBufferR = (Float32 *)inBuffer.mBuffers[1].mData;
    Float32 *destBufferL = (Float32 *)outBuffer.mBuffers[0].mData;
    Float32 *destBufferR = (Float32 *)outBuffer.mBuffers[1].mData;

    // Straight pass-through; put your per-channel processing here.
    for (UInt32 frame = 0; frame < inFramesToProcess; ++frame) {
        *destBufferL++ = *srcBufferL++;
        *destBufferR++ = *srcBufferR++;
    }

    return noErr;
}
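Since the original goal was a left/right swap, only the inner loop needs to change once you have the per-channel buffers. A minimal sketch of that variant, assuming the stream really is stereo (left in mBuffers[0], right in mBuffers[1]):

OSStatus MyEffect::ProcessBufferLists(AudioUnitRenderActionFlags &ioActionFlags,
                                      const AudioBufferList &inBuffer,
                                      AudioBufferList &outBuffer,
                                      UInt32 inFramesToProcess)
{
    const Float32 *srcBufferL = (Float32 *)inBuffer.mBuffers[0].mData;
    const Float32 *srcBufferR = (Float32 *)inBuffer.mBuffers[1].mData;
    Float32 *destBufferL = (Float32 *)outBuffer.mBuffers[0].mData;
    Float32 *destBufferR = (Float32 *)outBuffer.mBuffers[1].mData;

    for (UInt32 frame = 0; frame < inFramesToProcess; ++frame) {
        // Write each input channel to the opposite output channel.
        *destBufferL++ = *srcBufferR++;
        *destBufferR++ = *srcBufferL++;
    }

    return noErr;
}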