iOS – Custom AVVideoCompositing class not working properly

I'm trying to apply a CIFilter to an AVAsset and then save the asset with the filter applied. The way I'm doing this is with an AVAssetExportSession whose videoComposition is set to an AVMutableVideoComposition object with a custom AVVideoCompositing class.

I'm also setting the instructions of my AVMutableVideoComposition object to a custom composition instruction class (a subclass of AVMutableVideoCompositionInstruction). This class is passed a track ID, along with a few other unimportant variables.

Unfortunately, I've run into a problem: the startVideoCompositionRequest: function in my custom video compositor class (conforming to AVVideoCompositing) is not being called correctly.

When I set the passthroughTrackID variable of my custom instruction class to the track ID, the startVideoCompositionRequest(request) function in my AVVideoCompositing is not called at all.

However, when I don't set the passthroughTrackID variable of my custom instruction class, startVideoCompositionRequest(request) is called, but not correctly: printing request.sourceTrackIDs yields an empty array, and request.sourceFrameByTrackID(trackID) returns nil.

Something I found interesting is that the cancelAllPendingVideoCompositionRequests: function is always called twice when trying to export the video with the filter. It's either called once before startVideoCompositionRequest: and once after, or called twice in a row in the case where startVideoCompositionRequest: is never called.
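One thing worth noting about the empty request.sourceTrackIDs array described above: the source frames staged for a composition request are exactly those of the tracks the instruction lists in requiredSourceTrackIDs, so with an empty array request.sourceFrameByTrackID(_:) has nothing to return. Here is a minimal pure-Swift model of that relationship; FrameRequest and makeRequest are illustrative stand-ins (not AVFoundation API), and a String stands in for a CVPixelBuffer:

```swift
// Illustrative stand-in for AVAsynchronousVideoCompositionRequest.
struct FrameRequest {
    // Buffers are staged only for tracks the instruction declared in
    // requiredSourceTrackIDs - everything else is unavailable.
    let stagedFrames: [Int32: String]

    var sourceTrackIDs: [Int32] { return Array(stagedFrames.keys) }

    func sourceFrame(byTrackID trackID: Int32) -> String? {
        return stagedFrames[trackID]
    }
}

// Build a request the way the framework would: only tracks that were
// declared up front get their frames staged.
func makeRequest(requiredSourceTrackIDs: [Int32],
                 allFrames: [Int32: String]) -> FrameRequest {
    var staged: [Int32: String] = [:]
    for id in requiredSourceTrackIDs {
        staged[id] = allFrames[id]
    }
    return FrameRequest(stagedFrames: staged)
}
```

With an empty requiredSourceTrackIDs, this model reproduces the symptom in the question: sourceTrackIDs is empty and every sourceFrame(byTrackID:) lookup is nil.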

I've created three classes for exporting the video with the filter. Here's the utility class, which basically just contains an export function and calls all the required code:

class VideoFilterExport{

    let asset: AVAsset
    init(asset: AVAsset){
        self.asset = asset
    }

    func export(toURL url: NSURL, callback: (url: NSURL?) -> Void){
        guard let track: AVAssetTrack = self.asset.tracksWithMediaType(AVMediaTypeVideo).first else{callback(url: nil); return}

        let composition = AVMutableComposition()
        let compositionTrack = composition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)

        do{
            try compositionTrack.insertTimeRange(track.timeRange, ofTrack: track, atTime: kCMTimeZero)
        }
        catch _{callback(url: nil); return}

        let videoComposition = AVMutableVideoComposition(propertiesOfAsset: composition)
        videoComposition.customVideoCompositorClass = VideoFilterCompositor.self
        videoComposition.frameDuration = CMTimeMake(1, 30)
        videoComposition.renderSize = compositionTrack.naturalSize

        let instruction = VideoFilterCompositionInstruction(trackID: compositionTrack.trackID)
        instruction.timeRange = CMTimeRangeMake(kCMTimeZero, self.asset.duration)
        videoComposition.instructions = [instruction]

        let session: AVAssetExportSession = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetMediumQuality)!
        session.videoComposition = videoComposition
        session.outputURL = url
        session.outputFileType = AVFileTypeMPEG4

        session.exportAsynchronouslyWithCompletionHandler(){
            callback(url: url)
        }
    }
}

And here are the other two classes; I've put them both into one code block to keep this post shorter:

// Video Filter Composition Instruction Class - from what I gather,
// AVVideoCompositionInstruction is used only to pass values to
// the AVVideoCompositing class

class VideoFilterCompositionInstruction : AVMutableVideoCompositionInstruction{

    let trackID: CMPersistentTrackID
    let filters: ImageFilterGroup
    let context: CIContext


    // When I leave this line as-is, startVideoCompositionRequest: isn't called.
    // When commented out, startVideoCompositionRequest(request) is called, but there
    // are no valid CVPixelBuffers provided by request.sourceFrameByTrackID(below value)
    override var passthroughTrackID: CMPersistentTrackID{get{return self.trackID}}
    override var requiredSourceTrackIDs: [NSValue]{get{return []}}
    override var containsTweening: Bool{get{return false}}


    init(trackID: CMPersistentTrackID, filters: ImageFilterGroup, context: CIContext){
        self.trackID = trackID
        self.filters = filters
        self.context = context

        super.init()

        //self.timeRange = timeRange
        self.enablePostProcessing = true
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

}


// My custom AVVideoCompositing class. This is where the problem lies -
// although I don't know if this is the root of the problem

class VideoFilterCompositor : NSObject, AVVideoCompositing{

    var requiredPixelBufferAttributesForRenderContext: [String : AnyObject] = [
        kCVPixelBufferPixelFormatTypeKey as String : NSNumber(unsignedInt: kCVPixelFormatType_32BGRA), // The video is in 32 BGRA
        kCVPixelBufferOpenGLESCompatibilityKey as String : NSNumber(bool: true),
        kCVPixelBufferOpenGLCompatibilityKey as String : NSNumber(bool: true)
    ]
    var sourcePixelBufferAttributes: [String : AnyObject]? = [
        kCVPixelBufferPixelFormatTypeKey as String : NSNumber(unsignedInt: kCVPixelFormatType_32BGRA),
        kCVPixelBufferOpenGLESCompatibilityKey as String : NSNumber(bool: true),
        kCVPixelBufferOpenGLCompatibilityKey as String : NSNumber(bool: true)
    ]

    let renderQueue = dispatch_queue_create("co.getblix.videofiltercompositor.renderingqueue", DISPATCH_QUEUE_SERIAL)

    override init(){
        super.init()
    }

    func startVideoCompositionRequest(request: AVAsynchronousVideoCompositionRequest){
       // This code block is never executed when the
       // passthroughTrackID variable is in the above class  

        autoreleasepool(){
            dispatch_async(self.renderQueue){
                guard let instruction = request.videoCompositionInstruction as? VideoFilterCompositionInstruction else{
                    request.finishWithError(NSError(domain: "getblix.co", code: 760, userInfo: nil))
                    return
                }
                guard let pixels = request.sourceFrameByTrackID(instruction.passthroughTrackID) else{
                    // This code block is executed when I comment out the
                    // passthroughTrackID variable in the above class            

                    request.finishWithError(NSError(domain: "getblix.co", code: 761, userInfo: nil))
                    return
                }
                // I have not been able to get the code to reach this point
                // This function is either not called, or the guard
                // statement above executes

                let image = CIImage(CVPixelBuffer: pixels)
                let filtered: CIImage = //apply the filter here

                let width = CVPixelBufferGetWidth(pixels)
                let height = CVPixelBufferGetHeight(pixels)
                let format = CVPixelBufferGetPixelFormatType(pixels)

                var newBuffer: CVPixelBuffer?
                CVPixelBufferCreate(kCFAllocatorDefault, width, height, format, nil, &newBuffer)

                if let buffer = newBuffer{
                    instruction.context.render(filtered,toCVPixelBuffer: buffer)
                    request.finishWithComposedVideoFrame(buffer)
                }
                else{
                    request.finishWithComposedVideoFrame(pixels)
                }
            }
        }
    }

    func renderContextChanged(newRenderContext: AVVideoCompositionRenderContext){
        // I don't have any code in this block
    }

    // This is interesting - this is called twice:
    // once before startVideoCompositionRequest is called,
    // and once after. In the case when startVideoCompositionRequest
    // is not called, this is simply called twice in a row
    func cancelAllPendingVideoCompositionRequests(){
        dispatch_barrier_async(self.renderQueue){
            print("Cancelled")
        }
    }
}
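The renderQueue / dispatch_barrier_async pairing in the compositor above is a common pattern: render blocks are enqueued on a serial queue, and a block submitted from cancelAllPendingVideoCompositionRequests runs only after every block already ahead of it has drained (on a serial queue the barrier flag adds nothing beyond ordinary FIFO ordering, but it makes the intent explicit). A small standalone sketch of the same pattern in later Swift Dispatch syntax; the queue label and the logging closures are illustrative only:

```swift
import Dispatch

// Serial queue, as in the compositor.
let renderQueue = DispatchQueue(label: "co.getblix.videofiltercompositor.renderingqueue")

var log: [String] = []
let done = DispatchSemaphore(value: 0)

// Pending "render" work, enqueued in order.
for i in 1...3 {
    renderQueue.async {
        log.append("render \(i)")
    }
}

// Cancellation point: this block runs only after the three render
// blocks above have finished, so nothing is cancelled mid-frame.
renderQueue.async(flags: .barrier) {
    log.append("cancelled")
    done.signal()
}

done.wait()
```

Because the queue is serial, the log always reads render 1, render 2, render 3, then cancelled.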

I've been looking to Apple's AVCustomEdit sample project quite a lot for guidance with this, but I can't seem to find why this is happening.

How can I get request.sourceFrameByTrackID: to be called correctly, and provide a valid CVPixelBuffer for each frame?

Solution

It turns out that the requiredSourceTrackIDs variable in the custom AVVideoCompositionInstruction class (VideoFilterCompositionInstruction in the question) has to be set to an array containing the track ID:
override var requiredSourceTrackIDs: [NSValue]{
  get{
    return [
      NSNumber(value: Int(self.trackID))
    ]
  }
}
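The fix above wraps the CMPersistentTrackID (which is an Int32) in an NSNumber so it can live in the [NSValue] array that requiredSourceTrackIDs expects (NSNumber is a subclass of NSValue). A minimal sketch of that conversion and the round trip back to a raw ID; the typealias is a stand-in for CoreMedia's CMPersistentTrackID:

```swift
import Foundation

// Stand-in for CoreMedia's CMPersistentTrackID, which is an Int32.
typealias PersistentTrackID = Int32

// Wrap the raw track ID the way the instruction's
// requiredSourceTrackIDs override does.
func requiredSourceTrackIDs(for trackID: PersistentTrackID) -> [NSNumber] {
    return [NSNumber(value: trackID)]
}

// Read the ID back out, as a compositor would when matching a
// request's source frames against the instruction.
func trackID(from value: NSNumber) -> PersistentTrackID {
    return value.int32Value
}
```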

So the final custom composition instruction class is:

class VideoFilterCompositionInstruction : AVMutableVideoCompositionInstruction{
    let trackID: CMPersistentTrackID
    let filters: [CIFilter]
    let context: CIContext

    override var passthroughTrackID: CMPersistentTrackID{get{return self.trackID}}
    override var requiredSourceTrackIDs: [NSValue]{get{return [NSNumber(value: Int(self.trackID))]}}
    override var containsTweening: Bool{get{return false}}

    init(trackID: CMPersistentTrackID, filters: [CIFilter], context: CIContext){
        self.trackID = trackID
        self.filters = filters
        self.context = context

        super.init()

        self.enablePostProcessing = true
    }

    required init?(coder aDecoder: NSCoder){
        fatalError("init(coder:) has not been implemented")
    }
}

All of the code for this utility is also on GitHub.
