
iOS: Memory warning when trying to merge multiple video files in Swift


I'm trying to merge two videos together in Swift. However, when I try to run this code I get a memory warning and sometimes a crash.

My hunch is that for some reason I'm leaving the dispatch group early and finishing the write prematurely.

However, I've also noticed that sometimes I don't even get that far.

I've also noticed that my samples.count is sometimes huge, which seems odd given that none of the videos is longer than 30 seconds.

I'm stuck on where to even start debugging this, to be honest. Pointers welcome.

    dispatch_group_enter(self.videoProcessingGroup)
    asset.requestContentEditingInputWithOptions(options, completionHandler: { (contentEditingInput: PHContentEditingInput?, info: [NSObject : AnyObject]) -> Void in

        let avAsset = contentEditingInput?.audiovisualAsset

        let reader = try! AVAssetReader.init(asset: avAsset!)
        let videoTrack = avAsset?.tracksWithMediaType(AVMediaTypeVideo).first

        let readerOutputSettings: [String: Int] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]

        let readerOutput = AVAssetReaderTrackOutput(track: videoTrack!, outputSettings: readerOutputSettings)
        reader.addOutput(readerOutput)
        reader.startReading()

        // Create the samples: every decoded frame is accumulated in this array
        var samples: [CMSampleBuffer] = []

        var sample: CMSampleBufferRef?

        sample = readerOutput.copyNextSampleBuffer()

        while sample != nil {
            autoreleasepool {
                samples.append(sample!)
                sample = readerOutput.copyNextSampleBuffer()
            }
        }

        for i in 0...samples.count - 1 {
            // Get the presentation time for the frame

            var append_ok: Bool = false

            autoreleasepool {
                if let pixelBufferPool = adaptor.pixelBufferPool {
                    let pixelBufferPointer = UnsafeMutablePointer<CVPixelBuffer?>.alloc(1)
                    let status: CVReturn = CVPixelBufferPoolCreatePixelBuffer(
                        kCFAllocatorDefault,
                        pixelBufferPool,
                        pixelBufferPointer
                    )

                    let frameTime = CMTimeMake(Int64(frameCount), 30)

                    if var buffer = pixelBufferPointer.memory where status == 0 {
                        buffer = CMSampleBufferGetImageBuffer(samples[i])!
                        append_ok = adaptor.appendPixelBuffer(buffer, withPresentationTime: frameTime)
                        pixelBufferPointer.destroy()
                    } else {
                        NSLog("Error: Failed to allocate pixel buffer from pool")
                    }

                    pixelBufferPointer.dealloc(1)
                    // Note: this leaves the group once per frame, not once per asset
                    dispatch_group_leave(self.videoProcessingGroup)
                }
            }
        }
    })

    //Finish the session:
    dispatch_group_notify(videoProcessingGroup, dispatch_get_main_queue(), {
        videoWriterInput.markAsFinished()
        videoWriter.finishWritingWithCompletionHandler({
            print("Write Ended")

            // Return writer
            print("Created asset writer for \(size.width)x\(size.height) video")
        })
    })

In general, you can't fit all the frames of a video asset into memory on an iOS device, or even on a desktop machine:

    var samples: [CMSampleBuffer] = []

Not even when the video is only 30 seconds long. E.g. at 30 frames per second, a 720p, 30-second video decoded to BGRA needs 30 * 30 * 1280 * 720 * 4 bytes ≈ 3.2GB. That's about 3.5MB per frame! It gets even worse at 1080p or at higher frame rates.
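For reference, here is that arithmetic as a quick Swift snippet (the constants are just the figures quoted above):

    let bytesPerFrame = 1280 * 720 * 4           // ≈ 3.5 MB per decoded BGRA frame
    let frameCount = 30 * 30                     // 30 fps * 30 s = 900 frames
    let totalBytes = bytesPerFrame * frameCount  // 3,317,760,000 bytes, the ~3.2GB above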

You need to merge the files progressively, frame by frame, keeping as few frames in memory as possible at any given time.
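A minimal sketch of that streaming approach, in the same Swift 2-era syntax as the question: pull one sample buffer at a time from the AVAssetReader and hand it straight to an AVAssetWriterInput, so only a handful of frames are ever alive at once. The helper name is illustrative, and the reader, output, writer, and input are assumed to be configured much as in the question's code:

    import AVFoundation

    func streamSamples(reader: AVAssetReader,
                       readerOutput: AVAssetReaderTrackOutput,
                       writer: AVAssetWriter,
                       writerInput: AVAssetWriterInput) {
        reader.startReading()
        writer.startWriting()
        writer.startSessionAtSourceTime(kCMTimeZero)

        let queue = dispatch_queue_create("video.merge", DISPATCH_QUEUE_SERIAL)
        writerInput.requestMediaDataWhenReadyOnQueue(queue) {
            // The writer pulls frames only when it can accept more, so no
            // [CMSampleBuffer] array ever builds up in memory.
            while writerInput.readyForMoreMediaData {
                if let sample = readerOutput.copyNextSampleBuffer() {
                    writerInput.appendSampleBuffer(sample)
                } else {
                    // Source exhausted: close the input and finish the file.
                    writerInput.markAsFinished()
                    writer.finishWritingWithCompletionHandler {
                        print("Write Ended")
                    }
                    break
                }
            }
        }
    }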


However, for an operation as simple as merging you don't need to handle the frames yourself. You can create an AVMutableComposition, append the individual AVAssets to it, and then export the merged file using an AVAssetExportSession.
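A minimal sketch of that route, again in the question's Swift 2-era syntax. The assets are appended end-to-end on a single composition track and AVAssetExportSession does all the decoding and writing; the function name is illustrative, and assets and outputURL are assumed to come from the caller:

    import AVFoundation

    func mergeAssets(assets: [AVAsset], outputURL: NSURL) {
        let composition = AVMutableComposition()
        let track = composition.addMutableTrackWithMediaType(AVMediaTypeVideo,
            preferredTrackID: kCMPersistentTrackID_Invalid)

        var cursor = kCMTimeZero
        for asset in assets {
            guard let videoTrack = asset.tracksWithMediaType(AVMediaTypeVideo).first else { continue }
            // Append the whole asset at the current end of the composition.
            // (Real code should handle the error instead of using try!)
            try! track.insertTimeRange(CMTimeRangeMake(kCMTimeZero, asset.duration),
                ofTrack: videoTrack, atTime: cursor)
            cursor = CMTimeAdd(cursor, asset.duration)
        }

        guard let exporter = AVAssetExportSession(asset: composition,
            presetName: AVAssetExportPresetHighestQuality) else { return }
        exporter.outputURL = outputURL
        exporter.outputFileType = AVFileTypeMPEG4
        exporter.exportAsynchronouslyWithCompletionHandler {
            if exporter.status == .Completed {
                print("Merged video written to \(outputURL)")
            } else {
                print("Export failed: \(exporter.error)")
            }
        }
    }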
Thanks, I looked into AVAssetExportSession, but I also need to composite in some images at some point, which is why I went this route.

You can use AVMutableVideoCompositionLayerInstructions for that, although sometimes modifying the frames directly does seem easier.
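For the image-compositing part mentioned in that last comment, one common setup (sketched here with illustrative names, using a Core Animation overlay alongside a layer instruction rather than per-frame editing) is an AVMutableVideoComposition whose instruction drives the composition's video track while a CALayer holds the still image:

    import AVFoundation
    import UIKit

    func videoCompositionWithOverlay(composition: AVMutableComposition,
                                     track: AVMutableCompositionTrack,
                                     overlayImage: UIImage) -> AVMutableVideoComposition {
        let size = track.naturalSize

        let videoComposition = AVMutableVideoComposition()
        videoComposition.renderSize = size
        videoComposition.frameDuration = CMTimeMake(1, 30) // 30 fps output

        // One instruction spanning the whole timeline, driven by the video track.
        let instruction = AVMutableVideoCompositionInstruction()
        instruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration)
        instruction.layerInstructions = [AVMutableVideoCompositionLayerInstruction(assetTrack: track)]
        videoComposition.instructions = [instruction]

        // Layer tree: the video renders into videoLayer; imageLayer sits on top of it.
        let videoLayer = CALayer()
        videoLayer.frame = CGRectMake(0, 0, size.width, size.height)
        let imageLayer = CALayer()
        imageLayer.contents = overlayImage.CGImage
        imageLayer.frame = videoLayer.frame
        let parentLayer = CALayer()
        parentLayer.frame = videoLayer.frame
        parentLayer.addSublayer(videoLayer)
        parentLayer.addSublayer(imageLayer)

        videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(
            postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)

        return videoComposition
    }

The returned video composition would then be assigned to the export session's videoComposition property before exporting.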