iPhone watermark on recorded video.
In my application I need to capture a video and add a watermark to that video. The watermark should be text (time and notes). I saw some code that uses the "QTKit" framework; however, I read that that framework is not available on iPhone.

Thanks in advance.

Use AVFoundation. I would suggest grabbing frames with AVCaptureVideoDataOutput, then overlaying the captured frame with the watermark image, and finally writing the captured and processed frames to a file using AVAssetWriter.

Search around Stack Overflow — there are a ton of fantastic examples detailing how to do each of the things I mentioned. I haven't seen one that gives a code example for exactly the effect you want, but you should be able to mix and match pretty easily.
EDIT:

Take a look at these links:

- This post may be helpful simply by nature of containing the relevant code.

AVCaptureVideoDataOutput will return images as CMSampleBufferRefs. Convert them to CGImageRefs using the following code:
- (CGImageRef) imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer // Create a CGImageRef from sample buffer data
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer

    // Get information about the image
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* CVBufferRelease(imageBuffer); */ // do not call this!

    return newImage; // caller is responsible for CGImageRelease on the result
}
From there you would convert to a UIImage:

UIImage *img = [UIImage imageWithCGImage:yourCGImage];

Then use

[img drawInRect:CGRectMake(x, y, width, height)];

to draw the frame into a context, draw a PNG of the watermark over it, and then add the processed images to your output video using AVAssetWriter. I would suggest adding them in real time so you're not filling up memory with tons of UIImages.

- This post shows how to add a processed UIImage to a video for a given duration.
That should get you well on your way to watermarking your videos. Just remember to practice good memory management, because leaking images that come in at 20-30 fps is a great way to crash the app.

Adding a watermark is quite simple. You just need to use CALayer and AVVideoCompositionCoreAnimationTool. The code can simply be copied and assembled in the same order; I have tried to insert some comments in between for better understanding.

Let's assume you recorded the video, so we are going to create the AVURLAsset first:
AVURLAsset* videoAsset = [[AVURLAsset alloc] initWithURL:outputFileURL options:nil];
AVMutableComposition* mixComposition = [AVMutableComposition composition];

AVMutableCompositionTrack *compositionVideoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVAssetTrack *clipVideoTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
                               ofTrack:clipVideoTrack
                                atTime:kCMTimeZero error:nil];

[compositionVideoTrack setPreferredTransform:[[[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] preferredTransform]];
That code alone would let you export the video, but we want to add the layer with the watermark first. Please note that some of this code may seem redundant, but it is necessary for everything to work.

First, we create the layer with the watermark image:
UIImage *myImage = [UIImage imageNamed:@"icon.png"];
CALayer *aLayer = [CALayer layer];
aLayer.contents = (id)myImage.CGImage;
aLayer.frame = CGRectMake(5, 25, 57, 57); //Needed for proper display. We are using the app icon (57x57). If you use 0,0 you will not see it
aLayer.opacity = 0.65; //Feel free to alter the alpha here
In case we want text instead of an image:
CATextLayer *titleLayer = [CATextLayer layer];
titleLayer.string = @"Text goes here";
titleLayer.font = @"Helvetica";
titleLayer.fontSize = videoSize.height / 6; // Note: videoSize is declared in the next snippet; move that declaration above this block if you use text
//?? titleLayer.shadowOpacity = 0.5;
titleLayer.alignmentMode = kCAAlignmentCenter;
titleLayer.bounds = CGRectMake(0, 0, videoSize.width, videoSize.height / 6); //You may need to adjust this for proper display
The following code puts the layers in the proper order:
CGSize videoSize = [videoAsset naturalSize];
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height);
videoLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height);
[parentLayer addSublayer:videoLayer];
[parentLayer addSublayer:aLayer];
[parentLayer addSublayer:titleLayer]; //ONLY IF WE ADDED TEXT
Now we are creating the composition and adding the instructions to insert the layers:
AVMutableVideoComposition* videoComp = [[AVMutableVideoComposition videoComposition] retain];
videoComp.renderSize = videoSize;
videoComp.frameDuration = CMTimeMake(1, 30);
videoComp.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
/// instruction
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [mixComposition duration]);
AVAssetTrack *videoTrack = [[mixComposition tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableVideoCompositionLayerInstruction* layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
instruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
videoComp.instructions = [NSArray arrayWithObject: instruction];
And now we are ready to export:
_assetExport = [[AVAssetExportSession alloc] initWithAsset:mixComposition presetName:AVAssetExportPresetMediumQuality];//AVAssetExportPresetPassthrough
_assetExport.videoComposition = videoComp;
NSString* videoName = @"mynewwatermarkedvideo.mov";
NSString *exportPath = [NSTemporaryDirectory() stringByAppendingPathComponent:videoName];
NSURL *exportUrl = [NSURL fileURLWithPath:exportPath];
if ([[NSFileManager defaultManager] fileExistsAtPath:exportPath])
{
[[NSFileManager defaultManager] removeItemAtPath:exportPath error:nil];
}
_assetExport.outputFileType = AVFileTypeQuickTimeMovie;
_assetExport.outputURL = exportUrl;
_assetExport.shouldOptimizeForNetworkUse = YES;
[strRecordedFilename setString: exportPath];
[_assetExport exportAsynchronouslyWithCompletionHandler:^(void) {
    [_assetExport release];
    //YOUR FINALIZATION CODE HERE
}];

[audioAsset release]; // only if you also loaded a separate audio asset
[videoAsset release];
Just download the code and use it. It is on the Apple developer documentation page.

Here is how to insert an animated (array of images/slideshow/frames) watermark and a static image watermark over a recorded video. It uses CAKeyframeAnimation to animate the frames, and AVMutableCompositionTrack, AVAssetExportSession, and AVMutableVideoComposition together with AVMutableVideoCompositionInstruction to combine everything.

The answer given by @Julio works fine for Objective-C. Here is the same code base in Swift 3.0:

Watermark & generate a square or cropped video like Instagram.

Getting the output file from the documents directory & creating the AVURLAsset:
//output file
let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first
let outputPath = documentsURL?.appendingPathComponent("squareVideo.mov")
if FileManager.default.fileExists(atPath: (outputPath?.path)!) {
    do {
        try FileManager.default.removeItem(atPath: (outputPath?.path)!)
    }
    catch {
        print("Error deleting file")
    }
}

//input file
let asset = AVAsset.init(url: filePath)
print(asset)
let composition = AVMutableComposition.init()
composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)

//input clip
let clipVideoTrack = asset.tracks(withMediaType: AVMediaTypeVideo)[0]

//rotate to portrait
let transformer = AVMutableVideoCompositionLayerInstruction(assetTrack: clipVideoTrack)
let t1 = CGAffineTransform(translationX: clipVideoTrack.naturalSize.height, y: -(clipVideoTrack.naturalSize.width - clipVideoTrack.naturalSize.height) / 2)
let t2: CGAffineTransform = t1.rotated(by: .pi/2)
let finalTransform: CGAffineTransform = t2
transformer.setTransform(finalTransform, at: kCMTimeZero)
// `instruction` and `videoComposition` are created in the cropping step below
instruction.layerInstructions = [transformer]
videoComposition.instructions = [instruction]
Create the layers with the watermark image and the watermark text:
//adding the image layer
let imglogo = UIImage(named: "video_button")
let watermarkLayer = CALayer()
watermarkLayer.contents = imglogo?.cgImage
watermarkLayer.frame = CGRect(x: 5, y: 25 ,width: 57, height: 57)
watermarkLayer.opacity = 0.85
let textLayer = CATextLayer()
textLayer.string = "Nodat"
textLayer.foregroundColor = UIColor.red.cgColor
textLayer.font = UIFont.systemFont(ofSize: 50)
textLayer.alignmentMode = kCAAlignmentCenter
textLayer.bounds = CGRect(x: 5, y: 25, width: 100, height: 20)
Add the layers over the video in the proper order for the watermark:
let videoSize = clipVideoTrack.naturalSize
let parentlayer = CALayer()
let videoLayer = CALayer()
parentlayer.frame = CGRect(x: 0, y: 0, width: videoSize.height, height: videoSize.height)
videoLayer.frame = CGRect(x: 0, y: 0, width: videoSize.height, height: videoSize.height)
parentlayer.addSublayer(videoLayer)
parentlayer.addSublayer(watermarkLayer)
parentlayer.addSublayer(textLayer) //for text layer only
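One of the answers above also mentions an animated (slideshow) watermark driven by CAKeyframeAnimation, but no code for it appears here. The sketch below is only an illustration under stated assumptions: `frameImages` is a placeholder `[UIImage]` array you would supply yourself, and the Swift 3 constant names match the surrounding code.

```swift
import AVFoundation
import UIKit

// Hypothetical animated watermark: cycle the layer's contents through frames.
let animatedLayer = CALayer()
animatedLayer.frame = CGRect(x: 5, y: 25, width: 57, height: 57)

let frameAnimation = CAKeyframeAnimation(keyPath: "contents")
frameAnimation.values = frameImages.map { $0.cgImage as Any } // frameImages: [UIImage], supplied by you
frameAnimation.duration = 1.0
frameAnimation.repeatCount = .infinity
frameAnimation.calculationMode = kCAAnimationDiscrete
// When exporting through AVVideoCompositionCoreAnimationTool, the animation
// must be anchored to the video's timeline, not the wall clock:
frameAnimation.beginTime = AVCoreAnimationBeginTimeAtZero
frameAnimation.isRemovedOnCompletion = false
animatedLayer.add(frameAnimation, forKey: "contents")

// Then add it to the layer tree exactly like the static layers:
// parentlayer.addSublayer(animatedLayer)
```

The `beginTime = AVCoreAnimationBeginTimeAtZero` line matters: with the default begin time of 0 (interpreted as "now"), Core Animation would start the animation at export time rather than at the video's time zero.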
Crop the video to square format - 300x300 in size:
//make it square
let videoComposition = AVMutableVideoComposition()
videoComposition.renderSize = CGSize(width: 300, height: 300) //change it as per your needs.
videoComposition.frameDuration = CMTimeMake(1, 30)
videoComposition.renderScale = 1.0
//Magic line for adding watermark to the video
videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayers: [videoLayer], in: parentlayer)
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(60, 30))
Last step: export the video:
let exporter = AVAssetExportSession.init(asset: asset, presetName: AVAssetExportPresetMediumQuality)
exporter?.outputFileType = AVFileTypeQuickTimeMovie
exporter?.outputURL = outputPath
exporter?.videoComposition = videoComposition

exporter?.exportAsynchronously { () -> Void in
    if exporter?.status == .completed {
        print("Export complete")
        DispatchQueue.main.async(execute: {
            completion(outputPath)
        })
        return
    } else if exporter?.status == .failed {
        print("Export failed - \(String(describing: exporter?.error))")
    }
    completion(nil)
    return
}
This will export the video in square size with the watermark as text or image.
Thanks to the Swift example code above for adding a CALayer to a video. I made small changes to fix the following error:
Error Domain=AVFoundationErrorDomain Code=-11841 "Operation Stopped" UserInfo={NSLocalizedFailureReason=The video could not be composed., NSLocalizedDescription=Operation Stopped, NSUnderlyingError=0x2830559b0 {Error Domain=NSOSStatusErrorDomain Code=-17390 "(null)"}}
The solution is to use the composition's video track, rather than the original video track, when setting up the layer instruction, as in the following Swift 5 code:
static func addSketchLayer(url: URL, sketchLayer: CALayer, block: @escaping (Result<URL, VideoExportError>) -> Void) {
    let composition = AVMutableComposition()
    let vidAsset = AVURLAsset(url: url)

    let videoTrack = vidAsset.tracks(withMediaType: AVMediaType.video)[0]
    let duration = vidAsset.duration
    let vid_timerange = CMTimeRangeMake(start: CMTime.zero, duration: duration)

    let videoRect = CGRect(origin: .zero, size: videoTrack.naturalSize)
    let transformedVideoRect = videoRect.applying(videoTrack.preferredTransform)
    let size = transformedVideoRect.size

    let compositionvideoTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))!
    try? compositionvideoTrack.insertTimeRange(vid_timerange, of: videoTrack, at: CMTime.zero)
    compositionvideoTrack.preferredTransform = videoTrack.preferredTransform

    let videolayer = CALayer()
    videolayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    videolayer.opacity = 1.0
    sketchLayer.contentsScale = 1

    let parentlayer = CALayer()
    parentlayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    sketchLayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    parentlayer.addSublayer(videolayer)
    parentlayer.addSublayer(sketchLayer)

    let layercomposition = AVMutableVideoComposition()
    layercomposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
    layercomposition.renderScale = 1.0
    layercomposition.renderSize = CGSize(width: size.width, height: size.height)
    layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayers: [videolayer], in: parentlayer)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: composition.duration)
    let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionvideoTrack)
    layerinstruction.setTransform(compositionvideoTrack.preferredTransform, at: CMTime.zero)
    instruction.layerInstructions = [layerinstruction] as [AVVideoCompositionLayerInstruction]
    layercomposition.instructions = [instruction] as [AVVideoCompositionInstructionProtocol]

    let compositionAudioTrack: AVMutableCompositionTrack? = composition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
    let audioTracks = vidAsset.tracks(withMediaType: AVMediaType.audio)
    for audioTrack in audioTracks {
        try? compositionAudioTrack?.insertTimeRange(audioTrack.timeRange, of: audioTrack, at: CMTime.zero)
    }

    let movieDestinationUrl = URL(fileURLWithPath: NSTemporaryDirectory() + "/exported.mp4")
    try? FileManager().removeItem(at: movieDestinationUrl)

    let assetExport = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)!
    assetExport.outputFileType = AVFileType.mp4
    assetExport.outputURL = movieDestinationUrl
    assetExport.videoComposition = layercomposition

    assetExport.exportAsynchronously(completionHandler: {
        switch assetExport.status {
        case AVAssetExportSessionStatus.failed:
            print(assetExport.error ?? "unknown error")
            block(.failure(.failed))
        case AVAssetExportSessionStatus.cancelled:
            print(assetExport.error ?? "unknown error")
            block(.failure(.canceled))
        default:
            block(.success(movieDestinationUrl))
        }
    })
}

enum VideoExportError: Error {
    case failed
    case canceled
}
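For illustration, a call site for the function above might look like this. The enclosing type name `VideoExporter` and the `recordedVideoURL` variable are placeholders (the original answer does not name the type that holds the static method):

```swift
import AVFoundation
import UIKit

// Hypothetical usage: burn a red text layer into a recorded movie.
let overlay = CATextLayer()
overlay.string = "Nodat"
overlay.foregroundColor = UIColor.red.cgColor
overlay.fontSize = 36
overlay.frame = CGRect(x: 5, y: 25, width: 200, height: 44)

VideoExporter.addSketchLayer(url: recordedVideoURL, sketchLayer: overlay) { result in
    switch result {
    case .success(let exportedURL):
        print("Watermarked video written to \(exportedURL)")
    case .failure(let error):
        print("Export failed: \(error)")
    }
}
```

Note that the completion block is invoked on the export session's background queue; dispatch back to the main queue before touching UI.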