How to convert an ffmpeg complex filter to ffmpeg-python


I am trying to learn how to convert an ffmpeg command-line background-blur filter into ffmpeg-python form. The filter is passed via
'-lavfi'
:

[0:v]scale=ih*16/9:-1,boxblur=luma_radius=min(h\,w)/20:luma_power=1:chroma_radius=min(cw\,ch)/20:chroma_power=1[bg];[bg][0:v]overlay=(W-w)/2:(H-h)/2,crop=h=iw*9/16

The basic examples in the ffmpeg-python documentation are a good way to learn simple tricks, but how do I learn the full conversion syntax?
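Before translating anything, it helps to see the mechanical structure of a filtergraph: ';' separates filter chains, ',' separates filters within a chain, '[label]' marks stream links, and '\,' escapes a literal comma inside filter arguments. A toy pure-Python sketch of that decomposition (an illustration of the syntax only, not a real ffmpeg parser):

```python
def split_filtergraph(graph: str):
    """Split a filter_complex string into chains, then into filters.

    Toy parser: protects backslash-escaped ',' and ';' so they are not
    treated as separators, then splits on the real ones.
    """
    protected = graph.replace(r"\,", "\x00").replace(r"\;", "\x01")
    chains = []
    for chain in protected.split(";"):
        filters = [f.replace("\x00", r"\,").replace("\x01", r"\;")
                   for f in chain.split(",")]
        chains.append(filters)
    return chains

graph = (r"[0:v]scale=ih*16/9:-1,"
         r"boxblur=luma_radius=min(h\,w)/20:luma_power=1[bg];"
         r"[bg][0:v]overlay=(W-w)/2:(H-h)/2,crop=h=iw*9/16")
for chain in split_filtergraph(graph):
    print(chain)
```

Reading the output makes the two chains visible: one that scales and blurs into [bg], and one that overlays [0:v] on [bg] and crops.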

Not sure if you have figured this out already... but here is an approach that worked for me.

Tip 1: a prerequisite for encoding any filter with the library is understanding the ffmpeg command-line syntax.

Tip 2: in general,
ffmpeg.filter()
takes the filter name as its first argument, followed by all the filter parameters. The function returns a downstream stream attached to the filter node just created.
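To make the tip concrete without installing anything: conceptually, positional arguments are joined with ':' and keyword arguments become key=value pairs, yielding the same text you would write by hand on the command line. A rough sketch of that mapping (a simplification for illustration, not the library's actual code):

```python
def render_filter(name, *args, **kwargs):
    """Approximate how a filter call turns into filtergraph text:
    positional args joined by ':', kwargs as key=value pairs."""
    parts = [str(a) for a in args] + [f"{k}={v}" for k, v in kwargs.items()]
    return name + ("=" + ":".join(parts) if parts else "")

print(render_filter("scale", "ih*16/9", -1))   # scale=ih*16/9:-1
print(render_filter("crop", h="iw*9/16"))      # crop=h=iw*9/16
```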

For example, take the ffmpeg command line in the question... reading it tells me that you want to scale the video, apply a boxblur filter to it, overlay the original video on the blurred result, and then crop.

So you can express it in
ffmpeg-python
terms as:

# create a stream object; note that any supplied kwargs are passed to ffmpeg verbatim
in_stream = ffmpeg.input(input_file)
# input() returns a stream whose .audio and .video properties represent the
# outgoing edges of the input node and can be used to create downstream nodes.
# We only filter the video stream, as indicated by [0:v] on the command line.
my_vid_stream = in_stream.video
# ffmpeg.filter() takes the upstream node, then the filter name, then the filter's configuration
# the first filter you wanted to apply is the 'scale' filter, so...
my_vid_stream = ffmpeg.filter(my_vid_stream, "scale", "ih*16/9", -1)
# next, create a boxblur filter node per your specs (ffmpeg-python escapes the
# commas in min(h,w) for you), so...
my_vid_stream = ffmpeg.filter(
    my_vid_stream, "boxblur",
    luma_radius="min(h,w)/20", luma_power=1,
    chroma_radius="min(cw,ch)/20", chroma_power=1)
# overlay the original video centered on the blurred background
# (the [bg][0:v]overlay=... part of the command line)
my_vid_stream = ffmpeg.overlay(my_vid_stream, in_stream.video, x="(W-w)/2", y="(H-h)/2")
# finally apply the crop filter to its upstream node and assign the output
# stream back to the same variable, so...
my_vid_stream = ffmpeg.filter(my_vid_stream, "crop", h="iw*9/16")
# now generate the output node and write it to an output file
my_vid_stream = ffmpeg.output(my_vid_stream, output_file)
## to see your pipeline in action, call ffmpeg.run(my_vid_stream)
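A side note on the overlay position used above: (W-w)/2:(H-h)/2 simply centers the original frame (w by h) on the blurred background (W by H). In plain arithmetic:

```python
def center_offsets(W, H, w, h):
    """Top-left offsets that center a w-by-h frame inside a W-by-H frame,
    i.e. the (W-w)/2 and (H-h)/2 expressions in the overlay filter."""
    return (W - w) // 2, (H - h) // 2

print(center_offsets(1920, 1080, 1080, 1080))  # (420, 0)
```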

Hope this helps you, or anyone else struggling to use this library effectively.

I have been working with
ffmpeg-python
and it gives you a lot of flexibility for adding custom commands. Here I will mention an example where I add a loop that overlays videos and a concatenation filter; from it you can learn how to add the rest of the filters.

        # Note: videos, e_frame, rendering_helper, and Factors come from the
        # surrounding code of my project and are not shown here.
        audios = []
        # generate an empty (silent) audio stream for muted clips
        e_aud_src = rendering_helper.generate_empty_audio(0.1)
        e_aud = ffmpeg.input(e_aud_src).audio

        for k, i in enumerate(videos):
            inp = ffmpeg.input(i['src'], ss=i['start'], t=(i['end'] - i['start']))

            inp_f = (inp.filter_multi_output('split')[k]
                        .filter_('scale',
                                 width=(i['width'] * Factors().factors['w_factor']),
                                 height=(i['height'] * Factors().factors['h_factor']))
                        .filter_('setsar', '1/1')
                        .setpts(f"PTS-STARTPTS+{i['showtime']}/TB"))

            audio = ffmpeg.probe(i['src'], select_streams='a')
            if audio['streams'] and not i['muted']:
                a = inp.audio.filter('adelay', f"{i['showtime'] * 1000}|{i['showtime'] * 1000}")
            else:
                a = e_aud
            audios.append(a)

            e_frame = e_frame.overlay(inp_f,
                                      x=(i['xpos'] * Factors().factors['w_factor']),
                                      y=(i['ypos'] * Factors().factors['h_factor']),
                                      eof_action='pass')

        mix_audios = (ffmpeg.filter_(audios, 'amix', inputs=len(audios))
                      if len(audios) > 1 else audios[0])
        inp_con = ffmpeg.concat(e_frame, mix_audios, v=1, a=1)
        return inp_con
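One detail in the snippet worth spelling out: adelay takes one delay per audio channel, in milliseconds, separated by '|', which is why showtime (in seconds) is multiplied by 1000 and written twice for a stereo stream. A small sketch of building that argument (the stereo default here is my assumption):

```python
def adelay_arg(showtime_s, channels=2):
    """Build the adelay filter argument: one per-channel delay in
    milliseconds, '|'-separated, as in the snippet above."""
    ms = int(showtime_s * 1000)
    return "|".join([str(ms)] * channels)

print(adelay_arg(2.5))  # 2500|2500
```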

First, a caveat: I know nothing about ffmpeg. But after briefly reading the API docs, it seems that many of the filter functions you want to use are not implemented as dedicated wrappers (e.g. boxblur). You can apply them yourself through the generic
ffmpeg.filter()
function, which lets you invoke any filter by name. Hope this helps! If you can post an example of the command-line arguments, maybe I can help further.
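If you ever want to bypass the wrappers entirely, the question's filtergraph also maps onto a plain argument list for the raw CLI (in.mp4 and out.mp4 are placeholder paths):

```python
# Placeholder file names; the filtergraph is the one from the question.
graph = ("[0:v]scale=ih*16/9:-1,"
         "boxblur=luma_radius=min(h\\,w)/20:luma_power=1:"
         "chroma_radius=min(cw\\,ch)/20:chroma_power=1[bg];"
         "[bg][0:v]overlay=(W-w)/2:(H-h)/2,crop=h=iw*9/16")
cmd = ["ffmpeg", "-i", "in.mp4", "-lavfi", graph, "out.mp4"]
print(" ".join(cmd))
```

Passing the graph as a single list element avoids any shell-quoting issues when handing it to subprocess.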