Selur's Little Message Board

Full Version: Parallel video encoding in Hybrid
Hello Selur,

What do you think about adding parallel video encoding to Hybrid?

As you know, parallel audio encoding is already available when job->Queue->Parallel subjob processing is enabled.
In this case, if there are 2 audio streams, Hybrid encodes them in parallel by launching 2 tasks.

Theoretically it is possible to do the same thing for video in Hybrid, using the Vapoursynth script.

For example, suppose the video to be encoded through Vapoursynth has 10000 frames.

Usually the Vapoursynth script generated by Hybrid will look like:

Code:
clip = core.lsmas.LWLibavSource(source="sample.mkv", format="YUV420P8", stream_index=0, cache=0, fpsnum=25, prefer_hw=0)
frame = clip.get_frame(0)
 
In this case Hybrid could generate 2 scripts like:
 
Code:
clip = core.lsmas.LWLibavSource(source="sample.mkv", format="YUV420P8", stream_index=0, cache=0, fpsnum=25, prefer_hw=0)
clip = clip[:5000]  # first half: frames 0-4999
frame = clip.get_frame(0)
clip.set_output()  # needed so vspipe can read the clip
 
Code:
clip = core.lsmas.LWLibavSource(source="sample.mkv", format="YUV420P8", stream_index=0, cache=0, fpsnum=25, prefer_hw=0)
clip = clip[5000:]  # second half: frames 5000-9999
frame = clip.get_frame(0)
clip.set_output()  # needed so vspipe can read the clip
 
and encode them in parallel as 2 tasks using vspipe.

After that, you can merge the 2 encoded videos with mkvmerge.exe to obtain the usual sample_new.mkv.
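
For example, the 2 encodes and the merge could be driven roughly like this (a sketch only: the script names, the output names and the choice of x264 as encoder are just placeholders, and depending on the vspipe version the Y4M switch is -c y4m or --y4m):

Code:
vspipe -c y4m part1.vpy - | x264 --demuxer y4m --output part1.mkv -
vspipe -c y4m part2.vpy - | x264 --demuxer y4m --output part2.mkv -
mkvmerge -o sample_new.mkv part1.mkv + part2.mkv

The two vspipe lines would run as separate tasks at the same time; the '+' tells mkvmerge to append the second file to the first.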

What do you think?

Dan
Isn't what you are looking for what Hybrid calls 'chunked video encoding'?
Quote:Chunked encoding: *experimental*
When enabled, Hybrid will try to split the video stream into chunks. Each chunk consists of one scene (which is at least 10 frames long). Each chunk is encoded separately and then the chunks are joined before muxing.
This way especially slow encoding which does not fully utilize the CPU could be sped up.

Important to note about this option is:
- the scene change analysis is based on Vapoursynth, so without Vapoursynth there is no chunking
- chunking is not possible with two pass encoding
- yes, this will likely lower the compression ratio, as it might a) hinder rate control and b) force the usage of more 'key' frames
- yes, progress indication is likely not that helpful as it is at the moment (not sure how/when/whether I will try to change this)

Cu Selur

Ps.: added this in the "2021.12.05.1" release. It's only really interesting for aomenc or similar, where the multithreading support of the encoder is bad. (Using machine learning filters will probably crash horribly due to VRAM usage etc.; same with memory-hungry Vapoursynth scripts.)
Quote:This implies that 10 chunks will be generated.
At most. Hybrid will analyze the output for scene changes (at the start of the job creation) and combine those until the minimum length is achieved.
(Yes, if your clip has no scene changes, you will just get one chunk.)
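
Just to illustrate the idea (this is not Hybrid's actual code; the source filter, misc.SCDetect, the threshold and the minimum length are only assumptions, and walking the whole clip like this is slow), the chunk ranges could be built with a small Vapoursynth/Python script run with the regular Python interpreter:

Code:
import vapoursynth as vs
core = vs.core

clip = core.lsmas.LWLibavSource(source="sample.mkv")
sc = core.misc.SCDetect(clip, threshold=0.1)  # marks scene changes via the _SceneChangePrev frame prop

min_len = 10  # minimum chunk length in frames
chunks = []
start = 0
for n in range(1, sc.num_frames):
    # start a new chunk at a scene change, but only if the current chunk is already long enough
    if sc.get_frame(n).props["_SceneChangePrev"] == 1 and n - start >= min_len:
        chunks.append((start, n))
        start = n
chunks.append((start, sc.num_frames))
print(chunks)  # (first frame, frame after last) per chunk; a single chunk if there are no scene changes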

Quote:How many tasks will be used by Hybrid to encode these chunks?
Not sure, since it's ages since I wrote this.
(I used this mainly with aomenc to stress test a new CPU/system.)
Since I haven't used this for quite some time, I'm not even sure it still works.

I can do some testing tomorrow.

Cu Selur
(02.10.2024, 20:10)Selur Wrote: Since I haven't used this for quite some time, I'm not even sure it still works.

I tested it. 

A number of tasks will be generated, depending on the scene changes and the minimum chunk length.
The number of tasks encoded in parallel depends on the value of Jobs->Misc->Parallel subjob count.

The current implementation crashes during the merge if there are spaces in the path.
For it to work, in the txt file listing the clips to merge the paths must be wrapped in the quote character '.

For example:

 
Code:
file 'video1.mp4'
file 'video2.mp4'

This is also explained here: Concatenate Videos Together Using ffmpeg
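
The merge itself is then done with ffmpeg's concat demuxer, for example like this (the file names are just placeholders; -safe 0 is needed when the list contains absolute paths):

Code:
ffmpeg -f concat -safe 0 -i files.txt -c copy merged.mkv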

Apart from this problem, it seems to be working.

Dan
Ah, that's where I hid the 'Parallel subjob count' spinbox.
Adding ticks (') should be an easy fix.

Will fix it later when I'm home and send you a link.

Cu Selur
Uploaded a new dev version (and a DeOldify test version), which now adds ticks around the file paths in the ffmpeg concat file.

Cu Selur
Thanks. It would be useful if you increased the max chunk size from 1024 to something near 9k/10k.

Dan
I assume you mean the max of minChunkSceneSize. => no problem
Send you links via pm.

Cu Selur
I noticed that you add the trim() of the clip at the end of the script.

Suppose there is a script using a lot of functions with high CPU/GPU usage.

Will Vapoursynth apply these functions to the whole clip length or, since the trim() is at the end, only to the trimmed clip?

I think it is better to move the trim() to the beginning of the script.

Dan
Quote:Will Vapoursynth apply these functions to the whole clip length or, since the trim() is at the end, only to the trimmed clip?
Running a script with:
Code:
clip = core.std.Trim(clip, 0, 100)
clip = RealESRGAN(clip=clip, model=5, device_index=0) # 2560x1408
vs.
Code:
clip = RealESRGAN(clip=clip, model=5, device_index=0) # 2560x1408
clip = core.std.Trim(clip, 0, 100)
has the same speed here.

=> So, no reason to change anything.
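
The reason is that VapourSynth only renders frames that are actually requested, so with the Trim at the end the expensive filter still only gets asked for the trimmed range. A small test (just an illustration; the FrameEval counter stands in for an expensive filter) shows this:

Code:
import vapoursynth as vs
core = vs.core

calls = 0
base = core.std.BlankClip(length=1000)

def pick(n):
    # stand-in for an expensive filter: count how often a frame is requested
    global calls
    calls += 1
    return base

clip = core.std.FrameEval(base, pick)
clip = core.std.Trim(clip, 0, 100)  # trim placed at the end, as Hybrid does

for n in range(clip.num_frames):
    clip.get_frame(n)
print(calls)  # 101: only the trimmed range was requested, not all 1000 frames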

Cu Selur