Parallel video encoding in Hybrid - Printable Version

+- Selur's Little Message Board (https://forum.selur.net)
+-- Forum: Talk, Talk, Talk (https://forum.selur.net/forum-5.html)
+--- Forum: Small Talk (https://forum.selur.net/forum-7.html)
+--- Thread: Parallel video encoding in Hybrid (/thread-3890.html)
Parallel video encoding in Hybrid - Dan64 - 02.10.2024

Hello Selur,

What do you think about adding parallel video encoding to Hybrid? As you know, parallel audio encoding is already available when job->Queue->Parallel subjob processing is enabled: if there are 2 audio streams, Hybrid encodes them in parallel by launching 2 tasks.

Theoretically it is possible to do the same thing for video via the Vapoursynth script. For example, suppose the video to encode has 10000 frames. Usually the Vapoursynth script generated by Hybrid will look like:

Code:
clip = core.lsmas.LWLibavSource(source="sample.mkv", format="YUV420P8", stream_index=0, cache=0, fpsnum=25, prefer_hw=0)

In this case Hybrid could generate 2 scripts like:

Code:
clip = core.lsmas.LWLibavSource(source="sample.mkv", format="YUV420P8", stream_index=0, cache=0, fpsnum=25, prefer_hw=0)

Code:
clip = core.lsmas.LWLibavSource(source="sample.mkv", format="YUV420P8", stream_index=0, cache=0, fpsnum=25, prefer_hw=0)

and encode them in parallel with 2 vspipe tasks. Afterwards the 2 encoded videos can be merged with mkvmerge.exe to obtain the usual sample_new.mkv.

What do you think?

Dan

RE: Parallel video encoding in Hybrid - Selur - 02.10.2024

Isn't what you are looking for what Hybrid calls 'chunked video encoding'?

Quote: Chunked encoding: *experimental*

Cu Selur

Ps.: I added this in the "2021.12.05.1" release. It's only really interesting for aomenc or similar encoders whose multithreading support is bad. (Using machine learning filters will probably crash horribly due to VRAM usage etc.; same with memory-hungry Vapoursynth scripts.)

RE: Parallel video encoding in Hybrid - Selur - 02.10.2024

Quote: This implies that 10 chunks will be generated.

At most. Hybrid analyzes the output for scene changes (at the start of the job creation) and combines those until the minimum length is achieved. (Yes, if your clip has no scene changes, you will just get one chunk.)

Quote: How many tasks will be used by Hybrid to encode these chunks?

Not sure, since it's ages since I wrote this. (I used it mainly with aomenc to stress test a new CPU / system.) Since I haven't used this for quite some time, I'm not even sure it still works. I can do some testing tomorrow.

Cu Selur

RE: Parallel video encoding in Hybrid - Dan64 - 02.10.2024

(02.10.2024, 20:10)Selur Wrote: Since I haven't used this for quite some time, I'm not even sure it still works.

I tested it. The number of tasks generated depends on the scene changes and the minimum chunk length. The number of tasks encoded in parallel depends on the value of Jobs->Misc->Parallel subjob count.

The current implementation crashes during the merge if there are spaces in the path. For it to work, the txt file listing the clips to merge has to wrap each path in single quotes ('), for example:

Code:
file 'video1.mp4'

This is also explained here: Concatenate Videos Together Using ffmpeg

Apart from this problem, it seems to work.

Dan

RE: Parallel video encoding in Hybrid - Selur - 02.10.2024

Ah, so that's where I hid the 'Parallel subjob count' spinbox. Adding ticks (') should be easy to fix. Will fix it later when I'm home and send you a link.

Cu Selur

RE: Parallel video encoding in Hybrid - Selur - 02.10.2024

Uploaded a new dev (and a DeOldify test) version, which now adds ticks around the file paths in the ffmpeg concat file.

Cu Selur
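For reference, a minimal sketch of what such a concat list looks like with the quoting fix applied; the file names below are placeholders, not Hybrid's actual output. Each path is wrapped in single quotes so that spaces in the path survive:

Code:
file 'D:/My Videos/chunk1.mkv'
file 'D:/My Videos/chunk2.mkv'

The merge step itself would then be something along the lines of ffmpeg -f concat -safe 0 -i chunks.txt -c copy sample_new.mkv, where -safe 0 is needed because the list contains absolute paths.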
RE: Parallel video encoding in Hybrid - Dan64 - 03.10.2024

Thanks. It would be useful if you increased the max chunk size from 1024 to something near 9k/10k.

Dan

RE: Parallel video encoding in Hybrid - Selur - 03.10.2024

I assume you mean the max of minChunkSceneSize. => no problem. I'll send you links via PM.

Cu Selur

RE: Parallel video encoding in Hybrid - Dan64 - 03.10.2024

I noticed that you add the trim() of the clip at the end of the script. Suppose there is a script using a lot of functions with heavy CPU/GPU usage. Will Vapoursynth apply these functions to the whole clip length or, since the trim() is at the end, only to the trimmed clip? I think it would be better to move the trim() to the beginning of the script.

Dan

RE: Parallel video encoding in Hybrid - Selur - 03.10.2024

Quote: Will Vapoursynth apply these functions to the whole clip length or, since the trim() is at the end, only to the trimmed clip?

Running a script with:

Code:
clip = core.std.Trim(clip, 0, 100)
clip = RealESRGAN(clip=clip, model=5, device_index=0) # 2560x1408

=> So, no reason to change anything.

Cu Selur
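To illustrate why the placement of the trim makes little difference: VapourSynth evaluates frames lazily, so the filters upstream of a Trim only run for the frames that are actually requested for output. Below is a minimal sketch of one per-chunk script in the spirit of what the thread discusses; the frame range, file names and the encode/merge commands in the comments are illustrative assumptions, not Hybrid's actual generated script.

Code:
# Sketch of chunk 1 of 2 for a 10000-frame source (illustrative, not Hybrid's output).
import vapoursynth as vs
core = vs.core

clip = core.lsmas.LWLibavSource(source="sample.mkv", format="YUV420P8", stream_index=0, cache=0, fpsnum=25, prefer_hw=0)
# ... heavy CPU/GPU filtering would go here ...
# Trim at the end, as Hybrid writes it: frames are pulled on demand, so the
# filters above are only evaluated for the frames this chunk actually outputs.
clip = core.std.Trim(clip, first=0, last=4999)   # the second script would use 5000..9999
clip.set_output()

# The chunk scripts could then be encoded and merged roughly like:
#   vspipe -c y4m chunk1.vpy - | x264 --demuxer y4m -o chunk1.mkv -
#   vspipe -c y4m chunk2.vpy - | x264 --demuxer y4m -o chunk2.mkv -
#   mkvmerge -o sample_new.mkv chunk1.mkv + chunk2.mkv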