32k is just an arbitrary limit for the chunk size. (I needed to set some limit.)
At the moment, the number of chunks is determined by:
a. the length of the video
b. the number of scene changes
c. the minimal chunk size (= minimal number of frames in a chunk). This is needed to avoid creating tons of chunks due to scene changes.
What Hybrid does is run a scene change detection.
It then goes through these scene changes and creates a list of trims while respecting the 'minimal chunk size'.
For each of the resulting trims, Hybrid will later create a separate chunk to encode.
void JobGeneration::collectedSceneChanges(const QStringList& sceneChanges)
{
  int minSceneChangeSize = m_currentParameters.modelHandler()->getGlobalDataModel()->intValue(QString("minChunkSceneSize"));
  m_sceneChangeTrims.clear();
  int current = -1;  // index of the current frame line in the log
  int previous = -1; // start frame of the chunk currently being built
  int count = sceneChanges.count();
  if (count == 0) {
    sendMessage(HERROR, QString("Failed detecting scene changes,.."));
    return;
  }
  count = 0;
  foreach(QString line, sceneChanges) {
    line = line.trimmed();
    if (line.isEmpty() || line.startsWith(HASH)) { // skip empty and comment lines
      continue;
    }
    current++;
    if (!line.startsWith(QString("i"))) { // only keyframe ('i') lines mark scene changes
      continue;
    }
    if (previous == -1) { // first keyframe opens the first chunk
      previous = current;
      continue;
    }
    if ((current - previous) < minSceneChangeSize) { // chunk would be too small -> merge into the current chunk
      continue;
    }
    count++;
    m_sceneChangeTrims << QString("%1,%2").arg(previous).arg(current - 1);
    previous = current;
  }
  sendMessage(HLOG, QString("Found %1 scene changes,..").arg(count));
  m_sceneChangeTrims << QString("%1,%2").arg(previous).arg(current); // last chunk runs to the final frame
  this->createJob();
}
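For illustration only (this is not Hybrid's code), the same trim logic sketched in Python, assuming a Scxvid-style log where each frame produces a line starting with 'i' for a keyframe/scene change and 'p'/'b' otherwise:

import re  # not strictly needed; plain string methods suffice

def build_trims(log_lines, min_chunk_size):
    trims = []
    current = -1   # index of the current frame line
    previous = -1  # start frame of the chunk currently being built
    for line in log_lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip comments and empty lines
        current += 1
        if not line.startswith('i'):
            continue  # only keyframes can start a new chunk
        if previous == -1:
            previous = current  # first keyframe opens the first chunk
            continue
        if current - previous < min_chunk_size:
            continue  # keyframe too close -> merge into the current chunk
        trims.append((previous, current - 1))
        previous = current
    trims.append((previous, current))  # last chunk runs to the final frame
    return trims

# e.g. with min_chunk_size=3 the keyframe at frame 2 is merged away:
# build_trims(['i','p','i','p','p','i','p','p'], 3) -> [(0, 4), (5, 7)]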
Yes, other code to create trims could be written, but I don't plan on doing that at the moment.
(No time; I'm trying to get Hybrid's build and deploy scripts working at the moment.)
I also don't see why using a decent chunk size and limiting the number of parallel processed sub jobs isn't enough.
Cu Selur
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
(26.10.2025, 13:53)Selur Wrote: I also don't see why using a decent chunk size and limiting the number of parallel processed sub jobs isn't enough.
Limiting the number of parallel processes is enough, but what about also enabling "Parallel subjob processing"? Could this additional option interfere with the parallel processing of chunks?
Quote: Limiting the number of parallel processes is enough, but what about also enabling "Parallel subjob processing"? Could this additional option interfere with the parallel processing of chunks?
"Parallel subjob processing" limits the number of parallel processed sub jobs, since each chunk is a sub job this already does what you want.
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
26.10.2025, 15:54
That's "Parallel Jobs' not 'Parallel subjob count' (which is under Jobs->Misc).
Per source, one job with xy sub jobs are created. (extract, convert, mux,...)
'Parallel Jobs' allows to process multiple jobs/sources in parallel.
'Parallel subjobs' allows during the processing ob a job to process multiple of its sub jobs in parallel.
Cu Selur
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
I tried to use chunk encoding; a script named "tempSceneChangeDetectionVapoursynthFile" was generated, with the following code:
# adjusting color space from YUV420P8 to RGB24 for vsHAVC
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, matrix_in_s="709", range_in_s="limited", range_s="full")
# adding colors using HAVC
clip = havc.HAVC_main(clip=clip, Preset="veryslow", ColorModel="Video+Artistic", CombMethod="ChromaBound Adaptive", VideoTune="vivid", ColorTemp="low", ColorFix="retinex/red", FrameInterp=0, ColorMap="red->brown", ColorTune="medium", BlackWhiteTune="light", BlackWhiteMode=0, BlackWhiteBlend=True, EnableDeepEx=False, enable_fp16=True)
# Resizing using 10 - bicubic spline
clip = core.fmtc.resample(clip=clip, kernel="spline16", w=320, h=136, interlaced=False, interlacedd=False) # resolution 320x136 before RGB24 after RGB48
# adjusting color space from RGB48 to YUV420P8 for vsSCXvidFilter
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="709", range_in_s="full", range_s="limited", dither_type="error_diffusion") # additional resize to allow target color sampling
clip = core.scxvid.Scxvid(clip=clip, log="E:/COLOR/TO_COLORIZE/I Giovani Leoni (1958)/scene_change_detection.txt", use_slices=True)
# set output frame rate to 23.976fps (progressive)
clip = core.std.AssumeFPS(clip=clip, fpsnum=24000, fpsden=1001)
# output
clip.set_output()
The problem here is that, to generate the scene changes, the full colorization is performed, while the scene-change detection should be run using only the input clip.
This way, it cannot be used to speed up the colorization process.
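For illustration, a detection-only script could look roughly like the sketch below, reusing the resize and Scxvid calls from the generated script but skipping the HAVC colorization. The source-loading line and both file paths are placeholders (not what Hybrid actually generates); this is only a sketch of the idea.

import vapoursynth as vs
core = vs.core

# placeholder source filter - Hybrid would insert its own source loading here
clip = core.lsmas.LWLibavSource(source="path/to/source.mkv")
# downscale directly in YUV420P8; no colorization is needed for scene change detection
clip = core.resize.Bicubic(clip=clip, width=320, height=136, format=vs.YUV420P8)
# write the scene change log from the untouched (uncolorized) input clip
clip = core.scxvid.Scxvid(clip=clip, log="path/to/scene_change_detection.txt", use_slices=True)
# keep the output frame rate consistent with the main script
clip = core.std.AssumeFPS(clip=clip, fpsnum=24000, fpsden=1001)
# output
clip.set_output()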