Curious as I am, the idea nagged me, so I had to think about it some more and test it.
I even wrote some code in Hybrid to:
1. do a scene change detection on a source (took ~2min on a 45min SD source at a speed of ~600fps)
2. create a chunk for each of these scene changes (scenes shorter than 10 frames get joined with the next until at least 10 frames make up a chunk)
3. encode X chunks in parallel (using aomenc)
-> this maxed out the CPU with any aomenc settings
4. join the chunks at the end
5. do the normal muxing&co
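To illustrate steps 2 and 3, here's a minimal Python sketch of the chunk-building and parallel-encode idea. This is not the actual Hybrid code; `encode_chunk` is a hypothetical placeholder for the real trim-and-aomenc call, and the scene-change frame numbers are made up for the example.

```python
from concurrent.futures import ProcessPoolExecutor

def build_chunks(scene_starts, total_frames, min_len=10):
    """Merge scene-change cut points into chunks of at least min_len frames.

    scene_starts: sorted frame numbers where a scene begins (first entry is 0).
    Returns a list of (start, end) frame ranges, end exclusive.
    """
    bounds = list(scene_starts) + [total_frames]
    chunks = []
    start = bounds[0]
    for cut in bounds[1:]:
        # close the chunk once it reaches min_len, or at the end of the clip
        if cut - start >= min_len or cut == total_frames:
            chunks.append((start, cut))
            start = cut
    # a too-short tail chunk gets merged into the chunk before it
    if len(chunks) > 1 and chunks[-1][1] - chunks[-1][0] < min_len:
        tail = chunks.pop()
        prev = chunks.pop()
        chunks.append((prev[0], tail[1]))
    return chunks

def encode_chunk(chunk):
    # placeholder: the real tool would trim the source to chunk[0]..chunk[1],
    # run aomenc on that range, and return the path of the encoded piece
    start, end = chunk
    return f"chunk_{start}_{end}.ivf"

if __name__ == "__main__":
    chunks = build_chunks([0, 5, 30, 95], total_frames=100)
    # encode X chunks in parallel (here X = 4 worker processes)
    with ProcessPoolExecutor(max_workers=4) as pool:
        outputs = list(pool.map(encode_chunk, chunks))
    print(outputs)
```

The joining in step 4 would then just concatenate the encoded pieces in order.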
This worked, but (as expected) it totally killed the progress indication.
So far so good, but then I thought about a few things that are problematic with this approach:
a. any filtering which might change the scene detection
b. any filtering which would change the frame count
since these would require running the scene detection on the filtered source, not (as I did) on the unfiltered source.
Also, I don't see how 2-pass encoding could be done properly with this unless you run the first pass on the whole source and the second pass only on the chunks (which would require manipulating the stats file).
qp file creation for chapters would also need to be disabled, but that shouldn't cause a problem, since you would probably end up with many more key frames than you would normally get.
(Hybrid also tends to freeze a bit during navigation with 2500+ subjobs.)
Seeing these restrictions, and that using 'Tile (columns/rows)' 6/2 with 'CPU utilization' 0 should already help a lot with CPU usage, I probably won't finish this and will dump the written code; the restrictions that come with chunking just don't make it seem worth the effort.
-> So, sorry, I'm back at my initial "this will not happen in Hybrid, there are no plans to support this."
Cu Selur