31.03.2020, 05:19
From my personal tests I can see that a 10-bit ProRes source transcoded to ProRes keeps true 10-bit precision, including all the hidden dynamic range data. It doesn't look like it was upscaled from 8-bit depth; in an 8-bit pipeline, that hidden dynamic range data usually just gets clipped.
So is the processing done in at least 12 or 16 bit? Or how does it work inside?
Here is another test example: a gradient with a Levels adjustment applied, to illustrate the limits of 8-bit depth. There is some tiny degradation in the FFmpeg/Hybrid result, so yeah, it is not 10000000% perfect. But at the same time, it doesn't look like crappy 8-bit processing either.
Rendered from Resolve to 12bit ProRes444, applied Levels in Resolve
Rendered from Hybrid to 12bit ProRes444, applied Levels in Resolve
Rendered from Resolve to 8bit Tiff, applied Levels in Resolve
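To show what the gradient test is measuring, here is a small sketch (my own illustration, not Hybrid's or FFmpeg's actual internals): quantize an ideal gradient to 8-bit vs 16-bit precision, then apply a simple Levels-style stretch. The lower-precision path leaves far fewer distinct output steps, which is exactly the banding you'd see if the pipeline were really 8-bit.

```python
# Hypothetical illustration of 8-bit vs 16-bit processing of a gradient
# followed by a Levels adjustment. Fewer distinct steps = visible banding.

def levels_stretch(x, black=0.25, white=0.75):
    """Simple Levels: remap [black, white] to [0, 1], clipping outside."""
    return min(max((x - black) / (white - black), 0.0), 1.0)

def distinct_levels(bits):
    """Quantize a 1024-step gradient to `bits` depth, stretch, count steps."""
    scale = (1 << bits) - 1
    grad = [i / 1023 for i in range(1024)]              # ideal gradient
    quantized = [round(v * scale) / scale for v in grad]
    return len({levels_stretch(v) for v in quantized})  # distinct outputs

print(distinct_levels(8))    # → 130 distinct steps: banding
print(distinct_levels(16))   # → 514 distinct steps: smooth
```

In the 8-bit path only 128 code values survive between the black and white points (plus clipped black and white), while the 16-bit path keeps every step of the source gradient in that range. That matches what the ProRes renders above suggest: Hybrid's output behaves like the high-precision path, not the 8-bit one.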