Whether you want 10bit or 8bit output depends on what you need. 10bit H.264 usually has no hardware decoding support (10bit H.265 usually does), but 10bit will deliver better quality at the same bit rate.
Vapoursynth in Hybrid will deliver the color space you specify to whatever encoder you use; the target format etc. you choose in HDR10ToSDR only applies to that filter.
Note that there is no strict standard for how to convert from HDR to SDR; which method you use is up to you, so you might want to check the colors in the Vapoursynth preview and adjust the settings.
Personally, I usually use either 'HDR10 to SDR (DG)' or 'ToneMap (Placebo)'; the latter is slower, but depending on the settings I prefer it.
Yes, it's enough to use one of the filters in the 'HDR to SDR' tab to adjust the colors, but you might also want to add a matrix conversion from bt2020 to bt709 for general compatibility.
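In script form, that matrix conversion is roughly the following minimal sketch (placeholder path; in Hybrid the source filter and the HDR-to-SDR filter are added for you):
Code:
import vapoursynth as vs
core = vs.core

# placeholder source; Hybrid inserts the actual source filter automatically
clip = core.lsmas.LWLibavSource(r"C:\path\to\source.mkv")
# ... the HDR->SDR filter of your choice runs here, output is still bt2020 ...

# matrix conversion: bt2020 non-constant luminance -> bt709 for compatibility
clip = core.resize.Spline36(clip, matrix_in_s="2020ncl", matrix_s="709")
clip.set_output()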
I intend to encode with x265 at 10bit depth. The question was really whether to use the 8bit or 10bit YUV420 color format (P8 vs. P10), since the default choice in HDR10ToSDR is P8.
The second line of choices in HDR10ToSDR has the three target areas (format/matrix/range), so it would seem redundant to use another matrix conversion filter, right? The default choice there is 709.
So, with the default settings for both filters, ToneMap gives a very dull image and HDR10ToSDR gives a blown-out image. Neither is accurate to the streaming source I am using as a reference.
Any suggestions on best settings to use to get to the intended image?
Also, when working with a UHD BluRay remux, generating a preview takes a very long time (several minutes). It looks like it is loading the entire file every time it refreshes. Is there a way to keep it in a cache so it loads faster?
The default settings are not suited for everything/anything.
Like I wrote, there is no standard that defines 'this is THE way to convert HDR to SDR'; it all comes down to preferences, it's basically color grading.
Quote: generating a preview takes a very long time (several minutes)
What hardware are you using?
What source filter are you using? (assuming your hardware can decode your source through the GPU, this might help)
What does your script look like?
Filtering 4k content requires some serious computing power, especially when using software-based filters.
Also keep in mind that you won't get the same colors in the SDR version that the HDR version has; if that were possible, there would be no need for HDR.
For example, left: original, right: filtered using ToneMap (Placebo):
Cu Selur
Ps.: When doing HDR to SDR you should have an HDR monitor, and open the HDR content in a player alongside the filter preview to properly see what you are trying to emulate.
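For reference, a ToneMap (Placebo) chain in script form looks roughly like this minimal sketch; the exact parameters of placebo.Tonemap vary between vs-placebo releases, so treat the names below as assumptions and check the README of the build you have installed:
Code:
import vapoursynth as vs
core = vs.core

clip = core.lsmas.LWLibavSource(r"C:\path\to\uhd_hdr_source.mkv")  # placeholder

# tonemapping is done in RGB, so convert the bt2020 YUV input first
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="2020ncl")
# src_csp=1 (HDR10/PQ) -> dst_csp=0 (SDR); peak detection adapts per scene
clip = core.placebo.Tonemap(clip, src_csp=1, dst_csp=0, dynamic_peak_detection=1)
# back to YUV with a bt709 matrix for the encoder
clip = core.resize.Bicubic(clip, format=vs.YUV420P10, matrix_s="709")
clip.set_output()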
The rendering of the filter seems to be fairly quick once the file is loaded. But every time I refresh the preview it reads the entire file, which in this case is 74GB.
You are using LWLibavSource in software decoding mode (which for SD content is usually faster than hardware decoding, but for UHD and higher resolutions hardware decoding is usually faster).
I would recommend either enabling hardware decoding (Filtering->Vapoursynth->Misc->Source->Libav hardware decoding mode) or using DGDecNV. Unlike LWLibavSource, DGDecNV creates a physical index file once when the file is first loaded and then reuses it; LWLibavSource creates the index anew each time.
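As a rough sketch of the difference between the two source filters (hypothetical paths; Hybrid runs DGIndexNV for you when DGDecNV is selected):
Code:
import vapoursynth as vs
core = vs.core

# LWLibavSource indexes the remux when the script is loaded:
# clip = core.lsmas.LWLibavSource(r"C:\path\to\remux.mkv")

# DGSource opens the .dgi index that DGIndexNV wrote once and reuses it,
# so reloading the preview does not re-scan the whole 74 GB file
clip = core.dgdecodenv.DGSource(r"C:\path\to\remux.dgi")
clip.set_output()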
Thanks, DGDecNV did the trick. That was certainly annoying.
Using DGHDRtoSDR, I was able to pretty closely replicate the streaming reference image (which is SDR) by setting White to 2800 lol.
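In script form, the DG chain is roughly this minimal sketch; 'white' is assumed to be the plugin parameter behind the GUI's 'White' setting, so double-check it against the DGDecNV documentation:
Code:
import vapoursynth as vs
core = vs.core

clip = core.dgdecodenv.DGSource(r"C:\path\to\remux.dgi")  # placeholder index

# raise the white point to brighten the tonemapped result
# ('white' is assumed to match Hybrid's 'White' setting)
clip = core.dghdrtosdr.DGHDRtoSDR(clip, white=2800)

# matrix conversion bt2020 -> bt709 as discussed above
clip = core.resize.Spline36(clip, matrix_in_s="2020ncl", matrix_s="709")
clip.set_output()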
To clarify, does DGDecNV's hardware (GPU) use only apply to the VS filters? x265 should be purely software (CPU), correct?
Going off topic from the original thread, but how do I make Hybrid use 100% CPU power? During this encode it's using ~50%.
Do I adjust --frame-threads and --pools from their defaults?
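In case it helps to see where those options would go, here is a rough sketch of the kind of vspipe-to-x265 pipe Hybrid sets up (hypothetical script and file names; by default x265 derives --frame-threads and --pools from the detected core count):
Code:
import subprocess

# vspipe renders the VapourSynth script and writes Y4M to stdout
vspipe = subprocess.Popen(
    ["vspipe", "-c", "y4m", "script.vpy", "-"],
    stdout=subprocess.PIPE)

# x265 reads the Y4M stream from stdin; the threading options are example values
x265 = subprocess.Popen(
    ["x265", "--y4m", "--output-depth", "10",
     "--pools", "+",           # use all cores / NUMA nodes
     "--frame-threads", "4",   # example value, the default is based on core count
     "-o", "out.265", "-"],
    stdin=vspipe.stdout)
vspipe.stdout.close()
x265.communicate()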