
If I am encoding from UHD HDR to HD SDR x265 10bit, do I want YUV420P8 or YUV420P10?
#1
If I am encoding from UHD HDR to HD SDR x265 10bit, do I want YUV420P8 or YUV420P10?

x265 -> Base -> Color Sampling/Space specifies i420 and 10bit.

From Filtering -> Vapoursynth -> Color -> HDR to SDR -> HDR10ToSDR it would seem that the target color format I want would be YUV420P10 for 10bit. 

However, the FourCC for P10 is Y3[11][10] and the FourCC for P8 is i420. Do I use the P8 8bit color format for 10bit bit-depth?

Also, is HDR10ToSDR the only filter I need to apply to change the color matrix and tone map for proper SDR?
#2
Whether you want 10bit or 8bit output depends on what you need. 10bit H.264 usually has no hardware decoding support (10bit H.265 usually does), and 10bit will deliver better quality at the same bit rate.

Vapoursynth in Hybrid will deliver the color space you specify in the encoder you use; the target format etc. you choose in HDR10ToSDR only applies to that filter.
Note that there is no strict standard for how to convert from HDR to SDR, so which method you choose is up to you; you might want to check the colors in the Vapoursynth preview and adjust the settings.
Personally, I usually use either 'HDR10 to SDR (DG)' or 'ToneMap (Placebo)'; the latter is slower, but depending on the settings I prefer it.

Yes, it's enough to use one of the filters in the 'HDR to SDR'-tab to adjust the colors, but you probably want to add a matrix conversion from bt2020 to bt709 for general compatibility.
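For illustration only (this is not the script Hybrid generates; it assumes the L-SMASH Works source filter is installed and uses a placeholder file name), a hand-written VapourSynth sketch of such a conversion could look like the following. Note that the zimg resize call only does a naive transfer conversion, not a real tone map, so highlights will clip compared to the dedicated HDR-to-SDR filters:

import vapoursynth as vs
core = vs.core

clip = core.lsmas.LWLibavSource(r"source.mkv")   # placeholder source
sdr = core.resize.Spline36(
    clip,
    width=1920, height=1080,                     # UHD -> HD
    format=vs.YUV420P10,                         # 10-bit 4:2:0 for a 10-bit x265 encode
    matrix_in_s="2020ncl",  matrix_s="709",      # color matrix bt2020 -> bt709
    transfer_in_s="st2084", transfer_s="709",    # PQ -> gamma (naive, clips highlights)
    primaries_in_s="2020",  primaries_s="709",
    range_in_s="limited",   range_s="limited",
)
sdr.set_output()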

Cu Selur
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
#3
I intend to encode to x265 10bit bit-depth. The question was really whether to use 8bit or 10bit YUV420 color format (P8 vs P10) since the default choice is P8 in HDR10ToSDR.

The second line of choices in HDR10ToSDR has the three target areas (format/matrix/range), so it would seem redundant to use another matrix conversion filter, right? The default choice there is 709.
#4
Yes, if you aim for 10bit output, use P10.
And yes, when using HDR10ToSDR as your filter of choice, no additional color matrix conversion is needed.
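Purely as a generic illustration (again not Hybrid's generated code, and with a placeholder source), the end of such a script would pin the format handed to the 10-bit x265 encode:

import vapoursynth as vs
core = vs.core

clip = core.lsmas.LWLibavSource(r"source.mkv")   # placeholder; HDR-to-SDR filtering goes here
# P10 = YUV420P10: 4:2:0 chroma at 10 bits per sample, matching an
# x265 encode run with --output-depth 10.
if clip.format.id != vs.YUV420P10:
    clip = core.resize.Point(clip, format=vs.YUV420P10)
clip.set_output()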

Cu Selur
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
#5
Perfect, thanks!
#6
So, with the default settings for both filters, ToneMap gives a very dull image and HDR10ToSDR gives a blown-out image. Neither is accurate to a streaming source I am using as a reference.
Any suggestions on best settings to use to get to the intended image?

Also, when working with a UHD BluRay remux, generating a preview takes a very long time (several minutes). It looks like it is loading the entire file every time it refreshes. Is there a way to keep it in a cache so it loads faster?
#7
The default settings are not suited for everything.
Like I wrote, there is no standard that defines 'this is THE way to convert HDR to SDR'; it all comes down to preference, it's basically color grading.

Quote: generating a preview takes a very long time (several minutes)
What hardware are you using?
What source filter are you using? (If your hardware can decode your source through the GPU, hardware decoding might help.)
What does your script look like?
Filtering 4k content requires some serious computing power, especially when using software-based filters.
Also keep in mind that you won't get the same colors in the SDR version that the HDR version has; if that were possible, there would be no need for HDR.
For example (left: original, right: filtered using ToneMap (Placebo)):
[Image: grafik.png]

Cu Selur

Ps.: When doing HDR to SDR you should ideally have an HDR monitor, and open the HDR content in a player next to the filter preview, so you can properly see what you are trying to emulate.
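One simple way to eyeball such a comparison in the Vapoursynth preview is to stack the untouched source next to the converted result. A minimal, self-contained sketch (placeholder file name, L-SMASH Works assumed as the source filter, and a naive zimg conversion standing in for whichever HDR-to-SDR filter you actually use):

import vapoursynth as vs
core = vs.core

src = core.lsmas.LWLibavSource(r"source.mkv")    # placeholder source

# Stand-in for the real HDR-to-SDR filter (HDR10ToSDR, ToneMap (Placebo), ...);
# a plain BT.2020/PQ -> BT.709 conversion so the sketch runs on its own.
sdr = core.resize.Spline36(src, format=vs.YUV420P10,
                           matrix_in_s="2020ncl",  matrix_s="709",
                           transfer_in_s="st2084", transfer_s="709",
                           primaries_in_s="2020",  primaries_s="709")

# StackHorizontal needs matching formats and heights; the untouched HDR frame
# on the left will look washed out in an SDR preview, which is expected.
left  = core.resize.Spline36(src, width=960, height=540, format=vs.YUV420P10)
right = core.resize.Spline36(sdr, width=960, height=540, format=vs.YUV420P10)
core.std.StackHorizontal([left, right]).set_output()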
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
#8
The rendering of the filter seems to be fairly quick once the file is loaded. But every time I refresh the preview it reads the entire file, which in this case is 74GB.
#9
You are using LWLibavSource in software decoding mode (which for SD content is usually faster than hardware decoding, but for UHD and higher resolutions hardware decoding is usually faster).
I would recommend either enabling hardware decoding (Filtering->Vapoursynth->Misc->Source->Libav hardware decoding mode) or using DGDecNV. Unlike DGDecNV, which creates a physical index file once when the source is first loaded and then reuses it, LWLibavSource creates its index anew each time.
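For illustration, the two options look roughly like this at the top of a script. The exact parameters depend on the installed plugin builds (prefer_hw in particular only exists in newer L-SMASH Works builds), DGSource needs a .dgi index created with DGIndexNV first, and the file names are placeholders:

import vapoursynth as vs
core = vs.core

# Option 1: L-SMASH Works with hardware decoding enabled
# (prefer_hw: 0 = software only, 3 = any available hardware decoder).
clip = core.lsmas.LWLibavSource(r"source.mkv", prefer_hw=3)

# Option 2: DGDecNV - decodes on an NVIDIA GPU and reuses its physical
# index file, so reopening the source is fast.
# clip = core.dgdecodenv.DGSource(r"source.dgi")

clip.set_output()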
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
#10
Thanks, DGDecNV did the trick. That was certainly annoying.

Using DGHDRtoSDR I was able to pretty closely replicate the streaming reference image (which is SDR) by setting White to 2800 lol.

To clarify, does DGDecNV hardware (GPU) only apply to VS filters? x265 should be purely software (CPU), correct?

Going off topic from the original thread, but how do I make Hybrid use 100% CPU power? During this encode it's using ~50%.
Do I adjust --frame-threads and --pools from their defaults?
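(For context only: those two switches sit on the x265 command line that the VapourSynth script gets piped into. A rough, hypothetical stand-alone pipeline, with example values that are not a recommendation, might look like this in Python:)

import subprocess

# vspipe feeds the VapourSynth script into a 10-bit x265 encode;
# --pools and --frame-threads are the switches that control threading.
vspipe = subprocess.Popen(
    ["vspipe", "-c", "y4m", "script.vpy", "-"],   # placeholder script; older vspipe uses --y4m
    stdout=subprocess.PIPE,
)
x265 = subprocess.Popen(
    ["x265", "--y4m", "--input", "-",
     "--output-depth", "10",
     "--pools", "16",          # worker threads per NUMA node (example value)
     "--frame-threads", "4",   # concurrently encoded frames (example value)
     "--output", "out.hevc"],
    stdin=vspipe.stdout,
)
vspipe.stdout.close()
x265.wait()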

