NNEDI3 - rfactor settings
#1
I am converting some MiniDV footage to 4K and applying a number of filters.

When I resize, I set the height to 2160 and allow the width to be automatically calculated to 2880. I also set the output PAR to 1:1, understanding that the input PAR is 8:9. In VS, I use the NNEDI3 resizer and check GPU. I have an Intel Arc A380 graphics card.
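
For reference, a minimal sketch of the arithmetic behind that auto-calculated width, assuming the 8:9 input PAR and 1:1 output PAR stated above:

from fractions import Fraction

# 720x480 DV NTSC storage with an 8:9 pixel aspect ratio (PAR)
src_w, src_h = 720, 480
input_par = Fraction(8, 9)

# display aspect ratio = storage aspect * PAR = (720/480) * (8/9) = 4/3
display_ar = Fraction(src_w, src_h) * input_par

# at output PAR 1:1 and a target height of 2160, the width follows directly
target_h = 2160
target_w = target_h * display_ar
print(display_ar, target_w)  # 4/3 2880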

I bob-deinterlace using QTGMC and apply some quality filters. The output is what I want, but it is very slow to process (~2.11 fps). When I fed the VS script to an AI (Gemini in my case), it noted that the script grosses the size up by a factor of 8 (rfactor=8) only to downscale it immediately afterwards.

Is there a way to adjust this in the Hybrid GUI? Here is the script I generate. Please do not hesitate to point out any other inefficiencies you find in my script. Thank you
# Imports
import vapoursynth as vs
# getting Vapoursynth core
import ctypes
import sys
import os
core = vs.core
# Limit thread count to 16
core.num_threads = 16
# Import scripts folder
scriptPath = 'C:/Program Files/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
# Loading Support Files
Dllref = ctypes.windll.LoadLibrary("C:/Program Files/Hybrid/64bit/vsfilters/Support/libfftw3f-3.dll")
# loading plugins
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/SharpenFilter/CAS/CAS.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/ResizeFilter/nnedi3/NNEDI3CL.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/DenoiseFilter/FFT3DFilter/fft3dfilter.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/Support/libmvtools_sf_em64t.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/Support/TCanny.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/Support/vszip.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/DenoiseFilter/CTMF/CTMF.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/GrainFilter/AddGrain/AddGrain.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/DenoiseFilter/NEO_FFT3DFilter/neo-fft3d.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/DenoiseFilter/DFTTest/DFTTest.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/Support/EEDI3m.dll")# vsQTGMC
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/ResizeFilter/nnedi3/vsznedi3.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/Support/libmvtools.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/DeinterlaceFilter/Bwdif/Bwdif.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/Support/libllvmexpr.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/DenoiseFilter/ZSmooth/zsmooth.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/LSMASHSource.dll")
# Import scripts
import edi_rpow2
import degrain
import dehalo
import qtgmc
import validate
# Source: 'T:\OneDrive\AVI Master Files\2005MiniDV\01 Jan\.2005-01-05_19-28-49.AVI'
# Current color space: YUV411P8, bit depth: 8, resolution: 720x480, frame rate: 29.97fps, scanorder: bottom field first, yuv luminance scale: limited, matrix: 470bg, format: DV
# Loading T:\OneDrive\AVI Master Files\2005MiniDV\01 Jan\.2005-01-05_19-28-49.AVI using LWLibavSource
clip = core.lsmas.LWLibavSource(source="T:/OneDrive/AVI Master Files/2005MiniDV/01 Jan/.2005-01-05_19-28-49.AVI", format="YUV411P8", stream_index=0, cache=0, prefer_hw=0)
frame = clip.get_frame(0)
# setting color matrix to 470bg.
clip = core.std.SetFrameProps(clip, _Matrix=vs.MATRIX_BT470_BG)
# setting color transfer (vs.TRANSFER_BT601), if it is not set.
if validate.transferIsInvalid(clip):
    clip = core.std.SetFrameProps(clip=clip, _Transfer=vs.TRANSFER_BT601)
# setting color primaries info (to vs.PRIMARIES_BT470_BG), if it is not set.
if validate.primariesIsInvalid(clip):
    clip = core.std.SetFrameProps(clip=clip, _Primaries=vs.PRIMARIES_BT470_BG)
# setting color range to TV (limited) range.
clip = core.std.SetFrameProps(clip=clip, _ColorRange=vs.RANGE_LIMITED)
# making sure frame rate is set to 29.97fps
clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
# making sure the detected scan type is set (detected: bottom field first)
clip = core.std.SetFrameProps(clip=clip, _FieldBased=vs.FIELD_BOTTOM) # bff
# adjusting color space from YUV411P8 to YUV444P16 for vsQTGMC
clip = core.resize.Bicubic(clip=clip, format=vs.YUV444P16)
# Deinterlacing using QTGMC
clip = qtgmc.QTGMC(Input=clip, Preset="Slower", InputType=0, TFF=False, TR2=1, Sharpness=1.0, SourceMatch=0, Lossless=0, EZDenoise=1.00, NoisePreset="Fast") # new fps: 59.94
# Making sure content is perceived as frame based
clip = core.std.SetFrameProps(clip=clip, _FieldBased=vs.FIELD_PROGRESSIVE) # progressive
# ColorMatrix: adjusting color matrix from 470bg to 709
# adjusting luma range to 'limited' due to post clipping
clip = core.resize.Bicubic(clip, matrix_in_s="470bg", matrix_s="709", range_in=0, range=0)
# applying FineDeHalo to remove halos
clip = dehalo.FineDehalo(clip, thlimi=53, darkstr=1.50)
# removing grain using TemporalDegrain2
clip = degrain.TemporalDegrain2(clip, degrainPlane=4, meAlgPar=False, postFFT=1, ppSAD1=9, ppSAD2=6, ppSCD1=4, thSCD2=100, fftThreads=4)
# resizing using NNEDI3CL
# current: 720x480 target: 2880x2160 -> pow: 8
clip = edi_rpow2.nnedi3cl_rpow2(clip, rfactor=8, nsize=2, nns=2)# 5760x3840
# resizing 5760x3840 to 2880x2160
clip = core.fmtc.resample(clip, w=2880, h=2160, kernel="spline64", interlaced=False, interlacedd=False)# before YUV444P16 after YUV444P16
# contrast sharpening using CAS
clip = core.cas.CAS(clip, sharpness=0.700)
# letterboxing 2880x2160 to 3840x2160
clip = core.std.AddBorders(clip, left=480, right=480, top=0, bottom=0)
# adjusting output color from: YUV444P16 to YUV420P10 for SvtAv1Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, dither_type="error_diffusion")
# set output frame rate to 59.94fps (progressive)
clip = core.std.AssumeFPS(clip=clip, fpsnum=60000, fpsden=1001)
# output
clip.set_output()
#2
NNEDI3 (via rpow2) can only resize by powers of 2, i.e. 2, 4, 8, 16.
# resizing using NNEDI3CL
# current: 720x480 target: 2880x2160 -> pow: 8
clip = edi_rpow2.nnedi3cl_rpow2(clip, rfactor=8, nsize=2, nns=2)# 5760x3840
I'm wondering why pow: 4 isn't enough.
720 * 4 = 2880, so 4 should be enough for the width.

=> this might be a bug.

I'm looking at it.

=> no, not a bug: the target resolution is 2880x2160; with a multiplier of 4 the width is 2880, but the height is only 1920. So, to reach the target resolution (or higher), the multiplier must be 8.

I could probably add an option to do the PAR adjustment before the resizing... that way, in your case, 720x480 would first be resized to 720x540, and then a multiplier of 4 would be enough to reach your target resolution.
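
A minimal sketch of the power-of-two arithmetic behind both cases, assuming the rpow2 wrapper only accepts rfactor values of 2, 4, 8, ...:

import math

def required_rpow2(src_w, src_h, dst_w, dst_h):
    # smallest power-of-two factor whose result covers both target dimensions
    factor = max(dst_w / src_w, dst_h / src_h)
    return 2 ** math.ceil(math.log2(factor))

print(required_rpow2(720, 480, 2880, 2160))  # 8, because 480*4 = 1920 < 2160
print(required_rpow2(720, 540, 2880, 2160))  # 4, after the PAR adjustment to 720x540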

Cu Selur
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
#3
Thank you for looking into this. Just FYI, I tried auto-adjusting the height while setting the width to 2880, and also disabling auto-adjust and entering both 2880 and 2160 manually, but rfactor remained at 8.

Regards.
#4
Quote:Thank you for looking into this. Just FYI, I tried auto-adjusting the height while setting the width to 2880, and also disabling auto-adjust and entering both 2880 and 2160 manually, but rfactor remained at 8.
No surprise: this has nothing to do with auto-adjust.
(Auto-adjust should always be used! If you are not using it, you are probably making a mistake or trying to avoid adjusting the PAR for some unknown reason.)


=> I uploaded a new dev version which has "Filtering->Vapoursynth->Misc->Script->Adjust PAR before resize" as a new option; when enabled, in your case it will create something like:
# resizing 720x480 to 720x540 to adjust for PAR before resizing
clip = core.fmtc.resample(clip, w=720, h=540, kernel="spline64", interlaced=False, interlacedd=False) #  before YUV420P8 after YUV420P16
# resizing using NNEDI3CL
# current: 720x540 target: 2880x2160 -> pow: 4
clip = edi_rpow2.nnedi3cl_rpow2(clip, rfactor=4, nsize=2, nns=2) # 2880x2160

Cu Selur

Ps.: I also added a code-block in your initial post and also adjusted the title.
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
#5
I am getting AI input and trying one thing: I added the following as a beforeCAS block in the custom area of VS.

# 0. Load GPU Plugin
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/ResizeFilter/nnedi3/NNEDI3CL.dll")

# 1. THE STRETCH KILLER (Metadata Reset)
# We force the clip to recognize itself as Square Pixels (1:1) BEFORE upscaling.
# This prevents the "Double Stretch" issue.
clip = core.std.SetFrameProps(clip, _SARNum=1, _SARDen=1)

# 2. THE GPU UPSCALE
# NNEDI3CL handles the high-quality 2x jump (720 -> 1440)
clip = core.nnedi3cl.NNEDI3CL(clip, field=0, dh=True, dw=True, nsize=0, nns=3, device=0)
# Final stretch to hit 4K height (2160)
clip = core.resize.Spline36(clip, 2880, 2160)

# 3. PILLARBOXING
# Center the 4:3 image inside the 3840 wide 4K frame
clip = core.std.AddBorders(clip, left=480, right=480)


I concurrently unchecked the resize box and the "Convert output to PAR" box. The encoding runs, but the output is stretched vertically. I assume it's a PAR issue; the AI keeps telling me its custom script will fix it, but no luck. For completeness, below is the entire script with these changes (a PAR-aware variant is also sketched after the script).

# Imports
import vapoursynth as vs
# getting Vapoursynth core
import ctypes
import sys
import os
core = vs.core
# Limit thread count to 16
core.num_threads = 16
# Import scripts folder
scriptPath = 'C:/Program Files/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
# Loading Support Files
Dllref = ctypes.windll.LoadLibrary("C:/Program Files/Hybrid/64bit/vsfilters/Support/libfftw3f-3.dll")
# loading plugins
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/SharpenFilter/CAS/CAS.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/DenoiseFilter/FFT3DFilter/fft3dfilter.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/Support/libmvtools_sf_em64t.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/Support/TCanny.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/Support/vszip.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/DenoiseFilter/CTMF/CTMF.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/GrainFilter/AddGrain/AddGrain.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/DenoiseFilter/NEO_FFT3DFilter/neo-fft3d.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/DenoiseFilter/DFTTest/DFTTest.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/Support/EEDI3m.dll")# vsQTGMC
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/ResizeFilter/nnedi3/vsznedi3.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/Support/libmvtools.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/DeinterlaceFilter/Bwdif/Bwdif.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/Support/libllvmexpr.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/DenoiseFilter/ZSmooth/zsmooth.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/LSMASHSource.dll")
# defining beforeCAS-function - START
def beforeCAS(clip):
    # 0. Load GPU Plugin
    core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/ResizeFilter/nnedi3/NNEDI3CL.dll")

    # 1. THE STRETCH KILLER (Metadata Reset)
    # We force the clip to recognize itself as Square Pixels (1:1) BEFORE upscaling.
    # This prevents the "Double Stretch" issue.
    clip = core.std.SetFrameProps(clip, _SARNum=1, _SARDen=1)

    # 2. THE GPU UPSCALE
    # NNEDI3CL handles the high-quality 2x jump (720 -> 1440)
    clip = core.nnedi3cl.NNEDI3CL(clip, field=0, dh=True, dw=True, nsize=0, nns=3, device=0)
    # Final stretch to hit 4K height (2160)
    clip = core.resize.Spline36(clip, 2880, 2160)

    # 3. PILLARBOXING
    # Center the 4:3 image inside the 3840 wide 4K frame
    clip = core.std.AddBorders(clip, left=480, right=480)
    return [clip]
# defining beforeCAS-function - END

# Import scripts
import degrain
import dehalo
import qtgmc
import validate
# Source: 'T:\OneDrive\AVI Master Files\2005MiniDV\01 Jan\.2005-01-17_19-44-48.AVI'
# Current color space: YUV411P8, bit depth: 8, resolution: 720x480, frame rate: 29.97fps, scanorder: bottom field first, yuv luminance scale: limited, matrix: 470bg, format: DV
# Loading T:\OneDrive\AVI Master Files\2005MiniDV\01 Jan\.2005-01-17_19-44-48.AVI using LWLibavSource
clip = core.lsmas.LWLibavSource(source="T:/OneDrive/AVI Master Files/2005MiniDV/01 Jan/.2005-01-17_19-44-48.AVI", format="YUV411P8", stream_index=0, cache=0, prefer_hw=0)
frame = clip.get_frame(0)
# setting color matrix to 470bg.
clip = core.std.SetFrameProps(clip, _Matrix=vs.MATRIX_BT470_BG)
# setting color transfer (vs.TRANSFER_BT601), if it is not set.
if validate.transferIsInvalid(clip):
    clip = core.std.SetFrameProps(clip=clip, _Transfer=vs.TRANSFER_BT601)
# setting color primaries info (to vs.PRIMARIES_BT470_BG), if it is not set.
if validate.primariesIsInvalid(clip):
    clip = core.std.SetFrameProps(clip=clip, _Primaries=vs.PRIMARIES_BT470_BG)
# setting color range to TV (limited) range.
clip = core.std.SetFrameProps(clip=clip, _ColorRange=vs.RANGE_LIMITED)
# making sure frame rate is set to 29.97fps
clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
# making sure the detected scan type is set (detected: bottom field first)
clip = core.std.SetFrameProps(clip=clip, _FieldBased=vs.FIELD_BOTTOM) # bff
# adjusting color space from YUV411P8 to YUV444P16 for vsQTGMC
clip = core.resize.Bicubic(clip=clip, format=vs.YUV444P16)
# Deinterlacing using QTGMC
clip = qtgmc.QTGMC(Input=clip, Preset="Slow", InputType=0, TFF=False, TR2=1, Sharpness=1.0, SourceMatch=0, Lossless=0) # new fps: 59.94
# Making sure content is perceived as frame based
clip = core.std.SetFrameProps(clip=clip, _FieldBased=vs.FIELD_PROGRESSIVE) # progressive
# ColorMatrix: adjusting color matrix from 470bg to 709
# adjusting luma range to 'limited' due to post clipping
clip = core.resize.Bicubic(clip, matrix_in_s="470bg", matrix_s="709", range_in=0, range=0)
# applying FineDeHalo to remove halos
clip = dehalo.FineDehalo(clip, thlimi=53, darkstr=1.50)
# removing grain using TemporalDegrain2
clip = degrain.TemporalDegrain2(clip, degrainPlane=4, meAlgPar=False, postFFT=1, ppSAD1=9, ppSAD2=6, ppSCD1=4, thSCD2=100, fftThreads=4)
[clip] = beforeCAS(clip)
# clip current meta; color space: YUV444P16, bit depth: 16, resolution: 720x480, fps: 59.94, color matrix: 709, yuv luminance scale: limited, scanorder: progressive, full height: true
# contrast sharpening using CAS
clip = core.cas.CAS(clip, sharpness=0.700)
# adjusting output color from: YUV444P16 to YUV420P10 for SvtAv1Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, dither_type="error_diffusion")
# set output frame rate to 59.94fps (progressive)
clip = core.std.AssumeFPS(clip=clip, fpsnum=60000, fpsden=1001)
# output
clip.set_output()
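
As noted above, here is a rough, untested sketch of what a PAR-aware variant of the custom block might look like. It assumes the vertical stretch comes from upscaling the 720x480 storage frame directly while Hybrid's own resize and PAR handling are disabled; the 720x540 intermediate size is borrowed from the "adjust PAR before resize" idea earlier in the thread and is not something this script currently generates.

def beforeCAS(clip):
    # assumes NNEDI3CL.dll is already loaded, as in the custom block above
    # undo the 8:9 PAR first: 720x480 storage -> 720x540 square-pixel frame
    clip = core.fmtc.resample(clip, w=720, h=540, kernel="spline64")
    # two NNEDI3CL doublings: 720x540 -> 1440x1080 -> 2880x2160, no downscale step needed
    clip = core.nnedi3cl.NNEDI3CL(clip, field=0, dh=True, dw=True, nsize=0, nns=3, device=0)
    clip = core.nnedi3cl.NNEDI3CL(clip, field=0, dh=True, dw=True, nsize=0, nns=3, device=0)
    # pillarbox the 4:3 image to a full 3840x2160 frame
    clip = core.std.AddBorders(clip, left=480, right=480)
    return [clip]

Note that this still does not tell Hybrid the clip was resized, so Hybrid's own assumptions about the output resolution would remain wrong.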
#6
Your custom code does not signal to Hybrid that you resized the clip, so Hybrid does not know you resized your source => this will likely create issues.
Hybrid assumes:
# clip current meta; color space: YUV444P16, bit depth: 16, resolution: 720x480, fps: 59.94, color matrix: 709, yuv luminance scale: limited, scanorder: progressive, full height: true
=> this might break encoding (depending on the chosen encoder); VBV and other restriction calculations will also be wrong.

If you try to work around Hybrid instead of learning how to properly use it, you should not use Hybrid and you can't expect help.

Cu Selur

Ps.: going to sleep now.
PPs.: use 'code'-blocks if you post code, otherwise your post is unnecessarily difficult to read.
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
#7
I most definitely would like to work in Hybrid; I am not trying to work around it. It has the custom script option, so I used it. If Hybrid didn't have it, I wouldn't have used it, of course. Can you suggest anything else within Hybrid that will let me adjust the rfactor, or otherwise avoid creating such a huge intermediate upscale that I then have to downscale?

thank you
#8
Use the dev version and the new option...
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.