
[BUG] Resize Keeps Changing on its own when Starting a Batch Queue, Please fix this.
#71
(28.10.2024, 16:52)Selur Wrote: Uploaded a new dev version which should work with both VSGAN and VSMLRT:
a. when 'Multi' and 'no adjust' are disabled: apply one model, adjust the resolution to the set resize resolution with the selected (adjustment) resizer.
b. when 'Multi' is enabled and 'no adjust' is disabled: apply all the models and between the models undo the resizing with the selected (adjustment) resizer.
After the last model, the resolution is adjusted to the set resize resolution with the selected (adjustment) resizer.
c. when 'Multi' and 'no adjust' are enabled: apply all the models and only after the last adjust the resolution to the set resize resolution with the selected (adjustment) resizer.

This should allow doing what you wanted to achieve. (assuming I didn't make a mistake)
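As a rough sketch of the three cases in plain Python (the function and argument names are just for illustration, not Hybrid's actual code), tracing the widths a clip passes through in each mode:

```python
def run_pipeline(width, models, target_width, multi, no_adjust):
    """Return the widths the clip passes through, mirroring cases a/b/c.

    models is a list of per-model scale factors, e.g. [2, 2].
    """
    widths = [width]
    if not multi:
        # case a: one model, then adjust to the set resize resolution
        width *= models[0]
        widths.append(width)
        widths.append(target_width)
    elif not no_adjust:
        # case b: undo the resizing between models, adjust after the last one
        for i, factor in enumerate(models):
            width *= factor
            widths.append(width)
            if i < len(models) - 1:
                width = widths[0]  # undo back to the input resolution
                widths.append(width)
        widths.append(target_width)
    else:
        # case c: chain all models, adjust only once after the last one
        for factor in models:
            width *= factor
            widths.append(width)
        widths.append(target_width)
    return widths
```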

Cu Selur
Thank you, but I am confused. I would like option c, correct? But why resize again at the end? Can I just leave it where the second model left off? I don't want to use resize a third time for no reason. Unless I am misunderstanding this.

1. Input video: 852x480

2. Apply a model which doubles the size: 1704x960

3. Apply another model which doubles the size: 3408x1920

4. Video is done, final size: 3408x1920


Edit: just tested it and this doesn't seem right. Where did it get the 6848x3840 resolution from? Also, 852 multiplied by 2 is 1704, not 1712, and the script says the input is 856x480 when it is actually 852x480. Both models are 2x, and at the end it is resizing again, which defeats the whole purpose.

# making sure the detected scan type is set (detected: progressive)
clip = core.std.SetFrameProps(clip=clip, _FieldBased=vs.FIELD_PROGRESSIVE) # progressive
clip = core.std.AddBorders(clip=clip, left=2, right=2, top=0, bottom=0) # add borders to archive mod 8 (vsVSMLRT) - 856x480
# changing range from limited to full range for vsVSMLRT
clip = core.resize.Bicubic(clip, range_in_s="limited", range_s="full")
# setting color range to PC (full) range.
clip = core.std.SetFrameProps(clip=clip, _ColorRange=vs.RANGE_FULL)
# adjusting color space from YUV420P10 to RGBH for vsVSMLRT
clip = core.resize.Bicubic(clip=clip, format=vs.RGBH, matrix_in_s="709", range_s="full")
# resizing using VSMLRT, target: 3408x1920
from vsmlrt import Backend
clip = vsmlrt.inference([clip],network_path="I:/AI_Upscale/Tools/Hybrid/64bit/onnx_models/2x_Ani4Kv2_G6i2_Compact_107500_FP16.onnx", backend=Backend.TRT(fp16=True,device_id=0,num_streams=1,verbose=True,use_cuda_graph=False,workspace=1073741824,builder_optimization_level=3,engine_folder="I:/AI_Upscale/.Upscale Projects/Engines")) # 1712x960
clip = vsmlrt.inference([clip],network_path="I:/AI_Upscale/Tools/Hybrid/64bit/onnx_models/cugan_pro-denoise3x-up2x_op18_fp16_clamp_colorfix.onnx", backend=Backend.TRT(fp16=True,device_id=0,num_streams=1,verbose=True,use_cuda_graph=False,workspace=1073741824,builder_optimization_level=3,engine_folder="I:/AI_Upscale/.Upscale Projects/Engines")) # 6848x3840
clip = core.std.Crop(clip=clip, left=8, right=8, top=0, bottom=0) # removing borders (vsVSMLRT) -  6832x3840
# resizing 6832x3840 to resize resolution 3408x1920
clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, range_s="full")
clip = core.fmtc.resample(clip=clip, w=3408, h=1920, kernel="spline64", interlaced=False, interlacedd=False)
# changing range from full to limited range for vsVSMLRT
clip = core.resize.Bicubic(clip, range_in_s="full", range_s="limited")
# adjusting output color from: RGBS to YUV420P10 for x265Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, matrix_s="709", range_s="limited", dither_type="error_diffusion")
# set output frame rate to 29.97fps (progressive)
clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
# output
clip.set_output()

This part should not be there; it should stop before the resizing stuff:

# resizing 6832x3840 to resize resolution 3408x1920
clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, range_s="full")
clip = core.fmtc.resample(clip=clip, w=3408, h=1920, kernel="spline64", interlaced=False, interlacedd=False)


The input is also 852x480, but here it is read as 48x48:

[Image: EyL9YiO.png]


Edit 2: re-editing the Resize tab refreshed the 48x48. There seems to be a bug where, upon source load, the proper size is not shown in the Vapoursynth->Resize tab; instead you have to go back to Crop/Resize and change the resize value once to trigger the proper resolution instead of 48x48. Might be just a visual bug, though.

[Image: d45Ncrs.png]

There should be an option to disable this completely and let the models do their thing. Currently it disables upscaling as well; it is weird that we are mixing upscale and resize in the first place. What the models upscale to is what we should get. If we want a resize at the end to reach a final resolution, sure, have it as an option, but I don't think it should be forced.

[Image: Pc66L2n.png]
#72
Processing seems to be correct to me.
  • "Crop/Resize->Base->Resize" lets Hybrid know whether you want to resize and what the output resolution should be. So if you disable this option, no resizing will be done.
  • "Filtering->Vapoursynth->Frame->Resize->Resizer" lets you overwrite which resizer should be used.
    When enabling "Filtering->Vapoursynth->Frame->Resize->Resizer", "Crop/Resize->Base->Resize" will be enabled too. If you disable "Crop/Resize->Base->Resize", "Filtering->Vapoursynth->Frame->Resize->Resizer" will stay enabled, but have no effect, since it only allows overwriting the "Crop/Resize->Base->Resize->Picture Resize->Resize method" choice.
  • Afaik VSMLRT requires the input to be mod8. Since your source is not mod8, Hybrid uses
    clip = core.std.AddBorders(clip=clip, left=2, right=2, top=0, bottom=0) # add borders to archive mod 8 (vsVSMLRT) - 856x480
    to handle this before calling VSMLRT, and removes the added borders after the resizing using:
    clip = core.std.Crop(clip=clip, left=8, right=8, top=0, bottom=0) # removing borders (vsVSMLRT) -  6832x3840
  • Since you told Hybrid the output should be '3408x1920', but the operations you used result in 6832x3840, Hybrid resizes to '3408x1920'.
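The border arithmetic behind those two script lines can be sketched like this (a plain-Python illustration of the described behavior, not Hybrid's actual code):

```python
def pad_for_mod(width, mod=8):
    """Split the padding needed to reach the next multiple of `mod`
    between the left and right borders."""
    pad = (-width) % mod
    left = pad // 2
    right = pad - left
    return left, right

def crop_after_upscale(left, right, total_scale):
    """The added borders grow with the upscale, so the crop must too."""
    return left * total_scale, right * total_scale
```

For an 852-wide source and a 4x total upscale this gives 2-pixel borders before inference (852 -> 856) and an 8-pixel crop per side afterwards, matching the AddBorders and Crop calls above.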

I need a step-by-step to reproduce the '48x...'-display glitch.
Wild guessing, I tried reproducing this by:
  • Resetting all settings in Hybrid.
  • Starting Hybrid.
  • Enabling "Crop/Resize->Base->Resize" and setting the target width to "3840".
  • Enabling "Crop/Resize->Misc->Resizing->Keep resize for new source" and setting the target width to "3840".
  • Enabling "Filtering->Vapoursynth->Frame->Resize->Resizer", selecting VSMLRT, enabling 'Multi', enabling 'no adjust', selecting two 2x models.
  • Loading a source file.
This did not reproduce the issue; the input resolution was properly displayed in the "Filtering->Vapoursynth->Resize"-tab.
Like I wrote, I need a precise step-by-step guide of what you did to get this bug.


As a general side note: I assume you are aware that disabling the 'Auto adjust' option will distort the image. If the source has no proper PAR flag, you should let Hybrid know the proper flag by adjusting 'Crop/Resize->Base->Pixel Aspect Ratio (PAR)->Input PAR' to the correct value.

Hope this helps you understand why Hybrid does what it does. I could probably add an option to overwrite the required mod for VSMLRT. That would let you tell Hybrid which mod the models you use require at a minimum, and stop Hybrid from adding borders to achieve mod8 (if you told it that's not required). But if you set, for example, mod4 while the model requires mod8, it might either crash or create broken output.

=> Looking forward to a step-by-step guide for the '48x...'-display glitch. If you did not load a source, it is correct that the 48x... isn't changed, since no Vapoursynth script is created at this stage.

Cu Selur
Reply
#73
(Yesterday, 04:46)Selur Wrote:
clip = core.std.Crop(clip=clip, left=8, right=8, top=0, bottom=0) # removing borders (vsVSMLRT) -  6832x3840
Since you told Hybrid the output should be '3408x1920', but the operations you used result in 6832x3840, Hybrid resizes to '3408x1920'.
This makes no sense; the operations do not result in 6832x3840. I don't know why this has to be so complicated. Two models are used that each double the resolution, so that result is way beyond what it should be. You would effectively be upscaling a 4K video to 8K+ and then resizing it down, which makes no sense to me. It should be 4x the original video resolution. I am not very knowledgeable about the mod8 stuff you mentioned, but so far not a single upscaler has done this either. Here is the script I have been using for the past years, if that helps make it clearer. As for the bug, I will record a video of it when I'm back from work. The script below results in 4x the video's resolution, with no modifications to it and no 8K upscales. Nowhere is a target resolution defined; the model decides it itself. I just wanted to do the same thing in Hybrid, but it is turning out less straightforward than I thought. Then again, maybe the script below does things I am unaware of; I am not very knowledgeable about this stuff and may be wrong. Big Grin


import sys

sys.path.append("/workspace/tensorrt/")

import vapoursynth as vs
from src.rife_trt import rife_trt
from src.scene_detect import scene_detect
from src.utils import FastLineDarkenMOD
from vs_temporalfix import vs_temporalfix

core = vs.core
core.num_threads = 4

core.std.LoadPlugin(path="/usr/local/lib/libvstrt.so")
core.std.LoadPlugin(path="/usr/local/lib/x86_64-linux-gnu/libmvtools.so")
core.std.LoadPlugin(path="/usr/local/lib/x86_64-linux-gnu/libfillborders.so")
core.std.LoadPlugin(path="/usr/local/lib/x86_64-linux-gnu/libmotionmask.so")
core.std.LoadPlugin(path="/usr/local/lib/x86_64-linux-gnu/libtemporalmedian.so")


   
def metrics_func(clip):
    offs1 = core.std.BlankClip(clip, length=1) + clip[:-1]
    offs1 = core.std.CopyFrameProps(offs1, clip)
    return core.vmaf.Metric(clip, offs1, 2)


def inference_clip(video_path="", clip=None):
    interp_scale = 2
    if clip is None:
        clip = core.bs.VideoSource(source=video_path)

    clip = FastLineDarkenMOD(clip)

    clip = vs_temporalfix(clip, strength=400, tr=6, exclude="[10 20]", debug=False)

   
    clip = vs.core.resize.Bicubic(clip, format=vs.RGBH, matrix_in_s="709")


    clip = core.akarin.Expr(clip, "x 0 1 clamp")

   
    upscaled = core.trt.Model(
        clip,
        engine_path="/workspace/tensorrt/Engines/2x_Ani4Kv2_G6i2_Compact_107500_FP16_852x480p.engine", #Anime4Kv2 x2 852x480p 16x9 NTCS
        #tilesize=[854, 480],
        overlap=[0, 0],
        num_streams=1,
    )
   
 
    upscaled = core.trt.Model(
        upscaled,
        engine_path="/workspace/tensorrt/Engines/CuGAN_Pro_DeNoise3x_up2x_Opset13_FP16_Clamp_Colorfix_1704x960p.engine", #CuGAN Denoise x2 1704x960p 16x9 NTCS
        #tilesize=[854, 480],
        overlap=[0, 0],
        num_streams=1,
    )
   
   
    upscaled_metrics = vs.core.resize.Bicubic(
        clip, width=224, height=224, format=vs.YUV420P10, matrix_s="709"
    )
    upscaled_metrics = metrics_func(upscaled_metrics)

   
    clip = core.akarin.Select(
        [upscaled, upscaled[1:] + upscaled[-1]],
        upscaled_metrics,
        "x.float_ssim 0.999 >",
    )
   

    clip = vs.core.resize.Bicubic(clip, format=vs.YUV420P10, matrix_s="709")

    return clip
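If I read the akarin.Select call correctly, the duplicate handling at the end picks the next upscaled frame whenever a frame is flagged as a near-duplicate of its predecessor (float_ssim > 0.999). A pure-Python sketch of that selection, with lists standing in for clips:

```python
def select_dedup(upscaled, ssim_to_prev, threshold=0.999):
    """For each frame n, take upscaled[n+1] when frame n is a near-duplicate
    of frame n-1, otherwise keep upscaled[n].

    Mirrors: core.akarin.Select([upscaled, upscaled[1:] + upscaled[-1]],
                                metrics, "x.float_ssim 0.999 >")
    """
    # shifted is the clip advanced by one frame, padded with the last frame
    shifted = upscaled[1:] + [upscaled[-1]]
    return [
        shifted[n] if ssim_to_prev[n] > threshold else upscaled[n]
        for n in range(len(upscaled))
    ]
```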
#74
(Yesterday, 04:46)Selur Wrote: As a general side note: I assume, you are aware that disabling the 'Auto adjust' option will distort the image. If the source has no proper PAR-flag, you should let Hybrid know the proper flag by adjusting 'Crop/Resize->Base->Pixel Aspect Ratio (PAR)->Input PAR' to the correct value.

Hope this helps to understand why Hybrid does what it does. I could probably add an option to allow overwriting the required PAR for VSMLRT, this would allow you to let Hybrid know which mod the models you use require at a minimum and stop Hybrid from adding borders to achieve mod8 (if you told it it's not required), but if you set for example mod4, but the model requires mod8 it might either crash or create a broken output.

=> Looking forward to a step-by-step guide for the '48x...'-display glitch. If you did not load a source, it is correct that the 48x... isn't changed, since no Vapoursynth script is created at this stage.

Cu Selur

I already converted it to 1:1 PAR in the first pass, where I denoise and deinterlace; now I just want to upscale, which is why I don't want to resize again for no reason. As promised, here is a video of the "bug":

https://streamable.com/ryf2ru
#75
Quote:This makes no sense. the operations do not result in 6832x3840.
Hybrid takes the scaling factor from the model name: '<Number>x_...'.
If the name does not start with '<Number>x_', Hybrid assumes 4x, so this is correct.

clip = core.std.AddBorders(clip=clip, left=2, right=2, top=0, bottom=0) # add borders to archive mod 8 (vsVSMLRT) - 856x480
Source gets padded to 856x480
clip = vsmlrt.inference([clip],network_path="I:/AI_Upscale/Tools/Hybrid/64bit/onnx_models/2x_Ani4Kv2_G6i2_Compact_107500_FP16.onnx", backend=Backend.TRT(fp16=True,device_id=0,num_streams=1,verbose=True,use_cuda_graph=False,workspace=1073741824,builder_optimization_level=3,engine_folder="I:/AI_Upscale/.Upscale Projects/Engines")) # 1712x960
Hybrid assumes that the source gets upscaled by 2x (856x480*2 = 1712x960).
clip = vsmlrt.inference([clip],network_path="I:/AI_Upscale/Tools/Hybrid/64bit/onnx_models/cugan_pro-denoise3x-up2x_op18_fp16_clamp_colorfix.onnx", backend=Backend.TRT(fp16=True,device_id=0,num_streams=1,verbose=True,use_cuda_graph=False,workspace=1073741824,builder_optimization_level=3,engine_folder="I:/AI_Upscale/.Upscale Projects/Engines")) # 6848x3840
Hybrid assumes that the source gets upscaled by 4x (1712x960*4 = 6848x3840).
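So the naming rule seems to boil down to the following (a hypothetical reconstruction of the behavior described here; the exact pattern Hybrid uses is my guess):

```python
import re

def scale_from_name(model_name, default=4):
    """Read the scale factor from a leading '<number>x_' prefix,
    falling back to 4x when the name has no such prefix."""
    match = re.match(r"(\d+)x_", model_name)
    return int(match.group(1)) if match else default

def output_size(width, height, model_names):
    """Chain the per-model factors to predict the final resolution."""
    for name in model_names:
        factor = scale_from_name(name)
        width, height = width * factor, height * factor
    return width, height
```

With the two model files from the script, this reproduces the 6848x3840 result: the first name starts with '2x_', the second does not and is assumed to be 4x.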

Quote:clip = core.std.Crop(clip=clip, left=8, right=8, top=0, bottom=0) # removing borders (vsVSMLRT) - 6832x3840
This is wrong: Hybrid needs to multiply the scaling factors when 'no adjust' is used.
=> uploaded a new dev which hopefully should fix this

Regarding https://streamable.com/ryf2ru: if you are not willing to write a step-by-step guide, I don't think I will spend time looking into this.

Cu Selur
#76
(7 hours ago)Selur Wrote:
Quote:This makes no sense. the operations do not result in 6832x3840.
Hybrid takes the scaling factor from the model name: '<Number>x_...'.
If the name does not start with '<Number>x_', Hybrid assumes 4x, so this is correct.

Well, that explains that. Renaming the file seems to work fine now. Can we please get an option to remove the padding and the final resize?

Current build:

This:
clip = core.std.AddBorders(clip=clip, left=2, right=2, top=0, bottom=0) # add borders to archive mod 8 (vsVSMLRT) - 856x480

And this section:
# resizing 3416x1920 to resize resolution 3408x1920

clip = core.std.AddBorders(clip=clip, left=2, right=2, top=0, bottom=0) # add borders to archive mod 8 (vsVSMLRT) - 856x480
# changing range from limited to full range for vsVSMLRT
clip = core.resize.Bicubic(clip, range_in_s="limited", range_s="full")
# setting color range to PC (full) range.
clip = core.std.SetFrameProps(clip=clip, _ColorRange=vs.RANGE_FULL)
# adjusting color space from YUV420P10 to RGBH for vsVSMLRT
clip = core.resize.Bicubic(clip=clip, format=vs.RGBH, matrix_in_s="709", range_s="full")
# resizing using VSMLRT, target: 3408x1920
clipref = clip
from vsmlrt import Backend
clip = vsmlrt.inference([clip],network_path="I:/AI_Upscale/Tools/Hybrid/64bit/onnx_models/2x_Ani4Kv2_G6i2_Compact_107500_FP16.onnx", backend=Backend.TRT(fp16=True,device_id=0,num_streams=1,verbose=True,use_cuda_graph=False,workspace=1073741824,builder_optimization_level=3,engine_folder="C:/Users/Kristijan1001/AppData/Local/Temp")) # 1712x960
clip = vsmlrt.inference([clip],network_path="I:/AI_Upscale/Tools/Hybrid/64bit/onnx_models/2x_cugan_pro-denoise3x-up2x_op18_fp16_clamp_colorfix.onnx", backend=Backend.TRT(fp16=True,device_id=0,num_streams=1,verbose=True,use_cuda_graph=False,workspace=1073741824,builder_optimization_level=3,engine_folder="C:/Users/Kristijan1001/AppData/Local/Temp")) # 3424x1920
clip = core.std.Crop(clip=clip, left=4, right=4, top=0, bottom=0) # removing borders (vsVSMLRT) -  3416x1920
# resizing 3416x1920 to resize resolution 3408x1920
clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, range_s="full")
clip = core.fmtc.resample(clip=clip, w=3408, h=1920, kernel="spline64", interlaced=False, interlacedd=False)
clipref = core.resize.Bicubic(clip=clipref, format=vs.RGBS, range_s="full")
# fixing colors with ColorFix
clip = vs_colorfix.average(clip=clip, ref=clipref, radius=10, fast=False)
# changing range from full to limited range for vsVSMLRT
clip = core.resize.Bicubic(clip, range_in_s="full", range_s="limited")
# adjusting output color from: RGBS to YUV420P10 for x265Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, matrix_s="709", range_s="limited", dither_type="error_diffusion")
# set output frame rate to 29.97fps (progressive)
clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
# output
clip.set_output()

