
[BUG] Scunet Denoise Tile Size (Syntax Error)
#1
Version 2025-08-17

Bug: Using VS/SCUNet with a tile size set throws a syntax error.

BTW:
IIRC, this stopped working with the torch2.7 update? Can't tell for sure.


Attached Files Thumbnail(s)
#2
The torch2.7dev update was for 5xxx cards, which is why I don't use it.
Seems to work with the normal torch-addon.
You could try the vs-mlrt addon and SCUNet (mlrt),...
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
#3
In my case it never worked; I just get a black screen after the .engine finishes compiling (3090).
I'm using 2025.07.27.1 and it didn't work with the May version either (same behavior).

The mlrt version works, but for some reason it is way slower. Plus I don't need tiling there because FP16 or BF16 is enough to save memory.


Question: is there a way to use the vsSCUnet models quantized?
I tried converting them to bf16 and fp16 with a python script; they load up, but the memory usage remains the same, so I think they are expanded back to fp32.
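(A minimal sketch of what such a conversion could look like, assuming the checkpoint is a plain PyTorch state dict; the file names below are placeholders, not files shipped with vs-scunet:)

import torch

# Load the checkpoint on the CPU and cast every floating-point tensor to half precision.
# (.bfloat16() would be the analogous cast for bf16.)
state = torch.load("scunet_model.pth", map_location="cpu")
half_state = {
    k: v.half() if torch.is_tensor(v) and v.is_floating_point() else v
    for k, v in state.items()
}
torch.save(half_state, "scunet_model_fp16.pth")

If the loader casts the weights back to fp32 when building the model, that would explain why the memory usage stays the same.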
It would be nice to have parity between the vs and mlrt in terms of options to see which one is faster.
#4
Quote: Question: is there a way to use the vsSCUnet models quantized?
I tried converting them to bf16 and fp16 with a python script; they load up, but the memory usage remains the same, so I think they are expanded back to fp32.
It would be nice to have parity between the vs and mlrt in terms of options to see which one is faster.
Reading:
https://github.com/AmusementClub/vs-mlrt...lrt.py#L98
it does not sound like it, but you would have to ask over at https://github.com/AmusementClub/vs-mlrt
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
#5
Hi Selur, I was actually referring to https://github.com/HolyWu/vs-scunet

Looking at https://github.com/HolyWu/vs-scunet/blob...t__.py#L81
it appears that FP16 is selected if the color space is RGBH.

I actually managed to make it work with a custom script:


import vapoursynth as vs
from vsscunet import scunet as SCUNet

core = vs.core
# 'clip' is assumed to be the YUV420P8 source clip loaded earlier in the script

# convert from YUV420P8 to RGBH so vsSCUNet runs in FP16
clip = core.resize.Bicubic(clip=clip, format=vs.RGBH, matrix_in_s="709", range_s="limited")

# denoise using SCUNet
clip = SCUNet(clip=clip, model=4)

# convert back from RGBH to YUV420P8 for the x265 encode
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="709", range_s="limited", dither_type="error_diffusion")



I also tried calling SCUNet with trt=True, trt_optimization_level=3, trt_cache_dir.....
but the engine compilation either doesn't seem to start or never finishes; I can't tell which, and I'm not sure why.
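(A rough sketch of that call, using only the parameters named above; the cache directory value is a placeholder:)

# after the same conversion to RGBH as in the script above:
clip = SCUNet(clip=clip, model=4, trt=True, trt_optimization_level=3, trt_cache_dir="/path/to/trt_cache")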
#6
I'll do some testing. :)

Cu Selur
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
#7
Worked fine here.
As expected, using FP16 with TRT the engine files get built anew (which really seems to take ages,... I checked the task manager to confirm that something was actually happening, since for the first few minutes both CPU & GPU usage were rather low,...), but both with TRT enabled and disabled, feeding SCUNet RGBH worked fine here.
=> Uploaded a new dev version where an FP16 checkbox was added to SCUNet. (When enabled, Hybrid will feed SCUNet RGBH, otherwise RGBS.)
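(Illustrative only, based on the conversion shown earlier in this thread; the checkbox toggles between these two conversions before the SCUNet call, with the other parameters assumed unchanged:)

# FP16 enabled: half-precision RGB
clip = core.resize.Bicubic(clip=clip, format=vs.RGBH, matrix_in_s="709", range_s="limited")
# FP16 disabled: single-precision RGB
clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="709", range_s="limited")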

Cu Selur
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.

