[BUG] Scunet Denoise Tile Size (Syntax Error)
#1
Version 2025-08-17

Bug: Using the VS/SCUNet tile size option throws a syntax error.

BTW:
IIRC, this stopped working with the torch 2.7 update? Can't tell for sure.


#2
The torch 2.7 dev update was for 5xxx cards, which is why I don't use it.
It seems to work with the normal torch add-on.
You could try the vs-mlrt add-on and SCUNet (mlrt) instead.
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
#3
In my case it never worked; I just get a black screen after the .engine finishes compiling (3090).
I'm using 2025.07.27.1, and it didn't work with the May version either (same behavior).

The mlrt version works, but for some reason it is way slower. Plus, I don't need tiling there because FP16 or BF16 is enough to save memory.


Question: is there a way to use the vsSCUNet models quantized?
I tried converting them to BF16 and FP16 with a Python script; they load, but the memory usage stays the same, so I think they are expanded back to FP32.
It would be nice to have parity between the vs and mlrt versions in terms of options, to see which one is faster.
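For reference, roughly the kind of conversion script I mean (a minimal sketch; the file names and the assumption that the checkpoint is a plain PyTorch .pth state dict are placeholders, not taken from vsSCUNet):

Code:
import torch

src = "scunet_color.pth"        # placeholder: original FP32 checkpoint
dst = "scunet_color_bf16.pth"   # placeholder: converted checkpoint

state = torch.load(src, map_location="cpu")
# Some checkpoints nest the weights under a key such as "params"; unwrap if so.
if isinstance(state, dict) and "params" in state:
    state = state["params"]

# Cast only floating-point tensors; leave integer tensors and other entries untouched.
converted = {
    k: (v.to(torch.bfloat16) if torch.is_tensor(v) and torch.is_floating_point(v) else v)
    for k, v in state.items()
}
torch.save(converted, dst)

# Sanity check: the saved file should now report bfloat16 tensors.
reloaded = torch.load(dst, map_location="cpu")
print({v.dtype for v in reloaded.values() if torch.is_tensor(v)})

Even if the saved file really is BF16, a filter that builds the network in FP32 and copies the weights in with load_state_dict will cast them back to FP32 at load time, which would explain why the memory usage doesn't change.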
#4
Quote: Question: is there a way to use the vsSCUNet models quantized?
I tried converting them to BF16 and FP16 with a Python script; they load, but the memory usage stays the same, so I think they are expanded back to FP32.
It would be nice to have parity between the vs and mlrt versions in terms of options, to see which one is faster.
Reading:
https://github.com/AmusementClub/vs-mlrt...lrt.py#L98
it does not sound like it, but you would have to ask over at https://github.com/AmusementClub/vs-mlrt.
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.

