Posts: 11,761
Threads: 63
Joined: May 2017
The torch 2.7 dev update was for the RTX 5xxx cards, which is why I don't use it.
It seems to work with the normal torch add-on.
You could try the vs-mlrt add-on and SCUNet (mlrt),...
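For illustration, a minimal sketch of what the mlrt route can look like in a VapourSynth script. The SCUNet wrapper, the SCUNetModel member, and the parameters here are assumptions based on vsmlrt.py's conventions, so check them against the vs-mlrt version you actually have installed:

import vapoursynth as vs
from vsmlrt import SCUNet, SCUNetModel, Backend  # names assumed from vsmlrt.py

core = vs.core
clip = core.lsmas.LWLibavSource("input.mkv")
# vs-mlrt expects 32-bit float RGB input
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="709")
# the model member is illustrative; use whichever SCUNetModel value vsmlrt.py defines
clip = SCUNet(clip, model=SCUNetModel.scunet_color_real_psnr,
              backend=Backend.TRT(fp16=True))
clip.set_output()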
----
Dev versions are in the 'experimental' folder of my Google Drive, which is linked on the download page.
Posts: 3
Threads: 1
Joined: Aug 2025
In my case it never worked; I just get a black screen after the .engine finishes compiling (3090).
I'm using 2025.07.27.1, and it didn't work with the May version either (same behavior).
The mlrt version works, but for some reason it is way slower. Plus I don't need tiling there, because FP16 or BF16 is enough to save memory.
Question: is there a way to use the vsSCUnet models quantized?
I tried converting them to BF16 and FP16 with a Python script; they load, but memory usage remains the same, so I think they are expanded back to FP32.
It would be nice to have parity between the vs and mlrt versions in terms of options, to see which one is faster.
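(For what it's worth, that expansion is standard PyTorch behavior: load_state_dict() casts each incoming tensor to the dtype of the module's existing parameters, so converting the checkpoint alone doesn't change the runtime precision. A small self-contained demo, plain PyTorch rather than anything vsSCUnet-specific:

import torch
import torch.nn as nn

net = nn.Linear(4, 4)                                    # parameters are created in FP32
sd = {k: v.half() for k, v in net.state_dict().items()}  # what an FP16 conversion script produces
net.load_state_dict(sd)    # tensors are cast back to the module's dtype on load
print(net.weight.dtype)    # torch.float32 -> no memory saved
net.half()                 # the module itself has to be converted instead
print(net.weight.dtype)    # torch.float16

Unless the addon's loader calls .half()/.bfloat16() on the module, or runs under autocast, the expansion back to FP32 is exactly what you'd expect.)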
Posts: 11,761
Threads: 63
Joined: May 2017
Quote:Question: is there a way to use the vsSCUnet models quantized?
I tried converting them to BF16 and FP16 with a Python script; they load, but memory usage remains the same, so I think they are expanded back to FP32.
It would be nice to have parity between the vs and mlrt versions in terms of options, to see which one is faster.
Reading:
https://github.com/AmusementClub/vs-mlrt...lrt.py#L98
it does not sound like it, but you would have to ask over at
https://github.com/AmusementClub/vs-mlrt
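If loading pre-quantized weights isn't supported there, two workarounds might be worth testing (a sketch only, I haven't verified beyond the linked line): either let the backend do the reduction, since Backend.TRT(fp16=True) builds the engine in half precision regardless of the ONNX dtype, or export your own quantized ONNX and feed it through vs-mlrt's generic inference() entry point instead of the SCUNet wrapper. Roughly, with a hypothetical self-exported model file:

import vapoursynth as vs
from vsmlrt import inference, Backend  # generic entry point in vsmlrt.py

core = vs.core
clip = core.lsmas.LWLibavSource("input.mkv")
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="709")
# "scunet_fp16.onnx" is a hypothetical, self-exported ONNX file
clip = inference(clip, network_path="scunet_fp16.onnx",
                 backend=Backend.TRT(fp16=True))
clip.set_output()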
----
Dev versions are in the 'experimental' folder of my Google Drive, which is linked on the download page.