[INFO] Hybrid work with RTX - Printable Version

+- Selur's Little Message Board (https://forum.selur.net)
+-- Forum: Hybrid - Support (https://forum.selur.net/forum-1.html)
+--- Forum: Problems & Questions (https://forum.selur.net/forum-3.html)
+--- Thread: [INFO] Hybrid work with RTX (/thread-4163.html)
Hybrid work with RTX - Smiggy - 22.07.2025

Hi Selur, I had a question: will Hybrid work on an RTX Pro 6000 (Blackwell)? In particular, the RealESRGAN realsr-anime upscale model and the SCUNet tool? Thank you.

RE: Hybrid work with RTX - Selur - 22.07.2025

It should.

RE: Hybrid work with RTX - Smiggy - 22.07.2025

Hi, thank you for your answer. Given the power of the GPU, will encoding be faster compared to a 4090 or 5090?

RE: Hybrid work with RTX - Selur - 22.07.2025

Not owning a 5090, I can only say that from the specs it should be: it has more VRAM (32 vs. 24), more CUDA cores (21760 vs. 16384), and newer Tensor cores.

RE: Hybrid work with RTX - Smiggy - 22.07.2025

Hello, we tried and got this error message. Please assist.

2025-07-22 14:52:58.031
C:\Users\user\Downloads\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\cuda\__init__.py:287: UserWarning: NVIDIA RTX PRO 6000 Blackwell Workstation Edition with CUDA capability sm_120 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90. If you want to use the NVIDIA RTX PRO 6000 Blackwell Workstation Edition GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
  warnings.warn(

2025-07-22 14:52:58.056 Failed to evaluate the script:
Python exception: CUDA error: invalid device ordinal
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Traceback (most recent call last):
  File "src/cython/vapoursynth.pyx", line 3378, in vapoursynth._vpy_evaluate
  File "src/cython/vapoursynth.pyx", line 3379, in vapoursynth._vpy_evaluate
  File "C:\Users\user\AppData\Local\Temp\tempPreviewVapoursynthFile14_52_44_742.vpy", line 38, in
    clip = SCUNet(clip=clip, model=3, device_index=1)
  File "contextlib.py", line 81, in inner
  File "C:\Users\user\Downloads\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\user\Downloads\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsscunet\__init__.py", line 134, in scunet
    inf_streams = [torch.cuda.Stream(device) for _ in range(num_streams)]
  File "C:\Users\user\Downloads\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\cuda\streams.py", line 39, in __new__
    with torch.cuda.device(device):
  File "C:\Users\user\Downloads\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\cuda\__init__.py", line 495, in __enter__
    self.prev_idx = torch.cuda._exchange_device(self.idx)
RuntimeError: CUDA error: invalid device ordinal
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
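As an aside on the traceback itself: the "invalid device ordinal" comes from the script line `SCUNet(clip=clip, model=3, device_index=1)`, which asks for a second CUDA device (index 1) that this PyTorch build cannot see. A minimal sketch of that index check in plain Python (the helper name and fallback behavior are purely illustrative, not part of Hybrid or vsscunet; on a real system the count would come from torch.cuda.device_count()):

```python
def pick_device_index(requested: int, device_count: int) -> int:
    """Return a usable CUDA device index, falling back to device 0.

    `device_count` is passed in explicitly so the logic can be shown
    without a GPU; in practice it would be torch.cuda.device_count().
    """
    if device_count == 0:
        raise RuntimeError("no CUDA device visible to this PyTorch build")
    if 0 <= requested < device_count:
        return requested
    # Requesting index 1 on a single-GPU machine lands here instead of
    # raising "invalid device ordinal" like torch.cuda does.
    return 0

print(pick_device_index(1, 1))  # → 0: device_index=1 is out of range
print(pick_device_index(1, 2))  # → 1: valid on a two-GPU machine
```

In other words, even with a compatible PyTorch build, `device_index=1` would still fail on a machine where PyTorch enumerates only one GPU; setting the device index to 0 sidesteps that particular error.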
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

RE: Hybrid work with RTX - Selur - 23.07.2025

Seems like the PyTorch version used does not support the card. One would probably have to update PyTorch, which then requires updates of some of the other stuff.

RE: Hybrid work with RTX - Smiggy - 23.07.2025

Hi, where can we update PyTorch?

RE: Hybrid work with RTX - Selur - 23.07.2025

The portable Python environment is under Hybrid/64bit/Vapoursynth, but be warned: doing this is not easy. One could try calling:

python -m pip install -U torch torchvision torch_tensorrt --index-url https://download.pytorch.org/whl/cu126 --extra-index-url https://pypi.nvidia.com

But no clue how much this could break; one could end up having to set up a custom torch add-on. I outlined the basic steps here: "Here's how I build Hybrid's torch add-on". I'll try to find some time to create such a torch add-on in the next few days. (Best remind me if I didn't get around to this by Friday.)

Cu Selur

RE: Hybrid work with RTX - tailland - 23.07.2025

(image only)

RE: Hybrid work with RTX - Selur - 23.07.2025

I probably won't get around to uploading a new test torch, but I'm uploading a VapoursynthR72_torch_2025.06.06_torch2.7dev, which I didn't build anew; it was a test setup I used in June. It should be up in ~1 hour; please test and let me know whether that one works.

Cu Selur

PS: If I remember correctly, the problem with torch dev2.7 was that it caused problems with HAVC due to excessive log output...
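For anyone reading along, the UserWarning earlier in the thread can be read mechanically: a PyTorch build ships GPU kernels only for the sm_ architectures it lists, and the card's compute capability (sm_120 for Blackwell) has to appear in that list. A small illustration in plain Python — the function and the hard-coded list are only for exposition, not a real PyTorch API; the actual check happens inside torch, and on a real system the capability tuple would come from torch.cuda.get_device_capability():

```python
# Architectures the bundled PyTorch build was compiled for,
# copied from the UserWarning in the log above.
SUPPORTED_ARCHS = ["sm_50", "sm_60", "sm_61", "sm_70",
                   "sm_75", "sm_80", "sm_86", "sm_90"]

def build_supports(capability: tuple[int, int]) -> bool:
    """capability is a (major, minor) tuple, e.g. (12, 0) for sm_120,
    as torch.cuda.get_device_capability() would return it."""
    return f"sm_{capability[0]}{capability[1]}" in SUPPORTED_ARCHS

print(build_supports((12, 0)))  # → False: Blackwell, not in the list
print(build_supports((8, 6)))   # → True: e.g. an Ampere-generation card
```

This is why the fix is an updated PyTorch build (one compiled with sm_120 support) rather than any setting inside Hybrid.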