29.03.2022, 21:40
Okay, here's how I set up Vapoursynth to get it working with CUDA (a small sanity-check script follows after the list of steps):
- created a new empty 'Vapoursynth'-folder
- downloaded 'Windows embeddable package (64-bit)' from https://www.python.org/downloads/release/python-3912/
- extracted the Python download into the 'Vapoursynth'-folder
- downloaded 'VapourSynth64-Portable-R57' from https://github.com/vapoursynth/vapoursynth/releases
- extracted the Vapoursynth portable download into the 'Vapoursynth'-folder
- downloaded get-pip.py from https://bootstrap.pypa.io/get-pip.py and saved it into the 'Vapoursynth'-folder
- opened a 'Windows Command Prompt'-window and navigated into the 'Vapoursynth'-folder
- installed pip by calling:
python get-pip.py
- opened the python39._pth in a text editor, added the following two lines above anything else in that file and saved the file:
Scripts
Lib\site-packages
- installed VSGAN
python -m pip install vsgan
python -m pip install torch==1.11.0+cu113 torchvision==0.12.0 -f https://download.pytorch.org/whl/torch_stable.html
- installed BASICVSR++
python -m pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.11/index.html
python -m pip install tqdm
python -m pip install opencv-python
python -m pip install --upgrade vsbasicvsrpp
python -m vsbasicvsrpp
- installed RIFE
python -m pip install --upgrade vsrife
- installed SWINIR
python -m pip install --upgrade vsswinir
python -m vsswinir
- installed DPIR and onnxruntime-gpu
python -m pip install --upgrade vsdpir
python -m pip install --upgrade onnxruntime-gpu
- from cudnn-11.4-windows-x64-v8.2.4.15.zip and the NVIDIA CUDA SDK 11.4.1 runtimes I copied:
cublas64_11.dll
cublasLt64_11.dll
cudart64_110.dll
cudnn64_8.dll
cudnn_cnn_infer64_8.dll
cudnn_ops_infer64_8.dll
cufft64_10.dll
cufftw64_10.dll
into Vapoursynth/Lib/site-packages/onnxruntime/capi and then uninstalled the sdk and cudnn.
- downloaded the vsdpir models
python -m vsdpir
- installed REALESRGAN (which also uses onnxruntime)
python -m pip install --upgrade vsrealesrgan
python -m vsrealesrgan
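To quickly check that the portable folder really sees the GPU, I run a small script with the portable python.exe from the 'Vapoursynth'-folder (just a sketch; the file name check_cuda.py is made up and the import names are assumed to match the pip package names above):

import importlib
import torch
import onnxruntime
import vapoursynth as vs

# PyTorch should report True and the GPU name if the cu113 wheels are picked up
print("torch CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))

# onnxruntime-gpu should list 'CUDAExecutionProvider'
# (the copied dlls are only actually loaded when a session is created)
print("onnxruntime providers:", onnxruntime.get_available_providers())

# the VapourSynth core and the installed addon modules should import without errors
print("VapourSynth:", vs.core.version())
for name in ("vsgan", "vsbasicvsrpp", "vsrife", "vsswinir", "vsdpir", "vsrealesrgan"):
    try:
        importlib.import_module(name)
        print(name, "ok")
    except Exception as e:
        print(name, "failed:", e)

called with:
python check_cuda.py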
-> Now all the addons work fine for me using CUDA in Hybrid.
According to:
https://onnxruntime.ai/docs/execution-pr...quirements.
it should be enough to install 'TensorRT 8.0 GA Update 1' from https://developer.nvidia.com/nvidia-tens...x-download:
-> https://developer.nvidia.com/compute/mac...dnn8.2.zip
Now to get this portable, a bunch of dlls need to be copied from 'TensorRT 8.0 GA Update 1', but I have no clue which ones, since my card does not support tensor cores.
According to https://nietras.com/2021/01/25/onnxruntime/ it seems like only the nvinfer*.dlls should be needed for TensorRT support.
Reading your post, you also called:
python -m pip install PATHTO/graphsurgeon-0.4.5-py2.py3-none-any.whl
python -m pip install PATHTO/uff-0.6.9-py2.py3-none-any.whl
python -m pip install PATHTO/onnx_graphsurgeon-0.3.10-py2.py3-none-any.whl
-> Can you try setting up the Vapoursynth folder like I did?
And check:
a. does it work on your system when 'only' CUDA is used, like it does for me?
b. does it work if you copy the dlls and install the whl files to get TensorRT working? (the small provider check after this list might help)
If this works, I could do the same to have a portable version with CUDA and TensorRT support.
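For b., a quick way to see whether the onnxruntime build would even offer TensorRT is to check the provider list (just a sketch; get_available_providers() only reports what the build supports, the nvinfer*.dlls are only loaded once a session is actually created):

import onnxruntime
providers = onnxruntime.get_available_providers()
print(providers)
# with TensorRT in place, 'TensorrtExecutionProvider' should show up
# next to 'CUDAExecutionProvider' and 'CPUExecutionProvider'
print("TensorRT listed:", "TensorrtExecutionProvider" in providers)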
Cu Selur
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.