
Hybrid 2022.03.20.1: No module named 'vsdpir'
#41
(29.03.2022, 18:37)Selur Wrote: Don't know what to do with the sample.
Since I don't own a card with tensor cores, and since you say that using them is only possible when the full SDK is installed and can't be done in a portable setup, the only conclusion for me is to remove support for the tensor model from Hybrid completely. (At least until I either own a card with tensor core support or someone else figures out a way to make it portable.)

Cu Selur

The problem is that on my PC vs-dpir is configured to work with "onnxruntime_gpu", and this addon does not work with the DLLs located in the local folder. On my PC I was unable to get even the CUDA version of vs-dpir working. In effect, I asked you to stay on version 1.7.1 of vs-dpir because version 2.0.0 switched to "onnxruntime_gpu", and this is the addon that has problems working in a portable way. It seems that "onnxruntime_gpu" looks for the environment variable CUDA_PATH and then expects to find under this path the folder structure: .\bin, .\lib, ...
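If that is the case, a quick sanity check is possible from Python. The helper below is a hypothetical sketch (not part of onnxruntime); it only reports whether a CUDA_PATH value points at the .\bin, .\lib layout mentioned above:

```python
import os

def missing_cuda_subdirs(cuda_path, required=("bin", "lib")):
    """Return the expected subfolders that are absent under cuda_path."""
    return [d for d in required if not os.path.isdir(os.path.join(cuda_path, d))]

if __name__ == "__main__":
    cuda_path = os.environ.get("CUDA_PATH")
    if not cuda_path:
        print("CUDA_PATH is not set")
    else:
        # An empty list means the layout looks like what onnxruntime_gpu expects.
        print("missing subfolders:", missing_cuda_subdirs(cuda_path) or "none")
```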

P.S.
In my tests the speed increase between versions 1.6.0 and 2.0.0 of vs-dpir is only 3.5% (the quality seems the same).
#42
Fact is that HolyWu, who writes most of the ML-based filters, is switching them to onnxruntime (at the moment dpir and realesrgan are ported).
So the goal is to get onnxruntime working in a portable way.

Cu Selur
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
#43
Okay, here's how I set up Vapoursynth to get it working for CUDA:
  • created a new empty 'Vapoursynth'-folder
  • downloaded 'Windows embeddable package (64-bit)' from https://www.python.org/downloads/release/python-3912/
  • extracted the Python download into the 'Vapoursynth'-folder
  • downloaded 'VapourSynth64-Portable-R57' from https://github.com/vapoursynth/vapoursynth/releases
  • extracted the Vapoursynth portable download into the 'Vapoursynth'-folder
  • downloaded get-pip.py from https://bootstrap.pypa.io/get-pip.py and saved it into the 'Vapoursynth'-folder
  • opened a 'Windows Command Prompt'-window and navigated into the 'Vapoursynth'-folder
  • installed pip by calling:
    python get-pip.py
  • opened the python39._pth in a text editor, added the following two lines above anything else in that file and saved the file
    Scripts
    Lib\site-packages
  • installed VSGAN
    python -m pip install vsgan
    python -m pip install torch==1.11.0+cu113 torchvision==0.12.0 -f https://download.pytorch.org/whl/torch_stable.html
  • installed BASICVSR++
    python -m pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.11/index.html
    python -m pip install tqdm
    python -m pip install opencv-python
    python -m pip install --upgrade vsbasicvsrpp
    python -m vsbasicvsrpp
  • installed RIFE
    python -m pip install --upgrade vsrife
  • installed SWINIR
    python -m pip install --upgrade vsswinir
    python -m vsswinir
  • installed DPIR and onnxruntime-gpu
    python -m pip install --upgrade vsdpir
    python -m pip install --upgrade onnxruntime-gpu
  • from cudnn-11.4-windows-x64-v8.2.4.15.zip and the NVIDIA CUDA SDK 11.4.1 runtimes
    I copied:
    cublas64_11.dll
    cublasLt64_11.dll
    cudart64_110.dll
    cudnn64_8.dll
    cudnn_cnn_infer64_8.dll
    cudnn_ops_infer64_8.dll
    cufft64_10.dll
    cufftw64_10.dll
    into Vapoursynth/Lib/site-packages/onnxruntime/capi, and then uninstalled the SDK and cuDNN.
  • downloaded the vsdpir modules
    python -m vsdpir
  • installed REALESRGAN (which also uses onnxruntime)
    python -m pip install --upgrade vsrealesrgan
    python -m vsrealesrgan

-> Now all the addons work fine for me using CUDA in Hybrid.
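To confirm a setup like the one above, a small smoke test can check that the portable Python can locate each installed package. This is only a sketch; the module names are simply the ones installed in the steps above:

```python
import importlib.util

def check_modules(names):
    """Map each module name to True if Python can locate it on this setup."""
    return {n: importlib.util.find_spec(n) is not None for n in names}

if __name__ == "__main__":
    modules = ["vsgan", "vsbasicvsrpp", "vsrife", "vsswinir",
               "vsdpir", "vsrealesrgan", "onnxruntime"]
    for name, found in check_modules(modules).items():
        print(f"{name}: {'OK' if found else 'MISSING'}")
```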

According to:
https://onnxruntime.ai/docs/execution-pr...quirements.
it should be enough to install 'TensorRT 8.0 GA Update 1' from https://developer.nvidia.com/nvidia-tens...x-download:
-> https://developer.nvidia.com/compute/mac...dnn8.2.zip

Now to get this portable, a bunch of DLLs need to be copied from the 'TensorRT 8.0 GA Update 1' package, but I have no clue which ones, since my card does not support tensor cores.

According to https://nietras.com/2021/01/25/onnxruntime/ it seems like only the nvinfer*.dlls should be needed for tensor support.
Reading your post, I see you also called:
python -m pip install PATHTO/graphsurgeon-0.4.5-py2.py3-none-any.whl
python -m pip install PATHTO/uff-0.6.9-py2.py3-none-any.whl
python -m pip install PATHTO/onnx_graphsurgeon-0.3.10-py2.py3-none-any.whl

-> Can you try setting up the Vapoursynth folder like I did?
And check:
a. does it work on your system when 'only' CUDA is used like it does for me?
b. does it work if you copy the dlls and install the whl file to get TensorRT working?
If this works, I could do the same to have a portable version with CUDA and TensorRT support.

Cu Selur
#44
I think that the part regarding Vapoursynth + pip install and related modules should provide the same output as the file "Hybrid_torch_addon.7z" that you sent to me (with the exception of the folder vsgan_models). So I think I can skip these steps (unless you think that your archive is not reliable for this test).

Now to create my CUDA setup I used the following files:

cuda_11.4.3_472.50_win10.exe
cudnn-11.4-windows-x64-v8.2.2.26.zip
TensorRT-8.0.3.4.Windows10.x86_64.cuda-11.3.cudnn8.2.zip

and, as I already wrote, with this setup vs-dpir and onnxruntime are working perfectly.

Theoretically it should be enough to:

delete the env variables CUDA_PATH and CUDA_PATH_V11_4
rename "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4" to "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4_fake" (just to avoid having to uninstall CUDA)
reboot the PC

Now, just to be sure, copy all DLLs from "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4_fake\bin" into "D:\Programs\Hybrid\64bit\Vapoursynth\Lib\site-packages\onnxruntime\capi".

I think I already tested a configuration like this in the past, but I will perform the test again tomorrow.

In the meanwhile I found the following code in the onnxruntime source directory (file: onnxruntime_pybind_state.cc):

#ifdef USE_CUDA
    // If the environment variable 'CUDA_UNAVAILABLE' exists, then we do not load cuda. This is set by _ld_preload for the manylinux case
    // as in that case, trying to load the library itself will result in a crash due to the way that auditwheel strips dependencies.
    if (Env::Default().GetEnvironmentVar("ORT_CUDA_UNAVAILABLE").empty()) {
      if (auto* cuda_provider_info = TryGetProviderInfo_CUDA()) {
        const CUDAExecutionProviderInfo info = GetCudaExecutionProviderInfo(cuda_provider_info,
                                                                            provider_options_map);

        // This variable is never initialized because the APIs by which it should be initialized are deprecated, however they still
        // exist are are in-use. Neverthless, it is used to return CUDAAllocator, hence we must try to initialize it here if we can
        // since FromProviderOptions might contain external CUDA allocator.
        external_allocator_info = info.external_allocator_info;
        return cuda_provider_info->CreateExecutionProviderFactory(info)->CreateProvider();
      } else {
        if (!Env::Default().GetEnvironmentVar("CUDA_PATH").empty()) {
          ORT_THROW("CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.");
        }
      }
    }
    LOGS_DEFAULT(WARNING) << "Failed to create " << type << ". Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.";
#endif

It seems that if the environment variable CUDA_PATH is not found, the CUDA library is not loaded. But first the module tries to initialize CUDA in any case if the environment variable ORT_CUDA_UNAVAILABLE is not defined.
#45
Please do a clean setup of the Vapoursynth-folder. (You also need to copy vsViewer, the Qt6*.dlls and the platforms folder into the Vapoursynth folder.)

Quote:It seems that if the environment variable CUDA_PATH is not found, the CUDA library is not loaded. But first the module tries to initialize CUDA in any case if the environment variable ORT_CUDA_UNAVAILABLE is not defined.
That conclusion does not fit the code you posted.
The code states that if onnxruntime is compiled with CUDA support and ORT_CUDA_UNAVAILABLE is empty (or non-existent), TryGetProviderInfo_CUDA is triggered. If TryGetProviderInfo_CUDA returns cuda_provider_info, everything is fine. Only if cuda_provider_info is null and CUDA_PATH isn't set are the dlls not loaded.
-> since on my setup neither of the variables is set, TryGetProviderInfo_CUDA is triggered and successfully gets the cuda_provider_info.

Cu Selur
#46
(30.03.2022, 05:25)Selur Wrote: Please do a clean setup of the Vapoursynth-folder. (You also need to copy vsViewer, the Qt6*.dlls and the platforms folder into the Vapoursynth folder.)

Quote:It seems that if the environment variable CUDA_PATH is not found, the CUDA library is not loaded. But first the module tries to initialize CUDA in any case if the environment variable ORT_CUDA_UNAVAILABLE is not defined.
That conclusion does not fit the code you posted.
The code states that if onnxruntime is compiled with CUDA support and ORT_CUDA_UNAVAILABLE is empty (or non-existent), TryGetProviderInfo_CUDA is triggered. If TryGetProviderInfo_CUDA returns cuda_provider_info, everything is fine. Only if cuda_provider_info is null and CUDA_PATH isn't set are the dlls not loaded.
-> since on my setup neither of the variables is set, TryGetProviderInfo_CUDA is triggered and successfully gets the cuda_provider_info.

Cu Selur

I read the code in a hurry, but at first there is a check for the env variable ORT_CUDA_UNAVAILABLE; if it is not defined, the module tries to load CUDA; if that fails, it checks whether the env variable CUDA_PATH is defined and, if not, issues an error. So it seems that the presence of CUDA_PATH is necessary for a successful load of the CUDA library. But I will try again this afternoon.
#47
if (Env::Default().GetEnvironmentVar("ORT_CUDA_UNAVAILABLE").empty()) {
      if (auto* cuda_provider_info = TryGetProviderInfo_CUDA()) {
        
      } else {
        if (!Env::Default().GetEnvironmentVar("CUDA_PATH").empty()) {
        }
      }
    }
-> look for ORT_CUDA_UNAVAILABLE; if it's empty, use 'TryGetProviderInfo_CUDA'.
Only if that fails, which it does not seem to do on my systems, is CUDA_PATH checked.
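The same control flow, paraphrased in Python to make the order of the checks explicit (try_get stands in for TryGetProviderInfo_CUDA; this is an illustration, not onnxruntime code):

```python
def cuda_provider_decision(env, try_get):
    """Mirror of the quoted C++: CUDA_PATH is only consulted on the error path."""
    if env.get("ORT_CUDA_UNAVAILABLE", "") == "":
        if try_get() is not None:
            return "provider created"                      # CUDA loaded fine
        if env.get("CUDA_PATH", "") != "":
            return "throw: CUDA_PATH set but load failed"  # ORT_THROW branch
    return "warning: failed to create provider"            # LOGS_DEFAULT branch
```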

Cu Selur
#48
I was finally able to get vs-dpir working on my system with no CUDA installed.
The presence of CUDA_PATH is not relevant; this string is used only in the file onnxruntime_pybind_state.cc. The function TryGetProviderInfo_CUDA() uses the standard way Windows finds DLLs, i.e. looking in the paths specified in the environment variable PATH.

So after having uninstalled CUDA, I created the folder "D:\Programs\Hybrid\64bit\cuda_11_4" and then I copied in this folder the files:

cublas64_11.dll
cublasLt64_11.dll
cudart64_110.dll
cudnn64_8.dll
cudnn_cnn_infer64_8.dll
cudnn_ops_infer64_8.dll
cufft64_10.dll
cufftw64_10.dll
nvinfer.dll
nvinfer_plugin.dll
nvonnxparser.dll
nvparsers.dll

Then I added the folder "D:\Programs\Hybrid\64bit\cuda_11_4" to the env variable PATH. vs-dpir is now working perfectly using both the CUDA and TensorRT engines.

If I remove "D:\Programs\Hybrid\64bit\cuda_11_4" from PATH and move the files above into "Vapoursynth\Lib\site-packages\onnxruntime\capi", vs-dpir crashes without any message.

It seems that on my system Windows does not look in the onnxruntime\capi folder. I don't know why, because torch, which uses the same approach, works as expected.
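One possible explanation worth checking (an assumption on my part, not something I have verified on this setup): since Python 3.8, Windows builds of CPython offer os.add_dll_directory() to register extra DLL search locations explicitly, which might behave differently from both PATH and dropping the files into the capi folder:

```python
import os

def register_dll_dir(folder):
    """Register folder for DLL resolution before importing onnxruntime.
    Returns a handle on Windows/Python 3.8+, or None elsewhere or if the
    folder does not exist."""
    if hasattr(os, "add_dll_directory") and os.path.isdir(folder):
        return os.add_dll_directory(folder)
    return None

# Hypothetical usage with the folder from the post above:
# register_dll_dir(r"D:\Programs\Hybrid\64bit\cuda_11_4")
# import vsdpir
```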
#49
Quote:standard way used by windows to find the dlls,
the standard way would be to first look next to the binary and then search PATH.

What's the content of your python39._pth ?
#50
(30.03.2022, 19:48)Selur Wrote:
Quote:standard way used by windows to find the dlls,
the standard way would be to first look next to the binary and then search PATH.

What's the content of your python39._pth  ?

Scripts
Lib\site-packages
python39.zip
.

# Uncomment to run site.main() automatically
#import site