
Hybrid 2022.03.20.1: No module named 'vsdpir'
#37
(27.03.2022, 17:50)Selur Wrote: the dlls I need for GTX cards are:
  • cublas64_11.dll
  • cublasLt64_11.dll
  • cudart64_110.dll
  • cudnn64_8.dll
  • cudnn_cnn_infer64_8.dll
  • cudnn_ops_infer64_8.dll
  • cufft64_10.dll
  • cufftw64_10.dll
see: https://onnxruntime.ai/docs/execution-pr...vider.html
and I thought that
  • nvinfer.dll
  • nvinfer_plugin.dll
  • nvonnxparser.dll
  • nvparsers.dll
should provide TensorRT support. (Just checked, these are the only DLLs which come with TensorRT-8.0.3.4)

Cu Selur
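Since missing DLLs on the PATH are the usual cause of the "No module named 'vsdpir'" style failures discussed above, a small check can save some guesswork. This is a hypothetical helper, not part of Hybrid or vsDPIR; it just verifies that the DLLs Selur listed can be found in some PATH directory:

```python
# Hypothetical helper: check that the CUDA/cuDNN DLLs listed above
# are reachable via the PATH environment variable on Windows.
import os

REQUIRED_DLLS = [
    "cublas64_11.dll", "cublasLt64_11.dll", "cudart64_110.dll",
    "cudnn64_8.dll", "cudnn_cnn_infer64_8.dll", "cudnn_ops_infer64_8.dll",
    "cufft64_10.dll", "cufftw64_10.dll",
]

def find_missing(dlls, path_env=None):
    """Return the subset of dlls not found in any PATH directory."""
    dirs = (path_env or os.environ.get("PATH", "")).split(os.pathsep)
    missing = []
    for dll in dlls:
        if not any(os.path.isfile(os.path.join(d, dll)) for d in dirs if d):
            missing.append(dll)
    return missing

if __name__ == "__main__":
    for dll in find_missing(REQUIRED_DLLS):
        print("missing:", dll)
```

If it prints nothing, the CUDA runtime DLLs should be visible to onnxruntime; otherwise copy the listed files into a directory on PATH.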

I was finally able to get TensorRT working with vsDPIR.

First, perform a full installation of the CUDA 11.4 developer components.
Then, after extracting the TensorRT-8.0.3.4 zip file, copy "lib\*.dll" into "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4\bin".
Finally, install via "pip" the following Python modules, which are included in the TensorRT-8.0.3.4 archive:

graphsurgeon-0.4.5-py2.py3-none-any.whl
uff-0.6.9-py2.py3-none-any.whl
onnx_graphsurgeon-0.3.10-py2.py3-none-any.whl
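The copy-and-install steps above can be sketched as shell commands. This is a sketch, not a tested recipe: the archive layout (wheel subdirectories) and the Git-Bash-style CUDA path are assumptions to adapt to your own install locations:

```shell
# Assumed locations -- adjust to where CUDA and the TensorRT zip actually live
CUDA_BIN="/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.4/bin"
TRT_DIR="TensorRT-8.0.3.4"

# Step 2: copy the TensorRT runtime DLLs next to the CUDA ones
cp "$TRT_DIR"/lib/*.dll "$CUDA_BIN"

# Step 3: install the Python wheels shipped in the TensorRT archive
# (subdirectory names are assumptions based on the usual TensorRT zip layout)
pip install "$TRT_DIR"/graphsurgeon/graphsurgeon-0.4.5-py2.py3-none-any.whl
pip install "$TRT_DIR"/uff/uff-0.6.9-py2.py3-none-any.whl
pip install "$TRT_DIR"/onnx_graphsurgeon/onnx_graphsurgeon-0.3.10-py2.py3-none-any.whl
```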

On my RTX 3060, the TensorRT version of vsDPIR is only 5% faster than the CUDA version.


Messages In This Thread
RE: Hybrid 2022.03.20.1: No module named 'vsdpir' - by Dan64 - 29.03.2022, 17:52
