29.03.2022, 17:52
(27.03.2022, 17:50)Selur Wrote: the DLLs I need for GTX cards are (see: https://onnxruntime.ai/docs/execution-pr...vider.html):
- cublas64_11.dll
- cublasLt64_11.dll
- cudart64_110.dll
- cudnn64_8.dll
- cudnn_cnn_infer64_8.dll
- cudnn_ops_infer64_8.dll
- cufft64_10.dll
- cufftw64_10.dll
and I thought that should provide TensorRT support. (Just checked, these are the only DLLs that come with TensorRT-8.0.3.4:)
- nvinfer.dll
- nvinfer_plugin.dll
- nvonnxparser.dll
- nvparsers.dll
Cu Selur
I was finally able to get TensorRT working with vsDPIR.
First, it is necessary to perform a full installation of the CUDA 11.4 developer components.
Then, after extracting the TensorRT-8.0.3.4 zip file, copy "lib\*.dll" into "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4\bin".
Finally, it is necessary to install via "pip" the following Python modules, which are included in the TensorRT-8.0.3.4 archive:
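To verify the DLLs actually ended up in the CUDA bin folder, a small Python check can help. This is just a sketch: the path below is the default CUDA 11.4 install location, and the DLL names are the TensorRT ones listed above; adjust both if your setup differs.

```python
from pathlib import Path

# DLLs shipped in TensorRT-8.0.3.4's "lib" folder (from the list above)
TENSORRT_DLLS = [
    "nvinfer.dll",
    "nvinfer_plugin.dll",
    "nvonnxparser.dll",
    "nvparsers.dll",
]

def missing_dlls(bin_dir, required=TENSORRT_DLLS):
    """Return the required DLLs that are NOT present in bin_dir."""
    bin_path = Path(bin_dir)
    return [name for name in required if not (bin_path / name).is_file()]

# Assumption: default CUDA 11.4 install location; change if yours differs
cuda_bin = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4\bin"
missing = missing_dlls(cuda_bin)
print("missing:", missing if missing else "none, all TensorRT DLLs found")
```

If any DLLs are reported missing, repeat the copy step before trying to load the TensorRT backend.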
graphsurgeon-0.4.5-py2.py3-none-any.whl
uff-0.6.9-py2.py3-none-any.whl
onnx_graphsurgeon-0.3.10-py2.py3-none-any.whl
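After running pip on those wheels, a quick sanity check that they are importable can save debugging time later. The module names below are my guess based on the wheel filenames; adjust if pip reports different names.

```python
from importlib.util import find_spec

def installed(module_name):
    """True if the module can be found by the current Python interpreter."""
    return find_spec(module_name) is not None

# Module names assumed from the wheel filenames above
for mod in ("graphsurgeon", "uff", "onnx_graphsurgeon"):
    print(mod, "OK" if installed(mod) else "MISSING")
```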
On my RTX 3060, the TensorRT version of vsDPIR is only about 5% faster than the CUDA version.