Installing and verifying TensorRT 7.0 on Ubuntu


Installation environment
Ubuntu 18.04
TensorRT-7.0.0.11
CUDA 10.0
cuDNN 7.6.5
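Before starting, it may help to confirm that the versions above are what is actually installed on the machine. A minimal check, assuming the default /usr/local/cuda location (the cuDNN header may instead be under /usr/include if cuDNN was installed from a .deb package):

    nvcc --version                                                      # expect "release 10.0"
    grep -A 2 "#define CUDNN_MAJOR" /usr/local/cuda/include/cudnn.h     # expect 7 / 6 / 5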

Installing TensorRT 7.0 (Tar File Installation)
This section contains instructions for installing TensorRT from a tar file.
Note: Before issuing the following commands, you’ll need to replace 7.x.x.x with your specific TensorRT version. The following commands are examples.
Procedure
  • Install the following dependencies, if not already present: CUDA 10.0, cuDNN 7.6.5, and Python 3 (optional).
  • Download the TensorRT tar file that matches the Linux distribution you are using.
  • Choose where you want to install TensorRT. This tar file will install everything into a subdirectory called TensorRT-7.x.x.x.
  • Unpack the tar file.
    version="7.x.x.x"  
    os=""  
    arch=$(uname -m)      
    cuda="cuda-x.x"   
    cudnn="cudnn8.x"   
    tar xzvf TensorRT-${version}.${os}.${arch}-gnu.${cuda}.${cudnn}.tar.gz  
    
    Where:
  • 7.x.x.x is your TensorRT version (7.0.0.11 here)
  • os is one of: Ubuntu-16.04, Ubuntu-18.04, CentOS-7.6
  • cuda-x.x is your CUDA version (cuda-10.0 for the environment above)
  • cudnn7.x is your cuDNN version (cudnn7.6 matches the cuDNN 7.6.5 used here). The unpacked directory will have sub-directories like lib, include, data, etc. A concrete command for this environment is shown after the directory listing below.
  • ls TensorRT-${version}
    bin  data  doc  graphsurgeon  include  lib  python  samples  targets  TensorRT-Release-Notes.pdf  uff
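    As a concrete sketch for the environment listed at the top of this article (TensorRT 7.0.0.11 / Ubuntu 18.04 / CUDA 10.0 / cuDNN 7.6.5), the variables would look roughly as follows; the exact tarball name is assumed to follow the naming pattern above, so compare it against the file you actually downloaded:
    version="7.0.0.11"
    os="Ubuntu-18.04"
    arch=$(uname -m)          # x86_64 on a typical desktop
    cuda="cuda-10.0"
    cudnn="cudnn7.6"
    tar xzvf TensorRT-${version}.${os}.${arch}-gnu.${cuda}.${cudnn}.tar.gz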
    
  • Add the absolute path of the TensorRT lib directory to the environment variable LD_LIBRARY_PATH:
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<TensorRT-${version}/lib>
    
  • Install the Python TensorRT wheel file.
    cd TensorRT-${version}/python
    
    If using Python 2.7:
    sudo pip2 install tensorrt-*-cp27-none-linux_x86_64.whl
    
    If using Python 3.x (replace cp3x with the tag for your Python minor version, for example cp36 for Python 3.6):
    sudo pip3 install tensorrt-*-cp3x-none-linux_x86_64.whl
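    As a quick sanity check that the Python bindings installed correctly (this assumes LD_LIBRARY_PATH already contains the TensorRT lib directory from the earlier step; the printed version should be 7.0.0.11 here):
    python3 -c "import tensorrt; print(tensorrt.__version__)"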
    
  • Install the Python UFF wheel file. This is only required if you plan to use TensorRT with TensorFlow. (Check the exact wheel filenames shipped in the uff and graphsurgeon directories; they can differ between TensorRT releases.)
    cd TensorRT-${version}/uff
    
    If using Python 2.7:
    sudo pip2 install uff-0.6.9-py2.py3-none-any.whl
    
    If using Python 3.x:
    sudo pip3 install uff-0.6.9-py2.py3-none-any.whl
    
    In either case, check the installation with:
    which convert-to-uff
    
  • Install the Python graphsurgeon wheel file.
    cd TensorRT-${version}/graphsurgeon
    
    If using Python 2.7:
    sudo pip2 install graphsurgeon-0.4.5-py2.py3-none-any.whl
    
    If using Python 3.x:
    sudo pip3 install graphsurgeon-0.4.5-py2.py3-none-any.whl
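    As a light sanity check that both converter wheels were registered with pip (importing them directly also requires TensorFlow, so querying pip metadata is the cheaper test):
    pip3 show uff graphsurgeon    # should print a Name/Version block for each package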
    
  • Verify the installation:
    a. Ensure that the installed files are located in the correct directories. For example, run the tree -d command to check whether all supported installed files are in place in the lib, include, data, etc. directories.
    b. Build and run one of the shipped samples, for example, sampleMNIST in the installed directory. You should be able to compile and execute the sample without additional settings. For more information, see the “Hello World” For TensorRT (sampleMNIST) sample. A build-and-run sketch for this environment follows below.
    c. The Python samples are in the samples/python directory.
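    The following is a minimal sketch of step b, assuming the tarball was unpacked under ~/Downloads/TensorRT as in the troubleshooting logs below; adjust the path to your own install location. If the MNIST image files are missing, see the data-download fix at the end of this article.
    cd ~/Downloads/TensorRT/TensorRT-7.0.0.11/samples/sampleMNIST
    make -j8                  # sample binaries are written to TensorRT-7.0.0.11/bin
    cd ../../bin
    ./sample_mnist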


  • Errors encountered when building and running the TensorRT sample (sampleMNIST):
    ~/Downloads/TensorRT/TensorRT-7.0.0.11/samples/sampleMNIST$ make
    ../Makefile.config:7: CUDA_INSTALL_DIR variable is not specified, using /usr/local/cuda by default, use CUDA_INSTALL_DIR=<cuda_directory> to change.
    ../Makefile.config:10: CUDNN_INSTALL_DIR variable is not specified, using $CUDA_INSTALL_DIR by default, use CUDNN_INSTALL_DIR=<cudnn_directory> to change.
    make: Nothing to be done for 'all'.
    
    Solution: add the CUDA/cuDNN locations to ~/.bashrc, then run source ~/.bashrc (or open a new terminal) so they take effect:
    vim ~/.bashrc
    
    # tensorrt cuda and cudnn
    export CUDA_INSTALL_DIR=/usr/local/cuda
    export CUDNN_INSTALL_DIR=/usr/local/cuda
    
    :~/Downloads/TensorRT/TensorRT-7.0.0.11/samples/sampleMNIST$ make -j8
    make: Nothing to be done for 'all'.
    
    Solution: clean and rebuild so the new settings are picked up:
    make clean
    make -j8
    
    ~/Downloads/TensorRT/TensorRT-7.0.0.11/bin$ ./sample_mnist
    ./sample_mnist: error while loading shared libraries: libnvinfer.so.7: cannot open shared object file: No such file or directory
    
    
    Solution: add the TensorRT lib directory to LD_LIBRARY_PATH in ~/.bashrc (adjust the path to where you unpacked the tarball), then run source ~/.bashrc:
    vim ~/.bashrc
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/xxx/Downloads/TensorRT/TensorRT-7.0.0.11/lib
    
    ~/Downloads/TensorRT/TensorRT-7.0.0.11/bin$ ./sample_mnist
    &&&& RUNNING TensorRT.sample_mnist # ./sample_mnist
    [07/14/2020-11:43:08] [I] Building and running a GPU inference engine for MNIST
    [07/14/2020-11:43:10] [I] [TRT] Detected 1 inputs and 1 output network tensors.
    [07/14/2020-11:43:10] [W] [TRT] Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
    Could not find 6.pgm in data directories:
        data/mnist/
        data/samples/mnist/
    &&&& FAILED
    
    Solution: the MNIST test images have not been downloaded yet. Generate them with the provided script:
    cd ~/Downloads/TensorRT/TensorRT-7.0.0.11/data/mnist
    python download_pgms.py
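    After the PGM files have been generated, re-running the sample from the bin directory (same paths as above) should get past the data lookup:
    cd ~/Downloads/TensorRT/TensorRT-7.0.0.11/bin
    ./sample_mnist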
    
    References: install and configure tensorrt 4 on ubuntu 16.04; 【TensorRT】tensorRT 7.0のインストール構成 (TensorRT 7.0 installation and configuration)