A First Exploration of Deep Learning




一、Preparing the machine


Got an ASUS FX503VM ("Flying Fortress" series), basic specs: 7th-generation standard-voltage Core i5 CPU, GTX 1060 6 GB GPU, 16 GB RAM, 128 GB SSD. First installed Ubuntu 16.04.1, allocating 40 GB of space to home.

1、Won't wake up after closing the lid: laptop_mode


After the installation finished, the laptop could not wake up once the lid was closed. Online sources suggested enabling laptop_mode.
To open the configuration tool:
sudo lmt-config-gui

wr@wr-FX503VM:~$ dpkg -l | grep laptop-mode-tools
ii  laptop-mode-tools 1.68-3ubuntu1 all Tools for Power Savings based on battery/AC status

In /etc/default/acpi-support, we will see:
# Note: to enable "laptop mode" (to spin down your hard drive for longer
# periods of time), install the laptop-mode-tools package and configure
# it in /etc/laptop-mode/laptop-mode.conf.

Following that hint, open /etc/laptop-mode/laptop-mode.conf and search for ENABLE_LAPTOP_MODE_ON_BATTERY, ENABLE_LAPTOP_MODE_ON_AC, and ENABLE_LAPTOP_MODE_WHEN_LID_CLOSED. The comments make their meaning clear enough: whether to enable LAPTOP_MODE on battery, on external power, and when the lid is closed. Just set all three to 1.
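Concretely, after editing, the three switches in /etc/laptop-mode/laptop-mode.conf look like this (a sketch of just the relevant lines):

```
ENABLE_LAPTOP_MODE_ON_BATTERY=1
ENABLE_LAPTOP_MODE_ON_AC=1
ENABLE_LAPTOP_MODE_WHEN_LID_CLOSED=1
```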
sudo laptop_mode start  

ok

check:
wr@wr-FX503VM:~$ cat /proc/sys/vm/laptop_mode 
2
sudo gedit /etc/systemd/logind.conf 
#HandleLidSwitch=suspend -> HandleLidSwitch=ignore
#HandleLidSwitchDocked=ignore -> HandleLidSwitchDocked=ignore

sudo systemctl restart systemd-logind

Still not completely normal; later analysis showed the NVIDIA driver needed to be installed.

2、Installing the NVIDIA driver


At first I followed some online material down a blind alley:
sudo gedit /etc/modprobe.d/blacklist.conf, adding the following lines to /etc/modprobe.d/blacklist.conf:
blacklist vga16fb
blacklist nouveau
blacklist rivafb
blacklist rivatv
blacklist nvidiafb
then
sudo update-initramfs -u
After that the machine would not boot. A lesson learned in blood: do not do this~
Getting into the command line at boot brought it back~
Then:
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update

ubuntu-drivers devices:
{
wr@wr-FX503VM:~$ ubuntu-drivers devices
== /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0 ==
modalias : pci:v000010DEd00001C20sv00001043sd0000154Ebc03sc00i00
vendor   : NVIDIA Corporation
driver   : nvidia-396 - third-party free recommended
driver   : nvidia-390 - third-party free
driver   : nvidia-384 - third-party free
driver   : xserver-xorg-video-nouveau - distro free builtin
}

Having picked a suitable driver,
press Ctrl+Alt+F1 to enter a tty text console and stop the LightDM desktop display manager:
sudo service lightdm stop
sudo apt-get install nvidia-396
sudo reboot
sudo nvidia-smi
sudo nvidia-settings

Then, checking deliberately with lsmod | grep nouveau: nouveau is no longer present~

3、Installing CUDA 9.1


On the official site I found the version matching my machine: cuda_9.1.85_387.26_linux.
  • Install CUDA
    sudo ./cuda_9.1.85_387.26_linux.run

    Note: during the run, the second prompt asks whether to install the bundled driver (Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 361.62?). Choose No here: a newer NVIDIA driver was installed earlier, so do not let the installer replace it. For everything else, accept the defaults or answer Yes. The installation then finishes faster than expected...
  • Configure ~/.bashrc
    sudo gedit ~/.bashrc
    export PATH=/usr/local/cuda-9.1/bin${PATH:+:${PATH}}
    export LD_LIBRARY_PATH=/usr/local/cuda-9.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
    export CUDA_HOME=/usr/local/cuda
  • Configure /etc/profile
    sudo gedit /etc/profile
    export PATH=/usr/local/cuda/bin:$PATH

    After saving, create the linker configuration file:
    sudo gedit /etc/ld.so.conf.d/cuda.conf

    Add the following line to the opened file:
        /usr/local/cuda/lib64

    Then run:
        sudo ldconfig

    Reboot, and everything is OK.
    Run the deviceQuery demo to verify:
    wr@wr-FX503VM:/usr/local/cuda-9.1/samples/1_Utilities/deviceQuery$ sudo make
    [sudo] password for wr: 
    "/usr/local/cuda-9.1"/bin/nvcc -ccbin g++ -I../../common/inc  -m64    -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_70,code=compute_70 -o deviceQuery.o -c deviceQuery.cpp
    "/usr/local/cuda-9.1"/bin/nvcc -ccbin g++   -m64      -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_70,code=compute_70 -o deviceQuery deviceQuery.o 
    mkdir -p ../../bin/x86_64/linux/release
    cp deviceQuery ../../bin/x86_64/linux/release
    wr@wr-FX503VM:/usr/local/cuda-9.1/samples/1_Utilities/deviceQuery$ 
    wr@wr-FX503VM:/usr/local/cuda-9.1/samples/1_Utilities/deviceQuery$ ./deviceQuery 
    ./deviceQuery Starting...
    
     CUDA Device Query (Runtime API) version (CUDART static linking)
    
    Detected 1 CUDA Capable device(s)
    
    Device 0: "GeForce GTX 1060"
      CUDA Driver Version / Runtime Version          9.2 / 9.1
      CUDA Capability Major/Minor version number:    6.1
      Total amount of global memory:                 6070 MBytes (6365118464 bytes)
      (10) Multiprocessors, (128) CUDA Cores/MP:     1280 CUDA Cores
      GPU Max Clock rate:                            1671 MHz (1.67 GHz)
      Memory Clock rate:                             4004 Mhz
      Memory Bus Width:                              192-bit
      L2 Cache Size:                                 1572864 bytes
      Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
      Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
      Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
      Total amount of constant memory:               65536 bytes
      Total amount of shared memory per block:       49152 bytes
      Total number of registers available per block: 65536
      Warp size:                                     32
      Maximum number of threads per multiprocessor:  2048
      Maximum number of threads per block:           1024
      Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
      Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
      Maximum memory pitch:                          2147483647 bytes
      Texture alignment:                             512 bytes
      Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
      Run time limit on kernels:                     Yes
      Integrated GPU sharing Host Memory:            No
      Support host page-locked memory mapping:       Yes
      Alignment requirement for Surfaces:            Yes
      Device has ECC support:                        Disabled
      Device supports Unified Addressing (UVA):      Yes
      Supports Cooperative Kernel Launch:            Yes
      Supports MultiDevice Co-op Kernel Launch:      Yes
      Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
      Compute Mode:
         < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
    
    deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.2, CUDA Runtime Version = 9.1, NumDevs = 1
    Result = PASS

    4、Installing cuDNN


    Download the matching cuDNN from the official site.
    After unpacking it, cd into the extracted include directory and run on the command line:
        sudo cp cudnn.h /usr/local/cuda/include/   # copy the header

    Then cd into the extracted lib64 directory, copy the shared libraries, and recreate the symlinks:
        sudo cp lib* /usr/local/cuda/lib64/            # copy the libraries
        cd /usr/local/cuda/lib64/
        sudo rm -rf libcudnn.so libcudnn.so.7          # remove the stale symlinks
        sudo ln -s libcudnn.so.7.1.3 libcudnn.so.7     # versioned link
        sudo ln -s libcudnn.so.7 libcudnn.so           # generic link
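    The order of those ln -s commands matters: the generic name must resolve through the versioned name to the real library. A small Python sketch (a temp directory stands in for /usr/local/cuda/lib64) shows the chain:

```python
import os
import tempfile
from pathlib import Path

# Sketch of the symlink chain the ln -s commands create:
# libcudnn.so -> libcudnn.so.7 -> libcudnn.so.7.1.3 (the real file).
d = Path(tempfile.mkdtemp())
(d / "libcudnn.so.7.1.3").write_bytes(b"\x7fELF")     # stand-in for the real library
os.symlink("libcudnn.so.7.1.3", d / "libcudnn.so.7")  # versioned link
os.symlink("libcudnn.so.7", d / "libcudnn.so")        # generic link

# The generic name resolves through the chain to the versioned file,
# which is what the dynamic linker needs when libcudnn is requested.
resolved = (d / "libcudnn.so").resolve()
print(resolved.name)
```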

    二、Preparing the Android environment


    1、Installing gradle-4.1


    Process omitted.

    2、Installing Android Studio


    Process omitted.

    3、Installing the SDK


    Process omitted.

    4、Installing the NDK


    Process omitted.

    5、Installing adb

    sudo  apt-get  install android-tools-adb

    About adb: https://blog.csdn.net/u012351661/article/details/78201040

    三、Building the TensorFlow environment


    1、Installing Anaconda2-5.1.0-Linux-x86_64 and running TensorFlow inside Anaconda


    Python 2.7; process omitted.

    2、Installing bazel


    Omitted.

    3、Setting up TensorFlow

    git clone --recurse-submodules https://github.com/tensorflow/tensorflow
    
    configure:
    {
    wr@wr-FX503VM:~/Tensorflow_Workspace/tensorflow$ ./configure 
    You have bazel 0.12.0 installed.
    Please specify the location of python. [Default is /home/wr/anaconda2/bin/python]: 
    
    
    Found possible Python library paths:
      /home/wr/anaconda2/lib/python2.7/site-packages
    Please input the desired Python library path to use.  Default is [/home/wr/anaconda2/lib/python2.7/site-packages]
    
    Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]: y
    jemalloc as malloc support will be enabled for TensorFlow.
    
    Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: n
    No Google Cloud Platform support will be enabled for TensorFlow.
    
    Do you wish to build TensorFlow with Hadoop File System support? [Y/n]: n
    No Hadoop File System support will be enabled for TensorFlow.
    
    Do you wish to build TensorFlow with Amazon S3 File System support? [Y/n]: n
    No Amazon S3 File System support will be enabled for TensorFlow.
    
    Do you wish to build TensorFlow with Apache Kafka Platform support? [Y/n]: n
    No Apache Kafka Platform support will be enabled for TensorFlow.
    
    Do you wish to build TensorFlow with XLA JIT support? [y/N]: n
    No XLA JIT support will be enabled for TensorFlow.
    
    Do you wish to build TensorFlow with GDR support? [y/N]: n
    No GDR support will be enabled for TensorFlow.
    
    Do you wish to build TensorFlow with VERBS support? [y/N]: n
    No VERBS support will be enabled for TensorFlow.
    
    Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n
    No OpenCL SYCL support will be enabled for TensorFlow.
    
    Do you wish to build TensorFlow with CUDA support? [y/N]: y
    CUDA support will be enabled for TensorFlow.
    
    Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 9.0]: 9.1
    
    
    Please specify the location where CUDA 9.1 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: 
    
    
    Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]: 7.1
    
    
    Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
    
    
    Do you wish to build TensorFlow with TensorRT support? [y/N]: n
    No TensorRT support will be enabled for TensorFlow.
    
    Please specify the NCCL version you want to use. [Leave empty to default to NCCL 1.3]: 
    
    
    Please specify a list of comma-separated Cuda compute capabilities you want to build with.
    You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
    Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 6.1]
    
    
    Do you want to use clang as CUDA compiler? [y/N]: 
    nvcc will be used as CUDA compiler.
    
    Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]: 
    
    
    Do you wish to build TensorFlow with MPI support? [y/N]: 
    No MPI support will be enabled for TensorFlow.
    
    Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]: 
    
    
    Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: y
    Searching for NDK and SDK installations.
    
    Please specify the home path of the Android NDK to use. [Default is /home/wr/Android/Sdk/ndk-bundle]: /home/wr/Android_Workspace/SDK/ndk-bundle
    
    
    The path /home/wr/Android_Workspace/SDK/ndk-bundle or its child file "source.properties" does not exist.
    Please specify the home path of the Android NDK to use. [Default is /home/wr/Android/Sdk/ndk-bundle]: /home/wr/Android_Workspace/SDK/ndk-bundle
    
    
    Writing android_ndk_workspace rule.
    Please specify the home path of the Android SDK to use. [Default is /home/wr/Android/Sdk]: /home/wr/Android_Workspace/SDK
    
    
    Please specify the Android SDK API level to use. [Available levels: ['25', '26', '27']] [Default is 27]: 25
    
    
    Please specify an Android build tools version to use. [Available versions: ['25.0.3', '26.0.2', '27.0.3']] [Default is 27.0.3]: 25.0.3
    
    
    Writing android_sdk_workspace rule.
    
    Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
        --config=mkl            # Build with MKL support.
        --config=monolithic     # Config for mostly static monolithic build.
    Configuration finished
    wr@wr-FX503VM:~/Tensorflow_Workspace/tensorflow$ 
    
    }

    Then just compile with bazel.

    4、Compiling and installing TensorFlow


    Once the build finishes, build and install the resulting pip package
    (see https://blog.csdn.net/briliantly/article/details/79566013; pip install -U mock may be needed first). Install with pip:
    bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
    bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg  
    2018  04  27    13:38:37 CST : === Output wheel file is in: /tmp/tensorflow_pkg
    
    
    wr@wr-FX503VM:~/Tensorflow_Workspace/tensorflow$ cd ///tmp/tensorflow_pkg
    wr@wr-FX503VM:/tmp/tensorflow_pkg$ ls
    tensorflow-1.8.0rc1-cp27-cp27mu-linux_x86_64.whl

    In particular, do not use "sudo pip install", otherwise it installs into /usr/local/lib/python2.7 instead of Anaconda.
    pip install /tmp/tensorflow_pkg/tensorflow-1.8.0rc1-cp27-cp27mu-linux_x86_64.whl
    wr@wr-FX503VM:~$ pip install /tmp/tensorflow_pkg/tensorflow-1.8.0rc1-cp27-cp27mu-linux_x86_64.whl
    Processing /tmp/tensorflow_pkg/tensorflow-1.8.0rc1-cp27-cp27mu-linux_x86_64.whl
    Collecting protobuf>=3.4.0 (from tensorflow==1.8.0rc1)
      Downloading https://files.pythonhosted.org/packages/9d/61/54c3a9cfde6ffe0ca6a1786ddb8874263f4ca32e7693ad383bd8cf935015/protobuf-3.5.2.post1-cp27-cp27mu-manylinux1_x86_64.whl (6.4MB)
        100% |████████████████████████████████| 6.4MB 2.2MB/s 
    Collecting astor>=0.6.0 (from tensorflow==1.8.0rc1)
      Downloading https://files.pythonhosted.org/packages/b2/91/cc9805f1ff7b49f620136b3a7ca26f6a1be2ed424606804b0fbcf499f712/astor-0.6.2-py2.py3-none-any.whl
    Collecting backports.weakref>=1.0rc1 (from tensorflow==1.8.0rc1)
      Downloading https://files.pythonhosted.org/packages/88/ec/f598b633c3d5ffe267aaada57d961c94fdfa183c5c3ebda2b6d151943db6/backports.weakref-1.0.post1-py2.py3-none-any.whl
    Requirement already satisfied: wheel in ./anaconda2/lib/python2.7/site-packages (from tensorflow==1.8.0rc1) (0.30.0)
    Requirement already satisfied: mock>=2.0.0 in ./anaconda2/lib/python2.7/site-packages (from tensorflow==1.8.0rc1) (2.0.0)
    Requirement already satisfied: enum34>=1.1.6 in ./anaconda2/lib/python2.7/site-packages (from tensorflow==1.8.0rc1) (1.1.6)
    Collecting gast>=0.2.0 (from tensorflow==1.8.0rc1)
      Downloading https://files.pythonhosted.org/packages/5c/78/ff794fcae2ce8aa6323e789d1f8b3b7765f601e7702726f430e814822b96/gast-0.2.0.tar.gz
    Collecting termcolor>=1.1.0 (from tensorflow==1.8.0rc1)
      Downloading https://files.pythonhosted.org/packages/8a/48/a76be51647d0eb9f10e2a4511bf3ffb8cc1e6b14e9e4fab46173aa79f981/termcolor-1.1.0.tar.gz
    Collecting absl-py>=0.1.6 (from tensorflow==1.8.0rc1)
      Downloading https://files.pythonhosted.org/packages/90/6b/ba04a9fe6aefa56adafa6b9e0557b959e423c49950527139cb8651b0480b/absl-py-0.2.0.tar.gz (82kB)
        100% |████████████████████████████████| 92kB 6.4MB/s 
    Collecting tensorboard<1.8.0,>=1.7.0 (from tensorflow==1.8.0rc1)
      Downloading https://files.pythonhosted.org/packages/6e/5b/18f50b69b8af42f93c47cd8bf53337347bc1974480a10de51fdd7f8fd48b/tensorboard-1.7.0-py2-none-any.whl (3.1MB)
        100% |████████████████████████████████| 3.1MB 3.5MB/s 
    Requirement already satisfied: six>=1.10.0 in ./anaconda2/lib/python2.7/site-packages (from tensorflow==1.8.0rc1) (1.11.0)
    Collecting grpcio>=1.8.6 (from tensorflow==1.8.0rc1)
      Downloading https://files.pythonhosted.org/packages/0d/54/b647a6323be6526be27b2c90bb042769f1a7a6e59bd1a5f2eeb795bfece4/grpcio-1.11.0-cp27-cp27mu-manylinux1_x86_64.whl (8.7MB)
        100% |████████████████████████████████| 8.7MB 3.1MB/s 
    Requirement already satisfied: numpy>=1.13.3 in ./anaconda2/lib/python2.7/site-packages (from tensorflow==1.8.0rc1) (1.14.0)
    Requirement already satisfied: setuptools in ./anaconda2/lib/python2.7/site-packages (from protobuf>=3.4.0->tensorflow==1.8.0rc1) (38.4.0)
    Requirement already satisfied: funcsigs>=1; python_version < "3.3" in ./anaconda2/lib/python2.7/site-packages (from mock>=2.0.0->tensorflow==1.8.0rc1) (1.0.2)
    Requirement already satisfied: pbr>=0.11 in ./anaconda2/lib/python2.7/site-packages (from mock>=2.0.0->tensorflow==1.8.0rc1) (4.0.2)
    Collecting bleach==1.5.0 (from tensorboard<1.8.0,>=1.7.0->tensorflow==1.8.0rc1)
      Downloading https://files.pythonhosted.org/packages/33/70/86c5fec937ea4964184d4d6c4f0b9551564f821e1c3575907639036d9b90/bleach-1.5.0-py2.py3-none-any.whl
    Requirement already satisfied: futures>=3.1.1; python_version < "3" in ./anaconda2/lib/python2.7/site-packages (from tensorboard<1.8.0,>=1.7.0->tensorflow==1.8.0rc1) (3.2.0)
    Collecting markdown>=2.6.8 (from tensorboard<1.8.0,>=1.7.0->tensorflow==1.8.0rc1)
      Downloading https://files.pythonhosted.org/packages/6d/7d/488b90f470b96531a3f5788cf12a93332f543dbab13c423a5e7ce96a0493/Markdown-2.6.11-py2.py3-none-any.whl (78kB)
        100% |████████████████████████████████| 81kB 5.3MB/s 
    Requirement already satisfied: werkzeug>=0.11.10 in ./anaconda2/lib/python2.7/site-packages (from tensorboard<1.8.0,>=1.7.0->tensorflow==1.8.0rc1) (0.14.1)
    Collecting html5lib==0.9999999 (from tensorboard<1.8.0,>=1.7.0->tensorflow==1.8.0rc1)
      Downloading https://files.pythonhosted.org/packages/ae/ae/bcb60402c60932b32dfaf19bb53870b29eda2cd17551ba5639219fb5ebf9/html5lib-0.9999999.tar.gz (889kB)
        100% |████████████████████████████████| 890kB 3.7MB/s 
    Building wheels for collected packages: gast, termcolor, absl-py, html5lib
      Running setup.py bdist_wheel for gast ... done
      Stored in directory: /home/wr/.cache/pip/wheels/9a/1f/0e/3cde98113222b853e98fc0a8e9924480a3e25f1b4008cedb4f
      Running setup.py bdist_wheel for termcolor ... done
      Stored in directory: /home/wr/.cache/pip/wheels/7c/06/54/bc84598ba1daf8f970247f550b175aaaee85f68b4b0c5ab2c6
      Running setup.py bdist_wheel for absl-py ... done
      Stored in directory: /home/wr/.cache/pip/wheels/23/35/1d/48c0a173ca38690dd8dfccfa47ffc750db48f8989ed898455c
      Running setup.py bdist_wheel for html5lib ... done
      Stored in directory: /home/wr/.cache/pip/wheels/50/ae/f9/d2b189788efcf61d1ee0e36045476735c838898eef1cad6e29
    Successfully built gast termcolor absl-py html5lib
    grin 1.2.1 requires argparse>=1.1, which is not installed.
    Installing collected packages: protobuf, astor, backports.weakref, gast, termcolor, absl-py, html5lib, bleach, markdown, tensorboard, grpcio, tensorflow
      Found existing installation: html5lib 1.0.1
        Uninstalling html5lib-1.0.1:
          Successfully uninstalled html5lib-1.0.1
      Found existing installation: bleach 2.1.2
        Uninstalling bleach-2.1.2:
          Successfully uninstalled bleach-2.1.2
    Successfully installed absl-py-0.2.0 astor-0.6.2 backports.weakref-1.0.post1 bleach-1.5.0 gast-0.2.0 grpcio-1.11.0 html5lib-0.9999999 markdown-2.6.11 protobuf-3.5.2.post1 tensorboard-1.7.0 tensorflow-1.8.0rc1 termcolor-1.1.0
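    To double-check that the wheel went into Anaconda's site-packages rather than the system Python, it helps to confirm which interpreter (and hence which pip) is active; for example:

```python
import sys

# The interpreter that "pip" belongs to determines where packages land.
# On the machine in this post both lines should point under /home/wr/anaconda2.
print(sys.executable)  # path of the running Python binary
print(sys.prefix)      # root of its installation (and its site-packages)
```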

    5、Testing

    wr@wr-FX503VM:~$ python
    Python 2.7.14 |Anaconda, Inc.| (default, Dec  7 2017, 17:05:42) 
    [GCC 7.2.0] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import tensorflow as tf
    /home/wr/anaconda2/lib/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
      from ._conv import register_converters as _register_converters
    >>> print(tf.__version__)
    1.8.0-rc1
    >>> 

    四、TensorFlow's Android demo


    https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android
    bazel build -c opt //tensorflow/examples/android:tensorflow_demo
    {
    wr@wr-FX503VM:~/Tensorflow_Workspace/tensorflow$ bazel build -c opt //tensorflow/examples/android:tensorflow_demo
    ERROR: /home/wr/Tensorflow_Workspace/tensorflow/WORKSPACE:105:1: no such package '@androidsdk//': Bazel requires Android build tools version 26.0.1 or newer, 25.0.3 was provided and referenced by '//external:android/dx_jar_import'
    ERROR: Analysis of target '//tensorflow/examples/android:tensorflow_demo' failed; build aborted: Loading failed
    INFO: Elapsed time: 2.783s
    FAILED: Build did NOT complete successfully (13 packages loaded)
    }

    SDK build tools: 26.0.2, NDK: r14b
    external/eigen_archive/unsupported/Eigen/CXX11/Tensor:84:10: fatal error: 'cuda_runtime.h' file not found

    Errors occurred here; reconfigure TensorFlow for the Android build.

    1、Configuring TensorFlow for Android


    The configuration has to be redone:
    wr@wr-FX503VM:~/Tensorflow_Workspace/tensorflow$ ./configure 
    You have bazel 0.12.0 installed.
    Please specify the location of python. [Default is /home/wr/anaconda2/bin/python]: 
    
    
    Found possible Python library paths:
      /home/wr/anaconda2/lib/python2.7/site-packages
    Please input the desired Python library path to use.  Default is [/home/wr/anaconda2/lib/python2.7/site-packages]
    
    Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]: n
    No jemalloc as malloc support will be enabled for TensorFlow.
    
    Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: n
    No Google Cloud Platform support will be enabled for TensorFlow.
    
    Do you wish to build TensorFlow with Hadoop File System support? [Y/n]: n
    No Hadoop File System support will be enabled for TensorFlow.
    
    Do you wish to build TensorFlow with Amazon S3 File System support? [Y/n]: n
    No Amazon S3 File System support will be enabled for TensorFlow.
    
    Do you wish to build TensorFlow with Apache Kafka Platform support? [Y/n]: n
    No Apache Kafka Platform support will be enabled for TensorFlow.
    
    Do you wish to build TensorFlow with XLA JIT support? [y/N]: n
    No XLA JIT support will be enabled for TensorFlow.
    
    Do you wish to build TensorFlow with GDR support? [y/N]: n
    No GDR support will be enabled for TensorFlow.
    
    Do you wish to build TensorFlow with VERBS support? [y/N]: n
    No VERBS support will be enabled for TensorFlow.
    
    Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n
    No OpenCL SYCL support will be enabled for TensorFlow.
    
    Do you wish to build TensorFlow with CUDA support? [y/N]: n
    No CUDA support will be enabled for TensorFlow.
    
    Do you wish to download a fresh release of clang? (Experimental) [y/N]: n
    Clang will not be downloaded.
    
    Do you wish to build TensorFlow with MPI support? [y/N]: n
    No MPI support will be enabled for TensorFlow.
    
    Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]: 
    
    
    The WORKSPACE file has at least one of ["android_sdk_repository", "android_ndk_repository"] already set. Will not ask to help configure the WORKSPACE. Please delete the existing rules to activate the helper.
    
    Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
        --config=mkl            # Build with MKL support.
        --config=monolithic     # Config for mostly static monolithic build.
    Configuration finished

    Then run again:
    bazel build -c opt //tensorflow/examples/android:tensorflow_demo
    {
    Target //tensorflow/examples/android:tensorflow_demo up-to-date:
      bazel-bin/tensorflow/examples/android/tensorflow_demo_deploy.jar
      bazel-bin/tensorflow/examples/android/tensorflow_demo_unsigned.apk
      bazel-bin/tensorflow/examples/android/tensorflow_demo.apk
    INFO: Elapsed time: 706.365s, Critical Path: 73.09s
    INFO: Build completed successfully, 970 total actions
    
    }

    ok

    2、Compiling with Android Studio and bazel


    Open Android Studio and open the project /home/wr/Tensorflow_Workspace/tensorflow/tensorflow/examples/android.
    Open the gradle file and change the bazelLocation setting:
    //def bazelLocation = '/usr/local/bin/bazel'
    def bazelLocation = '/home/wr/.bazel/bin/bazel'

    build project
    Ok

    3、Running


    Omitted.

    五、Transfer learning based on Inception V3


    1、Install PyCharm: omitted. 2、Install TensorBoard: omitted.
    Use TensorBoard to view a TensorFlow model:
    python /home/wr/Tensorflow_Workspace/tensorflow/tensorflow/python/tools/import_pb_to_tensorboard.py --model_dir=tensorflow_inception_graph_flower.pb --log_dir=tensorboard_graph
    tensorboard --logdir=tensorboard_graph

    Or:
    tensorboard --logdir=tensorboard_graph --debug

    Noticed nothing showed up: the graph image never appeared, only TypeError: GetNext().
    find tensorboard_graph | grep tfevents
    tensorboard --inspect --logdir tensorboard_graph/
    Searching online later showed this is a TensorBoard bug:
    { Problem: TypeError: GetNext() takes exactly 1 argument (2 given)
    https://github.com/tensorflow/tensorboard/pull/1086/files/e303ebd339050756f451f033b15d75470d57e02a#diff-59cb290472c659c40df2436665c48aae
    Applying the change from that page to /home/wr/anaconda2/lib/python2.7/site-packages/tensorboard/backend/event_processing/event_file_loader.py fixes it, ok }
    python /home/wr/Tensorflow_Workspace/tensorflow/tensorflow/python/tools/import_pb_to_tensorboard.py --model_dir=tensorflow_inception_graph_flower.pb --log_dir=tensorboard_graph
    tensorboard --logdir=tensorboard_graph
    python /home/wr/Tensorflow_Workspace/tensorflow/tensorflow/python/tools/import_pb_to_tensorboard.py --model_dir=tensorflow_inception_graph_v3_flower_striped.pb --log_dir=tensorboard_graph
    tensorboard --logdir=tensorboard_graph

    Following the printed prompt, open the URL in a browser to see the structure graphs of the different models.

    3、Training a model on the PC to recognize flowers, transferring from the Inception V3 model


    Since each training image is used many times, the feature vector computed for it by the Inception-v3 model can be cached to a file.
    The name of the image input tensor: JPEG_DATA_TENSOR_NAME = 'DecodeJpeg/contents:0'.
    The name of the tensor holding the bottleneck-layer result in the Inception-v3 model: in Google's published Inception-v3 model this tensor is called 'pool_3/_reshape:0'. During training, a tensor's name can be obtained via tensor.name. BOTTLENECK_TENSOR_NAME = 'pool_3/_reshape:0'
    The flower images are run through the Inception-v3 model to obtain feature vectors, and then a simple fully connected layer doing linear classification is built on top. Because the trained Inception-v3 model already abstracts the raw images into feature vectors that are easy to classify, no complicated neural network needs to be trained for this new classification task.
    See transfer_flower.py for the code.
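    The final layer described above can be sketched in plain NumPy (a minimal illustration, not the actual transfer_flower.py: the 2048-dimensional "bottleneck" features and five flower classes below are synthetic stand-ins for the cached Inception-v3 outputs):

```python
import numpy as np

# A linear softmax classifier trained on cached bottleneck vectors:
# exactly the "simple fully connected layer" role in the transfer setup.
rng = np.random.default_rng(0)
N_CLASSES, DIM, N = 5, 2048, 200

centers = rng.normal(size=(N_CLASSES, DIM))            # one cluster per class
labels = rng.integers(0, N_CLASSES, size=N)
feats = centers[labels] + 0.1 * rng.normal(size=(N, DIM))  # fake bottlenecks

W = np.zeros((DIM, N_CLASSES))                         # fully connected layer
b = np.zeros(N_CLASSES)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)               # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.1
for _ in range(100):                                   # plain gradient descent
    grad = softmax(feats @ W + b)
    grad[np.arange(N), labels] -= 1.0                  # d(cross-entropy)/d(logits)
    W -= lr * (feats.T @ grad) / N
    b -= lr * grad.mean(axis=0)

accuracy = float((np.argmax(feats @ W + b, axis=1) == labels).mean())
print(accuracy)
```

Because the bottleneck features are already well separated, this tiny layer reaches high accuracy quickly, which is the whole point of reusing Inception-v3.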
    Lesson: generating the .pb model
    constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph_def, ["final_training_ops/Softmax"])
    with gfile.FastGFile(os.path.join(MODEL_OUT_DIR, MODEL_FILE_OUT), mode='wb') as f:
        f.write(constant_graph.SerializeToString())

    The convert_variables_to_constants step is quite important: without it the variables are not saved into the model. The variables are exactly what we trained so laboriously~ Load a model without its variables on Android and it will keep demanding that you initialize them~ The variables are the crux of the model; saving a model without them and initializing fresh on Android would be pointless~

    4、The transfer structure


    Combine the two models (v3, flower). For v3: input "Mul", output "pool_3/_reshape".
    Then feed v3's bottleneck output into flower's input: "pool_3/_reshape" -> "BottleneckInputPlaceholder".
    For flower: input "BottleneckInputPlaceholder", output "final_training_ops/Softmax".

    5、strip_unused

    bazel build tensorflow/python/tools:strip_unused && \
    bazel-bin/tensorflow/python/tools/strip_unused \
    --input_graph=/home/wr/Tensorflow_Workspace/code/transfer_learning/model/tensorflow_inception_graph_v3.pb \
    --output_graph=/home/wr/Tensorflow_Workspace/code/transfer_learning/model/tmp/tensorflow_inception_graph_v3_bootleneck_striped.pb \
    --input_node_names="Mul"  \
    --output_node_names="pool_3/_reshape"  \
    --input_binary=true
    
    bazel build tensorflow/python/tools:strip_unused && \
    bazel-bin/tensorflow/python/tools/strip_unused \
    --input_graph=/home/wr/Tensorflow_Workspace/code/transfer_learning/model/tmp/tensorflow_inception_graph_flower.pb \
    --output_graph=/home/wr/Tensorflow_Workspace/code/transfer_learning/model/tmp/tensorflow_inception_graph_v3_flower_striped.pb \
    --input_node_names="BottleneckInputPlaceholder"  \
    --output_node_names="final_training_ops/Softmax"  \
    --input_binary=true

    Check with TensorBoard:
    /home/wr/Tensorflow_Workspace/code/transfer_learning/model/tmp
    python /home/wr/Tensorflow_Workspace/tensorflow/tensorflow/python/tools/import_pb_to_tensorboard.py --model_dir=tensorflow_inception_graph_v3_bootleneck_striped.pb --log_dir=tensorboard_graph
    tensorboard --logdir=tensorboard_graph
    
    python /home/wr/Tensorflow_Workspace/tensorflow/tensorflow/python/tools/import_pb_to_tensorboard.py --model_dir=tensorflow_inception_graph_v3_flower_striped.pb --log_dir=tensorboard_graph
    tensorboard --logdir=tensorboard_graph
    

    This yields the two stripped models, referred to below simply as V3 and Flower.

    6、Java


    Load the two models separately:
    private static final String V3_BOTTLENECK_INPUT_NAME = "Mul";
    private static final String V3_BOTTLENECK_OUTPUT_NAME = "pool_3/_reshape";
    private static final String V3_BOTTLENECK_MODEL_FILE = "file:///android_asset/tensorflow_inception_graph_v3_bootleneck_striped.pb";
    
    private static final String FLOWER_INPUT_NAME = "BottleneckInputPlaceholder";
    private static final String FLOWER_OUTPUT_NAME = "final_training_ops/Softmax";
    private static final String FLOWER_MODEL_FILE = "file:///android_asset/tensorflow_inception_graph_v3_flower_striped.pb";
    private static final String FLOWER_LABEL_FILE =
              "file:///android_asset/imagenet_comp_graph_label_strings_flower.txt";

    First run the V3 model to get the bottleneck-layer output: inferenceNeckInterface.feed(neckInputNeckName, floatValues, 1, inputSize, inputSize, 3); inferenceNeckInterface.run(neckOnputNames, logStats); inferenceNeckInterface.fetch(neckOutputNeckName, neck_outputs);
    Then feed the bottleneck output into the Flower model: inferenceInterface.feed(inputName, neck_outputs, 1, neckSize); inferenceInterface.run(outputNames, logStats); inferenceInterface.fetch(outputName, outputs); which yields the result.
    See InceptionV3Classifier.java and ClassifierActivity.java for the code.
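    A sketch of the post-processing after the final fetch(): given the Softmax scores and the lines of the label file, pick the top-k predictions. The labels and scores below are made up for illustration:

```python
import numpy as np

# Pair the fetched Softmax scores with the label-file lines and take the
# k highest-scoring classes, as the classifier's recognition result.
labels = ["daisy", "dandelion", "roses", "sunflowers", "tulips"]
outputs = np.array([0.02, 0.05, 0.70, 0.13, 0.10])  # fetched Softmax scores

k = 3
top = np.argsort(outputs)[::-1][:k]                 # indices of the k best scores
results = [(labels[i], float(outputs[i])) for i in top]
print(results)
```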

    7、Running


    Omitted.

    六、Single Shot MultiBox Detector


    Change ssd_mobilenet_v1 to ssd_mobilenet_v2; reference: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
    Download ssd_mobilenet_v2.pb and directly replace TF_OD_API_MODEL_FILE in DetectorActivity.java:
    private static final int TF_OD_API_INPUT_SIZE = 300;
    //  private static final String TF_OD_API_MODEL_FILE =
    //      "file:///android_asset/ssd_mobilenet_v1_android_export.pb";
      private static final String TF_OD_API_MODEL_FILE =
              "file:///android_asset/ssd_mobilenet_v2.pb";
      private static final String TF_OD_API_LABELS_FILE = "file:///android_asset/coco_labels_list_v2.txt";

    At the same time:
    private static final DetectorMode MODE = DetectorMode.TF_OD_API;

    Ok.