Updated Dockerfile for OpenvinoEP #1362

Merged: 8 commits, Jul 17, 2019
74 changes: 52 additions & 22 deletions dockerfiles/Dockerfile.openvino
@@ -3,35 +3,65 @@
# Licensed under the MIT License.
#--------------------------------------------------------------------------

ARG OS_VERSION=16.04
FROM ubuntu:${OS_VERSION}
FROM ubuntu:16.04

ARG PYTHON_VERSION=3.5
ARG OPENVINO_VERSION=2018_R5
ARG TARGET_DEVICE=CPU_FP32
RUN apt update && \
apt -y install python3.5 python3-pip zip x11-apps lsb-core wget cpio sudo libboost-python-dev libpng-dev zlib1g-dev git libnuma1 ocl-icd-libopencl1 clinfo libboost-filesystem1.58.0 libboost-thread1.58.0 protobuf-compiler libprotoc-dev && pip3 install numpy networkx opencv-python pytest && locale-gen en_US.UTF-8 && update-locale LANG=en_US.UTF-8

ENV DEBIAN_FRONTEND noninteractive
ARG DEVICE=CPU_FP32
ARG ONNXRUNTIME_REPO=https://github.com/microsoft/onnxruntime
ARG ONNXRUNTIME_BRANCH=master

RUN apt-get update && \
apt-get install -y sudo git bash

ENV PATH="/opt/cmake/bin:${PATH}"
RUN git clone --branch preview-v0.7 --recursive https://github.com/intel/onnxruntime onnxruntime
RUN /onnxruntime/tools/ci_build/github/linux/docker/scripts/install_ubuntu.sh -p ${PYTHON_VERSION} && \
/onnxruntime/tools/ci_build/github/linux/docker/scripts/install_deps.sh
ENV pattern="COMPONENTS=DEFAULTS"
ENV replacement="COMPONENTS=intel-inference_engine_sdk__noarch;intel-inference_engine_cpu__noarch;intel-inference_engine_gpu__noarch;intel-inference_engine_vpu__noarch;intel-inference_engine_gna__noarch;intel-inference_engine_hddl__noarch;intel-model_optimizer__noarch;intel-opencv_ubuntu_16_rel__noarch"
COPY l_openvino_*.tgz .
RUN tar -xzf l_openvino_toolkit*.tgz && \
rm -rf l_openvino_toolkit*.tgz && \
cd l_openvino_toolkit* && \
sed -i "s/$pattern/$replacement/" silent.cfg && \
sed -i 's/decline/accept/g' silent.cfg && \
./install.sh -s silent.cfg && \
/bin/bash -c "source /opt/intel/computer_vision_sdk/bin/setupvars.sh" && \
./install_cv_sdk_dependencies.sh && \
cd - && \
rm -rf l_openvino_toolkit*

RUN /onnxruntime/tools/ci_build/github/linux/docker/scripts/install_openvino.sh -o ${OPENVINO_VERSION}
ENV LD_LIBRARY_PATH=/usr/lib:/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
ENV INSTALLDIR=/opt/intel/computer_vision_sdk
ENV INTEL_CVSDK_DIR=${INSTALLDIR}
ENV LD_LIBRARY_PATH=${INSTALLDIR}/deployment_tools/model_optimizer/model_optimizer_caffe/bin:${LD_LIBRARY_PATH}
ENV ModelOptimizer_ROOT_DIR=${INSTALLDIR}/deployment_tools/model_optimizer/model_optimizer_caffe
ENV InferenceEngine_DIR=${INTEL_CVSDK_DIR}/deployment_tools/inference_engine/share
ENV IE_PLUGINS_PATH=${INTEL_CVSDK_DIR}/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64
ENV LD_LIBRARY_PATH=/opt/intel/opencl:${INSTALLDIR}/deployment_tools/inference_engine/external/cldnn/lib:${INSTALLDIR}/inference_engine/external/gna/lib:${INSTALLDIR}/deployment_tools/inference_engine/external/mkltiny_lnx/lib:${INSTALLDIR}/deployment_tools/inference_engine/external/omp/lib:${IE_PLUGINS_PATH}:${LD_LIBRARY_PATH}
ENV OpenCV_DIR=${INSTALLDIR}/opencv/share/OpenCV
ENV LD_LIBRARY_PATH=${INSTALLDIR}/opencv/lib:${INSTALLDIR}/opencv/share/OpenCV/3rdparty/lib:${LD_LIBRARY_PATH}
ENV PATH=${INTEL_CVSDK_DIR}/deployment_tools/model_optimizer:$PATH
ENV PYTHONPATH=${INTEL_CVSDK_DIR}/deployment_tools/model_optimizer:$PYTHONPATH
ENV PYTHONPATH=$INTEL_CVSDK_DIR/python/python3.5:${INTEL_CVSDK_DIR}/python/python3.5/ubuntu16:${PYTHONPATH}
ENV HDDL_INSTALL_DIR=${INSTALLDIR}/deployment_tools/inference_engine/external/hddl
ENV LD_LIBRARY_PATH=${INSTALLDIR}/deployment_tools/inference_engine/external/hddl/lib:$LD_LIBRARY_PATH

WORKDIR /
RUN wget https://github.com/intel/compute-runtime/releases/download/19.15.12831/intel-gmmlib_19.1.1_amd64.deb
RUN wget https://github.com/intel/compute-runtime/releases/download/19.15.12831/intel-igc-core_1.0.2-1787_amd64.deb
RUN wget https://github.com/intel/compute-runtime/releases/download/19.15.12831/intel-igc-opencl_1.0.2-1787_amd64.deb
RUN wget https://github.com/intel/compute-runtime/releases/download/19.15.12831/intel-opencl_19.15.12831_amd64.deb
RUN wget https://github.com/intel/compute-runtime/releases/download/19.15.12831/intel-ocloc_19.15.12831_amd64.deb

ENV INTEL_CVSDK_DIR /data/dldt
RUN sudo dpkg -i *.deb && rm -rf *.deb

ENV LD_LIBRARY_PATH $INTEL_CVSDK_DIR/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64:$INTEL_CVSDK_DIR/deployment_tools/inference_engine/temp/omp/lib:/usr/local/openblas/lib:$LD_LIBRARY_PATH

ENV PATH $INTEL_CVSDK_DIR/deployment_tools/model_optimizer:$PATH
ENV PYTHONPATH $INTEL_CVSDK_DIR/deployment_tools/model_optimizer:$INTEL_CVSDK_DIR/tools:$PYTHONPATH
RUN mkdir -p /opt/cmake/bin

ENV PATH /opt/cmake/bin:$PATH
ENV LANG en_US.UTF-8
RUN wget https://github.com/Kitware/CMake/releases/download/v3.13.2/cmake-3.13.2-Linux-x86_64.tar.gz && \
tar -xf cmake-3.13.2-Linux-x86_64.tar.gz --strip 1 -C /opt/cmake && rm -rf /cmake-3.13.2-Linux-x86_64.tar.gz

RUN /bin/bash -c "source /opt/intel/computer_vision_sdk/bin/setupvars.sh" && \
git clone --recursive -b $ONNXRUNTIME_BRANCH $ONNXRUNTIME_REPO /onnxruntime && \
cd /onnxruntime/cmake/external/onnx && python3 setup.py install && \
cd /onnxruntime && ./build.sh --config RelWithDebInfo --update --build --parallel --use_openvino $DEVICE --build_wheel && pip3 install /onnxruntime/build/Linux/RelWithDebInfo/dist/*-linux_x86_64.whl && rm -rf /onnxruntime


RUN mkdir -p /onnxruntime/build && \
python3 /onnxruntime/tools/ci_build/build.py --build_dir /onnxruntime/build --config Release --build_shared_lib --skip_submodule_sync --build_wheel --parallel --use_openvino ${TARGET_DEVICE} && \
pip3 install /onnxruntime/build/Release/dist/onnxruntime-*linux_x86_64.whl && \
rm -rf /onnxruntime
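
After this image builds, a quick smoke test (not part of the PR; it assumes the image was tagged onnxruntime as in the README instructions below, and uses the standard onnxruntime Python API) is to import the freshly installed wheel inside the container:
```
# Prints the installed wheel's version and the device the build targets.
docker run --rm onnxruntime python3 -c "import onnxruntime as rt; print(rt.__version__, rt.get_device())"
```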
108 changes: 90 additions & 18 deletions dockerfiles/README.md
@@ -83,24 +83,95 @@
## OpenVINO Version (Preview)
#### Ubuntu 16.04, Python Bindings

1. Build the docker image from the Dockerfile in this repository.
```
# If you have a Linux machine, preface this command with "sudo"

docker build -t onnxruntime-openvino -f Dockerfile.openvino .
```
To use GPU_FP32:
```
docker build -t onnxruntime-openvino --build-arg TARGET_DEVICE=GPU_FP32 -f Dockerfile.openvino .
```

2. Run the Docker image

```
# If you have a Linux machine, preface this command with "sudo"

docker run -it onnxruntime-openvino
```
1. Build the onnxruntime image for any of the supported accelerators as described below

Retrieve your docker image in one of the following ways.

- To build the docker image, download the OpenVINO online installer version 2018 R5.0.1 from [here](https://software.intel.com/en-us/openvino-toolkit/choose-download), copy the OpenVINO tar file into the same directory as the Dockerfile, and build the image. The online installer is only about 16 MB, and the components needed for the accelerators are listed in the dockerfile. Passing the DEVICE argument enables onnxruntime for that particular device. You can also pass the ONNXRUNTIME_REPO and ONNXRUNTIME_BRANCH arguments to build from a particular repo and branch (see the example after this list); the defaults are the https://github.com/microsoft/onnxruntime repo and the master branch.
```
docker build -t onnxruntime --build-arg DEVICE=$DEVICE .
```
- Pull the official image from DockerHub.
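
For instance, to build for the integrated GPU from a fork and branch of your own (the fork URL and branch name below are placeholders, not values from this PR):
```
docker build -t onnxruntime --build-arg DEVICE=GPU_FP32 --build-arg ONNXRUNTIME_REPO=https://github.com/<your-fork>/onnxruntime --build-arg ONNXRUNTIME_BRANCH=<your-branch> .
```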


2. DEVICE: Specifies the hardware target for building the OpenVINO Execution Provider. The options for the different Intel target devices are listed below.

| Device Option | Target Device |
| --------- | -------- |
| <code>CPU_FP32</code> | Intel<sup>&reg;</sup> CPUs |
| <code>GPU_FP32</code> | Intel<sup>&reg;</sup> Integrated Graphics |
| <code>GPU_FP16</code> | Intel<sup>&reg;</sup> Integrated Graphics |
| <code>MYRIAD_FP16</code> | Intel<sup>&reg;</sup> Movidius<sup>TM</sup> USB sticks |
| <code>VAD-R_FP16</code> | Intel<sup>&reg;</sup> Vision Accelerator Design based on Movidius<sup>TM</sup> MyriadX VPUs |

## CPU Version

1. Retrieve your docker image in one of the following ways.

- Build the docker image from the Dockerfile in this repository.

```
docker build -t onnxruntime-cpu --build-arg DEVICE=CPU_FP32 --network host .
```
- Pull the official image from DockerHub.
```
# Will be available with next release
```
2. Run the docker image
```
docker run -it onnxruntime-cpu
```

## GPU Version

1. Retrieve your docker image in one of the following ways.
- Build the docker image from the Dockerfile in this repository.
```
docker build -t onnxruntime-gpu --build-arg DEVICE=GPU_FP32 --network host .
```
- Pull the official image from DockerHub.
```
# Will be available with next release
```

2. Run the docker image
```
docker run -it --device /dev/dri:/dev/dri onnxruntime-gpu:latest
```
## Myriad VPU Accelerator Version

1. Retrieve your docker image in one of the following ways.
- Build the docker image from the Dockerfile in this repository.
```
docker build -t onnxruntime-myriad --build-arg DEVICE=MYRIAD_FP16 --network host .
```
- Pull the official image from DockerHub.
```
# Will be available with next release
```
2. Install the Myriad rules and drivers on the host machine as described [here](https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_linux.html#additional-NCS-steps)
3. Run the docker image, mounting the device drivers
```
docker run -it --network host --privileged -v /dev:/dev onnxruntime-myriad:latest
```
## VAD-R Accelerator Version

1. Retrieve your docker image in one of the following ways.
- Build the docker image from the Dockerfile in this repository.
```
docker build -t onnxruntime-vadr --build-arg DEVICE=VAD-R_FP16 --network host .
```
- Pull the official image from DockerHub.
```
# Will be available with next release
```
2. Install the HDDL drivers on the host machine as described [here](https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_linux_ivad_vpu.html)
3. Run the docker image, mounting the device drivers
```
docker run -it --mount type=bind,source=/var/tmp,destination=/var/tmp --device /dev/ion:/dev/ion onnxruntime-vadr:latest
```
## ONNX Runtime Server (Preview)
#### Ubuntu 16.04

@@ -120,3 +191,4 @@
```
curl -X POST -d "@request.json" -H "Content-Type: application/json" http://0.0.0.0:{your_local_port}/v1/models/mymodel/versions/3:predict
```
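
For reference, a minimal request.json for the call above might look like the sketch below. This is an illustration of the server's JSON predict format as the author understands it; the input name, dims, dataType code (1 = float), and output name are placeholders, not values from this repo:
```
{
  "inputs": {
    "input_0": {
      "dims": ["1", "3", "224", "224"],
      "dataType": 1,
      "rawData": "<base64-encoded tensor bytes>"
    }
  },
  "outputFilter": ["output_0"]
}
```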