This repository has been archived by the owner on Jan 3, 2023. It is now read-only.

Merge pull request #3 from NervanaSystems/master
Merge upstream/master into release
Tomasz Dołbniak authored Aug 6, 2019
2 parents 0951f53 + 16087f3 commit cba9e61
Showing 602 changed files with 23,496 additions and 5,947 deletions.
5 changes: 4 additions & 1 deletion .github/ISSUE_TEMPLATE/bug-performance-issue.md
@@ -7,6 +7,9 @@ about: Use this template for reporting a bug or a performance issue.
**Describe the bug**
A clear and concise description of what the bug is.

**Urgency**
If there are particularly important use cases blocked by this issue, or strict project-related timelines, please share more information and dates. If there are no hard deadlines, please specify 'none'.

**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- ONNX Runtime installed from (source or binary):
@@ -27,4 +30,4 @@ A clear and concise description of what you expected to happen.
If applicable, add screenshots to help explain your problem.

**Additional context**
Add any other context about the problem here.
Add any other context about the problem here. If the issue is about a particular model, please share the model details as well to facilitate debugging.
11 changes: 10 additions & 1 deletion .gitmodules
@@ -9,7 +9,7 @@
url = https://github.com/onnx/onnx
[submodule "cmake/external/tvm"]
path = cmake/external/tvm
url = https://github.com/dmlc/tvm.git
url = https://github.com/microsoft/onnxruntime-tvm.git
[submodule "cmake/external/date"]
path = cmake/external/date
url = https://github.com/HowardHinnant/date.git
@@ -32,3 +32,12 @@
[submodule "cmake/external/eigen"]
path = cmake/external/eigen
url = https://github.com/eigenteam/eigen-git-mirror.git
[submodule "cmake/external/grpc"]
path = cmake/external/grpc
url = https://github.com/grpc/grpc
[submodule "cmake/external/DNNLibrary"]
path = cmake/external/DNNLibrary
url = https://github.com/JDAI-CV/DNNLibrary
[submodule "cmake/external/spdlog"]
path = cmake/external/spdlog
url = https://github.com/gabime/spdlog.git
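Because this change re-points the tvm submodule and adds three new submodules, existing clones need their submodule state refreshed. A minimal sketch, assuming it is run from the repository root:
```
# Re-read .gitmodules and rewrite the recorded remote URLs (picks up the new tvm URL)
git submodule sync --recursive
# Fetch and check out the pinned commits, initializing the new grpc, DNNLibrary, and spdlog submodules
git submodule update --init --recursive
```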
49 changes: 34 additions & 15 deletions BUILD.md
@@ -1,4 +1,5 @@
# Build ONNX Runtime
Dockerfiles are available [here](https://github.com/microsoft/onnxruntime/tree/master/tools/ci_build/github/linux/docker) to help you get started.
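For example, a development image can be built and entered as below; the Dockerfile name here is illustrative, so substitute one that actually exists in the linked directory:
```
# Hypothetical Dockerfile name -- pick a real one from tools/ci_build/github/linux/docker
docker build -t onnxruntime-dev -f Dockerfile.ubuntu .
docker run -it --rm onnxruntime-dev /bin/bash
```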

## Supported architectures

@@ -12,7 +13,7 @@

| OS | Supports CPU | Supports GPU| Notes |
|-------------|:------------:|:------------:|------------------------------------|
|Windows 10 | YES | YES |Must use VS 2017 or the latest VS2015|
|Windows 10 | YES | YES | VS2015 (latest update) through VS2019 are supported |
|Windows 10 <br/> Subsystem for Linux | YES | NO | |
|Ubuntu 16.x | YES | YES | Also supported on ARM32v7 (experimental) |

@@ -29,36 +30,51 @@ OS/Compiler Matrix:

The ONNX Runtime Python binding only supports Python 3.5, 3.6, and 3.7.

## Build
## Getting Started
You may either get a prebuilt onnxruntime package from nuget.org, or build it yourself using the following steps:
1. Checkout the source tree:
```
git clone --recursive https://github.com/Microsoft/onnxruntime
cd onnxruntime
```
2. Install CMake 3.13 or newer from https://cmake.org/download/.
3. (optional) Install protobuf 3.6.1 from source code (cmake/external/protobuf). CMake flag protobuf\_BUILD\_SHARED\_LIBS must be turned OFF on Windows and turned ON on Linux. After the installation, you should have the 'protoc' executable in your PATH. On Linux it is recommended to run `ldconfig` to make sure protobuf libraries are found.
4. If you installed protobuf in a non-standard location, it helps on Linux builds to set the following env var:
`export CMAKE_ARGS="-DONNX_CUSTOM_PROTOC_EXECUTABLE=<full path to protoc>"` so the ONNX build can find it.
On Linux, also run `ldconfig <protobuf lib folder path>` so the linker can find the protobuf libraries.

On Windows:
3. (optional) Install protobuf 3.6.1 from source code (cmake/external/protobuf). CMake flag protobuf\_BUILD\_SHARED\_LIBS must be turned OFF. After the installation, you should have the 'protoc' executable in your PATH.
4. (optional) Install onnx from source code (cmake/external/onnx)
```
set ONNX_ML=1
python3 setup.py bdist_wheel
pip3 install --upgrade dist/*.whl
```
5. Run `build.bat --config RelWithDebInfo --build_shared_lib --parallel`.
On Linux:
3. (optional) Install protobuf 3.6.1 from source code (cmake/external/protobuf). CMake flag protobuf\_BUILD\_SHARED\_LIBS must be turned ON. After the installation, you should have the 'protoc' executable in your PATH. It is recommended to run `ldconfig` to make sure protobuf libraries are found.
4. If you installed protobuf in a non-standard location, set `export CMAKE_ARGS="-DONNX_CUSTOM_PROTOC_EXECUTABLE=<full path to protoc>"` so the ONNX build can find it. Also run `ldconfig <protobuf lib folder path>` so the linker can find the protobuf libraries (see the sketch after this list).
5. (optional) Install onnx from source code (cmake/external/onnx)
```
export ONNX_ML=1
python3 setup.py bdist_wheel
pip3 install --upgrade dist/*.whl
```
6. Run `./build.sh --config RelWithDebInfo --build_wheel` for Linux (or `build.bat --config RelWithDebInfo --build_wheel` for Windows). Upon a successful build, you should find the wheel under the `dist` folder.
6. Run `./build.sh --config RelWithDebInfo --build_shared_lib --parallel`.
The build script runs all unit tests by default for native builds, and skips tests by default for cross-compiled builds.
The complete list of build options can be found by running `./build.sh --help` (or `build.bat --help`).
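As referenced in the Linux steps above, here is a minimal sketch of building the bundled protobuf from source and pointing the ONNX build at it; it assumes the default `/usr/local` install prefix:
```
# Build and install protobuf 3.6.1 from the bundled sources (Linux)
cd cmake/external/protobuf/cmake
cmake . -Dprotobuf_BUILD_SHARED_LIBS=ON -DCMAKE_BUILD_TYPE=Release
make -j$(nproc) && sudo make install
sudo ldconfig /usr/local/lib   # let the linker find the new protobuf libraries
# Point the ONNX build at the custom protoc
export CMAKE_ARGS="-DONNX_CUSTOM_PROTOC_EXECUTABLE=/usr/local/bin/protoc"
```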
## Build x86
1. For Windows, just add --x86 argument when launching build.bat
2. For Linux, it must be built out of a x86 os, --x86 argument also needs be specified to build.sh
- For Windows, just add the `--x86` argument when launching build.bat.
- For Linux, the build must be done on an x86 OS, and the `--x86` argument must also be passed to build.sh (see the sketch below).
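For example, combining `--x86` with the configuration used elsewhere in this guide:
```
# Windows, from a Developer Command Prompt
build.bat --config RelWithDebInfo --x86
# Linux, on an x86 host
./build.sh --config RelWithDebInfo --x86
```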
## Build ONNX Runtime Server on Linux
1. In the ONNX Runtime root folder, run `./build.sh --config RelWithDebInfo --build_server --use_openmp --parallel`
1. ONNX Runtime Server (and only the server) requires Go to build, because it builds BoringSSL.
See https://golang.org/doc/install for installation instructions.
2. In the ONNX Runtime root folder, run `./build.sh --config RelWithDebInfo --build_server --use_openmp --parallel`
3. ONNX Runtime Server supports sending logs to the [rsyslog](https://www.rsyslog.com/) daemon. To enable it, please build with an additional parameter: `--cmake_extra_defines onnxruntime_USE_SYSLOG=1`. The build command will look like this: `./build.sh --config RelWithDebInfo --build_server --use_openmp --parallel --cmake_extra_defines onnxruntime_USE_SYSLOG=1` (both variants are sketched below).
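Putting the steps above together, the two server build variants look like this:
```
# Plain server build (requires Go for BoringSSL)
./build.sh --config RelWithDebInfo --build_server --use_openmp --parallel
# Server build with rsyslog logging enabled
./build.sh --config RelWithDebInfo --build_server --use_openmp --parallel --cmake_extra_defines onnxruntime_USE_SYSLOG=1
```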
## Build/Test Flavors for CI
@@ -74,6 +90,9 @@ The complete list of build options can be found by running `./build.sh --help` (or `build.bat --help`).
## Additional Build Flavors
The complete list of build flavors can be seen by running `./build.sh --help` or `./build.bat --help`. Here are some common flavors.
### Windows CMake Generator
The default generator on Windows is Visual Studio 2017, but you can also use the newer Visual Studio 2019 by passing `--cmake_generator "Visual Studio 16 2019"` to build.bat.
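For example, a sketch of a VS2019 build, reusing the flags from the Getting Started steps:
```
# Select the Visual Studio 2019 generator explicitly
build.bat --config RelWithDebInfo --build_shared_lib --parallel --cmake_generator "Visual Studio 16 2019"
```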
### Windows CUDA Build
ONNX Runtime supports CUDA builds. You will need to download and install [CUDA](https://developer.nvidia.com/cuda-toolkit) and [cuDNN](https://developer.nvidia.com/cudnn).
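A typical invocation looks like the sketch below; the `--use_cuda`, `--cuda_home`, and `--cudnn_home` flags are assumptions based on the build script's `--help` output, and the install paths are illustrative:
```
# Assumed flags: point the build at local CUDA and cuDNN installs (paths are examples)
build.bat --config RelWithDebInfo --use_cuda ^
  --cuda_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0" ^
  --cudnn_home "C:\tools\cudnn"
```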
@@ -142,7 +161,7 @@ ONNX Runtime supports OpenVINO Execution Provider to enable deep learning inference
The OpenVINO Execution Provider can be built using the following commands:
- Install the OpenVINO 2018 R5.0.1 release along with its dependencies from ([https://software.intel.com/en-us/openvino-toolkit](https://software.intel.com/en-us/openvino-toolkit)).
- Currently supported and validated on two versions of OpenVINO: OpenVINO 2018 R5.0.1 and OpenVINO 2019 R1.1 (recommended). Install the OpenVINO release along with its dependencies from [https://software.intel.com/en-us/openvino-toolkit](https://software.intel.com/en-us/openvino-toolkit).
- Install the model optimizer prerequisites for ONNX by running
<code><openvino_install_dir>/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_onnx.sh</code>
@@ -151,11 +170,11 @@ The OpenVINO Execution Provider can be built using the following commands:
<code>source setupvars.sh</code>
- To configure Intel<sup>®</sup> Processor Graphics(GPU), please follow the installation steps from (https://docs.openvinotoolkit.org/2018_R5/_docs_install_guides_installing_openvino_linux.html#GPU-steps)
- To configure Intel<sup>®</sup> Processor Graphics (GPU), please follow the installation steps from (https://docs.openvinotoolkit.org/2019_R1.1/_docs_install_guides_installing_openvino_linux.html#additional-GPU-steps)
- To configure Intel<sup>®</sup> Movidius<sup>TM</sup> USB, please follow the getting started guide from (https://docs.openvinotoolkit.org/2018_R5/_docs_install_guides_installing_openvino_linux.html#Movidius-steps)
- To configure Intel<sup>®</sup> Movidius<sup>TM</sup> USB, please follow the getting started guide from (https://docs.openvinotoolkit.org/2019_R1.1/_docs_install_guides_installing_openvino_linux.html#additional-NCS-steps)
- To configure Intel<sup>®</sup> Vision Accelerator Design based on 8 Movidius<sup>TM</sup> MyriadX VPUs, please follow the configuration guide from (https://docs.openvinotoolkit.org/2018_R5/_docs_install_guides_installing_openvino_linux.html#Vision-Accelerator-Design-steps)
- To configure Intel<sup>®</sup> Vision Accelerator Design based on 8 Movidius<sup>TM</sup> MyriadX VPUs, please follow the configuration guide from (https://docs.openvinotoolkit.org/2019_R1.1/_docs_install_guides_installing_openvino_linux.html#install-VPU)
- Build ONNX Runtime using the command below (see the sketch after the device table).
@@ -172,7 +191,7 @@ The OpenVINO Execution Provider can be built using the following commands:
| <code>GPU_FP32</code> | Intel<sup>®</sup> Integrated Graphics |
| <code>GPU_FP16</code> | Intel<sup>®</sup> Integrated Graphics with FP16 quantization of models |
| <code>MYRIAD_FP16</code> | Intel<sup>®</sup> Movidius<sup>TM</sup> USB sticks | 
| <code>VAD-R_FP16</code> | Intel<sup>®</sup> Vision Accelerator Design based on 8 Movidius<sup>TM</sup> MyriadX VPUs |
| <code>VAD-M_FP16</code> | Intel<sup>®</sup> Vision Accelerator Design based on 8 Movidius<sup>TM</sup> MyriadX VPUs |
For more information on the OpenVINO Execution Provider's ONNX layer support, topology support, and supported Intel hardware, please refer to the document OpenVINO-ExecutionProvider.md in <code>$onnxruntime_root/docs/execution_providers</code>
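The build command itself is elided from this hunk; a sketch, assuming the `--use_openvino` flag takes one of the device IDs from the table above:
```
# Example: build with the OpenVINO Execution Provider targeting integrated graphics
./build.sh --config RelWithDebInfo --use_openvino GPU_FP32
```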