Add examples for JS (#43)
* Add examples for JS

* resolve comments
fs-eire authored Oct 20, 2021
1 parent d2a9555 commit e47d00b
Showing 5 changed files with 181 additions and 0 deletions.
4 changes: 4 additions & 0 deletions js/README.md
@@ -25,3 +25,7 @@ Click the links for the README of each example.
* [API usage - Tensor](api-usage_tensor) - a demonstration of basic usage of `Tensor`.

* [API usage - InferenceSession](api-usage_inference-session) - a demonstration of basic usage of `InferenceSession`.

* [API usage - SessionOptions](api-usage_session-options) - a demonstration of how to configure creation of an `InferenceSession` instance.

* [API usage - `ort.env` flags](api-usage_ort-env-flags) - a demonstration of how to configure a set of global flags.
2 changes: 2 additions & 0 deletions js/api-usage_inference-session/README.md
@@ -10,6 +10,8 @@ This example is a demonstration of basic usage of `InferenceSession`.

For more information about `SessionOptions` and `RunOptions`, please refer to other examples.

See also [`InferenceSession.create`](https://onnxruntime.ai/docs/api/js/interfaces/InferenceSessionFactory.html#create) and the [`InferenceSession` interface](https://onnxruntime.ai/docs/api/js/interfaces/InferenceSession.html) in the API reference documentation.
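The create-then-run flow can be sketched as follows. This is an illustrative sketch, not code from this example: the `ort` module, model path, and input tensor are assumptions supplied by the caller and must be adapted to a real model.

```javascript
// Hedged sketch of the typical create-then-run flow. Assumes an `ort` module
// (onnxruntime-web or onnxruntime-node) is passed in; modelPath and
// inputTensor are hypothetical placeholders.
async function runModel(ort, modelPath, inputTensor) {
  // create a session from a model file path (Node.js) or URL/buffer (web)
  const session = await ort.InferenceSession.create(modelPath);
  // feed the input under the model's first input name
  const feeds = { [session.inputNames[0]]: inputTensor };
  // run inference; results are keyed by output name
  const results = await session.run(feeds);
  return results[session.outputNames[0]];
}
```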

## Usage

```sh
73 changes: 73 additions & 0 deletions js/api-usage_ort-env-flags/README.md
@@ -0,0 +1,73 @@
# API usage - `ort.env` flags

## Summary

This example is a demonstration of how to configure global flags via `ort.env`.

The following are some example code snippets:

```js
// enable DEBUG flag
ort.env.debug = true;

// set global logging level
ort.env.logLevel = 'info';
```

See also the [`Env` interface](https://onnxruntime.ai/docs/api/js/interfaces/Env.html) in the API reference documentation.

### WebAssembly flags (ONNX Runtime Web)

WebAssembly flags customize the behavior of the WebAssembly execution provider.

The following are some example code snippets:

```js
// allow up to 2 threads for multi-threaded WebAssembly execution
// may fall back to single-thread if multi-threading is not available in the current browser
ort.env.wasm.numThreads = 2;

// force single-thread for WebAssembly
ort.env.wasm.numThreads = 1;

// enable worker-proxy feature for WebAssembly
// this feature allows model inference to run asynchronously in a web worker.
ort.env.wasm.proxy = true;

// override path of wasm files - using a prefix
// in this example, ONNX Runtime Web will try to load files from https://example.com/my-example/ort-wasm*.wasm
ort.env.wasm.wasmPaths = 'https://example.com/my-example/';

// override path of wasm files - for each file
ort.env.wasm.wasmPaths = {
'ort-wasm.wasm': 'https://example.com/my-example/ort-wasm.wasm',
'ort-wasm-simd.wasm': 'https://example.com/my-example/ort-wasm-simd.wasm',
'ort-wasm-threaded.wasm': 'https://example.com/my-example/ort-wasm-threaded.wasm',
'ort-wasm-simd-threaded.wasm': 'https://example.com/my-example/ort-wasm-simd-threaded.wasm'
};
```

See also [WebAssembly flags](https://onnxruntime.ai/docs/api/js/interfaces/Env.WebAssemblyFlags.html) in the API reference documentation.

### WebGL flags (ONNX Runtime Web)

WebGL flags customize the behavior of the WebGL execution provider.

The following are some example code snippets:

```js
// enable packed texture; this helps improve inference performance for some models
ort.env.webgl.pack = true;
```

See also [WebGL flags](https://onnxruntime.ai/docs/api/js/interfaces/Env.WebGLFlags.html) in the API reference documentation.

### SessionOptions vs. ort.env

Both `SessionOptions` and `ort.env` allow specifying configurations for inference behavior. The key difference is that `SessionOptions` applies to a single inference session instance, while `ort.env` applies globally.

See also [API usage - `SessionOptions`](../api-usage_session-options) for an example of using `SessionOptions`.
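To make the contrast concrete, here is a hedged sketch: the option values are illustrative, and the `ort` calls are shown as comments because they require the runtime and a real model file.

```javascript
// Per-session configuration: a plain object passed to InferenceSession.create(),
// affecting only that one session.
const sessionOptions = {
  executionProviders: ['wasm'],
  logSeverityLevel: 0           // verbose logging for this session only
};
// const session = await ort.InferenceSession.create('model.onnx', sessionOptions);

// Global configuration: set once on `ort.env`, affecting every session.
// ort.env.debug = true;
// ort.env.logLevel = 'verbose';
```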

## Usage

The code snippets shown above cannot run standalone. Put the code into one of the "Quick Start" examples and try it out.
101 changes: 101 additions & 0 deletions js/api-usage_session-options/README.md
@@ -0,0 +1,101 @@
# API usage - SessionOptions

## Summary

This example is a demonstration of how to configure an `InferenceSession` instance using `SessionOptions`.

A `SessionOptions` is an object whose properties control the creation of an `InferenceSession` instance. See the [type declaration](https://github.com/microsoft/onnxruntime/blob/master/js/common/lib/inference-session.ts) for the schema definition. `SessionOptions` is passed to `InferenceSession.create()` as the last parameter:

```js
const mySession = await InferenceSession.create(..., mySessionOptions);
```

### Execution providers

An [execution provider](https://onnxruntime.ai/docs/reference/execution-providers/) (EP) defines how operators are resolved to specific kernel implementations. The following table lists the EPs supported in different environments:

| EP name | Hardware | available in |
| ------- | ----------------- | --------------------------------- |
| `cpu` | CPU (default CPU) | onnxruntime-node |
| `cuda` | GPU (NVIDIA CUDA) | onnxruntime-node |
| `dml`   | GPU (DirectML)    | onnxruntime-node (Windows)        |
| `wasm` | CPU (WebAssembly) | onnxruntime-web, onnxruntime-node |
| `webgl` | GPU (WebGL) | onnxruntime-web |

Execution providers are specified via `sessionOptions.executionProviders`. Multiple EPs can be specified, and the first available one is used.

The following are some example code snippets:

```js
// [Node.js binding example] Use CPU EP.
const sessionOption = { executionProviders: ['cpu'] };
```

```js
// [Node.js binding example] Use CUDA EP.
const sessionOption = { executionProviders: ['cuda'] };
```

```js
// [Node.js binding example] Use CUDA EP, specifying device ID.
const sessionOption = {
executionProviders: [
{
name: 'cuda',
deviceId: 0
}
]
};
```

```js
// [Node.js binding example] Try multiple EPs using an execution provider list.
// The first successfully initialized one will be used: the CUDA EP if it is available, otherwise fall back to the CPU EP.
const sessionOption = { executionProviders: ['cuda', 'cpu'] };
```

```js
// [ONNX Runtime Web example] Use WebAssembly (CPU) EP.
const sessionOption = { executionProviders: ['wasm'] };
```

```js
// [ONNX Runtime Web example] Use WebGL EP.
const sessionOption = { executionProviders: ['webgl'] };
```

### Other common options

There are also other options available for all EPs.

The following are some example code snippets:

```js
// [Node.js binding example] Use CPU EP with single-thread and enable verbose logging.
const sessionOption = {
executionProviders: ['cpu'],
interOpNumThreads: 1,
intraOpNumThreads: 1,
logSeverityLevel: 0
};
```

```js
// [ONNX Runtime Web example] Use WebAssembly EP and enable profiling.
const sessionOptions = {
executionProviders: ['wasm'],
enableProfiling: true
};
```

See also the [`SessionOptions` interface](https://onnxruntime.ai/docs/api/js/interfaces/InferenceSession.SessionOptions.html) in the API reference documentation.

### SessionOptions vs. ort.env

Both `SessionOptions` and `ort.env` allow specifying configurations for inference behavior. The key difference is that `SessionOptions` applies to a single inference session instance, while `ort.env` applies globally.

See also [API usage - `ort.env` flags](../api-usage_ort-env-flags) for an example of using `ort.env`.
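The scoping difference can be sketched as follows; this is a hedged illustration where the values are arbitrary and the `ort` calls are commented out because they require the runtime and a model file.

```javascript
// Different sessions can use different SessionOptions...
const optionsA = { executionProviders: ['wasm'], intraOpNumThreads: 1 };
const optionsB = { executionProviders: ['webgl'] };
// const sessionA = await ort.InferenceSession.create('model.onnx', optionsA);
// const sessionB = await ort.InferenceSession.create('model.onnx', optionsB);

// ...but all sessions share the single global environment:
// ort.env.logLevel = 'warning';   // affects sessionA and sessionB alike
```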

## Usage

The code snippets shown above cannot run standalone. Put the code into one of the "Quick Start" examples and try it out.
1 change: 1 addition & 0 deletions js/api-usage_tensor/README.md
@@ -7,6 +7,7 @@ This example is a demonstration of basic usage of `Tensor`.
- `tensor-create.js`: In this example, we create tensors in different ways.
- `tensor-properties.js`: In this example, we get tensor properties from a Tensor object.

See also the [`Tensor` constructor](https://onnxruntime.ai/docs/api/js/interfaces/TensorConstructor.html) and the [`Tensor` interface](https://onnxruntime.ai/docs/api/js/interfaces/Tensor.html) in the API reference documentation.

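As a hedged sketch of the invariant behind tensor creation (the `ort.Tensor` call itself is shown in a comment since it requires the library; the shape arithmetic is plain JavaScript):

```javascript
// A tensor pairs a typed data array with a dims (shape) array;
// the data length must equal the product of the dims.
const dims = [2, 3];
const data = Float32Array.from([1, 2, 3, 4, 5, 6]);

const elementCount = dims.reduce((a, b) => a * b, 1);
if (elementCount !== data.length) {
  throw new Error('data length must match the product of dims');
}
// With the library: const tensor = new ort.Tensor('float32', data, dims);
```
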
## Usage

```sh
