Fix executor for different compilers (apache#8006)
* Fix executor for different compilers

At the moment, compiling this file with C++ compilers throws multiple errors; this change fixes them.
1. `tvm_model_t->run_func`, of type `TVMBackendPackedCFunc`, returns an `int`, which does not match the `tvm_crt_error_t` return type of `tvm_runtime_run`. Implicit conversion between the two is not accepted by many toolchains and throws errors, so the result is now cast explicitly.
2. The loop indices were of type `int`, while `model->num_input_tensors` and `model->num_output_tensors` were of type `uint32_t`. This signed/unsigned mismatch throws errors in many toolchains and can potentially cause incorrect comparisons.
3. C-style initialization of the tensors with the compound literal `(DLTensor){...}` is not supported by many C++ toolchains, which report a "non-trivial designated initializers not supported" error. Setting each field explicitly works in all cases, even though it reads a little less cleanly.

* changing type to size_t

* fix format for clang
rijulg authored and Trevor Morris committed Jun 17, 2021
1 parent cf5c5c0 commit 276e2aa
Showing 2 changed files with 19 additions and 22 deletions.
37 changes: 17 additions & 20 deletions src/runtime/crt/aot_executor/aot_executor.c
@@ -37,29 +37,26 @@ tvm_crt_error_t tvm_runtime_run(const tvm_model_t* model, void** inputs, void**
   TVMValue tvm_values[model->num_input_tensors + model->num_output_tensors];  // NOLINT
   int32_t tvm_typeids[model->num_input_tensors + model->num_output_tensors];  // NOLINT
 
-  for (int i = 0; i < model->num_input_tensors; i++) {
-    tensors[i] = (DLTensor){
-        .device = fake_device,
-        .data = inputs[i],
-        .shape = &fake_shape,
-        .ndim = fake_dims,
-        .byte_offset = 0,
-        .strides = NULL,
-    };
+  for (size_t i = 0; i < model->num_input_tensors; i++) {
+    tensors[i].device = fake_device;
+    tensors[i].data = inputs[i];
+    tensors[i].shape = &fake_shape;
+    tensors[i].ndim = fake_dims;
+    tensors[i].byte_offset = 0;
+    tensors[i].strides = NULL;
     tvm_values[i].v_handle = &tensors[i];
   }
 
-  for (int i = 0; i < model->num_output_tensors; i++) {
-    tensors[model->num_input_tensors + i] = (DLTensor){
-        .device = fake_device,
-        .data = outputs[i],
-        .shape = &fake_shape,
-        .ndim = fake_dims,
-        .byte_offset = 0,
-        .strides = NULL,
-    };
-    tvm_values[model->num_input_tensors + i].v_handle = &tensors[model->num_input_tensors + i];
+  for (size_t i = 0; i < model->num_output_tensors; i++) {
+    size_t j = model->num_input_tensors + i;
+    tensors[j].device = fake_device;
+    tensors[j].data = outputs[i];
+    tensors[j].shape = &fake_shape;
+    tensors[j].ndim = fake_dims;
+    tensors[j].byte_offset = 0;
+    tensors[j].strides = NULL;
+    tvm_values[j].v_handle = &tensors[j];
   }
 
-  return model->run_func(tvm_values, tvm_typeids, 0, NULL, 0, NULL);
+  return (tvm_crt_error_t)model->run_func(tvm_values, tvm_typeids, 0, NULL, 0, NULL);
 }
@@ -62,8 +62,8 @@ extern "C" {
  * model to the runtime.
  */
 typedef struct {
-  uint32_t num_input_tensors;     /** Number of expected input tensors */
-  uint32_t num_output_tensors;    /** Number of expected output tensors */
+  size_t num_input_tensors;       /** Number of expected input tensors */
+  size_t num_output_tensors;      /** Number of expected output tensors */
   TVMBackendPackedCFunc run_func; /** Generated model function, called through tvm_runtime_run */
 } tvm_model_t;

