[JS API] Fix inference for outputs with missing names #28665
base: master
Conversation
Force-pushed from b7da1a7 to 93d1fff
std::string name;
if (_output.get_names().empty()) {
    name = _output.get_node()->get_name();
} else {
    name = _output.get_any_name();
}
I'm not sure we want to replace the default behavior. It would be better to expose a get_names method so the user can perform this check themselves.
I agree that it is not the best solution; I would prefer to use the core implementation of Output.get_any_name(). I hope that will be done in an upcoming release.
But now we have this code in samples (hello-world.nnb):
const result = inferRequest.infer([tensor]);
const outputData = result[outputLayer].data;
If we don't add this change, the current API will be broken for models with unnamed outputs.
I believe we need to choose one of these options:
- Change the API for infer result, to avoid cases with missing or duplicate names
- Use this fix until it is fixed in the core
- Change our tutorial to avoid this use case and mark this functionality as deprecated.
The API won't be broken; this fix still allows getting results by name, and that's better than an exception.
Yes, I agree that we should:
- add a detailed explanation to the documentation about what users can expect when decoding infer results
- create a task in the backlog to change this behavior once the fix lands in the core
- leave comments in the samples with a documentation reference
Consider applying this behaviour only to model outputs. Generating a tensor name this way can produce the same name for different outputs.
std::string name;
if (_output.get_names().empty()) {
    name = _output.get_node()->get_name();
} else {
    name = _output.get_any_name();
}
The same here
outputs_obj.Set(node.get_any_name(), TensorWrap::wrap(info.Env(), new_tensor));
std::string name;
if (node.get_names().empty()) {
    name = node.get_node()->get_name();
There can be a name collision here. I'm not sure node names are guaranteed to be unique across the possible output nodes.
Hi @praasz, could you please tell us: can more than one output node have the same name?
The ov::Node has a name and a friendly_name. The name is generated automatically as a unique name; the friendly_name can be set by the user (it falls back to name if not set). There is a transformation which checks and resolves node name issues: ResolveNameCollisions.
The tensor names are optional and not validated.
The proposal to use node names as tensor names introduces a name that does not exist in the model, which can be fine for the JS API (it would be good to indicate to the user that this name is not a real OV model name).
In the case of model outputs (Result nodes) it should be fine, as a Result has only one output.
In the case of a node like Split, which can have many outputs, this solution will give each output the same "virtual" name.
There can also be a case where the user sets a name inside the model that matches this generated virtual name.
Solutions to consider:
- don't use tensor names as keys for the mapping (use the port id instead)
- map only named tensors and skip unnamed ones (warn the user that some tensors were not mapped by name)
- map by names and create virtual names, but indicate to the user that these names are not real and do not exist in the model
std::string name;
if (_output.get_names().empty()) {
    name = _output.get_node()->get_name();
} else {
    name = _output.get_any_name();
}
Suggested change: replace

std::string name;
if (_output.get_names().empty()) {
    name = _output.get_node()->get_name();
} else {
    name = _output.get_any_name();
}

with a helper:

std::string js_model_output_name(const ov::Output<const ov::Node>& output) {
    return output.get_names().empty() ? output.get_node()->get_name() : output.get_any_name();
}
I suggest adding a JS-internal helper to get this name; if the behavior changes later, it will be easier to replace or update in one place.
Also take into consideration that this works only for an OV output belonging to a node with a single output; for nodes with many outputs, the same tensor name will be generated.
Force-pushed from ccfeeb9 to e553561
@@ -138,15 +138,33 @@ Napi::Value InferRequestWrap::get_output_tensor(const Napi::CallbackInfo& info)
    return TensorWrap::wrap(info.Env(), tensor);
}

std::map<std::string, ov::Tensor> get_js_infer_result(ov::InferRequest* infer_request) {
    auto model_outputs = infer_request->get_compiled_model().outputs();
    std::map<std::string, ov::Tensor> outputs;
It would be better here to use the model output ID (a number) so that all outputs are shown. If a model has 3 outputs but outputs 2 and 3 share the same tensor (same names), then this map shows only two outputs even though the model has three.
Details:
- Output.getAnyName() returns the unique name of the node if the tensor has no name.
- InferRequest.infer(...) can be used to get the result if the tensor has no name.
Tickets: