
[core] Preserve friendly name and tensor names in PPP #23713

Merged
Changes from 27 commits
Commits
80 commits
f48fa29
Preserve friendly name and tensor names
praasz Mar 27, 2024
4010796
Update IR tests
praasz Mar 28, 2024
d884347
Update input_get_source_output test
praasz Mar 28, 2024
b9914bd
Restore friendly name handling in ConvertPrecision
praasz Mar 28, 2024
fa21ec7
Revert input_get_source_output test changes
praasz Mar 28, 2024
7565b79
Add _set_names_compatibility_mode to disable moving names in PPP
praasz Apr 12, 2024
8056bc8
Merge remote-tracking branch 'origin/master' into bugfix/fix-ppp-dere…
praasz May 17, 2024
3a26ac6
Add new descriptor ResultOutputTensor to link with input tensor.
praasz May 28, 2024
2d11ad4
Add dedicated Result output descriptor
praasz May 31, 2024
af7accf
Convert precision use same rule for friendly name as PPP
praasz May 31, 2024
a1e82fd
Merge remote-tracking branch 'origin/master' into bugfix/fix-ppp-dere…
praasz May 31, 2024
76a1155
Fix build issues
praasz May 31, 2024
7aea700
Merge branch 'master' into bugfix/fix-ppp-dereferences-input-tensor-n…
praasz Jun 3, 2024
50d8d69
Add Result description about output names
praasz Jun 3, 2024
d7e12f8
Merge branch 'master' into bugfix/fix-ppp-dereferences-input-tensor-n…
praasz Jun 5, 2024
bfb5331
Merge branch 'master' into bugfix/fix-ppp-dereferences-input-tensor-n…
praasz Jun 6, 2024
621b3f9
Merge branch 'master' into bugfix/fix-ppp-dereferences-input-tensor-n…
praasz Jun 11, 2024
9b1251c
Update expected name for `test_input_get_source_output`
praasz Jun 17, 2024
df87a79
Merge remote-tracking branch 'origin/master' into bugfix/fix-ppp-dere…
praasz Jun 17, 2024
edf0dc4
Merge branch 'master' into bugfix/fix-ppp-dereferences-input-tensor-n…
praasz Jul 8, 2024
19568ab
Merge branch 'master' into bugfix/fix-ppp-dereferences-input-tensor-n…
praasz Jul 10, 2024
65a2be4
Merge branch 'master' into bugfix/fix-ppp-dereferences-input-tensor-n…
mlukasze Jul 12, 2024
092041c
Fix get tensor names difference
praasz Jul 16, 2024
dfffe8f
Update code style
praasz Jul 16, 2024
44b5621
Remove commented code
praasz Jul 22, 2024
b799a38
Update pybind docstring
praasz Jul 23, 2024
a5ca244
Update src/bindings/python/src/pyopenvino/graph/preprocess/pre_post_p…
praasz Jul 23, 2024
596d93c
Update src/bindings/python/src/pyopenvino/graph/preprocess/pre_post_p…
praasz Jul 26, 2024
da0a322
Use pimpl in Tensor descriptor to hide implementation details from pu…
praasz Aug 7, 2024
9c3bf87
Use descriptor iface in ReverseShapeAndTypeInfer to update shapes
praasz Aug 7, 2024
6e50ee3
Don't use deprecated ctor for descriptor
praasz Aug 7, 2024
f53efb8
Export `TensorExtension::Hasher` and `TensorExtension::Equal`
praasz Aug 7, 2024
0a495a5
Update preprocess build of OutputInfo
praasz Aug 9, 2024
2b8d426
Fix preserve rt_info for Result
praasz Aug 9, 2024
e166772
Fix mo API tensor names test
praasz Aug 9, 2024
0c397ab
Update tensor descriptor compare in stateful to stateless tests
praasz Aug 19, 2024
bbdac8d
Update PPP output info dump
praasz Aug 20, 2024
2dd83db
Update check output name in mo_convert tests
praasz Aug 21, 2024
6035c91
Correct add tensor names to Result
praasz Aug 21, 2024
0c31ebe
Test use only Result names if exists
praasz Aug 22, 2024
f629e83
Update set result tensor names for v10
praasz Aug 23, 2024
76d58b1
Result tensor uses shared tensor names
praasz Aug 23, 2024
27cc72b
Use pimpl in Tensor descriptor to hide implementation details from pu…
praasz Aug 7, 2024
67dd4dc
Use descriptor iface in ReverseShapeAndTypeInfer to update shapes
praasz Aug 7, 2024
deb697d
Don't use deprecated ctor for descriptor
praasz Aug 7, 2024
0901acf
Export `TensorExtension::Hasher` and `TensorExtension::Equal`
praasz Aug 7, 2024
82f7fc2
Update preprocess build of OutputInfo
praasz Aug 9, 2024
5f20ff1
Fix preserve rt_info for Result
praasz Aug 9, 2024
af23baf
Fix mo API tensor names test
praasz Aug 9, 2024
7f7b335
Update tensor descriptor compare in stateful to stateless tests
praasz Aug 19, 2024
322170a
Update PPP output info dump
praasz Aug 20, 2024
50f3675
Update check output name in mo_convert tests
praasz Aug 21, 2024
0dde418
Correct add tensor names to Result
praasz Aug 21, 2024
3ba9369
Test use only Result names if exists
praasz Aug 22, 2024
1f876d6
Update set result tensor names for v10
praasz Aug 23, 2024
1b7eedf
Result tensor uses shared tensor names
praasz Aug 23, 2024
fa53642
Merge branch 'master' into feature/specific-tensor-descriptor-for-res…
praasz Sep 11, 2024
d8e3592
Merge branch 'feature/specific-tensor-descriptor-for-results' of gith…
praasz Sep 11, 2024
fa8c5a9
Move shared tensor descriptor to separate file
praasz Sep 13, 2024
2fcb0a9
Add missing includes
praasz Sep 13, 2024
b7e0d13
Update compare result tensor for stateful to stateless transformation…
praasz Sep 16, 2024
0b2e37b
Merge branch 'master' into feature/specific-tensor-descriptor-for-res…
praasz Sep 17, 2024
b004cf9
When Result tensor set will hide names from Result's input
praasz Sep 17, 2024
196c848
Add doxy comments
praasz Sep 17, 2024
c08d827
Apply review comments
praasz Oct 3, 2024
a46bc4a
Merge remote-tracking branch 'origin/feature/specific-tensor-descript…
praasz Oct 4, 2024
7268dc3
Merge branch 'master' into feature/specific-tensor-descriptor-for-res…
praasz Oct 7, 2024
59cf786
Merge branch 'master' into bugfix/fix-ppp-dereferences-input-tensor-n…
praasz Oct 8, 2024
f757400
Merge branch 'master' into bugfix/fix-ppp-dereferences-input-tensor-n…
praasz Oct 10, 2024
29e1324
Review suggestions
praasz Oct 15, 2024
9446b86
Merge remote-tracking branch 'origin/master' into feature/specific-te…
praasz Oct 24, 2024
73631e3
Merge remote-tracking branch 'origin/master' into feature/specific-te…
praasz Nov 13, 2024
cc8a3b7
Enable UnrollIf tests on CPU
praasz Nov 14, 2024
17b9129
Merge remote-tracking branch 'origin/feature/specific-tensor-descript…
praasz Nov 15, 2024
99261db
Fix convert precision after merge
praasz Nov 18, 2024
32bb541
Remove set_names_compatibility_mode from PPP public API
praasz Nov 18, 2024
ba63838
Use const auto in added tests for PPP
praasz Nov 18, 2024
f2b4c7e
Merge remote-tracking branch 'origin/master' into bugfix/fix-ppp-dere…
praasz Dec 12, 2024
e715003
Merge remote-tracking branch 'origin/master' into bugfix/fix-ppp-dere…
praasz Dec 13, 2024
a8ecea8
Revert not required changes
praasz Dec 13, 2024
@@ -252,7 +252,7 @@ static void regclass_graph_InputTensorInfo(py::module m) {
},
py::arg("layout"),
R"(
Set layout for input tensor info
Set layout for input tensor info
:param layout: layout to be set
:type layout: Union[str, openvino.runtime.Layout]
)");
@@ -366,10 +366,24 @@ static void regclass_graph_OutputTensorInfo(py::module m) {
},
py::arg("layout"),
R"(
Set layout for output tensor info
Set layout for output tensor info

:param layout: layout to be set
:type layout: Union[str, openvino.runtime.Layout]
)");

info.def(
"_set_names_compatibility_mode",
[](ov::preprocess::OutputTensorInfo& self, const bool compatibility_mode) {
return &self.set_names_compatibility_mode(compatibility_mode);
},
py::arg("compatibility_mode"),
R"(
Set the names compatibility mode

:param compatibility_mode: Mode to be set: True enables compatibility mode, False disables it
:type compatibility_mode: bool
)");
}
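As a rough illustration of what the new `_set_names_compatibility_mode` flag controls, here is a toy Python model (not the OpenVINO API; the helper and its names are hypothetical) of where tensor names end up when PPP inserts a new node in front of a model output:

```python
def place_tensor_names(names, compatibility_mode):
    """Toy model of tensor-name placement after PPP inserts an output node.

    Returns (names_on_original_output, names_on_inserted_node_output).
    """
    if compatibility_mode:
        # legacy behaviour: the names move to the node inserted by PPP
        return set(), set(names)
    # new behaviour: the names stay on the original output
    return set(names), set()
```

Under this sketch, disabling compatibility mode is what lets the PR preserve the original tensor names.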

static void regclass_graph_InputInfo(py::module m) {
@@ -419,7 +433,8 @@ static void regclass_graph_OutputModelInfo(py::module m) {
},
py::arg("layout"),
R"(
Set layout for output model info
Set layout for output model info

:param layout: layout to be set
:type layout: Union[str, openvino.runtime.Layout]
)");
3 changes: 2 additions & 1 deletion src/bindings/python/tests/test_runtime/test_input_node.py
@@ -75,7 +75,8 @@ def test_input_get_source_output(device):
net_input = compiled_model.output(0)
input_node = net_input.get_node().inputs()[0]
name = input_node.get_source_output().get_node().get_friendly_name()
assert name == "relu"
# The expected ReLU node name can change when precision conversion is applied (a new Convert node is added)
assert name in ("relu", "relu.0")


def test_input_get_tensor(device):
@@ -203,7 +203,8 @@ bool convert_function_precision(const std::shared_ptr<Model>& f,
bool is_changed,
bool is_subgraph,
bool convert_input_output_precision,
bool store_original_precision_as_rt_attribute) {
bool store_original_precision_as_rt_attribute,
bool names_compatibility_mode) {
bool is_output_precision_changed = false;

ov::element::TypeVector orig_result_types;
@@ -273,7 +274,8 @@ bool convert_function_precision(const std::shared_ptr<Model>& f,
is_changed || is_output_precision_changed,
true,
true,
store_original_precision_as_rt_attribute) ||
store_original_precision_as_rt_attribute,
names_compatibility_mode) ||
is_changed;
}
}
@@ -321,24 +323,28 @@ bool convert_function_precision(const std::shared_ptr<Model>& f,
if (result->get_input_element_type(0) != orig_result_types[i]) {
auto result_input = result->input_value(0);
const auto convert = std::make_shared<ov::op::v0::Convert>(result_input, orig_result_types[i]);
if (result_input.get_node()->get_output_size() > 1) {
convert->set_friendly_name(result_input.get_node()->get_friendly_name() + "." +
std::to_string(result_input.get_index()));

auto convert_f_name = result_input.get_node()->get_friendly_name();
if (names_compatibility_mode) {
if (result_input.get_node()->get_output_size() > 1) {
convert_f_name += '.' + std::to_string(result_input.get_index());
} else {
result_input.get_node()->set_friendly_name("");
}

convert->get_output_tensor(0).set_names(result_input.get_names());
} else {
convert->set_friendly_name(result_input.get_node()->get_friendly_name());
result_input.get_node()->set_friendly_name("");
convert_f_name += '.' + std::to_string(result_input.get_index());
}

auto& convert_output_tensor = convert->get_output_tensor(0);
convert_output_tensor.set_names(result_input.get_names());
convert->set_friendly_name(convert_f_name);
OPENVINO_SUPPRESS_DEPRECATED_START
const auto& legacy_name = ov::descriptor::get_ov_tensor_legacy_name(result_input.get_tensor());
if (!legacy_name.empty()) {
ov::descriptor::set_ov_tensor_legacy_name(convert_output_tensor, legacy_name);
ov::descriptor::set_ov_tensor_legacy_name(convert->get_output_tensor(0), legacy_name);
}
OPENVINO_SUPPRESS_DEPRECATED_END

result_input.set_names({});
result->input(0).replace_source_output(convert->output(0));
result->revalidate_and_infer_types();
}
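The friendly-name rule in the hunk above can be sketched as a standalone Python helper (a hypothetical function mirroring the C++ logic in `convert_function_precision`, not part of OpenVINO):

```python
def convert_friendly_name(producer_name, output_size, output_index, names_compatibility_mode):
    # Name given to the Convert node inserted in front of a Result.
    name = producer_name
    if names_compatibility_mode:
        if output_size > 1:
            # multi-output producer: disambiguate with the port index
            name += "." + str(output_index)
        # otherwise the producer's friendly name is cleared and the
        # Convert node takes it over unchanged
    else:
        # new behaviour: always suffix, never steal the producer's name
        name += "." + str(output_index)
    return name
```

This is also why the Python test above now accepts both `"relu"` and `"relu.0"`.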
@@ -361,6 +367,8 @@ bool convert_precision(ov::pass::PassBase& pass,
// changing precision we need to understand which Constant consumers belongs
// to the current ov::Model
std::unordered_map<const ov::Node*, std::vector<Input<Node>>> const_to_internal_output;

const auto names_compatibility_mode = f->has_rt_info("version") && f->get_rt_info<int64_t>("version") < 11;
return convert_function_precision(f,
type_to_fuse,
type_to_extend,
@@ -371,7 +379,8 @@
false,
false,
convert_input_output_precision,
store_original_precision_as_rt_attribute);
store_original_precision_as_rt_attribute,
names_compatibility_mode);
}
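The version check added in `convert_precision` can be sketched as follows (hypothetical helper; `rt_info` stands in for the model's runtime-info map):

```python
def names_compatibility_mode(rt_info):
    # Only models whose IR version is below 11 keep the legacy
    # name handling; a missing "version" key disables it.
    return "version" in rt_info and rt_info["version"] < 11
```

This matches the tests below, which set `model->get_rt_info()["version"] = 10` to opt into compatibility mode.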

using precisions_set_t = std::unordered_set<ov::element::Type_t, EnumClassHash>;
19 changes: 13 additions & 6 deletions src/common/transformations/tests/utils/convert_precision.cpp
@@ -2145,8 +2145,9 @@ TEST(TransformationTests, ConvertPrecisionExplicitConvertsForParameterAndResult)
auto param_1 = make_shared<opset10::Parameter>(element::f64, Shape{3});
auto converted_param = make_shared<opset10::Convert>(param_1, element::f32);
auto sin = make_shared<opset10::Sin>(converted_param);
sin->get_output_tensor(0).add_names({"sine:0"});
auto converted_sin = make_shared<opset10::Convert>(sin, element::f64);
converted_sin->get_output_tensor(0).add_names({"sine:0"});
converted_sin->set_friendly_name("sine.0");
auto result_sin = make_shared<opset10::Result>(converted_sin);
model_ref = make_shared<Model>(result_sin, ParameterVector{param_1});
}
@@ -2156,7 +2157,7 @@ TEST(TransformationTests, ConvertPrecisionExplicitConvertsForParameterAndResult)
ASSERT_TRUE(result.valid) << result.message;

const auto& results = model->get_results();
ASSERT_EQ("sine", results[0]->get_input_node_ptr(0)->get_friendly_name());
ASSERT_EQ("sine.0", results[0]->get_input_node_ptr(0)->get_friendly_name());
}

TEST(TransformationTests, ConvertPrecisionExplicitConvertsMultiParam) {
@@ -2220,8 +2221,8 @@ TEST(TransformationTests, ConvertPrecisionExplicitConvertsMultiParam) {
auto converted_mul = make_shared<opset10::Convert>(mul, element::f64);
auto sin = make_shared<opset10::Sin>(convert_1);

converted_add->get_output_tensor(0).add_names({"add:0"});
converted_mul->get_output_tensor(0).add_names({"mul:0"});
add->get_output_tensor(0).add_names({"add:0"});
mul->get_output_tensor(0).add_names({"mul:0"});
sin->get_output_tensor(0).add_names({"sine:0"});

auto result_add = make_shared<opset10::Result>(converted_add);
@@ -2237,8 +2238,6 @@ TEST(TransformationTests, ConvertPrecisionExplicitConvertsMultiParam) {
ASSERT_TRUE(result.valid) << result.message;

const auto& results = model->get_results();
ASSERT_EQ("add", results[0]->get_input_node_ptr(0)->get_friendly_name());
ASSERT_EQ("mul", results[1]->get_input_node_ptr(0)->get_friendly_name());
ASSERT_EQ("sine", results[2]->get_input_node_ptr(0)->get_friendly_name());
}

@@ -2259,6 +2258,8 @@ TEST(TransformationTests, ConvertPrecisionExplicitConvertsSingleNodeMultipleOutp
ov::descriptor::set_ov_tensor_legacy_name(split->get_output_tensor(2), "legacy_split:2");
OPENVINO_SUPPRESS_DEPRECATED_END
model = make_shared<Model>(split->outputs(), ParameterVector{param_1});
// set version 10 to use names compatibility mode
model->get_rt_info()["version"] = static_cast<int64_t>(10);

type_to_fuse_map empty_type_to_fuse_map = {};
bool keep_precision_sensitive_in_fp32 = false;
@@ -2275,6 +2276,9 @@
auto convert_1 = make_shared<opset10::Convert>(param_1, element::f32);
auto axis = opset10::Constant::create(element::i32, Shape{}, {0});
auto split = make_shared<opset10::Split>(convert_1, axis, 3);
split->get_output_tensor(0).add_names({"split:0"});
split->get_output_tensor(1).add_names({"split:1"});
split->get_output_tensor(2).add_names({"split:2"});

auto convert_split_0 = make_shared<opset10::Convert>(split->output(0), element::f64);
auto convert_split_1 = make_shared<opset10::Convert>(split->output(1), element::f64);
@@ -2348,6 +2352,8 @@ TEST(TransformationTests, ConvertPrecisionExplicitConvertsMultiSubgraphs) {
result.get_node()->set_friendly_name("if_result");
result.add_names({"if_result:0"});
model = make_shared<Model>(OutputVector{result}, ParameterVector{cond, param_1, param_2});
// set version 10 to use names compatibility mode
model->get_rt_info()["version"] = static_cast<int64_t>(10);

type_to_fuse_map empty_type_to_fuse_map = {};
bool keep_precision_sensitive_in_fp32 = false;
@@ -2401,6 +2407,7 @@ TEST(TransformationTests, ConvertPrecisionExplicitConvertsMultiSubgraphs) {
if_op->set_input(convert_1, param_1_then, param_1_else);
if_op->set_input(convert_2, param_2_then, param_2_else);
auto result = if_op->set_output(result_then, result_else);
result.add_names({"if_result:0"});
auto converted_result = make_shared<opset10::Convert>(result, element::f64);
converted_result->get_output_tensor(0).add_names({"if_result:0"});

9 changes: 8 additions & 1 deletion src/core/include/openvino/core/descriptor/output.hpp
@@ -42,7 +42,8 @@ class OPENVINO_API Output {
std::shared_ptr<Tensor> get_tensor_ptr() const {
return m_tensor;
}
void set_tensor_ptr(const std::shared_ptr<Tensor>& tensor) {

virtual void set_tensor_ptr(const std::shared_ptr<Tensor>& tensor) {
m_tensor = tensor;
}
void add_input(Input* input);
@@ -70,8 +71,14 @@ class OPENVINO_API Output {
Output(const Output&) = default;
Output(Output&&) = default;
Output& operator=(const Output&) = default;
virtual ~Output() = default;

protected:
friend void ov::Output<Node>::set_names(const std::unordered_set<std::string>& names);
friend void ov::Output<Node>::add_names(const std::unordered_set<std::string>& names);
virtual void set_names(const std::unordered_set<std::string>& names);
virtual void add_names(const std::unordered_set<std::string>& names);

Node* m_node;
size_t m_index;
std::shared_ptr<Tensor> m_tensor;
26 changes: 25 additions & 1 deletion src/core/include/openvino/core/node.hpp
Original file line number Diff line number Diff line change
@@ -122,6 +122,10 @@ class OPENVINO_API Node : public std::enable_shared_from_this<Node> {
descriptor::Input& get_input_descriptor(size_t position);
descriptor::Output& get_output_descriptor(size_t position);

/// \brief Factory function to make an output descriptor.
using OutputDescriptorFactory = std::unique_ptr<descriptor::Output> (*)(Node*,
const size_t,
std::shared_ptr<descriptor::Tensor>);
/// \brief Construct an uninitialized Node
Node();
/// \brief Copying a node
@@ -137,6 +141,13 @@
/// \param arguments Output i will connect to input i
/// \param output_size Number of outputs for this node
Node(const OutputVector& arguments, size_t output_size = 1);

/// \brief Constructor for Node subclasses that have metaclasses.
/// \param arguments Output i will connect to input i.
/// \param descriptor_factory Creates output descriptor for this node.
/// \param output_size Number of outputs for this node (default 1).
Node(const OutputVector& arguments, Node::OutputDescriptorFactory descriptor_factory, size_t output_size = 1);

/// \brief Moves nodes that would be deleted from inputs to nodes to avoid stack overflows
/// on deep networks.
void safe_delete(NodeVector& nodes, bool recurse);
@@ -437,7 +448,7 @@ class OPENVINO_API Node : public std::enable_shared_from_this<Node> {
mutable std::atomic_bool m_name_changing{false};
static std::atomic<size_t> m_next_instance_id;
std::deque<descriptor::Input> m_inputs;
std::deque<descriptor::Output> m_outputs;
std::deque<std::unique_ptr<descriptor::Output>> m_outputs;
RTMap m_rt_info;

// The vector of SharedRTInfo attributes associated to Functions
@@ -452,6 +463,19 @@ class OPENVINO_API Node : public std::enable_shared_from_this<Node> {
// update of this field by having specific method with mutex.
void insert_info(std::shared_ptr<SharedRTInfo> info);
std::mutex m_insert_mutex;

/// \brief Makes the default ov::descriptor::Output for a Node at the given port index.
///
/// \param node Node which owns the output.
/// \param i Output port index.
/// \param tensor Shared tensor descriptor connected to this output descriptor.
///
/// \return std::unique_ptr<descriptor::Output>
static std::unique_ptr<descriptor::Output> make_output_descriptor(Node* node,
const size_t i,
std::shared_ptr<descriptor::Tensor> tensor);
/// \brief Holds function factory for output descriptor.
OutputDescriptorFactory m_output_descriptor_factory = make_output_descriptor;
};

using NodeTypeInfo = Node::type_info_t;
10 changes: 10 additions & 0 deletions src/core/include/openvino/core/preprocess/output_tensor_info.hpp
@@ -47,6 +47,16 @@ class OPENVINO_API OutputTensorInfo final {
///
/// \return Reference to 'this' to allow chaining with other calls in a builder-like manner
OutputTensorInfo& set_layout(const ov::Layout& layout);

/// \brief Enable/disable keeping names in compatibility mode (enabled by default).
///
/// In compatibility mode the friendly name or tensor names can be moved to a node added by the PrePostProcessor.
/// When compatibility mode is disabled, names are not moved.
///
/// \param compatibility_mode True to enable compatibility mode, false to disable it.
///
/// \return Reference to 'this' to allow chaining with other calls in a builder-like manner
OutputTensorInfo& set_names_compatibility_mode(const bool compatibility_mode);
};

} // namespace preprocess
1 change: 1 addition & 0 deletions src/core/include/openvino/op/op.hpp
@@ -36,6 +36,7 @@ class OPENVINO_API Op : public Node {
protected:
Op() : Node() {}
Op(const OutputVector& arguments);
Op(const OutputVector& arguments, Node::OutputDescriptorFactory descriptor_factory);

public:
_OPENVINO_HIDDEN_METHOD static const ::ov::Node::type_info_t& get_type_info_static() {
11 changes: 11 additions & 0 deletions src/core/include/openvino/op/result.hpp
@@ -10,9 +10,20 @@
namespace ov {
namespace op {
namespace v0 {

/// \brief Result operation.
///
/// \ingroup ov_ops_cpp_api
///
/// The Result operator's output is a special output which shares its tensor with the node connected to it.
/// The Result's output names are visible as the model's output names.
/// To set these, use
/// - `Result::output(0)::set_names/add_names` to set/add the names on the Result's output.
///
/// Using `Result::get_output_tensor(0)::set_names/add_names` will set/add names on the tensor without modifying
/// the Result's output names.
/// The Result's output names are appended to the connected tensor, or transferred to a new tensor when the Result is
/// connected to a new node.
class OPENVINO_API Result : public Op {
public:
OPENVINO_OP("Result", "opset1");
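A toy Python model of the naming semantics the doc comment above describes (purely illustrative; not the real `ov::op::v0::Result` API):

```python
class ToyResult:
    """Sketch of the documented Result naming behaviour."""

    def __init__(self):
        self.output_names = set()  # set via Result::output(0).set_names
        self.tensor_names = set()  # names on the shared tensor

    def set_output_names(self, names):
        self.output_names = set(names)

    def connect(self, upstream_names):
        # On (re)connection, the Result's own names are appended to the
        # tensor shared with the upstream node; the upstream names stay.
        self.tensor_names = set(upstream_names) | self.output_names
```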
2 changes: 1 addition & 1 deletion src/core/src/descriptor/input.cpp
@@ -48,7 +48,7 @@ void ov::descriptor::Input::replace_output(Output& new_output) {
}

void ov::descriptor::Input::replace_output(const std::shared_ptr<ov::Node>& node, size_t i) {
replace_output(node->m_outputs.at(i));
replace_output(node->get_output_descriptor(i));
}

void ov::descriptor::Input::remove_output() {
8 changes: 8 additions & 0 deletions src/core/src/descriptor/output.cpp
@@ -52,3 +52,11 @@ const ov::PartialShape& ov::descriptor::Output::get_partial_shape() const {
const ov::element::Type& ov::descriptor::Output::get_element_type() const {
return m_tensor->get_element_type();
}

void ov::descriptor::Output::set_names(const std::unordered_set<std::string>& names) {
get_tensor().set_names(names);
}

void ov::descriptor::Output::add_names(const std::unordered_set<std::string>& names) {
get_tensor().add_names(names);
}
12 changes: 3 additions & 9 deletions src/core/src/descriptor/tensor.cpp
@@ -8,6 +8,7 @@
#include "openvino/core/descriptor_tensor.hpp"
#include "openvino/core/except.hpp"
#include "openvino/core/node.hpp"
#include "openvino/core/type/element_iterator.hpp"
#include "openvino/op/util/symbolic_info.hpp"

ov::descriptor::Tensor::Tensor(const element::Type& element_type,
@@ -70,9 +71,7 @@ const ov::Shape& ov::descriptor::Tensor::get_shape() const {
}

size_t ov::descriptor::Tensor::size() const {
const bool bitwidth_less_than_byte = m_element_type.bitwidth() < 8;
return bitwidth_less_than_byte ? (shape_size(get_shape()) * m_element_type.bitwidth() + 7) >> 3
: (shape_size(get_shape()) * m_element_type.size());
return element::get_memory_size(get_element_type(), shape_size(get_shape()));
}
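The inline computation removed above and `element::get_memory_size` should agree on the same rule, sketched here as a hypothetical Python helper: sub-byte element types are rounded up to whole bytes, byte-aligned types use their byte size directly.

```python
def tensor_memory_size(num_elements, bitwidth):
    # Bytes needed for num_elements values of the given bit width.
    if bitwidth < 8:
        # sub-byte types (e.g. u4, u1): round total bits up to bytes
        return (num_elements * bitwidth + 7) >> 3
    return num_elements * (bitwidth // 8)
```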

const std::unordered_set<std::string>& ov::descriptor::Tensor::get_names() const {
@@ -106,13 +105,8 @@ void ov::descriptor::Tensor::add_names(const std::unordered_set<std::string>& na
}

void ov::descriptor::Tensor::clone_from(const ov::descriptor::Tensor& old) {
{
AtomicGuard lock(m_shape_changing);
m_partial_shape = old.get_partial_shape();
m_shape_changed = true;
}
set_tensor_type(*this, old.get_element_type(), old.get_partial_shape());
set_names(old.get_names());
m_element_type = old.get_element_type();
m_lower_value = old.get_lower_value();
m_upper_value = old.get_upper_value();
m_value_symbol = old.get_value_symbol();