Update to latest 2021.1 #121

Merged · 51 commits · Dec 18, 2020
Changes from 1 commit
eec2fd8
[DOCS] [41549] Fix broken code block in Install OpenVINO from PyPI Re…
aalborov Oct 26, 2020
2cf8999
add animation (#2865)
ntyukaev Oct 30, 2020
a081dfe
Align `time_tests` with master branch from 4021e144 (#2881)
vurusovs Oct 30, 2020
a7ab76e
Added info on DockerHub CI Framework (#2919)
andrew-zaytsev Nov 3, 2020
313d889
Feature/azaytsev/cherry pick pr2541 to 2021 1 (#2960)
andrew-zaytsev Nov 3, 2020
14aa83f
See Also sections in MO Guide (#2770)
aalborov Nov 6, 2020
d2dc54f
Fixes (#3105)
aalborov Nov 16, 2020
78f8b6a
Renamed Benchmark App into Benchmark Tool in the menu (#3032)
aalborov Nov 16, 2020
bd3ba38
[DOC] Update Docker install guide (#3055) (#3200)
generalova-kate Nov 18, 2020
38892b2
Align time_tests with master (#3238)
vurusovs Nov 20, 2020
43a6e4c
Fix onnx tests versions (#3240)
rblaczkowski Nov 20, 2020
751ef42
[40929] DL Workbench in Get Started (#2740)
aalborov Nov 20, 2020
57eee6a
Links to DL Workbench Installation Guide (#2861)
aalborov Nov 20, 2020
9d5b200
[41545] Add links to DL Workbench from components that are available …
aalborov Nov 20, 2020
20fd0bc
Feature/azaytsev/change layout (#3295)
andrew-zaytsev Nov 23, 2020
6adaad6
Add several new models to `tgl_test_config.yml` in time_tests (#3269)
vurusovs Nov 24, 2020
f2a3d6b
Fix a typo in DL Workbench Get Started (#3338)
aalborov Nov 25, 2020
f5e2fff
ops math formula fix (#3333)
ntyukaev Nov 30, 2020
bff3381
Fix paths for `squeezenet1.1` in time_tests config (#3416)
vurusovs Nov 30, 2020
6260125
GNA Plugin doc review (#2922)
aalborov Dec 7, 2020
6374d44
Port PlaidML plugin forward to 2021.1 (#32)
tzerrell Oct 19, 2020
35651cd
Enable testing of BatchNorm (#33)
tzerrell Oct 19, 2020
b60f71a
Require specific path to shared library (#34)
Oct 19, 2020
5966062
Fix multiple outputs and add Split (#42)
tzerrell Oct 22, 2020
022e254
Swish (#47)
mwyi Oct 24, 2020
56e9add
Add Reverse & tests to PlaidML Plugin (#35)
tzerrell Oct 26, 2020
ed90660
Make separate PlaidMLProgramBuilder (#92)
tzerrell Oct 30, 2020
9b91499
Variadic Split (#91)
mwyi Nov 2, 2020
e15c466
Add BinaryConvolution (#93)
LiyangLingIntel Nov 3, 2020
4ac5e60
Add working tests back (#97)
cnamrata15 Nov 3, 2020
4e81cc1
Add bucketize op and tests (#90)
XinWangIntel Nov 3, 2020
3454f38
Add extract image patches op (#96)
XingHongChenIntel Nov 4, 2020
4320e6b
Hswish via ReLU (#95)
mwyi Nov 4, 2020
714c4a8
Add reorg_yolo op (#101)
XingHongChenIntel Nov 7, 2020
3f4722f
Remove conv bprop & fake quant tests (#106)
tzerrell Nov 10, 2020
bacee8b
add EmbeddingBagOffsetsSum op and tests (#100)
haoyouab Nov 10, 2020
c6457f9
Add LSTMCell (#102)
LiyangLingIntel Nov 10, 2020
2897093
Add RNNCell (#109)
tzerrell Nov 10, 2020
bace3d8
Add space_to_batch op (#104)
XingHongChenIntel Nov 10, 2020
3326f49
Add tests for MinMax, DepthToSpace (#105)
cnamrata15 Nov 10, 2020
d1f0000
Add GELU (#107)
LiyangLingIntel Nov 10, 2020
b796cd5
Add GRUCell (#110)
tzerrell Nov 10, 2020
cbf3f33
Fix support for using OpenVINO as a subproject (#111)
Nov 11, 2020
83f6ce4
Build fixes for newer compilers (#113)
Nov 11, 2020
b053f8c
add EmbeddingBagPackedSum op and tests (#114)
haoyouab Nov 12, 2020
9146822
Add shuffle_channels op and test. (#112)
XingHongChenIntel Nov 13, 2020
f38d9d9
Tests for squared difference op (#115)
cnamrata15 Nov 13, 2020
3edec51
Add acosh, asinh, atanh into tests (#118)
LiyangLingIntel Dec 3, 2020
11aeeb9
Reverse sequence (#116)
XingHongChenIntel Dec 4, 2020
fa8c54b
Add PriorBox op and test. (#117)
XinWangIntel Dec 4, 2020
249e62f
Remove obsolete PlaidML code (#120)
tzerrell Dec 18, 2020
Add working tests back (#97)
* Attempt to add tests back

* Change op name for logical_and to match tests

* Add tests for:
  * Convert
  * Convolution_Backprop_Data
  * Fake_quantize
  * SoftMax
  * Tile
  * Transpose

* Fix comparison and logical tests, use IE ref mode for now

* Remove cumsum and logical tests

* Remove comparison tests and their fixes
cnamrata15 authored and YangleiZouIntel committed Dec 18, 2020
commit 4ac5e60ea2cb3afb83db89fd3dc3bbe9c5c0d211
2 changes: 1 addition & 1 deletion inference-engine/src/plaidml_plugin/ops/logical_and.cpp
@@ -14,7 +14,7 @@ using namespace InferenceEngine; // NOLINT[build/namespaces]

namespace PlaidMLPlugin {

-static OpRegistration reg("and", [](const Context& ctx) {
+static OpRegistration reg("logicaland", [](const Context& ctx) {
IE_ASSERT(ctx.operands.size() == 2);
auto A = ctx.operands.at(0);
auto B = ctx.operands.at(1);
2 changes: 1 addition & 1 deletion inference-engine/src/plaidml_plugin/ops/logical_or.cpp
@@ -14,7 +14,7 @@ using namespace InferenceEngine; // NOLINT[build/namespaces]

namespace PlaidMLPlugin {

-static OpRegistration reg("or", [](const Context& ctx) {
+static OpRegistration reg("logicalor", [](const Context& ctx) {
IE_ASSERT(ctx.operands.size() == 2);
auto A = ctx.operands.at(0);
auto B = ctx.operands.at(1);
2 changes: 1 addition & 1 deletion inference-engine/src/plaidml_plugin/ops/logical_xor.cpp
@@ -14,7 +14,7 @@ using namespace InferenceEngine; // NOLINT[build/namespaces]

namespace PlaidMLPlugin {

-static OpRegistration reg("xor", [](const Context& ctx) {
+static OpRegistration reg("logicalxor", [](const Context& ctx) {
IE_ASSERT(ctx.operands.size() == 2);
auto A = edsl::cast(ctx.operands.at(0), plaidml::DType::BOOLEAN); // cast to bool and use bitwise xor for now
auto B = edsl::cast(ctx.operands.at(1), plaidml::DType::BOOLEAN);
@@ -0,0 +1,30 @@
// Copyright (C) 2019 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#include <vector>

#include "single_layer_tests/convert.hpp"
#include "common_test_utils/test_constants.hpp"

using namespace LayerTestsDefinitions;

namespace {
const std::vector<std::vector<size_t>> inShape = {{1, 2, 3, 4}};

const std::vector<InferenceEngine::Precision> netPrecisions = {
InferenceEngine::Precision::FP32,
//InferenceEngine::Precision::FP16,
//InferenceEngine::Precision::U8,
//InferenceEngine::Precision::I8,
};

INSTANTIATE_TEST_CASE_P(NoReshape, ConvertLayerTest,
::testing::Combine(
::testing::Values(inShape),
::testing::ValuesIn(netPrecisions),
::testing::ValuesIn(netPrecisions),
::testing::Values(CommonTestUtils::DEVICE_PLAIDML)),
ConvertLayerTest::getTestCaseName);

} // namespace
@@ -0,0 +1,111 @@
// Copyright (C) 2019 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#include <vector>

#include "single_layer_tests/convolution_backprop_data.hpp"
#include "common_test_utils/test_constants.hpp"

using namespace LayerTestsDefinitions;

namespace {

const std::vector<InferenceEngine::Precision> netPrecisions = {
InferenceEngine::Precision::FP32,
//InferenceEngine::Precision::FP16
};

const std::vector<size_t> numOutChannels = {1, 5, 16};

/* ============= 2D ConvolutionBackpropData ============= */
const std::vector<std::vector<size_t >> inputShapes2D = {{1, 3, 30, 30},
{1, 16, 10, 10},
{1, 32, 10, 10}};
const std::vector<std::vector<size_t >> kernels2D = {{1, 1}, {3, 3}, {3, 5}};
const std::vector<std::vector<size_t >> strides2D = {{1, 1}, {1, 3}};
const std::vector<std::vector<ptrdiff_t>> padBegins2D = {{0, 0}};
const std::vector<std::vector<ptrdiff_t>> padEnds2D = {{0, 0}, {1, 1}};
const std::vector<std::vector<size_t >> dilations2D = {{1, 1}, {2, 2}};

const auto conv2DParams_ExplicitPadding = ::testing::Combine(
::testing::ValuesIn(kernels2D),
::testing::ValuesIn(strides2D),
::testing::ValuesIn(padBegins2D),
::testing::ValuesIn(padEnds2D),
::testing::ValuesIn(dilations2D),
::testing::ValuesIn(numOutChannels),
::testing::Values(ngraph::op::PadType::EXPLICIT)
);
const auto conv2DParams_AutoPadValid = ::testing::Combine(
::testing::ValuesIn(kernels2D),
::testing::ValuesIn(strides2D),
::testing::Values(std::vector<ptrdiff_t>({0, 0})),
::testing::Values(std::vector<ptrdiff_t>({0, 0})),
::testing::ValuesIn(dilations2D),
::testing::ValuesIn(numOutChannels),
::testing::Values(ngraph::op::PadType::VALID)
);

INSTANTIATE_TEST_CASE_P(ConvolutionBackpropData2D_ExplicitPadding, ConvolutionBackpropDataLayerTest,
::testing::Combine(
conv2DParams_ExplicitPadding,
::testing::ValuesIn(netPrecisions),
::testing::ValuesIn(inputShapes2D),
::testing::Values(CommonTestUtils::DEVICE_PLAIDML)),
ConvolutionBackpropDataLayerTest::getTestCaseName);

INSTANTIATE_TEST_CASE_P(ConvolutionBackpropData2D_AutoPadValid, ConvolutionBackpropDataLayerTest,
::testing::Combine(
conv2DParams_AutoPadValid,
::testing::ValuesIn(netPrecisions),
::testing::ValuesIn(inputShapes2D),
::testing::Values(CommonTestUtils::DEVICE_PLAIDML)),
ConvolutionBackpropDataLayerTest::getTestCaseName);

/* ============= 3D ConvolutionBackpropData ============= */
const std::vector<std::vector<size_t >> inputShapes3D = {{1, 3, 10, 10, 10},
{1, 16, 5, 5, 5},
{1, 32, 5, 5, 5}};
const std::vector<std::vector<size_t >> kernels3D = {{1, 1, 1}, {3, 3, 3}};
const std::vector<std::vector<size_t >> strides3D = {{1, 1, 1}};
const std::vector<std::vector<ptrdiff_t>> padBegins3D = {{0, 0, 0}};
const std::vector<std::vector<ptrdiff_t>> padEnds3D = {{0, 0, 0}, {1, 1, 1}};
const std::vector<std::vector<size_t >> dilations3D = {{1, 1, 1}, {2, 2, 2}};

const auto conv3DParams_ExplicitPadding = ::testing::Combine(
::testing::ValuesIn(kernels3D),
::testing::ValuesIn(strides3D),
::testing::ValuesIn(padBegins3D),
::testing::ValuesIn(padEnds3D),
::testing::ValuesIn(dilations3D),
::testing::ValuesIn(numOutChannels),
::testing::Values(ngraph::op::PadType::EXPLICIT)
);
const auto conv3DParams_AutoPadValid = ::testing::Combine(
::testing::ValuesIn(kernels3D),
::testing::ValuesIn(strides3D),
::testing::Values(std::vector<ptrdiff_t>({0, 0, 0})),
::testing::Values(std::vector<ptrdiff_t>({0, 0, 0})),
::testing::ValuesIn(dilations3D),
::testing::ValuesIn(numOutChannels),
::testing::Values(ngraph::op::PadType::VALID)
);

INSTANTIATE_TEST_CASE_P(ConvolutionBackpropData3D_ExplicitPadding, ConvolutionBackpropDataLayerTest,
::testing::Combine(
conv3DParams_ExplicitPadding,
::testing::ValuesIn(netPrecisions),
::testing::ValuesIn(inputShapes3D),
::testing::Values(CommonTestUtils::DEVICE_PLAIDML)),
ConvolutionBackpropDataLayerTest::getTestCaseName);

INSTANTIATE_TEST_CASE_P(ConvolutionBackpropData3D_AutoPadValid, ConvolutionBackpropDataLayerTest,
::testing::Combine(
conv3DParams_AutoPadValid,
::testing::ValuesIn(netPrecisions),
::testing::ValuesIn(inputShapes3D),
::testing::Values(CommonTestUtils::DEVICE_PLAIDML)),
ConvolutionBackpropDataLayerTest::getTestCaseName);

} // namespace
@@ -0,0 +1,36 @@
// Copyright (C) 2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#include <vector>

#include "single_layer_tests/fake_quantize.hpp"
#include "common_test_utils/test_constants.hpp"

using namespace LayerTestsDefinitions;

namespace {

const std::vector<InferenceEngine::Precision> netPrecisions = {
InferenceEngine::Precision::FP32,
//InferenceEngine::Precision::FP16
};

const std::vector<std::vector<size_t>> inputShapes = {{1, 1, 1, 1}, {3, 10, 5, 6}};
const std::vector<std::vector<size_t>> constShapes = {{1}};
const std::vector<size_t> levels = {16, 255, 256};

const auto fqParams = ::testing::Combine(
::testing::ValuesIn(levels),
::testing::ValuesIn(constShapes)
);

INSTANTIATE_TEST_CASE_P(FakeQuantize, FakeQuantizeLayerTest,
::testing::Combine(
fqParams,
::testing::ValuesIn(netPrecisions),
::testing::ValuesIn(inputShapes),
::testing::Values(CommonTestUtils::DEVICE_PLAIDML)),
FakeQuantizeLayerTest::getTestCaseName);

} // namespace
@@ -0,0 +1,72 @@
// Copyright (C) 2019 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#include <vector>

#include "single_layer_tests/softmax.hpp"
#include "common_test_utils/test_constants.hpp"

using namespace LayerTestsDefinitions;

namespace {

const std::vector<InferenceEngine::Precision> netPrecisions = {
InferenceEngine::Precision::FP32,
};

const std::vector<InferenceEngine::Layout> inputLayouts2D = {
InferenceEngine::Layout::NC,
};

const std::vector<InferenceEngine::SizeVector> inputShapes2D = {
InferenceEngine::SizeVector {1, 100},
InferenceEngine::SizeVector {100, 1},
InferenceEngine::SizeVector {10, 10},
};

const std::vector<size_t> axis2D = {
0, 1
};

const auto params2D = testing::Combine(
testing::ValuesIn(netPrecisions),
testing::ValuesIn(inputLayouts2D),
testing::ValuesIn(inputShapes2D),
testing::ValuesIn(axis2D),
testing::Values(CommonTestUtils::DEVICE_PLAIDML),
testing::Values(std::map<std::string, std::string>())
);

INSTANTIATE_TEST_CASE_P(
SoftMax2D,
SoftMaxLayerTest,
params2D,
SoftMaxLayerTest::getTestCaseName
);

const std::vector<InferenceEngine::SizeVector> inputShapes4D = {
InferenceEngine::SizeVector {1, 100, 1, 1},
InferenceEngine::SizeVector {1, 3, 4, 3},
InferenceEngine::SizeVector {2, 3, 4, 5},
};

const std::vector<size_t> axis4D = {0, 1, 2, 3};

const auto params4D = testing::Combine(
testing::ValuesIn(netPrecisions),
testing::Values(InferenceEngine::Layout::NCHW),
testing::ValuesIn(inputShapes4D),
testing::ValuesIn(axis4D),
testing::Values(CommonTestUtils::DEVICE_PLAIDML),
testing::Values(std::map<std::string, std::string>())
);

INSTANTIATE_TEST_CASE_P(
SoftMax4D,
SoftMaxLayerTest,
params4D,
SoftMaxLayerTest::getTestCaseName
);

} // namespace
@@ -0,0 +1,40 @@
// Copyright (C) 2020 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#include <vector>

#include "single_layer_tests/tile.hpp"

using namespace LayerTestsDefinitions;

namespace {

const std::vector<InferenceEngine::Precision> netPrecisions = {
InferenceEngine::Precision::FP32
};

const std::vector<std::vector<size_t>> repeats = {
{1, 2, 3},
{2, 1, 1},
{2, 3, 1},
{2, 2, 2},
};

INSTANTIATE_TEST_CASE_P(Tile, TileLayerTest,
::testing::Combine(
::testing::ValuesIn(repeats),
::testing::ValuesIn(netPrecisions),
::testing::Values(std::vector<size_t>({2, 3, 4})),
::testing::Values(CommonTestUtils::DEVICE_PLAIDML)),
TileLayerTest::getTestCaseName);

INSTANTIATE_TEST_CASE_P(Tile6d, TileLayerTest,
::testing::Combine(
::testing::Values(std::vector<size_t>({1, 1, 1, 2, 1, 2})),
::testing::ValuesIn(netPrecisions),
::testing::Values(std::vector<size_t>({1, 4, 3, 1, 3, 1})),
::testing::Values(CommonTestUtils::DEVICE_PLAIDML)),
TileLayerTest::getTestCaseName);

} // namespace
@@ -0,0 +1,41 @@
// Copyright (C) 2019 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#include <vector>

#include "single_layer_tests/transpose.hpp"
#include "common_test_utils/test_constants.hpp"

using namespace LayerTestsDefinitions;

namespace {

const std::vector<InferenceEngine::Precision> netPrecisions = {
InferenceEngine::Precision::FP32,
};

const std::vector<std::vector<size_t>> inputShapes = {
std::vector<size_t>{1, 3, 100, 100},
};

const std::vector<std::vector<size_t>> inputOrder = {
std::vector<size_t>{0, 3, 2, 1},
std::vector<size_t>{},
};

const auto params = testing::Combine(
testing::ValuesIn(inputOrder),
testing::ValuesIn(netPrecisions),
testing::ValuesIn(inputShapes),
testing::Values(CommonTestUtils::DEVICE_PLAIDML)
);

INSTANTIATE_TEST_CASE_P(
Transpose,
TransposeLayerTest,
params,
TransposeLayerTest::getTestCaseName
);

} // namespace
@@ -65,4 +65,4 @@ void ComparisonLayerTest::SetUp() {
TEST_P(ComparisonLayerTest, ComparisonTests) {
Run();
}
-} // namespace LayerTestsDefinitions
\ No newline at end of file
+} // namespace LayerTestsDefinitions
@@ -85,4 +85,4 @@ void LogicalLayerTest::SetUp() {
TEST_P(LogicalLayerTest, LogicalTests) {
Run();
}
-} // namespace LayerTestsDefinitions
\ No newline at end of file
+} // namespace LayerTestsDefinitions