Add Col2Im CPU op #12311
Conversation
This pull request introduces 1 alert when merging bedbfcd into c40f73a (view new alerts on LGTM.com).
Force-pushed from 21f94d9 to d6cc30b
This pull request introduces 4 alerts when merging d6cc30b into 51a7998 (view new alerts on LGTM.com).
Force-pushed from d6cc30b to c5b72eb
This pull request introduces 4 alerts when merging c5b72eb into 51a7998 (view new alerts on LGTM.com).
Force-pushed from a5fbfa0 to 4cfde71
This pull request introduces 1 alert when merging 4cfde71 into 148b1ef (view new alerts on LGTM.com).
This pull request introduces 5 alerts when merging 71100b3 into 315e006 (view new alerts on LGTM.com).
Force-pushed from 71100b3 to 72e056c
This pull request introduces 4 alerts when merging 72e056c into 5d1173f (view new alerts on LGTM.com).
This pull request introduces 4 alerts when merging 346dea5 into d1497bd (view new alerts on LGTM.com).
Force-pushed from 346dea5 to 4382687
This pull request introduces 4 alerts when merging 4382687 into 97268e0 (view new alerts on LGTM.com).
Force-pushed from c189515 to 0841772
This pull request introduces 4 alerts when merging 0841772 into 37995a7 (view new alerts on LGTM.com).
Force-pushed from 0841772 to 4e1cfbd
This pull request introduces 1 alert when merging 4e1cfbd into 0d9a02e (view new alerts on LGTM.com).
Force-pushed from c2d3822 to 13e326b
This pull request introduces 1 alert when merging 13e326b into 3e78f3c (view new alerts on LGTM.com).
Force-pushed from 13e326b to 21bc8f5
Signed-off-by: Liqun Fu <[email protected]>
Can you add unit tests? Also, why does the title say contrib op?
An Anubis run is necessary. It is easier to catch a regression on a specific PR than to hunt them down later.
Test cases are added to ONNX, and these tests are run in CI. I disabled the col2im_pads tests because a minor typo will fail one test; the other col2im tests are still running. The kernel was added as a contrib op first, before ONNX 1.13 was available. I removed "contrib" from the title. [Edit]: I just brought back the tests @thiagocrepaldi wrote.
Good point! @thiagocrepaldi already had tests in this PR. I somehow removed them; I just brought back the tests Thiago wrote.
…ror with ReactNative CI Signed-off-by: Liqun Fu <[email protected]>
Force-pushed from 5893bd9 to bc25103
**Description**

This PR implements N-dimensional Col2Im as a contrib CPU Op as specified by ONNX's onnx/onnx#3948

**Motivation and Context**

- Col2Im enables models such as:
  - [SS-DCNet](https://github.com/xhp-hust-2018-2011/SS-DCNet)
  - [DSTT](https://github.com/ruiliu-ai/DSTT)
- It also serves to document ORT's obscure `math::Col2ImNd` utility

Signed-off-by: Liqun Fu <[email protected]>
Co-authored-by: Liqun Fu <[email protected]>
@thiagocrepaldi @liqunfu
Oh no, there is a typo in the ONNX test! It should actually be:

```python
import torch

col = torch.tensor([[[ 1.,  6., 11., 16., 21., 26., 31., 36., 41., 46., 51., 56., 61., 66., 71.],  # (1, 5, 15)
                     [ 2.,  7., 12., 17., 22., 27., 32., 37., 42., 47., 52., 57., 62., 67., 72.],
                     [ 3.,  8., 13., 18., 23., 28., 33., 38., 43., 48., 53., 58., 63., 68., 73.],
                     [ 4.,  9., 14., 19., 24., 29., 34., 39., 44., 49., 54., 59., 64., 69., 74.],
                     [ 5., 10., 15., 20., 25., 30., 35., 40., 45., 50., 55., 60., 65., 70., 75.]]])
kernel_size = (1, 5)
output_size = (5, 5)
padding = (0, 1)

# Col2Im results
col2im = torch.nn.Fold(kernel_size=kernel_size,
                       padding=padding,
                       output_size=output_size)
im = col2im(col)
print(f'Col2Im output:\n{im}\n\t with shape {im.shape}')
```

which results in:

```
Col2Im output:
tensor([[[[  8.,  21.,  24.,  27.,  24.],
          [ 38.,  66.,  69.,  72.,  54.],
          [ 68., 111., 114., 117.,  84.],
          [ 98., 156., 159., 162., 114.],
          [128., 201., 204., 207., 144.]]]])
	 with shape torch.Size([1, 1, 5, 5])
```
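For reference, the sums that `torch.nn.Fold` computes in the example above can be reproduced with a plain NumPy col2im. This is only a minimal 2D sketch (unit strides and dilations assumed), not ORT's `math::Col2ImNd` implementation:

```python
import numpy as np

def col2im(col, output_size, kernel_size, padding=(0, 0), stride=(1, 1)):
    """Sum sliding blocks of col (N, C*kh*kw, L) back into images (N, C, H, W)."""
    n_batch = col.shape[0]
    kh, kw = kernel_size
    channels = col.shape[1] // (kh * kw)
    out_h, out_w = output_size
    ph, pw = padding
    sh, sw = stride
    # Accumulate onto a padded canvas, then crop the padding off at the end.
    pad_h, pad_w = out_h + 2 * ph, out_w + 2 * pw
    n_h = (pad_h - kh) // sh + 1  # block positions along H
    n_w = (pad_w - kw) // sw + 1  # block positions along W
    assert col.shape[2] == n_h * n_w
    blocks = col.reshape(n_batch, channels, kh, kw, n_h * n_w)
    out = np.zeros((n_batch, channels, pad_h, pad_w), dtype=col.dtype)
    for idx in range(n_h * n_w):
        i, j = divmod(idx, n_w)
        # Overlapping windows sum into the output, unlike im2col which copies out.
        out[:, :, i * sh:i * sh + kh, j * sw:j * sw + kw] += blocks[..., idx]
    return out[:, :, ph:pad_h - ph, pw:pad_w - pw]

# Same data as the torch.nn.Fold example: col[0, r, l] == 5*l + r + 1
col = (5.0 * np.arange(15)[None, :] + np.arange(5)[:, None] + 1.0)[None]
im = col2im(col, output_size=(5, 5), kernel_size=(1, 5), padding=(0, 1))
print(im[0, 0])
```

Running it on the same `(1, 5, 15)` input gives the `(1, 1, 5, 5)` result shown above, e.g. a first row of `[8, 21, 24, 27, 24]`.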
That is odd: on the ONNX main branch this is correct. The 24 is there and was fixed by @liqun in onnx/onnx#4769. It is safe to ignore this failure.