[MXNET-80] Fix average pooling kernel size assignment error #10000
Conversation
Thanks for submitting this PR! This should fix the issue in https://discuss.gluon.ai/t/topic/5015/7. You can remove the
src/operator/nn/pooling-inl.h
Outdated
// check if filter size assigned correctly
if (param.global_pool == false) {
  CHECK_GT(param.kernel.ndim(), 0U)
      << "A positive number must be assigned as filter size";
Need a better error message.
How about "Must assign a positive kernel size"?
I think we can use "You need to set the kernel size if global pooling is not used".
The error code shows that the change somehow breaks the CPP package. I need to investigate the status of the CPP package.
@CoinCheung The error is due to that
pooling_convention = 'valid'

ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': {'pool_data': np.float32}})
sym_list.append(mx.sym.Pooling_v1(kernel=kernel, pad=pad, stride=stride, pool_type=pool_type,
                                  pooling_convention=pooling_convention, global_pool=True, name='pool'))
Please help fix the indentation.
ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': {'pool_data': np.float32}})
sym_list.append(mx.sym.Pooling_v1(kernel=kernel, pad=pad, stride=stride, pool_type=pool_type,
                                  pooling_convention=pooling_convention, global_pool=True, name='pool'))

ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': {'pool_data': np.float32}})
sym_list.append(mx.sym.Pooling_v1(kernel=kernel, pool_type=pool_type,
                                  pooling_convention=pooling_convention, global_pool=True, name='pool'))
Please help fix the indentation.
Do I need to remove the blank lines? I only removed the kernel parameter assignment and did not touch these blank lines.
You can do it if you have time. It should be due to the difference between Windows and Unix.
pooling_convention = 'valid'

ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': {'pool_data': np.float32}})
sym_list.append(mx.sym.Pooling_v1(kernel=kernel, pad=pad, stride=stride, pool_type=pool_type,
Do you think we should also remove the kernel phrases in the 2D tests? I wrote this test case to make sure that there is the same behavior for global pooling w/ and w/o pad and stride. It seems kernel should also be checked in the same way.
But kernel is not used if global_pool=True.
We cannot prevent users from passing a kernel to global pooling. The kernel size will be reset to the image shape if global_pool is true. Code here: https://github.com/apache/incubator-mxnet/blob/master/src/operator/nn/pooling-inl.h#L143
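For illustration, a minimal sketch of that equivalence (hypothetical shapes, not from this PR's test suite):

import mxnet as mx
import numpy as np

x = mx.nd.array(np.random.rand(1, 2, 4, 4))
# Under global_pool=True the kernel argument is ignored and reset to the
# input's spatial shape, so any kernel value should give the same result:
a = mx.nd.Pooling(x, kernel=(2, 2), pool_type='avg', global_pool=True)
b = mx.nd.Pooling(x, kernel=(4, 4), pool_type='avg', global_pool=True)
print(np.allclose(a.asnumpy(), b.asnumpy()))  # True -- both average the whole map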
I see. We should then add back one test case that uses the kernel argument.
So shall I remove kernel only in the "even number" test cases and leave the odd test cases with their kernel? Such as:
ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': {'pool_data': np.float32}})
sym_list.append(mx.sym.Pooling(kernel=kernel, pad=pad, stride=stride, pool_type=pool_type, # keep the kernel for checking
pooling_convention=pooling_convention, global_pool=True, name='pool'))
ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': {'pool_data': np.float32}})
sym_list.append(mx.sym.Pooling(pool_type=pool_type, # remove kernel along with the missing pad and stride
pooling_convention=pooling_convention, global_pool=True, name='pool'))
Sounds good.
Maybe we shouldn't remove the current test points. We can add new test points into the test case. For me, the 3 test points below should have the same result.
ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': {'pool_data': np.float32}})
sym_list.append(mx.sym.Pooling(kernel=kernel, pad=pad, stride=stride, pool_type=pool_type,
pooling_convention=pooling_convention, global_pool=True, name='pool'))
ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': {'pool_data': np.float32}})
sym_list.append(mx.sym.Pooling(kernel=kernel, pool_type=pool_type,
pooling_convention=pooling_convention, global_pool=True, name='pool'))
# below is a new test point
ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': {'pool_data': np.float32}})
sym_list.append(mx.sym.Pooling(pool_type=pool_type,
pooling_convention=pooling_convention, global_pool=True, name='pool'))
@CoinCheung, we can then append new tests to the list instead of replacing the current tests. We can change the code following the comment by @TaoLv.
Hi, the community has voted to associate code changes with JIRA (https://lists.apache.org/thread.html/ab22cf0e35f1bce2c3bf3bec2bc5b85a9583a3fe7fd56ba1bbade55f@%3Cdev.mxnet.apache.org%3E). We have updated the guidelines for contributors in https://cwiki.apache.org/confluence/display/MXNET/Development+Process. Please ensure that you have created a JIRA issue at https://issues.apache.org/jira/projects/MXNET/issues/ to describe your work in this pull request, and include the JIRA title in your PR as [MXNET-xxxx] your title, where MXNET-xxxx is the JIRA id.
.enforce_nonzero()
.describe("Pooling kernel size: (y, x) or (d, y, x)");

-DMLC_DECLARE_FIELD(pool_type)
+DMLC_DECLARE_FIELD(pool_type).set_default(pool_enum::kMaxPooling)  // add default pooling method
.add_enum("max", pool_enum::kMaxPooling)
.add_enum("avg", pool_enum::kAvgPooling)
.add_enum("sum", pool_enum::kSumPooling)
I realized that we need to change the order of the DMLC_DECLARE_FIELD here. In the original version, the parameters that do not have default values are set first, and then come the params with default values. So the order will be kernel, pool_type, global_pool, cudnn_off, ... After we add default values to kernel and pool_type, the order becomes global_pool, cudnn_off, kernel, pool_type, ... Thus, the way to solve the problem is to change the order:
DMLC_DECLARE_FIELD(kernel).set_default(TShape()) // add default value here
.enforce_nonzero()
.describe("Pooling kernel size: (y, x) or (d, y, x)");
DMLC_DECLARE_FIELD(pool_type).set_default(pool_enum::kMaxPooling) // add default pooling method
.add_enum("max", pool_enum::kMaxPooling)
.add_enum("avg", pool_enum::kAvgPooling)
.add_enum("sum", pool_enum::kSumPooling)
.describe("Pooling type to be applied.");
DMLC_DECLARE_FIELD(global_pool).set_default(false)
.describe("Ignore kernel size, do global pooling based on current input feature map. ");
DMLC_DECLARE_FIELD(cudnn_off).set_default(false)
.describe("Turn off cudnn pooling and use MXNet pooling operator. ");
But I see in the original version that the order is: global_pool(false), cudnn_off(false), kernel(no default value), pool_type(no default value), ...
In the original version, "kernel" and "pool_type" do not have default values, but they go after "global_pool" and "cudnn_off", which do have default values.
Sorry, maybe I have misunderstood what you said. I will give it a try.
I tried a few times, and it failed at this position:
https://github.com/apache/incubator-mxnet/blob/94f68fc8fd21611b7f5c148cb0e5d134efe58f87/src/operator/nn/pooling.cc#L55
But I do not understand why it requires stride and kernel to have the same length.
There's no need to check this if global_pool is turned on.
@CoinCheung I've created the JIRA issue for you. See https://issues.apache.org/jira/browse/MXNET-80
src/operator/nn/pooling.cc
Outdated
<< "stride and kernel should have the same length"; | ||
CHECK_EQ(param.pad.ndim(), param.kernel.ndim()) | ||
<< "pad and kernel should have the same length"; | ||
attrs->parsed = std::move(param); |
We still need to parse the param.
We may also need to revise the shape assignment logic: https://github.com/apache/incubator-mxnet/blob/master/src/operator/nn/pooling.cc#L103-L215
@sxjscience But I did not see a logic error in this function. From the printed error message, I see no CHECK() failure triggered, and in every scenario with global_pool, the output shape is set to [-1,1], [-1,1,1], or [-1,1,1,1].
I think it has not handled the case when kernel.ndim()=0
I tried but failed. So what should be the correct behavior when kernel.ndim() is 0? @sxjscience
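For reference, a sketch (in Python, with hypothetical shapes) of the behavior the thread converges on: when global_pool is true, every spatial output dimension collapses to 1, regardless of kernel.ndim():

ishape = (2, 3, 8, 8)            # NCHW input; batch and channel stay untouched
oshape = list(ishape)
for i in range(2, len(ishape)):  # a for-loop covers 3D/4D/5D inputs uniformly
    oshape[i] = 1
print(tuple(oshape))             # (2, 3, 1, 1)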
src/operator/nn/pooling.cc
Outdated
oshape[2] = 1;
oshape[3] = 1;
oshape[4] = 1;
}
Need to push the oshape to out_shape. See https://github.com/apache/incubator-mxnet/blob/master/src/operator/nn/pooling.cc#L163-L168.
Also, you can use a for-loop instead of if.
There is another mismatch at this line:
https://github.com/apache/incubator-mxnet/blob/c9ec3118688c233a66ad847003a9e8d2d09e5952/src/operator/nn/pool.h#L690
and I have no idea how I could fix this, as this function pool() has no global_pool input parameter.
I feel it is sort of dangerous to modify function definitions and add an input parameter. Do we have other choices?
There is no need to revise this function. All you need to do is prepare the correct initial values. You can check the logic here: https://github.com/apache/incubator-mxnet/blob/master/src/operator/nn/pooling-inl.h#L135-L149. If global_pool is used, kernel is set to TShape(ishape.data()+ishape.ndim()-param_.kernel.ndim(), ishape.data()+ishape.ndim()), pad is set to all 0, and stride is set to all 1. You can write a function to do this. For example:
TShape kernel = param_.kernel;
TShape padding = param_.pad;
TShape stride = param_.stride;
if (param_.global_pool) {
  kernel = TShape(ishape.data() + ishape.ndim() - param_.kernel.ndim(),
                  ishape.data() + ishape.ndim());
  padding = TShape(ishape.ndim() - 2);
  for (int i = 0; i < ishape.ndim() - 2; i++) {
    padding[i] = 0;
  }
  stride = TShape(ishape.ndim() - 2);
}
pool(s, in_data.dptr<DType>(), in_data.shape_, out_data.shape_,
     kernel,
     padding,
     stride,
     param_.pool_type, req, out_data.dptr<DType>());
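An end-to-end sanity check of what those prepared values should produce (a sketch with hypothetical shapes; global average pooling must equal the plain spatial mean):

import mxnet as mx
import numpy as np

x = mx.nd.array(np.random.rand(2, 3, 5, 5))
y = mx.nd.Pooling(x, kernel=(5, 5), pool_type='avg', global_pool=True)
# The (2, 3, 1, 1) pooled output should match the mean over the spatial axes:
print(np.allclose(y.asnumpy()[:, :, 0, 0], x.asnumpy().mean(axis=(2, 3))))  # True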
src/operator/nn/pooling.cc
Outdated
  oshape[3] = 1;
  oshape[4] = 1;
}
} else if (param.kernel.ndim() == 1) {
  CHECK_EQ(dshape.ndim(), 3U)
      << "Pooling: Input data should be 3D in (batch, channel, x)";
  if (param.global_pool) {
These if blocks can be removed because the global_pool case is handled before.
src/operator/nn/pooling.cc
Outdated
<< "stride and kernel should have the same length"; | ||
CHECK_EQ(param.pad.ndim(), param.kernel.ndim()) | ||
<< "pad and kernel should have the same length"; | ||
} |
Simplify the logic here. If you have used global_pool, there is no need to check the stride, kernel or pad.
if (!param.global_pool) {
  // CHECK kernel, pad, stride
}
    param.stride[0];
oshape[3] = 1 +
    (dshape[3] + 2 * param.pad[1] - param.kernel[1]) /
    param.stride[1];
} else {
The if (param.global_pool) is removed, and the else should also be removed.
Ignore this comment. I've misinterpreted the code.
src/operator/nn/pooling-inl.h
Outdated
if (param_.global_pool) {
  for (index_t i = 0; i < padding.ndim(); i++) {
    kernel = TShape(ishape.data() + ishape.ndim() - param_.kernel.ndim(), ishape.data() + ishape.ndim());
@sxjscience It still does not work even though kernel is set accordingly. By the way, I do not think the original implementation does anything wrong here, as it also considers global_pool and resets the kernel.
Also, I am not familiar with the source code, and the code is in such a mess that I failed to find the definition of TShape. Would you please show me where TShape is defined?
Changing the order of declare field is not allowed. It will break API compatibility.
@piiswrong I find that the params without default values are parsed first, and then come the params with default values.
@piiswrong We changed the order because the CPP package could not compile without changing it.
src/operator/nn/pooling-inl.h
Outdated
if (param_.global_pool) {
  for (index_t i = 0; i < padding.ndim(); i++) {
    kernel = TShape(ishape.data() + ishape.ndim() - param_.kernel.ndim(),
                    ishape.data() + ishape.ndim());
I think I've found the error. Here it should be ishape.data() + 2, ishape.data() + ishape.ndim()).
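In Python terms, a sketch of why the start offset matters (hypothetical shapes): when the user supplied no kernel, param_.kernel.ndim() is 0, so slicing from ishape.ndim() - kernel.ndim() yields an empty kernel, while starting at index 2 picks up exactly the spatial dims:

ishape = (2, 3, 8, 8)                       # (batch, channel, y, x)
kernel_ndim = 0                             # no kernel given by the user
empty = ishape[len(ishape) - kernel_ndim:]  # () -- the buggy slice
spatial = ishape[2:]                        # (8, 8) -- what global pooling needs
print(empty, spatial)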
src/operator/nn/pooling-inl.h
Outdated
if (param_.global_pool) {
  for (index_t i = 0; i < padding.ndim(); i++) {
    kernel = TShape(ishape.data() + ishape.ndim() - param_.kernel.ndim(),
                    ishape.data() + ishape.ndim());
Same here
@CoinCheung Could you rebase to the current master? You can follow the guides in https://mxnet.incubator.apache.org/community/contribute.html?highlight=rebase and https://makandracards.com/makandra/527-squash-several-git-commits-into-a-single-commit
I tried but am not sure if this is right; please correct me if that is not what you want.
@CoinCheung We should squash the commits into 1 commit. Use
src/operator/nn/pool.h
Outdated
@@ -687,7 +687,7 @@ inline void pool(mshadow::Stream<cpu>* s, const DType* in_data, const TShape& is
    LOG(FATAL) << "Unknown pooling type " << pool_type;
  }
} else {
-  LOG(FATAL) << "Unsupported " << kernel.ndim() << "-D pooling";
+  LOG(FATAL) << "Unsupported " << kernel.ndim() << "-D non-avg pooling";
Why non-avg here?
I misunderstood the logic there and have changed it back now. It might be fine now.
@CoinCheung There are some problems with the CI. You need to rebase and squash the commits and use
@sxjscience I have done nothing since the last commit, but the newly submitted code cannot be compiled.
@CoinCheung It's not your fault; this is due to problems in the CI. You need to rebase against master to trigger the build again. Here is a good guide. You can use the following commands:
You will see a file open up; change all the
modify white space and other format errors
remove wrap line whitespace format error
remove whitespace at the end of line183
change error message
add default pooling type to pool_enum::kMaxPooling
add pooling without kernel test cases
adjust pooling parameter order and add associated test points
remove wrong error test points
ignore kernel size check if global_pool is assigned to be true
modify whitespace
line length adjust
adjust linelength
finally learned to use cpplint
switch off all shape checks if global_pool is assigned
parse parameter when global_pool used
modify pooling shape inference logic
change a way to infer pooling shape
add push oshape
change kernel shape
prepare pooling parameter shapes
check lint
pooling parameters preparation
modify kernel shape computation method
modify a bit pooling_v1
more modification of pooling_v1
remove "avg pool"
tiny changes
change pooling args order back
use size_t instead of int
use changed order and only try tiny changes
try no kernel indicated to python interface with original order
useless modify for recommit
@sxjscience Changing the order back does not even pass compilation. Shall I remove the /example and /cpp-package directories to avoid these cpp errors?
Have you tested whether the original Python code can still run when we change the order?
@sxjscience I tried and found it does not work. Given a kernel without indicating with
@CoinCheung Let's first change the order to make sure the tests pass, and I'll discuss with @piiswrong whether it is acceptable.
@sxjscience I changed the order and it passed all the tests.
We need to either break the backward compatibility of the CPP package or the backward compatibility of the Python package. For the CPP package, the Pooling layer is used like this: https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/googlenet.cpp#L88-L89
We merely changed the order in the function body rather than the function declarations, and it did pass the test. Why do we still need to break some compatibility?
On the Python side, we can call functions in two ways: 1) using kwargs, i.e. mx.nd.Pooling(data, kernel=kernel, pad=pad, ...); 2) using args, i.e. mx.nd.Pooling(data, kernel, pad). Our current change will break the backward compatibility of the second case. However, I think the second case is seldom used, and it's okay to break the compatibility.
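A sketch of the two call styles (hypothetical tensor; only the positional form is sensitive to the reorder):

import mxnet as mx

x = mx.nd.ones((1, 1, 4, 4))
# 1) kwargs: robust to any change in the generated parameter order
y1 = mx.nd.Pooling(data=x, kernel=(2, 2), pool_type='max')
# 2) positional args: each slot binds to whatever the generated signature
#    puts there, so moving kernel/pool_type ahead of global_pool silently
#    changes which parameter receives which value
y2 = mx.nd.Pooling(x, (2, 2), 'max')  # only meaningful when kernel comes first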
Have we changed the order of Python API args by changing the order in the C++ source code function body? Do you mean that after the change, the Python API is also changed, like changing the Python API from:
mxnet.symbol.Pooling(data=None, global_pool=_Null, cudnn_off=_Null, kernel=_Null, pool_type=_Null, pooling_convention=_Null, stride=_Null, pad=_Null, name=None, attr=None, out=None, **kwargs)
to
mxnet.symbol.Pooling(data=None, kernel=_Null, pool_type=_Null, global_pool=_Null, cudnn_off=_Null, pooling_convention=_Null, stride=_Null, pad=_Null, name=None, attr=None, out=None, **kwargs)
Yes.
@CoinCheung I just discussed it with Eric. We agree that the current way is acceptable. Please resolve the conflict and we can merge.
@CoinCheung Can you please resolve the conflicts?
It failed somewhere I don't think is associated with this pull request.
@sxjscience @piiswrong Good to merge now?
Yes, good to merge. We need to mention the API change in the release note:
mxnet.symbol.Pooling(data=None, global_pool=_Null, cudnn_off=_Null, kernel=_Null, pool_type=_Null, pooling_convention=_Null, stride=_Null, pad=_Null, name=None, attr=None, out=None, **kwargs)
to
mxnet.symbol.Pooling(data=None, kernel=_Null, pool_type=_Null, global_pool=_Null, cudnn_off=_Null, pooling_convention=_Null, stride=_Null, pad=_Null, name=None, attr=None, out=None, **kwargs)
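A hedged migration note for that release entry: callers that pass these parameters by keyword are unaffected; only positional calls need revisiting. For example:

import mxnet as mx

x = mx.nd.ones((1, 3, 8, 8))
# Keyword arguments keep working across the signature change:
y = mx.nd.Pooling(data=x, kernel=(2, 2), stride=(2, 2), pool_type='avg',
                  pooling_convention='valid')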
…0000)
* fix average pooling kernel size assignment error
* no order change and test kernel=
* change order
Description
The pooling operator still complains about not assigning the kernel size when the "global_pool" parameter is set to "True".
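A minimal repro sketch of the reported behavior (hypothetical shapes; after this fix the call succeeds and the spatial dims collapse to 1):

import mxnet as mx

data = mx.sym.Variable('data')
# Before this PR, omitting `kernel` tripped the kernel-size check even
# though global pooling never uses it:
pool = mx.sym.Pooling(data=data, pool_type='avg', global_pool=True, name='pool')
_, out_shapes, _ = pool.infer_shape(data=(2, 3, 8, 8))
print(out_shapes)  # [(2, 3, 1, 1)]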
Checklist
Essentials
Passed code style checking (make lint)
Changes
Comments