This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[MXNET-80] Fix average pooling kernel size assignment error #10000

Merged: 4 commits merged into apache:master on Apr 9, 2018

Conversation

@CoinCheung (Contributor) commented on Mar 6, 2018

Description

When the "global_pool" parameter is set to "True", the pooling operator still complains that the kernel size has not been assigned.
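A minimal reproduction sketch of the reported behavior (the data shape and the infer_shape call are illustrative, not part of the original report):

import mxnet as mx

data = mx.sym.Variable('data')
# Before this fix, requesting global pooling without a kernel triggered the
# kernel-size check; with the fix, kernel may be omitted when global_pool=True.
pool = mx.sym.Pooling(data=data, pool_type='avg', global_pool=True, name='pool')
arg_shapes, out_shapes, aux_shapes = pool.infer_shape(data=(1, 3, 32, 32))
print(out_shapes)  # expected: [(1, 3, 1, 1)]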

Checklist

Essentials

  • Passed code style checking (make lint)
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  • To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

  • Feature1, tests, (and when applicable, API doc)
  • Feature2, tests, (and when applicable, API doc)

Comments

  • If this change is a backward incompatible change, why must this change be made.
  • Interesting edge cases to note here

@CoinCheung requested a review from cjolivier01 as a code owner on March 6, 2018 00:45
@sxjscience (Member) commented on Mar 6, 2018

Thanks for submitting this PR! This should fix the issue in https://discuss.gluon.ai/t/topic/5015/7. You can remove the kernel= phrases in the test cases (https://github.com/apache/incubator-mxnet/blob/master/tests/python/gpu/test_operator_gpu.py#L914-L935). Also, you may run make pylint to check the lint locally.

// check if filter size assigned correctly
if (param.global_pool == false) {
CHECK_GT(param.kernel.ndim(), 0U)
<< "A positive number must be assigned as filter size";
Member:

Need a better error message.

Contributor Author:

How about "Must assign a positive kernel size"?

Member:

I think we can use "You need to set the kernel size if global pooling is not used".

@sxjscience (Member):
The error output shows that the change somehow breaks the CPP package. I need to investigate the status of the CPP package.

@sxjscience (Member) commented on Mar 7, 2018

@CoinCheung The error is because pool_type has no default value. We should change this line to set the default to pool_enum::kMaxPooling: https://github.com/apache/incubator-mxnet/blob/master/src/operator/nn/pooling-inl.h#L63.

pooling_convention = 'valid'


ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': {'pool_data': np.float32}})
sym_list.append(mx.sym.Pooling_v1(kernel=kernel, pad=pad, stride=stride, pool_type=pool_type,
pooling_convention=pooling_convention, global_pool=True, name='pool'))
Member:

Please help to fix the indent.

ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': {'pool_data': np.float32}})
sym_list.append(mx.sym.Pooling_v1(kernel=kernel, pad=pad, stride=stride, pool_type=pool_type,
pooling_convention=pooling_convention, global_pool=True, name='pool'))


ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': {'pool_data': np.float32}})
sym_list.append(mx.sym.Pooling_v1(kernel=kernel, pool_type=pool_type,
pooling_convention=pooling_convention, global_pool=True, name='pool'))
Member:

Please help to fix the indent.

Contributor Author:

Do I need to remove the blank lines? I only removed the kernel parameter assignment and did not touch these blank lines.

Member:

You can do it if you have time. It is probably due to the line-ending difference between Windows and Unix.

pooling_convention = 'valid'

ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': {'pool_data': np.float32}})
sym_list.append(mx.sym.Pooling_v1(kernel=kernel, pad=pad, stride=stride, pool_type=pool_type,
Member:

Do you think we should also remove the kernel phrases in the 2D tests? I wrote this test case to make sure that global pooling behaves the same with and without pad and stride. It seems kernel should also be checked in the same way.

Member:

But kernel is not used if global_pool=True.

Member:

We cannot prevent users from passing a kernel to global pooling. The kernel size is reset to the image shape if global_pool is true. Code here: https://github.com/apache/incubator-mxnet/blob/master/src/operator/nn/pooling-inl.h#L143

Member:

I see. We should then add back one test case that uses the kernel argument.

Contributor Author:

So shall I remove kernel only in the "even number" test cases and leave the odd test cases with their kernel? Such as:

        ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': {'pool_data': np.float32}})
        sym_list.append(mx.sym.Pooling(kernel=kernel, pad=pad, stride=stride, pool_type=pool_type,  # keep the kernel for checking
                                       pooling_convention=pooling_convention, global_pool=True, name='pool'))

        ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': {'pool_data': np.float32}})
        sym_list.append(mx.sym.Pooling(pool_type=pool_type,  # remove kernel along with the missing pad and stride
                                       pooling_convention=pooling_convention, global_pool=True, name='pool'))

Member:

Sounds good.

Member:

Maybe we shouldn't remove the current test points. We can add new test points to the test case. In my view, the 3 test points below should have the same result.

ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': {'pool_data': np.float32}})
sym_list.append(mx.sym.Pooling(kernel=kernel, pad=pad, stride=stride, pool_type=pool_type,
                               pooling_convention=pooling_convention, global_pool=True, name='pool'))

ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': {'pool_data': np.float32}})
sym_list.append(mx.sym.Pooling(kernel=kernel, pool_type=pool_type,
                               pooling_convention=pooling_convention, global_pool=True, name='pool'))
# below is new test point
ctx_list.append({'ctx': mx.cpu(0), 'pool_data': data, 'type_dict': {'pool_data': np.float32}})
sym_list.append(mx.sym.Pooling(pool_type=pool_type,
                               pooling_convention=pooling_convention, global_pool=True, name='pool'))

Member:

@CoinCheung, we can then append new tests to the list instead of replacing the current tests. We could change the code following the comment by @TaoLv.

@CodingCat (Contributor):
Hi, the community has voted to associate code changes with JIRA (https://lists.apache.org/thread.html/ab22cf0e35f1bce2c3bf3bec2bc5b85a9583a3fe7fd56ba1bbade55f@%3Cdev.mxnet.apache.org%3E)

We have updated the guidelines for contributors at https://cwiki.apache.org/confluence/display/MXNET/Development+Process. Please ensure that you have created a JIRA issue at https://issues.apache.org/jira/projects/MXNET/issues/ to describe your work in this pull request, and include the JIRA title in your PR title as [MXNET-xxxx] your title, where MXNET-xxxx is the JIRA id.

.enforce_nonzero()
.describe("Pooling kernel size: (y, x) or (d, y, x)");

DMLC_DECLARE_FIELD(pool_type)
DMLC_DECLARE_FIELD(pool_type).set_default(pool_enum::kMaxPooling) // add default pooling method
.add_enum("max", pool_enum::kMaxPooling)
.add_enum("avg", pool_enum::kAvgPooling)
.add_enum("sum", pool_enum::kSumPooling)
Member:

I realized that we need to change the order of the DMLC_DECLARE_FIELD calls here. In the original version, the parameters without default values are parsed first, followed by the parameters with default values, so the order is kernel, pool_type, global_pool, cudnn_off, ... After we add default values to kernel and pool_type, the order becomes global_pool, cudnn_off, kernel, pool_type, ... Thus, the way to solve the problem is to change the order:

    DMLC_DECLARE_FIELD(kernel).set_default(TShape())  // add default value here
    .enforce_nonzero()
    .describe("Pooling kernel size: (y, x) or (d, y, x)");

    DMLC_DECLARE_FIELD(pool_type).set_default(pool_enum::kMaxPooling)  // add default pooling method
    .add_enum("max", pool_enum::kMaxPooling)
    .add_enum("avg", pool_enum::kAvgPooling)
    .add_enum("sum", pool_enum::kSumPooling)
    .describe("Pooling type to be applied.");

    DMLC_DECLARE_FIELD(global_pool).set_default(false)
    .describe("Ignore kernel size, do global pooling based on current input feature map. ");

    DMLC_DECLARE_FIELD(cudnn_off).set_default(false)
    .describe("Turn off cudnn pooling and use MXNet pooling operator. ");

Contributor Author:

But I see that in the original version the order is: global_pool (false), cudnn_off (false), kernel (no default value), pool_type (no default value), ....

In the original version, kernel and pool_type do not have default values, yet they come after global_pool and cudnn_off, which do have default values.

https://github.com/apache/incubator-mxnet/blob/94f68fc8fd21611b7f5c148cb0e5d134efe58f87/src/operator/nn/pooling-inl.h#L59

Contributor Author:

Sorry, maybe I misunderstood what you said; I will give it a try.

Contributor Author:

I tried a few times, and it failed at this position:
https://github.com/apache/incubator-mxnet/blob/94f68fc8fd21611b7f5c148cb0e5d134efe58f87/src/operator/nn/pooling.cc#L55
But I do not understand why it requires stride and kernel to have the same length.

Member:

There's no need to check this if global_pool is turned on.

@sxjscience changed the title from "fix average pooling kernel size assignment error" to "[MXNET-80]Fix average pooling kernel size assignment error" on Mar 12, 2018
@sxjscience (Member):
@CoinCheung I've created the JIRA issue for you. See https://issues.apache.org/jira/browse/MXNET-80

@sxjscience changed the title from "[MXNET-80]Fix average pooling kernel size assignment error" to "[MXNET-80] Fix average pooling kernel size assignment error" on Mar 12, 2018
<< "stride and kernel should have the same length";
CHECK_EQ(param.pad.ndim(), param.kernel.ndim())
<< "pad and kernel should have the same length";
attrs->parsed = std::move(param);
Member:

We still need to parse the param

@sxjscience (Member):
We may also need to revise the shape assignment logic: https://github.com/apache/incubator-mxnet/blob/master/src/operator/nn/pooling.cc#L103-L215

@CoinCheung (Contributor Author):
@sxjscience
But I did not see a logic error in this function. From the printed error message, I see no CHECK() failure triggered, and in every scenario with global_pool, the output shape is set to [-1,1], [-1,1,1], or [-1,1,1,1].

@sxjscience (Member) commented on Mar 13, 2018 via email

@CoinCheung (Contributor Author):
I tried but failed. So what should the correct behavior be when kernel.ndim() is 0? @sxjscience

oshape[2] = 1;
oshape[3] = 1;
oshape[4] = 1;
}

Member:

Also, you can use a for-loop instead of if.
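For illustration, the for-loop version might look roughly like this (a sketch; it assumes the dshape/oshape variables used in the snippet above):

// Sketch: when global pooling is enabled, collapse every spatial dimension to 1
// instead of writing oshape[2] = 1; oshape[3] = 1; oshape[4] = 1; per ndim case.
if (param.global_pool) {
  for (index_t i = 2; i < dshape.ndim(); ++i) {
    oshape[i] = 1;  // spatial dimensions start after batch and channel
  }
}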

Contributor Author:

There is another mismatch at this line:
https://github.com/apache/incubator-mxnet/blob/c9ec3118688c233a66ad847003a9e8d2d09e5952/src/operator/nn/pool.h#L690
and I have no idea how to fix this, as the function pool() has no global_pool input parameter.

Contributor Author:

I feel it is somewhat dangerous to modify function definitions and add an input parameter. Do we have other choices?

Member:

There is no need to revise this function. All you need to do is prepare the correct initial values. You can check the logic here: https://github.com/apache/incubator-mxnet/blob/master/src/operator/nn/pooling-inl.h#L135-L149 . If global_pool is used, kernel is set to TShape(ishape.data()+ishape.ndim()-param_.kernel.ndim(), ishape.data()+ishape.ndim()), pad is set to all 0, and stride is set to all 1. You can write a function to do this. For example:

TShape kernel = param_.kernel;
TShape padding = param_.pad;
TShape stride = param_.stride;
if (param_.global_pool) {
   // Global pooling: kernel spans the whole spatial extent,
   // padding becomes all 0 and stride becomes all 1.
   kernel = TShape(ishape.data()+ishape.ndim()-param_.kernel.ndim(), ishape.data()+ishape.ndim());
   padding = TShape(ishape.ndim() - 2);
   stride = TShape(ishape.ndim() - 2);
   for (index_t i = 0; i < ishape.ndim() - 2; i++) {
      padding[i] = 0;
      stride[i] = 1;
   }
}
pool(s, in_data.dptr<DType>(), in_data.shape_, out_data.shape_,
     kernel,
     padding,
     stride,
     param_.pool_type, req, out_data.dptr<DType>());

oshape[3] = 1;
oshape[4] = 1;
}
} else if (param.kernel.ndim() == 1) {
CHECK_EQ(dshape.ndim(), 3U)
<< "Pooling: Input data should be 3D in (batch, channel, x)";
if (param.global_pool) {
Member:

These ifs can be removed because the global_pool case is handled earlier.

<< "stride and kernel should have the same length";
CHECK_EQ(param.pad.ndim(), param.kernel.ndim())
<< "pad and kernel should have the same length";
}
Member:

Simplify the logic here. If global_pool is used, there is no need to check the stride, kernel, or pad.

if (!param.global_pool) {
  // CHECK kernel, pad, stride
}
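Spelled out, that simplification might look like the following sketch (it reuses the CHECK messages quoted elsewhere in this thread and is not the exact final diff):

if (!param.global_pool) {
  // Only validate kernel/stride/pad when global pooling is not requested.
  CHECK_GT(param.kernel.ndim(), 0U)
      << "You need to set the kernel size if global pooling is not used";
  CHECK_EQ(param.stride.ndim(), param.kernel.ndim())
      << "stride and kernel should have the same length";
  CHECK_EQ(param.pad.ndim(), param.kernel.ndim())
      << "pad and kernel should have the same length";
}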

param.stride[0];
oshape[3] = 1 +
(dshape[3] + 2 * param.pad[1] - param.kernel[1]) /
param.stride[1];
} else {
Member:

The if (param.global_pool) is removed and the else should also be removed.

Member:

Ignore this comment. I've misinterpreted the code.

if (param_.global_pool) {
for (index_t i = 0; i < padding.ndim(); i++) {
kernel = TShape(ishape.data()+ishape.ndim()-param_.kernel.ndim(), ishape.data()+ishape.ndim());
Contributor Author:

@sxjscience
It still does not work even though kernel is set accordingly. By the way, I do not think the original implementation is wrong here, as it also considers global_pool and resets the kernel.
Also, I am not familiar with the source code, and the code is in such a mess that I failed to find the definition of TShape. Would you please show me where TShape is defined?

@piiswrong (Contributor) commented on Mar 16, 2018

Changing the order of the declared fields is not allowed. It will break API compatibility.

@sxjscience (Member):
@piiswrong I find that the params without default values are parsed first, followed by the params with default values.

@sxjscience (Member):
@piiswrong We changed the order because the CPP package could not compile without changing it.

if (param_.global_pool) {
for (index_t i = 0; i < padding.ndim(); i++) {
kernel = TShape(ishape.data() + ishape.ndim()-param_.kernel.ndim(),
ishape.data() + ishape.ndim());
Member:

I think I’ve found the error. Here it should be TShape(ishape.data() + 2, ishape.data() + ishape.ndim()).
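In other words, a sketch of the corrected expression (in NCHW/NCDHW layout the spatial dimensions start at index 2, while param_.kernel.ndim() can be 0 when no kernel is given):

// Take the kernel extent from the input's spatial dimensions directly,
// rather than offsetting by param_.kernel.ndim(), which may be 0 here.
kernel = TShape(ishape.data() + 2, ishape.data() + ishape.ndim());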

if (param_.global_pool) {
for (index_t i = 0; i < padding.ndim(); i++) {
kernel = TShape(ishape.data() + ishape.ndim() - param_.kernel.ndim(),
ishape.data() + ishape.ndim());
Member:

Same here

@CoinCheung (Contributor Author):
I tried, but I am not sure if this is right; please correct me if that is not what you want.

@sxjscience (Member) commented on Mar 18, 2018

@CoinCheung We should squash the commits into 1 commit. Use git push --force after squashing.

@@ -687,7 +687,7 @@ inline void pool(mshadow::Stream<cpu>* s, const DType* in_data, const TShape& is
LOG(FATAL) << "Unknown pooling type " << pool_type;
}
} else {
LOG(FATAL) << "Unsupported " << kernel.ndim() << "-D pooling";
LOG(FATAL) << "Unsupported " << kernel.ndim() << "-D non-avg pooling";
Member:

Why non-avg here?

Contributor Author:

I had misunderstood the logic there and I have changed it back now. It might be fine now.

@sxjscience (Member) commented on Mar 23, 2018

@CoinCheung There are some problems with the CI. You need to rebase, squash the commits, and use git push --force.

@CoinCheung (Contributor Author):
@sxjscience I have done nothing since the last commit, but the newly submitted code cannot be compiled.

@sxjscience (Member) commented on Mar 23, 2018

@CoinCheung It's not your fault. This is due to problems in the CI. You need to rebase against the master to trigger the build again. Here is a good guide

You can use the following commands:

git remote add apache https://github.com/apache/incubator-mxnet.git
git fetch apache
git rebase -i apache/master

A file will open; change every pick except the first one into s (squash), then save and close the file.
After the rebase is finished, use git push --force to force-update. (Remember to add --force.)

Commit messages:

  • modify white space and other format errors
  • remove wrap line whitespace format error
  • remove whitespace at the end of line183
  • change error message
  • add default pooling type to pool_enum::kMaxPooling
  • add pooling without kernel test cases
  • adjust pooling parameter order and add associated test points
  • remove wrong error test points
  • ignore kernel size check if global_pool is assigned to be true
  • modify whitespace
  • line length adjust
  • adjust linelength
  • finally learned to use cpplint
  • switch off all shape checks if global_pool is assigned
  • parse parameter when global_pool used
  • modify pooling shape inference logic
  • change a way to infer pooling shape
  • add push oshape
  • change kernel shape
  • prepare pooling parameter shapes
  • check lint
  • pooling parameters preparation
  • modify kernel shape computation method
  • modify a bit pooling_v1
  • more modification of pooling_v1
  • remove "avg pool"
  • tiny changes
  • change pooling args order back
  • use size_t instead of int
  • use changed order and only try tiny changes
  • try no kernel indicated to python interface with original order
  • useless modify for recommit
@CoinCheung (Contributor Author):
@sxjscience Changing the order back does not even pass compilation. Shall I remove the /example and the /cpp-package directories to avoid these CPP errors?

@sxjscience (Member) commented on Mar 23, 2018 via email

@CoinCheung (Contributor Author):
@sxjscience I tried and found that it does not work. Passing a kernel without the kernel= keyword makes Python treat it as the first unassigned argument, data. It would only work if all the other arguments before kernel were given, because only then is the positional kernel value assigned to the correct argument.
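A hypothetical illustration of that binding problem, using the generated signature quoted later in this thread:

import mxnet as mx

data = mx.sym.Variable('data')
# With the original signature
#   mxnet.symbol.Pooling(data=None, global_pool=_Null, cudnn_off=_Null, kernel=_Null, ...)
# a positional call such as mx.sym.Pooling((2, 2), pool_type='max') binds (2, 2)
# to data (the first unassigned argument), not to kernel, so kernel has to be
# passed by keyword:
pool = mx.sym.Pooling(data=data, kernel=(2, 2), pool_type='max', name='pool')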

@sxjscience (Member):
@CoinCheung Let's first change the order to make sure the tests pass, and I'll discuss with @piiswrong whether it is acceptable.

@CoinCheung (Contributor Author):
@sxjscience I changed the order and it passed all the tests.

@sxjscience (Member):
We need to break either the backward compatibility of the CPP package or the backward compatibility of the Python package. For the CPP package, the Pooling layer is used like this: https://github.com/apache/incubator-mxnet/blob/master/cpp-package/example/googlenet.cpp#L88-L89

@CoinCheung (Contributor Author):
We merely changed the order in the function body rather than in the function declarations, and it did pass the tests. Why do we still need to break compatibility?

@sxjscience (Member) commented on Mar 24, 2018 via email

@CoinCheung (Contributor Author):
Have we changed the order of the Python API args by changing the order in the C++ source code function body? Do you mean that after the change, the Python API also changes,
like going from:

mxnet.symbol.Pooling(data=None, global_pool=_Null, cudnn_off=_Null, kernel=_Null, pool_type=_Null, pooling_convention=_Null, stride=_Null, pad=_Null, name=None, attr=None, out=None, **kwargs)

To

mxnet.symbol.Pooling(data=None,  kernel=_Null, pool_type=_Null, global_pool=_Null, cudnn_off=_Null, pooling_convention=_Null, stride=_Null, pad=_Null, name=None, attr=None, out=None, **kwargs)

@sxjscience (Member) commented on Mar 26, 2018 via email

@sxjscience (Member):
@CoinCheung I just discussed it with Eric. We agree that the current way is acceptable. Please resolve the conflict and we can merge.

@anirudh2290 (Member):
@CoinCheung can you please resolve the conflicts.

@CoinCheung (Contributor Author):
It failed somewhere that I don't think is associated with this pull request.

@anirudh2290 (Member):
@sxjscience @piiswrong Good to merge now?

@sxjscience (Member) commented on Apr 9, 2018

Yes, good to merge. We need to mention the API change in the release notes: the Python signature changes from

mxnet.symbol.Pooling(data=None, global_pool=_Null, cudnn_off=_Null, kernel=_Null, pool_type=_Null, pooling_convention=_Null, stride=_Null, pad=_Null, name=None, attr=None, out=None, **kwargs)

to

mxnet.symbol.Pooling(data=None,  kernel=_Null, pool_type=_Null, global_pool=_Null, cudnn_off=_Null, pooling_convention=_Null, stride=_Null, pad=_Null, name=None, attr=None, out=None, **kwargs)

@sxjscience merged commit 1e532bf into apache:master on Apr 9, 2018
rahul003 pushed a commit to rahul003/mxnet that referenced this pull request on Jun 4, 2018.
zheng-da pushed a commit to zheng-da/incubator-mxnet that referenced this pull request on Jun 28, 2018.