MO should support LRN k param with caffe model, rather than fixed to 1 #716
I found that when MO converts a Caffe model, the `k` param of the LRN layer is hardcoded to 1 instead of being read from the prototxt.
For example:
```
layer {
  name: "norm1"
  type: "LRN"
  bottom: "relu1"
  top: "norm1"
  lrn_param {
    local_size: 5
    k: 2.000000
    alpha: 0.000500
    beta: 0.750000
  }
}
```
After conversion with MO, `k` is set to 1, while the correct value should stay 2.
The issue seems specific to MO's Caffe frontend.
I fixed it by modifying the MO file lrn_ext.py, verified the fix with two models, and the conversion result is correct.
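
For reference, a minimal sketch of what the fix could look like in lrn_ext.py. The exact module paths, class names, and op attribute names (e.g. whether `k` maps to a `bias` attribute) vary between MO versions, so treat those as assumptions; the point is simply to read `k` from `lrn_param` rather than leaving it at the hardcoded default of 1:

```python
# Hypothetical sketch of the lrn_ext.py fix; import paths and the
# 'bias' attribute name are assumptions that depend on the MO version.
from mo.front.extractor import FrontExtractorOp
from mo.ops.lrn import AttributedLRN


class LRNFrontExtractor(FrontExtractorOp):
    op = 'LRN'
    enabled = True

    @staticmethod
    def extract(node):
        # Caffe LRNParameter parsed from the prototxt; its 'k' field
        # defaults to 1.0 in the Caffe proto definition.
        param = node.pb.lrn_param
        AttributedLRN.update_node_stat(node, {
            'alpha': param.alpha,
            'beta': param.beta,
            'bias': param.k,  # read k from the prototxt instead of hardcoding 1
            'local_size': param.local_size,
        })
        return LRNFrontExtractor.enabled
```

With a change along these lines, the `norm1` layer above would carry `k = 2.0` through conversion instead of the fixed default.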