
A small code detail #5

Open
mxh1125587844 opened this issue Dec 13, 2021 · 7 comments

Comments

@mxh1125587844

Hi, I'd like to ask: at the end of the channel attention implementation, why does it return torch.sigmoid(x) * residual instead of just torch.sigmoid(x)? From your diagram it looks like torch.sigmoid(x) is already the output feature map.

@mxh1125587844
Author

Also, is there a rigorous justification for using γ as the scale factor? Hoping you see this.
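
For reference, here is a minimal sketch of the block being discussed, reconstructed from this thread and the NAM paper rather than copied from the repository (class and variable names are mine): the BatchNorm scale factors γ act as per-channel importance weights, and the final line is the one the question is about.

```python
import torch
import torch.nn as nn

class ChannelAtt(nn.Module):
    """Sketch of a NAM-style channel attention block: the BatchNorm scale
    factors (gamma) serve as per-channel importance weights, and the final
    sigmoid map is multiplied back onto the input feature map."""

    def __init__(self, channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels, affine=True)

    def forward(self, x):
        residual = x                          # keep the input feature map
        x = self.bn(x)
        # Normalize |gamma| so the channel weights sum to 1.
        weight = self.bn.weight.abs() / self.bn.weight.abs().sum()
        x = x * weight.view(1, -1, 1, 1)      # reweight channels by gamma
        # The line under discussion: the attention map is applied to the
        # input here instead of being returned on its own.
        return torch.sigmoid(x) * residual
```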

@haikunzhang95

I also have the same doubt. @Christian-lyc

@yunxi1

yunxi1 commented Apr 19, 2022

Indeed, I also think the code is wrong; the residual-related code should be removed for it to match the paper's formula and diagram.

@yunxi1

yunxi1 commented Apr 25, 2022

I ran both versions: without the residual multiplication the results are very poor, while with it there is a small accuracy gain. So perhaps it is the paper's description that is off.

@HS12707

HS12707 commented Jun 5, 2022

@yunxi1 Hi, I'm new to deep learning and didn't really understand this code, and the README gives little more than the name NAM. Could you briefly explain the structure of the code and how to run it? If it's convenient, could you reach me at [email protected]? Thank you very much!

@zhangjinglin888

Sigh, it really isn't that good. I used it on sensor data and it performed worse than the CAM mechanism.

@yanwencheng000

The residual is just a packaging choice: if you don't multiply by residual inside the module, you still have to multiply by the feature map outside it. It just saves you one step. Take a look at a CBAM implementation and you'll see.
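
To make that concrete, here is a hedged sketch (hypothetical class and variable names, same BN-based attention idea as the sketch above, not code from either repository) showing that returning only the attention map just moves the multiplication to the caller:

```python
import torch
import torch.nn as nn

class ChannelAttMapOnly(nn.Module):
    """Hypothetical variant that returns only the attention map, the way
    many CBAM implementations return their channel/spatial maps."""

    def __init__(self, channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels, affine=True)

    def forward(self, x):
        x = self.bn(x)
        weight = self.bn.weight.abs() / self.bn.weight.abs().sum()
        return torch.sigmoid(x * weight.view(1, -1, 1, 1))

x = torch.randn(2, 16, 8, 8)
att = ChannelAttMapOnly(16)

# The caller now has to apply the map itself ...
out = att(x) * x
# ... which is exactly the `torch.sigmoid(x) * residual` step the module
# in this repository performs internally, saving the caller that step.
```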
