A small code detail #5

Hi, I'd like to ask: at the end of the channel attention implementation, why does it return torch.sigmoid(x) * residual instead of just torch.sigmoid(x)? From the figure, torch.sigmoid(x) looks like it is already the output feature map. Also, γ is used as the scale factor here; is there a rigorous explanation for that? Hope you see this 0.0
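For context, here is a minimal sketch of the kind of channel-attention forward pass being asked about. It assumes the BN-γ weighting described in the NAM paper; the class name and variable names are illustrative, not copied from the repo.

```python
import torch
import torch.nn as nn

class ChannelAtt(nn.Module):
    """Sketch of a NAM-style channel attention block (illustrative only).

    The BatchNorm scale factors (gamma, i.e. bn.weight) are normalized
    and used as per-channel attention weights.
    """
    def __init__(self, channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels, affine=True)

    def forward(self, x):
        residual = x                      # keep the input feature map
        x = self.bn(x)
        # gamma reflects the per-channel scale learned by BN; normalize it
        # so the channel weights sum to 1
        gamma = self.bn.weight.abs()
        weight = gamma / gamma.sum()
        # scale each channel of the normalized features by its weight
        x = x * weight.view(1, -1, 1, 1)
        # the point in question: the sigmoid gate is multiplied back onto
        # the input ("residual") instead of being returned on its own
        return torch.sigmoid(x) * residual
```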
I also have the same doubt. @Christian-lyc

Indeed, I also think the code has a problem; the residual-related code should be removed to match the paper's formula and figure.

I ran both versions. Without multiplying by residual the results are much worse, while multiplying by it gives a small improvement, so perhaps it is the paper's description that is off.

@yunxi1 Hi, I've just started with deep learning and couldn't really follow this code; the readme only gives the word "NAM". Could you briefly explain how the code is structured and how to run it? If it's convenient, could you contact me at [email protected]? Thank you very much!

The residual here is just a convenience wrapper: if you don't multiply by residual inside the module, you still have to multiply by the feature map outside it anyway, so it simply saves you one step. Take a look at the CBAM implementation and you'll see the same pattern.
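To make that equivalence concrete, here is a small sketch (the `att` gate below is a stand-in 1x1 convolution, not the NAM module) comparing the two wirings:

```python
import torch
import torch.nn as nn

# Stand-in attention logits: any module mapping [N, C, H, W] -> [N, C, H, W].
att = nn.Conv2d(8, 8, kernel_size=1)
x = torch.randn(2, 8, 16, 16)

# (a) CBAM-style wiring: the module returns only the gate, and the caller
#     multiplies it onto the feature map outside.
gate = torch.sigmoid(att(x))
out_a = gate * x

# (b) Wiring with the multiplication by the input ("residual") folded into
#     the module, so the caller uses the output directly.
def att_with_residual(inp):
    return torch.sigmoid(att(inp)) * inp

out_b = att_with_residual(x)

# Both produce gate * features; (b) just saves the caller one step.
assert torch.allclose(out_a, out_b)
```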