This repository has been archived by the owner on Nov 17, 2023. It is now read-only.
Hey, this is the MXNet Label Bot.
Thank you for submitting the issue! I will try and suggest some labels so that the appropriate MXNet community members can help resolve it.
Here are my recommended labels: Feature
For FP16 inputs, reduction operators tend to lose precision if the accumulation data type remains fp16. Instead, the accumulation dtype should be fp32.
We should do the same for the norm op.
Reference impl for softmax: #14098
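A minimal NumPy sketch (not MXNet code) of the precision loss described above: summing many small fp16 values with an fp16 accumulator stalls once the running sum grows large enough that each addend falls below half an ulp, while an fp32 accumulator stays close to the true value. The array contents and sizes here are illustrative, not taken from the issue.

```python
import numpy as np

# 10,000 small values; the true sum is roughly 99.9
x = np.full(10000, 0.01, dtype=np.float16)

# fp16 accumulator: sequential summation entirely in half precision
acc16 = np.float16(0.0)
for v in x:
    acc16 = np.float16(acc16 + v)

# fp32 accumulator: same fp16 inputs, accumulated in single precision
acc32 = np.float32(0.0)
for v in x:
    acc32 += np.float32(v)

# acc16 stalls far below the true sum (each 0.01 addend eventually
# rounds away against the large running total), while acc32 stays
# close to ~99.9
print(acc16, acc32)
```

The same failure mode applies to norm: summing squares in fp16 compounds the error before the square root is taken, which is why the accumulation dtype should be promoted to fp32 as in the softmax reference implementation.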