don't auto-recompute attention or linear #1648
Conversation
Overall looks good, just have one question. Also, we should add a test with a simple 2 layer model to verify sdpa or linear is not recomputed in backward.
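A test along those lines might look roughly like the sketch below. This is a minimal sketch, not the actual test added to the PR: it assumes `thunder.jit` and `thunder.last_backward_traces` are available, and that a recomputed op would surface in the final backward trace under a symbol name containing `scaled_dot_product` or named `linear`; the real symbol names and trace layout may differ.

```python
import torch
import torch.nn as nn
import thunder


class TwoLayer(nn.Module):
    # Simple 2-layer model that exercises both linear and sdpa.
    def __init__(self, dim=64):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)

    def forward(self, x):
        q = k = v = self.fc1(x)
        attn = torch.nn.functional.scaled_dot_product_attention(q, k, v)
        return self.fc2(attn)


def test_no_sdpa_or_linear_recompute_in_backward():
    model = TwoLayer()
    jmodel = thunder.jit(model)
    x = torch.randn(2, 8, 64, requires_grad=True)
    jmodel(x).sum().backward()

    # Inspect the final backward trace; sdpa and linear should not be
    # re-executed there if auto-recompute is disabled for them.
    bwd_trace = thunder.last_backward_traces(jmodel)[-1]
    names = [bsym.sym.name for bsym in bwd_trace.bound_symbols]
    assert not any("scaled_dot_product" in n or n == "linear" for n in names)
```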
Thank you @t-vi
LGTM, thank you @t-vi
That's a good quick fix, but can we revert to the previous default behavior: don't do any recomputation except for fused operations? The current logic doesn't use the information about whether the operation will be fused. To override the default rule we can propagate …
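Very roughly, that proposed default could be expressed as a policy like the following. This is purely illustrative; `should_recompute` and `fusion_regions` are hypothetical names for the sake of the sketch, not existing Thunder APIs.

```python
# Hypothetical sketch of the proposed default: recompute a producer in the
# backward trace only when it will land inside a fusion region (so the
# recompute is effectively free), and save its output from forward otherwise.
def should_recompute(producer_symbol, fusion_regions):
    # fusion_regions would have to be propagated from the fusion pass;
    # currently the recomputation logic does not consult this information.
    return any(producer_symbol in region for region in fusion_regions)
```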
Maybe; I had that as one of the options in the issue, but we went with this for now. To my mind there are multiple parts:
Fixes: #1646
Thank you @kshitij12345 for the detailed issue.