Release KV cache memory occupation when not needed #2588

Closed
wants to merge 20 commits

Conversation

@fzyzcjy (Collaborator) commented Dec 26, 2024

Motivation

This PR depends on #2586, so the diff panel shows much more code than this PR actually adds.

Please see #2542

Modifications

Checklist

  • Format your code according to the Contributor Guide.
  • Add unit tests as outlined in the Contributor Guide.
  • Update documentation as needed, including docstrings or example tutorials.

@merrymercy (Contributor) left a comment

The design makes sense. The only downside is that it is not compatible with CUDA graphs. Do you have any ideas for making it compatible with CUDA graphs? Can we try to hack into the PyTorch memory allocator to reuse the buffers, or keep the CUDA pointers fixed?
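
For reference, a minimal Python sketch of the conflict being raised here (not code from this PR; the KV cache shapes and the dummy workload are invented for illustration, assuming the cache is a plain pre-allocated CUDA tensor): a captured CUDA graph replays the exact device pointers recorded at capture time, so a cache buffer that is freed and later re-allocated at a different address is never seen by the replayed kernels.

```python
import torch

# Illustrative shapes only -- not the PR's real KV cache layout.
kv_cache = torch.zeros(4, 1024, 128, device="cuda")
inp = torch.zeros(4, 128, device="cuda")
out = torch.empty(4, 128, device="cuda")

# Standard warm-up on a side stream before graph capture.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    out.copy_(kv_cache.mean(dim=1) + inp)
torch.cuda.current_stream().wait_stream(s)

# The graph records the *current* device address of kv_cache.
g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    out.copy_(kv_cache.mean(dim=1) + inp)

g.replay()  # fine: kv_cache still lives at the captured address

# Releasing the cache (the goal of this PR) and re-allocating it later
# usually yields a different address; the captured graph still holds the
# old pointer, so replaying after that would be undefined behaviour.
del kv_cache
torch.cuda.empty_cache()
kv_cache = torch.zeros(4, 1024, 128, device="cuda")  # may land elsewhere
# g.replay()  # would read through the stale pointer -- do not do this
```

One hypothetical direction in line with the allocator idea above: a custom CUDA allocator (for example via torch.cuda.memory.CUDAPluggableAllocator) could hand the cache back the same virtual address when it is re-acquired, keeping the captured pointers valid; this is only a sketch of the suggestion, not something this PR implements.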

@fzyzcjy (Collaborator, Author) commented Dec 26, 2024

@merrymercy Thank you! I posted in #2542 (comment)

@fzyzcjy (Collaborator, Author) commented Dec 27, 2024

I will temporarily close this since #2542 (comment) seems more promising :)

@fzyzcjy closed this Dec 27, 2024