
Fix memory leak in src/llama.cpp #8958

Closed

Conversation

mjtalkiewicz

Free `batch` before returning from `read_kv_cache_meta`.
@mofosyne added the `Review Complexity : Low` (trivial changes that most beginner devs can tackle, e.g. a UI fix) and `bugfix` (fixes an issue or bug) labels on Aug 10, 2024.
@compilade
Collaborator

Note that this leak no longer exists in #8526, because a `llama_batch` is no longer used there (it is instead a `llama_ubatch`, with buffers allocated from the `llama_sbatch` of a `llama_context`).

See the relevant lines (after clicking Load diff for src/llama.cpp).

@mjtalkiewicz
Author

Thank you for the update.

@mjtalkiewicz deleted the mjtalkiewicz-patch-1 branch on August 22, 2024.