Memory Dump for very large RDB files (> 30 GBs) is Slow #23
@jsrawan-mobo Thanks for taking the time to investigate this! I am painfully aware of the sub-optimal performance. I have been tracking it under issue #1, but haven't really found the motivation to fix it yet. It seems you have made some fixes/enhancements. Did I miss a pull request? Can you point me to where you have made these fixes?
See Pull Request #24. It's not completely done, but you can try it and see the performance improvement from skipping past lzf_decompress() and storing the index for a deep dump later. If you like where it's headed, I can clean it up and do a proper pull request.
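For readers following along, here is a minimal sketch of that quick-mode idea, not the actual code from PR #24: instead of decompressing every LZF string, remember where it sits in the file and skip over it, decompressing only when a deep dump is requested. The names `LazyLzfString` and `read_lzf_string_quick` are hypothetical, and the `lzf` module refers to the python-lzf package.

```python
# Sketch only: quick mode skips lzf decompression and records offsets instead.
import lzf  # python-lzf; only needed when the payload is actually expanded


class LazyLzfString:
    """Placeholder that remembers where a compressed string lives in the RDB file."""

    def __init__(self, path, offset, clen, ulen):
        self.path = path      # RDB file path, so the blob can be re-read later
        self.offset = offset  # byte offset of the compressed payload
        self.clen = clen      # compressed length
        self.ulen = ulen      # uncompressed length

    def load(self):
        """Deep dump: seek back and decompress only when the value is needed."""
        with open(self.path, 'rb') as f:
            f.seek(self.offset)
            return lzf.decompress(f.read(self.clen), self.ulen)


def read_lzf_string_quick(f, path, clen, ulen):
    """Quick mode: skip past the payload instead of calling lzf_decompress()."""
    offset = f.tell()
    f.seek(clen, 1)  # jump over the compressed bytes without reading them
    return LazyLzfString(path, offset, clen, ulen)
```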
Have you been able to improve it? Would it be possible to release it? Thanks,
I hadn't looked at this in a few years; it seems this project went stale. The pull request I put up does work in quick mode if you want to give it a try.
I'd be willing to fix it up if someone finds a use for it, or fork the repo.
For very large RDB files, the memory dump can take upwards of 30 minutes. Even slower, the "key" feature requires a sequential scan over the whole file.
Finally, I am trying to further introspect a data structure like a hash, list, or set to find out which field is taking up the most memory. In my case I use Celery as a worker queue, and some tasks can be gigantic.
So I've made some enhancements, such as the following:
i) Reduce the dump time in quick mode to about 5 minutes
ii) Allow re-seeking to a key's contents in seconds, plus a limit mode (see the sketch after this list)
iii) Allow verbose dumping of hash/list/set structures to a file
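A minimal sketch of the offset-index idea behind (ii), assuming the parser can report the byte offset where each key's value starts and that keys decode to UTF-8 strings; `build_key_index`, `reseek_key`, and `parse_object_at` are hypothetical names, not the actual implementation.

```python
# Sketch only: persist a key -> byte-offset map on the first pass,
# then seek straight to one key instead of rescanning the whole RDB.
import json


def build_key_index(key_offsets, index_path):
    """First (quick) pass: store a key -> byte-offset map next to the RDB file."""
    index = {key: offset for key, offset in key_offsets}
    with open(index_path, 'w') as f:
        json.dump(index, f)
    return index


def reseek_key(rdb_path, index_path, key, parse_object_at):
    """Later pass: jump straight to one key's contents in seconds.

    `parse_object_at(f)` stands in for whatever routine decodes a single
    object starting at the current file position."""
    with open(index_path) as f:
        index = json.load(f)
    with open(rdb_path, 'rb') as f:
        f.seek(index[key])
        return parse_object_at(f)
```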