
refactor: reduce memory requirements for mesh branch scaling #132

Merged 1 commit into develop on Apr 6, 2020

Conversation

@danielolsen (Contributor)

Purpose

Reduce the memory requirements of the design_transmission module and, as a side benefit, simplify the code.

What is the code doing

Previously, we needed enough memory to hold a congestion dataframe five times over: we loaded CONGU, we loaded CONGL, we created numpy array versions of both, and we created a new dataframe of the same size to hold their element-wise maximum. We did a clean-up of CONGU and CONGL afterwards, but that doesn't help if peak memory demand has already been exceeded.

Instead, we can exploit the fact that, element-wise, at most one of CONGU and CONGL is non-zero: we can simply add the two dataframes rather than performing a type conversion, taking a numpy element-wise maximum, and storing that result in a new dataframe. I think the new approach only requires enough memory to hold a congestion dataframe two or three times, depending on the internals of pandas DataFrame addition.
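As a hedged sketch of the contrast (the names and toy data below are illustrative, not the module's actual variables):

```python
import numpy as np
import pandas as pd

# Hypothetical stand-ins for the CONGU/CONGL congestion dataframes;
# in each cell, at most one of the two values is non-zero.
congu = pd.DataFrame({"branch1": [0.0, 2.5], "branch2": [1.0, 0.0]})
congl = pd.DataFrame({"branch1": [3.0, 0.0], "branch2": [0.0, 0.0]})

# Old approach (high peak memory): densify both frames to numpy arrays,
# take the element-wise maximum, and wrap it in a third full-size dataframe.
cong_old = pd.DataFrame(
    np.maximum(congu.to_numpy(), congl.to_numpy()),
    index=congu.index,
    columns=congu.columns,
)

# New approach: since the non-zero entries never overlap, element-wise
# addition produces the same values with no dense numpy detour.
cong_new = congu + congl

assert cong_old.equals(cong_new)
```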

Perhaps most importantly, this change makes use of the sparse dataframes introduced in Breakthrough-Energy/PostREISE#96: we no longer expand them into dense numpy arrays. Even when starting with sparse dataframes, the previous code raised a MemoryError on our laptops, while the new code runs without a problem.
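For the sparse case, a minimal sketch (assuming pandas >= 1.0 sparse dtypes; the names and toy data are again illustrative) looks like this:

```python
import pandas as pd

# Hypothetical sketch: store the mostly-zero congestion data with a sparse
# dtype (fill value 0.0), so only non-zero entries occupy memory.
sparse_float = pd.SparseDtype("float", fill_value=0.0)
congu = pd.DataFrame({"branch1": [0.0, 2.5], "branch2": [1.0, 0.0]}).astype(sparse_float)
congl = pd.DataFrame({"branch1": [3.0, 0.0], "branch2": [0.0, 0.0]}).astype(sparse_float)

# Adding the frames directly skips the .to_numpy() calls that densified
# them in the old code; in recent pandas the sum should stay sparse-backed.
cong = congu + congl
print(cong.dtypes)
```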

Time Estimate

Half an hour or less.

@danielolsen force-pushed the mesh_branch_reduce_memory branch from e4a51d4 to ee944d6 on April 5, 2020 at 03:32
@rouille (Collaborator) left a comment


I agree that this should use less memory. The operation might also be faster.

@danielolsen (Contributor, Author)

Tested successfully using sparse dataframe pickles for CONGU and CONGL, as introduced in Breakthrough-Energy/PostREISE#96.

@danielolsen merged commit 106fb00 into develop on Apr 6, 2020
@danielolsen deleted the mesh_branch_reduce_memory branch on April 6, 2020 at 18:19
@ahurli mentioned this pull request on Mar 11, 2021