Use .csv instead of .dat files for non-indexed parameters
This commit updates Switch to use two-row .csv files instead of .dat files for non-indexed parameters. The first row holds a list of parameter names and the second row holds their values. The upgrade script for 2.0.5 also converts .dat files into .csv files. This means non-indexed inputs can be viewed and edited with the same tools as all other inputs, and it also means users don't need to learn the AMPL/Pyomo .dat data description language.

The following files have been converted to .csv: financials.dat, trans_params.dat, spillage_penalty.dat, spinning_reserve_params.dat, lost_load_cost.dat and hydrogen.dat.

However, Switch still uses gen_multiple_fuels.dat, which defines indexed sets (future work will convert this to .csv too).
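The two-row layout described above can be illustrated with a small sketch. The file contents and parameter names here are invented for illustration, and this is not Switch's actual loader code:

```python
import csv
import io

# A two-row "wide" csv: first row is parameter names, second row is values.
WIDE_CSV = "base_financial_year,interest_rate,discount_rate\n2016,0.07,0.05\n"

def read_wide_params(text):
    """Parse a two-row wide csv into a {parameter_name: value} dict."""
    rows = list(csv.reader(io.StringIO(text)))
    names, values = rows[0], rows[1]
    return dict(zip(names, values))

params = read_wide_params(WIDE_CSV)
print(params["interest_rate"])  # values arrive as strings until cast
```

Because the file is plain csv, it opens in the same spreadsheet or text tools as every other input table.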
Showing 11 changed files with 189 additions and 70 deletions.
New file (16 lines added):

```python
from __future__ import print_function

import os
from switch_model.upgrade.manager import upgrade_plugins
upgrade_module, upgrade_from, upgrade_to = upgrade_plugins[-1]

if __name__ == '__main__':
    print(
        "Re-running upgrade from {} to {} for all subdirectories of current directory"
        .format(upgrade_from, upgrade_to)
    )

    for dirpath, dirnames, filenames in os.walk('.'):
        if 'switch_inputs_version.txt' in filenames:
            print('upgrading {}'.format(dirpath))
            upgrade_module.upgrade_input_dir(dirpath)
```
I find extra-wide tables with a single row extremely awkward to review or edit compared to long and skinny tables with key & value columns. Text editors are basically unusable for this, and spreadsheet editors are just quite awkward, with a tendency to truncate most cells in the display.
What is the motivation for using wide style instead of narrow style for this?
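To make the tradeoff concrete, here are both layouts for the same (hypothetical) data. The narrow style is a sketch of the key/value form described above, not a format Switch currently reads:

```python
import csv
import io

# Wide style: one row of names, one row of values (what the commit uses).
wide = "interest_rate,discount_rate\n0.07,0.05\n"

# Narrow style: one name,value pair per row (easier to scan in a text editor).
narrow = "name,value\ninterest_rate,0.07\ndiscount_rate,0.05\n"

def parse_wide(text):
    rows = list(csv.reader(io.StringIO(text)))
    return dict(zip(rows[0], rows[1]))

def parse_narrow(text):
    rows = list(csv.reader(io.StringIO(text)))
    return {name: value for name, value in rows[1:]}  # skip the header row

# Both layouts carry identical information.
assert parse_wide(wide) == parse_narrow(narrow)
```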
Sounds good. I'll implement parsing support in `load_aug` for individual key,value csv files to replace .dat files if the optional argument `singleton` is set to `True`. Should be easy to fiddle with the code subsequently if more discussion is warranted.

If I/we can figure out a trivial & robust way to auto-detect `singleton` from inspecting the parameters, we could drop the explicit `singleton` argument, but I'm doubtful changing that trigger from an explicit request to an implicit inference would be an improvement. From the Zen of Python (PEP 20): "Explicit is better than implicit."
After starting implementation, I realized it would be more explicit and clear to define a new function instead of doing more overloading of our `load_aug` wrapper around Pyomo's `DataPortal.load` function. One reason this seems better to me is that it implements distinct parsing from `DataPortal.load`, rather than just customizing calls to `DataPortal.load` as our `load_aug`[mented] function was intended to do.

My current spec is:

Where `switch_data` is a data portal object. Previously `.dat` files didn't use any of the other keyword arguments for `load_aug`, since `DataPortal.load` had such a simple API for `.dat` file parsing.
My reason for doing this within `load_aug` is that it allows us to use the same behaviors we already have without duplicating code (i.e., the optional/auto_select/params arguments, autodetection of optional params, and soon --input-aliases). And `load_aug` already has a code path (that you wrote) for spotting non-indexed parameters and loading a non-indexed (wide) csv. So it would just require a minor change to that code path (read the file and generate data values instead of reading it directly).

One point, though, is that we can't just use `.load()` to read the vertical csv, because the "name" and "value" columns may conflict with other param names (or just other singleton parameter files). So we need to be careful to use exactly the same parsing rules as other csv files (e.g. support similar quoting and treatment of null values) while loading it some other way. But that's not insurmountable, and there's not really an argument for doing that in a separate function instead of in the singleton path in `load_aug`.

So if we are going to keep the behaviors we already have in `load_aug`, it seems to me to make sense to just use `load_aug` for these singleton files.
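The quoting and null-handling concern can be sketched like this. The "." null marker is an assumption for illustration; whatever the real implementation does, it should match the convention used by Switch's other csv readers:

```python
import csv
import io

NULL_TOKEN = "."  # assumed null marker; reuse the token the other csv readers accept

def parse_singleton_csv(text):
    """Parse a vertical name,value csv with the same quoting rules
    (stdlib csv defaults) and null handling as other input files."""
    rows = csv.reader(io.StringIO(text))
    header = next(rows)
    assert header == ["name", "value"], "singleton files must have name,value columns"
    out = {}
    for name, value in rows:
        out[name] = None if value == NULL_TOKEN else value
    return out

# Quoted values (e.g. a comma inside a value) survive, and "." maps to None:
data = parse_singleton_csv('name,value\nfuel_list,"coal,gas"\nspill_cost,.\n')
```

Parsing the file directly like this avoids handing the literal column headers "name" and "value" to `DataPortal.load`, which is where the name-collision risk comes from.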
On the other hand, I don't like a lot of aspects of our current arguments to `load_aug`. I think `optional_params` should be similar to `param`, listing components rather than names (e.g., hydro_system currently has `optional_params=['mod.wnode_constant_inflow', 'mod.wnode_constant_consumption']` due to this kind of confusion). Further, `param` should probably be called `params`, or maybe `mandatory_params`, and any particular param should probably only appear in `params` or `optional_params`, not both (again, to avoid duplication and maintenance problems). And `auto_select` should be the default/only behavior for all calls. And really, all of this should actually be handled by tagging the param or set with its filename at creation time (along with tagging it with documentation info), and any column should be optional if and only if a default is defined. Then we could automate the `load_aug` calls and the user wouldn't have to learn or memorize any arguments. This would minimize duplication and enhance readability and maintainability. It would also support use of automatic tools to generate documentation, including cross-references between component names, the module where they're defined, and the table where the values are read from.

So, with that in mind, it may not matter much how we handle reading the csv files for non-indexed parameters for now. But since we don't know how soon that transition will come, we might as well keep it logical, which probably means reusing `load_aug` (or at least most of its behavior).