-
I would output those different scenarios into a single CSV file; it could look like the sketch below. They could then be imported into a Spine DB using the Importer. You would need one mapping to import the scenarios/alternatives and another mapping to import the data for each alternative. Then you would have those 100 scenarios (with the differing data included) available, and Toolbox could parallelize the runs (Settings --> Engine lets you limit the parallelization).
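For example, the single long-format CSV could be written like this minimal sketch (the file name, the column names, and the one-alternative-per-scenario layout are assumptions; one Importer mapping would pick up the scenario/alternative columns and another the factor values):

import csv
import numpy as np

rng = np.random.default_rng(seed=1)

with open("scenarios.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["scenario", "alternative", "index", "price_factor"])
    for i in range(1, 101):
        # One random factor vector in [0, 1) per scenario.
        factors = rng.random(3)
        for j, factor in enumerate(factors, start=1):
            writer.writerow([f"s{i:03d}", f"alt{i:03d}", j, factor])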
-
Sorry, I pasted a wrong screenshot. Here is the right one to show where I am.
-
A question from @zhaokov (moved from #1702)
In this sample project, I need to add a scenario factor vector to adjust the price. Below is the code for the price adjustment:
#-------------------------
import pandas as pd
import numpy as np

price_df_ini = pd.read_excel(flnm_in, sheet_name='price')
k = np.array([0.2, 0.4, 0.3])   #--- 1, 0.6, 0.5
price_df = price_df_ini.copy()  # copy so the original data stays unchanged
price_df['Price'] = price_df['Price'] * k
#-------------------------
I will create 100 different vectors with values between 0 and 1 to adjust the price, run the calculation for each, and take the average total cost over the 100 scenarios. I could use a "for" loop to handle the 100 different factor vectors (a sketch is below). However, I am very interested in using the functionality Spine Toolbox provides. I see you mention scenario analysis in the documentation, but it seems a little more complex than what I need.
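For concreteness, here is a minimal sketch of that loop, where total_cost(price_df) is a hypothetical placeholder for the rest of my calculation:

#-------------------------
import numpy as np
import pandas as pd

price_df_ini = pd.read_excel(flnm_in, sheet_name='price')

rng = np.random.default_rng()
costs = []
for _ in range(100):
    k = rng.random(len(price_df_ini))   # one factor vector in [0, 1)
    price_df = price_df_ini.copy()
    price_df['Price'] = price_df['Price'] * k
    costs.append(total_cost(price_df))  # placeholder for the rest of the model

average_cost = np.mean(costs)
#-------------------------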
Here is my quick thought. I can generate 100 files (say, CSV files) that each contain one of the 100 factor vectors and name them 1.csv, 2.csv, ..., 100.csv (a rough sketch of that is also below). This way, maybe I can refer to the file name ID (from 1 to 100) in a scenario table. I would like to know how Spine Toolbox sets up a scenario analysis procedure to run the 100 calculations in parallel. Is there a simpler way to do this? I appreciate any help and suggestions. Thank you again!
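Generating those 100 files could look like this (the single 'factor' column is just an assumption about the layout):

#-------------------------
import numpy as np
import pandas as pd

rng = np.random.default_rng()
for i in range(1, 101):
    factors = rng.random(3)  # one factor vector in [0, 1)
    pd.DataFrame({'factor': factors}).to_csv(f'{i}.csv', index=False)
#-------------------------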