forked from ufs-community/regional_workflow
Merging head of NOAA-EMC develop #3
Merged
…257)

## DESCRIPTION OF CHANGES:
As it currently exists, the generate_FV3SAR_wflow.sh script can fail in non-obvious ways if the correct python environment is not available. This PR adds checks at the beginning of the script for the python version and python modules required to set up the regional workflow, so the script fails immediately with a descriptive error message if the necessary python environment is not available. The error messaging is still not ideal, since the structure of this script does not allow a simple exit upon error and some spurious error messages are still printed, but this is an improvement over the previous behavior in the case of a bad python environment.

## TESTS CONDUCTED:
Ran checks on Cheyenne, Jet, and Hera for the script's expected behavior. When the correct python environment was loaded, behavior did not change, as expected. When python versions or modules were insufficient, the script now fails immediately and prints a descriptive error message about the unmet requirement. Also ran the end-to-end workflow tests on Hera; there was no change in behavior.
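The pre-flight check described above can be sketched roughly as follows. This is a minimal illustration, not the actual script text: the minimum version and module list that generate_FV3SAR_wflow.sh enforces may differ, and stdlib modules are used here so the sketch runs anywhere.

```shell
#!/bin/sh
# Sketch of a python environment pre-flight check. The real script checks
# third-party modules; stdlib modules are substituted here for portability.
required_version="3.6"
required_modules="sys json"

# Check that python3 exists and meets the minimum version.
if ! command -v python3 >/dev/null 2>&1; then
  echo "ERROR: python3 not found in PATH." >&2
  exit 1
fi
actual_version=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
lowest=$(printf '%s\n%s\n' "$required_version" "$actual_version" | sort -V | head -n1)
if [ "$lowest" != "$required_version" ]; then
  echo "ERROR: python >= ${required_version} is required; found ${actual_version}." >&2
  exit 1
fi

# Check that each required python module is importable.
for mod in $required_modules; do
  if ! python3 -c "import ${mod}" >/dev/null 2>&1; then
    echo "ERROR: required python module '${mod}' is not available." >&2
    exit 1
  fi
done
echo "python environment OK"
```

Failing fast with one named unmet requirement is what replaces the previous cascade of non-obvious downstream errors.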
## DESCRIPTION OF CHANGES:
Add a subdirectory to ush/ containing wrapper scripts:
* Each wrapper script runs one workflow task.
* The experiment-generation step MUST be completed before running these scripts.
* An example batch submit script is provided for PBSpro (Cheyenne) and Slurm (Hera).
* A README file is provided with basic instructions (including which tasks to run in which order).
* Users on other systems will need to customize a batch script for their system.

## TESTS CONDUCTED:
A single experiment was run using the standard rocoto job control on Hera and Cheyenne. The same experiment was run using the wrapper scripts on Hera and Cheyenne.

## ISSUE (optional):
Fixes issue #133
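A per-task wrapper of the kind described above might look like the sketch below. This is hypothetical: the actual wrapper names, the J-job path, and the variable-definitions file name in the repository may differ.

```shell
#!/bin/bash
# Hypothetical per-task wrapper sketch: source the generated experiment's
# variable definitions, then run exactly one workflow task's J-job.
EXPTDIR="${EXPTDIR:-$HOME/expt_dirs/test_community}"   # placeholder path

# Source the experiment's variable definitions if present.
if [ -f "${EXPTDIR}/var_defns.sh" ]; then
  . "${EXPTDIR}/var_defns.sh"
fi

# Run one task; the J-job path below is illustrative only.
jjob="${HOMErrfs:-/path/to/regional_workflow}/jobs/JREGIONAL_MAKE_GRID"
if [ -x "${jjob}" ]; then
  "${jjob}"
else
  echo "NOTE: J-job ${jjob} not found; nothing to run in this sketch."
fi
```

On a real system this body would sit under the scheduler's batch header (PBSpro or Slurm directives), which is why per-platform example submit scripts are provided.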
* Remove all references to /lfs3 on Jet
* Add Ben and Ratko to the CODEOWNERS file
* Replace hard-coded make_orog module file with build-level module file in UFS_UTILS
* Remove hard-coded make_sfc_climo module file
* Rename all FV3-SAR and SAR-FV3 to FV3-LAM; rename all JPgrid to ESGgrid. Remove fix files in regional_workflow and source from fix_am and EMC_post.
* Add alpha/kappa parameter back in exregional_make_grid.sh
* Remove dash from FV3LAM_wflow.xml
* Change FIXam to FIXgsm to source Thompson CCN file
* Remove old, unused grid stanza from exregional_run_post.sh
* Change Jet locations of fix_am/fix_orog to EMC paths
…265)

* Bug fix for background map and removal of yaml
  Change from background_img() to imshow() to remove the need for a .json file to show a background map image. Remove the color palette conversion to allow the background image and contours to display. Remove yaml mentions since it is not in use. Comment out QPF plotting since it is not available in the files I am testing with; others can comment it back in if this is a local error. Added information in the header to point users to the guide for downloading the files needed for Cartopy.
* Delete env.yaml
  Remove env.yaml since it is not needed in the python plotting script.
* Change data source name and wind barb plotting
  - Name of data input file changed from "rrfs" to "RRFS".
  - Wind barb spacing now depends on model grid spacing. Wind barbs are now placed every 90-100 km, which should be fine for CONUS plotting.
* Change data source file name and wind barb plotting
  - File name for data source changed from 'rrfs' to 'RRFS'.
  - Wind barbs now plot every 90-100 km. This depends on grid resolution (see variable 'skip') and should be good for CONUS.

Co-authored-by: David Wright <[email protected]>
* Remove all references to /lfs3 on Jet
* Add Ben and Ratko to the CODEOWNERS file
* Replace hard-coded make_orog module file with build-level module file in UFS_UTILS
* Remove hard-coded make_sfc_climo module file
* Make sure DO_ENSEMBLE is set correctly to either "TRUE" or "FALSE"
* Remove !!python/none entries for all "nam_sfcperts" and "nam_stochy" namelist sections of FV3.input.yml
* Remove echo statements used for testing.
* Add # to line
## Description of changes:
* Update grid parameters of the GSD_HRRR3km grid of type JPgrid (including its task layout and blocksize) and enable running with it in NCO mode. The new grid parameters are set to values specified by Christina Holt of GSL.
* Update the write-component grid parameters associated with this grid such that the write-component grid lies within the compute grid.
* Add two WE2E tests in NCO mode to run on this grid -- one using FV3GFS for ICs and LBCs (nco_GSD_HRRR3km_FV3GFS_FV3GFS) and another using HRRRX for ICs and RAPX for LBCs (nco_GSD_HRRR3km_HRRRX_RAPX). The first test works but the second doesn't due to a yet-unknown problem in chgres_cube; the test may work with an older executable that GSL will use.
* Bug fix: Add the FV3GFS_2017_gfdlmp_regional physics suite to the if-statements in exregional_make_ics.sh and exregional_make_lbcs.sh that set numsoil_out.
* Improvement: Add "else" clauses to the if-statements in exregional_make_ics.sh and exregional_make_lbcs.sh that check the physics suite to set various parameters. These "else" clauses print an error message whenever the specified physics suite is not covered by the if-statement.

## Tests conducted:
### On hera:
Ran all the WE2E tests, including the two new ones (nco_GSD_HRRR3km_FV3GFS_FV3GFS and nco_GSD_HRRR3km_HRRRX_RAPX). All tests except regional_010 pass (the regional_010 test was already broken in the original develop branch).
### On jet:
Ran the two new WE2E tests nco_GSD_HRRR3km_FV3GFS_FV3GFS and nco_GSD_HRRR3km_HRRRX_RAPX as well as regional_002 and regional_003. All passed, although some tasks (make_lbcs and run_fcst) for the two new tests took more than one try to succeed. Also tested versions of the two new tests in which the external files are staged; again, both succeeded but some tasks took more than one try to succeed.
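The numsoil_out fix and the added "else" clause can be sketched as below. This is illustrative, not the actual script text: the suite lists are partly drawn from this log, and the level counts (4 for Noah-based suites, 9 for RUC) are the usual LSM values but should be checked against exregional_make_ics.sh itself.

```shell
#!/bin/sh
# Sketch of a physics-suite if-statement with an "else" clause that fails
# loudly when the suite is not covered (instead of silently misconfiguring).
CCPP_PHYS_SUITE="FV3GFS_2017_gfdlmp_regional"

if [ "${CCPP_PHYS_SUITE}" = "FV3GFS_2017_gfdlmp" ] || \
   [ "${CCPP_PHYS_SUITE}" = "FV3GFS_2017_gfdlmp_regional" ] || \
   [ "${CCPP_PHYS_SUITE}" = "FV3_CPT_v0" ]; then
  numsoil_out=4    # Noah LSM soil levels (assumed value)
elif [ "${CCPP_PHYS_SUITE}" = "FV3_RRFS_v1beta" ]; then
  numsoil_out=9    # RUC LSM soil levels (assumed value)
else
  echo "ERROR: physics suite '${CCPP_PHYS_SUITE}' is not covered here." >&2
  exit 1
fi
echo "numsoil_out = ${numsoil_out}"
```

The point of the "else" clause is exactly the failure mode described in the bug fix: before it, an unlisted suite would fall through with parameters left unset.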
## DESCRIPTION OF CHANGES:
Changes to three WE2E tests needed to run successfully on hera that were left out of PR #273.

## TESTS CONDUCTED:
Same as PR #273.
* RRFS_v1beta test finished
* remove config.sh
* revert change on DT_ATMOS
* add if condition for RRFS_v1beta
* bug fix for atmos_nthreads
* FIXsar to FIXLAM
…ts (#282)

* Changes to post file names to make model name lowercase and move ${fhr} to three digits instead of two
* Change "rrfs" to "${NET}" instead
* Add missing "f" to file name
* Remove restriction on ${fhr} being only two digits
* Change fhour to a three-digit variable and add "f" to the file name for the Python plotting script.
* Add new "post_fhr" variable to support dynamic 2- and 3-digit forecast hour formats in post filenames
* Use "print_err_msg_exit" if the forecast hour length is too long or short.
* Fix indentations
* Changes to post file names to make model name lowercase and move ${fhr} to three digits instead of two
* Change "rrfs" to "${NET}" instead
* Add missing "f" to file name
* Remove restriction on ${fhr} being only two digits
* Change fhour to a three-digit variable and add "f" to the file name for the Python plotting script.
* Add new "post_fhr" variable to support dynamic 2- and 3-digit forecast hour formats in post filenames
* Use "print_err_msg_exit" if the forecast hour length is too long or short.
* Fix indentations
* Remove 0 from dynf0#fhr# and phyf0#fhr# dependencies for post task in XML template
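The "post_fhr" idea above can be sketched as follows: pad the forecast hour to 2 or 3 digits for the post file name, and fail loudly otherwise. The file-name pattern shown is illustrative, not the exact one changed in this PR.

```shell
#!/bin/sh
# Sketch of dynamic 2- vs 3-digit forecast-hour padding for post file names.
fhr="9"
fhr_len=3   # number of digits wanted in the file name (2 or 3)

case "${fhr_len}" in
  2) post_fhr=$(printf "%02d" "${fhr}") ;;
  3) post_fhr=$(printf "%03d" "${fhr}") ;;
  *) echo "ERROR: unsupported forecast hour length: ${fhr_len}" >&2
     exit 1 ;;
esac

NET="rrfs"   # model name, lowercase, per the log
echo "${NET}.t00z.f${post_fhr}.grib2"   # illustrative file-name pattern
```

With fhr_len=3 the hour 9 becomes "009", which is the three-digit form the post-task dependencies were updated to expect.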
* RRFS_v1beta test finished
* remove config.sh
* revert change on DT_ATMOS
* add if condition for RRFS_v1beta
* bug fix for atmos_nthreads
* FIXsar to FIXLAM
* WE2E test configuration
* update the temporary solution for copying the required files in when using RRFS_v1beta
* adjust indents
* adding files for getting nomads data
  new files:
  ush/NOMADS_get_extrn_mdl_files_grib.sh
  ush/NOMADS_get_extrn_mdl_files_nemsio.sh
* updated code for getting data online in one file
  new file: ush/NOMADS_get_extrn_mdl_files.sh
  deleted files:
  ush/NOMADS_get_extrn_mdl_files_grib.sh
  ush/NOMADS_get_extrn_mdl_files_nemsio.sh
* Enable ensemble forecasts (#245)

Modify workflow to enable ensemble forecasts.

Summary of modifications:
------------------------
* Introduce the new workflow variables DO_ENSEMBLE and NUM_ENS_MEMBERS. The user can enable ensemble forecasts by setting DO_ENSEMBLE to "TRUE" and NUM_ENS_MEMBERS to the number of ensemble members to use.
* When running ensemble forecasts, create a set of ensemble member directories and create the cycle directories under these member directories. The ensemble member directories are placed at the directory level where the cycle directories would be placed when not running ensemble forecasts.
* Regardless of whether ensembles are enabled, change the location where external model files are staged so that they are not in the cycle directories but one (without ensembles) or two (with ensembles) directory levels up. With ensembles, this is needed so that the external model files are not duplicated within each ensemble member directory; they do not need to be, because all ensemble members use the same external model files. It is also done without ensembles to minimize the difference in workflow behavior between the two cases. To make this change of location, the new workflow variable EXTRN_MDL_FILES_BASEDIR is introduced (it is not a user-specified variable but a secondary one).
* Add two new WE2E tests for running ensemble forecasts, one in community mode (community_ensemble) and another in NCO mode (nco_ensemble).
Modifications common to more than one file (used below in the file-by-file listing):
------------------------------------------------------------------------------------------------
(A) Fix/add/delete comments and/or informational and/or error messages.
(B) Remove commented-out code.
(C) Change the location where external model files are staged so that they are not in the cycle directories (which are now underneath each ensemble member directory) but one level up. This needs to be done so that the external model files are not duplicated within each ensemble member directory; they do not need to be, because all ensemble members use the same external model files.
(D) Add a call to the new function set_FV3nml_stoch_params() that takes a base FV3 namelist file and generates from it a new FV3 namelist file for each ensemble member containing a unique set of stochastic parameters (relative to the other ensemble members), placing it at the top level of that member's directory (so that all cycles in that member directory can create symlinks to it).
(E) Rename the variable FV3_NML_BASE_FN to FV3_NML_BASE_SUITE_FN to clarify that it specifies the name of the FV3 namelist file for the base physics suite (which is used to generate the namelist file specific to the user-specified physics suite). This better distinguishes it from the base namelist file used to generate namelist files for the various ensemble members. (The name of the latter is specified in the new workflow variable FV3_NML_BASE_ENS_FN.)
(F) Introduce the new workflow variable FV3_NML_BASE_ENS_FN that specifies the name to use for the base FV3 namelist file from which to generate the namelist file for each ensemble member. This variable is not used if not running ensemble forecasts (i.e. if DO_ENSEMBLE is not set to "TRUE").
(G) Add the local variable dummy_cyc_dir that specifies the (dummy) directory with respect to which to set the relative paths of the fixed files (i.e. those in the FIXam directory) in the FV3 namelist file. When running ensembles, this path is two levels up from the cycle directory; without ensembles, it is only one level up (as was originally the case).

File-by-file description of modifications:
-----------------------------------------
jobs/JREGIONAL_GET_EXTRN_MDL_FILES: (C)
jobs/JREGIONAL_RUN_FCST:
* Change CYCLE_DIR to cycle_dir since it is a local variable in this context (it is an argument to the script exregional_run_fcst.sh).
jobs/JREGIONAL_RUN_POST:
* For NCO mode, change the directory in which output from the RUN_POST_TN task is stored such that, if running ensemble forecasts, subdirectories are created under COMOUT_BASEDIR for each ensemble member. This is done via the variable SLASH_ENSMEM_DIR, which is set either to "/mem$NN", where $NN is the member number (if running ensemble forecasts), or to a null string (if not). For community mode, the output from the post task is under CYCLE_DIR, which now gets set in the rocoto XML such that it is under an ensemble member directory (see the description of modifications to ush/templates/FV3SAR_wflow.xml below).
modulefiles/tasks/hera/make_ics.local:
* Add wgrib2 (must have been removed by mistake?).
modulefiles/tasks/hera/make_lbcs.local:
* Add wgrib2 (must have been removed by mistake?).
scripts/exregional_make_grid.sh: (A), (D)
scripts/exregional_make_ics.sh: (C)
scripts/exregional_make_lbcs.sh: (C)
scripts/exregional_run_fcst.sh:
* Change CYCLE_DIR to cycle_dir since it is a local variable.
* Add a check such that, if running ensemble forecasts, the symlink for the FV3 namelist file that must be present in the cycle directory points to the namelist file at the top level of the ensemble directory under which that cycle directory is located.
tests/baseline_configs/config.community_ensemble.sh:
* New workflow configuration file to perform a WE2E test of ensemble forecasts in community mode.
tests/baseline_configs/config.nco_ensemble.sh:
* New workflow configuration file to perform a WE2E test of ensemble forecasts in NCO mode.
tests/baselines_list.txt:
* Add the two new WE2E tests for running ensemble forecasts, one in community mode (community_ensemble) and another in NCO mode (nco_ensemble).
ush/config_defaults.sh: (A), (E), (F)
* Introduce the new workflow variable DO_ENSEMBLE that specifies whether or not to run ensemble forecasts. Enable ensemble forecasts by setting DO_ENSEMBLE to "TRUE".
* Introduce the new workflow variable NUM_ENS_MEMBERS that specifies the number of ensemble members. This variable is not used if DO_ENSEMBLE is not set to "TRUE".
ush/generate_FV3SAR_wflow.sh: (A), (B), (D), (E), (G)
* Add new ensemble-related parameters to the "settings" variable used to customize the jinja2 template for the rocoto XML file. These new parameters allow the resulting XML to loop over ensemble members, rename rocoto tasks and log files so that they contain the member number (and are thus unique), and modify cycle directories so that they are member-specific.
ush/set_FV3nml_sfc_climo_filenames.sh: (G)
ush/set_FV3nml_stoch_params.sh:
* New file defining a function that takes a base FV3 namelist file and generates from it a new FV3 namelist file for each ensemble member containing a unique set of stochastic parameters (relative to the other ensemble members), placing it at the top level of that member's directory (so that all cycles in that member directory can create symlinks to it).
ush/setup.sh: (E), (F)
* Rename FCST_LEN_HRS_MAX to fcst_len_hrs_max since it is a local variable.
* Introduce the new workflow variable EXTRN_MDL_FILES_BASEDIR that specifies the base directory under which the external model files will be staged.
  Under this directory, a subdirectory will be created for each external model (one for ICs, another for LBCs if different from the one for ICs), and under these, subdirectories will be created for each cycle in which to stage the files. Note that EXTRN_MDL_FILES_BASEDIR is a secondary variable in the sense that it is not user-specifiable.
* Rename FV3_NML_BASE_FP to FV3_NML_BASE_SUITE_FP for the same reason as the renaming of FV3_NML_BASE_FN to FV3_NML_BASE_SUITE_FN (see (E) above).
* Create the new workflow variable FV3_NML_BASE_ENS_FP that specifies the full path to the base FV3 namelist file from which the namelist files for the individual ensemble members are generated.
* Introduce the new workflow array variable ENS_MEMBER_DIRS. If running ensemble forecasts, set its elements to the ensemble member directories immediately under the experiment directory.
* If running ensemble forecasts, create the ensemble directories specified in the new workflow array variable ENS_MEMBER_DIRS.
* Record the new variables in the workflow variable definitions file.
ush/templates/FV3SAR_wflow.xml:
* Bug fix: Change the file name "make_grid_task_complete.txt" to "&MAKE_GRID_TN;_task_complete.txt" so that it changes with the task name.
* Remove CYCLE_BASEDIR as an environment variable from the GET_EXTRN_ICS_TN and GET_EXTRN_LBCS_TN tasks. This variable is no longer needed because the external model files are now staged outside of the cycle directories (under EXTRN_MDL_FILES_BASEDIR).
* Place a jinja2-controlled metatask around all tasks starting with MAKE_ICS_TN that loops over all ensemble members if do_ensemble is set to TRUE.
* For tasks within the metatask that loops over the ensemble members, add a string (uscore_ensmem_name) that identifies the ensemble member to task names, corresponding job names, and log file names. Note that this variable is set to an empty string if not running ensemble forecasts.
* For tasks within the metatask that loops over the ensemble members, add a string (slash_ensmem_dir) that inserts the ensemble member directory into the definition of CYCLE_DIR (since, when running ensembles, the cycle directories are under the member directories). Note that this variable is set to an empty string if not running ensemble forecasts.
* Set the ensemble index (ENSMEM_INDX) as an environment variable in the RUN_FCST_TN task. This is needed in the ex-script exregional_run_fcst.sh to be able to set the symlink in the cycle directory to the FV3 namelist file in the correct ensemble member directory. Note that this variable is set to an empty string (and is not used) if not running ensemble forecasts.
* Set the ensemble member subdirectory preceded by a slash (SLASH_ENSMEM_DIR) as an environment variable in the RUN_POST_TN task. This is needed in NCO mode when setting the directory in which to place the output of UPP. Note that this variable is set to an empty string if not running ensemble forecasts.
ush/valid_param_vals.sh:
* Specify valid values for the new workflow variable DO_ENSEMBLE.
* Minor changes to code comments.
* Apparently there is now a requirement in the FV3 code that consv_te be set to 0 on any regional grid. Make this change for the FV3_CPT_v0 suite (the only one for which consv_te had been set to a nonzero value).
* Bug fix in a diag_table that should already be in the develop branch.
* Bug fix: Fix an inconsistency in the way the ensemble member directories are named in different scripts. The workflow generation scripts create ensemble directories named, e.g., mem1, mem2, ..., mem8, but the exregional_run_fcst.sh script assumes they are mem01, mem02, ..., mem08. Make these consistent. The naming convention used now depends on whether or not leading zeros are included in NUM_ENS_MEMBERS.
  For example, if NUM_ENS_MEMBERS is set to "8", the member directory names will be mem1, mem2, ..., mem8; if it is set to "08", they will be mem01, mem02, ..., mem08.
* Add a new WE2E test to test the use of leading zeros in ensemble member names.
* Change the directory structure so that ensemble member directories are beneath the cycle directories (instead of the opposite). Details below.

Summary of modifications:
------------------------
* Place cycle directories above ensemble member directories, i.e. each cycle directory will contain a full set of ensemble member subdirectories that are used as the run directories. Previously it was the other way around: each member directory contained all cycle subdirectories.
* Move the external model directories into each cycle directory (instead of their own directory called extrn_mdl_files under the main experiment directory).
* During the experiment generation step, generate the full list of cycle dates/times to run and create a directory for each cycle.
* Add the capability to have more than one cycle. This capability was previously present but was inadvertently disabled during the transition to generating the rocoto XML using a jinja2 template.
* To test the capability of the workflow to run multiple cycles (possibly on different days), modify the WE2E tests community_ensemble_2mems and nco_ensemble so that there are two cycle hours per day. Also modify community_ensemble_2mems so that the starting and ending days differ (the ending day is one day later).

Modifications common to more than one file (used below in the file-by-file listing):
------------------------------------------------------------------------------------------------
(A) Change the location where external model files are stored to be under each cycle directory (instead of a separate directory specified by EXTRN_MDL_FILES_BASEDIR under the main experiment directory).
(B) Insert the new environment variable SLASH_ENSMEM_SUBDIR anywhere CYCLE_DIR appears. This variable is passed in by the rocoto XML. If not running ensemble forecasts, it is simply set to an empty string; if running ensembles, it is set to the string "/${ensmem_subdir}", where ensmem_subdir is the subdirectory of the current ensemble member under the current cycle directory. This allows the subdirectories containing ICS, LBCS, and RESTART files to be placed directly under the current cycle directory when NOT running ensembles, and under the current ensemble member directory (one level down from the current cycle directory) when running ensembles.
(C) For clarity, add the new local variable run_dir that gets set to the run directory based on the current cycle and, if applicable, the ensemble member.
(D) Call the new function create_diag_table_files (in the new file create_diag_table_files.sh) to create diagnostics table files.
(D) For correctness, rename the local variable dummy_cyc_dir to dummy_run_dir.
(E) Remove any use of EXTRN_MDL_FILES_BASEDIR since it is no longer needed as a workflow variable.
(F) Remove any use of ENS_MEMBER_DIRS since it is no longer needed as a workflow variable.
(V) Remove unused code.
(W) Edit informational and/or error messages.
(X) Remove trailing whitespace.
(Y) Remove commented-out code.
(Z) Edit comments.

File-by-file description of modifications:
-----------------------------------------
jobs/JREGIONAL_GET_EXTRN_MDL_FILES: (A)
jobs/JREGIONAL_MAKE_ICS: (B)
jobs/JREGIONAL_MAKE_LBCS: (B)
jobs/JREGIONAL_RUN_FCST: (B), (C), (Z)
* Pass in ENSMEM_INDX and SLASH_ENSMEM_SUBDIR as arguments to the function exregional_run_fcst().
jobs/JREGIONAL_RUN_POST: (B), (Z)
* Create the new local variable run_dir in which to store the path to the run directory (for the current cycle and possibly ensemble member).
* In NCO mode, change the location where ensemble directories are created to be under the cycle directory instead of above it (the NCO-mode analog of the change done in (B) for community mode).
* In community mode, place the postprd subdirectory under the run directory instead of under CYCLE_DIR (since cycle directories are now one level up when running ensembles and would thus be the incorrect place to create postprd).
* Create the new argument cdate to the function exregional_run_post() and pass in the environment variable CDATE for its value (instead of using CDATE directly in exregional_run_post()).
* Change the argument cycle_dir of the function exregional_run_post to run_dir since that is what is really wanted in that function (instead of passing in cycle_dir and then forming run_dir).
scripts/exregional_make_grid.sh: (D)
scripts/exregional_make_ics.sh: (A), (X)
scripts/exregional_make_lbcs.sh: (A), (X)
scripts/exregional_run_fcst.sh: (C), (W)
* Introduce new input arguments ensmem_indx and slash_ensmem_subdir that get set to the rocoto-specified environment variables ENSMEM_INDX and SLASH_ENSMEM_SUBDIR, respectively, in the call to this function in jobs/JREGIONAL_RUN_FCST.
* Change cycle_dir to run_dir in most places to make the directory name more general, because the run directory is the cycle directory only when not running ensembles. When running ensembles, the run directory is one of the ensemble member directories, one level down from the current cycle directory.
* Fix a typo where an extra "}" was printed after $target in error messages.
* Use the new workflow array variable FV3_NML_ENSMEM_FPS (which contains the full paths to the FV3 namelist files for each ensemble member) when creating a link in the run directory to the FV3 namelist file for the current ensemble member. Note that these namelist files are cycle-independent and thus are created only once, during the experiment generation step.
* Move the creation of diagnostics table files to a new function (in ush/create_diag_table_files.sh) and call that function during experiment generation (in ush/generate_FV3SAR_wflow.sh) instead of here in exregional_run_fcst.sh. The diagnostics table files depend only on the cycle, not on the ensemble member; since the cycles to run are known at experiment generation time, the diagnostics file for each cycle is generated then and placed in its corresponding cycle directory.
* If running ensembles, create symlinks in the run directory to the diagnostics table and model configure files in the cycle directory (one level up from the run directory). This is not done when NOT running ensembles because in that case the run directory is the cycle directory (and these two files already exist there; they are created at experiment generation time).
scripts/exregional_run_post.sh: (C), (Y)
* Create the new input argument cdate (which gets set to the global variable CDATE in the call to this function) and use it instead of the global variable CDATE.
* Change the argument cycle_dir to run_dir since that is more useful in this function (instead of passing in cycle_dir and then forming run_dir).
* Make the local variables "POST_..." lowercase to follow the convention that local variables be in lower case.
tests/baseline_configs/config.community_ensemble_2mems.sh:
* Modify settings in this test configuration so that the starting and ending days of the cycles are not the same and there are two cycle hours per day, to test the ensembles feature in community mode more thoroughly.
tests/baseline_configs/config.nco_ensemble.sh:
* Modify settings in this test configuration so that there are two cycle hours per day, to test the ensembles feature in NCO mode more thoroughly.
ush/create_diag_table_files.sh:
* New file containing a function that creates a diagnostics table file for each cycle date and places it in the corresponding cycle directory.
ush/create_model_config_files.sh:
* New file containing a function that creates a model configuration file for each cycle date and places it in the corresponding cycle directory.
ush/set_cycle_dates.sh:
* New function that sets all the cycle dates/times to run.
ush/generate_FV3SAR_wflow.sh: (D)
* For clarity and consistency with other scripts, change the variable name slash_ensmem_dir to slash_ensmem_subdir.
* Change the "settings" variable used to set parameters in the jinja template for the rocoto XML to add the capability to have more than one cycle. This capability was previously present but was inadvertently disabled during the transition to generating the rocoto XML using a jinja2 template.
* Use the new workflow array variable ALL_CDATES (containing all the cycle dates/times to run) to create all the cycle directories. Previously, the cycle directories were created during the make_ics or make_lbcs task, but it is clearer to do it during experiment generation. It also must now be done during experiment generation because the model configuration file(s), and possibly also the diagnostics table file(s) (if the MAKE_GRID_TN step is being skipped), are cycle-dependent but ensemble-member-independent and are created and placed in the cycle directories during experiment generation.
* Call the new function create_model_config_files() to create a model configuration file within each cycle directory.
* If not running the MAKE_GRID_TN task, call the new function create_diag_table_files() to create a diagnostics table file within each cycle directory.
ush/set_FV3nml_sfc_climo_filenames.sh: (D)
ush/set_FV3nml_stoch_params.sh: (Z)
* For consistency with other scripts, rename the variable fv3_nml_ens_fp to fv3_nml_ensmem_fp.
* Use the new workflow array variable FV3_NML_ENSMEM_FPS (which contains the full paths to the FV3 namelist files for each ensemble member) to set the full path to the current ensemble member's FV3 namelist file.
ush/setup.sh: (E), (F), (V), (Z)
* Call the new function set_cycle_dates() to set the new workflow array variable ALL_CDATES containing all the cycle days/times to be run.
* Set the new workflow variable NUM_CYCLES to the number of elements in ALL_CDATES.
* Set the new workflow array variable ENSMEM_NAMES containing the names of the ensemble members.
* Set the new workflow array variable FV3_NML_ENSMEM_FPS containing the full paths to the FV3 namelist files of the ensemble members.
* Remove the creation of ensemble member directories. These are now created in the j-jobs of the MAKE_ICS_TN or MAKE_LBCS_TN task (whichever runs first).
ush/templates/FV3SAR_wflow.xml: (Y)
* Modify the jinja code to allow multiple cycles to be run. This capability was previously present but was inadvertently disabled during the transition to generating the rocoto XML using a jinja2 template.
* Add CYCLE_DIR as an environment variable to the GET_EXTRN_ICS_TN and GET_EXTRN_LBCS_TN tasks.
* In the MAKE_ICS_TN, MAKE_LBCS_TN, RUN_FCST_TN, and RUN_POST_TN tasks, change the way CYCLE_DIR is set so that it does not include the ensemble member subdirectory.
* In the MAKE_ICS_TN, MAKE_LBCS_TN, RUN_FCST_TN, and RUN_POST_TN tasks, create the new environment variable SLASH_ENSMEM_SUBDIR that gets set to a null string if not running ensembles and to the string "/${name_of_ensemble_member}" when running ensembles.
* Bug fixes. These bugs were introduced during the previous merge of the develop branch into this fork (feature/ensemble).
* To test the multiple-day and multiple-cycle-hour capabilities with ensemble forecasts in community mode, change the WE2E test "community_ensemble_008" to include 2 days and 2 cycle hours per day (instead of 1 and 1, respectively).
* Add WCOSS changes for the feature/ensemble branch
* Minor changes to code comments.

Co-authored-by: Benjamin.Blake EMC <[email protected]>

* updated code according to the review comments
  modified files:
  scripts/exregional_get_extrn_mdl_files.sh
  ush/NOMADS_get_extrn_mdl_files.sh
  ush/config_defaults.sh
  ush/generate_FV3SAR_wflow.sh
  ush/valid_param_vals.sh
* Change filename: ush/generate_FV3SAR_wflow.sh -> ush/generate_FV3LAM_wflow.sh
* add one WE2E test for downloading files
  new file for the test: tests/baseline_configs/config.user_download_extrn_files.sh
  modified file for the test: tests/baselines_list.txt
  modified files for recent changes (SAR to LAM, JPgrid to ESGgrid):
  scripts/exregional_get_extrn_mdl_files.sh
  ush/config_defaults.sh
  ush/valid_param_vals.sh

Co-authored-by: Linlin.Pan <[email protected]>
Co-authored-by: gsketefian <[email protected]>
Co-authored-by: Benjamin.Blake EMC <[email protected]>
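The member-directory naming and run-directory composition described in the ensemble changes above can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual workflow code; the variable names follow the log, and the experiment path is a placeholder.

```shell
#!/bin/bash
# Member names: the zero-padding follows the number of digits the user
# wrote in NUM_ENS_MEMBERS, e.g. "8" -> mem1..mem8, "08" -> mem01..mem08.
NUM_ENS_MEMBERS="08"
ndigits=${#NUM_ENS_MEMBERS}
nmems=$((10#${NUM_ENS_MEMBERS}))   # base-10 to avoid octal from leading zero

ENSMEM_NAMES=()
for (( i=1; i<=nmems; i++ )); do
  ENSMEM_NAMES+=( "mem$(printf "%0${ndigits}d" "${i}")" )
done
echo "${ENSMEM_NAMES[@]}"

# Run directory: the cycle directory plus, only when running ensembles,
# the member subdirectory appended via SLASH_ENSMEM_SUBDIR.
DO_ENSEMBLE="TRUE"
CYCLE_DIR="/path/to/expt/2019070100"   # placeholder
if [ "${DO_ENSEMBLE}" = "TRUE" ]; then
  SLASH_ENSMEM_SUBDIR="/${ENSMEM_NAMES[0]}"
else
  SLASH_ENSMEM_SUBDIR=""
fi
run_dir="${CYCLE_DIR}${SLASH_ENSMEM_SUBDIR}"
echo "${run_dir}"
```

Because SLASH_ENSMEM_SUBDIR collapses to an empty string without ensembles, the same `${CYCLE_DIR}${SLASH_ENSMEM_SUBDIR}` expression works in both modes, which is exactly why the variable is inserted anywhere CYCLE_DIR appears.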
* Remove all references to /lfs3 on Jet
* Add Ben and Ratko to the CODEOWNERS file
* Replace hard-coded make_orog module file with build-level module file in UFS_UTILS
* Remove hard-coded make_sfc_climo module file
* Add changes for merged chgres_cube code
* Add changes for merged chgres_cube code
* Minor tweak to FCST_LEN_HRS default in config.community.sh
* Changes to make the release version of chgres_cube run in regional_workflow
* Changes for regional_grid build on Jet
* Changes to regional_grid build for Hera
* Change regional_grid makefile for Hera
* Remove leading zero from FCST_LEN_HRS in config.community.sh
* Remove /sorc directory
* Remove build module files for codes originally in the regional_workflow repository. Remove run-time make_grid module file for all platforms; it will be sourced from UFS_UTILS from now on.
* Update regional grid template for newest code
* Copy make_grid module file from UFS_UTILS
* Add make_grid.local file for most platforms
* Remove alpha and kappa parameters from the regional_grid namelist
* Modifications to file type conventions in the chgres_cube namelist for FV3GFS and GSMGFS nemsio files
* Set convert_nst=False for global grib2 FV3GFS files when running chgres_cube
* Add tracers back into nemsio file processing
* Changes to the make_lbcs ex-script (remove all surface-related variables)
* Fix for modulefiles
* Fixes after merging authoritative repo into fork
* Add Thompson climo to chgres_cube namelist for appropriate external model/SDF combinations
* Commit new locations for Thompson climo fix file
* Change FIXsar to FIXLAM
* Change gfs_bndy.nc to gfs.bndy.nc
* Move file
* Bug fixes to setup.sh and exregional_make_ics.sh
* Add support for NAM grib2 files
* Path fix
* Typo fix
* Fix extension on UPP grib2 files
* Bug fix for if statement
* Add .grib2 extension to soft links
* Fix nsoill_out values based on LSM scheme in CCPP suite
* Fix grib2 extensions
* Add if statement for varmap tables when using Thompson MP and initializing from non-RAP/HRRR data
* Final modifications to support NAM grib2 files in regional_workflow
* Set climo as the default for soil variables when using HRRRX (users will need to change this if they know these variables are available for the dates they are running).
* Add FV3_CPT_v0 to varmap if statement
* Changes to post file names to make the model name lowercase and pad ${fhr} to three digits instead of two
* Change "rrfs" to "${NET}" instead
* Revert "Add FV3_CPT_v0 to varmap if statement" (this reverts commit b04ad0b)
* Add an if statement to set all ad-hoc scheme magnitudes to -999.0 if not being used.
…semble forecasts; remove obsolete physics suites; get WE2E tests to run on cheyenne (#287)

## DESCRIPTION OF CHANGES:

### Bugs fixed:
* In exregional_make_orog.sh, remove the else-statement that causes the script to exit if the suite is not FV3_RRFS_v1beta.
* In exregional_run_fcst.sh, remove lines that create a symlink in the run directory to the model_configure file in the cycle directory. These lines seem to have been inadvertently reintroduced into the script and cause ensemble forecasts to fail.

### Other modifications:
* Remove suites FV3_GSD_SAR_v1 and FV3_RRFS_v0 from the workflow since they are no longer in ufs-weather-model. Also remove the WE2E test configuration files for these suites (config.regional_013.sh and config.regional_016.sh).
* In exregional_make_orog.sh, for the RRFS_v1beta suite, modify the command that copies the orography statistics files needed by the drag parameterization so that only files matching *_ls*.nc and *_ss*.nc are copied instead of everything (because the source directory may contain other files that do not need to be copied).
* In the WE2E configuration file for the RRFS_v1beta suite (config.FV3_RRFS_v1beta.sh), change the location from which the additional orography files needed by this suite are copied to a common location rather than a user directory.
* Remove unused script create_model_config_files.sh.
* Rename the function (and file) create_model_config_file(.sh) to create_model_configure_file(.sh) because the file this function creates is called model_configure, not model_config.
* Modify the WE2E test configuration files as well as the test run script (tests/run_experiments.sh) to get the tests to run more easily on cheyenne. A manual change to the settings in run_experiments.sh is still needed, but this made it possible to run the tests.

## TESTS CONDUCTED:
Ran all 26 WE2E tests both **on hera and cheyenne**. 24 of the 26 succeeded. Details:
* regional_010 failed, but it was already broken.
* user_download_extrn_files failed. It seems to have failed to obtain the external model files from NOMADS (and this step is done during workflow generation, not as part of any workflow task). This test is completely unrelated to this PR, so the failure may have already existed in the develop branch.
* The remaining 24 tests (including the one for the FV3_RRFS_v1beta suite) succeeded without problems.

Note that the FV3_RRFS_v1beta suite was also tested on the GSD_HRRR3km grid. This failed at around hour 4 (of a 6-hour forecast) with a very uninformative error. This test was also tried previously with hash 8165575 from the NCAR fork of ufs-weather-model (in the dtc/develop branch), and that finished successfully. It is not clear what changed between these two versions of ufs-weather-model.

## OTHER CONTRIBUTORS:
@JeffBeck-NOAA
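The narrowed copy of orography statistics files described above amounts to replacing a copy-everything command with explicit glob patterns. A minimal sketch, with placeholder directory arguments and a hypothetical function name:

```shell
#!/bin/bash
# Copy only the large-scale (*_ls*.nc) and small-scale (*_ss*.nc)
# orography statistics files needed by the drag parameterization,
# leaving any other files in the source directory behind.
copy_drag_suite_files() {
  local src_dir="$1" dst_dir="$2"
  cp "${src_dir}"/*_ls*.nc "${src_dir}"/*_ss*.nc "${dst_dir}/"
}
```

The globs are intentionally narrow so that unrelated files in the staged-orography directory are never picked up.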
…sed to avoid bug with do_sppt/skeb/shum namelist entries (#291) * Remove all references to /lfs3 on Jet * Add Ben and Ratko to the CODEOWNERS file * Replace hard-coded make_orog module file with build-level module file in UFS_UTILS * Remove hard-coded make_sfc_climo module file * Fixes after updating fork with authoritative repo * Set ad-hoc stochastic physics scheme magnitudes to -999.0 when not used to avoid bug with do_sppt/skeb/shum namelist entries * Change stanzas in setup.sh for setting the ad-hoc schemes to "TRUE" and "FALSE" from lowercase to uppercase, and move the "if false, then -999.0" block from the generate to the setup script
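The "if false, then -999.0" block described above might look roughly like the following. Variable names such as DO_SPPT/SPPT_MAG are assumptions for illustration, not copied from the actual setup script:

```shell
#!/bin/bash
# Sketch: when an ad-hoc stochastic physics scheme is disabled, force
# its magnitude to -999.0 to avoid the bug with the do_sppt/do_skeb/
# do_shum namelist entries. Example initial values below are arbitrary.
DO_SPPT="FALSE"; SPPT_MAG="0.5"
DO_SHUM="TRUE";  SHUM_MAG="0.006"
DO_SKEB="FALSE"; SKEB_MAG="0.8"

for scheme in SPPT SHUM SKEB; do
  flag="DO_${scheme}"
  # ${!flag} is bash indirect expansion: the value of DO_<scheme>.
  if [ "${!flag}" = "FALSE" ]; then
    eval "${scheme}_MAG=-999.0"
  fi
done
```

Keeping the flags uppercase ("TRUE"/"FALSE") matches the convention the setup script uses elsewhere.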
* Remove all references to /lfs3 on Jet * Add Ben and Ratko to the CODEOWNERS file * Replace hard-coded make_orog module file with build-level module file in UFS_UTILS * Remove hard-coded make_sfc_climo module file * Fixes after updating fork with authoritative repo * Set ad-hoc stochastic physics scheme magnitudes to -999.0 when not used to avoid bug with do_sppt/skeb/shum namelist entries * Add nrows to input.nml, HALO_BLEND to config_defaults.sh, and apply HALO_BLEND user-defined value during generate step. * Add nrows_blend to the template namelist file. * Add comment in config_defaults.sh to set HALO_BLEND to zero if the user wants to shut it off.
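Applying the user-defined HALO_BLEND value during the generate step could be as simple as a substitution into the namelist file. This is a sketch only; the entry name nrows_blend comes from the description above, but the file name and the sed-based mechanism are assumptions:

```shell
#!/bin/bash
# Sketch: write the user-defined HALO_BLEND value into the nrows_blend
# namelist entry. Setting HALO_BLEND=0 turns halo blending off.
apply_halo_blend() {
  local halo_blend="$1" nml_file="$2"
  sed -i "s|^\( *nrows_blend *= *\).*|\1${halo_blend}|" "${nml_file}"
}
```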
… GSD_HRRR25km grid. (#295) ## DESCRIPTION OF CHANGES: * Add new subconus 3km grid of ESGgrid type named GSD_SUBCONUS3km. * Allow this as well as the GSD_HRRR25km, GSD_HRRR13km, and GSD_RAP13km grids to run in NCO mode. * Add new WE2E test for the GSD_SUBCONUS3km grid in NCO mode named nco_GSD_SUBCONUS3km_HRRRX_RAPX. * Add new WE2E test for the GSD_HRRR25km grid in NCO mode named nco_GSD_HRRR25km_HRRRX_RAPX. ## TESTS CONDUCTED: Ran the two new WE2E tests on hera -- nco_GSD_SUBCONUS3km_HRRRX_RAPX and nco_GSD_HRRR25km_HRRRX_RAPX. Both completed successfully. The changes do not affect any of the other WE2E tests, so those were not run.
This PR modifies the script that runs the WE2E tests (run_experiments.sh) as well as the individual WE2E configuration files to allow tests to run on hera and cheyenne without the need to manually change settings (e.g. directories) in the individual test configuration files. This capability can easily be extended to other platforms by adding appropriate stanzas in run_experiments.sh.

## DESCRIPTION OF CHANGES:
* Set the following workflow parameters in the run_experiments.sh script and write them to the workflow configuration file instead of having them defined in each WE2E configuration file (i.e. remove them from each WE2E configuration file): MACHINE, ACCOUNT, EXPT_SUBDIR, USE_CRON_TO_RELAUNCH, CRON_RELAUNCH_INTVL_MNTS, VERBOSE. Note that all of these parameters except EXPT_SUBDIR can now be set on the command line when calling run_experiments.sh; if they are not set on the command line, they get default values. EXPT_SUBDIR always gets set to the name of the WE2E test.
* Add new arguments stmp, ptmp, and verbose to run_experiments.sh so that users can specify them on the command line if they don't like the defaults.
* In run_experiments.sh, source the default workflow configuration file (config_defaults.sh) to have all user-specifiable workflow variables defined in some way (even if some of them are set to nonsensical default values).
* Add a check that the CCPP physics suite definition file exists in the ufs-weather-model repo.
* Bug fix: Change the default blending halo (HALO_BLEND) to 0 (no blending) to avoid a bug in the halo-blending PR. The bug is that the make_lbcs task does not create a blending zone (i.e. it assumes halo_blend is zero).

## TESTS CONDUCTED:
Ran all WE2E tests except user_download_extrn_files on hera and cheyenne. All passed except regional_010, which has a preexisting bug related to FV3 namelist settings. Did not run user_download_extrn_files because it interrupts the progression of the test script (it needs improvements, but this PR does not affect downloading of external model files from NOMADS). Note that on cheyenne, the make_ics, make_lbcs, and run_post tasks often have to be run multiple times before they succeed (especially the latter two).
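The command-line-overridable settings described above could be implemented with a small helper like the one below. The name resolve_setting and the key=value argument style are hypothetical; run_experiments.sh may do this differently:

```shell
#!/bin/bash
# Return the value of a key=value argument if present on the command
# line, otherwise fall back to the supplied default.
resolve_setting() {
  local key="$1" default="$2" arg
  shift 2
  for arg in "$@"; do
    case "${arg}" in
      "${key}="*) echo "${arg#*=}"; return ;;
    esac
  done
  echo "${default}"
}

# Example: machine defaults to hera unless machine=... was passed.
machine=$(resolve_setting machine hera "$@")
```

With this pattern, a call like `run_experiments.sh machine=cheyenne account=myacct` overrides only the settings the user names, leaving the rest at their defaults.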
christinaholtNOAA pushed a commit that referenced this pull request on Jul 14, 2021
* Fix to post flat file.
* Create MET and METplus config files under ush/templates/parm
* Added script to pull and reorg ccpa data. Added a script to run gridstat with METplus. Updated MET and METplus config files.
* Added new jjob for running grid-stat vx. Updated setup.sh to include grid-stat vx. Updated run_gridstatvx script.
* Fixed typo in script name from ksh to sh
* Moved some hard-coded items out of the script to the XML
* Updates to get METplus to run with fewer hard-coded paths.
* Updates to add grid-stat task to XML generation.
* Bug fixes for adding grid-stat to XML generation
* Updates to remove hard-coded paths in config files
* Change log dir to put master_metplus log file with other logs under log/, rather than the default logs/.
* Updates to generate xml without hard-coded paths for MET
* Add hera gridstat module file
* Add METplus point-stat task for both sfc and upper air
* Small tweaks to remove hard-coded paths and add some flexibility
* Updates for adding point-stat into auto-generated xml
* Add in function to set point-stat task to FALSE
* Final tweaks to get it to generate the xml correctly
* Minor updates to ensure runs at 0,6,12,18
* Tweaks to var list for Point-Stat
* Add METplus settings to config_defaults
* Move quote for end of settings and fix extra comment.
* Fix typos to populate templates correctly
* Updated to include SCRIPTSDIR and other MET-specific settings along with updates to FHR syntax
* Update module loads on hera
* Fixed comment for BOTH_VARn_THRESH to avoid syntax issues
* Added files to run grid_stat for a variety of accumulation intervals, including 3, 6, and 24 h
* Added module load hpss
* Remove module load information from these scripts
* Updated the method of turning vx tasks on/off using a jinja template if statement
* Remove commented-out lines of code. Fixed typo. Removed gen_wflow.out file.
* Updated pull scripts to have file names dependent on the date to pull from HPSS. Updated to export a few more local variables that the METplus conf needed in scripts. Updated workflow to use the service queue (for now) for 1h grid_stat and point_stat runs and the default queue for 3+h accumulation grid_stat runs.
* Moved common_hera.conf to common.conf - no platform-specific information is included that needs to be handled.
* Remove common_hera.conf
* Add scripts to pull and process MRMS data from NOAA HPSS
* Updates for REFC vx tasks
* Updates to obs pull scripts
* Update for adding in reflectivity verification using MRMS analyses and updating the name of model output to RRFS rather than HRRR
* Updates to account for CCPA issues on HPSS - day off for 00-05 UTC directories
* Verification mods to feature/add metplus (#1)
* Remove unused/outdated code (#313)

## DESCRIPTION OF CHANGES:
* In setup.sh and generate_FV3LAM_wflow.sh, remove temporary code that fixes bugs in the FV3_GFS_2017_gfdlmp_regional suite definition file because those bugs have been fixed (in the ufs-weather-model repo).
* In setup.sh, remove a block of code that is no longer necessary because chgres_cube can now initialize from external model data with either 4 or 9 soil levels, and run with LSMs of either 4 or 9 soil levels.
* Remove modifications to LD_LIBRARY_PATH in exregional_run_fcst.sh.
* For the make_ics and make_lbcs tasks, move the setting of APRUN and other machine-specific actions from the J-job to the ex-script in order to be consistent with the other workflow tasks.
* Fix indentation and edit comments.
* Remove unused file load_fv3gfs_modules.sh.

## TESTS CONDUCTED:
Ran two WE2E tests on hera, new_ESGgrid and new_GFDLgrid:
* new_ESGgrid uses the FV3_GFS_2017_gfdlmp_regional suite. The test was successful.
* new_GFDLgrid uses the FV3_GFS_2017_gfdlmp suite. The test was successful.

## ISSUE (optional):
This resolves issue #198.
* Add and call a function that checks for use of Thompson microphysics parameterization in the SDF and if so, adjusts certain workflow arrays to contain the names and other associated values of the fixed files needed by this parameterization so that those files are automatically copied and/or linked to. (#319) ## DESCRIPTION OF CHANGES: Add and call a function that checks for use of Thompson microphysics parameterization in the suite definition file (SDF). If not, do nothing. If so, add to the appropriate workflow arrays the names and other associated values of the fixed files needed by this parameterization so that they are automatically copied and/or linked to instead of being regenerated from scratch in the run_fcst task. ## TESTS CONDUCTED: On hera, ran two WE2E tests, one in NCO mode (nco_RRFS_CONUS_25km_HRRRX_RAPX) and the other in community mode (suite_FV3_GSD_v0). These use suites FV3_GSD_SAR and FV3_GSD_v0, respectively, and both of these call Thompson microphysics. Both succeeded. ## ISSUE (optional): This PR resolves issue #297. * RRFS_v1beta SDF changes after reverting from GSL to GFS GWD suite (#322) (#327) ## DESCRIPTION OF CHANGES: Removed checks on the RRFS_v1beta SDF implemented for use with the GSL GWD suite (now uses the GFS GWD suite). No longer copies staged orography files necessary for the GSL GWD suite. ## TESTS CONDUCTED: Runs to completion on Hera. End-to-end runs DOT_OR_USCORE and suite_FV3_RRFS_v1beta succeeded on Cheyenne. Co-authored-by: JeffBeck-NOAA <[email protected]> * Update FV3.input.nml for fhzero = 1.0 * Updated conf files for file name conventions. * Updated MET scripts and MRMS pull scripts. * Adjust RRFS_CONUS_... grids (#294) ## DESCRIPTION OF CHANGES: * Adjust RRFS_CONUS_25km, RRFS_CONUS_13km, and RRFS_CONUS_3km grid parameters so that: * All grids, including their 4-cell-wide halos, lie completely within the HRRRX domain. * All grids have dimensions nx and ny that factor "nicely", i.e. 
they don't have factors greater than 7.
* The write-component grids corresponding to these three native grids cover as much of the native grids as possible without going outside the native grid boundaries. The updated NCL scripts (see below) were used to generate the write-component grid parameters.
* For the RRFS_CONUS_13km grid, reduce the time step (DT_ATMOS) from 180 sec to 45 sec. This is necessary to get a successful forecast with the GSD_SAR suite, and thus likely also the RRFS_v1beta suite.
* Modify the WE2E testing system as follows:
  * Add new tests with the RRFS_CONUS_25km, RRFS_CONUS_13km, and RRFS_CONUS_3km grids that use the GFS_v15p2 and RRFS_v1beta suites (which are now the ones officially supported in the first release of the short-range weather app) instead of the GFS_v16beta and GSD_SAR suites, respectively.
  * For clarity, rename the test configuration files that use the GFS_v16beta and GSD_SAR suites so they include the suite name.
  * Update the list of WE2E tests (baselines_list.txt).
* Update the NCL plotting scripts to be able to plot grids with the latest version of the workflow.

## TESTS CONDUCTED:
On hera, ran tests with all three grids with the GFS_v15p2 and RRFS_v1beta suites (a total of 6 tests). All were successful.

* Remove redundant model_configure.${CCPP_PHYS_SUITE} template files; use Jinja2 to create model_configure (#321)

## DESCRIPTION OF CHANGES:
* Remove model_configure template files whose names depend on the physics suite, i.e. files with names of the form model_configure.${CCPP_PHYS_SUITE}. Only a single template file is needed because the contents of the model_configure file are not suite dependent. This leaves just one template file (named model_configure).
* Change the function create_model_configure_file.sh and the template file model_configure so they use jinja2 instead of sed to replace placeholder values.
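The "no prime factors greater than 7" property of nx and ny mentioned above can be checked with a short standalone script. This is illustrative only and not part of the workflow:

```shell
#!/bin/bash
# Return the largest prime factor of a positive integer via trial division.
largest_prime_factor() {
  local n="$1" f=2 largest=1
  while [ $((f * f)) -le "${n}" ]; do
    if [ $((n % f)) -eq 0 ]; then
      largest=${f}
      n=$((n / f))      # keep f fixed to strip repeated factors
    else
      f=$((f + 1))
    fi
  done
  if [ "${n}" -gt 1 ]; then
    largest=${n}        # whatever remains is itself prime
  fi
  echo "${largest}"
}

# A grid dimension "factors nicely" if its largest prime factor is <= 7.
factors_nicely() {
  [ "$(largest_prime_factor "$1")" -le 7 ]
}
```

For example, 224 = 2^5 * 7 factors nicely, while 22 = 2 * 11 does not.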
* Absorb the contents of the write-component template files wrtcmp_lambert_conformal, wrtcmp_regional_latlon, and wrtcmp_rotated_latlon into the new jinja2-compliant model_configure file. We can do this because Jinja2 allows use of if-statements in the template file. * In the new model_configure jinja2 template file, include comments to explain the various write-component parameters. ## TESTS CONDUCTED: On Hera, ran the two WE2E tests new_ESGgrid and new_GFDLgrid. The first uses a "lambert_conformal" type of write-component grid, and the second uses a "rotated_latlon" type of write-component grid. (The write-component also allows "regional_latlon" type grids, which is just the usual earth-relative latlon coordinate system, but we do not have any cases that use that.) Both tests succeeded. ## ISSUE (optional): This PR resolves issue #281. * Add Thompson ice- and water-friendly aerosol climo file support (#332) * Add if statement in set_thompson_mp_fix_files.sh to source Thompson climo file when using a combination of a Thompson-based SDF and non-RAP/HRRR external model data * Modify if statement based on external models for Thompson climo file * Remove workflow variable EMC_GRID_NAME (#333) ## DESCRIPTION OF CHANGES: * Remove the workflow variable EMC_GRID_NAME. Henceforth, PREDEF_GRID_NAME is the only variable that can be used to set the name of the predefined grid to use. * Make appropriate change of variable name (EMC_GRID_NAME --> PREDEF_GRID_NAME) in the WE2E test configuration files. * Change anywhere the "conus" and "conus_c96" grids are specified to "EMC_CONUS_3km" and "EMC_CONUS_coarse", respectively. * Rename WE2E test configuration files with names containing the strings "conus" and "conus_c96" by replacing these strings with "EMC_CONUS_3km" and "EMC_CONUS_coarse", respectively. * Update the list of WE2E test names (tests/baselines_list.txt). 
* Bug fixes not directly related to grids:
  * In config.nco.sh, remove settings of QUEUE_DEFAULT, QUEUE_HPSS, and QUEUE_FCST since these are now set automatically (due to another PR).
  * In the template file FV3LAM_wflow.xml, add the ensemble member name after RUN_FCST_TN in the dependency of the run_post metatask.

## TESTS CONDUCTED:
Since this change only affects runs in NCO mode, the following NCO-mode WE2E tests were rerun on hera, all successfully:
```
nco_EMC_CONUS_3km                                 SUCCESS
nco_EMC_CONUS_coarse                              SUCCESS
nco_EMC_CONUS_coarse__suite_FV3_GFS_2017_gfdlmp   SUCCESS
nco_RRFS_CONUS_25km_HRRRX_RAPX                    SUCCESS
nco_RRFS_CONUS_3km_FV3GFS_FV3GFS                  SUCCESS
nco_RRFS_CONUS_3km_HRRRX_RAPX                     SUCCESS
nco_ensemble                                      SUCCESS
```

* Port workflow to Orion (#309)

## DESCRIPTION OF CHANGES:
* Add stanzas for Orion where necessary.
* Add new module files for Orion.
* On Orion, both the slurm partition and the slurm QOS need to be specified in the rocoto XML in order to be able to have wall times longer than 30 mins (the partition needs to be specified because it is by default "debug", which has a limit of 30 mins). Thus, introduce modifications to more easily specify slurm partitions:
  * Remove the workflow variables QUEUE_DEFAULT_TAG, QUEUE_HPSS_TAG, and QUEUE_FCST_TAG that are currently used to determine whether QUEUE_DEFAULT, QUEUE_HPSS, and QUEUE_FCST specify the names of queues/QOSs or slurm partitions.
  * Add the workflow variables PARTITION_DEFAULT_TAG, PARTITION_HPSS_TAG, and PARTITION_FCST_TAG. These will be used to specify slurm partitions only, and the variables QUEUE_DEFAULT, QUEUE_HPSS, and QUEUE_FCST will be used to specify queues/QOSs only.
IMPORTANT NOTE: On Orion, in order to load the regional_workflow environment needed for generating an experiment, the user must first issue the following commands:
```
module use -a /apps/contrib/miniconda3-noaa-gsl/modulefiles
module load miniconda3
conda activate regional_workflow
```

## TESTS CONDUCTED:
Ran 11 WE2E tests on Orion, Hera, and Cheyenne.

Results on Orion:
```
community_ensemble_2mems   SUCCESS
DOT_OR_USCORE              SUCCESS
grid_GSD_HRRR_AK_50km      FAILURE - In the run_fcst task.
  Error message: !!! (1) Error in subr radiation_aerosols: unrealistic surface pressure = 1 NaN
new_ESGgrid                SUCCESS
new_GFDLgrid               SUCCESS
regional_001               SUCCESS
regional_002               SUCCESS
suite_FV3_GFS_v15p2        SUCCESS
suite_FV3_GFS_v16beta      SUCCESS
suite_FV3_GSD_SAR          SUCCESS
suite_FV3_GSD_v0           SUCCESS
```

Results on Hera:
```
community_ensemble_2mems   SUCCESS
DOT_OR_USCORE              SUCCESS
grid_GSD_HRRR_AK_50km      SUCCESS
new_ESGgrid                SUCCESS
new_GFDLgrid               SUCCESS
regional_001               SUCCESS
regional_002               SUCCESS
suite_FV3_GFS_v15p2        SUCCESS
suite_FV3_GFS_v16beta      SUCCESS
suite_FV3_GSD_SAR          SUCCESS
suite_FV3_GSD_v0           SUCCESS
```

Results on Cheyenne:
```
community_ensemble_2mems   SUCCESS
DOT_OR_USCORE              SUCCESS
grid_GSD_HRRR_AK_50km      FAILURE - In run_fcst task.
  Error message: !!! (1) Error in subr radiation_aerosols: unrealistic surface pressure = 1 NaN
new_ESGgrid                SUCCESS
new_GFDLgrid               SUCCESS
regional_001               SUCCESS
regional_002               SUCCESS
suite_FV3_GFS_v15p2        SUCCESS
suite_FV3_GFS_v16beta      SUCCESS
suite_FV3_GSD_SAR          SUCCESS
suite_FV3_GSD_v0           SUCCESS
```

All succeeded except grid_GSD_HRRR_AK_50km on Orion and Cheyenne. It is not clear why grid_GSD_HRRR_AK_50km fails on Orion and Cheyenne but not on Hera; this seems to point to a bug in the forecast model. These two failures are not so important since this grid will soon be deprecated. Also tested successfully on Jet by @JeffBeck-NOAA and on Odin and Stampede by @ywangwof.

## ISSUE:
This resolves Issue #152.
## CONTRIBUTORS: @JeffBeck-NOAA @ywangwof @christinaholtNOAA * Removed comments from exregional_get_mrms_files.sh and removed fhzero from FV3.input.yml * Update FV3.input.nml for fhzero = 1.0 * Updated conf files for file name conventions. * Updated MET scripts and MRMS pull scripts. * Removed comments from exregional_get_mrms_files.sh and removed fhzero from FV3.input.yml Co-authored-by: gsketefian <[email protected]> Co-authored-by: Michael Kavulich <[email protected]> Co-authored-by: JeffBeck-NOAA <[email protected]> Co-authored-by: Jamie Wolff <[email protected]> * Change cov_thresh for REFL to be a true max in nbrhood as SPC does. * Separated Pull Data Scripts from Run Vx Scripts: Feature/add_metplus (#2) * Job script for get_obs_ccpa * Jobs script for get_obs_mrms * Jobs script for get_obs_ndas * Added external variables necessary to get_ccpa script * Updated workflow template with separate get obs tasks * Separated pull scripts from run scripts * Added necessary defaults/values for defining pull tasks * Added module files, default config.sh options, and changed dependencies for vx tasks * Changed name of new workflow to FV3LAM_wflow.xml * Added task get_obs_tn, removed config.sh, updated config_defaults and config.community.sh * Adjusted the community and default config files based on comments * Updated FV3LAM workflow * Fixed discrepancies in config.community.sh * Fixed discrepancies in config_defaults.sh * Fixed discrepancies in config_defaults.sh round 2 * Fixed discrepancies in config_defaults.sh round 3 * Fixed discrepancies in config_defaults.sh round 4 * Fixed discrepancies in config.community.sh round 2 * Fixed discrepancies in config.community.sh round 3 * Fixed discrepancies in generate_FV3LAM_wflow.sh * Fixed discrepancies in generate_FV3LAM_wflow.sh round 2 * Fixed discrepancies in generate_FV3LAM_wflow.sh round 3 * Updated FV3LAM_wflow template * Fixed Vx Task Dependencies in Workflow: Feature/add metplus (#3) * Job script for get_obs_ccpa * 
Job script for get_obs_mrms
* Job script for get_obs_ndas
* Added external variables necessary to get_ccpa script
* Updated workflow template with separate get obs tasks
* Separated pull scripts from run scripts
* Added necessary defaults/values for defining pull tasks
* Added module files, default config.sh options, and changed dependencies for vx tasks
* Changed name of new workflow to FV3LAM_wflow.xml
* Added task get_obs_tn, removed config.sh, updated config_defaults and config.community.sh
* Adjusted the community and default config files based on comments
* Updated FV3LAM workflow
* Fixed discrepancies in config.community.sh
* Fixed discrepancies in config_defaults.sh
* Fixed discrepancies in config_defaults.sh round 2
* Fixed discrepancies in config_defaults.sh round 3
* Fixed discrepancies in config_defaults.sh round 4
* Fixed discrepancies in config.community.sh round 2
* Fixed discrepancies in config.community.sh round 3
* Fixed discrepancies in generate_FV3LAM_wflow.sh
* Fixed discrepancies in generate_FV3LAM_wflow.sh round 2
* Fixed discrepancies in generate_FV3LAM_wflow.sh round 3
* Updated FV3LAM_wflow template
* Fixed the dependencies of the vx tasks
* Manual merge with develop that didn't seem to work before. Trying to get the feature branch updated so it will run again!
* Add local module files
* Add environment variable for SCRIPTSDIR
* Remove echo statement
* Remove old module files
* Update to config_defaults for walltime for ndas pull. Update to metplus parm for obs file template. Update to FV3LAM xml to not include 00 hour for verification
* Update template to remove full path
* Verification changes for obs. (#4)
* Verification changes for obs.
* Update config_defaults.sh for vx description
* Update config_defaults.sh to remove extraneous MET info.
Co-authored-by: Michelle Harrold <[email protected]>
* Pull in updates from develop that were not merging properly. Small change to config.community to turn off vx tasks by default.
* Did manual merge of these files because it was not handled properly automatically * Adding additional variables to METplus for regional workflow (#5) * Updates to address comments in PR review * Changed name of get_obs to remove _tn * Missed removal of on get_obs Co-authored-by: michelleharrold <[email protected]> Co-authored-by: gsketefian <[email protected]> Co-authored-by: Michael Kavulich <[email protected]> Co-authored-by: JeffBeck-NOAA <[email protected]> Co-authored-by: Lindsay <[email protected]> Co-authored-by: Michelle Harrold <[email protected]> Co-authored-by: PerryShafran-NOAA <[email protected]>
christinaholtNOAA pushed a commit that referenced this pull request on Nov 29, 2021
* Fix to post flat file. * Create MET and METplus config files under ush/templates/parm * Added script to pull and reorg ccpa data. Added a script to run gridstat with METplus. Updated MET and METplus config files. * Added new jjob for running grid-stat vx. Updated setup.sh to include grid-stat vx. Updated run_gridstatvx script. * Fixed typo on script name from ksh to sh * Moved some hard coded items out from the script to the XML * Updates to get METplus to run with fewer hard-coded paths. * Updates to add grid-stat task to XML generation. * Bug fixes for adding grid-stat to XML generation * Updates to remove hard-coded paths in config files * Change log dir to put master_metplus log file with other logs under log/, rather than default logs/. * Updates to generate xml without hard-coded paths for MET * Add hera gridstat module file * Add METplus point-stat task for both sfc and upper air * Small tweaks to remove hard coded paths and add some flexibility * Updates for adding point-stat into auto-generated xml * Add in function to set point-stat task to FALSE * Final tweaks to get it to generate the xml correctly * Minor updates to run ensure 0,6,12,18 * Tweaks to var list for Point-Stat * Add METplus settings to config_defaults * Move quote for end of settings and fix extra comment. * Fix typos to populate templates correctly * Updated to include SCRIPTSDIR and other MET specific settings along with updates to FHR syntax * Update module loads on hera * Fixed comment for BOTH_VARn_THRESH to avoid syntax issues * Added files to run grid_stat for a variety of accumulation intervals, including 3, 6, and 24h * Added module load hpss * Remove module load informatino from these scripts * Updated the method of turning on/off vx tasks using jinja template if statement * Remove commented out lines of code. Fixed typo. Removed gen_wflow.out file. * Updated pull scripts to have file names dependent on date to pull from HPSS. 
Updated to export a few more local variables that METplus conf needed in scripts. Updated workflow to use service queue (for now) to for 1h grid_stat and point_stat run and default for 3+h accumulation grid_stat runs) * moved common_hera.conf to common.conf - no platform specific information included that needs to be handled. * Remove common_hera.conf * Add scripts to pull and process MRMS data from NOAA HPSS * Updates for REFC vx tasks * updates to obs pull scripts * Update for adding in reflectivity verification using MRMS analyses and updating name of model output to RRFS rather than HRRR * Updates to account for CCPA issues on HPSS - day off for 00-05 UTC directories * Verification mods to feature/add metplus (#1) * Remove unused/outdated code (#313) ## DESCRIPTION OF CHANGES: * In setup.sh and generate_FV3LAM_wflow.sh, remove temporary codes that fix bugs in the FV3_GFS_2017_gfdlmp_regional suite definition file because those bugs have been fixed (in the ufs-weather-model repo). * In setup.sh, remove block of code that is no longer necessary because chgres_cube can now initialize from external model data with either 4 or 9 soil levels, and run with LSMs of either 4 or 9 soil levels. * Remove modifications to LD_LIBRARY_PATH in exregional_run_fcst.sh. * For the make_ics and make_lbcs tasks, move the setting of APRUN and other machine-specific actions from the J-job to the ex-script in order to be consistent with the other workflow tasks. * Fix indentation and edit comments. * Remove unused file load_fv3gfs_modules.sh. ## TESTS CONDUCTED: Ran two WE2E tests on hera, new_ESGgrid and new_GFDLgrid: * new_ESGgrid uses the FV3_GFS_2017_gfdlmp_regional suite. The test was successful. * new_GFDLgrid uses the FV3_GFS_2017_gfdlmp suite. The test was successful. ## ISSUE (optional): This resolves issue #198. 
* Add and call a function that checks for use of the Thompson microphysics parameterization in the SDF and, if so, adjusts certain workflow arrays to contain the names and other associated values of the fixed files needed by this parameterization so that those files are automatically copied and/or linked to. (#319)

## DESCRIPTION OF CHANGES:
Add and call a function that checks for use of the Thompson microphysics parameterization in the suite definition file (SDF). If not, do nothing. If so, add to the appropriate workflow arrays the names and other associated values of the fixed files needed by this parameterization so that they are automatically copied and/or linked to instead of being regenerated from scratch in the run_fcst task.

## TESTS CONDUCTED:
On Hera, ran two WE2E tests, one in NCO mode (nco_RRFS_CONUS_25km_HRRRX_RAPX) and the other in community mode (suite_FV3_GSD_v0). These use the FV3_GSD_SAR and FV3_GSD_v0 suites, respectively, and both call Thompson microphysics. Both succeeded.

## ISSUE (optional):
This PR resolves issue #297.

* RRFS_v1beta SDF changes after reverting from the GSL to the GFS GWD suite (#322) (#327)

## DESCRIPTION OF CHANGES:
Removed checks on the RRFS_v1beta SDF implemented for use with the GSL GWD suite (it now uses the GFS GWD suite). The workflow no longer copies the staged orography files necessary for the GSL GWD suite.

## TESTS CONDUCTED:
Runs to completion on Hera. The end-to-end runs DOT_OR_USCORE and suite_FV3_RRFS_v1beta succeeded on Cheyenne.

Co-authored-by: JeffBeck-NOAA <[email protected]>

* Update FV3.input.nml for fhzero = 1.0.
* Updated conf files for file name conventions.
* Updated MET scripts and MRMS pull scripts.
* Adjust RRFS_CONUS_... grids (#294)

## DESCRIPTION OF CHANGES:
* Adjust the RRFS_CONUS_25km, RRFS_CONUS_13km, and RRFS_CONUS_3km grid parameters so that:
  * All grids, including their 4-cell-wide halos, lie completely within the HRRRX domain.
  * All grids have dimensions nx and ny that factor "nicely", i.e. they have no factors greater than 7.
  * The write-component grids corresponding to these three native grids cover as much of the native grids as possible without going outside the native grid boundaries. The updated NCL scripts (see below) were used to generate the write-component grid parameters.
* For the RRFS_CONUS_13km grid, reduce the time step (DT_ATMOS) from 180 s to 45 s. This is necessary to get a successful forecast with the GSD_SAR suite, and thus likely also the RRFS_v1beta suite.
* Modify the WE2E testing system as follows:
  * Add new tests with the RRFS_CONUS_25km, RRFS_CONUS_13km, and RRFS_CONUS_3km grids that use the GFS_v15p2 and RRFS_v1beta suites (now the ones officially supported in the first release of the short-range weather app) instead of the GFS_v16beta and GSD_SAR suites, respectively.
  * For clarity, rename the test configuration files that use the GFS_v16beta and GSD_SAR suites so they include the suite name.
  * Update the list of WE2E tests (baselines_list.txt).
* Update the NCL plotting scripts to be able to plot grids with the latest version of the workflow.

## TESTS CONDUCTED:
On Hera, ran tests with all three grids with the GFS_v15p2 and RRFS_v1beta suites (a total of 6 tests). All were successful.

* Remove redundant model_configure.${CCPP_PHYS_SUITE} template files; use Jinja2 to create model_configure (#321)

## DESCRIPTION OF CHANGES:
* Remove the model_configure template files whose names depend on the physics suite, i.e. files with names of the form model_configure.${CCPP_PHYS_SUITE}. Only a single template file is needed because the contents of the model_configure file are not suite dependent. This leaves just one template file (named model_configure).
* Change the function create_model_configure_file.sh and the template file model_configure so they use Jinja2 instead of sed to replace placeholder values.
* Absorb the contents of the write-component template files wrtcmp_lambert_conformal, wrtcmp_regional_latlon, and wrtcmp_rotated_latlon into the new Jinja2-compliant model_configure file. This is possible because Jinja2 allows if-statements in the template file.
* In the new model_configure Jinja2 template file, include comments to explain the various write-component parameters.

## TESTS CONDUCTED:
On Hera, ran the two WE2E tests new_ESGgrid and new_GFDLgrid. The first uses a "lambert_conformal" type of write-component grid, and the second uses a "rotated_latlon" type. (The write-component also allows "regional_latlon" grids, which use the usual earth-relative latlon coordinate system, but we do not have any cases that use that type.) Both tests succeeded.

## ISSUE (optional):
This PR resolves issue #281.

* Add Thompson ice- and water-friendly aerosol climo file support (#332)
  * Add an if-statement in set_thompson_mp_fix_files.sh to source the Thompson climo file when using a combination of a Thompson-based SDF and non-RAP/HRRR external model data.
  * Modify the if-statement based on the external models for the Thompson climo file.
* Remove workflow variable EMC_GRID_NAME (#333)

## DESCRIPTION OF CHANGES:
* Remove the workflow variable EMC_GRID_NAME. Henceforth, PREDEF_GRID_NAME is the only variable that can be used to set the name of the predefined grid.
* Make the corresponding change of variable name (EMC_GRID_NAME --> PREDEF_GRID_NAME) in the WE2E test configuration files.
* Change anywhere the "conus" and "conus_c96" grids are specified to "EMC_CONUS_3km" and "EMC_CONUS_coarse", respectively.
* Rename the WE2E test configuration files whose names contain the strings "conus" and "conus_c96" by replacing those strings with "EMC_CONUS_3km" and "EMC_CONUS_coarse", respectively.
* Update the list of WE2E test names (tests/baselines_list.txt).
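The switch from sed to Jinja2 in PR #321 works because Jinja2 supports conditional blocks directly in the template, so one model_configure template can cover all write-component grid types. A minimal sketch of that mechanism (the template text and variable names here are hypothetical, not the actual model_configure contents):

```python
from jinja2 import Template

# Hypothetical template illustrating the if-statement mechanism that lets
# a single template cover lambert_conformal / rotated_latlon grid types.
template_text = """{% if wrtcmp_type == "lambert_conformal" %}
output_grid: lambert_conformal
cen_lon: {{ cen_lon }}
{% elif wrtcmp_type == "rotated_latlon" %}
output_grid: rotated_latlon
{% endif %}"""

# trim_blocks drops the newline that follows each {% ... %} tag.
tmpl = Template(template_text, trim_blocks=True)
out = tmpl.render(wrtcmp_type="lambert_conformal", cen_lon=-97.5)
print(out)
```

Only the branch matching the selected write-component type is emitted, which is what makes the three separate wrtcmp_* template files unnecessary.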
* Bug fixes not directly related to grids:
  * In config.nco.sh, remove the settings of QUEUE_DEFAULT, QUEUE_HPSS, and QUEUE_FCST, since these are now set automatically (due to another PR).
  * In the template file FV3LAM_wflow.xml, add the ensemble member name after RUN_FCST_TN in the dependency of the run_post metatask.

## TESTS CONDUCTED:
Since this change only affects runs in NCO mode, the following NCO-mode WE2E tests were rerun on Hera, all successfully:
```
nco_EMC_CONUS_3km                                SUCCESS
nco_EMC_CONUS_coarse                             SUCCESS
nco_EMC_CONUS_coarse__suite_FV3_GFS_2017_gfdlmp  SUCCESS
nco_RRFS_CONUS_25km_HRRRX_RAPX                   SUCCESS
nco_RRFS_CONUS_3km_FV3GFS_FV3GFS                 SUCCESS
nco_RRFS_CONUS_3km_HRRRX_RAPX                    SUCCESS
nco_ensemble                                     SUCCESS
```

* Port workflow to Orion (#309)

## DESCRIPTION OF CHANGES:
* Add stanzas for Orion where necessary.
* Add new module files for Orion.
* On Orion, both the slurm partition and the slurm QOS need to be specified in the rocoto XML in order to allow wall times longer than 30 minutes (the partition must be specified because it defaults to "debug", which has a 30-minute limit). Thus, introduce modifications to more easily specify slurm partitions:
  * Remove the workflow variables QUEUE_DEFAULT_TAG, QUEUE_HPSS_TAG, and QUEUE_FCST_TAG that are currently used to determine whether QUEUE_DEFAULT, QUEUE_HPSS, and QUEUE_FCST specify the names of queues/QOSs or slurm partitions.
  * Add the workflow variables PARTITION_DEFAULT_TAG, PARTITION_HPSS_TAG, and PARTITION_FCST_TAG. These will be used to specify slurm partitions only, and the variables QUEUE_DEFAULT, QUEUE_HPSS, and QUEUE_FCST will be used to specify queues/QOSs only.
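The partition/queue split described above amounts to a small piece of conditional XML generation: a `<partition>` element is emitted only when one of the new PARTITION_* variables is set (as required on Orion), while the queue/QOS name always goes in `<queue>`. A toy sketch of that logic (the helper name is hypothetical; the tag names follow rocoto's conventions, but this is illustrative only, not the workflow's actual generation code):

```python
# Hypothetical helper: build the batch-system lines for one rocoto task.
# <queue> always carries the queue/QOS; <partition> appears only when a
# PARTITION_* workflow variable supplies a slurm partition name.
def rocoto_batch_xml(queue, partition=None):
    lines = ["<queue>{}</queue>".format(queue)]
    if partition:  # empty/None means "let the scheduler use its default"
        lines.append("<partition>{}</partition>".format(partition))
    return "\n".join(lines)

# On Orion both must be given to allow wall times beyond 30 minutes:
print(rocoto_batch_xml("batch", partition="orion"))
# On machines without PARTITION_* set, only <queue> appears:
print(rocoto_batch_xml("batch"))
```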
IMPORTANT NOTE: On Orion, in order to load the regional_workflow environment needed for generating an experiment, the user must first issue the following commands:
```
module use -a /apps/contrib/miniconda3-noaa-gsl/modulefiles
module load miniconda3
conda activate regional_workflow
```

## TESTS CONDUCTED:
Ran 11 WE2E tests on Orion, Hera, and Cheyenne.

Results on Orion:
```
community_ensemble_2mems  SUCCESS
DOT_OR_USCORE             SUCCESS
grid_GSD_HRRR_AK_50km     FAILURE - In the run_fcst task. Error message:
                          !!! (1) Error in subr radiation_aerosols: unrealistic surface pressure = 1 NaN
new_ESGgrid               SUCCESS
new_GFDLgrid              SUCCESS
regional_001              SUCCESS
regional_002              SUCCESS
suite_FV3_GFS_v15p2       SUCCESS
suite_FV3_GFS_v16beta     SUCCESS
suite_FV3_GSD_SAR         SUCCESS
suite_FV3_GSD_v0          SUCCESS
```

Results on Hera:
```
community_ensemble_2mems  SUCCESS
DOT_OR_USCORE             SUCCESS
grid_GSD_HRRR_AK_50km     SUCCESS
new_ESGgrid               SUCCESS
new_GFDLgrid              SUCCESS
regional_001              SUCCESS
regional_002              SUCCESS
suite_FV3_GFS_v15p2       SUCCESS
suite_FV3_GFS_v16beta     SUCCESS
suite_FV3_GSD_SAR         SUCCESS
suite_FV3_GSD_v0          SUCCESS
```

Results on Cheyenne:
```
community_ensemble_2mems  SUCCESS
DOT_OR_USCORE             SUCCESS
grid_GSD_HRRR_AK_50km     FAILURE - In the run_fcst task. Error message:
                          !!! (1) Error in subr radiation_aerosols: unrealistic surface pressure = 1 NaN
new_ESGgrid               SUCCESS
new_GFDLgrid              SUCCESS
regional_001              SUCCESS
regional_002              SUCCESS
suite_FV3_GFS_v15p2       SUCCESS
suite_FV3_GFS_v16beta     SUCCESS
suite_FV3_GSD_SAR         SUCCESS
suite_FV3_GSD_v0          SUCCESS
```

All tests succeeded except grid_GSD_HRRR_AK_50km on Orion and Cheyenne. It is not clear why grid_GSD_HRRR_AK_50km fails on Orion and Cheyenne but not on Hera; this seems to point to a bug in the forecast model. These two failures are not critical since this grid will soon be deprecated. Also tested successfully on Jet by @JeffBeck-NOAA and on Odin and Stampede by @ywangwof.

## ISSUE:
This resolves issue #152.
## CONTRIBUTORS:
@JeffBeck-NOAA @ywangwof @christinaholtNOAA

* Removed comments from exregional_get_mrms_files.sh and removed fhzero from FV3.input.yml.
* Update FV3.input.nml for fhzero = 1.0.
* Updated conf files for file name conventions.
* Updated MET scripts and MRMS pull scripts.
* Removed comments from exregional_get_mrms_files.sh and removed fhzero from FV3.input.yml.

Co-authored-by: gsketefian <[email protected]>
Co-authored-by: Michael Kavulich <[email protected]>
Co-authored-by: JeffBeck-NOAA <[email protected]>
Co-authored-by: Jamie Wolff <[email protected]>

* Change cov_thresh for REFL to be a true max in the neighborhood, as SPC does.
* Job script for get_obs_ccpa
* Job script for get_obs_mrms
* Job script for get_obs_ndas
* Added external variables necessary to the get_ccpa script
* Updated workflow template with separate get_obs tasks
* Separated pull scripts from run scripts
* Added necessary defaults/values for defining pull tasks
* Added module files and default config.sh options, and changed dependencies for vx tasks
* Changed name of new workflow to FV3LAM_wflow.xml
* Added task get_obs_tn, removed config.sh, updated config_defaults and config.community.sh
* Adjusted the community and default config files based on comments
* Updated FV3LAM workflow
* Fixed discrepancies in config.community.sh (rounds 1-3)
* Fixed discrepancies in config_defaults.sh (rounds 1-4)
* Fixed discrepancies in generate_FV3LAM_wflow.sh (rounds 1-3)
* Updated FV3LAM_wflow template
* Separated Pull Data Scripts from Run Vx Scripts: Feature/add_metplus (#2) (a squash of the commits listed above)
* Fixed the dependencies of the vx tasks
* Fixed Vx Task Dependencies in Workflow: Feature/add_metplus (#3) (a squash of the commits listed above)
* Manual merge with develop that didn't seem to work before; trying to get the feature branch updated so it will run again.
* Add local module files
* Add environment variable for SCRIPTSDIR
* Remove echo statement
* Remove old module files
* Update config_defaults walltime for the ndas pull; update the METplus parm for the obs file template; update the FV3LAM xml to not include the 00 hour for verification
* Update template to remove full path
* Verification changes for obs. (#4)
  * Verification changes for obs.
  * Update config_defaults.sh for vx description
  * Update config_defaults.sh to remove extraneous MET info.

Co-authored-by: Michelle Harrold <[email protected]>

* Initial METplus .confs and MET config files for EnsembleStat APCP
* J-job script for running EnsembleStat
* Exregional script for EnsembleStat
* Added EnsembleStat.conf for A6 and A24; added PCPCombine to A3, A6, and A24.
* Added EnsembleStatConfig files for 6 and 24 h
* Copy of workflow template with precipitation ensemble tasks added; will become the main template when testing is complete
* Added export statement for number of ensemble members
* Added necessary task definitions in ush
* Updated workflow to include ENTITY definitions for ensstat
* Fixed typo
* Added ens vx configs
* Pulled in updates from develop that were not merging properly; small change to config.community to turn off vx tasks by default.
* Added/modified files for point ens vx.
* Updated METplus conf files for ens point vx
* Did a manual merge of these files because it was not handled properly automatically
* Adding additional variables to METplus for regional workflow (#5)
  * Changes made based on meeting with Michelle and Jamie
  * Updating fork
  * Cleanup after merge
* Added additional ens vx
* Ensemble point vx mods
* Additional updates for ens and det vx
* ensgrid_mean and ensgrid_prob .conf files for APCP
* Updates for ensemble vx.
* Added mean and prob point-stat configs
* Updates to ensgrid_vx
* Updates for mean/prob vx.
* Updates to FV3LAM_wflow.xml
* Deterministic and ensemble vx updates.
* Ensgrid mean
* Update setup.sh
* Changed workflow template title
* Updates to deterministic and ensemble verification
* Created EnsembleStat METplus conf and MET config files for REFC
* Added reflectivity mean and prob METplus and MET config files; updated APCP mean and prob METplus and MET config files.
* Added all J-job scripts, exregional scripts, and necessary definitions for workflow generation for all ensgrid_mean and ensgrid_prob tasks
* Updates to workflow to add ensgrid_vx
* Changes made to account for runtime errors.
* Made changes to directory structures
* Made changes to directory structures and variables
* Changed log files and stage dir.
* Changes for grid- and point-vx.
* Updated METplus ensemble precip conf files.
* Mods for ensemble and deterministic vx.
* Change to GridStatConfig_REFC_mean
* Updated EnsembleStat_REFC.conf
* Updated to METv10.0.0
* Updated conf files for paths.
* Updated FV3LAM_wflow.xml template.
* Mods for vx dependencies
* Updated for censor thresh in METplus conf files; changes to FV3LAM_wflow.xml after sync with develop.
* Updated exregional_run_fcst.sh and generate_FV3LAM_wflow.sh to address the merge with develop.
* Mods for ensemble precip vx, handling padded/non-padded ensemble member names, and fixes for the python environment for the obs pull.
* Changes to RETOP (units) and REFC (naming and level) verification.
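The PCPCombine additions for A3, A6, and A24 mentioned above build longer precipitation accumulations out of hourly APCP fields. A toy sketch of the arithmetic (this is not MET's implementation, just the concept behind the A3/A6/A24 accumulation intervals):

```python
# Toy illustration of what PCPCombine's sum method does conceptually:
# collapse an hourly APCP series into non-overlapping longer accumulations.
def accumulate(hourly, window):
    """Sum consecutive `window`-hour chunks of an hourly precip series."""
    return [sum(hourly[i - window:i])
            for i in range(window, len(hourly) + 1, window)]

hourly_apcp = [1, 0, 2, 0, 0, 1, 3, 0, 0, 0, 1, 1]  # 12 hourly values
print(accumulate(hourly_apcp, 3))  # four 3-h accumulations
print(accumulate(hourly_apcp, 6))  # two 6-h accumulations
```

In the real workflow PCPCombine operates on gridded GRIB/netCDF fields rather than scalars, but the windowing is the same.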
* Fix OUTPUT_BASE for deterministic vx.
* Changes to some verification ex-scripts for syntax and path fixes; included start and end dates of the incorrect 01-h CCPA data; removed some extra lines in the FV3LAM_wflow.xml template.
* Changed comp. ref. variable name in GridStat_REFC_prob.conf
* Changed comp. ref. level in GridStat_REFC_prob.conf
* Updated logic for number padding in the directory name when running in ensemble mode.
* Added MET ensemble vx WE2E test.
* Modified location of obs to live outside the cycle dir, allowing obs to be shared across cycles.
* Mods to address comments on PR #575.
* Updated ensemble METplus conf files for changes to the post output name.
* Addressed comments in the PR and mods for 10-m WIND.
* Addressed final comments in the PR.

Co-authored-by: Jamie Wolff <[email protected]>
Co-authored-by: gsketefian <[email protected]>
Co-authored-by: Michael Kavulich <[email protected]>
Co-authored-by: JeffBeck-NOAA <[email protected]>
Co-authored-by: lindsayrblank <[email protected]>
Co-authored-by: Michelle Harrold <[email protected]>
Co-authored-by: PerryShafran-NOAA <[email protected]>
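The padded/non-padded ensemble member-name handling noted in the commits above comes down to deriving a zero-pad width from the ensemble size. A hypothetical helper illustrating that logic (the real workflow derives the directory names differently; this is only a sketch):

```python
# Hypothetical helper illustrating padded vs. non-padded member names.
def member_dirname(imem, num_members, pad=True):
    if pad:
        # Pad to the width of the largest member number, e.g. 10 -> "mem03".
        width = len(str(num_members))
        return "mem{:0{}d}".format(imem, width)
    return "mem{}".format(imem)

print(member_dirname(3, 10))             # padded
print(member_dirname(3, 10, pad=False))  # unpadded
```

Handling both forms matters because the METplus conf templates must match whichever directory-name convention the forecast tasks actually produced.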
Merging in head of develop from authoritative repo.