Commit

Merge branch 'develop' into bugfix_orog_eh
George Gayno committed Mar 19, 2024
2 parents 22a5e0b + 72fb94f commit 0dd5b46
Showing 13 changed files with 114 additions and 13 deletions.
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
-1.12.0
+1.13.0
1 change: 1 addition & 0 deletions docs/main.md
@@ -9,6 +9,7 @@ https://github.com/ufs-community/UFS_UTILS.

## Documentation for Previous Versions of UFS_UTILS

+* [UFS_UTILS Version 1.13.0](ver-1.13.0/index.html)
* [UFS_UTILS Version 1.12.0](ver-1.12.0/index.html)
* [UFS_UTILS Version 1.11.0](ver-1.11.0/index.html)
* [UFS_UTILS Version 1.10.0](ver-1.10.0/index.html)
70 changes: 70 additions & 0 deletions docs/source/ufs_utils.rst
@@ -665,3 +665,73 @@ Run script
----------

To run, use the machine-dependent script under ./util/weight_gen

***************************************************
UFS_UTILS utilities
***************************************************

gdas_init
=========

Introduction
------------

The gdas_init utility creates coldstart initial conditions for global cycled and forecast-only experiments using the chgres_cube program. It has two components: one that pulls the input data required by chgres_cube from HPSS, and one that runs chgres_cube. The utility is supported on the following machines (the data pull requires HPSS access):

* Hera
* Jet
* WCOSS2
* S4 (Only the chgres_cube step is supported, not the data pull step.)

Location
--------

Find it here: ./util/gdas_init

Build UFS_UTILS and set 'fixed' directories
-------------------------------------------

Invoke the build script from the root directory:

::

./build_all.sh

Set the 'fixed' directories using the script in the './fix' subdirectory (where $MACHINE is 'hera', 'jet', 'wcoss2', or 's4'):

::

./link_fixdirs.sh emc $MACHINE

Configure for your experiment
-----------------------------

Edit the variables in the 'config' file for your experiment (a sample config is shown after the note below):

* **EXTRACT_DIR** - Directory where data extracted from HPSS is stored.
* **EXTRACT_DATA** - Set to 'yes' to extract data from HPSS. If data has been extracted and is located in EXTRACT_DIR, set to 'no'. On 's4' this step can't be run. Instead, the data must be pulled from another machine.
* **RUN_CHGRES** - To run chgres, set to 'yes'. To extract data only, set to 'no'.
* **yy/mm/dd/hh** - The two-digit year/month/day/hour of your desired experiment. Pre-ENKF GFS data (prior to 2012 May 21 00z) is not currently supported.
* **LEVS** - Number of hybrid levels plus 1. To run with 127 levels, set LEVS to 128.
* **CRES_HIRES** - Resolution of the hires component of your experiment. Example: C768.
* **CRES_ENKF** - Resolution of the enkf component of your experiment.
* **UFS_DIR** - Location of your cloned UFS_UTILS repository.
* **OUTDIR** - Directory where the coldstart data output from chgres is stored.
* **CDUMP** - When 'gdas', the gdas and enkf members will be processed. When 'gfs', only the gfs member will be processed, for running a free forecast.
* **use_v16retro** - When 'yes', use v16 retrospective parallel data. The retrospective parallel tarballs can be missing or incomplete, so this option may not always work. Contact a UFS_UTILS repository manager if you encounter problems.

Note: This utility selects the ocean resolution in the set_fixed_files.sh script using a default based on the user-selected CRES value. For example, for a cycled experiment with a CRES_HIRES/CRES_ENKF of C384/C192, the ocean resolution defaults to 0.25/0.50-degree. To choose another ocean resolution, the user will need to manually modify the set_fixed_files.sh script.
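
As an illustration only (these values and paths are hypothetical, not part of the repository), a 'config' for a C384/C192 cycled experiment initialized at 2021 March 23 00z might look like:

::

EXTRACT_DIR=/path/to/extract_dir
EXTRACT_DATA=yes
RUN_CHGRES=yes
yy=21
mm=03
dd=23
hh=00
LEVS=128
CRES_HIRES=C384
CRES_ENKF=C192
UFS_DIR=/path/to/UFS_UTILS
OUTDIR=/path/to/output
CDUMP=gdas
use_v16retro=no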


Kick off the utility
--------------------

Submit the driver script (where $MACHINE is 'hera', 'jet', 'wcoss2', or 's4'):

::

./driver.$MACHINE.sh

The standard output will be placed in log files in the current directory.

The converted output will be found in $OUTDIR, including the required abias and radstat initial condition files (if CDUMP=gdas). The files are written in the directory structure expected by the global-workflow system, so a user can move the contents of $OUTDIR directly into their $ROTDIR.
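
For example, assuming $ROTDIR already points at the experiment's global-workflow COMROT directory, the output could be staged with a simple copy (a sketch, not a required step):

::

cp -r $OUTDIR/* $ROTDIR/
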
2 changes: 1 addition & 1 deletion modulefiles/build.jet.intel.lua
@@ -5,7 +5,7 @@ Load environment to compile UFS_UTILS on Jet using Intel
hpss_ver=os.getenv("hpss_ver") or ""
load(pathJoin("hpss", hpss_ver))

prepend_path("MODULEPATH", "/mnt/lfs4/HFIP/hfv3gfs/role.epic/spack-stack/spack-stack-1.6.0/envs/unified-env/install/modulefiles/Core")
prepend_path("MODULEPATH", "/mnt/lfs4/HFIP/hfv3gfs/role.epic/spack-stack/spack-stack-1.6.0/envs/unified-env-rocky8/install/modulefiles/Core")

stack_intel_ver=os.getenv("stack_intel_ver") or "2021.5.0"
load(pathJoin("stack-intel", stack_intel_ver))
2 changes: 1 addition & 1 deletion reg_tests/grid_gen/driver.jet.sh
@@ -72,7 +72,7 @@ TEST1=$(sbatch --parsable --ntasks-per-node=24 --nodes=1 -t 0:20:00 -A $PROJECT_
#-----------------------------------------------------------------------------

LOG_FILE2=${LOG_FILE}02
-TEST2=$(sbatch --parsable --ntasks-per-node=10 --nodes=3 -t 0:15:00 -A $PROJECT_CODE -q $QUEUE -J c96.viirs.bnu \
+TEST2=$(sbatch --parsable --ntasks-per-node=12 --nodes=4 -t 0:15:00 -A $PROJECT_CODE -q $QUEUE -J c96.viirs.bnu \
--partition=xjet -o $LOG_FILE2 -e $LOG_FILE2 ./c96.viirs.bnu.sh)

#-----------------------------------------------------------------------------
6 changes: 3 additions & 3 deletions reg_tests/ice_blend/driver.jet.sh
@@ -53,9 +53,9 @@ fi

export WGRIB=/apps/wgrib/1.8.1.0b/bin/wgrib
export WGRIB2=${WGRIB2_ROOT}/bin/wgrib2
-export COPYGB=/lfs4/HFIP/hfv3gfs/emc.nemspara/role.ufsutils/ufs_utils/grib_util/NCEPLIBS-grib_util/exec/bin/copygb
-export COPYGB2=/lfs4/HFIP/hfv3gfs/emc.nemspara/role.ufsutils/ufs_utils/grib_util/NCEPLIBS-grib_util/exec/bin/copygb2
-export CNVGRIB=/apps/cnvgrib/1.4.0/bin/cnvgrib
+export COPYGB=/lfs4/HFIP/hfv3gfs/role.epic/spack-stack/spack-stack-1.6.0/envs/unified-env-rocky8/install/intel/2021.5.0/grib-util-1.3.0-hrqavdi/bin/copygb
+export COPYGB2=/lfs4/HFIP/hfv3gfs/role.epic/spack-stack/spack-stack-1.6.0/envs/unified-env-rocky8/install/intel/2021.5.0/grib-util-1.3.0-hrqavdi/bin/copygb2
+export CNVGRIB=/mnt/lfs4/HFIP/hfv3gfs/role.epic/spack-stack/spack-stack-1.6.0/envs/unified-env-rocky8/install/intel/2021.5.0/grib-util-1.3.0-hrqavdi/bin/cnvgrib

export HOMEreg=/lfs4/HFIP/hfv3gfs/emc.nemspara/role.ufsutils/ufs_utils/reg_tests/ice_blend

2 changes: 2 additions & 0 deletions util/gdas_init/driver.hera.sh
@@ -21,6 +21,8 @@ module load prod_util/1.1.0
PROJECT_CODE=fv3-cpu
QUEUE=batch

+export machine=hera

source config

if [ $EXTRACT_DATA == yes ]; then
4 changes: 3 additions & 1 deletion util/gdas_init/driver.jet.sh
@@ -20,8 +20,10 @@ PROJECT_CODE=hfv3gfs
QUEUE=batch
PARTITION=xjet

+export machine=jet

# Needed for NDATE utility
-module load prod_util/1.2.2
+module load prod_util/2.1.1

source config

4 changes: 3 additions & 1 deletion util/gdas_init/driver.s4.sh
@@ -15,13 +15,15 @@ module load build.$target.$compiler
module list

# Needed for NDATE utility
-module load prod_util/1.2.2
+module load prod_util/2.1.1

PROJECT_CODE=star
QUEUE=s4

source config

+export machine=s4

if [ $EXTRACT_DATA == yes ]; then

echo "HPSS is not accessible from S4! If you wish to run just the chgres portion, set EXTRACT_DATA=NO in the config file and try again."
2 changes: 2 additions & 0 deletions util/gdas_init/driver.wcoss2.sh
@@ -21,6 +21,8 @@ PROJECT_CODE=GFS-DEV

source config

+export machine=wcoss2

this_dir=$PWD

if [ $EXTRACT_DATA == yes ]; then
4 changes: 2 additions & 2 deletions util/gdas_init/run_v15.chgres.sh
@@ -47,8 +47,8 @@ cat << EOF > fort.41
mosaic_file_target_grid="${FIX_ORO}/${ORO_DIR}/${CTAR}_mosaic.nc"
orog_dir_target_grid="${FIX_ORO}/${ORO_DIR}"
orog_files_target_grid="${ORO_NAME}.tile1.nc","${ORO_NAME}.tile2.nc","${ORO_NAME}.tile3.nc","${ORO_NAME}.tile4.nc","${ORO_NAME}.tile5.nc","${ORO_NAME}.tile6.nc"
-mosaic_file_input_grid="${FIX_ORO}/${CINP}/${CINP}_mosaic.nc"
-orog_dir_input_grid="${FIX_ORO}/${CINP}"
+mosaic_file_input_grid="${FIX_ORO_INPUT}/${CINP}/${CINP}_mosaic.nc"
+orog_dir_input_grid="${FIX_ORO_INPUT}/${CINP}"
orog_files_input_grid="${CINP}_oro_data.tile1.nc","${CINP}_oro_data.tile2.nc","${CINP}_oro_data.tile3.nc","${CINP}_oro_data.tile4.nc","${CINP}_oro_data.tile5.nc","${CINP}_oro_data.tile6.nc"
data_dir_input_grid="${INPUT_DATA_DIR}"
atm_core_files_input_grid="${YMDH}.fv_core.res.tile1.nc","${YMDH}.fv_core.res.tile2.nc","${YMDH}.fv_core.res.tile3.nc","${YMDH}.fv_core.res.tile4.nc","${YMDH}.fv_core.res.tile5.nc","${YMDH}.fv_core.res.tile6.nc","${YMDH}.fv_core.res.nc"
4 changes: 2 additions & 2 deletions util/gdas_init/run_v16retro.chgres.sh
@@ -65,8 +65,8 @@ cat << EOF > fort.41
mosaic_file_target_grid="${FIX_ORO}/${ORO_DIR}/${CTAR}_mosaic.nc"
orog_dir_target_grid="${FIX_ORO}/${ORO_DIR}"
orog_files_target_grid="${ORO_NAME}.tile1.nc","${ORO_NAME}.tile2.nc","${ORO_NAME}.tile3.nc","${ORO_NAME}.tile4.nc","${ORO_NAME}.tile5.nc","${ORO_NAME}.tile6.nc"
-mosaic_file_input_grid="${FIX_ORO}/${CINP}/${CINP}_mosaic.nc"
-orog_dir_input_grid="${FIX_ORO}/${CINP}"
+mosaic_file_input_grid="${FIX_ORO_INPUT}/${CINP}/${CINP}_mosaic.nc"
+orog_dir_input_grid="${FIX_ORO_INPUT}/${CINP}"
orog_files_input_grid="${CINP}_oro_data.tile1.nc","${CINP}_oro_data.tile2.nc","${CINP}_oro_data.tile3.nc","${CINP}_oro_data.tile4.nc","${CINP}_oro_data.tile5.nc","${CINP}_oro_data.tile6.nc"
data_dir_input_grid="${INPUT_DATA_DIR}"
atm_core_files_input_grid="${YMDH}.fv_core.res.tile1.nc","${YMDH}.fv_core.res.tile2.nc","${YMDH}.fv_core.res.tile3.nc","${YMDH}.fv_core.res.tile4.nc","${YMDH}.fv_core.res.tile5.nc","${YMDH}.fv_core.res.tile6.nc","${YMDH}.fv_core.res.nc"
24 changes: 23 additions & 1 deletion util/gdas_init/set_fixed_files.sh
@@ -1,7 +1,8 @@
#!/bin/bash

#---------------------------------------------------------------------------
-# Set directory names and file names for orog data.
+# Set directory names and file names for the target grid orog data.
+# A default ocean resolution (OCNRES) based on CTAR is used.
#---------------------------------------------------------------------------

if [ ${CTAR} == 'C48' ] ; then
@@ -22,3 +23,24 @@ fi

ORO_DIR="${CTAR}"
ORO_NAME="${CTAR}.mx${OCNRES}_oro_data"

#---------------------------------------------------------------------------
# When using the v15/v16 tiled warm restart data as input to the chgres
# step, the input grid orography is needed (there is no orography record
# in the restart files). Since the restart data was created before the
# latest orog version (20231027), need to use a previous version.
#---------------------------------------------------------------------------

if [ "$machine" = 'hera' ] ; then
FIX_ORO_INPUT=/scratch1/NCEPDEV/global/glopara/fix/orog/20230615
elif [ "$machine" = 'wcoss2' ] ; then
FIX_ORO_INPUT=/lfs/h2/emc/global/noscrub/emc.global/FIX/fix/orog/20230615
elif [ "$machine" = 'jet' ] ; then
FIX_ORO_INPUT=/lfs4/HFIP/hfv3gfs/glopara/git/fv3gfs/fix/orog/20230615
elif [ "$machine" = 's4' ] ; then
FIX_ORO_INPUT=/data/prod/glopara/fix/orog/20230615
else
set +x
echo ERROR machine $machine not supported.
exit 3
fi
