
Add CMake build to UFS_UTILS #91

Closed
GeorgeGayno-NOAA opened this issue Mar 16, 2020 · 44 comments
@GeorgeGayno-NOAA
Collaborator

GeorgeGayno-NOAA commented Mar 16, 2020

v1 of the public release used CMake to build chgres_cube. Use CMake to build all programs in the repository. Remove the old build system.

GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue Mar 16, 2020
Add submodule to NOAA-EMC/CMakeModules repository.
GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue Mar 17, 2020
Add required cmake files to build chgres_cube.  Update
"build_chgres_cube.sh", "modulefiles/chgres_cube.wcoss_dell_p3"
and "machine-setup.sh" to invoke cmake on Dell.
@GeorgeGayno-NOAA
Collaborator Author

Added the required cmake files from the public release branch to the 'feature/cmake' branch at cb887b6. The "build_chgres_cube.sh" script was updated to invoke cmake on Dell. To get the config step to work, I had to update Modules/FindWGRIB2.cmake (under the git@github.com:NOAA-EMC/CMakeModules.git repo) as follows:

--- a/Modules/FindWGRIB2.cmake
+++ b/Modules/FindWGRIB2.cmake
@@ -5,7 +5,7 @@ endif()

 find_path (WGRIB2_INCLUDES
   wgrib2api.mod
-  HINTS ${WGRIB2_ROOT}/include)
+  HINTS ${WGRIB2_ROOT}/include ${WGRIB2_ROOT}/lib)

When I invoke the build script, the configure step works, but the build step fails:

-- Configuring done
-- Generating done
-- Build files have been written to: /gpfs/dell2/emc/modeling/noscrub/George.Gayno/ufs_utils.git/UFS_UTILS/build
+ make
Scanning dependencies of target chgres_cube.exe
[  8%] Building Fortran object sorc/chgres_cube.fd/CMakeFiles/chgres_cube.exe.dir/program_setup.f90.o
/gpfs/dell2/emc/modeling/noscrub/George.Gayno/ufs_utils.git/UFS_UTILS/sorc/chgres_cube.fd/program_setup.f90(343): error #7002: Error in opening the compiled module file.  Check INCLUDE paths.   [ESMF]
  use esmf
------^

The path to the esmf module directory is not being set. Not sure why that is.

@kgerheiser
Contributor

CMake finds ESMF through the environment variable ESMFMKFILE. Is that set? It throws a warning when running cmake if it's not.
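As a hedged illustration of the mechanism Kyle describes (this is a sketch, not the actual NOAA-EMC FindESMF.cmake): a find module typically reads the ESMFMKFILE environment variable, which points at esmf.mk, and warns when it is unset. The variable names below other than ESMFMKFILE itself are assumptions for illustration.

```cmake
# Sketch of how a FindESMF-style module might locate ESMF via ESMFMKFILE.
if(DEFINED ENV{ESMFMKFILE})
  set(ESMFMKFILE "$ENV{ESMFMKFILE}")
  message(STATUS "Reading ESMF settings from: ${ESMFMKFILE}")
  # esmf.mk defines make variables such as ESMF_F90COMPILEPATHS and
  # ESMF_F90LINKPATHS; a find module parses these lines to build the
  # include paths and link flags for the imported ESMF target.
  file(STRINGS "${ESMFMKFILE}" esmf_mk_lines)
else()
  # This is the warning referred to above: without ESMFMKFILE, the
  # esmf module path is never added and 'use esmf' fails to compile.
  message(WARNING "Environment variable ESMFMKFILE not set; ESMF not found")
endif()
```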

GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue Mar 17, 2020
Add back 'find_package' calls to "CMakeLists.txt".

Change fortran instrinsic 'ALOG' to 'LOG' in "surface.F90".

Point to official version of esmf v8 on Dell.
@GeorgeGayno-NOAA
Collaborator Author

Updated the branch following Kyle's suggestion (725dd1c). The compilation moved past the previous error but now fails at the link step. The path to the netcdf library is not being defined (although it is able to find the netcdf include directory):

[100%] Linking Fortran executable chgres_cube.exe
ld: cannot find -lnetcdff
ld: cannot find -lnetcdf
make[2]: *** [sorc/chgres_cube.fd/chgres_cube.exe] Error 1
make[1]: *** [sorc/chgres_cube.fd/CMakeFiles/chgres_cube.exe.dir/all] Error 2
make: *** [all] Error 2

@kgerheiser
Contributor

Is that because of this? NOAA-EMC/CMakeModules#30

@GeorgeGayno-NOAA
Collaborator Author

Is that because of this? NOAA-EMC/CMakeModules#30

I am almost certain that is the problem. But I can't figure out what to do.

@kgerheiser
Contributor

kgerheiser commented Mar 17, 2020

If you run make VERBOSE=1 you can see the exact flags that are used and see what the problem is. Like, is it not adding correct -L or what. And did the CMake invocation spit out any errors for FindNetCDF?

@climbfuji
Contributor

Have you tried adding this block to the top-level CMakeLists.txt in UFS_UTILS before find_package(NetCDF MODULE REQUIRED) is called?

# Add environment variable NETCDF to CMAKE_PREFIX_PATH
# for PkgConfig, and set cmake variable accordingly
if(NOT NETCDF)
  if(NOT DEFINED ENV{NETCDF})
    message(FATAL_ERROR "Environment variable NETCDF not set")
  else()
    list(APPEND CMAKE_PREFIX_PATH $ENV{NETCDF})
    set(NETCDF $ENV{NETCDF})
  endif()
  if(DEFINED ENV{NETCDF_FORTRAN})
    list(APPEND CMAKE_PREFIX_PATH $ENV{NETCDF_FORTRAN})
  endif()
endif()

@GeorgeGayno-NOAA
Collaborator Author

Have you tried adding this block to the top-level CMakeLists.txt in UFS_UTILS before find_package(NetCDF MODULE REQUIRED) is called?

# Add environment variable NETCDF to CMAKE_PREFIX_PATH
# for PkgConfig, and set cmake variable accordingly
if(NOT NETCDF)
  if(NOT DEFINED ENV{NETCDF})
    message(FATAL_ERROR "Environment variable NETCDF not set")
  else()
    list(APPEND CMAKE_PREFIX_PATH $ENV{NETCDF})
    set(NETCDF $ENV{NETCDF})
  endif()
  if(DEFINED ENV{NETCDF_FORTRAN})
    list(APPEND CMAKE_PREFIX_PATH $ENV{NETCDF_FORTRAN})
  endif()
endif()

I thought the above logic was meant to replace the find_package call. I added the call back and it now compiles. Thanks.

@climbfuji
Contributor

climbfuji commented Mar 17, 2020 via email

GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue Mar 17, 2020
Add back package_find for netcdf.  Remove set of compiler and
compiler flags from dell build module, which are not used
by cmake.  Minor cleanup of "build_chgres_cube.sh".
GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue Mar 17, 2020
Update compiler flags to those used by 'develop'.
Update install directory to that used by 'develop'.
GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue Mar 17, 2020
Updates to compile chgres_cube on Hera.
@GeorgeGayno-NOAA
Collaborator Author

Using the branch at 24a7829, I can compile chgres_cube on Dell and Hera. Now I am having problems on Jet. It fails at the link step because the "-openmp" flag is obsolete with Intel v18. I don't have this problem on Dell and Hera, where the correct flag "-qopenmp" is used. So how is the link.txt file created? And why does it contain the correct flag on Dell and Hera, but not on Jet?

@climbfuji
Contributor

I was looking at your branch. Firstly, the compiler flags are not set in the same way as they are in the release v1 branch (UFS_UTILS/sorc/chgres_cube.fd/CMakeLists.txt).

The way @aerorahul did the rework, the magic happens in UFS_UTILS/CMakeLists.txt. If the option OPENMP is ON (default is OFF), then the following call figures out the correct flags (or at least it should):

option(OPENMP "use OpenMP threading" OFF)
...
if(OPENMP)
  find_package(OpenMP REQUIRED COMPONENTS Fortran)
endif()

Then, in UFS_UTILS/sorc/chgres_cube.fd/CMakeLists.txt:

release v1 branch

...
if(CMAKE_Fortran_COMPILER_ID MATCHES "^(Intel)$")
  set(CMAKE_Fortran_FLAGS "-g -traceback")
  set(CMAKE_Fortran_FLAGS_RELEASE "-O3")
  set(CMAKE_Fortran_FLAGS_DEBUG
      "-O0 -check -check noarg_temp_created -check nopointer -warn -warn noerrors -fp-stack-check -fstack-protector-all -fpe0 -debug -ftrapuv"
  )
elseif(CMAKE_Fortran_COMPILER_ID MATCHES "^(GNU|Clang|AppleClang)$")
  set(CMAKE_Fortran_FLAGS "-g -fbacktrace -ffree-form -ffree-line-length-0")
  set(CMAKE_Fortran_FLAGS_RELEASE "-O3")
  set(CMAKE_Fortran_FLAGS_DEBUG
      "-O0 -ggdb -fno-unsafe-math-optimizations -frounding-math -fsignaling-nans -ffpe-trap=invalid,zero,overflow -fbounds-check"
  )
endif()
...
if(OpenMP_Fortran_FOUND)
  target_link_libraries(${exe_name} OpenMP::OpenMP_Fortran)
endif()

your branch

if(CMAKE_Fortran_COMPILER_ID MATCHES "^(Intel)$")
  set(CMAKE_Fortran_FLAGS "-g -traceback")
  set(CMAKE_Fortran_FLAGS_RELEASE "-O3 -fp-model precise -r8 -i4 -qopenmp -convert big_endian -assume byterecl")
  set(CMAKE_Fortran_FLAGS_DEBUG
      "-O0 -check -check noarg_temp_created -check nopointer -warn -warn noerrors -fp-stack-check -fstack-protector-all -fpe0 -debug -ftrapuv"
  )
elseif(CMAKE_Fortran_COMPILER_ID MATCHES "^(GNU|Clang|AppleClang)$")
  set(CMAKE_Fortran_FLAGS "-g -fbacktrace -ffree-form -ffree-line-length-0")
  set(CMAKE_Fortran_FLAGS_RELEASE "-O3")
  set(CMAKE_Fortran_FLAGS_DEBUG
      "-O0 -ggdb -fno-unsafe-math-optimizations -frounding-math -fsignaling-nans -ffpe-trap=invalid,zero,overflow -fbounds-check"
  )
endif()
...
if(OpenMP_Fortran_FOUND)
  target_link_libraries(${exe_name} OpenMP::OpenMP_Fortran)
endif()

Thus, the first thing I would do is to remove the hard-coded -qopenmp because this is against the logic to control and set the OpenMP flags using -DOPENMP=ON and CMake's own tools. Then I would try to compile it without turning OPENMP on, i.e. just omit the argument -DOPENMP=ON (because the default is OFF) or use -DOPENMP=OFF explicitly.

If this works, try it with -DOPENMP=ON. If need be, you can write the flags that CMake figured out to stdout using message(INFO "${...}"). See https://cmake.org/cmake/help/v3.15/module/FindOpenMP.html for the variables that the find_package(OpenMP) call is supposed to set.
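Following the suggestion above, a minimal sketch of how to echo what FindOpenMP detected (the option name OPENMP matches the snippet quoted earlier; the message lines are additions for diagnosis):

```cmake
# Sketch: after find_package(OpenMP), print the variables FindOpenMP sets,
# to verify which flag (-qopenmp vs -openmp) CMake actually detected.
option(OPENMP "use OpenMP threading" OFF)

if(OPENMP)
  find_package(OpenMP REQUIRED COMPONENTS Fortran)
  # These variables are documented by CMake's FindOpenMP module.
  message(STATUS "OpenMP_Fortran_FOUND:     ${OpenMP_Fortran_FOUND}")
  message(STATUS "OpenMP_Fortran_FLAGS:     ${OpenMP_Fortran_FLAGS}")
  message(STATUS "OpenMP_Fortran_LIB_NAMES: ${OpenMP_Fortran_LIB_NAMES}")
endif()
```

If OpenMP_Fortran_FLAGS shows -qopenmp here but -openmp still appears at link time, the stale flag is coming from somewhere other than FindOpenMP, which is what the rest of this thread tracks down.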

GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue Mar 18, 2020
Updates to build on Jet.

Turn on openmp check in ./CMakeLists.txt.

Remove hard-coded openmp flag in .sorc/chgres_cube.fd/CMakeLists.txt.
The flag should be set automatically by cmake openmp check.
@GeorgeGayno-NOAA
Collaborator Author

Per @climbfuji's recommendation, updated the branch to remove the hard-coded openmp flag and to turn on cmake's OPENMP check (54428ec). On Jet, the check works and adds the correct openmp flag during compilation, but not at link time.

-- Found OpenMP_Fortran: -qopenmp (found version "5.0")
-- Found OpenMP: TRUE (found version "5.0") found components: Fortran

I updated ./sorc/chgres_cube.fd/CMakeLists.txt to print this message. I assume that is where the link step occurs:

--- a/sorc/chgres_cube.fd/CMakeLists.txt
+++ b/sorc/chgres_cube.fd/CMakeLists.txt
@@ -42,6 +42,7 @@ target_link_libraries(
   ${WGRIB2_LIBRARIES}
   ${NETCDF_LIBRARIES})
 if(OpenMP_Fortran_FOUND)
+  message("found openmp " ${OpenMP_Fortran_FLAGS})
   target_link_libraries(${exe_name} OpenMP::OpenMP_Fortran)
 endif()

and the logic is being tripped with the correct flag:

-- Found sp: /lfs3/projects/hfv3gfs/nwprod/NCEPLIBS/lib/sp_v2.0.2/libsp_v2.0.2_d.a
found openmp -qopenmp
-- Configuring done

but it is still using -openmp at link time.

@climbfuji
Contributor

@aerorahul do you have any insight on what might be going on?

@aerorahul
Contributor

@GeorgeGayno-NOAA Can you point me to your branch and the setup where you are trying to build this?
I get it is on Jet. I'll try to log on to Jet and check.

@aerorahul
Contributor

@climbfuji @GeorgeGayno-NOAA
The -openmp flag is coming from L40 in esmf.mk.
Someone did not compile esmf properly, or did, but ignored the fact that -openmp is not valid on Jet.

@aerorahul
Contributor

also, you don't need -openmp in ESMFF90LINKOPTS. That flag should be added to the application that links with OpenMP. ESMF provides a library only.

@climbfuji
Contributor

Which ESMF version is it that you are using on Jet? I "usually" build ESMF with OPENMP=OFF.

@GeorgeGayno-NOAA
Collaborator Author

also, you don't need -openmp in ESMFF90LINKOPTS. That flag should be added to the application that links with OpenMP. ESMF provides a library only.

I have never used ESMFF90LINKOPTS when compiling (maybe I should have?). chgres_cube works perfectly fine without it.

@GeorgeGayno-NOAA
Collaborator Author

Which ESMF version is it that you are using on Jet? I "usually" build ESMF with OPENMP=OFF.

module use /mnt/lfs3/projects/hfv3gfs/gwv/ljtjet/lib/modulefiles
module load esmflocal/ESMF_8_0_0_beta_snapshot_21

@aerorahul
Contributor

also, you don't need -openmp in ESMFF90LINKOPTS. That flag should be added to the application that links with OpenMP. ESMF provides a library only.

I have never used ESMFF90LINKOPTS when compiling (maybe I should have?). chgres_cube works perfectly fine without it.

@GeorgeGayno-NOAA
L40 in chgres_cube.fd/CMakeLists.txt links with esmf.
That pulls in ESMF_INTERFACE_LINK_LIBRARIES, a property of the imported target created by FindESMF.cmake. See lines 100-103 in FindESMF.cmake.

The L103 property is set at L96 of FindESMF.cmake, which includes ESMF_F90LINKOPTS. The latter is obtained from esmf.mk.
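The chain Rahul describes can be sketched as follows. This is a hedged reconstruction, not the actual FindESMF.cmake; the variable contents on the right are assumptions standing in for values parsed out of esmf.mk:

```cmake
# Sketch of the imported-target mechanism: link options parsed from esmf.mk
# (including ESMF_F90LINKOPTS) land on the target's link interface, so a
# stale "-openmp" recorded in esmf.mk propagates to every application that
# does target_link_libraries(... esmf), regardless of FindOpenMP's result.
add_library(esmf UNKNOWN IMPORTED)
set_target_properties(esmf PROPERTIES
  IMPORTED_LOCATION             "${ESMF_LIBRARY}"          # from esmf.mk
  INTERFACE_INCLUDE_DIRECTORIES "${ESMF_F90COMPILEPATHS}"  # from esmf.mk
  INTERFACE_LINK_LIBRARIES      "${ESMF_F90LINKOPTS}")     # carries -openmp
```

This is why fixing the flag in the application's CMakeLists.txt had no effect: the bad flag was baked into the ESMF installation on Jet.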

GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue May 7, 2020
Update to build.jet per Rahul's suggestion.
GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue May 8, 2020
Updates to make threading optional.  Note: global_chgres
and global_cycle require threads.  So don't build them
when user does not choose to use threads.
GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue May 8, 2020
Bug fix to ./emcsfc_snow2mdl.fd/CMakeLists.txt
GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue May 8, 2020
Move Intel compilation flags to root level CmakeLists.txt
file.
GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue May 8, 2020
Move GNU compilation flags to the root level CMakeLists.txt file.
GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue May 11, 2020
Add load of build module to the regression test and
gdas_init driver scripts (Hera).  Ensures consistent
module use thru the repository (per Rahul).
GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue May 11, 2020
Point to Dusan's updated FindWGRIB2.cmake.
GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue May 11, 2020
Load build module in Jet regression test scripts.
GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue May 11, 2020
Load build modules within Dell reg test and gdas_util
driver scripts.
GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue May 12, 2020
Load build module within the Cray reg test and gdas_init
driver scripts.
GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue May 12, 2020
Additional clean up for the Cray driver scripts.
GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue May 12, 2020
Additional updates to Hera driver scripts.
GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue May 12, 2020
Additional cleanup for Dell driver scripts.
GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue May 13, 2020
Some additional driver script clean up.
GeorgeGayno-NOAA added a commit to GeorgeGayno-NOAA/UFS_UTILS that referenced this issue May 13, 2020
Additional Cray driver script clean up.
@GeorgeGayno-NOAA
Collaborator Author

All reviewer comments have been addressed. See #101 for more details.

Next, compile and then rerun all regression tests on Jet, Hera, Cray and Dell.

@GeorgeGayno-NOAA
Collaborator Author

First, ensure the latest version of the branch (608e52e) compiles on all officially supported machines. This was confirmed on Orion, Hera, Jet, WCOSS-Cray and WCOSS-Dell for the Intel compiler (both "Release" and "Debug" build types). On Hera, it was also successfully compiled using the GNU compiler (both "Release" and "Debug" build types). Additionally, Dusan was able to build it on his Linux workstation.

Next, the regression tests will be run.

@GeorgeGayno-NOAA
Collaborator Author

The grid generation driver scripts (under ./driver_scripts) were modified to remove individual module loads. These (608e52e) were rerun on Jet, Hera, WCOSS-Cray/Dell and confirmed to be working.

The GDAS initialization scripts (under ./util/gdas_init) were also modified to remove module loads. They were rerun on Hera and WCOSS-Cray/Dell (there is no Jet version) and confirmed to be working.

@GeorgeGayno-NOAA
Collaborator Author

The nemsio utilities (nemsio_read, nemsio_get, nemsio_chgdate, mkgfsnemsioctl) do not have regression tests, but I tested them using a canned case:

  • Hera: /scratch/NCEPDEV/da/George.Gayno/ufs_utils.git/hera_port/nemsio_utilities
  • Jet: /lfs2/HFIP/emcda/George.Gayno/ufs_utils.git/jet_port/nemsio_utilities
  • Venus: /gpfs/dell2/emc/modeling/noscrub/George.Gayno/ufs_utils.git/nemsio_utils
  • Surge: /gpfs/hps3/emc/global/noscrub/George.Gayno/ufs_utils.git/nemsio_utils

All tests worked correctly using 608e52e.

@GeorgeGayno-NOAA
Collaborator Author

The global_chgres program is nearly obsolete, but I ran a quick test using these canned cases:

  • Hera: /scratch1/NCEPDEV/da/George.Gayno/ufs_utils.git/hera_port/chgres_serial
  • Jet: /lfs3/HFIP/emcda/George.Gayno/ufs_utils.git/jet_port/chgres_serial
  • Venus: /gpfs/dell2/emc/modeling/noscrub/George.Gayno/ufs_utils.git/chgres_serial
  • Surge: /gpfs/hps3/emc/global/noscrub/George.Gayno/ufs_utils.git/chgres_serial

All tests worked correctly using the branch at 608e52e.

@GeorgeGayno-NOAA
Collaborator Author

The chgres_cube regression test suite (./reg_tests/chgres_cube) and ice_blend test (./reg_tests/ice_blend) were run on Jet, Hera, Venus and Surge using 608e52e. All tests passed.

@GeorgeGayno-NOAA
Collaborator Author

GeorgeGayno-NOAA commented May 14, 2020

The global_cycle regression test suite (./reg_tests/global_cycle) was run on Jet, Hera, Venus and Surge. The test passed on Jet, Hera and Venus. It failed on Surge. (used 608e52e)

The test compares the output tiled netcdf files to a baseline set using the nccmp utility. Two of the surface files (tile3 and tile6) had 'floating point' differences. Example:

Variable Group Count          Sum      AbsSum          Min          Max       Range         Mean      StdDev
tsea     /         8 -3.41061e-13 4.54747e-13 -5.68434e-14  5.68434e-14 1.13687e-13 -4.26326e-14 4.01944e-14
tisfc    /         3  -1.7053e-13  1.7053e-13 -5.68434e-14 -5.68434e-14           0 -5.68434e-14           0
tref     /        17  -1.7053e-13 9.66338e-13 -5.68434e-14  5.68434e-14 1.13687e-13 -1.00312e-14 5.76733e-14
tfinc    /         3  -1.7053e-13  1.7053e-13 -5.68434e-14 -5.68434e-14           0 -5.68434e-14           0
stc      /        12 -6.82121e-13 6.82121e-13 -5.68434e-14 -5.68434e-14           0 -5.68434e-14           0

These differences are insignificant. global_cycle is working correctly. The threshold used by nccmp should be increased.

@GeorgeGayno-NOAA
Collaborator Author

The snow2mdl regression test (./reg_tests/snow2mdl) was run on Jet, Hera, Venus and Surge (using 608e52e). The test runs snow2mdl to create a T1534 snow cover and snow depth analysis. The output is compared to a baseline file. The test passed on Jet, Hera and Venus. It failed on Surge.

On Surge, the output from the branch was compared to the baseline file in GrADS. The snow cover records were identical. Six snow depth points near -73/Greenwich differed by 0.0001 meters. The differences are likely due to compiling the branch with "-O3" vs "-O0" for 'develop'. But I cannot explain why this difference happened only on the Cray. Anyway, the output, while different, is correct.

@GeorgeGayno-NOAA
Collaborator Author

GeorgeGayno-NOAA commented May 15, 2020

The last regression test is for the grid generation codes (./reg_tests/grid_gen) (used 608e52e). The test creates a C96 global uniform grid and a C96 regional grid. The 'grid', 'oro' and surface climo files are compared to a baseline set of data using the nccmp command. On Jet, Hera, Surge and Venus, the global test passed, but the regional test failed.

The 'grid' files had differences in the grid box area record. But these are small compared to their magnitude (~10**7). Here is the difference (Venus) for the C96_grid.tile7.halo4.NC file:

Variable Group Count         Sum    AbsSum         Min       Max     Range        Mean    StdDev
area     /         9 -0.00901271 0.0991399 -0.00901273 0.0180254 0.0270382 -0.00100141 0.0122954

The differences on other machines were similar. So this is not a problem.

The other differences were noted in the filtered orography record. Here is the difference (Venus) for the C96_oro_data.tile7.halo4.nc file:

Variable  Group Count     Sum  AbsSum      Min     Max   Range      Mean   StdDev
orog_filt /       410 28.7892 138.521 -3.92627 10.1989 14.1251 0.0702175 0.985272

These differences are large enough to be concerning. Examining the differences in ncview, they were confined to the few rows along the lateral boundaries. The interior had no differences. Wanting to know what could be happening, I started to add some print statements. I then discovered an array dimension problem in filter_topo.F90. The orography was incorrectly dimensioned with a rank of 4 instead of 3 in routine FV3_zs_filter. When I made the correction:

real, intent(IN):: stretch_fac
     logical, intent(IN) :: nested, regional
-    real, intent(inout):: phis(isd:ied,jsd,jed,ntiles)
+    real, intent(inout):: phis(isd:ied,jsd:jed,ntiles)
     real:: cd2
     integer mdim, n_del2, n_del4

I got this difference from the baseline file:

Variable  Group Count     Sum  AbsSum     Min     Max   Range     Mean  StdDev
orog_filt /       532 96.1873 280.998 -5.0249 25.0226 30.0475 0.180803 1.98246

That is very different. So I suspect there is a bug in the filter_topo code, and the test failure is unrelated to the CMake build. I plan to merge, but I will open an issue to look into any problems with the filter_topo program.

@GeorgeGayno-NOAA
Collaborator Author

I am seeing similar behavior with filter_topo in develop (270f9dc). If I fix the rank of phis and print out the value of phis(1,1), I get this difference in file C96_oro_data.tile7.halo4.nc:

Variable  Group Count     Sum  AbsSum      Min     Max   Range     Mean  StdDev
orog_filt /       396 103.033 318.697 -25.4385 26.2191 51.6576 0.260184 2.97099

If I add another print statement for phis(1,2), I get this difference:

Variable  Group Count       Sum AbsSum      Min     Max   Range        Mean   StdDev
orog_filt /       370 -0.375389 68.313 -2.69452 3.75537 6.44989 -0.00101457 0.484862

So there is likely some kind of out-of-bounds memory access going on with the 'regional' option, since adding print statements changes the results.

So, the CMake build is not the culprit. Will merge the feature/cmake branch to develop.

@GeorgeGayno-NOAA
Collaborator Author

Merged to 'develop' at 3ad7d83. Closing issue.
