
Feature/fv3core fortran api #395

Merged
merged 22 commits into main from feature/fv3core_fortran_api on Dec 9, 2022
Conversation

@oelbert (Contributor) commented Nov 29, 2022

Purpose

This PR adds an API to call the Pace dycore from GEOS.

Code changes:

  • fv3core/pace/fv3core/initialization/geos_wrapper.py: Added a wrapper API to access the dynamical core from GEOS. Initialize it with a namelist, communicator, and backend; call it with numpy arrays and it returns a dictionary of numpy arrays.
  • tests/main/fv3core/test_init_from_geos.py: A simple test that checks that the GEOS wrapper is initialized correctly, populates the dycore state correctly, steps through the dycore, and outputs arrays in the output dictionary.
  • fv3core/pace/fv3core/__init__.py: Added the GEOS wrapper to __init__.py to make it importable at the package level.
  • util/pace/util/grid/eta.py: Added code to set up an atmosphere with 91 vertical levels for GEOS runs.
  • util/pace/util/initialization/sizer.py: Revised the from_namelist method to work with our updated namelist configuration that doesn't use fv_core_nml as a key.
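The call pattern described above (initialize with a namelist, communicator, and backend; call with numpy arrays; receive a dictionary of numpy arrays) can be sketched with a toy stand-in. `ToyDycoreWrapper` here is hypothetical and only mirrors the described interface; it is not the real Pace `GeosDycoreWrapper`:

```python
import numpy as np

class ToyDycoreWrapper:
    """Hypothetical stand-in mirroring the interface described above."""

    def __init__(self, namelist: dict, communicator=None, backend: str = "numpy"):
        self.namelist = namelist
        self.backend = backend

    def __call__(self, u: np.ndarray, v: np.ndarray) -> dict:
        # A real wrapper would copy the inputs into dycore state,
        # step the dynamical core, then copy state back out.
        return {"u": u.copy(), "v": v.copy()}

wrapper = ToyDycoreWrapper({"nx_tile": 12, "nz": 91}, backend="numpy")
out = wrapper(np.zeros((4, 4)), np.ones((4, 4)))
```

The key design point is that the wrapper is numpy-in/dict-of-numpy-out, so the Fortran side never sees Pace's internal state objects.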

Checklist

Before submitting this PR, please make sure:

  • You have followed the coding standards guidelines established at Code Review Checklist.
  • Docstrings and type hints are added to new and updated routines, as appropriate
  • All relevant documentation has been updated or added (e.g. README, CONTRIBUTING docs)
  • For each public change and fix in pace-util, HISTORY has been updated
  • Unit tests are added or updated for non-stencil code changes

@oelbert oelbert marked this pull request as ready for review December 1, 2022 16:40
@oelbert oelbert requested review from elynnwu and jdahm December 1, 2022 16:42
from pace import fv3core


class GeosDycoreWrapper:
Collaborator

[nit] I believe NASA always refers to it as GEOS


return self.output_dictionary

def _put_fortran_data_in_dycore(
Collaborator

Just wanted to check - is the idea here to not use translate fvdynamics anymore?

Contributor Author

Yup, that's the idea

},
"initialization": {"type": "baroclinic"},
"nx_tile": 12,
"nz": 79,
Collaborator

I think you want to test the additional vertical case you added, either 91 or 72

Contributor Author

I guess so

@@ -204,14 +204,372 @@ def set_hybrid_pressure_coefficients(km: int) -> HybridPressureCoefficients:
1,
]
)

elif km == 91:
Collaborator

I think this is fine since this is how we set up our npz=79 case, but we should probably merge this so it is possible to replace ak/bk with custom values.
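For context on what ak/bk encode: in a hybrid sigma-pressure coordinate, the pressure at vertical interface k is p(k) = ak(k) + bk(k) * p_surface. A minimal sketch with illustrative coefficients (these are NOT the real 79- or 91-level GEOS values):

```python
import numpy as np

# Illustrative ak/bk for a 3-layer column, not real GEOS coefficients.
ak = np.array([0.0, 500.0, 300.0, 0.0])  # Pa, pure-pressure contribution
bk = np.array([0.0, 0.2, 0.6, 1.0])      # terrain-following contribution

def interface_pressure(ps: float) -> np.ndarray:
    # Hybrid coordinate: p(k) = ak(k) + bk(k) * ps
    return ak + bk * ps

p = interface_pressure(ps=100000.0)  # 1000 hPa surface pressure
```

Allowing custom ak/bk arrays, as suggested above, would mean accepting these two vectors directly instead of switching on the hard-coded km cases.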

class GeosDycoreWrapper:
"""
Provides an interface for the Geos model to access the Pace dycore.
Takes numpy arrays as inputs, returns a dictionary of numpy arrays as outputs
Collaborator

So this only works for numpy and the CPU backend? We should extend this to take cupy arrays as well?

Contributor Author

Ideally we can also make this work on GPU. The reason for the current input/output setup is to comport with Purnendu's code, which puts the Fortran data in numpy arrays and reads a dictionary back out.

Contributor Author

Should be gpu-functional now, I hope?

Contributor

This should be safe. You probably want to add some comments in there to explain the upload mechanism:

Fortran data comes in on the CPU.
The dycore state is initialized to GPU storage thanks to the backend in QuantityFactory.
safe_assign_array is capable of recognizing that source and destination are not in the same memory space and deals with the upload/download.

In case the Fortran memory is on the GPU (unlikely) and comes as a cupy array, this will make a copy.

This upload mechanism is slow and requires its own optimization via staggered cudaMemcpyAsync, which is not present at the moment.
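The memory-space-aware copy described in this comment can be sketched as follows. This is a hypothetical helper illustrating the behavior, not Pace's actual `safe_assign_array` implementation; the cupy path is optional and only exercised when cupy is installed:

```python
import numpy as np

try:
    import cupy as cp  # optional GPU path
except ImportError:
    cp = None

def safe_assign(dest, src):
    """Copy src into dest, handling host<->device transfers.

    Sketch of the behavior described above, not Pace's actual
    safe_assign_array.
    """
    if cp is not None and isinstance(dest, cp.ndarray) and isinstance(src, np.ndarray):
        dest.set(src)        # upload: host -> device
    elif cp is not None and isinstance(src, cp.ndarray) and isinstance(dest, np.ndarray):
        src.get(out=dest)    # download: device -> host
    else:
        np.copyto(dest, src)  # same memory space

a = np.zeros(3)
safe_assign(a, np.array([1.0, 2.0, 3.0]))
```

Each cross-space branch is a synchronous transfer, which is the slowness the comment refers to; overlapping those transfers with staggered async copies is the optimization it says is not present yet.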

Comment on lines 89 to 101
if "fv_core_nml" in namelist.keys():
    layout = namelist["fv_core_nml"]["layout"]
    # npx and npy in the namelist are cell centers, but npz is mid levels
    nx_tile = namelist["fv_core_nml"]["npx"] - 1
    ny_tile = namelist["fv_core_nml"]["npy"] - 1
    nz = namelist["fv_core_nml"]["npz"]
elif "nx_tile" in namelist.keys():
    layout = namelist["layout"]
    # everything is cell centered in this format
    nx_tile = namelist["nx_tile"]
    ny_tile = namelist["nx_tile"]
    nz = namelist["nz"]
return cls.from_tile_params(
Contributor

Warning: there seems to be no else here.

Contributor Author

should be ok
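The missing-else concern raised above could be addressed with an explicit error branch, so an unrecognized namelist format fails loudly instead of falling through with undefined variables. A sketch against the two formats shown in the diff (`parse_grid_params` is a hypothetical free function, not the actual classmethod):

```python
def parse_grid_params(namelist: dict):
    # Sketch of the branch above with an explicit else.
    if "fv_core_nml" in namelist:
        core = namelist["fv_core_nml"]
        # npx and npy in the namelist are cell centers, but npz is mid levels
        return core["layout"], core["npx"] - 1, core["npy"] - 1, core["npz"]
    elif "nx_tile" in namelist:
        # everything is cell centered in this format
        return namelist["layout"], namelist["nx_tile"], namelist["nx_tile"], namelist["nz"]
    else:
        raise KeyError("namelist must contain 'fv_core_nml' or 'nx_tile'")

layout, nx, ny, nz = parse_grid_params({"layout": (1, 1), "nx_tile": 12, "nz": 91})
```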

diss_estd: np.ndarray,
) -> dict:

self._put_fortran_data_in_dycore(
Contributor

Grab the timer from pace.util and slap one around each of those function calls. See fv_dynamics for usage. They are GPU-robust already.
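The timing pattern suggested here can be sketched with a minimal context-manager timer. This stand-in uses `time.perf_counter`; the real pace.util timer's API may differ in details:

```python
import time
from contextlib import contextmanager

class Timer:
    """Minimal stand-in for the pace.util timer mentioned above."""

    def __init__(self):
        self.times = {}

    @contextmanager
    def clock(self, name: str):
        start = time.perf_counter()
        try:
            yield
        finally:
            # Accumulate elapsed time under the given label
            self.times[name] = self.times.get(name, 0.0) + time.perf_counter() - start

timer = Timer()
with timer.clock("put_fortran_data_in_dycore"):
    sum(range(1000))  # stand-in for the wrapped call
```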

@FlorianDeconinck
Contributor

launch jenkins

@@ -114,38 +115,41 @@ def __call__(
diss_estd: np.ndarray,
) -> dict:
Contributor

Is there a reason for not making this more strongly typed? Could it be Dict[str, np.ndarray]?

Contributor Author

small oversight, thought I'd done that
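The stronger annotation suggested above would look like this (function name and fields illustrative, not the actual signature):

```python
from typing import Dict

import numpy as np

def step_dycore(u: np.ndarray, v: np.ndarray) -> Dict[str, np.ndarray]:
    # Returning Dict[str, np.ndarray] instead of a bare dict lets type
    # checkers verify that every value in the output is a numpy array.
    return {"u": u, "v": v}

out: Dict[str, np.ndarray] = step_dycore(np.zeros(2), np.ones(2))
```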

@oelbert oelbert enabled auto-merge (squash) December 2, 2022 21:52
@oelbert oelbert disabled auto-merge December 2, 2022 22:18
@oelbert oelbert enabled auto-merge (squash) December 2, 2022 22:23
@oelbert
Contributor Author

oelbert commented Dec 5, 2022

launch jenkins

@oelbert oelbert disabled auto-merge December 9, 2022 18:48
@oelbert oelbert merged commit 5dddd21 into main Dec 9, 2022
@oelbert oelbert deleted the feature/fv3core_fortran_api branch December 9, 2022 18:55