User-defined functions as closures/forcings in the momentum equation. #73
@glwagner @SandreOuza Just brainstorming some ideas on how to include arbitrary closures and forcing functions on the right-hand side of the momentum equation in finite volume. Would be good to get your feedback.

The hackiest/easiest way to do this would maybe be to have a struct

```julia
struct ForcingFunctions
    Fu::Function
    Fv::Function
    Fw::Function
    FT::Function
    FS::Function
    ...
end
```

be part of the Model struct. These forcing functions must have a very specific signature, e.g. `F(grid, velocities, tracers, i, j, k)`, that computes the closure/forcing for volume element (i, j, k), although not every input has to be used. Then we can figure out how to implement Smagorinsky closures and AMD (anisotropic minimum dissipation) in finite volume using these functions.

With a macro, the forcing is only inserted into the associated momentum equation if it's defined. So without a forcing, the code is rewritten to not even include a forcing term, so in this sense this can be a zero-cost abstraction. Here's some pseudocode of what this macro might look like:
```julia
# Add the forcing for u-momentum at the end of the expression.
macro insert_forcing_u(model, ex)
    if model.forcing_functions.Fu === nothing
        return esc(ex)
    else
        Fu = model.forcing_functions.Fu
        return esc(:($ex + $Fu(grid, velocities, tracers, i, j, k)))
    end
end
```

We can do this without macros by having the following default setting for all these forcing functions

```julia
@inline F(grid, velocities, tracers, i, j, k) = 0
```

which should have the same effect (the compiler will get rid of the call to Fu if it's as smart as I think it is). This general approach might be nice because if the forcing/closure is a function that can be computed for each volume element (i, j, k), then we don't need to store huge domain-sized arrays for it.
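For concreteness, here is a minimal sketch of that zero-default idea. All names here (`zero_forcing`, `Gu`, `my_forcing`) are illustrative assumptions, not the actual model API:

```julia
# Every forcing defaults to an inlined function that returns 0, so when the
# user supplies nothing, the `+ Fu(...)` term below is a constant zero the
# compiler can eliminate.
@inline zero_forcing(grid, velocities, tracers, i, j, k) = 0

# Toy u-tendency for one volume element: `physics` stands in for the
# pressure-gradient/advection/diffusion terms the model actually computes.
@inline Gu(physics, Fu, grid, velocities, tracers, i, j, k) =
    physics + Fu(grid, velocities, tracers, i, j, k)

# A user forcing with the required signature, e.g. linear damping of u:
my_forcing(grid, velocities, tracers, i, j, k) = -1e-4 * velocities.u[i, j, k]

# Toy fields for illustration:
velocities = (u = rand(16, 16, 8), v = rand(16, 16, 8), w = rand(16, 16, 8))
tracers    = (T = rand(16, 16, 8), S = rand(16, 16, 8))

Gu(0.0, zero_forcing, nothing, velocities, tracers, 1, 1, 1)  # physics + 0
Gu(0.0, my_forcing,   nothing, velocities, tracers, 1, 1, 1)  # physics + damping
```

Since `Fu` is passed as an argument, Julia specializes `Gu` on its type, so the default case really should compile down to just `physics`.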
One of the worries is that the forcing functions need to be stored with concrete types, since otherwise we won't know which function to call statically, which would stymie the GPU compiler.
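If I'm reading the concern right, fields declared `::Function` only expose the abstract type. One possible sketch of a fix (my assumption of what this could look like, not necessarily the design that was adopted) is to parameterize the struct on each function's concrete type:

```julia
# Each field's concrete function type becomes a type parameter, so a kernel
# compiled for a particular ForcingFunctions{...} knows statically which
# functions it will call.
struct ForcingFunctions{TFu, TFv, TFw, TFT, TFS}
    Fu::TFu
    Fv::TFv
    Fw::TFw
    FT::TFT
    FS::TFS
end

no_forcing(args...) = 0
f = ForcingFunctions(no_forcing, no_forcing, no_forcing, no_forcing, no_forcing)
typeof(f)  # ForcingFunctions{typeof(no_forcing), ...}, fully concrete
```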
Ah I see, the GPU compiler would just see the abstract `Function` type.
My two cents on the implementation: I think it'll be nice to group fields and forcings into objects in which the field 'name' acts as a coordinate. So for forcings we could have

```julia
struct Forcing{Tu, ...}
    u::Tu
    ...
end
```

rather than using names like Fu, Fv, and Fw.

I vote for @ali-ramadhan's second, non-macro approach involving a 'zero function' default for forcing terms that can be overridden by the user. Note that in addition to implementing forcings on the RHS of the equations, we also need an API that allows the user to define these functions. @ali-ramadhan, perhaps we can work on these in parallel. We also need to resolve the static-dispatch issue; @vchuravy, @ali-ramadhan and I have talked about possible approaches.
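A possible sketch combining the named-field grouping with the zero-function default (again with assumed names; the real API may differ), where a keyword constructor fills in a zero forcing for anything the user leaves out:

```julia
@inline zerofunction(args...) = 0

struct Forcing{Tu, Tv, Tw, TT, TS}
    u::Tu
    v::Tv
    w::Tw
    T::TT
    S::TS
end

# Keyword constructor: unspecified forcings default to the zero function.
Forcing(; u=zerofunction, v=zerofunction, w=zerofunction,
          T=zerofunction, S=zerofunction) = Forcing(u, v, w, T, S)

# e.g. the user only forces temperature:
T_relax(grid, velocities, tracers, i, j, k) = -0.01 * tracers.T[i, j, k]
forcing = Forcing(T=T_relax)
forcing.u(nothing, nothing, nothing, 1, 1, 1)  # 0
forcing.T                                      # the user's function
```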
As discussed with @jm-c, I think another feature we will need is the ability to define closure-specific temporary variables. For example, the implementation of Constant Smagorinsky or Anisotropic Minimum Dissipation will benefit from the ability to compute the eddy viscosity (at 4 locations in a cell) once and store it for reuse.
Should be fine, but I leave the abstractions to @peterahrens ;)
What do you mean by that? Are these temporaries globals that ought to be updated and carried between different forcing functions?
FieldVectors from StaticArrays are a great solution, as are SLArrays from https://github.com/JuliaDiffEq/LabelledArrays.jl
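For anyone unfamiliar with them, here is a tiny FieldVector example (the struct name and values are just for illustration); SLArrays from LabelledArrays.jl offer a similar named-field interface:

```julia
using StaticArrays

# Named fields act as 'coordinates' while the object still behaves like a
# fixed-size vector.
struct Velocity <: FieldVector{3, Float64}
    u::Float64
    v::Float64
    w::Float64
end

U = Velocity(0.1, 0.0, -0.02)
U.u    # named access
U[3]   # indexed access
2U     # scalar multiplication returns another static vector
```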
@peterahrens thanks for that tip! @vchuravy, by closure-specific temporary variables (or fields), I am referring to domain-size quantities that can be re-used in computing the contribution of an LES closure to the 'source terms' for momentum and tracers. In other words, the line that computes the source term for u might become something like

```julia
kernel_temporaries!(subgrid_closure, ..., i, j, k)
Gu[i, j, k] = ... + subgrid_closure.u(subgrid_closure_temporaries, ..., i, j, k) ...
```

I think. In physical terms, the intermediate variable is an 'eddy viscosity' that acts on all momentum terms. But I guess when I say the computation 'will benefit' from variables to store intermediate results, what I really mean is that I'm expecting the calculation of this temporary variable to be fairly involved (potentially >20 derivatives of velocity and tracer fields in x, y, z, plus scalar operations to combine the fields, and a 'clipping' function to make sure the viscosity is not negative; see the formula for the eddy viscosity predictor). It is also of course possible to simply write a function that calculates the eddy viscosity, and call this function repeatedly to avoid temporaries altogether.
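To make the trade-off concrete, here is a rough sketch of the two options, with a toy placeholder standing in for the real eddy-viscosity formula (all names here are assumptions for illustration):

```julia
# Toy stand-in for the involved calculation described above (derivatives,
# scalar combinations, clipping); only the clipping is represented here.
@inline eddy_viscosity(U, Δ, i, j, k) = max(0, Δ^2 * U.u[i, j, k])

# Option 1: fill a closure-specific temporary field once per time step and
# reuse it in every momentum and tracer source term.
function precompute_viscosity!(νₑ, U, Δ)
    for I in CartesianIndices(νₑ)
        νₑ[I] = eddy_viscosity(U, Δ, Tuple(I)...)
    end
    return νₑ
end

# Option 2: skip the temporary and recompute the viscosity wherever it is
# needed, trading extra flops for memory.
@inline viscous_term_u(U, Δ, i, j, k) = eddy_viscosity(U, Δ, i, j, k) * U.u[i, j, k]

U  = (u = rand(8, 8, 8),)
νₑ = similar(U.u)
precompute_viscosity!(νₑ, U, 1.0)
viscous_term_u(U, 1.0, 4, 4, 4)
```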
Don't know if it's always possible, but if we can avoid temporary variables by aggressively inlining all calculations, that would be cool. Every temporary field saved means being able to run a larger model on GPUs. We will always need a couple of temporary arrays, so if we can't inline, we can at least try to share and reuse the temporary arrays as much as possible.
Ideally the user would just define an element-wise function

```julia
closure(..., i, j, k)
```

that gets passed to the model and is added to the velocity or tracer source terms, acting as a forcing term in the momentum and tracer advection equations. Defining an element-wise function will allow it to be injected into CPU and GPU kernels.
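As a rough sketch of what that could look like on the CPU (illustrative names throughout; a GPU kernel would follow the same pattern): because the closure is passed as an argument, the kernel specializes on its type and the call can be inlined.

```julia
# A user-defined element-wise closure with access to the fields at (i, j, k):
my_closure(grid, velocities, tracers, i, j, k) = -0.05 * velocities.u[i, j, k]

# The model adds its value to the source term inside the kernel loop.
function add_closure!(Gu, closure, grid, velocities, tracers)
    Nx, Ny, Nz = size(Gu)
    for k in 1:Nz, j in 1:Ny, i in 1:Nx
        Gu[i, j, k] += closure(grid, velocities, tracers, i, j, k)
    end
    return nothing
end

velocities = (u = rand(8, 8, 4), v = rand(8, 8, 4), w = rand(8, 8, 4))
tracers    = (T = rand(8, 8, 4), S = rand(8, 8, 4))
Gu = zeros(8, 8, 4)
add_closure!(Gu, my_closure, nothing, velocities, tracers)  # grid unused in this toy
```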