
User-defined functions as closures/forcings in the momentum equation. #73

Closed · ali-ramadhan opened this issue Feb 24, 2019 · 10 comments
@ali-ramadhan (Member)

Ideally the user would just define an element-wise function closure(..., i, j, k) that gets passed to the model and is added to the velocity or tracer source terms, acting as a forcing term in the momentum and tracer advection equations.

Defining an element-wise function will allow the function to be injected into CPU and GPU kernels.
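For concreteness, here's a minimal sketch of what such a closure might look like (the signature and every name in it are assumptions, not a settled API):

# Hypothetical element-wise closure: relax temperature toward a target
# value at rate 1/τ. All names here are illustrative.
@inline function my_closure(grid, velocities, tracers, i, j, k)
    T_target = 20.0  # target temperature
    τ = 3600.0       # relaxation timescale [s]
    return (T_target - tracers.T[i, j, k]) / τ
end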

@ali-ramadhan (Member, Author)

@glwagner @SandreOuza Just brainstorming some ideas on how to include arbitrary closures and forcing functions on the right hand side of the momentum equation in finite volume. Would be good to get your feedback.

The hackiest/easiest way to do this would maybe be to have a

struct ForcingFunctions
    Fu::Function
    Fv::Function
    Fw::Function
    FT::Function
    FS::Function
    ...
end

be part of the Model struct. These forcing functions must have a very specific signature, e.g.

F(grid, velocities, tracers, i, j, k)

that computes the closure/forcing for volume element (i, j, k); not every input has to be used. Then we can figure out how to implement Smagorinsky closures and AMD (anisotropic minimum dissipation) in finite volume using Oceananigans.Operators (the submodule containing all operators). The reason I've called them forcing functions is that they show up on the right-hand side of the momentum equations, so I'm generically treating all additive RHS terms as "forcings". It may not be the best name if we're implementing LES closures, though.
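As a concrete illustration (a sketch, not a committed API), a Rayleigh-damping sponge on u fits this signature while ignoring the tracer inputs:

# Hypothetical sponge layer: damp u toward zero in the last ten vertical
# grid cells. grid.Nz and velocities.u are illustrative field names.
@inline function Fu_sponge(grid, velocities, tracers, i, j, k)
    damping_rate = 1e-3  # [s⁻¹], illustrative value
    return k > grid.Nz - 10 ? -damping_rate * velocities.u[i, j, k] : 0.0
end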

Then the forcing is only inserted into the associated momentum equation if it's defined. Without a forcing, the code is rewritten to not include one at all, so in this sense it can be a zero-cost abstraction. Here's some pseudocode of what this macro might look like:

# Add the forcing for u-momentum at the end of the expression.
# (Pseudocode: in a real macro, `model` is only a symbol at expansion
# time, so the branch below couldn't actually inspect its fields.)
macro insert_forcing_u(model, ex)
    if model.forcing_functions.Fu === nothing
        return esc(ex)
    else
        return esc(:($ex + Fu(grid, velocities, tracers, i, j, k)))
    end
end

We can do this without macros by having the following default setting for all these forcing functions

@inline F(grid, velocities, tracers, i, j, k) = 0

which should have the same effect (the compiler will get rid of the call to Fu, if it's as smart as I think it is).
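For instance (sketch only; the kernel line and field names are assumptions), the kernel can add the forcing unconditionally and rely on inlining to eliminate the dead addition when the default is in place:

# Default forcing that contributes nothing.
@inline zero_forcing(grid, velocities, tracers, i, j, k) = 0

# Inside the kernel, the forcing is added unconditionally; with the zero
# default, inlining should reduce the `+ 0` to a no-op:
# Gu[i, j, k] = ... + forcing.Fu(grid, velocities, tracers, i, j, k)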

This general approach might be nice because if the forcing/closure is a function that can be computed for each volume element (i,j,k), then we don't need to store huge arrays of size Nx*Ny*Nz containing forcing terms in memory.

@vchuravy (Collaborator)

One of the worries is that it needs to be:

struct ForcingFunctions{FFu, ...}
    Fu::FFu
    ...
end

since otherwise we won't know which function to call statically, which would stymie the GPU compiler.
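Spelled out fully (a sketch), each function gets its own type parameter so the call target is known at compile time:

# One type parameter per forcing function, so that a call such as
# forcing.Fu(...) is resolved statically (no dynamic dispatch on the GPU).
struct ForcingFunctions{FFu, FFv, FFw, FFT, FFS}
    Fu::FFu
    Fv::FFv
    Fw::FFw
    FT::FFT
    FS::FFS
end

# The default constructor captures each function's concrete type, e.g.
# ForcingFunctions(Fu_sponge, zero_forcing, zero_forcing, zero_forcing, zero_forcing)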

@ali-ramadhan (Member, Author)

Ah I see, the GPU compiler would just see Fu(grid, velocities, tracers, i, j, k) and not know what to call at compile time.

Would struct ForcingFunctions{FFu, ...} be the best approach to inject arbitrary functions (maybe with a specific signature) into a GPU kernel at run time? Can't think of many options myself.

@vchuravy (Collaborator)

ForcingFunctions could also be a tuple or a NamedTuple if you want some more flexibility, but yes generally that should be fine.
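A NamedTuple version of the same idea (a sketch; the names are illustrative):

# Each function's concrete type is part of the NamedTuple's type, so the
# same static-dispatch property holds without defining a struct.
forcing = (Fu = (grid, U, C, i, j, k) -> -1e-4 * U.u[i, j, k],
           FT = (grid, U, C, i, j, k) -> 0)

# forcing.Fu(grid, velocities, tracers, i, j, k)  # statically dispatched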

@glwagner (Member)

My two cents on the implementation:

I think it'll be nice to group fields and forcings into objects in which the field 'name' acts as a coordinate. So for forcings we could have

struct Forcing{Tu,...}
  u::Tu
  ...
end

rather than using Fu as the name of the field.

I vote for @ali-ramadhan's second non-macro approach involving a 'zero function' default for forcing terms that can be overridden by the user.

Note that in addition to implementing forcings on the RHS of the equations, we also need an API that allows the user to define these forcing functions. @ali-ramadhan, perhaps we can work on these in parallel.

Do we need to resolve the isbitstype issue for the function signature F.u(grid, velocities, tracers, i, j, k) to be possible?

@vchuravy, @ali-ramadhan and I talked about using FieldVector as a parent type for Forcing (and Velocities, Tracers, etc) as a nice way to get tuple-like features. What do you think about that?
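For reference, the FieldVector pattern looks roughly like the sketch below (names are illustrative). One caveat: FieldVector{N, T} requires a single element type T, which suits same-typed components like velocities better than a container of differently-typed forcing functions.

using StaticArrays

# Named fields plus tuple/vector-like behavior via FieldVector.
struct PointVelocity{T} <: FieldVector{3, T}
    u::T
    v::T
    w::T
end

pv = PointVelocity(1.0, 0.5, 0.0)
pv.u   # named access
pv[2]  # indexed access, like a tuple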

@glwagner (Member)

As discussed with @jm-c, I think another feature we will need is the ability to define closure-specific temporary variables.

For example, the implementation of Constant Smagorinsky or Anisotropic Minimum Dissipation will benefit from the ability to compute the eddy viscosity (at 4 locations in a cell --- 3 FaceFields and 1 CellField) and re-use it in calculating the contribution of the subgrid closure to the momentum 'source terms'.

@vchuravy (Collaborator)

> and I talked about using FieldVector as a parent type for Forcing (and Velocities, Tracers, etc) as a nice way to get tuple-like features. What do you think about that?

Should be fine, but I leave the abstractions to @peterahrens ;)

> closure-specific temporary variables.

What do you mean by that? Are these temporaries globals that ought to be updated and carried between different forcing functions?

@willow-ahrens

FieldVectors from StaticArrays are a great solution, as are SLArrays from https://github.com/JuliaDiffEq/LabelledArrays.jl
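For example (a sketch; the keyword constructor is from the LabelledArrays docs):

using LabelledArrays

# A labelled static vector gives struct-like named access with
# tuple-like performance.
v = SLVector(u = 1.0, v = 0.5, w = 0.0)
v.u   # 1.0, named access
v[3]  # 0.0, indexed access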

@glwagner (Member) · Mar 1, 2019

@peterahrens thanks for that tip!

@vchuravy, by closure-specific temporary variables (or fields), I am referring to domain-size quantities that can be re-used in computing the contribution of an LES closure to the 'source terms' for momentum and tracers.

In other words, this line, which computes the source term for u, could become something like

kernel_temporaries!(subgrid_closure, ..., i, j, k)
Gu[i, j, k] = ... + subgrid_closure.u(subgrid_closure_temporaries, ..., i, j, k)

... I think. In physical terms, the intermediate variable is an 'eddy viscosity' that acts on all momentum terms.

But I guess when I say the computation 'will benefit' from variables to store intermediate results, what I really mean is that I'm expecting the calculation of this temporary variable to be fairly involved (potentially >20 derivatives of velocity and tracer fields in x, y, z, plus scalar operations to combine the fields, and a 'clipping' function to make sure the viscosity is not negative --- see the formula for the eddy viscosity predictor).

It is also of course possible to simply write a function that calculates the eddy viscosity, and call this function repeatedly to avoid temporaries altogether.
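To make that recompute-instead-of-store option concrete, here is a heavily simplified pointwise sketch (diagonal strain components only, uniform spacing Δ, all names illustrative):

# Simplified constant-Smagorinsky eddy viscosity ν = (Cs Δ)² √(2 Sᵢⱼ Sᵢⱼ),
# keeping only the diagonal strain-rate components. A real implementation
# would use the staggered-grid operators in Oceananigans.Operators.
@inline function ν_smagorinsky(u, v, w, Δ, Cs, i, j, k)
    Sxx = (u[i+1, j, k] - u[i, j, k]) / Δ
    Syy = (v[i, j+1, k] - v[i, j, k]) / Δ
    Szz = (w[i, j, k+1] - w[i, j, k]) / Δ
    return (Cs * Δ)^2 * sqrt(2 * (Sxx^2 + Syy^2 + Szz^2))
end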

@ali-ramadhan (Member, Author)

> It is also of course possible to simply write a function that calculates the eddy viscosity, and call this function repeatedly to avoid temporaries altogether.

Don't know if it's always possible, but if we can avoid temporary variables by aggressively inlining all calculations, that would be cool. Every temporary field saved means being able to run a larger model on GPUs.

We will always need a couple of temporary arrays, so if we can't inline we can just try to share and reuse them as much as possible.
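The reuse pattern might look like this sketch (the mutating functions are hypothetical names, just to show the flow):

# One shared scratch array, reused by successive computations rather than
# allocating a separate temporary field for each intermediate quantity.
Nx = Ny = Nz = 16
scratch = zeros(Nx, Ny, Nz)

# compute_eddy_viscosity!(scratch, velocities)  # hypothetical: fill with ν
# add_viscous_terms!(Gu, scratch, velocities)   # hypothetical: consume ν
# compute_divergence!(scratch, velocities)      # then overwrite and reuse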
