
Add radiation-driven shell to regression tests #810

Merged
merged 17 commits into from
Dec 10, 2024

Conversation

@BenWibking (Collaborator) commented Dec 9, 2024

Description

Adds this problem to the nightly regression test suite. There was also a bug in the initial conditions: the radiation flux was converted incorrectly from spherical to Cartesian components (now fixed).

At late times, the simulation still shows a slight discrepancy with respect to the semi-analytic solution (similar to that shown in the original code paper). This is most likely because the initial shell thickness is quite significant. (There is no easy fix for this, since no steady-state solution satisfies the M1 closure for sufficiently thin shells.)

[Figure: shell_velocity]

[Video: movie_large.mov]
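The spherical-to-Cartesian flux conversion mentioned in the description can be sketched as follows. This is a minimal illustration of the geometry, not the actual Quokka code (the function name and call signature are hypothetical): for a purely radial flux of magnitude F_r at position (x, y, z) relative to the source, each Cartesian component is F_i = F_r * x_i / r.

```python
import math

def radial_flux_to_cartesian(F_r, x, y, z):
    """Project a radially directed flux F_r at position (x, y, z),
    measured relative to the source, onto Cartesian components:
    F_i = F_r * x_i / r. Illustrative sketch only."""
    r = math.sqrt(x * x + y * y + z * z)
    return (F_r * x / r, F_r * y / r, F_r * z / r)

# Example: on the x-axis the flux is purely in the x-direction.
Fx, Fy, Fz = radial_flux_to_cartesian(1.0, 2.0, 0.0, 0.0)
```

The magnitude of the resulting Cartesian vector equals F_r, which is a useful sanity check for initial conditions of this kind.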

Related issues

Closes #743.

Checklist

Before this pull request can be reviewed, all of these tasks should be completed. Denote completed tasks with an x inside the square brackets [ ] in the Markdown source below:

  • I have added a description (see above).
  • I have added a link to any related issues (if applicable; see above).
  • I have read the Contributing Guide.
  • I have added tests for any new physics that this PR adds to the code.
  • (For quokka-astro org members) I have manually triggered the GPU tests with the magic comment /azp run.

@BenWibking BenWibking changed the title Fix radiation-driven shell test Add radiation-driven shell test to regression tests Dec 9, 2024
@BenWibking BenWibking changed the title Add radiation-driven shell test to regression tests Add radiation-driven shell to regression tests Dec 9, 2024
@BenWibking (Collaborator, Author)

/azp run


Azure Pipelines successfully started running 2 pipeline(s).

@BenWibking (Collaborator, Author)

Hmm, it looks like there is a global variable that is misbehaving:

pure virtual method called
terminate called without an active exception
[moth:2945339] *** Process received signal ***
[moth:2945339] Signal: Aborted (6)
[moth:2945339] Signal code:  (-753879392)

Will fix.

@BenWibking (Collaborator, Author)

/azp run


Azure Pipelines successfully started running 2 pipeline(s).

@BenWibking BenWibking marked this pull request as ready for review December 9, 2024 22:14
@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. bug: wrong answer/failure/crash Something isn't working labels Dec 9, 2024
@BenWibking (Collaborator, Author) commented Dec 9, 2024

@markkrumholz @chongchonghe I would appreciate a review of this from both of you, if you have time.

@markkrumholz (Collaborator)

Code changes look fine. As to why we're not exactly preserving the analytic solution, one thing I might check is whether the shell is really very thin -- the analytic solution assumes an infinitely thin shell, and if that assumption is not fully satisfied, then we shouldn't expect to match the solution exactly. It may be necessary to tweak test parameters to reduce shell thickness, for example by reducing the sound speed so that pressure support becomes less important.

markkrumholz
markkrumholz previously approved these changes Dec 9, 2024
@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Dec 9, 2024
@chongchonghe (Contributor) commented Dec 9, 2024

The RadShell test on GPU failed. Even if you fix the bug, I suspect it will take too long to run. I would recommend reducing max_timesteps, which is currently set to 5000. With $128^3 \approx 2$M cells and an assumed performance of 10 Mzone/s/GPU, that implies about 5 steps/s, so 5000 timesteps will take roughly 1000 seconds.

EDIT: the run time is also determined by stopTime_, which is set to 0.125 * t0_hydro. I suspect that corresponds to more than 5000 steps?
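The wall-clock estimate above can be checked with a few lines of arithmetic (the 10 Mzone/s/GPU figure is the assumed throughput from the comment, not a measured value):

```python
# Back-of-the-envelope runtime estimate for the 128^3 RadShell test.
ncells = 128 ** 3                      # 2,097,152 zones (~2M)
throughput = 10e6                      # assumed 10 Mzone/s per GPU
steps_per_sec = throughput / ncells    # ~4.8 steps/s
max_timesteps = 5000
wall_time = max_timesteps / steps_per_sec  # ~1050 s, i.e. ~17.5 minutes
```

This is why reducing max_timesteps (or the resolution used in per-PR CI) matters: the runtime scales linearly with both the step count and the cell count.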

@BenWibking (Collaborator, Author) commented Dec 10, 2024

@chongchonghe The error is this:

Coarse STEP 3203 at t = 7.844071263e+12 (81.33839288%) starts ...
	>> Using global timestep on this coarse step (estimated work ratio: 1).
[Level 0 step 3203] ADVANCE with time = 7.8440712630e+12 dt = 5.5257787970e+07
	>> CFL violation detected on level 0 with dt_lev = 55257787.97 and dt_cfl = 48548760.42
	   max_signal = 993202495.4
	>> WARNING: Hydro advance failed on level 0
	>> Re-trying hydro advance at level 0 with reduced timestep (nsubsteps = 2, dt_new = 27628893.98)
	Radiation substeps: 1	dt: 55257787.97
amrex::Abort::0::Newton-Raphson iteration for matter-radiation coupling failed to converge! !!!
SIGABRT
See Backtrace.0 file for details

I think this indicates our current method is not robust enough.

Edit: Ok, nvm. Something has gone wrong. The timestep is too small here anyway.

Somehow it works on AMD GPU, though...

      Start 34: RadhydroShell
34/52 Test #34: RadhydroShell ......................   Passed  465.20 sec
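The retry behavior visible in the log above (the hydro advance is re-attempted with nsubsteps = 2 and a halved dt) can be sketched as follows. This is an illustrative model of the subcycling logic, not Quokka's actual implementation; the function name and the max_substeps cap are hypothetical:

```python
def retry_with_subcycling(dt_lev, dt_cfl, max_substeps=8):
    """If the attempted timestep dt_lev exceeds the CFL-limited
    timestep dt_cfl, double the number of substeps until each
    substep satisfies the CFL condition (illustrative sketch)."""
    nsubsteps = 1
    while dt_lev / nsubsteps > dt_cfl and nsubsteps < max_substeps:
        nsubsteps *= 2
    return nsubsteps, dt_lev / nsubsteps

# Values taken from the log above:
n, dt_new = retry_with_subcycling(55257787.97, 48548760.42)
# n == 2, dt_new == 27628893.985, matching the log's dt_new = 27628893.98
```

Note that halving the step only cures the CFL violation; the subsequent Newton-Raphson convergence failure in the matter-radiation coupling is a separate robustness issue.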

@BenWibking (Collaborator, Author)

> Code changes look fine. As to why we're not exactly preserving the analytic solution, one thing I might check is whether the shell is really very thin -- the analytic solution assumes an infinitely thin shell, and if that assumption is not fully satisfied, then we shouldn't expect to match the solution exactly. It may be necessary to tweak test parameters to reduce shell thickness, for example by reducing the sound speed so that pressure support becomes less important.

Ah, right. The shell is initially quite thick. This is necessary because there is no steady-state solution of the radiation moment equations for a sufficiently thin shell under the M1 closure. But that probably means we shouldn't expect exact agreement...

@BenWibking (Collaborator, Author)

/azp run


Azure Pipelines successfully started running 2 pipeline(s).

chongchonghe
chongchonghe previously approved these changes Dec 10, 2024
@BenWibking (Collaborator, Author)

/azp run

@BenWibking BenWibking enabled auto-merge December 10, 2024 18:56

Azure Pipelines successfully started running 2 pipeline(s).

@BenWibking (Collaborator, Author)

@chongchonghe I added a density floor, so it now works for 128^3, which is tested in the nightly regression tests. The 64^3 version is used for the /azp run tests for each PR.
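A density floor of the kind mentioned here is typically a clamp applied to the density field so that near-vacuum cells cannot drive the matter-radiation coupling solve into non-convergence. The sketch below is illustrative only; the floor value and function name are hypothetical, and the PR does not state the actual value used:

```python
import numpy as np

def apply_density_floor(rho, rho_floor=1e-25):
    """Clamp density to a minimum value to keep downstream solves
    (e.g. the matter-radiation coupling Newton iteration)
    well-conditioned in evacuated cells. Illustrative sketch;
    the floor value here is a hypothetical placeholder."""
    return np.maximum(rho, rho_floor)

rho = np.array([1e-30, 1e-20, 1e-10])   # g/cm^3, say
rho_floored = apply_density_floor(rho)   # lowest cell raised to 1e-25
```

The trade-off is that a floor injects a small amount of mass; it should be set low enough not to affect the shell dynamics being tested.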

@BenWibking BenWibking added this pull request to the merge queue Dec 10, 2024
Merged via the queue into development with commit 5835ef8 Dec 10, 2024
23 checks passed
@BenWibking BenWibking deleted the BenWibking/fix-radhydro-shell branch December 12, 2024 18:44
Linked issue closed by this PR: add radhydro problem to nightly regression tests.
3 participants