
Fix KPP compilation problem due to non-existent empty directory after GitHub transition #1

Merged
merged 1 commit into from
Aug 31, 2016

Conversation

mkavulich
Contributor

TYPE: bug fix

KEYWORDS: WRF-CHEM, KPP, compile

SOURCE: internal

DESCRIPTION OF CHANGES:
The transition to git comes with a bit of a quirk: empty directories are not tracked, so cloning the repository and compiling that code can lead to some interesting failures. In this example, WRF-CHEM KPP compilation fails because the chem/KPP/kpp/kpp-2.1/bin/ directory (which was previously just an empty directory in the Subversion repository) does not exist.

A similar problem was previously solved for WRFDA (commit 6f386a3, Fri Jul 8 19:52:50 2016) by adding a .gitignore to that directory. However, since this directory only holds one file in the compiled code, that solution seems a bit pointless. Adding a mkdir -p command to chem/KPP/compile_wkc ensures that the directory will be made if it does not exist.
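The fix can be sketched as follows; this is an illustrative shell snippet, not the exact edit to chem/KPP/compile_wkc (whose placement and script dialect may differ):

```shell
# git does not track empty directories, so the bin/ directory that
# existed in the Subversion repository is missing after cloning.
# Recreate it before the KPP build tries to write into it.
KPP_BIN_DIR="chem/KPP/kpp/kpp-2.1/bin"
mkdir -p "$KPP_BIN_DIR"   # -p: create parents as needed, no-op if it already exists
```

Because `mkdir -p` succeeds whether or not the directory exists, the guard is safe to run on every compile.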

LIST OF MODIFIED FILES (annotated if not obvious, not required to be on a single line):
M chem/KPP/compile_wkc

TESTS CONDUCTED: WTF passes (previous KPP failures no longer exist).

…m/KPP/kpp/kpp-2.1/bin/" directory. This should solve the KPP compilation problem on github due to the missing empty directory.
mkavulich merged commit 8fafea9 into master Aug 31, 2016
mkavulich deleted the KPP_github_fix branch August 31, 2016 18:13
BinLiu-NOAA pushed a commit that referenced this pull request Nov 17, 2016
KEYWORDS: vertical refinement clean-up

SOURCE: Katie Lundquist (LLNL)

PURPOSE: fix incorrect diag messages, fix incorrect if tests, pass necessary nml variables to interpolating routines

DESCRIPTION OF CHANGES: 
M       Registry/Registry.EM_COMMON
Clean up descriptions of vert_refine_method and vert_refine_fact.

M       dyn_em/module_initialize_les.F
Print out eta levels
Check that the top/bottom specified eta levels are set to 0 and 1, respectively (all domains, even though the IC is only for domain #1)

M       dyn_em/module_initialize_real.F
Fix comments about vertical refinement so there is no confusion over which method is being used
Verify specified eta levels have 1,0 bounds for each domain

M       dyn_em/nest_init_utils.F
pass through the nml argument use_baseparam_fr_nml
compute eta levels based on whether ideal or real-data case

M       main/depend.common
Add module_model_constants dependency to nest_init_utils (for real-data base state computation)

M       main/real_em.F
Fix logic for better control of which nests are vertically refined.

M       share/mediation_integrate.F
Pass through use_baseparam_fr_nml to init_domain_vert_nesting

M       share/module_check_a_mundo.F
Fix error checks when user selects the vertical refinement


LIST OF MODIFIED FILES (annotated if not obvious, not required to be on a single line): 
M       Registry/Registry.EM_COMMON
M       dyn_em/module_initialize_les.F
M       dyn_em/module_initialize_real.F
M       dyn_em/nest_init_utils.F
M       main/depend.common
M       main/real_em.F
M       share/mediation_integrate.F
M       share/module_check_a_mundo.F

TESTS CONDUCTED (explicitly state mandatory, voluntary, and assigned tests, not required to be on a single line):
1) regression test - do no harm
2) before vs. after comparison: most regression tests are identical; the only misses are on tests that are not bit-for-bit identical anyway.



git-svn-id: https://svn-wrf-model.cgd.ucar.edu/trunk@9156 b0b5d27b-6f0f-0410-a2a3-cb1e977edc3d
KathrynNewman added a commit that referenced this pull request Feb 17, 2017
Updates and bugfix for GF scheme
davegill mentioned this pull request Apr 13, 2018
kkeene44 pushed a commit that referenced this pull request Apr 13, 2018
TYPE: no impact

KEYWORDS: version, v4, friendly

SOURCE: internal

DESCRIPTION OF CHANGES:
Modify character string to reflect the friendly release #1 of version 4.0.

LIST OF MODIFIED FILES:
M      inc/version_decl

TESTS CONDUCTED:
 - [x] I stared at that line for a really long time.
jjguerrette pushed a commit to jjguerrette/WRF-public that referenced this pull request Sep 12, 2018
by requiring cloud_cv_options that match packaging defined in registry.var
for xa%qrn, xa%qcw, xa%qci, xa%qsn, and xa%qgr

This bugfix is connected to PR wrf-model#283 from 11 AUG 2017.

The array bounds error this avoids occurs when mp_physics ~= [0, 98] and cloud_cv_options==0.

With a "debug" build of WRFDA and without the fix, the end of rsl.error.0000 reads:

```
forrtl: severe (408): fort: (2): Subscript #1 of the array QCW has value 2 which is greater than the upper bound of 1

Image              PC                Routine            Line        Source
da_wrfvar.exe      00000000060CF996  Unknown               Unknown  Unknown
da_wrfvar.exe      00000000017EDB3B  da_transfer_model        2224  da_transfer_model.f
da_wrfvar.exe      00000000018A4263  da_transfer_model        3399  da_transfer_model.f
da_wrfvar.exe      00000000004C5D92  da_wrfvar_top_mp_        3675  da_wrfvar_top.f
da_wrfvar.exe      00000000004B0699  da_wrfvar_top_mp_        2779  da_wrfvar_top.f
da_wrfvar.exe      00000000004B0559  da_wrfvar_top_mp_        2749  da_wrfvar_top.f
da_wrfvar.exe      0000000000459863  MAIN__                     34  da_wrfvar_main.f
da_wrfvar.exe      0000000000405C1E  Unknown               Unknown  Unknown
libc-2.19.so       00002AAAAB7E5B25  __libc_start_main     Unknown  Unknown
da_wrfvar.exe      0000000000405B29  Unknown               Unknown  Unknown
```

 Changes to be committed:
	modified:   var/da/da_transfer_model/da_transfer_xatowrf.inc
jjguerrette pushed a commit that referenced this pull request Sep 13, 2018
TYPE: bug fix

KEYWORDS: moist, analysis update

SOURCE: Internal (JJG)

DESCRIPTION OF CHANGES: 
xatowrf now requires cloud_cv_options that match packaging defined in registry.var
for xa%qrn, xa%qcw, xa%qci, xa%qsn, and xa%qgr.

This bugfix is connected to PR #283 from 11 AUG 2017 (c7405bb#diff-fe8b020143d32583d82b945e2bd66f50)

The array bounds error this avoids occurs when mp_physics ~= [0, 98] and cloud_cv_options==0.

LIST OF MODIFIED FILES: 
M       var/da/da_transfer_model/da_transfer_xatowrf.inc

TESTS CONDUCTED: 
The following error at the end of rsl.error.0000 is avoided with this fix when a "debug" build is used for WRFDA:

```
forrtl: severe (408): fort: (2): Subscript #1 of the array QCW has value 2 which is greater than the upper bound of 1

Image              PC                Routine            Line        Source
da_wrfvar.exe      00000000060CF996  Unknown               Unknown  Unknown
da_wrfvar.exe      00000000017EDB3B  da_transfer_model        2224  da_transfer_model.f
da_wrfvar.exe      00000000018A4263  da_transfer_model        3399  da_transfer_model.f
da_wrfvar.exe      00000000004C5D92  da_wrfvar_top_mp_        3675  da_wrfvar_top.f
da_wrfvar.exe      00000000004B0699  da_wrfvar_top_mp_        2779  da_wrfvar_top.f
da_wrfvar.exe      00000000004B0559  da_wrfvar_top_mp_        2749  da_wrfvar_top.f
da_wrfvar.exe      0000000000459863  MAIN__                     34  da_wrfvar_main.f
da_wrfvar.exe      0000000000405C1E  Unknown               Unknown  Unknown
libc-2.19.so       00002AAAAB7E5B25  __libc_start_main     Unknown  Unknown
da_wrfvar.exe      0000000000405B29  Unknown               Unknown  Unknown
```


The WRFDA regression test was not run; the changes are minor and fix the known bug.
smileMchen added a commit that referenced this pull request Dec 11, 2018
TYPE: bug fix

KEYWORDS: obs nudging, max number of tasks

SOURCE: internal

DESCRIPTION OF CHANGES:
Problem:
The maximum number of processors, 1024, is hard-coded in module_dm.F for observation nudging.
If a user requests more MPI tasks than this maximum, the result is a segmentation fault.

Solution:
In the routine where two work arrays were dimensioned with this assumed maximum number of MPI
tasks, those arrays are now declared ALLOCATABLE, and they are allocated based on
the total number of MPI ranks.

LIST OF MODIFIED FILES:
M external/RSL_LITE/module_dm.F

TESTS CONDUCTED:

1. Applied the new code to a user's case, which shows the code works as expected.
2. No bit-wise diffs with a smaller test case, before vs. after mods: I built the code with the ./configure -d option and ran a small test case with 1 processor and with 36 processors, respectively. OBS nudging is turned on, and both runs cover a 3-hour period. Results are identical.
3. Test case with > 1024 MPI tasks: a large case (derived from a user's case) was also tested; here the code was built with the ./configure -D option. Without the change, the case crashed immediately with this error message:
```
OBS NUDGING is requested on a total of  2 domain(s).
++++++CALL ERROB AT KTAU =     0 AND INEST =  1:  NSTA =     0 ++++++
At line 5741 of file module_dm.f90
Fortran runtime error: Index '1025' of dimension 1 of array 'idisplacement' above upper bound of 1024
Error termination. Backtrace:
#0  0x782093 in __module_dm_MOD_get_full_obs_vector
	at /glade/scratch/chenming/WRFHELP/WRFV3.9.1.1_intel_dmpar_large-file/frame/module_dm.f90:5741
#1  0xffffffffffffffff in ???
```
With the code change, the case runs successfully for 6 hours.

RELEASE NOTE: After removing a hard-coded limit on the assumed maximum number of MPI tasks, the observation nudging code in WRF now supports more than 1024 MPI tasks. Runs with 1024 or fewer MPI tasks were unaffected by the bug; however, runs of obs nudging with more than 1024 MPI tasks likely died from a segmentation fault while trying to access an array index beyond the declared bounds.
kkeene44 pushed a commit that referenced this pull request Feb 15, 2019
TYPE: text only

KEYWORDS: version_decl, v4.1-alpha

SOURCE: internal

DESCRIPTION OF CHANGES: 
Update the character string inside the WRF system from 4.0.3 to 4.1-alpha.

LIST OF MODIFIED FILES: 
M inc/version_decl

TESTS CONDUCTED: 
 - [x] Code runs and v4.1-alpha is the version printed from the WRF system programs.
```
> ncdump -h wrfinput_d01 | grep TITLE
		:TITLE = " OUTPUT FROM REAL_EM V4.1-alpha PREPROCESSOR" ;
> ncdump -h wrfinput_initialized_d01  | grep TITLE
		:TITLE = " OUTPUT FROM WRF V4.1-alpha MODEL" ;
> ncdump -h met_em.d01.2019-02-15_12:00:00.nc  | grep TITLE
		:TITLE = "OUTPUT FROM METGRID V4.1" ;
> ncdump -h wrfout_d01_2019-02-16_12:00:00  | grep TITLE
		:TITLE = " OUTPUT FROM WRF V4.1-alpha MODEL" ;
```
johnrobertlawson pushed a commit to johnrobertlawson/WRF that referenced this pull request Mar 31, 2019
davegill added a commit that referenced this pull request May 15, 2019
… data (#875)

TYPE: bug fix

KEYWORDS: LBC, valid time

SOURCE: identified by Michael Duda (NCAR/MMM), fixed internally

DESCRIPTION OF CHANGES:
Problem:
1. If a user tried to start a simulation _after_ the last LBC valid period, the
WRF model would get into a nearly infinite loop and print out repeated statements:
```
 THIS TIME 2000-01-24_18:00:00, NEXT TIME 2000-01-25_00:00:00
d01 2000-01-25_06:00:00  Input data is acceptable to use: wrfbdy_d01
           2  input_wrf: wrf_get_next_time current_date: 2000-01-24_18:00:00 Status =           -4
d01 2000-01-25_06:00:00  ---- ERROR: Ran out of valid boundary conditions in file wrfbdy_d01
```
2. If a user tries to extend the model simulation beyond the valid times of the LBC, the code
behavior is not controlled (nearly infinite loops on some machines, or runtime errors with a backtrace
on other machines).

Solution:
In another routine, the lateral boundary condition is read to get to the
correct time. Once inside share/input_wrf.F, we should already be at the
correct time, so there is no need to try to get to the next time. In this
particular case, the attempt to get to the next time fails, but we try
again (and again and again). This solution fixes both problems identified
above.

ISSUE:
Fixes #769 "WRF doesn't halt when beginning LBC time is not in wrfbdy_d01 file"

LIST OF MODIFIED FILES:
M share/input_wrf.F

TESTS CONDUCTED:
1. Without fix, start the model after the last valid time of the LBC file => lots of repeated messages
```
 THIS TIME 2000-01-24_18:00:00, NEXT TIME 2000-01-25_00:00:00
d01 2000-01-25_06:00:00  Input data is acceptable to use: wrfbdy_d01
           2  input_wrf: wrf_get_next_time current_date: 2000-01-24_18:00:00 Status =           -4
d01 2000-01-25_06:00:00  ---- ERROR: Ran out of valid boundary conditions in file wrfbdy_d01
```
2. With this fix, when LBC stops at 2000 01 25 00, and WRF starts at 2000 01 25 06
```
d01 2000-01-25_06:00:00  Input data is acceptable to use: wrfbdy_d01
 THIS TIME 2000-01-24_12:00:00, NEXT TIME 2000-01-24_18:00:00
d01 2000-01-25_06:00:00  Input data is acceptable to use: wrfbdy_d01
 THIS TIME 2000-01-24_18:00:00, NEXT TIME 2000-01-25_00:00:00
d01 2000-01-25_06:00:00  Input data is acceptable to use: wrfbdy_d01
           2  input_wrf: wrf_get_next_time current_date: 2000-01-24_18:00:00 Status =           -4
-------------- FATAL CALLED ---------------
FATAL CALLED FROM FILE:  <stdin>  LINE:    1134
 ---- ERROR: Ran out of valid boundary conditions in file wrfbdy_d01
-------------------------------------------
```
3. Without this fix, if we try to extend the model simulation beyond the valid lateral boundary times
```
Timing for main: time 2000-01-24_23:54:00 on domain   1:    0.53782 elapsed seconds
Timing for main: time 2000-01-24_23:57:00 on domain   1:    0.51111 elapsed seconds
Timing for main: time 2000-01-25_00:00:00 on domain   1:    0.54507 elapsed seconds
Timing for Writing wrfout_d01_2000-01-25_00:00:00 for domain        1:    0.03793 elapsed seconds
d01 2000-01-25_00:00:00  Input data is acceptable to use: wrfbdy_d01
           2  input_wrf: wrf_get_next_time current_date: 2000-01-25_00:00:00 Status =           -4
d01 2000-01-25_00:00:00  ---- ERROR: Ran out of valid boundary conditions in file wrfbdy_d01
At line 777 of file module_date_time.f90
Fortran runtime error: Bad value during integer read

Error termination. Backtrace:
#0  0x10e67c36c
#1  0x10e67d075
#2  0x10e67d7e9
```
4. With this fix, if we try to extend the model simulation beyond the valid lateral boundary times
```
Timing for main: time 2000-01-24_23:54:00 on domain   1:    0.60755 elapsed seconds
Timing for main: time 2000-01-24_23:57:00 on domain   1:    0.57641 elapsed seconds
Timing for main: time 2000-01-25_00:00:00 on domain   1:    0.60817 elapsed seconds
Timing for Writing wrfout_d01_2000-01-25_00:00:00 for domain        1:    0.04499 elapsed seconds
d01 2000-01-25_00:00:00  Input data is acceptable to use: wrfbdy_d01
           2  input_wrf: wrf_get_next_time current_date: 2000-01-25_00:00:00 Status =           -4
-------------- FATAL CALLED ---------------
FATAL CALLED FROM FILE:  <stdin>  LINE:    1134
 ---- ERROR: Ran out of valid boundary conditions in file wrfbdy_d01
-------------------------------------------
```

MMM Classroom regtest; em_real, nmm, em_chem; GNU only
davegill added a commit to smileMchen/WRF that referenced this pull request Feb 10, 2020
dmey referenced this pull request in TEB-model/wrf-teb Mar 31, 2020
jordanschnell added a commit to jordanschnell/WRF that referenced this pull request Dec 29, 2020
dwongepa mentioned this pull request Mar 14, 2021
twjuliano pushed a commit to twjuliano/WRF that referenced this pull request Jun 13, 2022
…lates

Add setup script and template files
twjuliano pushed a commit to twjuliano/WRF that referenced this pull request Jun 13, 2022
smileMchen pushed a commit to smileMchen/WRF that referenced this pull request Jul 11, 2024
Synchronize the develop branch from official WRF GitHub