Merge Pull Request #4820 from gsjaardema/Trilinos/april-seacas-snapshot
Automatically Merged using Trilinos Pull Request AutoTester
PR Title: Automatic snapshot commit from seacas at ee0f81875
PR Author: gsjaardema
trilinos-autotester authored Apr 5, 2019
2 parents a470499 + 3c19f3b commit 4867628
Showing 200 changed files with 6,299 additions and 2,693 deletions.
69 changes: 34 additions & 35 deletions packages/seacas/IossProperties.md

Property | Value | Description
----------|----------|------------
LOGGING | on/\[off] | enable/disable logging of field input/output
LOWER\_CASE\_VARIABLE\_NAMES | \[on]/off | Convert all variable names on database to lowercase; replace ' ' with '_'
VARIABLE\_NAME\_CASE | upper/lower | Convert all variable names on output database to upper or lower case
USE\_GENERIC\_CANONICAL\_NAMES | on/\[off] | use `block_{id}` as canonical name of an element block instead of the name (if any) stored on the database. The database name will be an alias.
ENABLE\_FIELD\_RECOGNITION | \[on]/off | try to combine scalar fields with common basename and recognized suffix into vector, tensor, ...
FIELD\_SUFFIX\_SEPARATOR | character \['_'] | use this character as the separator between the field basename and suffixes when recognizing fields
MINIMIZE\_OPEN\_FILES | on/\[off] | If on, then close file after each timestep and then reopen on next output
CYCLE\_COUNT | {int}\[infinite] | See `OVERLAY_COUNT` description below
OVERLAY\_COUNT | {int}\[0] | Output `OVERLAY_COUNT` steps to database on top of each other. `DB_STEP = (((IOSS_STEP-1) / (OVERLAY_COUNT+1)) % CYCLE_COUNT) +1`
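A quick numeric check of the `DB_STEP` formula above (`db_step` is a hypothetical helper for illustration, not part of Ioss; the formula uses integer division):

```python
def db_step(ioss_step, overlay_count=0, cycle_count=10**9):
    """Database step that application step `ioss_step` is written to."""
    return ((ioss_step - 1) // (overlay_count + 1)) % cycle_count + 1
```

With `OVERLAY_COUNT=1`, application steps 1 and 2 both land on database step 1 (each database step is overwritten once); with `CYCLE_COUNT=2` and no overlay, output cycles between database steps 1 and 2.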
## Auto-Decomposition-Related Properties

Property | Value | Description
-----------------|--------|-----------------------------------------------------------
MODEL\_DECOMPOSITION\_METHOD | {method} | Decompose a DB with type `MODEL` using `method`
RESTART\_DECOMPOSITION\_METHOD | {method} | Decompose a DB with type `RESTART_IN` using `method`
DECOMPOSITION\_METHOD | {method} | Decompose all input DB using `method`
PARALLEL\_CONSISTENCY | \[on]/off | On if the client will call Ioss functions consistently on all processors. If off, then the auto-decomp and auto-join cannot be used.
RETAIN\_FREE\_NODES | \[on]/off | In auto-decomp, should nodes not connected to any elements be retained.
LOAD\_BALANCE\_THRESHOLD | {real} \[1.4] | CGNS-Structured only -- permitted load imbalance, measured as (load on processor) / (average load)
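As a sketch of the ratio `LOAD_BALANCE_THRESHOLD` bounds (`is_balanced` is a hypothetical helper for illustration, not an Ioss function):

```python
def is_balanced(proc_loads, threshold=1.4):
    """True when no processor's load exceeds threshold times the average load."""
    avg = sum(proc_loads) / len(proc_loads)
    return max(proc_loads) / avg <= threshold
```

With the default threshold of 1.4, a decomposition where one rank carries twice the average load would be rejected.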

### Valid values for Decomposition Method

external | Files are decomposed externally into a file-per-processor in a para…

Property | Value | Description
-----------------|--------|-----------------------------------------------------------
COMPOSE\_RESTART | on/\[off] |
COMPOSE\_RESULTS | on/\[off] |
PARALLEL\_IO\_MODE | netcdf4, hdf5, pnetcdf | `mpiio` and `mpiposix` are deprecated; `hdf5` is equivalent to `netcdf4`

## Properties Related to byte size of reals and integers

Property | Value | Description
-----------------------|--------|-----------------------------------------------------------
INTEGER\_SIZE\_DB | \[4] / 8 | byte size of integers stored on the database.
INTEGER\_SIZE\_API | \[4] / 8 | byte size of integers used in api functions.
REAL\_SIZE\_DB | 4 / \[8] | byte size of floating point stored on the database.
REAL\_SIZE\_API | 4 / \[8] | byte size of floating point used in api functions.

## Properties related to underlying file type (exodus only)

Property | Value | Description
-----------------------|--------|-----------------------------------------------------------
FILE\_TYPE | \[netcdf], netcdf4, netcdf-4, hdf5 |
COMPRESSION\_LEVEL | \[0]-9 | In the range \[0..9]; a value of 0 indicates no compression. Setting this property automatically sets `file_type=netcdf4`; values <= 4 are recommended.
COMPRESSION\_SHUFFLE | on/\[off] | enable/disable hdf5's shuffle compression algorithm.
MAXIMUM\_NAME\_LENGTH | \[32] | Maximum length of names that will be returned/passed via api call.
APPEND\_OUTPUT | on/\[off] | Append output to end of existing output database
APPEND\_OUTPUT\_AFTER\_STEP | {step}| Max step to read from an input db or a db being appended to (typically used with APPEND\_OUTPUT)
APPEND\_OUTPUT\_AFTER\_TIME | {time}| Max time to read from an input db or a db being appended to (typically used with APPEND\_OUTPUT)

Property | Value | Description
-----------------------|--------|-----------------------------------------------------------
FLUSH\_INTERVAL | int | Minimum time interval between flushing heartbeat data to disk. Default is 10 seconds
FLUSH\_INTERVAL | int | For non-heartbeat, the number of output steps between flushing data to disk; if 0, then no flush
TIME\_STAMP\_FORMAT | \[%H:%M:%S] | Format used to format time stamp. See strftime man page
SHOW\_TIME\_STAMP | on/off | Should the output lines be preceded by the timestamp
PRECISION | 0..16 \[5] | Precision used for floating point output.
FIELD\_WIDTH | 0.. | Width of an output field. If 0, then use natural width.
SHOW\_LABELS | on/\[off] | Should each field be preceded by its name (ke=1.3e9, ie=2.0e9)
SHOW\_LEGEND | \[on]/off | Should a legend be printed at the beginning of the output showing the field names for each column of data.
SHOW\_TIME\_FIELD | on/\[off] | Should the current analysis time be output as the first field.

## Experimental

Property | Value | Description
-----------------------|--------|-----------------------------------------------------------
MEMORY\_READ | on/\[off] | experimental
MEMORY\_WRITE | on/\[off] | experimental
ENABLE\_FILE\_GROUPS | on/\[off] | experimental

## Debugging / Profiling

Property | Value | Description
----------|----------|------------
LOGGING | on/\[off] | enable/disable logging of field input/output
DECOMP\_SHOW\_PROGRESS | on/\[off] | show memory and elapsed time during autodecomp.
DECOMP\_SHOW\_HWM | on/\[off] | show high-water memory during autodecomp
IOSS\_TIME\_FILE\_OPEN\_CLOSE | on/\[off] | show elapsed time during parallel-io file open/close/create
CHECK\_PARALLEL\_CONSISTENCY | ignored | check Ioss::GroupingEntity parallel consistency
44 changes: 44 additions & 0 deletions packages/seacas/MAPVAR.md
After a look and a run through the debugger, I'm not sure how the nodal
variable interpolation *ever* worked in the past. It looks totally
wrong, and I can understand why it isn't working now; I just can't
understand how it ever worked correctly for multi-element-block
models.

As I see it, it does the following for the interpolation:

* For all blocks

* For all time steps

* For all nodal variables
* Iterate all nodes in this block; map from A->B
* Write values for *all* B nodes at this step for this variable

* For all time steps

* For all element variables
* Iterate all elements in this block; map from A->B
* Write values for all elements in this block at this step for this variable

This works for element variables since the exodus API can output
elements a block and a variable at a time. For nodes, it doesn't work:
you will end up with the values at the last step for all
nodes/variables, except for nodes that are only in the last element
block, which seems to be what you are seeing.
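The failure mode described above can be reproduced with a toy model of the loop nesting (the data structures and `broken_nodal_map` helper are invented for illustration; this is not actual MAPVAR code):

```python
def broken_nodal_map(blocks, steps, all_nodes):
    """Return db[step][node] as written by the (broken) nodal loop."""
    db = {s: {} for s in steps}
    buf = {n: None for n in all_nodes}     # whole-mesh output buffer, never reset
    for block_nodes in blocks:             # For all blocks
        for s in steps:                    #   For all time steps
            for n in block_nodes:          #     interpolate this block's nodes A->B
                buf[n] = (n, s)            #     stand-in for the interpolated value
            for n in all_nodes:            #     BUG: write *all* B nodes at this step
                db[s][n] = buf[n]
    return db

# Nodes 1 and 2 in the first block, node 3 in the last block; two time steps.
result = broken_nodal_map([{1, 2}, {3}], [1, 2], {1, 2, 3})
```

After the run, `result[1][1]` holds node 1's step-2 value (stale data written during the last block's pass), while `result[1][3]` is correct because node 3 lives in the last block -- matching the symptom described.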

Fixing this would be a major undertaking and I'm not sure it would get
prioritized (although you are welcome to try).

This *should* work OK if you only do a single timestep or if you only
have a single element block. With a single timestep and multiple
element blocks, there is an issue of what happens if the node is
shared between multiple element blocks -- it will only get the
interpolated value from the last block.

Now, what to do...
* I think that the Percept code can do some mapping from mesh to mesh...
* Kludgy, but you can map a timestep at a time and then rejoin all timesteps using `conjoin`
* Kludgy, but you can subset down to one block / mesh, run mapvar on each submesh, and then join using `ejoin`

Sorry to be the bearer of bad news, but hopefully there is a path to get
done what you need...
9 changes: 7 additions & 2 deletions packages/seacas/NetCDF-Mapping.md
…used in an Exodus file.
* There are about 10 standard dimensions in every file.
* plus one for each set plus one if any attributes
* plus two for each block plus one if any attributes
* plus one for each transient variable on an entity (node, node set, element block, element set, ...)

## Variables: (NC_MAX_VARS)
* There are a few standard variables
* #ndim coordinates (1,2,3)

* Each block adds 1 + 2*#attr_on_block + #var_on_block

* Each set adds 2 + 2*#attr_on_set + #var_on_set

* Each sideset adds 3 + 2*#attr_on_sset + #var_on_sset

* Each map adds 1
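The per-entity counting rules above can be sketched as follows (`extra_netcdf_vars` is a hypothetical helper; the `(num_attr, num_var)` pair encoding is an assumption for illustration):

```python
def extra_netcdf_vars(blocks=(), sets=(), sidesets=(), num_maps=0):
    """NetCDF variables added beyond the standard ones.

    blocks/sets/sidesets are iterables of (num_attr, num_var) pairs.
    """
    total = sum(1 + 2 * a + v for a, v in blocks)     # each block: 1 + 2*attr + var
    total += sum(2 + 2 * a + v for a, v in sets)      # each set: 2 + 2*attr + var
    total += sum(3 + 2 * a + v for a, v in sidesets)  # each sideset: 3 + 2*attr + var
    return total + num_maps                           # each map adds 1
```

For instance, the five element blocks in the example below, each with 2 attributes and 4 transient variables, contribute 5 * (1 + 2*2 + 4) = 45 variables toward the `NC_MAX_VARS` limit.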

## Example
If we have an exodus file with:
* Nodes

* 5 Element blocks
* 4 transient variables per element block
* 2 attributes per element block

* 4 Node Sets
* Distribution Factors defined on each set
* 3 transient variables

* 3 Side Sets
* Distribution Factors defined on each set
* 2 transient variables