diff --git a/benchmarks/README.md b/benchmarks/README.md
new file mode 100644
index 0000000000..8dffd473f3
--- /dev/null
+++ b/benchmarks/README.md
@@ -0,0 +1,99 @@
+# Iris Performance Benchmarking
+
+Iris uses an [Airspeed Velocity](https://github.com/airspeed-velocity/asv)
+(ASV) setup to benchmark performance. This is primarily designed to check for
+performance shifts between commits using statistical analysis, but can also
+be easily repurposed for manual comparative and scalability analyses.
+
+The benchmarks are automatically run overnight
+[by a GitHub Action](../.github/workflows/benchmark.yml), with any notable
+shifts in performance being flagged in a new GitHub issue.
+
+## Running benchmarks
+
+`asv ...` commands must be run from this directory. You will need to have ASV
+installed, as well as Nox (see
+[Benchmark environments](#benchmark-environments)).
+
+[Iris' noxfile](../noxfile.py) includes a `benchmarks` session that provides
+conveniences for setting up before benchmarking, and can also replicate the
+automated overnight run locally. See the session docstring for detail.
+
+### Environment variables
+
+* `OVERRIDE_TEST_DATA_REPOSITORY` - required - some benchmarks use
+`iris-test-data` content, and your local `site.cfg` is not available to the
+benchmark scripts, so the test data location must be supplied this way.
+* `DATA_GEN_PYTHON` - required - path to a Python executable that can be
+used to generate benchmark test objects/files; see
+[Data generation](#data-generation). The Nox session sets this automatically,
+but will defer to any value already set in the shell.
+* `BENCHMARK_DATA` - optional - path to a directory for benchmark synthetic
+test data, which the benchmark scripts will create if it doesn't already
+exist. Defaults to `<repo root>/benchmarks/.data/` if not set. Note that some
+of the generated files, especially in the 'SPerf' suite, are many GB in size,
+so plan accordingly.
+* `ON_DEMAND_BENCHMARKS` - optional - when set (to any value): benchmarks
+decorated with `@on_demand_benchmark` are included in the ASV run. Usually
+coupled with the ASV `--bench` argument to only run the benchmark(s) of
+interest. It is set automatically during the Nox `cperf` and `sperf` sessions.
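+
+As a rough sketch of how the benchmark scripts consume these variables (the
+real logic, including error handling and the true default path, lives in
+[generate_data](./benchmarks/generate_data/__init__.py)):
+
+```python
+# Sketch only - see benchmarks/generate_data/__init__.py for the real checks.
+from os import environ
+from pathlib import Path
+
+DATA_GEN_PYTHON = environ["DATA_GEN_PYTHON"]  # required - KeyError if unset
+BENCHMARK_DATA = Path(environ.get("BENCHMARK_DATA", "benchmarks/.data"))
+ON_DEMAND = "ON_DEMAND_BENCHMARKS" in environ  # only presence is checked
+```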
+
+## Writing benchmarks
+
+[See the ASV docs](https://asv.readthedocs.io/) for full detail.
+
+### Data generation
+**Important:** be sure not to use the benchmarking environment to generate any
+test objects/files, as this environment changes with each commit being
+benchmarked, creating inconsistent benchmark 'conditions'. The
+[generate_data](./benchmarks/generate_data/__init__.py) module offers a
+solution; read more detail there.
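+
+For illustration, a minimal sketch of the pattern that module provides (the
+inner function must be self-contained, since its source is executed by the
+separate `DATA_GEN_PYTHON` interpreter; `example.nc` is just a placeholder
+name):
+
+```python
+from benchmarks.generate_data import BENCHMARK_DATA, run_function_elsewhere
+
+
+def _external(save_path):
+    # Runs in the data generation environment, so performs its own imports.
+    import iris
+    from iris.tests import stock
+
+    iris.save(stock.realistic_3d(), save_path)
+
+
+run_function_elsewhere(_external, save_path=str(BENCHMARK_DATA / "example.nc"))
+```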
+
+### ASV re-run behaviour
+
+Note that ASV re-runs a benchmark multiple times between calls to its
+`setup()` routine.
+This is a problem for benchmarking certain Iris operations such as data
+realisation, since the data will no longer be lazy after the first run.
+Consider writing extra steps to restore objects' original state _within_ the
+benchmark itself.
+
+If adding steps to the benchmark will skew the result too much then re-running
+can be disabled by setting an attribute on the benchmark: `number = 1`. To
+maintain result accuracy this should be accompanied by increasing the number of
+repeats _between_ `setup()` calls using the `repeat` attribute.
+`warmup_time = 0` is also advisable since ASV performs independent re-runs to
+estimate run-time, and these will still be subject to the original problem.
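+
+For example, a minimal sketch combining these attributes (`make_lazy_cube` is
+a hypothetical helper):
+
+```python
+class DataRealisation:
+    # Run the benchmark body only once between setup() calls.
+    number = 1
+    # Compensate with more repeats: (min repeats, max repeats, max seconds).
+    repeat = (5, 30, 20.0)
+    # Warmup runs ignore `number`, so disable them too.
+    warmup_time = 0.0
+
+    def setup(self):
+        self.cube = make_lazy_cube()  # hypothetical helper
+
+    def time_realise(self):
+        _ = self.cube.data
+```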
+
+### Scaling / non-scaling performance differences
+
+When comparing performance between commits / file types / other parameters, it
+can be helpful to know whether the differences lie in scaling or non-scaling
+parts of the Iris functionality in question. This can be done using a size
+parameter, setting
+one value to be as small as possible (e.g. a scalar `Cube`), and the other to
+be significantly larger (e.g. a 1000x1000 `Cube`). Performance differences
+might only be seen for the larger value, or the smaller, or both, getting you
+closer to the root cause.
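+
+A minimal sketch of such a size parameter:
+
+```python
+import numpy as np
+
+from iris.cube import Cube
+
+
+class CopyScaling:
+    # As small as possible versus significantly larger.
+    params = [1, 1000]
+    param_names = ["cube edge length"]
+
+    def setup(self, n):
+        self.cube = Cube(np.zeros((n, n)))
+
+    def time_copy(self, n):
+        _ = self.cube.copy()
+```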
+
+### On-demand benchmarks
+
+Some benchmarks provide useful insight but are inappropriate to be included in
+a benchmark run by default, e.g. those with long run-times or requiring a local
+file. These benchmarks should be decorated with `@on_demand_benchmark`
+(see [benchmarks init](./benchmarks/__init__.py)), which
+sets the benchmark to only be included in a run when the `ON_DEMAND_BENCHMARKS`
+environment variable is set. Examples include the CPerf and SPerf benchmark
+suites for the UK Met Office NG-VAT project.
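+
+A minimal sketch (the benchmark body is just a placeholder):
+
+```python
+from . import on_demand_benchmark
+
+
+@on_demand_benchmark
+class LocalFileBenchmark:
+    def time_expensive_operation(self):
+        pass  # placeholder for a long-running / file-dependent benchmark
+```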
+
+## Benchmark environments
+
+We have disabled ASV's standard environment management, instead using an
+environment built using the same Nox scripts as Iris' test environments. This
+is done using ASV's plugin architecture - see
+[asv_delegated_conda.py](asv_delegated_conda.py) and the extra config items in
+[asv.conf.json](asv.conf.json).
+
+(ASV is written to control the environment(s) that benchmarks are run in -
+minimising external factors and also allowing it to compare between a matrix
+of dependencies (each in a separate environment). We have chosen to sacrifice
+these features in favour of testing each commit with its intended dependencies,
+controlled by Nox + lock-files).
diff --git a/benchmarks/asv.conf.json b/benchmarks/asv.conf.json
index 9ea1cdb101..7337eaa8c7 100644
--- a/benchmarks/asv.conf.json
+++ b/benchmarks/asv.conf.json
@@ -3,18 +3,26 @@
"project": "scitools-iris",
"project_url": "https://github.com/SciTools/iris",
"repo": "..",
- "environment_type": "nox-conda",
+ "environment_type": "conda-delegated",
"show_commit_url": "http://github.com/scitools/iris/commit/",
+ "branches": ["upstream/main"],
"benchmark_dir": "./benchmarks",
"env_dir": ".asv/env",
"results_dir": ".asv/results",
"html_dir": ".asv/html",
- "plugins": [".nox_asv_plugin"],
- // The commit to checkout to first run Nox to set up the environment.
- "nox_setup_commit": "HEAD",
- // The path of the noxfile's location relative to the project root.
- "noxfile_rel_path": "noxfile.py",
- // The ``--session`` arg to be used with ``--install-only`` to prep an environment.
- "nox_session_name": "tests"
+ "plugins": [".asv_delegated_conda"],
+
+ // The command(s) that create/update an environment correctly for the
+ // checked-out commit.
+    // Interpreted the same as build_command, with the following exceptions:
+ // * No build-time environment variables.
+ // * Is run in the same environment as the ASV install itself.
+ "delegated_env_commands": [
+ "sed -i 's/_PY_VERSIONS_ALL/_PY_VERSION_LATEST/g' noxfile.py",
+ "nox --envdir={conf_dir}/.asv/env/nox01 --session=tests --install-only --no-error-on-external-run --verbose"
+ ],
+ // The parent directory of the above environment.
+ // The most recently modified environment in the directory will be used.
+ "delegated_env_parent": "{conf_dir}/.asv/env/nox01"
}
diff --git a/benchmarks/asv_delegated_conda.py b/benchmarks/asv_delegated_conda.py
new file mode 100644
index 0000000000..250a4e032d
--- /dev/null
+++ b/benchmarks/asv_delegated_conda.py
@@ -0,0 +1,208 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+"""
+ASV plug-in providing an alternative :class:`asv.plugins.conda.Conda`
+subclass that manages the Conda environment via custom user scripts.
+
+"""
+
+from os import environ
+from os.path import getmtime
+from pathlib import Path
+from shutil import copy2, copytree, rmtree
+from tempfile import TemporaryDirectory
+
+from asv import util as asv_util
+from asv.config import Config
+from asv.console import log
+from asv.plugins.conda import Conda
+from asv.repo import Repo
+
+
+class CondaDelegated(Conda):
+ """
+ Manage a Conda environment using custom user scripts, run at each commit.
+
+ Ignores user input variations - ``matrix`` / ``pythons`` /
+    ``conda_environment_file``, since the environment is being managed
+    outside ASV.
+
+ Original environment creation behaviour is inherited, but upon checking out
+ a commit the custom script(s) are run and the original environment is
+ replaced with a symlink to the custom environment. This arrangement is then
+ re-used in subsequent runs.
+
+ """
+
+ tool_name = "conda-delegated"
+
+ def __init__(
+ self,
+ conf: Config,
+ python: str,
+ requirements: dict,
+ tagged_env_vars: dict,
+ ) -> None:
+ """
+ Parameters
+ ----------
+ conf : Config instance
+
+ python : str
+ Version of Python. Must be of the form "MAJOR.MINOR".
+
+ requirements : dict
+ Dictionary mapping a PyPI package name to a version
+ identifier string.
+
+ tagged_env_vars : dict
+ Environment variables, tagged for build vs. non-build
+
+ """
+ ignored = ["`python`"]
+ if requirements:
+ ignored.append("`requirements`")
+ if tagged_env_vars:
+ ignored.append("`tagged_env_vars`")
+ if conf.conda_environment_file:
+ ignored.append("`conda_environment_file`")
+ message = (
+ f"Ignoring ASV setting(s): {', '.join(ignored)}. Benchmark "
+ "environment management is delegated to third party script(s)."
+ )
+ log.warning(message)
+ requirements = {}
+ tagged_env_vars = {}
+ conf.conda_environment_file = None
+
+ super().__init__(conf, python, requirements, tagged_env_vars)
+ self._update_info()
+
+ self._env_commands = self._interpolate_commands(
+ conf.delegated_env_commands
+ )
+ # Again using _interpolate_commands to get env parent path - allows use
+ # of the same ASV env variables.
+ env_parent_interpolated = self._interpolate_commands(
+ conf.delegated_env_parent
+ )
+        # Returns a list of tuples; we just want the first.
+ env_parent_first = env_parent_interpolated[0]
+ # The 'command' is the first item in the returned tuple.
+ env_parent_string = " ".join(env_parent_first[0])
+ self._delegated_env_parent = Path(env_parent_string).resolve()
+
+ @property
+ def name(self):
+ """Get a name to uniquely identify this environment."""
+ return asv_util.sanitize_filename(self.tool_name)
+
+ def _update_info(self) -> None:
+ """Make sure class properties reflect the actual environment being used."""
+ # Follow symlink if it has been created.
+ actual_path = Path(self._path).resolve()
+ self._path = str(actual_path)
+
+        # Get the custom environment's Python version, if the environment
+        # exists yet.
+ try:
+ get_version = (
+ "from sys import version_info; "
+ "print(f'{version_info.major}.{version_info.minor}')"
+ )
+ actual_python = self.run(["-c", get_version])
+ self._python = actual_python
+ except OSError:
+ pass
+
+ def _prep_env(self) -> None:
+ """Run the custom environment script(s) and switch to using that environment."""
+ message = f"Running delegated environment management for: {self.name}"
+ log.info(message)
+ env_path = Path(self._path)
+
+ def copy_asv_files(src_parent: Path, dst_parent: Path) -> None:
+ """For copying between self._path and a temporary cache."""
+ asv_files = list(src_parent.glob("asv*"))
+            # build_root_path.name is usually "project".
+ asv_files += [src_parent / Path(self._build_root).name]
+ for src_path in asv_files:
+ dst_path = dst_parent / src_path.name
+ if not dst_path.exists():
+ # Only caching in case the environment has been rebuilt.
+ # If the dst_path already exists: rebuilding hasn't
+ # happened. Also a non-issue when copying in the reverse
+ # direction because the cache dir is temporary.
+ if src_path.is_dir():
+ func = copytree
+ else:
+ func = copy2
+ func(src_path, dst_path)
+
+ with TemporaryDirectory(prefix="delegated_asv_cache_") as asv_cache:
+ asv_cache_path = Path(asv_cache)
+            # Cache all of ASV's files, as the delegated command may remove
+            # and re-build the environment.
+ copy_asv_files(env_path.resolve(), asv_cache_path)
+
+ # Adapt the build_dir to the cache location.
+ build_root_path = Path(self._build_root)
+ build_dir_original = build_root_path / self._repo_subdir
+ build_dir_subpath = build_dir_original.relative_to(
+ build_root_path.parent
+ )
+ build_dir = asv_cache_path / build_dir_subpath
+
+ # Run the script(s) for delegated environment creation/updating.
+ # (An adaptation of self._interpolate_and_run_commands).
+ for command, env, return_codes, cwd in self._env_commands:
+ local_envs = dict(environ)
+ local_envs.update(env)
+ if cwd is None:
+ cwd = str(build_dir)
+ _ = asv_util.check_output(
+ command,
+ timeout=self._install_timeout,
+ cwd=cwd,
+ env=local_envs,
+ valid_return_codes=return_codes,
+ )
+
+ # Replace the env that ASV created with a symlink to the env
+ # created/updated by the custom script.
+ delegated_env_path = sorted(
+ self._delegated_env_parent.glob("*"),
+ key=getmtime,
+ reverse=True,
+ )[0]
+ if env_path.resolve() != delegated_env_path:
+ try:
+ env_path.unlink(missing_ok=True)
+ except IsADirectoryError:
+ rmtree(env_path)
+ env_path.symlink_to(
+ delegated_env_path, target_is_directory=True
+ )
+
+ # Check that environment exists.
+ try:
+ env_path.resolve(strict=True)
+ except FileNotFoundError:
+ message = f"Path does not resolve to environment: {env_path}"
+ log.error(message)
+ raise RuntimeError(message)
+
+ # Restore ASV's files from the cache (if necessary).
+ copy_asv_files(asv_cache_path, env_path.resolve())
+
+ # Record new environment information in properties.
+ self._update_info()
+
+ def checkout_project(self, repo: Repo, commit_hash: str) -> None:
+ """Check out the working tree of the project at given commit hash."""
+ super().checkout_project(repo, commit_hash)
+ self._prep_env()
+ log.info(
+ f"Environment {self.name} updated to spec at {commit_hash[:8]}"
+ )
diff --git a/benchmarks/benchmarks/__init__.py b/benchmarks/benchmarks/__init__.py
index 2e741c3da0..c86682ca4a 100644
--- a/benchmarks/benchmarks/__init__.py
+++ b/benchmarks/benchmarks/__init__.py
@@ -4,46 +4,121 @@
# See COPYING and COPYING.LESSER in the root of the repository for full
# licensing details.
"""Common code for benchmarks."""
+from os import environ
+import resource
-import os
-from pathlib import Path
-# Environment variable names
-_ASVDIR_VARNAME = "ASV_DIR" # As set in nightly script "asv_nightly/asv.sh"
-_DATADIR_VARNAME = "BENCHMARK_DATA" # For local runs
-ARTIFICIAL_DIM_SIZE = int(10e3) # For all artificial cubes, coords etc.
+def disable_repeat_between_setup(benchmark_object):
+ """
+ Decorator for benchmarks where object persistence would be inappropriate.
+
+    E.g.:
+ * Benchmarking data realisation
+ * Benchmarking Cube coord addition
+
+ Can be applied to benchmark classes/methods/functions.
+
+ https://asv.readthedocs.io/en/stable/benchmarks.html#timing-benchmarks
+
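+    For example::
+
+        @disable_repeat_between_setup
+        class DataRealisation:
+            def time_realise(self):
+                _ = self.cube.data  # illustrative - only lazy on the 1st run
+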
+ """
+ # Prevent repeat runs between setup() runs - object(s) will persist after 1st.
+ benchmark_object.number = 1
+ # Compensate for reduced certainty by increasing number of repeats.
+ # (setup() is run between each repeat).
+ # Minimum 5 repeats, run up to 30 repeats / 20 secs whichever comes first.
+ benchmark_object.repeat = (5, 30, 20.0)
+ # ASV uses warmup to estimate benchmark time before planning the real run.
+ # Prevent this, since object(s) will persist after first warmup run,
+ # which would give ASV misleading info (warmups ignore ``number``).
+ benchmark_object.warmup_time = 0.0
+
+ return benchmark_object
+
+
+class TrackAddedMemoryAllocation:
+ """
+    Context manager which measures by how much process resident memory grew
+    during execution of its enclosed code block.
+
+    Obviously limited as to what it actually measures: relies on the current
+    process not having significant unused (de-allocated) memory when the
+    tested code block runs, and only reliable when the code allocates a
+    significant amount of new memory.
+
+ Example:
+ with TrackAddedMemoryAllocation() as mb:
+ initial_call()
+ other_call()
+ result = mb.addedmem_mb()
+
+ Attributes
+ ----------
+ RESULT_MINIMUM_MB : float
+ The smallest result that should ever be returned, in Mb. Results
+ fluctuate from run to run (usually within 1Mb) so if a result is
+ sufficiently small this noise will produce a before-after ratio over
+        ASV's detection threshold and be treated as 'signal'. Results
+ smaller than this value will therefore be returned as equal to this
+ value, ensuring fractionally small noise / no noise at all.
+
+ """
+
+ RESULT_MINIMUM_MB = 5.0
+
+ @staticmethod
+ def process_resident_memory_mb():
+ return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0
+
+ def __enter__(self):
+ self.mb_before = self.process_resident_memory_mb()
+ return self
+
+ def __exit__(self, *_):
+ self.mb_after = self.process_resident_memory_mb()
+
+ def addedmem_mb(self):
+ """Return measured memory growth, in Mb."""
+ result = self.mb_after - self.mb_before
+ # Small results are too vulnerable to noise being interpreted as signal.
+ result = max(self.RESULT_MINIMUM_MB, result)
+ return result
+
+ @staticmethod
+ def decorator(decorated_func):
+ """
+        Decorate a benchmark function to track growth in resident memory
+        during its execution.
+
+ Intended for use on ASV ``track_`` benchmarks. Applies the
+ :class:`TrackAddedMemoryAllocation` context manager to the benchmark
+ code, sets the benchmark ``unit`` attribute to ``Mb``.
+
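+        For example::
+
+            @TrackAddedMemoryAllocation.decorator
+            def track_addedmem_my_operation(self):
+                self.perform_operation()  # illustrative benchmark body
+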
+ """
+
+ def _wrapper(*args, **kwargs):
+ assert decorated_func.__name__[:6] == "track_"
+ # Run the decorated benchmark within the added memory context
+ # manager.
+ with TrackAddedMemoryAllocation() as mb:
+ decorated_func(*args, **kwargs)
+ return mb.addedmem_mb()
+
+        # Set the unit on the wrapper - the object that ASV actually inspects.
+        _wrapper.unit = "Mb"
+ return _wrapper
+
+
+def on_demand_benchmark(benchmark_object):
+ """
+    Decorator. Disables the decorated benchmark(s) unless the
+    ON_DEMAND_BENCHMARKS env var is set.
-# Work out where the benchmark data dir is.
-asv_dir = os.environ.get("ASV_DIR", None)
-if asv_dir:
- # For an overnight run, this comes from the 'ASV_DIR' setting.
- benchmark_data_dir = Path(asv_dir) / "data"
-else:
- # For a local run, you set 'BENCHMARK_DATA'.
- benchmark_data_dir = os.environ.get(_DATADIR_VARNAME, None)
- if benchmark_data_dir is not None:
- benchmark_data_dir = Path(benchmark_data_dir)
+ For benchmarks that, for whatever reason, should not be run by default.
+    E.g.:
+ * Require a local file
+ * Used for scalability analysis instead of commit monitoring.
+ Can be applied to benchmark classes/methods/functions.
-def testdata_path(*path_names):
"""
- Return the path of a benchmark test data file.
-
- These are based from a test-data location dir, which is either
- ${}/data (for overnight tests), or ${} for local testing.
-
- If neither of these were set, an error is raised.
-
- """.format(
- _ASVDIR_VARNAME, _DATADIR_VARNAME
- )
- if benchmark_data_dir is None:
- msg = (
- "Benchmark data dir is not defined : "
- 'Either "${}" or "${}" must be set.'
- )
- raise (ValueError(msg.format(_ASVDIR_VARNAME, _DATADIR_VARNAME)))
- path = benchmark_data_dir.joinpath(*path_names)
- path = str(path) # Because Iris doesn't understand Path objects yet.
- return path
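+    # When the env var is unset this implicitly returns None, so ASV will
+    # not collect the benchmark object.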
+ if "ON_DEMAND_BENCHMARKS" in environ:
+ return benchmark_object
diff --git a/benchmarks/benchmarks/aux_factory.py b/benchmarks/benchmarks/aux_factory.py
index 270119da71..4cc4f6c70a 100644
--- a/benchmarks/benchmarks/aux_factory.py
+++ b/benchmarks/benchmarks/aux_factory.py
@@ -10,9 +10,10 @@
import numpy as np
-from benchmarks import ARTIFICIAL_DIM_SIZE
from iris import aux_factory, coords
+from . import ARTIFICIAL_DIM_SIZE
+
class FactoryCommon:
# TODO: once https://github.com/airspeed-velocity/asv/pull/828 is released:
@@ -43,10 +44,6 @@ def time_create(self):
specified in the subclass."""
self.create()
- def time_return(self):
- """Return an instance of the benchmarked factory."""
- self.factory
-
class HybridHeightFactory(FactoryCommon):
def setup(self):
diff --git a/benchmarks/benchmarks/coords.py b/benchmarks/benchmarks/coords.py
index fce7318d49..3107dcf077 100644
--- a/benchmarks/benchmarks/coords.py
+++ b/benchmarks/benchmarks/coords.py
@@ -10,9 +10,10 @@
import numpy as np
-from benchmarks import ARTIFICIAL_DIM_SIZE
from iris import coords
+from . import ARTIFICIAL_DIM_SIZE, disable_repeat_between_setup
+
def setup():
"""General variables needed by multiple benchmark classes."""
@@ -50,10 +51,6 @@ def time_create(self):
specified in the subclass."""
self.create()
- def time_return(self):
- """Return an instance of the benchmarked coord."""
- self.component
-
class DimCoord(CoordCommon):
def setup(self):
@@ -92,6 +89,23 @@ def setup(self):
def create(self):
return coords.AuxCoord(**self.create_kwargs)
+ def time_points(self):
+ _ = self.component.points
+
+ def time_bounds(self):
+ _ = self.component.bounds
+
+
+@disable_repeat_between_setup
+class AuxCoordLazy(AuxCoord):
+ """Lazy equivalent of :class:`AuxCoord`."""
+
+ def setup(self):
+ super().setup()
+ self.create_kwargs["points"] = self.component.lazy_points()
+ self.create_kwargs["bounds"] = self.component.lazy_bounds()
+ self.setup_common()
+
class CellMeasure(CoordCommon):
def setup(self):
diff --git a/benchmarks/benchmarks/cperf/__init__.py b/benchmarks/benchmarks/cperf/__init__.py
new file mode 100644
index 0000000000..fb311c44dc
--- /dev/null
+++ b/benchmarks/benchmarks/cperf/__init__.py
@@ -0,0 +1,97 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+"""
+Benchmarks for the CPerf scheme of the UK Met Office's NG-VAT project.
+
+CPerf = comparing performance working with data in UM versus LFRic formats.
+
+Files available from the UK Met Office:
+ moo ls moose:/adhoc/projects/avd/asv/data_for_nightly_tests/
+"""
+import numpy as np
+
+from iris import load_cube
+
+# TODO: remove uses of PARSE_UGRID_ON_LOAD once UGRID parsing is core behaviour.
+from iris.experimental.ugrid import PARSE_UGRID_ON_LOAD
+
+from ..generate_data import BENCHMARK_DATA
+from ..generate_data.ugrid import make_cubesphere_testfile
+
+# The data of the core test UM files has dtype=np.float32 shape=(1920, 2560)
+_UM_DIMS_YX = (1920, 2560)
+# The closest cubesphere size in terms of datapoints is sqrt(1920*2560 / 6)
+# This gives ~= 905, i.e. "C905"
+_N_CUBESPHERE_UM_EQUIVALENT = int(np.sqrt(np.prod(_UM_DIMS_YX) / 6))
+
+
+class SingleDiagnosticMixin:
+ """For use in any benchmark classes that work on a single diagnostic file."""
+
+ params = [
+ ["LFRic", "UM", "UM_lbpack0", "UM_netcdf"],
+ [False, True],
+ [False, True],
+ ]
+ param_names = ["file type", "height dim (len 71)", "time dim (len 3)"]
+
+ def setup(self, file_type, three_d, three_times):
+ if file_type == "LFRic":
+ # Generate an appropriate synthetic LFRic file.
+ if three_times:
+ n_times = 3
+ else:
+ n_times = 1
+
+ # Use a cubesphere size ~equivalent to our UM test data.
+ cells_per_panel_edge = _N_CUBESPHERE_UM_EQUIVALENT
+ create_kwargs = dict(c_size=cells_per_panel_edge, n_times=n_times)
+
+ if three_d:
+ create_kwargs["n_levels"] = 71
+
+ # Will re-use a file if already present.
+ file_path = make_cubesphere_testfile(**create_kwargs)
+
+ else:
+ # Locate the appropriate UM file.
+ if three_times:
+ # pa/pb003 files
+ numeric = "003"
+ else:
+ # pa/pb000 files
+ numeric = "000"
+
+ if three_d:
+ # theta diagnostic, N1280 file w/ 71 levels (1920, 2560, 71)
+ file_name = f"umglaa_pb{numeric}-theta"
+ else:
+ # surface_temp diagnostic, N1280 file (1920, 2560)
+ file_name = f"umglaa_pa{numeric}-surfacetemp"
+
+ file_suffices = {
+ "UM": "", # packed FF (WGDOS lbpack = 1)
+ "UM_lbpack0": ".uncompressed", # unpacked FF (lbpack = 0)
+ "UM_netcdf": ".nc", # UM file -> Iris -> NetCDF file
+ }
+ suffix = file_suffices[file_type]
+
+ file_path = (BENCHMARK_DATA / file_name).with_suffix(suffix)
+ if not file_path.exists():
+ message = "\n".join(
+ [
+ f"Expected local file not found: {file_path}",
+ "Available from the UK Met Office.",
+ ]
+ )
+ raise FileNotFoundError(message)
+
+ self.file_path = file_path
+ self.file_type = file_type
+
+ def load(self):
+ with PARSE_UGRID_ON_LOAD.context():
+ return load_cube(str(self.file_path))
diff --git a/benchmarks/benchmarks/cperf/equality.py b/benchmarks/benchmarks/cperf/equality.py
new file mode 100644
index 0000000000..47eb255513
--- /dev/null
+++ b/benchmarks/benchmarks/cperf/equality.py
@@ -0,0 +1,58 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+"""
+Equality benchmarks for the CPerf scheme of the UK Met Office's NG-VAT project.
+"""
+from . import SingleDiagnosticMixin
+from .. import on_demand_benchmark
+
+
+class EqualityMixin(SingleDiagnosticMixin):
+ """
+ Uses :class:`SingleDiagnosticMixin` as the realistic case will be comparing
+ :class:`~iris.cube.Cube`\\ s that have been loaded from file.
+ """
+
+ # Cut down the parent parameters.
+ params = [["LFRic", "UM"]]
+
+ def setup(self, file_type, three_d=False, three_times=False):
+ super().setup(file_type, three_d, three_times)
+ self.cube = self.load()
+ self.other_cube = self.load()
+
+
+@on_demand_benchmark
+class CubeEquality(EqualityMixin):
+ """
+ Benchmark time and memory costs of comparing LFRic and UM
+ :class:`~iris.cube.Cube`\\ s.
+ """
+
+ def _comparison(self):
+ _ = self.cube == self.other_cube
+
+ def peakmem_eq(self, file_type):
+ self._comparison()
+
+ def time_eq(self, file_type):
+ self._comparison()
+
+
+@on_demand_benchmark
+class MeshEquality(EqualityMixin):
+ """Provides extra context for :class:`CubeEquality`."""
+
+ params = [["LFRic"]]
+
+ def _comparison(self):
+ _ = self.cube.mesh == self.other_cube.mesh
+
+ def peakmem_eq(self, file_type):
+ self._comparison()
+
+ def time_eq(self, file_type):
+ self._comparison()
diff --git a/benchmarks/benchmarks/cperf/load.py b/benchmarks/benchmarks/cperf/load.py
new file mode 100644
index 0000000000..04bb7e1a61
--- /dev/null
+++ b/benchmarks/benchmarks/cperf/load.py
@@ -0,0 +1,57 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+"""
+File loading benchmarks for the CPerf scheme of the UK Met Office's NG-VAT project.
+"""
+from . import SingleDiagnosticMixin
+from .. import on_demand_benchmark
+
+
+@on_demand_benchmark
+class SingleDiagnosticLoad(SingleDiagnosticMixin):
+ def time_load(self, _, __, ___):
+ """
+ The 'real world comparison'
+ * UM coords are always realised (DimCoords).
+ * LFRic coords are not realised by default (MeshCoords).
+
+ """
+ cube = self.load()
+ assert cube.has_lazy_data()
+ # UM files load lon/lat as DimCoords, which are always realised.
+ expecting_lazy_coords = self.file_type == "LFRic"
+ for coord_name in "longitude", "latitude":
+ coord = cube.coord(coord_name)
+ assert coord.has_lazy_points() == expecting_lazy_coords
+ assert coord.has_lazy_bounds() == expecting_lazy_coords
+
+ def time_load_w_realised_coords(self, _, __, ___):
+ """A valuable extra comparison where both UM and LFRic coords are realised."""
+ cube = self.load()
+ for coord_name in "longitude", "latitude":
+ coord = cube.coord(coord_name)
+ # Don't touch actual points/bounds objects - permanent
+ # realisation plays badly with ASV's re-run strategy.
+ if coord.has_lazy_points():
+ coord.core_points().compute()
+ if coord.has_lazy_bounds():
+ coord.core_bounds().compute()
+
+
+@on_demand_benchmark
+class SingleDiagnosticRealise(SingleDiagnosticMixin):
+ # The larger files take a long time to realise.
+ timeout = 600.0
+
+ def setup(self, file_type, three_d, three_times):
+ super().setup(file_type, three_d, three_times)
+ self.loaded_cube = self.load()
+
+ def time_realise(self, _, __, ___):
+ # Don't touch loaded_cube.data - permanent realisation plays badly with
+ # ASV's re-run strategy.
+ assert self.loaded_cube.has_lazy_data()
+ self.loaded_cube.core_data().compute()
diff --git a/benchmarks/benchmarks/cperf/save.py b/benchmarks/benchmarks/cperf/save.py
new file mode 100644
index 0000000000..2eb60e2ab5
--- /dev/null
+++ b/benchmarks/benchmarks/cperf/save.py
@@ -0,0 +1,47 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+"""
+File saving benchmarks for the CPerf scheme of the UK Met Office's NG-VAT project.
+"""
+
+from iris import save
+
+from . import _N_CUBESPHERE_UM_EQUIVALENT, _UM_DIMS_YX
+from .. import TrackAddedMemoryAllocation, on_demand_benchmark
+from ..generate_data.ugrid import (
+ make_cube_like_2d_cubesphere,
+ make_cube_like_umfield,
+)
+
+
+@on_demand_benchmark
+class NetcdfSave:
+ """
+ Benchmark time and memory costs of saving ~large-ish data cubes to netcdf.
+ Parametrised by file type.
+
+ """
+
+ params = ["LFRic", "UM"]
+ param_names = ["data type"]
+
+ def setup(self, data_type):
+ if data_type == "LFRic":
+ self.cube = make_cube_like_2d_cubesphere(
+ n_cube=_N_CUBESPHERE_UM_EQUIVALENT, with_mesh=True
+ )
+ else:
+ self.cube = make_cube_like_umfield(_UM_DIMS_YX)
+
+ def _save_data(self, cube):
+ save(cube, "tmp.nc")
+
+ def time_save_data_netcdf(self, data_type):
+ self._save_data(self.cube)
+
+ @TrackAddedMemoryAllocation.decorator
+ def track_addedmem_save_data_netcdf(self, data_type):
+ self._save_data(self.cube)
diff --git a/benchmarks/benchmarks/cube.py b/benchmarks/benchmarks/cube.py
index 3cfa6b248b..5889ce872b 100644
--- a/benchmarks/benchmarks/cube.py
+++ b/benchmarks/benchmarks/cube.py
@@ -10,11 +10,13 @@
import numpy as np
-from benchmarks import ARTIFICIAL_DIM_SIZE
from iris import analysis, aux_factory, coords, cube
+from . import ARTIFICIAL_DIM_SIZE, disable_repeat_between_setup
+from .generate_data.stock import sample_meshcoord
-def setup():
+
+def setup(*params):
"""General variables needed by multiple benchmark classes."""
global data_1d
global data_2d
@@ -66,10 +68,6 @@ def time_add(self):
general_cube_copy = general_cube.copy(data=data_2d)
self.add_method(general_cube_copy, *self.add_args)
- def time_return(self):
- """Return a cube that includes an instance of the benchmarked component."""
- self.cube
-
class Cube:
def time_basic(self):
@@ -170,6 +168,41 @@ def setup(self):
self.setup_common()
+class MeshCoord:
+ params = [
+ 6, # minimal cube-sphere
+ int(1e6), # realistic cube-sphere size
+ ARTIFICIAL_DIM_SIZE, # To match size in :class:`AuxCoord`
+ ]
+ param_names = ["number of faces"]
+
+ def setup(self, n_faces):
+ mesh_kwargs = dict(
+ n_nodes=n_faces + 2, n_edges=n_faces * 2, n_faces=n_faces
+ )
+
+ self.mesh_coord = sample_meshcoord(sample_mesh_kwargs=mesh_kwargs)
+ self.data = np.zeros(n_faces)
+ self.cube_blank = cube.Cube(data=self.data)
+ self.cube = self.create()
+
+ def create(self):
+ return cube.Cube(
+ data=self.data, aux_coords_and_dims=[(self.mesh_coord, 0)]
+ )
+
+ def time_create(self, n_faces):
+ _ = self.create()
+
+ @disable_repeat_between_setup
+ def time_add(self, n_faces):
+ self.cube_blank.add_aux_coord(self.mesh_coord, 0)
+
+ @disable_repeat_between_setup
+ def time_remove(self, n_faces):
+ self.cube.remove_coord(self.mesh_coord)
+
+
class Merge:
def setup(self):
self.cube_list = cube.CubeList()
diff --git a/benchmarks/benchmarks/experimental/__init__.py b/benchmarks/benchmarks/experimental/__init__.py
new file mode 100644
index 0000000000..f16e400bce
--- /dev/null
+++ b/benchmarks/benchmarks/experimental/__init__.py
@@ -0,0 +1,9 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+"""
+Benchmark tests for the experimental module.
+
+"""
diff --git a/benchmarks/benchmarks/experimental/ugrid/__init__.py b/benchmarks/benchmarks/experimental/ugrid/__init__.py
new file mode 100644
index 0000000000..2f9bb04e35
--- /dev/null
+++ b/benchmarks/benchmarks/experimental/ugrid/__init__.py
@@ -0,0 +1,191 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+"""
+Benchmark tests for the experimental.ugrid module.
+
+"""
+
+from copy import deepcopy
+
+import numpy as np
+
+from iris.experimental import ugrid
+
+from ... import ARTIFICIAL_DIM_SIZE, disable_repeat_between_setup
+from ...generate_data.stock import sample_mesh
+
+
+class UGridCommon:
+ """
+ A base class running a generalised suite of benchmarks for any ugrid object.
+ Object to be specified in a subclass.
+
+ ASV will run the benchmarks within this class for any subclasses.
+
+ ASV will not benchmark this class as setup() triggers a NotImplementedError.
+ (ASV has not yet released ABC/abstractmethod support - asv#838).
+
+ """
+
+ params = [
+ 6, # minimal cube-sphere
+ int(1e6), # realistic cube-sphere size
+ ]
+ param_names = ["number of faces"]
+
+ def setup(self, *params):
+ self.object = self.create()
+
+ def create(self):
+ raise NotImplementedError
+
+ def time_create(self, *params):
+ """Create an instance of the benchmarked object. create() method is
+ specified in the subclass."""
+ self.create()
+
+
+class Connectivity(UGridCommon):
+ def setup(self, n_faces):
+        self.array = np.zeros([n_faces, 3], dtype=int)
+ super().setup(n_faces)
+
+ def create(self):
+ return ugrid.Connectivity(
+ indices=self.array, cf_role="face_node_connectivity"
+ )
+
+ def time_indices(self, n_faces):
+ _ = self.object.indices
+
+ def time_location_lengths(self, n_faces):
+ # Proofed against the Connectivity name change (633ed17).
+ if getattr(self.object, "src_lengths", False):
+ meth = self.object.src_lengths
+ else:
+ meth = self.object.location_lengths
+ _ = meth()
+
+ def time_validate_indices(self, n_faces):
+ self.object.validate_indices()
+
+
+@disable_repeat_between_setup
+class ConnectivityLazy(Connectivity):
+ """Lazy equivalent of :class:`Connectivity`."""
+
+ def setup(self, n_faces):
+ super().setup(n_faces)
+ self.array = self.object.lazy_indices()
+ self.object = self.create()
+
+
+class Mesh(UGridCommon):
+ def setup(self, n_faces, lazy=False):
+ ####
+ # Steal everything from the sample mesh for benchmarking creation of a
+ # brand new mesh.
+ source_mesh = sample_mesh(
+ n_nodes=n_faces + 2,
+ n_edges=n_faces * 2,
+ n_faces=n_faces,
+ lazy_values=lazy,
+ )
+
+ def get_coords_and_axes(location):
+ search_kwargs = {f"include_{location}s": True}
+ return [
+ (source_mesh.coord(axis=axis, **search_kwargs), axis)
+ for axis in ("x", "y")
+ ]
+
+ self.mesh_kwargs = dict(
+ topology_dimension=source_mesh.topology_dimension,
+ node_coords_and_axes=get_coords_and_axes("node"),
+ connectivities=source_mesh.connectivities(),
+ edge_coords_and_axes=get_coords_and_axes("edge"),
+ face_coords_and_axes=get_coords_and_axes("face"),
+ )
+ ####
+
+ super().setup(n_faces)
+
+ self.face_node = self.object.face_node_connectivity
+ self.node_x = self.object.node_coords.node_x
+ # Kwargs for reuse in search and remove methods.
+ self.connectivities_kwarg = dict(cf_role="edge_node_connectivity")
+ self.coords_kwarg = dict(include_faces=True)
+
+        # TODO: an opportunity for speeding up run-time if needed, since
+        #  eq_object is not needed for all benchmarks. Don't generate it
+        #  within a benchmark, though - the deepcopy execution time is large
+        #  enough to be a significant portion of the benchmark, making
+        #  regressions look smaller, and could even pick up regressions in
+        #  copying instead!
+ self.eq_object = deepcopy(self.object)
+
+ def create(self):
+ return ugrid.Mesh(**self.mesh_kwargs)
+
+ def time_add_connectivities(self, n_faces):
+ self.object.add_connectivities(self.face_node)
+
+ def time_add_coords(self, n_faces):
+ self.object.add_coords(node_x=self.node_x)
+
+ def time_connectivities(self, n_faces):
+ _ = self.object.connectivities(**self.connectivities_kwarg)
+
+ def time_coords(self, n_faces):
+ _ = self.object.coords(**self.coords_kwarg)
+
+ def time_eq(self, n_faces):
+ _ = self.object == self.eq_object
+
+ def time_remove_connectivities(self, n_faces):
+ self.object.remove_connectivities(**self.connectivities_kwarg)
+
+ def time_remove_coords(self, n_faces):
+ self.object.remove_coords(**self.coords_kwarg)
+
+
+@disable_repeat_between_setup
+class MeshLazy(Mesh):
+ """Lazy equivalent of :class:`Mesh`."""
+
+ def setup(self, n_faces, lazy=True):
+ super().setup(n_faces, lazy=lazy)
+
+
+class MeshCoord(UGridCommon):
+ # Add extra parameter value to match AuxCoord benchmarking.
+ params = UGridCommon.params + [ARTIFICIAL_DIM_SIZE]
+
+ def setup(self, n_faces, lazy=False):
+ self.mesh = sample_mesh(
+ n_nodes=n_faces + 2,
+ n_edges=n_faces * 2,
+ n_faces=n_faces,
+ lazy_values=lazy,
+ )
+
+ super().setup(n_faces)
+
+ def create(self):
+ return ugrid.MeshCoord(mesh=self.mesh, location="face", axis="x")
+
+ def time_points(self, n_faces):
+ _ = self.object.points
+
+ def time_bounds(self, n_faces):
+ _ = self.object.bounds
+
+
+@disable_repeat_between_setup
+class MeshCoordLazy(MeshCoord):
+ """Lazy equivalent of :class:`MeshCoord`."""
+
+ def setup(self, n_faces, lazy=True):
+ super().setup(n_faces, lazy=lazy)
diff --git a/benchmarks/benchmarks/experimental/ugrid/regions_combine.py b/benchmarks/benchmarks/experimental/ugrid/regions_combine.py
new file mode 100644
index 0000000000..3b2d77a80a
--- /dev/null
+++ b/benchmarks/benchmarks/experimental/ugrid/regions_combine.py
@@ -0,0 +1,250 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+"""
+Benchmarks stages of operation of the function
+:func:`iris.experimental.ugrid.utils.recombine_submeshes`.
+
+Where possible benchmarks should be parameterised for two sizes of input data:
+ * minimal: enables detection of regressions in parts of the run-time that do
+ NOT scale with data size.
+ * large: large enough to exclusively detect regressions in parts of the
+ run-time that scale with data size.
+
+"""
+import os
+
+import dask.array as da
+import numpy as np
+
+from iris import load, load_cube, save
+from iris.experimental.ugrid import PARSE_UGRID_ON_LOAD
+from iris.experimental.ugrid.utils import recombine_submeshes
+
+from ... import TrackAddedMemoryAllocation
+from ...generate_data.ugrid import make_cube_like_2d_cubesphere
+
+
+class MixinCombineRegions:
+ # Characterise time taken + memory-allocated, for various stages of combine
+ # operations on cubesphere-like test data.
+ params = [4, 500]
+ param_names = ["cubesphere-N"]
+
+ def _parametrised_cache_filename(self, n_cubesphere, content_name):
+ return f"cube_C{n_cubesphere}_{content_name}.nc"
+
+ def _make_region_cubes(self, full_mesh_cube):
+ """Make a fixed number of region cubes from a full meshcube."""
+ # Divide the cube into regions.
+ n_faces = full_mesh_cube.shape[-1]
+ # Start with a simple list of face indices
+ # first extend to multiple of 5
+        n_faces_5s = 5 * ((n_faces + 4) // 5)
+ i_faces = np.arange(n_faces_5s, dtype=int)
+ # reshape (5N,) to (N, 5)
+ i_faces = i_faces.reshape((n_faces_5s // 5, 5))
+ # reorder [2, 3, 4, 0, 1] within each block of 5
+ i_faces = np.concatenate([i_faces[:, 2:], i_faces[:, :2]], axis=1)
+ # flatten to get [2 3 4 0 1 (-) 8 9 10 6 7 (-) 13 14 15 11 12 ...]
+ i_faces = i_faces.flatten()
+        # reduce back to original length, wrap any overflows into valid range
+ i_faces = i_faces[:n_faces] % n_faces
+
+        # Divide into regions -- always slightly uneven, since 7 doesn't
+        # divide the face count exactly
+ n_regions = 7
+ n_facesperregion = n_faces // n_regions
+ i_face_regions = (i_faces // n_facesperregion) % n_regions
+ region_inds = [
+ np.where(i_face_regions == i_region)[0]
+ for i_region in range(n_regions)
+ ]
+ # NOTE: this produces 7 regions, with near-adjacent value ranges but
+ # with some points "moved" to an adjacent region.
+ # Also, region-0 is bigger (because of not dividing by 7).
+
+ # Finally, make region cubes with these indices.
+ region_cubes = [full_mesh_cube[..., inds] for inds in region_inds]
+ return region_cubes
+
+ def setup_cache(self):
+ """Cache all the necessary source data on disk."""
+
+ # Control dask, to minimise memory usage + allow largest data.
+ self.fix_dask_settings()
+
+ for n_cubesphere in self.params:
+ # Do for each parameter, since "setup_cache" is NOT parametrised
+ mesh_cube = make_cube_like_2d_cubesphere(
+ n_cube=n_cubesphere, with_mesh=True
+ )
+ # Save to files which include the parameter in the names.
+ save(
+ mesh_cube,
+ self._parametrised_cache_filename(n_cubesphere, "meshcube"),
+ )
+ region_cubes = self._make_region_cubes(mesh_cube)
+ save(
+ region_cubes,
+ self._parametrised_cache_filename(n_cubesphere, "regioncubes"),
+ )
+
+ def setup(
+ self, n_cubesphere, imaginary_data=True, create_result_cube=True
+ ):
+ """
+ The combine-tests "standard" setup operation.
+
+ Load the source cubes (full-mesh + region) from disk.
+ These are specific to the cubesize parameter.
+ The data is cached on disk rather than calculated, to avoid any
+ pre-loading of the process memory allocation.
+
+ If 'imaginary_data' is set (default), the region cubes data is replaced
+ with lazy data in the form of a da.zeros(). Otherwise, the region data
+ is lazy data from the files.
+
+    If 'create_result_cube' is set, create "self.recombined_cube" containing
+ the (still lazy) result.
+
+ NOTE: various test classes override + extend this.
+
+ """
+
+ # Load source cubes (full-mesh and regions)
+ with PARSE_UGRID_ON_LOAD.context():
+ self.full_mesh_cube = load_cube(
+ self._parametrised_cache_filename(n_cubesphere, "meshcube")
+ )
+ self.region_cubes = load(
+ self._parametrised_cache_filename(n_cubesphere, "regioncubes")
+ )
+
+ # Remove all var-names from loaded cubes, which can otherwise cause
+ # problems. Also implement 'imaginary' data.
+ for cube in self.region_cubes + [self.full_mesh_cube]:
+ cube.var_name = None
+ for coord in cube.coords():
+ coord.var_name = None
+ if imaginary_data:
+ # Replace cube data (lazy file data) with 'imaginary' data.
+ # This has the same lazy-array attributes, but is allocated by
+ # creating chunks on demand instead of loading from file.
+ data = cube.lazy_data()
+ data = da.zeros(
+ data.shape, dtype=data.dtype, chunks=data.chunksize
+ )
+ cube.data = data
+
+ if create_result_cube:
+ self.recombined_cube = self.recombine()
+
+ # Fix dask usage mode for all the subsequent performance tests.
+ self.fix_dask_settings()
+
+ def fix_dask_settings(self):
+ """
+ Fix "standard" dask behaviour for time+space testing.
+
+ Currently this is single-threaded mode, with known chunksize,
+ which is optimised for space saving so we can test largest data.
+
+ """
+
+ import dask.config as dcfg
+
+ # Use single-threaded, to avoid process-switching costs and minimise memory usage.
+        # N.B. this is generally slower, but should use less memory.
+        dcfg.set(scheduler="single-threaded")
+        # Configure iris._lazy_data.as_lazy_data to aim for 128MiB chunks.
+        dcfg.set({"array.chunk-size": "128MiB"})
+
+ def recombine(self):
+ # A handy general shorthand for the main "combine" operation.
+ result = recombine_submeshes(
+ self.full_mesh_cube,
+ self.region_cubes,
+ index_coord_name="i_mesh_face",
+ )
+ return result
+
+
+class CombineRegionsCreateCube(MixinCombineRegions):
+ """
+ Time+memory costs of creating a combined-regions cube.
+
+ The result is lazy, and we don't do the actual calculation.
+
+ """
+
+ def setup(self, n_cubesphere):
+ # In this case only, do *not* create the result cube.
+ # That is the operation we want to test.
+ super().setup(n_cubesphere, create_result_cube=False)
+
+ def time_create_combined_cube(self, n_cubesphere):
+ self.recombine()
+
+ @TrackAddedMemoryAllocation.decorator
+ def track_addedmem_create_combined_cube(self, n_cubesphere):
+ self.recombine()
+
+
+class CombineRegionsComputeRealData(MixinCombineRegions):
+ """
+ Time+memory costs of computing combined-regions data.
+ """
+
+ def time_compute_data(self, n_cubesphere):
+ _ = self.recombined_cube.data
+
+ @TrackAddedMemoryAllocation.decorator
+ def track_addedmem_compute_data(self, n_cubesphere):
+ _ = self.recombined_cube.data
+
+
+class CombineRegionsSaveData(MixinCombineRegions):
+ """
+ Test saving *only*, having replaced the input cube data with 'imaginary'
+ array data, so that input data is not loaded from disk during the save
+ operation.
+
+ """
+
+ def time_save(self, n_cubesphere):
+ # Save to disk, which must compute data + stream it to file.
+ save(self.recombined_cube, "tmp.nc")
+
+ @TrackAddedMemoryAllocation.decorator
+ def track_addedmem_save(self, n_cubesphere):
+ save(self.recombined_cube, "tmp.nc")
+
+ def track_filesize_saved(self, n_cubesphere):
+ save(self.recombined_cube, "tmp.nc")
+ return os.path.getsize("tmp.nc") * 1.0e-6
+
+
+CombineRegionsSaveData.track_filesize_saved.unit = "Mb"
+
+
+class CombineRegionsFileStreamedCalc(MixinCombineRegions):
+ """
+ Test the whole cost of file-to-file streaming.
+ Uses the combined cube which is based on lazy data loading from the region
+ cubes on disk.
+ """
+
+ def setup(self, n_cubesphere):
+ # In this case only, do *not* replace the loaded regions data with
+ # 'imaginary' data, as we want to test file-to-file calculation+save.
+ super().setup(n_cubesphere, imaginary_data=False)
+
+ def time_stream_file2file(self, n_cubesphere):
+ # Save to disk, which must compute data + stream it to file.
+ save(self.recombined_cube, "tmp.nc")
+
+ @TrackAddedMemoryAllocation.decorator
+ def track_addedmem_stream_file2file(self, n_cubesphere):
+ save(self.recombined_cube, "tmp.nc")
diff --git a/benchmarks/benchmarks/generate_data/__init__.py b/benchmarks/benchmarks/generate_data/__init__.py
new file mode 100644
index 0000000000..78b971d9de
--- /dev/null
+++ b/benchmarks/benchmarks/generate_data/__init__.py
@@ -0,0 +1,123 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+"""
+Scripts for generating supporting data for benchmarking.
+
+Data generated using Iris should use :func:`run_function_elsewhere`, which
+means that data is generated using a fixed version of Iris and a fixed
+environment, rather than those that get changed when the benchmarking run
+checks out a new commit.
+
+Downstream use of data generated 'elsewhere' requires saving; usually to a
+NetCDF file. Pickling would also work, but risks incompatibility if the
+benchmark sequence runs over two different Python versions.
+
+"""
+from contextlib import contextmanager
+from inspect import getsource
+from os import environ
+from pathlib import Path
+from subprocess import CalledProcessError, check_output, run
+from textwrap import dedent
+from warnings import warn
+
+from iris._lazy_data import as_concrete_data
+from iris.fileformats import netcdf
+
+#: Python executable used by :func:`run_function_elsewhere`, set via env
+#: variable of same name. Must be path of Python within an environment that
+#: includes Iris (including dependencies and test modules) and Mule.
+try:
+ DATA_GEN_PYTHON = environ["DATA_GEN_PYTHON"]
+ _ = check_output([DATA_GEN_PYTHON, "-c", "a = True"])
+except KeyError:
+ error = "Env variable DATA_GEN_PYTHON not defined."
+ raise KeyError(error)
+except (CalledProcessError, FileNotFoundError, PermissionError):
+ error = (
+ "Env variable DATA_GEN_PYTHON not a runnable python executable path."
+ )
+ raise ValueError(error)
+
+# The default location of data files used in benchmarks. Used by CI.
+default_data_dir = (Path(__file__).parents[2] / ".data").resolve()
+# Optionally override the default data location with environment variable.
+BENCHMARK_DATA = Path(environ.get("BENCHMARK_DATA", default_data_dir))
+if BENCHMARK_DATA == default_data_dir:
+ BENCHMARK_DATA.mkdir(exist_ok=True)
+ message = (
+ f"No BENCHMARK_DATA env var, defaulting to {BENCHMARK_DATA}. "
+ "Note that some benchmark files are GB in size."
+ )
+ warn(message)
+elif not BENCHMARK_DATA.is_dir():
+ message = f"Not a directory: {BENCHMARK_DATA} ."
+ raise ValueError(message)
+
+# Manual flag to allow the rebuilding of synthetic data.
+# False forces a benchmark run to re-make all the data files.
+REUSE_DATA = True
+
+
+def run_function_elsewhere(func_to_run, *args, **kwargs):
+ """
+ Run a given function using the :const:`DATA_GEN_PYTHON` executable.
+
+ This structure allows the function to be written natively.
+
+ Parameters
+ ----------
+ func_to_run : FunctionType
+ The function object to be run.
+ NOTE: the function must be completely self-contained, i.e. perform all
+ its own imports (within the target :const:`DATA_GEN_PYTHON`
+ environment).
+ *args : tuple, optional
+ Function call arguments. Must all be expressible as simple literals,
+ i.e. the ``repr`` must be a valid literal expression.
+ **kwargs: dict, optional
+ Function call keyword arguments. All values must be expressible as
+ simple literals (see ``*args``).
+
+ Returns
+ -------
+ str
+ The ``stdout`` from the run.
+
+ """
+ func_string = dedent(getsource(func_to_run))
+ func_string = func_string.replace("@staticmethod\n", "")
+ func_call_term_strings = [repr(arg) for arg in args]
+ func_call_term_strings += [
+ f"{name}={repr(val)}" for name, val in kwargs.items()
+ ]
+ func_call_string = (
+ f"{func_to_run.__name__}(" + ",".join(func_call_term_strings) + ")"
+ )
+ python_string = "\n".join([func_string, func_call_string])
+ result = run(
+ [DATA_GEN_PYTHON, "-c", python_string], capture_output=True, check=True
+ )
+ return result.stdout
+
+
+@contextmanager
+def load_realised():
+    """
+    Force NetCDF loading with realised arrays.
+
+    Data is passed between the data generation and benchmarking environments
+    via file loading, but some benchmarks are only meaningful if they start
+    with realised arrays.
+    """
+    from iris.fileformats.netcdf import _get_cf_var_data as pre_patched
+
+    def patched(cf_var, filename):
+        return as_concrete_data(pre_patched(cf_var, filename))
+
+    netcdf._get_cf_var_data = patched
+    try:
+        yield netcdf
+    finally:
+        # Restore the original function even if the caller's block raises.
+        netcdf._get_cf_var_data = pre_patched
diff --git a/benchmarks/benchmarks/generate_data/stock.py b/benchmarks/benchmarks/generate_data/stock.py
new file mode 100644
index 0000000000..eaf46bb405
--- /dev/null
+++ b/benchmarks/benchmarks/generate_data/stock.py
@@ -0,0 +1,166 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+"""
+Wrappers for using :mod:`iris.tests.stock` methods for benchmarking.
+
+See :mod:`benchmarks.generate_data` for an explanation of this structure.
+"""
+
+from hashlib import sha256
+import json
+from pathlib import Path
+
+from iris.experimental.ugrid import PARSE_UGRID_ON_LOAD, load_mesh
+
+from . import BENCHMARK_DATA, REUSE_DATA, load_realised, run_function_elsewhere
+
+
+def hash_args(*args, **kwargs):
+ """Convert arguments into a short hash - for preserving args in filenames."""
+ arg_string = str(args)
+ kwarg_string = json.dumps(kwargs)
+ full_string = arg_string + kwarg_string
+ return sha256(full_string.encode()).hexdigest()[:10]
+
+
+def _create_file__xios_common(func_name, **kwargs):
+ def _external(func_name_, temp_file_dir, **kwargs_):
+ from iris.tests.stock import netcdf
+
+ func = getattr(netcdf, func_name_)
+ print(func(temp_file_dir, **kwargs_), end="")
+
+ args_hash = hash_args(**kwargs)
+ save_path = (BENCHMARK_DATA / f"{func_name}_{args_hash}").with_suffix(
+ ".nc"
+ )
+ if not REUSE_DATA or not save_path.is_file():
+ # The xios functions take control of save location so need to move to
+ # a more specific name that allows re-use.
+ actual_path = run_function_elsewhere(
+ _external,
+ func_name_=func_name,
+ temp_file_dir=str(BENCHMARK_DATA),
+ **kwargs,
+ )
+ Path(actual_path.decode()).replace(save_path)
+ return save_path
+
+
+def create_file__xios_2d_face_half_levels(
+ temp_file_dir, dataset_name, n_faces=866, n_times=1
+):
+ """
+ Wrapper for :meth:`iris.tests.stock.netcdf.create_file__xios_2d_face_half_levels`.
+
+    The ``temp_file_dir`` argument is accepted for signature compatibility
+    but ignored - the save location is controlled here instead (see
+    :const:`BENCHMARK_DATA`).
+
+ todo: is create_file__xios_2d_face_half_levels still appropriate now we can
+ properly save Mesh Cubes?
+ """
+
+ return _create_file__xios_common(
+ func_name="create_file__xios_2d_face_half_levels",
+ dataset_name=dataset_name,
+ n_faces=n_faces,
+ n_times=n_times,
+ )
+
+
+def create_file__xios_3d_face_half_levels(
+ temp_file_dir, dataset_name, n_faces=866, n_times=1, n_levels=38
+):
+ """
+ Wrapper for :meth:`iris.tests.stock.netcdf.create_file__xios_3d_face_half_levels`.
+
+    The ``temp_file_dir`` argument is accepted for signature compatibility
+    but ignored - the save location is controlled here instead (see
+    :const:`BENCHMARK_DATA`).
+
+ todo: is create_file__xios_3d_face_half_levels still appropriate now we can
+ properly save Mesh Cubes?
+ """
+
+ return _create_file__xios_common(
+ func_name="create_file__xios_3d_face_half_levels",
+ dataset_name=dataset_name,
+ n_faces=n_faces,
+ n_times=n_times,
+ n_levels=n_levels,
+ )
+
+
+def sample_mesh(n_nodes=None, n_faces=None, n_edges=None, lazy_values=False):
+    """Wrapper for :meth:`iris.tests.stock.mesh.sample_mesh`."""
+
+ def _external(*args, **kwargs):
+ from iris.experimental.ugrid import save_mesh
+ from iris.tests.stock.mesh import sample_mesh
+
+ save_path_ = kwargs.pop("save_path")
+ # Always saving, so laziness is irrelevant. Use lazy to save time.
+ kwargs["lazy_values"] = True
+ new_mesh = sample_mesh(*args, **kwargs)
+ save_mesh(new_mesh, save_path_)
+
+ arg_list = [n_nodes, n_faces, n_edges]
+ args_hash = hash_args(*arg_list)
+ save_path = (BENCHMARK_DATA / f"sample_mesh_{args_hash}").with_suffix(
+ ".nc"
+ )
+ if not REUSE_DATA or not save_path.is_file():
+ _ = run_function_elsewhere(
+ _external, *arg_list, save_path=str(save_path)
+ )
+ with PARSE_UGRID_ON_LOAD.context():
+ if not lazy_values:
+ # Realise everything.
+ with load_realised():
+ mesh = load_mesh(str(save_path))
+ else:
+ mesh = load_mesh(str(save_path))
+ return mesh
+
+
+def sample_meshcoord(sample_mesh_kwargs=None, location="face", axis="x"):
+ """
+ Wrapper for :meth:`iris.tests.stock.mesh.sample_meshcoord`.
+
+    Parameters deviate from the original, as we cannot pass a
+    :class:`iris.experimental.ugrid.Mesh` to the separate Python instance -
+    we must instead generate the Mesh as well.
+
+ MeshCoords cannot be saved to file, so the _external method saves the
+ MeshCoord's Mesh, then the original Python instance loads in that Mesh and
+ regenerates the MeshCoord from there.
+ """
+
+ def _external(sample_mesh_kwargs_, save_path_):
+ from iris.experimental.ugrid import save_mesh
+ from iris.tests.stock.mesh import sample_mesh, sample_meshcoord
+
+ if sample_mesh_kwargs_:
+ input_mesh = sample_mesh(**sample_mesh_kwargs_)
+ else:
+ input_mesh = None
+ # Don't parse the location or axis arguments - only saving the Mesh at
+ # this stage.
+ new_meshcoord = sample_meshcoord(mesh=input_mesh)
+ save_mesh(new_meshcoord.mesh, save_path_)
+
+    args_hash = hash_args(**(sample_mesh_kwargs or {}))
+ save_path = (
+ BENCHMARK_DATA / f"sample_mesh_coord_{args_hash}"
+ ).with_suffix(".nc")
+ if not REUSE_DATA or not save_path.is_file():
+ _ = run_function_elsewhere(
+ _external,
+ sample_mesh_kwargs_=sample_mesh_kwargs,
+ save_path_=str(save_path),
+ )
+ with PARSE_UGRID_ON_LOAD.context():
+ with load_realised():
+ source_mesh = load_mesh(str(save_path))
+ # Regenerate MeshCoord from its Mesh, which we saved.
+ return source_mesh.to_MeshCoord(location=location, axis=axis)
diff --git a/benchmarks/benchmarks/generate_data/ugrid.py b/benchmarks/benchmarks/generate_data/ugrid.py
new file mode 100644
index 0000000000..527b49a6bb
--- /dev/null
+++ b/benchmarks/benchmarks/generate_data/ugrid.py
@@ -0,0 +1,195 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+"""
+Scripts for generating supporting data for UGRID-related benchmarking.
+"""
+from iris import load_cube as iris_loadcube
+from iris.experimental.ugrid import PARSE_UGRID_ON_LOAD
+
+from . import BENCHMARK_DATA, REUSE_DATA, load_realised, run_function_elsewhere
+from .stock import (
+ create_file__xios_2d_face_half_levels,
+ create_file__xios_3d_face_half_levels,
+)
+
+
+def generate_cube_like_2d_cubesphere(
+ n_cube: int, with_mesh: bool, output_path: str
+):
+ """
+    Construct and save to file an LFRic cubesphere-like cube for a given
+ cubesphere size, *or* a simpler structured (UM-like) cube of equivalent
+ size.
+
+ NOTE: this function is *NEVER* called from within this actual package.
+    Instead, it is to be called via :func:`run_function_elsewhere`,
+    so that it can use up-to-date facilities, independent of the ASV-controlled
+    environment which contains the "Iris commit under test".
+ This means:
+ * it must be completely self-contained : i.e. it includes all its
+ own imports, and saves results to an output file.
+
+ """
+ from iris import save
+ from iris.tests.stock.mesh import sample_mesh, sample_mesh_cube
+
+ n_face_nodes = n_cube * n_cube
+ n_faces = 6 * n_face_nodes
+
+    # Set n_nodes=n_faces and n_edges=2*n_faces : not exact, but similar to
+    # a 'real' cubesphere.
+ n_nodes = n_faces
+ n_edges = 2 * n_faces
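+    # (e.g. C4 gives n_faces = 6 * 16 = 96; an exact cubesphere satisfies
+    # Euler's V - E + F = 2, i.e. 98 nodes and 192 edges, so n_edges is
+    # exact and n_nodes is just 2 short.)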
+ if with_mesh:
+ mesh = sample_mesh(
+ n_nodes=n_nodes, n_faces=n_faces, n_edges=n_edges, lazy_values=True
+ )
+ cube = sample_mesh_cube(mesh=mesh, n_z=1)
+ else:
+ cube = sample_mesh_cube(nomesh_faces=n_faces, n_z=1)
+
+ # Strip off the 'extra' aux-coord mapping the mesh, which sample-cube adds
+ # but which we don't want.
+ cube.remove_coord("mesh_face_aux")
+
+ # Save the result to a named file.
+ save(cube, output_path)
+
+
+def make_cube_like_2d_cubesphere(n_cube: int, with_mesh: bool):
+ """
+    Generate an LFRic cubesphere-like cube for a given cubesphere size,
+ *or* a simpler structured (UM-like) cube of equivalent size.
+
+ All the cube data, coords and mesh content are LAZY, and produced without
+ allocating large real arrays (to allow peak-memory testing).
+
+    NOTE: the actual cube generation is done in a stable Iris environment via
+    :func:`benchmarks.generate_data.run_function_elsewhere`, so it is all
+    channelled via cached netcdf files in our common test-data directory.
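+
+    For example - an illustrative sketch only, with an arbitrary size::
+
+        cube = make_cube_like_2d_cubesphere(n_cube=96, with_mesh=True)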
+
+ """
+ identifying_filename = (
+ f"cube_like_2d_cubesphere_C{n_cube}_Mesh={with_mesh}.nc"
+ )
+ filepath = BENCHMARK_DATA / identifying_filename
+ if not filepath.exists():
+ # Create the required testfile, by running the generation code remotely
+ # in a 'fixed' python environment.
+ run_function_elsewhere(
+ generate_cube_like_2d_cubesphere,
+ n_cube,
+ with_mesh=with_mesh,
+ output_path=str(filepath),
+ )
+
+ # File now *should* definitely exist: content is simply the desired cube.
+ with PARSE_UGRID_ON_LOAD.context():
+ cube = iris_loadcube(str(filepath))
+
+ # Ensure correct laziness.
+ _ = cube.data
+ for coord in cube.coords(mesh_coords=False):
+ assert not coord.has_lazy_points()
+ assert not coord.has_lazy_bounds()
+ if cube.mesh:
+ for coord in cube.mesh.coords():
+ assert coord.has_lazy_points()
+ for conn in cube.mesh.connectivities():
+ assert conn.has_lazy_indices()
+
+ return cube
+
+
+def make_cube_like_umfield(xy_dims):
+ """
+ Create a "UM-like" cube with lazy content, for save performance testing.
+
+ Roughly equivalent to a single current UM cube, to be compared with
+ a "make_cube_like_2d_cubesphere(n_cube=_N_CUBESPHERE_UM_EQUIVALENT)"
+ (see below).
+
+ Note: probably a bit over-simplified, as there is no time coord, but that
+ is probably equally true of our LFRic-style synthetic data.
+
+ Args:
+ * xy_dims (2-tuple):
+ Set the horizontal dimensions = n-lats, n-lons.
+
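+    For example - an illustrative sketch only, with arbitrary sizes::
+
+        cube = make_cube_like_umfield((960, 1280))
+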
+ """
+
+ def _external(xy_dims_, save_path_):
+ from dask import array as da
+ import numpy as np
+
+ from iris import save
+ from iris.coords import DimCoord
+ from iris.cube import Cube
+
+ nz, ny, nx = (1,) + xy_dims_
+
+        # Base data. Note: float32, not float64 as in LFRic/XIOS outputs.
+ lazy_data = da.zeros((nz, ny, nx), dtype=np.float32)
+ cube = Cube(lazy_data, long_name="structured_phenom")
+
+ # Add simple dim coords also.
+ z_dimco = DimCoord(np.arange(nz), long_name="level", units=1)
+ y_dimco = DimCoord(
+ np.linspace(-90.0, 90.0, ny),
+ standard_name="latitude",
+ units="degrees",
+ )
+ x_dimco = DimCoord(
+ np.linspace(-180.0, 180.0, nx),
+ standard_name="longitude",
+ units="degrees",
+ )
+ for idim, co in enumerate([z_dimco, y_dimco, x_dimco]):
+ cube.add_dim_coord(co, idim)
+
+ save(cube, save_path_)
+
+ save_path = (
+ BENCHMARK_DATA / f"make_cube_like_umfield_{xy_dims}"
+ ).with_suffix(".nc")
+ if not REUSE_DATA or not save_path.is_file():
+ _ = run_function_elsewhere(_external, xy_dims, str(save_path))
+ with PARSE_UGRID_ON_LOAD.context():
+ with load_realised():
+ cube = iris_loadcube(str(save_path))
+
+ return cube
+
+
+def make_cubesphere_testfile(c_size, n_levels=0, n_times=1):
+ """
+    Build a C<size> cubesphere testfile in a given directory, with a
+    standard naming.
+    If n_levels > 0, a 3d file is created with the specified number of levels.
+    Return the file path.
+
+ todo: is create_file__xios... still appropriate now we can properly save
+ Mesh Cubes?
+
+ """
+ n_faces = 6 * c_size * c_size
+ stem_name = f"mesh_cubesphere_C{c_size}_t{n_times}"
+ kwargs = dict(
+ temp_file_dir=None,
+ dataset_name=stem_name, # N.B. function adds the ".nc" extension
+ n_times=n_times,
+ n_faces=n_faces,
+ )
+
+ three_d = n_levels > 0
+ if three_d:
+ kwargs["n_levels"] = n_levels
+ kwargs["dataset_name"] += f"_{n_levels}levels"
+ func = create_file__xios_3d_face_half_levels
+ else:
+ func = create_file__xios_2d_face_half_levels
+
+ file_path = func(**kwargs)
+ return file_path
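+
+
+# A minimal usage sketch (sizes are illustrative only; the file location is
+# controlled by the generate_data module, via temp_file_dir=None):
+#
+#     path = make_cubesphere_testfile(c_size=12, n_levels=3, n_times=1)
+#     # file name: "mesh_cubesphere_C12_t1_3levels.nc"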
diff --git a/benchmarks/benchmarks/generate_data/um_files.py b/benchmarks/benchmarks/generate_data/um_files.py
new file mode 100644
index 0000000000..39773bbb4b
--- /dev/null
+++ b/benchmarks/benchmarks/generate_data/um_files.py
@@ -0,0 +1,197 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+"""
+Generate FF, PP and NetCDF files based on a minimal synthetic FF file.
+
+NOTE: uses the Mule package, so depends on an environment with Mule installed.
+"""
+
+
+def _create_um_files(
+ len_x: int, len_y: int, len_z: int, len_t: int, compress, save_paths: dict
+) -> None:
+ """
+ Generate an FF object of given shape and compression, save to FF/PP/NetCDF.
+
+ This is run externally
+ (:func:`benchmarks.generate_data.run_function_elsewhere`), so all imports
+ are self-contained and input parameters are simple types.
+ """
+ from copy import deepcopy
+ from datetime import datetime
+ from tempfile import NamedTemporaryFile
+
+ from mule import ArrayDataProvider, Field3, FieldsFile
+ from mule.pp import fields_to_pp_file
+ import numpy as np
+
+ from iris import load_cube
+ from iris import save as save_cube
+
+ template = {
+ "fixed_length_header": {"dataset_type": 3, "grid_staggering": 3},
+ "integer_constants": {
+ "num_p_levels": len_z,
+ "num_cols": len_x,
+ "num_rows": len_y,
+ },
+ "real_constants": {},
+ "level_dependent_constants": {"dims": (len_z + 1, None)},
+ }
+ new_ff = FieldsFile.from_template(deepcopy(template))
+
+ data_array = np.arange(len_x * len_y).reshape(len_x, len_y)
+ array_provider = ArrayDataProvider(data_array)
+
+ def add_field(level_: int, time_step_: int) -> None:
+ """
+ Add a minimal field to the new :class:`~mule.FieldsFile`.
+
+        Includes the minimum information to allow Mule saving and Iris
+        loading, as well as incrementing the vertical level and time step,
+        to allow generation of z and t dimensions.
+ """
+ new_field = Field3.empty()
+ # To correspond to the header-release 3 class used.
+ new_field.lbrel = 3
+ # Mule uses the first element of the lookup to test for
+ # unpopulated fields (and skips them), so the first element should
+ # be set to something. The year will do.
+ new_field.raw[1] = datetime.now().year
+
+ # Horizontal.
+ new_field.lbcode = 1
+ new_field.lbnpt = len_x
+ new_field.lbrow = len_y
+ new_field.bdx = new_ff.real_constants.col_spacing
+ new_field.bdy = new_ff.real_constants.row_spacing
+ new_field.bzx = new_ff.real_constants.start_lon - 0.5 * new_field.bdx
+ new_field.bzy = new_ff.real_constants.start_lat - 0.5 * new_field.bdy
+
+ # Hemisphere.
+ new_field.lbhem = 32
+ # Processing.
+ new_field.lbproc = 0
+
+ # Vertical.
+        # Set hybrid height values by simulating sequences similar to those
+        # in a theta file.
+ new_field.lbvc = 65
+ if level_ == 0:
+ new_field.lblev = 9999
+ else:
+ new_field.lblev = level_
+
+ level_1 = level_ + 1
+ six_rec = 20 / 3
+ three_rec = six_rec / 2
+
+ new_field.blev = level_1**2 * six_rec - six_rec
+ new_field.brsvd1 = (
+ level_1**2 * six_rec + (six_rec * level_1) - three_rec
+ )
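+        # (e.g. level_=1 gives blev = (4 - 1) * 20/3 = 20.0 and
+        #  brsvd1 = (4 + 2) * 20/3 - 10/3 = 110/3.)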
+
+ brsvd2_simulated = np.linspace(0.995, 0, len_z)
+ shift = min(len_z, 2)
+ bhrlev_simulated = np.concatenate(
+ [np.ones(shift), brsvd2_simulated[:-shift]]
+ )
+ new_field.brsvd2 = brsvd2_simulated[level_]
+ new_field.bhrlev = bhrlev_simulated[level_]
+
+ # Time.
+ new_field.lbtim = 11
+
+ new_field.lbyr = time_step_
+ for attr_name in ["lbmon", "lbdat", "lbhr", "lbmin", "lbsec"]:
+ setattr(new_field, attr_name, 0)
+
+ new_field.lbyrd = time_step_ + 1
+ for attr_name in ["lbmond", "lbdatd", "lbhrd", "lbmind", "lbsecd"]:
+ setattr(new_field, attr_name, 0)
+
+ # Data and packing.
+ new_field.lbuser1 = 1
+ new_field.lbpack = int(compress)
+ new_field.bacc = 0
+ new_field.bmdi = -1
+ new_field.lbext = 0
+ new_field.set_data_provider(array_provider)
+
+ new_ff.fields.append(new_field)
+
+ for time_step in range(len_t):
+ for level in range(len_z):
+ add_field(level, time_step + 1)
+
+ ff_path = save_paths.get("FF", None)
+ pp_path = save_paths.get("PP", None)
+ nc_path = save_paths.get("NetCDF", None)
+
+ if ff_path:
+ new_ff.to_file(ff_path)
+ if pp_path:
+ fields_to_pp_file(str(pp_path), new_ff.fields)
+ if nc_path:
+ temp_ff_path = None
+ # Need an Iris Cube from the FF content.
+ if ff_path:
+ # Use the existing file.
+ ff_cube = load_cube(ff_path)
+ else:
+ # Make a temporary file.
+ temp_ff_path = NamedTemporaryFile()
+ new_ff.to_file(temp_ff_path.name)
+ ff_cube = load_cube(temp_ff_path.name)
+
+ save_cube(ff_cube, nc_path, zlib=compress)
+ if temp_ff_path:
+ temp_ff_path.close()
+
+
+FILE_EXTENSIONS = {"FF": "", "PP": ".pp", "NetCDF": ".nc"}
+
+
+def create_um_files(
+ len_x: int,
+ len_y: int,
+ len_z: int,
+ len_t: int,
+ compress: bool,
+ file_types: list,
+) -> dict:
+ """
+    Generate FF, PP and NetCDF files of a specified shape and compression,
+    all based on a single synthetic FF.
+
+ All files representing a given shape are saved in a dedicated directory. A
+ dictionary of the saved paths is returned.
+
+ If the required files exist, they are re-used, unless
+ :const:`benchmarks.REUSE_DATA` is ``False``.
+ """
+    # Self-contained imports to avoid linting confusion with _create_um_files().
+ from . import BENCHMARK_DATA, REUSE_DATA, run_function_elsewhere
+
+ save_name_sections = ["UM", len_x, len_y, len_z, len_t]
+ save_name = "_".join(str(section) for section in save_name_sections)
+ save_dir = BENCHMARK_DATA / save_name
+ if not save_dir.is_dir():
+ save_dir.mkdir(parents=True)
+
+ save_paths = {}
+ files_exist = True
+ for file_type in file_types:
+ file_ext = FILE_EXTENSIONS[file_type]
+ save_path = (save_dir / f"{compress}").with_suffix(file_ext)
+ files_exist = files_exist and save_path.is_file()
+ save_paths[file_type] = str(save_path)
+
+ if not REUSE_DATA or not files_exist:
+ _ = run_function_elsewhere(
+ _create_um_files, len_x, len_y, len_z, len_t, compress, save_paths
+ )
+
+ return save_paths
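+
+
+# A minimal usage sketch (shape values are illustrative only):
+#
+#     paths = create_um_files(
+#         20, 20, 5, 1, compress=False, file_types=["FF", "PP", "NetCDF"]
+#     )
+#     # e.g. paths["PP"] ends with ".../UM_20_20_5_1/False.pp"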
diff --git a/benchmarks/benchmarks/import_iris.py b/benchmarks/benchmarks/import_iris.py
index 3e83ea8cfe..ad54c23122 100644
--- a/benchmarks/benchmarks/import_iris.py
+++ b/benchmarks/benchmarks/import_iris.py
@@ -3,240 +3,247 @@
# This file is part of Iris and is released under the LGPL license.
# See COPYING and COPYING.LESSER in the root of the repository for full
# licensing details.
-import sys
+from importlib import import_module, reload
class Iris:
- warmup_time = 0
- number = 1
- repeat = 10
-
- def setup(self):
- self.before = set(sys.modules.keys())
-
- def teardown(self):
- after = set(sys.modules.keys())
- diff = after - self.before
- for module in diff:
- sys.modules.pop(module)
+ @staticmethod
+ def _import(module_name):
+ """
+ Have experimented with adding sleep() commands into the imported
+ modules. The results reveal:
+
+ ASV avoids invoking `import x` if nothing gets called in the
+ benchmark (some imports were timed, but only those where calls
+ happened during import).
+
+ Using reload() is not identical to importing, but does produce
+ results that are very close to expected import times, so this is fine
+ for monitoring for regressions.
+ It is also ideal for accurate repetitions, without the need to mess
+ with the ASV `number` attribute etc, since cached imports are not used
+ and the repetitions are therefore no faster than the first run.
+ """
+ mod = import_module(module_name)
+ reload(mod)
def time_iris(self):
- import iris
+ self._import("iris")
def time__concatenate(self):
- import iris._concatenate
+ self._import("iris._concatenate")
def time__constraints(self):
- import iris._constraints
+ self._import("iris._constraints")
def time__data_manager(self):
- import iris._data_manager
+ self._import("iris._data_manager")
def time__deprecation(self):
- import iris._deprecation
+ self._import("iris._deprecation")
def time__lazy_data(self):
- import iris._lazy_data
+ self._import("iris._lazy_data")
def time__merge(self):
- import iris._merge
+ self._import("iris._merge")
def time__representation(self):
- import iris._representation
+ self._import("iris._representation")
def time_analysis(self):
- import iris.analysis
+ self._import("iris.analysis")
def time_analysis__area_weighted(self):
- import iris.analysis._area_weighted
+ self._import("iris.analysis._area_weighted")
def time_analysis__grid_angles(self):
- import iris.analysis._grid_angles
+ self._import("iris.analysis._grid_angles")
def time_analysis__interpolation(self):
- import iris.analysis._interpolation
+ self._import("iris.analysis._interpolation")
def time_analysis__regrid(self):
- import iris.analysis._regrid
+ self._import("iris.analysis._regrid")
def time_analysis__scipy_interpolate(self):
- import iris.analysis._scipy_interpolate
+ self._import("iris.analysis._scipy_interpolate")
def time_analysis_calculus(self):
- import iris.analysis.calculus
+ self._import("iris.analysis.calculus")
def time_analysis_cartography(self):
- import iris.analysis.cartography
+ self._import("iris.analysis.cartography")
def time_analysis_geomerty(self):
- import iris.analysis.geometry
+ self._import("iris.analysis.geometry")
def time_analysis_maths(self):
- import iris.analysis.maths
+ self._import("iris.analysis.maths")
def time_analysis_stats(self):
- import iris.analysis.stats
+ self._import("iris.analysis.stats")
def time_analysis_trajectory(self):
- import iris.analysis.trajectory
+ self._import("iris.analysis.trajectory")
def time_aux_factory(self):
- import iris.aux_factory
+ self._import("iris.aux_factory")
def time_common(self):
- import iris.common
+ self._import("iris.common")
def time_common_lenient(self):
- import iris.common.lenient
+ self._import("iris.common.lenient")
def time_common_metadata(self):
- import iris.common.metadata
+ self._import("iris.common.metadata")
def time_common_mixin(self):
- import iris.common.mixin
+ self._import("iris.common.mixin")
def time_common_resolve(self):
- import iris.common.resolve
+ self._import("iris.common.resolve")
def time_config(self):
- import iris.config
+ self._import("iris.config")
def time_coord_categorisation(self):
- import iris.coord_categorisation
+ self._import("iris.coord_categorisation")
def time_coord_systems(self):
- import iris.coord_systems
+ self._import("iris.coord_systems")
def time_coords(self):
- import iris.coords
+ self._import("iris.coords")
def time_cube(self):
- import iris.cube
+ self._import("iris.cube")
def time_exceptions(self):
- import iris.exceptions
+ self._import("iris.exceptions")
def time_experimental(self):
- import iris.experimental
+ self._import("iris.experimental")
def time_fileformats(self):
- import iris.fileformats
+ self._import("iris.fileformats")
def time_fileformats__ff(self):
- import iris.fileformats._ff
+ self._import("iris.fileformats._ff")
def time_fileformats__ff_cross_references(self):
- import iris.fileformats._ff_cross_references
+ self._import("iris.fileformats._ff_cross_references")
def time_fileformats__pp_lbproc_pairs(self):
- import iris.fileformats._pp_lbproc_pairs
+ self._import("iris.fileformats._pp_lbproc_pairs")
def time_fileformats_structured_array_identification(self):
- import iris.fileformats._structured_array_identification
+ self._import("iris.fileformats._structured_array_identification")
def time_fileformats_abf(self):
- import iris.fileformats.abf
+ self._import("iris.fileformats.abf")
def time_fileformats_cf(self):
- import iris.fileformats.cf
+ self._import("iris.fileformats.cf")
def time_fileformats_dot(self):
- import iris.fileformats.dot
+ self._import("iris.fileformats.dot")
def time_fileformats_name(self):
- import iris.fileformats.name
+ self._import("iris.fileformats.name")
def time_fileformats_name_loaders(self):
- import iris.fileformats.name_loaders
+ self._import("iris.fileformats.name_loaders")
def time_fileformats_netcdf(self):
- import iris.fileformats.netcdf
+ self._import("iris.fileformats.netcdf")
def time_fileformats_nimrod(self):
- import iris.fileformats.nimrod
+ self._import("iris.fileformats.nimrod")
def time_fileformats_nimrod_load_rules(self):
- import iris.fileformats.nimrod_load_rules
+ self._import("iris.fileformats.nimrod_load_rules")
def time_fileformats_pp(self):
- import iris.fileformats.pp
+ self._import("iris.fileformats.pp")
def time_fileformats_pp_load_rules(self):
- import iris.fileformats.pp_load_rules
+ self._import("iris.fileformats.pp_load_rules")
def time_fileformats_pp_save_rules(self):
- import iris.fileformats.pp_save_rules
+ self._import("iris.fileformats.pp_save_rules")
def time_fileformats_rules(self):
- import iris.fileformats.rules
+ self._import("iris.fileformats.rules")
def time_fileformats_um(self):
- import iris.fileformats.um
+ self._import("iris.fileformats.um")
def time_fileformats_um__fast_load(self):
- import iris.fileformats.um._fast_load
+ self._import("iris.fileformats.um._fast_load")
def time_fileformats_um__fast_load_structured_fields(self):
- import iris.fileformats.um._fast_load_structured_fields
+ self._import("iris.fileformats.um._fast_load_structured_fields")
def time_fileformats_um__ff_replacement(self):
- import iris.fileformats.um._ff_replacement
+ self._import("iris.fileformats.um._ff_replacement")
def time_fileformats_um__optimal_array_structuring(self):
- import iris.fileformats.um._optimal_array_structuring
+ self._import("iris.fileformats.um._optimal_array_structuring")
def time_fileformats_um_cf_map(self):
- import iris.fileformats.um_cf_map
+ self._import("iris.fileformats.um_cf_map")
def time_io(self):
- import iris.io
+ self._import("iris.io")
def time_io_format_picker(self):
- import iris.io.format_picker
+ self._import("iris.io.format_picker")
def time_iterate(self):
- import iris.iterate
+ self._import("iris.iterate")
def time_palette(self):
- import iris.palette
+ self._import("iris.palette")
def time_plot(self):
- import iris.plot
+ self._import("iris.plot")
def time_quickplot(self):
- import iris.quickplot
+ self._import("iris.quickplot")
def time_std_names(self):
- import iris.std_names
+ self._import("iris.std_names")
def time_symbols(self):
- import iris.symbols
+ self._import("iris.symbols")
def time_tests(self):
- import iris.tests
+ self._import("iris.tests")
def time_time(self):
- import iris.time
+ self._import("iris.time")
def time_util(self):
- import iris.util
+ self._import("iris.util")
# third-party imports
def time_third_party_cartopy(self):
- import cartopy
+ self._import("cartopy")
def time_third_party_cf_units(self):
- import cf_units
+ self._import("cf_units")
def time_third_party_cftime(self):
- import cftime
+ self._import("cftime")
def time_third_party_matplotlib(self):
- import matplotlib
+ self._import("matplotlib")
def time_third_party_numpy(self):
- import numpy
+ self._import("numpy")
def time_third_party_scipy(self):
- import scipy
+ self._import("scipy")
diff --git a/benchmarks/benchmarks/iterate.py b/benchmarks/benchmarks/iterate.py
index 20422750ef..0a5415ac2b 100644
--- a/benchmarks/benchmarks/iterate.py
+++ b/benchmarks/benchmarks/iterate.py
@@ -9,9 +9,10 @@
"""
import numpy as np
-from benchmarks import ARTIFICIAL_DIM_SIZE
from iris import coords, cube, iterate
+from . import ARTIFICIAL_DIM_SIZE
+
def setup():
"""General variables needed by multiple benchmark classes."""
diff --git a/benchmarks/benchmarks/load/__init__.py b/benchmarks/benchmarks/load/__init__.py
new file mode 100644
index 0000000000..1b0ea696f6
--- /dev/null
+++ b/benchmarks/benchmarks/load/__init__.py
@@ -0,0 +1,187 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+"""
+File loading benchmark tests.
+
+Where applicable benchmarks should be parameterised for two sizes of input data:
+ * minimal: enables detection of regressions in parts of the run-time that do
+ NOT scale with data size.
+ * large: large enough to exclusively detect regressions in parts of the
+ run-time that scale with data size. Size should be _just_ large
+ enough - don't want to bloat benchmark runtime.
+
+"""
+
+from iris import AttributeConstraint, Constraint, load, load_cube
+from iris.cube import Cube
+from iris.fileformats.um import structured_um_loading
+
+from ..generate_data import BENCHMARK_DATA, REUSE_DATA, run_function_elsewhere
+from ..generate_data.um_files import create_um_files
+
+
+class LoadAndRealise:
+ # For data generation
+ timeout = 600.0
+ params = [
+ [(2, 2, 2), (1280, 960, 5), (2, 2, 1000)],
+ [False, True],
+ ["FF", "PP", "NetCDF"],
+ ]
+ param_names = ["xyz", "compressed", "file_format"]
+
+ def setup_cache(self) -> dict:
+ file_type_args = self.params[2]
+ file_path_dict = {}
+ for xyz in self.params[0]:
+ file_path_dict[xyz] = {}
+ x, y, z = xyz
+ for compress in self.params[1]:
+ file_path_dict[xyz][compress] = create_um_files(
+ x, y, z, 1, compress, file_type_args
+ )
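+        # Resulting structure, e.g. :
+        #     file_path_dict[(2, 2, 2)][False]["PP"] -> path of the PP file.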
+ return file_path_dict
+
+ def setup(
+ self,
+ file_path_dict: dict,
+ xyz: tuple,
+ compress: bool,
+ file_format: str,
+ ) -> None:
+ self.file_path = file_path_dict[xyz][compress][file_format]
+ self.cube = self.load()
+
+ def load(self) -> Cube:
+ return load_cube(self.file_path)
+
+ def time_load(self, _, __, ___, ____) -> None:
+ _ = self.load()
+
+ def time_realise(self, _, __, ___, ____) -> None:
+ # Don't touch cube.data - permanent realisation plays badly with ASV's
+ # re-run strategy.
+ assert self.cube.has_lazy_data()
+ self.cube.core_data().compute()
+
+
+class STASHConstraint:
+ # xyz sizes mimic LoadAndRealise to maximise file re-use.
+ params = [[(2, 2, 2), (1280, 960, 5), (2, 2, 1000)], ["FF", "PP"]]
+ param_names = ["xyz", "file_format"]
+
+ def setup_cache(self) -> dict:
+ file_type_args = self.params[1]
+ file_path_dict = {}
+ for xyz in self.params[0]:
+ x, y, z = xyz
+ file_path_dict[xyz] = create_um_files(
+ x, y, z, 1, False, file_type_args
+ )
+ return file_path_dict
+
+ def setup(
+ self, file_path_dict: dict, xyz: tuple, file_format: str
+ ) -> None:
+ self.file_path = file_path_dict[xyz][file_format]
+
+ def time_stash_constraint(self, _, __, ___) -> None:
+ _ = load_cube(self.file_path, AttributeConstraint(STASH="m??s??i901"))
+
+
+class TimeConstraint:
+ params = [[3, 20], ["FF", "PP", "NetCDF"]]
+ param_names = ["time_dim_len", "file_format"]
+
+ def setup_cache(self) -> dict:
+ file_type_args = self.params[1]
+ file_path_dict = {}
+ for time_dim_len in self.params[0]:
+ file_path_dict[time_dim_len] = create_um_files(
+ 20, 20, 5, time_dim_len, False, file_type_args
+ )
+ return file_path_dict
+
+ def setup(
+ self, file_path_dict: dict, time_dim_len: int, file_format: str
+ ) -> None:
+ self.file_path = file_path_dict[time_dim_len][file_format]
+ self.time_constr = Constraint(time=lambda cell: cell.point.year < 3)
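+        # The synthetic files assign year == time step (1-based) - see
+        # generate_data.um_files - so this selects the first two time steps.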
+
+ def time_time_constraint(self, _, __, ___) -> None:
+ _ = load_cube(self.file_path, self.time_constr)
+
+
+class ManyVars:
+ FILE_PATH = BENCHMARK_DATA / "many_var_file.nc"
+
+ @staticmethod
+ def _create_file(save_path: str) -> None:
+ """Is run externally - everything must be self-contained."""
+ import numpy as np
+
+ from iris import save
+ from iris.coords import AuxCoord
+ from iris.cube import Cube
+
+ data_len = 8
+ data = np.arange(data_len)
+ cube = Cube(data, units="unknown")
+ extra_vars = 80
+ names = ["coord_" + str(i) for i in range(extra_vars)]
+ for name in names:
+ coord = AuxCoord(data, long_name=name, units="unknown")
+ cube.add_aux_coord(coord, 0)
+ save(cube, save_path)
+
+ def setup_cache(self) -> None:
+ if not REUSE_DATA or not self.FILE_PATH.is_file():
+ # See :mod:`benchmarks.generate_data` docstring for full explanation.
+ _ = run_function_elsewhere(
+ self._create_file,
+ str(self.FILE_PATH),
+ )
+
+ def time_many_var_load(self) -> None:
+ _ = load(str(self.FILE_PATH))
+
+
+class StructuredFF:
+ """
+ Test structured loading of a large-ish fieldsfile.
+
+ Structured load of the larger size should show benefit over standard load,
+ avoiding the cost of merging.
+ """
+
+ params = [[(2, 2, 2), (1280, 960, 5), (2, 2, 1000)], [False, True]]
+ param_names = ["xyz", "structured_loading"]
+
+ def setup_cache(self) -> dict:
+ file_path_dict = {}
+ for xyz in self.params[0]:
+ x, y, z = xyz
+ file_path_dict[xyz] = create_um_files(x, y, z, 1, False, ["FF"])
+ return file_path_dict
+
+ def setup(self, file_path_dict, xyz, structured_load):
+ self.file_path = file_path_dict[xyz]["FF"]
+ self.structured_load = structured_load
+
+ def load(self):
+ """Load the whole file (in fact there is only 1 cube)."""
+
+ def _load():
+ _ = load(self.file_path)
+
+ if self.structured_load:
+ with structured_um_loading():
+ _load()
+ else:
+ _load()
+
+ def time_structured_load(self, _, __, ___):
+ self.load()
diff --git a/benchmarks/benchmarks/load/ugrid.py b/benchmarks/benchmarks/load/ugrid.py
new file mode 100644
index 0000000000..350a78e128
--- /dev/null
+++ b/benchmarks/benchmarks/load/ugrid.py
@@ -0,0 +1,130 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+"""
+Mesh data loading benchmark tests.
+
+Where possible benchmarks should be parameterised for two sizes of input data:
+ * minimal: enables detection of regressions in parts of the run-time that do
+ NOT scale with data size.
+ * large: large enough to exclusively detect regressions in parts of the
+ run-time that scale with data size.
+
+"""
+
+from iris import load_cube as iris_load_cube
+from iris.experimental.ugrid import PARSE_UGRID_ON_LOAD
+from iris.experimental.ugrid import load_mesh as iris_load_mesh
+
+from ..generate_data.stock import create_file__xios_2d_face_half_levels
+
+
+def synthetic_data(**kwargs):
+ # Ensure all uses of the synthetic data function use the common directory.
+ # File location is controlled by :mod:`generate_data`, hence temp_file_dir=None.
+ return create_file__xios_2d_face_half_levels(temp_file_dir=None, **kwargs)
+
+
+def load_cube(*args, **kwargs):
+ with PARSE_UGRID_ON_LOAD.context():
+ return iris_load_cube(*args, **kwargs)
+
+
+def load_mesh(*args, **kwargs):
+ with PARSE_UGRID_ON_LOAD.context():
+ return iris_load_mesh(*args, **kwargs)
+
+
+class BasicLoading:
+ params = [1, int(2e5)]
+ param_names = ["number of faces"]
+
+ def setup_common(self, **kwargs):
+ self.data_path = synthetic_data(**kwargs)
+
+ def setup(self, *args):
+ self.setup_common(dataset_name="Loading", n_faces=args[0])
+
+ def time_load_file(self, *args):
+ _ = load_cube(str(self.data_path))
+
+ def time_load_mesh(self, *args):
+ _ = load_mesh(str(self.data_path))
+
+
+class BasicLoadingTime(BasicLoading):
+ """Same as BasicLoading, but scaling over a time series - an unlimited dimension."""
+
+ # NOTE iris#4834 - careful how big the time dimension is (time dimension
+ # is UNLIMITED).
+
+ param_names = ["number of time steps"]
+
+ def setup(self, *args):
+ self.setup_common(dataset_name="Loading", n_faces=1, n_times=args[0])
+
+
+class DataRealisation:
+ # Prevent repeat runs between setup() runs - data won't be lazy after 1st.
+ number = 1
+ # Compensate for reduced certainty by increasing number of repeats.
+ repeat = (10, 10, 10.0)
+ # Prevent ASV running its warmup, which ignores `number` and would
+ # therefore get a false idea of typical run time since the data would stop
+ # being lazy.
+ warmup_time = 0.0
+ timeout = 300.0
+
+ params = [1, int(2e5)]
+ param_names = ["number of faces"]
+
+ def setup_common(self, **kwargs):
+ data_path = synthetic_data(**kwargs)
+ self.cube = load_cube(str(data_path))
+
+ def setup(self, *args):
+ self.setup_common(dataset_name="Realisation", n_faces=args[0])
+
+ def time_realise_data(self, *args):
+ assert self.cube.has_lazy_data()
+ _ = self.cube.data[0]
+
+
+class DataRealisationTime(DataRealisation):
+ """Same as DataRealisation, but scaling over a time series - an unlimited dimension."""
+
+ param_names = ["number of time steps"]
+
+ def setup(self, *args):
+ self.setup_common(
+ dataset_name="Realisation", n_faces=1, n_times=args[0]
+ )
+
+
+class Callback:
+ params = [1, int(2e5)]
+ param_names = ["number of faces"]
+
+ def setup_common(self, **kwargs):
+ def callback(cube, field, filename):
+ return cube[::2]
+
+ self.data_path = synthetic_data(**kwargs)
+ self.callback = callback
+
+ def setup(self, *args):
+ self.setup_common(dataset_name="Loading", n_faces=args[0])
+
+ def time_load_file_callback(self, *args):
+ _ = load_cube(str(self.data_path), callback=self.callback)
+
+
+class CallbackTime(Callback):
+ """Same as Callback, but scaling over a time series - an unlimited dimension."""
+
+ param_names = ["number of time steps"]
+
+ def setup(self, *args):
+ self.setup_common(dataset_name="Loading", n_faces=1, n_times=args[0])
diff --git a/benchmarks/benchmarks/mixin.py b/benchmarks/benchmarks/mixin.py
index e78b150438..bec5518eee 100644
--- a/benchmarks/benchmarks/mixin.py
+++ b/benchmarks/benchmarks/mixin.py
@@ -10,10 +10,11 @@
import numpy as np
-from benchmarks import ARTIFICIAL_DIM_SIZE
from iris import coords
from iris.common.metadata import AncillaryVariableMetadata
+from . import ARTIFICIAL_DIM_SIZE
+
LONG_NAME = "air temperature"
STANDARD_NAME = "air_temperature"
VAR_NAME = "air_temp"
diff --git a/benchmarks/benchmarks/plot.py b/benchmarks/benchmarks/plot.py
index 45905abd2f..75195c86e9 100644
--- a/benchmarks/benchmarks/plot.py
+++ b/benchmarks/benchmarks/plot.py
@@ -10,9 +10,10 @@
import matplotlib
import numpy as np
-from benchmarks import ARTIFICIAL_DIM_SIZE
from iris import coords, cube, plot
+from . import ARTIFICIAL_DIM_SIZE
+
matplotlib.use("agg")
@@ -22,7 +23,7 @@ def setup(self):
# Should generate 10 distinct contours, regardless of dim size.
dim_size = int(ARTIFICIAL_DIM_SIZE / 5)
repeat_number = int(dim_size / 10)
- repeat_range = range(int((dim_size ** 2) / repeat_number))
+ repeat_range = range(int((dim_size**2) / repeat_number))
data = np.repeat(repeat_range, repeat_number)
data = data.reshape((dim_size,) * 2)
diff --git a/benchmarks/benchmarks/regridding.py b/benchmarks/benchmarks/regridding.py
index 6db33aa192..c315119c11 100644
--- a/benchmarks/benchmarks/regridding.py
+++ b/benchmarks/benchmarks/regridding.py
@@ -25,16 +25,31 @@ def setup(self) -> None:
)
self.cube = iris.load_cube(cube_file_path)
+ # Prepare a tougher cube and chunk it
+ chunked_cube_file_path = tests.get_data_path(
+ ["NetCDF", "regrid", "regrid_xyt.nc"]
+ )
+ self.chunked_cube = iris.load_cube(chunked_cube_file_path)
+
+        # Chunked data makes the regridder run repeatedly
+        self.chunked_cube.data = self.chunked_cube.lazy_data().rechunk(
+            (1, -1, -1)
+        )
+        self.cube.data = self.cube.lazy_data().rechunk((1, -1, -1))
+
template_file_path = tests.get_data_path(
["NetCDF", "regrid", "regrid_template_global_latlon.nc"]
)
self.template_cube = iris.load_cube(template_file_path)
- # Chunked data makes the regridder run repeatedly
- self.cube.data = self.cube.lazy_data().rechunk((1, -1, -1))
+ # Prepare a regridding scheme
+ self.scheme_area_w = AreaWeighted()
def time_regrid_area_w(self) -> None:
# Regrid the cube onto the template.
- out = self.cube.regrid(self.template_cube, AreaWeighted())
+ out = self.cube.regrid(self.template_cube, self.scheme_area_w)
# Realise the data
out.data
+
+ def time_regrid_area_w_new_grid(self) -> None:
+ # Regrid the chunked cube
+ out = self.chunked_cube.regrid(self.template_cube, self.scheme_area_w)
+ # Realise data
+ out.data
diff --git a/benchmarks/benchmarks/save.py b/benchmarks/benchmarks/save.py
new file mode 100644
index 0000000000..3551c72528
--- /dev/null
+++ b/benchmarks/benchmarks/save.py
@@ -0,0 +1,54 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+"""
+File saving benchmarks.
+
+Where possible benchmarks should be parameterised for two sizes of input data:
+ * minimal: enables detection of regressions in parts of the run-time that do
+ NOT scale with data size.
+ * large: large enough to exclusively detect regressions in parts of the
+ run-time that scale with data size.
+
+"""
+from iris import save
+from iris.experimental.ugrid import save_mesh
+
+from . import TrackAddedMemoryAllocation
+from .generate_data.ugrid import make_cube_like_2d_cubesphere
+
+
+class NetcdfSave:
+ params = [[1, 600], [False, True]]
+ param_names = ["cubesphere-N", "is_unstructured"]
+
+ def setup(self, n_cubesphere, is_unstructured):
+ self.cube = make_cube_like_2d_cubesphere(
+ n_cube=n_cubesphere, with_mesh=is_unstructured
+ )
+
+ def _save_data(self, cube, do_copy=True):
+ if do_copy:
+            # Copy the cube, to avoid distorting the results by changing it -
+            # because we know that older Iris code realises lazy coords.
+ cube = cube.copy()
+ save(cube, "tmp.nc")
+
+ def _save_mesh(self, cube):
+ # In this case, we are happy that the mesh is *not* modified
+ save_mesh(cube.mesh, "mesh.nc")
+
+ def time_netcdf_save_cube(self, n_cubesphere, is_unstructured):
+ self._save_data(self.cube)
+
+ def time_netcdf_save_mesh(self, n_cubesphere, is_unstructured):
+ if is_unstructured:
+ self._save_mesh(self.cube)
+
+ @TrackAddedMemoryAllocation.decorator
+ def track_addedmem_netcdf_save(self, n_cubesphere, is_unstructured):
+ # Don't need to copy the cube here since track_ benchmarks don't
+ # do repeats between self.setup() calls.
+ self._save_data(self.cube, do_copy=False)
diff --git a/benchmarks/benchmarks/sperf/__init__.py b/benchmarks/benchmarks/sperf/__init__.py
new file mode 100644
index 0000000000..eccad56f6f
--- /dev/null
+++ b/benchmarks/benchmarks/sperf/__init__.py
@@ -0,0 +1,43 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+"""
+Benchmarks for the SPerf scheme of the UK Met Office's NG-VAT project.
+
+SPerf = assessing performance against a series of increasingly large LFRic
+datasets.
+"""
+from iris import load_cube
+
+# TODO: remove uses of PARSE_UGRID_ON_LOAD once UGRID parsing is core behaviour.
+from iris.experimental.ugrid import PARSE_UGRID_ON_LOAD
+
+from ..generate_data.ugrid import make_cubesphere_testfile
+
+
+class FileMixin:
+ """For use in any benchmark classes that work on a file."""
+
+ # Allows time for large file generation.
+ timeout = 3600.0
+ # Largest file with these params: ~90GB.
+ # Total disk space: ~410GB.
+ params = [
+ [12, 384, 640, 960, 1280, 1668],
+ [1, 36, 72],
+ [1, 3, 10],
+ ]
+ param_names = ["cubesphere_C", "N levels", "N time steps"]
+    # cubesphere_C: 'C<N>' notation refers to an N x N face grid on each of
+    # the 6 cube panels, e.g. C1 is 6 faces, 8 nodes.
+
+ def setup(self, c_size, n_levels, n_times):
+ self.file_path = make_cubesphere_testfile(
+ c_size=c_size, n_levels=n_levels, n_times=n_times
+ )
+
+ def load_cube(self):
+ with PARSE_UGRID_ON_LOAD.context():
+ return load_cube(str(self.file_path))
diff --git a/benchmarks/benchmarks/sperf/combine_regions.py b/benchmarks/benchmarks/sperf/combine_regions.py
new file mode 100644
index 0000000000..d3d128c7d8
--- /dev/null
+++ b/benchmarks/benchmarks/sperf/combine_regions.py
@@ -0,0 +1,257 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+"""
+Region combine benchmarks for the SPerf scheme of the UK Met Office's NG-VAT project.
+"""
+import os.path
+
+from dask import array as da
+import numpy as np
+
+from iris import load, load_cube, save
+from iris.experimental.ugrid import PARSE_UGRID_ON_LOAD
+from iris.experimental.ugrid.utils import recombine_submeshes
+
+from .. import TrackAddedMemoryAllocation, on_demand_benchmark
+from ..generate_data.ugrid import BENCHMARK_DATA, make_cube_like_2d_cubesphere
+
+
+class Mixin:
+ # Characterise time taken + memory-allocated, for various stages of combine
+ # operations on cubesphere-like test data.
+ timeout = 300.0
+ params = [100, 200, 300, 500, 1000, 1668]
+ param_names = ["cubesphere_C"]
+ # Fix result units for the tracking benchmarks.
+ unit = "Mb"
+ temp_save_path = BENCHMARK_DATA / "tmp.nc"
+
+ def _parametrised_cache_filename(self, n_cubesphere, content_name):
+ return BENCHMARK_DATA / f"cube_C{n_cubesphere}_{content_name}.nc"
+
+ def _make_region_cubes(self, full_mesh_cube):
+ """Make a fixed number of region cubes from a full meshcube."""
+ # Divide the cube into regions.
+ n_faces = full_mesh_cube.shape[-1]
+ # Start with a simple list of face indices
+        # first round up to a multiple of 5
+        n_faces_5s = 5 * ((n_faces + 4) // 5)
+ i_faces = np.arange(n_faces_5s, dtype=int)
+ # reshape (5N,) to (N, 5)
+ i_faces = i_faces.reshape((n_faces_5s // 5, 5))
+ # reorder [2, 3, 4, 0, 1] within each block of 5
+ i_faces = np.concatenate([i_faces[:, 2:], i_faces[:, :2]], axis=1)
+ # flatten to get [2 3 4 0 1 (-) 8 9 10 6 7 (-) 13 14 15 11 12 ...]
+ i_faces = i_faces.flatten()
+        # reduce back to original length, wrap any overflows into valid range
+ i_faces = i_faces[:n_faces] % n_faces
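+        # (e.g. n_faces=10 gives i_faces = [2 3 4 0 1 7 8 9 5 6].)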
+
+        # Divide into regions -- always slightly uneven, since 7 does not
+        # divide the face count exactly
+ n_regions = 7
+ n_facesperregion = n_faces // n_regions
+ i_face_regions = (i_faces // n_facesperregion) % n_regions
+ region_inds = [
+ np.where(i_face_regions == i_region)[0]
+ for i_region in range(n_regions)
+ ]
+ # NOTE: this produces 7 regions, with near-adjacent value ranges but
+ # with some points "moved" to an adjacent region.
+ # Also, region-0 is bigger (because of not dividing by 7).
+
+ # Finally, make region cubes with these indices.
+ region_cubes = [full_mesh_cube[..., inds] for inds in region_inds]
+ return region_cubes
+
+ def setup_cache(self):
+ """Cache all the necessary source data on disk."""
+
+ # Control dask, to minimise memory usage + allow largest data.
+ self.fix_dask_settings()
+
+ for n_cubesphere in self.params:
+ # Do for each parameter, since "setup_cache" is NOT parametrised
+ mesh_cube = make_cube_like_2d_cubesphere(
+ n_cube=n_cubesphere, with_mesh=True
+ )
+ # Save to files which include the parameter in the names.
+ save(
+ mesh_cube,
+ self._parametrised_cache_filename(n_cubesphere, "meshcube"),
+ )
+ region_cubes = self._make_region_cubes(mesh_cube)
+ save(
+ region_cubes,
+ self._parametrised_cache_filename(n_cubesphere, "regioncubes"),
+ )
+
+ def setup(
+ self, n_cubesphere, imaginary_data=True, create_result_cube=True
+ ):
+ """
+ The combine-tests "standard" setup operation.
+
+ Load the source cubes (full-mesh + region) from disk.
+ These are specific to the cubesize parameter.
+ The data is cached on disk rather than calculated, to avoid any
+ pre-loading of the process memory allocation.
+
+        If 'imaginary_data' is set (default), the region cubes' data is
+        replaced with lazy data in the form of a da.zeros(). Otherwise, the
+        region data is lazy data from the files.
+
+        If 'create_result_cube' is set, create "self.recombined_cube"
+        containing the (still lazy) result.
+
+ NOTE: various test classes override + extend this.
+
+ """
+
+ # Load source cubes (full-mesh and regions)
+ with PARSE_UGRID_ON_LOAD.context():
+ self.full_mesh_cube = load_cube(
+ self._parametrised_cache_filename(n_cubesphere, "meshcube")
+ )
+ self.region_cubes = load(
+ self._parametrised_cache_filename(n_cubesphere, "regioncubes")
+ )
+
+ # Remove all var-names from loaded cubes, which can otherwise cause
+ # problems. Also implement 'imaginary' data.
+ for cube in self.region_cubes + [self.full_mesh_cube]:
+ cube.var_name = None
+ for coord in cube.coords():
+ coord.var_name = None
+ if imaginary_data:
+ # Replace cube data (lazy file data) with 'imaginary' data.
+ # This has the same lazy-array attributes, but is allocated by
+ # creating chunks on demand instead of loading from file.
+ data = cube.lazy_data()
+ data = da.zeros(
+ data.shape, dtype=data.dtype, chunks=data.chunksize
+ )
+ cube.data = data
+
+ if create_result_cube:
+ self.recombined_cube = self.recombine()
+
+ # Fix dask usage mode for all the subsequent performance tests.
+ self.fix_dask_settings()
+
+ def teardown(self, _):
+ self.temp_save_path.unlink(missing_ok=True)
+
+ def fix_dask_settings(self):
+ """
+ Fix "standard" dask behaviour for time+space testing.
+
+        Currently this is single-threaded mode, with a known chunksize,
+        optimised for space saving so that we can test the largest data.
+
+ """
+
+ import dask.config as dcfg
+
+ # Use single-threaded, to avoid process-switching costs and minimise memory usage.
+        # N.B. generally may be slower, but uses less memory.
+        dcfg.set(scheduler="single-threaded")
+        # Configure iris._lazy_data.as_lazy_data to aim for 128Mib chunks.
+ dcfg.set({"array.chunk-size": "128Mib"})
+
+ def recombine(self):
+ # A handy general shorthand for the main "combine" operation.
+ result = recombine_submeshes(
+ self.full_mesh_cube,
+ self.region_cubes,
+ index_coord_name="i_mesh_face",
+ )
+ return result
+
+ def save_recombined_cube(self):
+ save(self.recombined_cube, self.temp_save_path)
+
+
+@on_demand_benchmark
+class CreateCube(Mixin):
+ """
+ Time+memory costs of creating a combined-regions cube.
+
+ The result is lazy, and we don't do the actual calculation.
+
+ """
+
+ def setup(
+ self, n_cubesphere, imaginary_data=True, create_result_cube=False
+ ):
+ # In this case only, do *not* create the result cube.
+ # That is the operation we want to test.
+ super().setup(n_cubesphere, imaginary_data, create_result_cube)
+
+ def time_create_combined_cube(self, n_cubesphere):
+ self.recombine()
+
+ @TrackAddedMemoryAllocation.decorator
+ def track_addedmem_create_combined_cube(self, n_cubesphere):
+ self.recombine()
+
+
+@on_demand_benchmark
+class ComputeRealData(Mixin):
+ """
+ Time+memory costs of computing combined-regions data.
+ """
+
+ def time_compute_data(self, n_cubesphere):
+ _ = self.recombined_cube.data
+
+ @TrackAddedMemoryAllocation.decorator
+ def track_addedmem_compute_data(self, n_cubesphere):
+ _ = self.recombined_cube.data
+
+
+@on_demand_benchmark
+class SaveData(Mixin):
+ """
+ Test saving *only*, having replaced the input cube data with 'imaginary'
+ array data, so that input data is not loaded from disk during the save
+ operation.
+
+ """
+
+ def time_save(self, n_cubesphere):
+ # Save to disk, which must compute data + stream it to file.
+ self.save_recombined_cube()
+
+ @TrackAddedMemoryAllocation.decorator
+ def track_addedmem_save(self, n_cubesphere):
+ self.save_recombined_cube()
+
+ def track_filesize_saved(self, n_cubesphere):
+ self.save_recombined_cube()
+ return self.temp_save_path.stat().st_size * 1.0e-6
+
+
+@on_demand_benchmark
+class FileStreamedCalc(Mixin):
+ """
+ Test the whole cost of file-to-file streaming.
+ Uses the combined cube which is based on lazy data loading from the region
+ cubes on disk.
+ """
+
+ def setup(
+ self, n_cubesphere, imaginary_data=False, create_result_cube=True
+ ):
+ # In this case only, do *not* replace the loaded regions data with
+ # 'imaginary' data, as we want to test file-to-file calculation+save.
+ super().setup(n_cubesphere, imaginary_data, create_result_cube)
+
+ def time_stream_file2file(self, n_cubesphere):
+ # Save to disk, which must compute data + stream it to file.
+ self.save_recombined_cube()
+
+ @TrackAddedMemoryAllocation.decorator
+ def track_addedmem_stream_file2file(self, n_cubesphere):
+ self.save_recombined_cube()
diff --git a/benchmarks/benchmarks/sperf/equality.py b/benchmarks/benchmarks/sperf/equality.py
new file mode 100644
index 0000000000..85c73ab92b
--- /dev/null
+++ b/benchmarks/benchmarks/sperf/equality.py
@@ -0,0 +1,36 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+"""
+Equality benchmarks for the SPerf scheme of the UK Met Office's NG-VAT project.
+"""
+from . import FileMixin
+from .. import on_demand_benchmark
+
+
+@on_demand_benchmark
+class CubeEquality(FileMixin):
+ """
+ Benchmark time and memory costs of comparing :class:`~iris.cube.Cube`\\ s
+ with attached :class:`~iris.experimental.ugrid.mesh.Mesh`\\ es.
+
+ Uses :class:`FileMixin` as the realistic case will be comparing
+ :class:`~iris.cube.Cube`\\ s that have been loaded from file.
+
+ """
+
+    # Cut down parent parameters.
+    params = [FileMixin.params[0]]
+    param_names = [FileMixin.param_names[0]]
+
+ def setup(self, c_size, n_levels=1, n_times=1):
+ super().setup(c_size, n_levels, n_times)
+ self.cube = self.load_cube()
+ self.other_cube = self.load_cube()
+
+ def peakmem_eq(self, n_cube):
+ _ = self.cube == self.other_cube
+
+ def time_eq(self, n_cube):
+ _ = self.cube == self.other_cube
diff --git a/benchmarks/benchmarks/sperf/load.py b/benchmarks/benchmarks/sperf/load.py
new file mode 100644
index 0000000000..6a60355976
--- /dev/null
+++ b/benchmarks/benchmarks/sperf/load.py
@@ -0,0 +1,29 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+"""
+File loading benchmarks for the SPerf scheme of the UK Met Office's NG-VAT project.
+"""
+from . import FileMixin
+from .. import on_demand_benchmark
+
+
+@on_demand_benchmark
+class Load(FileMixin):
+ def time_load_cube(self, _, __, ___):
+ _ = self.load_cube()
+
+
+@on_demand_benchmark
+class Realise(FileMixin):
+ def setup(self, c_size, n_levels, n_times):
+ super().setup(c_size, n_levels, n_times)
+ self.loaded_cube = self.load_cube()
+
+ def time_realise_cube(self, _, __, ___):
+ # Don't touch loaded_cube.data - permanent realisation plays badly with
+ # ASV's re-run strategy.
+ assert self.loaded_cube.has_lazy_data()
+ self.loaded_cube.core_data().compute()
diff --git a/benchmarks/benchmarks/sperf/save.py b/benchmarks/benchmarks/sperf/save.py
new file mode 100644
index 0000000000..dd33924c6c
--- /dev/null
+++ b/benchmarks/benchmarks/sperf/save.py
@@ -0,0 +1,56 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+"""
+File saving benchmarks for the SPerf scheme of the UK Met Office's NG-VAT project.
+"""
+import os.path
+
+from iris import save
+from iris.experimental.ugrid import save_mesh
+
+from .. import TrackAddedMemoryAllocation, on_demand_benchmark
+from ..generate_data.ugrid import make_cube_like_2d_cubesphere
+
+
+@on_demand_benchmark
+class NetcdfSave:
+ """
+    Benchmark time and memory costs of saving large-ish data cubes to netcdf.
+
+ """
+
+ params = [[1, 100, 200, 300, 500, 1000, 1668], [False, True]]
+ param_names = ["cubesphere_C", "is_unstructured"]
+ # Fix result units for the tracking benchmarks.
+ unit = "Mb"
+
+ def setup(self, n_cubesphere, is_unstructured):
+ self.cube = make_cube_like_2d_cubesphere(
+ n_cube=n_cubesphere, with_mesh=is_unstructured
+ )
+
+ def _save_cube(self, cube):
+ save(cube, "tmp.nc")
+
+ def _save_mesh(self, cube):
+ save_mesh(cube.mesh, "mesh.nc")
+
+ def time_save_cube(self, n_cubesphere, is_unstructured):
+ self._save_cube(self.cube)
+
+ @TrackAddedMemoryAllocation.decorator
+ def track_addedmem_save_cube(self, n_cubesphere, is_unstructured):
+ self._save_cube(self.cube)
+
+ def time_save_mesh(self, n_cubesphere, is_unstructured):
+ if is_unstructured:
+ self._save_mesh(self.cube)
+
+ # The filesizes make a good reference point for the 'addedmem' memory
+ # usage results.
+ def track_filesize_save_cube(self, n_cubesphere, is_unstructured):
+ self._save_cube(self.cube)
+ return os.path.getsize("tmp.nc") * 1.0e-6
diff --git a/benchmarks/benchmarks/trajectory.py b/benchmarks/benchmarks/trajectory.py
new file mode 100644
index 0000000000..5c1d10d218
--- /dev/null
+++ b/benchmarks/benchmarks/trajectory.py
@@ -0,0 +1,48 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+"""
+Trajectory benchmark test
+
+"""
+
+# import iris tests first so that some things can be initialised before
+# importing anything else
+from iris import tests # isort:skip
+
+import numpy as np
+
+import iris
+from iris.analysis.trajectory import interpolate
+
+
+class TrajectoryInterpolation:
+ def setup(self) -> None:
+ # Prepare a cube and a template
+
+ cube_file_path = tests.get_data_path(
+ ["NetCDF", "regrid", "regrid_xyt.nc"]
+ )
+ self.cube = iris.load_cube(cube_file_path)
+
+ trajectory = np.array(
+ [np.array((-50 + i, -50 + i)) for i in range(100)]
+ )
+ self.sample_points = [
+ ("longitude", trajectory[:, 0]),
+ ("latitude", trajectory[:, 1]),
+ ]
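+        # (A 100-point diagonal trajectory from (-50, -50) to (49, 49) in
+        # longitude/latitude.)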
+
+ def time_trajectory_linear(self) -> None:
+        # Interpolate the cube along the trajectory.
+ out_cube = interpolate(self.cube, self.sample_points, method="linear")
+ # Realise the data
+ out_cube.data
+
+ def time_trajectory_nearest(self) -> None:
+        # Interpolate the cube along the trajectory.
+ out_cube = interpolate(self.cube, self.sample_points, method="nearest")
+ # Realise the data
+ out_cube.data
diff --git a/benchmarks/nox_asv_plugin.py b/benchmarks/nox_asv_plugin.py
deleted file mode 100644
index 6c9ce14272..0000000000
--- a/benchmarks/nox_asv_plugin.py
+++ /dev/null
@@ -1,249 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-"""
-ASV plug-in providing an alternative ``Environment`` subclass, which uses Nox
-for environment management.
-
-"""
-from importlib.util import find_spec
-from pathlib import Path
-from shutil import copy2, copytree
-from tempfile import TemporaryDirectory
-
-from asv import util as asv_util
-from asv.config import Config
-from asv.console import log
-from asv.environment import get_env_name
-from asv.plugins.conda import Conda, _find_conda
-from asv.repo import Repo, get_repo
-
-
-class NoxConda(Conda):
- """
- Manage a Conda environment using Nox, updating environment at each commit.
-
- Defers environment management to the project's noxfile, which must be able
- to create/update the benchmarking environment using ``nox --install-only``,
- with the ``--session`` specified in ``asv.conf.json.nox_session_name``.
-
- Notes
- -----
- If not all benchmarked commits support this use of Nox: the plugin will
- need to be modified to prep the environment in other ways.
-
- """
-
- tool_name = "nox-conda"
-
- @classmethod
- def matches(cls, python: str) -> bool:
- """Used by ASV to work out if this type of environment can be used."""
- result = find_spec("nox") is not None
- if result:
- result = super().matches(python)
-
- if result:
- message = (
- f"NOTE: ASV env match check incomplete. Not possible to know "
- f"if selected Nox session (asv.conf.json.nox_session_name) is "
- f"compatible with ``--python={python}`` until project is "
- f"checked out."
- )
- log.warning(message)
-
- return result
-
- def __init__(self, conf: Config, python: str, requirements: dict) -> None:
- """
- Parameters
- ----------
- conf: Config instance
-
- python : str
- Version of Python. Must be of the form "MAJOR.MINOR".
-
- requirements : dict
- Dictionary mapping a PyPI package name to a version
- identifier string.
-
- """
- from nox.sessions import _normalize_path
-
- # Need to checkout the project BEFORE the benchmark run - to access a noxfile.
- self.project_temp_checkout = TemporaryDirectory(
- prefix="nox_asv_checkout_"
- )
- repo = get_repo(conf)
- repo.checkout(self.project_temp_checkout.name, conf.nox_setup_commit)
- self.noxfile_rel_path = conf.noxfile_rel_path
- self.setup_noxfile = (
- Path(self.project_temp_checkout.name) / self.noxfile_rel_path
- )
- self.nox_session_name = conf.nox_session_name
-
- # Some duplication of parent code - need these attributes BEFORE
- # running inherited code.
- self._python = python
- self._requirements = requirements
- self._env_dir = conf.env_dir
-
- # Prepare the actual environment path, to override self._path.
- nox_envdir = str(Path(self._env_dir).absolute() / self.hashname)
- nox_friendly_name = self._get_nox_session_name(python)
- self._nox_path = Path(_normalize_path(nox_envdir, nox_friendly_name))
-
- # For storing any extra conda requirements from asv.conf.json.
- self._extra_reqs_path = self._nox_path / "asv-extra-reqs.yaml"
-
- super().__init__(conf, python, requirements)
-
- @property
- def _path(self) -> str:
- """
- Using a property to override getting and setting in parent classes -
- unable to modify parent classes as this is a plugin.
-
- """
- return str(self._nox_path)
-
- @_path.setter
- def _path(self, value) -> None:
- """Enforce overriding of this variable by disabling modification."""
- pass
-
- @property
- def name(self) -> str:
- """Overridden to prevent inclusion of user input requirements."""
- return get_env_name(self.tool_name, self._python, {})
-
- def _get_nox_session_name(self, python: str) -> str:
- nox_cmd_substring = (
- f"--noxfile={self.setup_noxfile} "
- f"--session={self.nox_session_name} "
- f"--python={python}"
- )
-
- list_output = asv_util.check_output(
- ["nox", "--list", *nox_cmd_substring.split(" ")],
- display_error=False,
- dots=False,
- )
- list_output = list_output.split("\n")
- list_matches = list(filter(lambda s: s.startswith("*"), list_output))
- matches_count = len(list_matches)
-
- if matches_count == 0:
- message = f"No Nox sessions found for: {nox_cmd_substring} ."
- log.error(message)
- raise RuntimeError(message)
- elif matches_count > 1:
- message = (
- f"Ambiguous - >1 Nox session found for: {nox_cmd_substring} ."
- )
- log.error(message)
- raise RuntimeError(message)
- else:
- line = list_matches[0]
- session_name = line.split(" ")[1]
- assert isinstance(session_name, str)
- return session_name
-
- def _nox_prep_env(self, setup: bool = False) -> None:
- message = f"Running Nox environment update for: {self.name}"
- log.info(message)
-
- build_root_path = Path(self._build_root)
- env_path = Path(self._path)
-
- def copy_asv_files(src_parent: Path, dst_parent: Path) -> None:
- """For copying between self._path and a temporary cache."""
- asv_files = list(src_parent.glob("asv*"))
- # build_root_path.name usually == "project" .
- asv_files += [src_parent / build_root_path.name]
- for src_path in asv_files:
- dst_path = dst_parent / src_path.name
- if not dst_path.exists():
- # Only cache-ing in case Nox has rebuilt the env @
- # self._path. If the dst_path already exists: rebuilding
- # hasn't happened. Also a non-issue when copying in the
- # reverse direction because the cache dir is temporary.
- if src_path.is_dir():
- func = copytree
- else:
- func = copy2
- func(src_path, dst_path)
-
- with TemporaryDirectory(prefix="nox_asv_cache_") as asv_cache:
- asv_cache_path = Path(asv_cache)
- if setup:
- noxfile = self.setup_noxfile
- else:
- # Cache all of ASV's files as Nox may remove and re-build the environment.
- copy_asv_files(env_path, asv_cache_path)
- # Get location of noxfile in cache.
- noxfile_original = (
- build_root_path / self._repo_subdir / self.noxfile_rel_path
- )
- noxfile_subpath = noxfile_original.relative_to(
- build_root_path.parent
- )
- noxfile = asv_cache_path / noxfile_subpath
-
- nox_cmd = [
- "nox",
- f"--noxfile={noxfile}",
- # Place the env in the ASV env directory, instead of the default.
- f"--envdir={env_path.parent}",
- f"--session={self.nox_session_name}",
- f"--python={self._python}",
- "--install-only",
- "--no-error-on-external-run",
- "--verbose",
- ]
-
- _ = asv_util.check_output(nox_cmd)
- if not env_path.is_dir():
- message = f"Expected Nox environment not found: {env_path}"
- log.error(message)
- raise RuntimeError(message)
-
- if not setup:
- # Restore ASV's files from the cache (if necessary).
- copy_asv_files(asv_cache_path, env_path)
-
- def _setup(self) -> None:
- """Used for initial environment creation - mimics parent method where possible."""
- try:
- self.conda = _find_conda()
- except IOError as e:
- raise asv_util.UserError(str(e))
- if find_spec("nox") is None:
- raise asv_util.UserError("Module not found: nox")
-
- message = f"Creating Nox-Conda environment for {self.name} ."
- log.info(message)
-
- try:
- self._nox_prep_env(setup=True)
- finally:
- # No longer need the setup checkout now that the environment has been built.
- self.project_temp_checkout.cleanup()
-
- conda_args, pip_args = self._get_requirements(self.conda)
- if conda_args or pip_args:
- message = (
- "Ignoring user input package requirements. Benchmark "
- "environment management is exclusively performed by Nox."
- )
- log.warning(message)
-
- def checkout_project(self, repo: Repo, commit_hash: str) -> None:
- """Check out the working tree of the project at given commit hash."""
- super().checkout_project(repo, commit_hash)
- self._nox_prep_env()
- log.info(
- f"Environment {self.name} updated to spec at {commit_hash[:8]}"
- )
diff --git a/docs/Makefile b/docs/Makefile
index 44c89206d2..f4c8d0b7f4 100644
--- a/docs/Makefile
+++ b/docs/Makefile
@@ -55,8 +55,3 @@ linkcheck:
echo "Running linkcheck in $$i..."; \
(cd $$i; $(MAKE) $(MFLAGS) $(MYMAKEFLAGS) linkcheck); done
-gallerytest:
- @echo
- @echo "Running \"gallery\" tests..."
- @echo
- python -m unittest discover -v -t .
diff --git a/docs/gallery_code/README.rst b/docs/gallery_code/README.rst
index 720fd1e6f6..85bf0552b4 100644
--- a/docs/gallery_code/README.rst
+++ b/docs/gallery_code/README.rst
@@ -1,3 +1,5 @@
+.. _gallery_index:
+
Gallery
=======
diff --git a/docs/gallery_code/general/README.rst b/docs/gallery_code/general/README.rst
index c846755f1e..3a48e7cd8e 100644
--- a/docs/gallery_code/general/README.rst
+++ b/docs/gallery_code/general/README.rst
@@ -1,2 +1,3 @@
General
-------
+
diff --git a/docs/gallery_code/general/plot_custom_file_loading.py b/docs/gallery_code/general/plot_custom_file_loading.py
index 025f395789..4b817aea66 100644
--- a/docs/gallery_code/general/plot_custom_file_loading.py
+++ b/docs/gallery_code/general/plot_custom_file_loading.py
@@ -57,7 +57,7 @@
import datetime
-from cf_units import CALENDAR_GREGORIAN, Unit
+from cf_units import CALENDAR_STANDARD, Unit
import matplotlib.pyplot as plt
import numpy as np
@@ -225,7 +225,7 @@ def NAME_to_cube(filenames, callback):
# define the time unit and use it to serialise the datetime for the
# time coordinate
- time_unit = Unit("hours since epoch", calendar=CALENDAR_GREGORIAN)
+ time_unit = Unit("hours since epoch", calendar=CALENDAR_STANDARD)
time_coord = icoords.AuxCoord(
time_unit.date2num(field_headings["time"]),
standard_name="time",
diff --git a/docs/gallery_code/general/plot_zonal_means.py b/docs/gallery_code/general/plot_zonal_means.py
new file mode 100644
index 0000000000..08a9578e63
--- /dev/null
+++ b/docs/gallery_code/general/plot_zonal_means.py
@@ -0,0 +1,89 @@
+"""
+Zonal Mean Diagram of Air Temperature
+=====================================
+This example demonstrates aligning a linear plot and a cartographic plot using Matplotlib.
+"""
+
+import cartopy.crs as ccrs
+import matplotlib.pyplot as plt
+from mpl_toolkits.axes_grid1 import make_axes_locatable
+import numpy as np
+
+import iris
+from iris.analysis import MEAN
+import iris.plot as iplt
+import iris.quickplot as qplt
+
+
+def main():
+
+ # Load air_temp.pp and "collapse" longitude into a single mean value.
+ fname = iris.sample_data_path("air_temp.pp")
+ temperature = iris.load_cube(fname)
+ collapsed_temp = temperature.collapsed("longitude", MEAN)
+
+ # Set the y-axis limits to -90 and 90, with a tick every 15 degrees.
+ start, stop, step = -90, 90, 15
+ yticks = np.arange(start, stop + step, step)
+ ylim = [start, stop]
+
+ # Plot "temperature" on a cartographic plot and set the ticks and titles
+ # on the axes.
+ fig = plt.figure(figsize=[12, 4])
+
+ ax1 = fig.add_subplot(111, projection=ccrs.PlateCarree())
+ im = iplt.contourf(temperature, cmap="RdYlBu_r")
+ ax1.coastlines()
+ ax1.gridlines()
+ ax1.set_xticks([-180, -90, 0, 90, 180])
+ ax1.set_yticks(yticks)
+ ax1.set_title("Air Temperature")
+ ax1.set_ylabel(f"Latitude / {temperature.coord('latitude').units}")
+ ax1.set_xlabel(f"Longitude / {temperature.coord('longitude').units}")
+ ax1.set_ylim(*ylim)
+
+ # Create a Matplotlib AxesDivider object to allow alignment of other
+ # Axes objects.
+ divider = make_axes_locatable(ax1)
+
+ # Give the air temperature colour bar its size, position and label.
+ ax2 = divider.new_vertical(
+ size="5%", pad=0.5, axes_class=plt.Axes, pack_start=True
+ ) # creates 2nd axis
+ fig.add_axes(ax2)
+ cbar = plt.colorbar(
+ im, cax=ax2, orientation="horizontal"
+ ) # puts colour bar on second axis
+ cbar.ax.set_xlabel(f"{temperature.units}") # labels colour bar
+
+ # Plot "collapsed_temp" on the mean graph and set the ticks and titles
+ # on the axes.
+ ax3 = divider.new_horizontal(
+ size="30%", pad=0.4, axes_class=plt.Axes
+ ) # create 3rd axis
+ fig.add_axes(ax3)
+ qplt.plot(
+ collapsed_temp, collapsed_temp.coord("latitude")
+ ) # plots temperature collapsed over longitude against latitude
+ ax3.axhline(0, color="k", linewidth=0.5)
+
+ # Add the zonal mean plot's details: title, right-side ticks/labels, grid.
+ ax3.set_title("Zonal Mean")
+ ax3.yaxis.set_label_position("right")
+ ax3.yaxis.tick_right()
+ ax3.set_yticks(yticks)
+ ax3.grid()
+
+ # Round the x-axis limits of the third axes outward to the nearest
+ # multiple of 20.
+ data_max = collapsed_temp.data.max()
+ x_max = data_max - data_max % -20
+ data_min = collapsed_temp.data.min()
+ x_min = data_min - data_min % 20
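+ # Python's modulo takes the sign of the divisor, so "% -20" rounds up
+ # and "% 20" rounds down: e.g. 27.3 -> 40.0 and -23.5 -> -40.0.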
+ ax3.set_xlim(x_min, x_max)
+ ax3.set_ylim(*ylim)
+
+ plt.show()
+
+
+if __name__ == "__main__":
+ main()
diff --git a/docs/gallery_code/meteorology/plot_wind_barbs.py b/docs/gallery_code/meteorology/plot_wind_barbs.py
index c3c056eb4a..b09040c64e 100644
--- a/docs/gallery_code/meteorology/plot_wind_barbs.py
+++ b/docs/gallery_code/meteorology/plot_wind_barbs.py
@@ -30,7 +30,7 @@ def main():
# To illustrate the full range of barbs, scale the wind speed up to pretend
# that a storm is passing over
- magnitude = (uwind ** 2 + vwind ** 2) ** 0.5
+ magnitude = (uwind**2 + vwind**2) ** 0.5
magnitude.convert_units("knot")
max_speed = magnitude.collapsed(
("latitude", "longitude"), iris.analysis.MAX
@@ -41,7 +41,7 @@ def main():
vwind = vwind / max_speed * max_desired
# Create a cube containing the wind speed
- windspeed = (uwind ** 2 + vwind ** 2) ** 0.5
+ windspeed = (uwind**2 + vwind**2) ** 0.5
windspeed.rename("windspeed")
windspeed.convert_units("knot")
diff --git a/docs/gallery_code/meteorology/plot_wind_speed.py b/docs/gallery_code/meteorology/plot_wind_speed.py
index fd03f54205..40d9d0da00 100644
--- a/docs/gallery_code/meteorology/plot_wind_speed.py
+++ b/docs/gallery_code/meteorology/plot_wind_speed.py
@@ -27,7 +27,7 @@ def main():
vwind = iris.load_cube(infile, "y_wind")
# Create a cube containing the wind speed.
- windspeed = (uwind ** 2 + vwind ** 2) ** 0.5
+ windspeed = (uwind**2 + vwind**2) ** 0.5
windspeed.rename("windspeed")
# Plot the wind speed as a contour plot.
diff --git a/docs/gallery_code/oceanography/plot_load_nemo.py b/docs/gallery_code/oceanography/plot_load_nemo.py
index 4bfee5ac8e..b19f37e1f5 100644
--- a/docs/gallery_code/oceanography/plot_load_nemo.py
+++ b/docs/gallery_code/oceanography/plot_load_nemo.py
@@ -13,7 +13,7 @@
import iris
import iris.plot as iplt
import iris.quickplot as qplt
-from iris.util import promote_aux_coord_to_dim_coord
+from iris.util import equalise_attributes, promote_aux_coord_to_dim_coord
def main():
@@ -21,16 +21,15 @@ def main():
fname = iris.sample_data_path("NEMO/nemo_1m_*.nc")
cubes = iris.load(fname)
- # Some attributes are unique to each file and must be blanked
- # to allow concatenation.
- differing_attrs = ["file_name", "name", "timeStamp", "TimeStamp"]
- for cube in cubes:
- for attribute in differing_attrs:
- cube.attributes[attribute] = ""
-
- # The cubes still cannot be concatenated because their time dimension is
- # time_counter rather than time. time needs to be promoted to allow
+ # Some attributes are unique to each file and must be removed to allow
# concatenation.
+ equalise_attributes(cubes)
+
+ # The cubes still cannot be concatenated because their dimension coordinate
+ # is "time_counter", which has the same value for each cube. concatenate
+ # needs distinct values in order to create a new DimCoord for the output
+ # cube. Here, each cube has a "time" auxiliary coordinate, and these do
+ # have distinct values, so we can promote them to allow concatenation.
for cube in cubes:
promote_aux_coord_to_dim_coord(cube, "time")
diff --git a/docs/gallery_tests/conftest.py b/docs/gallery_tests/conftest.py
new file mode 100644
index 0000000000..a218b305a2
--- /dev/null
+++ b/docs/gallery_tests/conftest.py
@@ -0,0 +1,67 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+
+"""Pytest fixtures for the gallery tests."""
+
+import pathlib
+
+import matplotlib.pyplot as plt
+import pytest
+
+import iris
+
+CURRENT_DIR = pathlib.Path(__file__).resolve()
+GALLERY_DIR = CURRENT_DIR.parents[1] / "gallery_code"
+
+
+@pytest.fixture
+def image_setup_teardown():
+ """
+ Setup and teardown fixture.
+
+ Ensures all figures are closed before and after test to prevent one test
+ polluting another if it fails with a figure unclosed.
+
+ """
+ plt.close("all")
+ yield
+ plt.close("all")
+
+
+@pytest.fixture
+def import_patches(monkeypatch):
+ """
+ Replace plt.show() with a function that does nothing, and add all the
+ gallery example directories to sys.path.
+
+ """
+
+ def no_show():
+ pass
+
+ monkeypatch.setattr(plt, "show", no_show)
+
+ for example_dir in GALLERY_DIR.iterdir():
+ if example_dir.is_dir():
+ monkeypatch.syspath_prepend(example_dir)
+
+ yield
+
+
+@pytest.fixture
+def iris_future_defaults():
+ """
+ Fixture that resets all the iris.FUTURE settings to the defaults, as
+ otherwise changes made in one test can affect subsequent ones.
+
+ """
+ # Run with all default settings in iris.FUTURE.
+ default_future_kwargs = iris.Future().__dict__.copy()
+ for dead_option in iris.Future.deprecated_options:
+ # Avoid a warning when setting these!
+ del default_future_kwargs[dead_option]
+ with iris.FUTURE.context(**default_future_kwargs):
+ yield
diff --git a/docs/gallery_tests/gallerytest_util.py b/docs/gallery_tests/gallerytest_util.py
deleted file mode 100644
index eb2736f194..0000000000
--- a/docs/gallery_tests/gallerytest_util.py
+++ /dev/null
@@ -1,86 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-"""
-Provides context managers which are fundamental to the ability
-to run the gallery tests.
-
-"""
-
-import contextlib
-import os.path
-import sys
-import warnings
-
-import matplotlib.pyplot as plt
-
-import iris
-from iris._deprecation import IrisDeprecation
-import iris.plot as iplt
-import iris.quickplot as qplt
-
-GALLERY_DIRECTORY = os.path.join(
- os.path.dirname(os.path.dirname(__file__)), "gallery_code"
-)
-GALLERY_DIRECTORIES = [
- os.path.join(GALLERY_DIRECTORY, the_dir)
- for the_dir in os.listdir(GALLERY_DIRECTORY)
-]
-
-
-@contextlib.contextmanager
-def add_gallery_to_path():
- """
- Creates a context manager which can be used to add the iris gallery
- to the PYTHONPATH. The gallery entries are only importable throughout the lifetime
- of this context manager.
-
- """
- orig_sys_path = sys.path
- sys.path = sys.path[:]
- sys.path += GALLERY_DIRECTORIES
- yield
- sys.path = orig_sys_path
-
-
-@contextlib.contextmanager
-def show_replaced_by_check_graphic(test_case):
- """
- Creates a context manager which can be used to replace the functionality
- of matplotlib.pyplot.show with a function which calls the check_graphic
- method on the given test_case (iris.tests.IrisTest.check_graphic).
-
- """
-
- def replacement_show():
- # form a closure on test_case and tolerance
- test_case.check_graphic()
-
- orig_show = plt.show
- plt.show = iplt.show = qplt.show = replacement_show
- yield
- plt.show = iplt.show = qplt.show = orig_show
-
-
-@contextlib.contextmanager
-def fail_any_deprecation_warnings():
- """
- Create a context in which any deprecation warning will cause an error.
-
- The context also resets all the iris.FUTURE settings to the defaults, as
- otherwise changes made in one test can affect subsequent ones.
-
- """
- with warnings.catch_warnings():
- # Detect and error all and any Iris deprecation warnings.
- warnings.simplefilter("error", IrisDeprecation)
- # Run with all default settings in iris.FUTURE.
- default_future_kwargs = iris.Future().__dict__.copy()
- for dead_option in iris.Future.deprecated_options:
- # Avoid a warning when setting these !
- del default_future_kwargs[dead_option]
- with iris.FUTURE.context(**default_future_kwargs):
- yield
diff --git a/docs/gallery_tests/test_gallery_examples.py b/docs/gallery_tests/test_gallery_examples.py
new file mode 100644
index 0000000000..0d0793a7da
--- /dev/null
+++ b/docs/gallery_tests/test_gallery_examples.py
@@ -0,0 +1,44 @@
+# Copyright Iris contributors
+#
+# This file is part of Iris and is released under the LGPL license.
+# See COPYING and COPYING.LESSER in the root of the repository for full
+# licensing details.
+
+import importlib
+
+import matplotlib.pyplot as plt
+import pytest
+
+from iris.tests import _RESULT_PATH
+from iris.tests.graphics import check_graphic
+
+from .conftest import GALLERY_DIR
+
+
+def gallery_examples():
+ """Generator to yield all current gallery examples."""
+
+ for example_file in GALLERY_DIR.glob("*/plot*.py"):
+ yield example_file.stem
+
+
+@pytest.mark.filterwarnings("error::iris.IrisDeprecation")
+@pytest.mark.parametrize("example", gallery_examples())
+def test_plot_example(
+ example,
+ image_setup_teardown,
+ import_patches,
+ iris_future_defaults,
+):
+ """Test that all figures from example code match KGO."""
+
+ module = importlib.import_module(example)
+
+ # Run example.
+ module.main()
+ # Loop through open figures and set each to be the current figure so check_graphic
+ # will find it.
+ for fig_num in plt.get_fignums():
+ plt.figure(fig_num)
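+ # Figure numbers from get_fignums() start at 1; the KGO image indices
+ # are zero-based, hence "fig_num - 1" below.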
+ image_id = f"gallery_tests.test_{example}.{fig_num - 1}"
+ check_graphic(image_id, _RESULT_PATH)
diff --git a/docs/gallery_tests/test_plot_COP_1d.py b/docs/gallery_tests/test_plot_COP_1d.py
deleted file mode 100644
index 9771e10fb1..0000000000
--- a/docs/gallery_tests/test_plot_COP_1d.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestCOP1DPlot(tests.GraphicsTest):
- """Test the COP_1d_plot gallery code."""
-
- def test_plot_COP_1d(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_COP_1d
- with show_replaced_by_check_graphic(self):
- plot_COP_1d.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_COP_maps.py b/docs/gallery_tests/test_plot_COP_maps.py
deleted file mode 100644
index a01e12527f..0000000000
--- a/docs/gallery_tests/test_plot_COP_maps.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestCOPMaps(tests.GraphicsTest):
- """Test the COP_maps gallery code."""
-
- def test_plot_cop_maps(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_COP_maps
- with show_replaced_by_check_graphic(self):
- plot_COP_maps.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_SOI_filtering.py b/docs/gallery_tests/test_plot_SOI_filtering.py
deleted file mode 100644
index 1da731122a..0000000000
--- a/docs/gallery_tests/test_plot_SOI_filtering.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestSOIFiltering(tests.GraphicsTest):
- """Test the SOI_filtering gallery code."""
-
- def test_plot_soi_filtering(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_SOI_filtering
- with show_replaced_by_check_graphic(self):
- plot_SOI_filtering.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_TEC.py b/docs/gallery_tests/test_plot_TEC.py
deleted file mode 100644
index cfc1fb8eec..0000000000
--- a/docs/gallery_tests/test_plot_TEC.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestTEC(tests.GraphicsTest):
- """Test the TEC gallery code."""
-
- def test_plot_TEC(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_TEC
- with show_replaced_by_check_graphic(self):
- plot_TEC.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_anomaly_log_colouring.py b/docs/gallery_tests/test_plot_anomaly_log_colouring.py
deleted file mode 100644
index 41f76cc774..0000000000
--- a/docs/gallery_tests/test_plot_anomaly_log_colouring.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestAnomalyLogColouring(tests.GraphicsTest):
- """Test the anomaly colouring gallery code."""
-
- def test_plot_anomaly_log_colouring(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_anomaly_log_colouring
- with show_replaced_by_check_graphic(self):
- plot_anomaly_log_colouring.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_atlantic_profiles.py b/docs/gallery_tests/test_plot_atlantic_profiles.py
deleted file mode 100644
index fdcb5fb1d1..0000000000
--- a/docs/gallery_tests/test_plot_atlantic_profiles.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestAtlanticProfiles(tests.GraphicsTest):
- """Test the atlantic_profiles gallery code."""
-
- def test_plot_atlantic_profiles(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_atlantic_profiles
- with show_replaced_by_check_graphic(self):
- plot_atlantic_profiles.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_coriolis.py b/docs/gallery_tests/test_plot_coriolis.py
deleted file mode 100644
index 2e4cea8a74..0000000000
--- a/docs/gallery_tests/test_plot_coriolis.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-
-import iris.tests as tests
-
-from . import gallerytest_util
-
-with gallerytest_util.add_gallery_to_path():
- import plot_coriolis
-
-
-class TestCoriolisPlot(tests.GraphicsTest):
- """Test the Coriolis Plot gallery code."""
-
- def test_plot_coriolis(self):
- with gallerytest_util.show_replaced_by_check_graphic(self):
- plot_coriolis.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_cross_section.py b/docs/gallery_tests/test_plot_cross_section.py
deleted file mode 100644
index b0878d10bc..0000000000
--- a/docs/gallery_tests/test_plot_cross_section.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestCrossSection(tests.GraphicsTest):
- """Test the cross_section gallery code."""
-
- def test_plot_cross_section(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_cross_section
- with show_replaced_by_check_graphic(self):
- plot_cross_section.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_custom_aggregation.py b/docs/gallery_tests/test_plot_custom_aggregation.py
deleted file mode 100644
index 9d0a40dd3c..0000000000
--- a/docs/gallery_tests/test_plot_custom_aggregation.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestCustomAggregation(tests.GraphicsTest):
- """Test the custom aggregation gallery code."""
-
- def test_plot_custom_aggregation(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_custom_aggregation
- with show_replaced_by_check_graphic(self):
- plot_custom_aggregation.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_custom_file_loading.py b/docs/gallery_tests/test_plot_custom_file_loading.py
deleted file mode 100644
index 4d0d603a22..0000000000
--- a/docs/gallery_tests/test_plot_custom_file_loading.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestCustomFileLoading(tests.GraphicsTest):
- """Test the custom_file_loading gallery code."""
-
- def test_plot_custom_file_loading(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_custom_file_loading
- with show_replaced_by_check_graphic(self):
- plot_custom_file_loading.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_deriving_phenomena.py b/docs/gallery_tests/test_plot_deriving_phenomena.py
deleted file mode 100644
index ef2f8cec87..0000000000
--- a/docs/gallery_tests/test_plot_deriving_phenomena.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestDerivingPhenomena(tests.GraphicsTest):
- """Test the deriving_phenomena gallery code."""
-
- def test_plot_deriving_phenomena(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_deriving_phenomena
- with show_replaced_by_check_graphic(self):
- plot_deriving_phenomena.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_global_map.py b/docs/gallery_tests/test_plot_global_map.py
deleted file mode 100644
index 16f769deae..0000000000
--- a/docs/gallery_tests/test_plot_global_map.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestGlobalMap(tests.GraphicsTest):
- """Test the global_map gallery code."""
-
- def test_plot_global_map(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_global_map
- with show_replaced_by_check_graphic(self):
- plot_global_map.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_hovmoller.py b/docs/gallery_tests/test_plot_hovmoller.py
deleted file mode 100644
index 29c0e72e05..0000000000
--- a/docs/gallery_tests/test_plot_hovmoller.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestGlobalMap(tests.GraphicsTest):
- """Test the hovmoller gallery code."""
-
- def test_plot_hovmoller(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_hovmoller
- with show_replaced_by_check_graphic(self):
- plot_hovmoller.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_inset.py b/docs/gallery_tests/test_plot_inset.py
deleted file mode 100644
index 739e0a3224..0000000000
--- a/docs/gallery_tests/test_plot_inset.py
+++ /dev/null
@@ -1,31 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestInsetPlot(tests.GraphicsTest):
- """Test the inset plot gallery code."""
-
- def test_plot_inset(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_inset
- with show_replaced_by_check_graphic(self):
- plot_inset.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_lagged_ensemble.py b/docs/gallery_tests/test_plot_lagged_ensemble.py
deleted file mode 100644
index f0a0201613..0000000000
--- a/docs/gallery_tests/test_plot_lagged_ensemble.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestLaggedEnsemble(tests.GraphicsTest):
- """Test the lagged ensemble gallery code."""
-
- def test_plot_lagged_ensemble(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_lagged_ensemble
- with show_replaced_by_check_graphic(self):
- plot_lagged_ensemble.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_lineplot_with_legend.py b/docs/gallery_tests/test_plot_lineplot_with_legend.py
deleted file mode 100644
index 5677667026..0000000000
--- a/docs/gallery_tests/test_plot_lineplot_with_legend.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestLineplotWithLegend(tests.GraphicsTest):
- """Test the lineplot_with_legend gallery code."""
-
- def test_plot_lineplot_with_legend(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_lineplot_with_legend
- with show_replaced_by_check_graphic(self):
- plot_lineplot_with_legend.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_load_nemo.py b/docs/gallery_tests/test_plot_load_nemo.py
deleted file mode 100644
index f250dc46b4..0000000000
--- a/docs/gallery_tests/test_plot_load_nemo.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestLoadNemo(tests.GraphicsTest):
- """Test the load_nemo gallery code."""
-
- def test_plot_load_nemo(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_load_nemo
- with show_replaced_by_check_graphic(self):
- plot_load_nemo.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_orca_projection.py b/docs/gallery_tests/test_plot_orca_projection.py
deleted file mode 100644
index c4058c996e..0000000000
--- a/docs/gallery_tests/test_plot_orca_projection.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestOrcaProjection(tests.GraphicsTest):
- """Test the orca projection gallery code."""
-
- def test_plot_orca_projection(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_orca_projection
- with show_replaced_by_check_graphic(self):
- plot_orca_projection.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_polar_stereo.py b/docs/gallery_tests/test_plot_polar_stereo.py
deleted file mode 100644
index 4d32ee5830..0000000000
--- a/docs/gallery_tests/test_plot_polar_stereo.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestPolarStereo(tests.GraphicsTest):
- """Test the polar_stereo gallery code."""
-
- def test_plot_polar_stereo(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_polar_stereo
- with show_replaced_by_check_graphic(self):
- plot_polar_stereo.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_polynomial_fit.py b/docs/gallery_tests/test_plot_polynomial_fit.py
deleted file mode 100644
index b522dcf43c..0000000000
--- a/docs/gallery_tests/test_plot_polynomial_fit.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestPolynomialFit(tests.GraphicsTest):
- """Test the polynomial_fit gallery code."""
-
- def test_plot_polynomial_fit(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_polynomial_fit
- with show_replaced_by_check_graphic(self):
- plot_polynomial_fit.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_projections_and_annotations.py b/docs/gallery_tests/test_plot_projections_and_annotations.py
deleted file mode 100644
index 1c24202251..0000000000
--- a/docs/gallery_tests/test_plot_projections_and_annotations.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestProjectionsAndAnnotations(tests.GraphicsTest):
- """Test the atlantic_profiles gallery code."""
-
- def test_plot_projections_and_annotations(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_projections_and_annotations
- with show_replaced_by_check_graphic(self):
- plot_projections_and_annotations.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_rotated_pole_mapping.py b/docs/gallery_tests/test_plot_rotated_pole_mapping.py
deleted file mode 100644
index cd9b04fc66..0000000000
--- a/docs/gallery_tests/test_plot_rotated_pole_mapping.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestRotatedPoleMapping(tests.GraphicsTest):
- """Test the rotated_pole_mapping gallery code."""
-
- def test_plot_rotated_pole_mapping(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_rotated_pole_mapping
- with show_replaced_by_check_graphic(self):
- plot_rotated_pole_mapping.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_wind_barbs.py b/docs/gallery_tests/test_plot_wind_barbs.py
deleted file mode 100644
index 6003860a5e..0000000000
--- a/docs/gallery_tests/test_plot_wind_barbs.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests # isort:skip
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestWindBarbs(tests.GraphicsTest):
- """Test the wind_barbs example code."""
-
- def test_wind_barbs(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_wind_barbs
- with show_replaced_by_check_graphic(self):
- plot_wind_barbs.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/gallery_tests/test_plot_wind_speed.py b/docs/gallery_tests/test_plot_wind_speed.py
deleted file mode 100644
index ebaf97adbe..0000000000
--- a/docs/gallery_tests/test_plot_wind_speed.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-# Import Iris tests first so that some things can be initialised before
-# importing anything else.
-import iris.tests as tests
-
-from .gallerytest_util import (
- add_gallery_to_path,
- fail_any_deprecation_warnings,
- show_replaced_by_check_graphic,
-)
-
-
-class TestWindSpeed(tests.GraphicsTest):
- """Test the wind_speed gallery code."""
-
- def test_plot_wind_speed(self):
- with fail_any_deprecation_warnings():
- with add_gallery_to_path():
- import plot_wind_speed
- with show_replaced_by_check_graphic(self):
- plot_wind_speed.main()
-
-
-if __name__ == "__main__":
- tests.main()
diff --git a/docs/src/_static/Iris7_1_trim_100.png b/docs/src/_static/Iris7_1_trim_100.png
deleted file mode 100644
index 2f6f80eff9..0000000000
Binary files a/docs/src/_static/Iris7_1_trim_100.png and /dev/null differ
diff --git a/docs/src/_static/Iris7_1_trim_full.png b/docs/src/_static/Iris7_1_trim_full.png
deleted file mode 100644
index c381aa3a89..0000000000
Binary files a/docs/src/_static/Iris7_1_trim_full.png and /dev/null differ
diff --git a/docs/src/_static/README.md b/docs/src/_static/README.md
new file mode 100644
index 0000000000..b9f2877a30
--- /dev/null
+++ b/docs/src/_static/README.md
@@ -0,0 +1,31 @@
+# Iris logos
+
+![Iris logo](iris-logo-title.svg)
+
+Code for generating the logos is at:
+[SciTools/marketing/iris/logo/generate_logo.py](https://github.com/SciTools/marketing/blob/master/iris/logo/generate_logo.py)
+
+See the docstring of the `generate_logo()` function for more information.
+
+## Why a scripted logo?
+
+SVG logos are ideal for source-controlled projects:
+
+* Low file size, with infinitely scaling quality
+* Universally recognised vector format, editable by many software packages
+* XML-style content = human-readable diff when changes are made
+
+But Iris' logo is difficult to reproduce/edit using an SVG editor alone:
+
+* Includes correctly projected, low resolution coastlines
+* Needs precise alignment of the 'visual centre' of the iris with the centres
+ of the Earth and the image
+
+An SVG image is simply XML, so it can be assembled automatically by a
+script, and that script can be engineered to address the above problems.
+
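+As an illustrative sketch only (the real code lives in the linked
+`generate_logo.py`; the shapes and values below are invented for this
+example), an SVG can be built with the Python standard library:
+
+```python
+from xml.etree import ElementTree as ET
+
+SVG_NS = "http://www.w3.org/2000/svg"
+ET.register_namespace("", SVG_NS)
+
+# Root element with a parameterised canvas size.
+size = 100
+svg = ET.Element(f"{{{SVG_NS}}}svg", width=str(size), height=str(size))
+
+# A centred circle stands in for the precisely-aligned artwork.
+ET.SubElement(
+    svg,
+    f"{{{SVG_NS}}}circle",
+    cx=str(size / 2),
+    cy=str(size / 2),
+    r=str(size / 3),
+    fill="#1f77b4",
+)
+
+# Serialise - the result is plain text, hence the human-readable diffs.
+ET.ElementTree(svg).write("logo-sketch.svg")
+```
+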
+Further advantages of using a script:
+
+* Parameterised text, making it easy to standardise the logo across all Iris
+ packages
+* Can generate an animated GIF/SVG of a rotating Earth
diff --git a/docs/src/_static/favicon.ico b/docs/src/_static/favicon.ico
deleted file mode 100644
index 0e5f0492b4..0000000000
Binary files a/docs/src/_static/favicon.ico and /dev/null differ
diff --git a/docs/src/_static/icon_api.svg b/docs/src/_static/icon_api.svg
new file mode 100644
index 0000000000..841b105973
--- /dev/null
+++ b/docs/src/_static/icon_api.svg
@@ -0,0 +1,144 @@
+[SVG markup omitted]
\ No newline at end of file
diff --git a/docs/src/_static/icon_development.svg b/docs/src/_static/icon_development.svg
new file mode 100644
index 0000000000..dbc342688c
--- /dev/null
+++ b/docs/src/_static/icon_development.svg
@@ -0,0 +1,63 @@
+[SVG markup omitted]
diff --git a/docs/src/_static/icon_instructions.svg b/docs/src/_static/icon_instructions.svg
new file mode 100644
index 0000000000..62b3fc3620
--- /dev/null
+++ b/docs/src/_static/icon_instructions.svg
@@ -0,0 +1,162 @@
+[SVG markup omitted]
\ No newline at end of file
diff --git a/docs/src/_static/icon_new_product.svg b/docs/src/_static/icon_new_product.svg
new file mode 100644
index 0000000000..f222e1e066
--- /dev/null
+++ b/docs/src/_static/icon_new_product.svg
@@ -0,0 +1,182 @@
+[SVG markup omitted]
diff --git a/docs/src/_static/icon_shuttle.svg b/docs/src/_static/icon_shuttle.svg
new file mode 100644
index 0000000000..46ba64d2e0
--- /dev/null
+++ b/docs/src/_static/icon_shuttle.svg
@@ -0,0 +1,71 @@
+[SVG markup omitted]
diff --git a/docs/src/_static/icon_support.png b/docs/src/_static/icon_support.png
new file mode 100644
index 0000000000..567cdb1b2f
Binary files /dev/null and b/docs/src/_static/icon_support.png differ
diff --git a/docs/src/_static/icon_thumb.png b/docs/src/_static/icon_thumb.png
new file mode 100644
index 0000000000..6a14875e22
Binary files /dev/null and b/docs/src/_static/icon_thumb.png differ
diff --git a/docs/src/_static/iris-logo-title.png b/docs/src/_static/iris-logo-title.png
deleted file mode 100644
index e517aa7784..0000000000
Binary files a/docs/src/_static/iris-logo-title.png and /dev/null differ
diff --git a/docs/src/_static/iris-logo-title.svg b/docs/src/_static/iris-logo-title.svg
index 60ba0a1118..5bc38bfbda 100644
--- a/docs/src/_static/iris-logo-title.svg
+++ b/docs/src/_static/iris-logo-title.svg
@@ -1,89 +1,107 @@
-[previous SVG markup omitted]
+[new SVG markup omitted]
\ No newline at end of file
diff --git a/docs/src/_static/iris-logo.svg b/docs/src/_static/iris-logo.svg
new file mode 100644
index 0000000000..6c4bdb0e5a
--- /dev/null
+++ b/docs/src/_static/iris-logo.svg
@@ -0,0 +1,104 @@
+ Logo for the SciTools Iris project - https://github.com/SciTools/iris/
+[SVG markup omitted]
\ No newline at end of file
diff --git a/docs/src/_static/theme_override.css b/docs/src/_static/theme_override.css
index c56b720f69..326c1d4d4a 100644
--- a/docs/src/_static/theme_override.css
+++ b/docs/src/_static/theme_override.css
@@ -1,33 +1,10 @@
/* import the standard theme css */
@import url("css/theme.css");
-/* now we can add custom any css */
-
-/* set the width of the logo */
-.wy-side-nav-search>a img.logo,
-.wy-side-nav-search .wy-dropdown>a img.logo {
- width: 12rem
-}
-
-/* color of the logo background in the top left corner */
-.wy-side-nav-search {
- background-color: lightgray;
-}
-
-/* color of the font for the version in the top left corner */
-.wy-side-nav-search>div.version {
- color: black;
- font-weight: bold;
-}
-
-/* Ensures tables do now have width scroll bars */
-table.docutils td {
- white-space: unset;
- word-wrap: break-word;
-}
+/* now we can add custom css... */
/* Used for very strong warning */
-#slim-red-box-message {
+#slim-red-box-banner {
background: #ff0000;
box-sizing: border-box;
color: #ffffff;
@@ -35,8 +12,17 @@ table.docutils td {
padding: 0.5em;
}
-#slim-red-box-message a {
+#slim-red-box-banner a {
color: #ffffff;
- font-weight: normal;
- text-decoration:underline;
+ font-weight: normal;
+ text-decoration: underline;
+}
+
+/* bullet point list with green ticks */
+ul.squarelist {
+ /* https://developer.mozilla.org/en-US/docs/Web/CSS/list-style-type */
+ list-style-type: "\2705";
+ margin-left: 0;
+ text-indent: 1em;
+ padding-left: 5em;
}
diff --git a/docs/src/_templates/custom_footer.html b/docs/src/_templates/custom_footer.html
new file mode 100644
index 0000000000..f81fcc583e
--- /dev/null
+++ b/docs/src/_templates/custom_footer.html
@@ -0,0 +1 @@
+Built using Python {{ python_version }}.
diff --git a/docs/src/_templates/custom_sidebar_logo_version.html b/docs/src/_templates/custom_sidebar_logo_version.html
new file mode 100644
index 0000000000..c9d9ac6e2e
--- /dev/null
+++ b/docs/src/_templates/custom_sidebar_logo_version.html
@@ -0,0 +1,26 @@
+{% if on_rtd %}
+ {% if rtd_version == 'latest' %}
+
+
+
+ {% elif rtd_version == 'stable' %}
+
+
+
+ {% elif rtd_version_type == 'tag' %}
+ {# Covers builds for specific tags, including RC's. #}
+
+
+
+ {% else %}
+ {# Anything else build by RTD will be the HEAD of an activated branch #}
+
+
+
+ {% endif %}
+{%- else %}
+ {# not on rtd #}
+
+
+
+{%- endif %}
diff --git a/docs/src/_templates/footer.html b/docs/src/_templates/footer.html
deleted file mode 100644
index 1d5fb08b78..0000000000
--- a/docs/src/_templates/footer.html
+++ /dev/null
@@ -1,5 +0,0 @@
-{% extends "!footer.html" %}
-{% block extrafooter %}
- Built using Python {{ python_version }}.
- {{ super() }}
-{% endblock %}
diff --git a/docs/src/_templates/layout.html b/docs/src/_templates/layout.html
index 96a2e0913e..7377e866b7 100644
--- a/docs/src/_templates/layout.html
+++ b/docs/src/_templates/layout.html
@@ -1,16 +1,16 @@
-{% extends "!layout.html" %}
+{% extends "pydata_sphinx_theme/layout.html" %}
-{# This uses blocks. See:
+{# This uses blocks. See:
https://www.sphinx-doc.org/en/master/templating.html
#}
-/*---------------------------------------------------------------------------*/
-{%- block document %}
- {% if READTHEDOCS and rtd_version == 'latest' %}
-
+ {%- block docs_body %}
+
+ {% if on_rtd and rtd_version == 'latest' %}
+
You are viewing the latest unreleased documentation
- v{{ version }}. You may prefer a
+ v{{ version }}. You may prefer a
stable
version.
@@ -19,29 +19,3 @@
{{ super() }}
{%- endblock %}
-
-/*-----------------------------------------------------z----------------------*/
-
-{% block menu %}
- {{ super() }}
-
- {# menu_links and menu_links_name are set in conf.py (html_context) #}
-
- {% if menu_links %}
-
- {% endif %}
-{% endblock %}
-
diff --git a/docs/src/common_links.inc b/docs/src/common_links.inc
index 67fc493e3e..ec7e1efd6d 100644
--- a/docs/src/common_links.inc
+++ b/docs/src/common_links.inc
@@ -3,19 +3,19 @@
.. _black: https://black.readthedocs.io/en/stable/
.. _cartopy: https://github.com/SciTools/cartopy
-.. _.cirrus.yml: https://github.com/SciTools/iris/blob/main/.cirrus.yml
.. _flake8: https://flake8.pycqa.org/en/stable/
.. _.flake8.yml: https://github.com/SciTools/iris/blob/main/.flake8
.. _cirrus-ci: https://cirrus-ci.com/github/SciTools/iris
.. _conda: https://docs.conda.io/en/latest/
.. _contributor: https://github.com/SciTools/scitools.org.uk/blob/master/contributors.json
.. _core developers: https://github.com/SciTools/scitools.org.uk/blob/master/contributors.json
-.. _discussions: https://github.com/SciTools/iris/discussions
.. _generating sss keys for GitHub: https://docs.github.com/en/github/authenticating-to-github/adding-a-new-ssh-key-to-your-github-account
+.. _GitHub Actions: https://docs.github.com/en/actions
.. _GitHub Help Documentation: https://docs.github.com/en/github
-.. _Iris GitHub Discussions: https://github.com/SciTools/iris/discussions
+.. _GitHub Discussions: https://github.com/SciTools/iris/discussions
.. _Iris: https://github.com/SciTools/iris
.. _Iris GitHub: https://github.com/SciTools/iris
+.. _Iris GitHub Actions: https://github.com/SciTools/iris/actions
.. _iris-sample-data: https://github.com/SciTools/iris-sample-data
.. _iris-test-data: https://github.com/SciTools/iris-test-data
.. _isort: https://pycqa.github.io/isort/
@@ -38,6 +38,7 @@
.. _using git: https://docs.github.com/en/github/using-git
.. _requirements/ci/: https://github.com/SciTools/iris/tree/main/requirements/ci
.. _CF-UGRID: https://ugrid-conventions.github.io/ugrid-conventions/
+.. _issues on GitHub: https://github.com/SciTools/iris/issues?q=is%3Aopen+is%3Aissue+sort%3Areactions-%2B1-desc
.. comment
@@ -52,6 +53,7 @@
.. _@cpelley: https://github.com/cpelley
.. _@djkirkham: https://github.com/djkirkham
.. _@DPeterK: https://github.com/DPeterK
+.. _@ESadek-MO: https://github.com/ESadek-MO
.. _@esc24: https://github.com/esc24
.. _@jamesp: https://github.com/jamesp
.. _@jonseddon: https://github.com/jonseddon
@@ -63,6 +65,7 @@
.. _@QuLogic: https://github.com/QuLogic
.. _@rcomer: https://github.com/rcomer
.. _@rhattersley: https://github.com/rhattersley
+.. _@schlunma: https://github.com/schlunma
.. _@stephenworsley: https://github.com/stephenworsley
.. _@tkknight: https://github.com/tkknight
.. _@trexfeathers: https://github.com/trexfeathers
diff --git a/docs/src/conf.py b/docs/src/conf.py
index 19f22e808f..33864c4658 100644
--- a/docs/src/conf.py
+++ b/docs/src/conf.py
@@ -20,15 +20,16 @@
# ----------------------------------------------------------------------------
import datetime
+from importlib.metadata import version as get_version
import ntpath
import os
from pathlib import Path
import re
+from subprocess import run
import sys
+from urllib.parse import quote
import warnings
-import iris
-
# function to write useful output to stdout, prefixing the source.
def autolog(message):
@@ -41,20 +42,33 @@ def autolog(message):
# -- Are we running on the readthedocs server, if so do some setup -----------
on_rtd = os.environ.get("READTHEDOCS") == "True"
+# This is the rtd reference to the version, such as: latest, stable, v3.0.1 etc
+rtd_version = os.environ.get("READTHEDOCS_VERSION")
+if rtd_version is not None:
+ # Make rtd_version safe for use in shields.io badges.
+ rtd_version = rtd_version.replace("_", "__")
+ rtd_version = rtd_version.replace("-", "--")
+ rtd_version = quote(rtd_version)
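+ # e.g. a hypothetical version "3.2_dev" becomes "3.2__dev", so that
+ # shields.io does not render the "_" as a space.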
+
+# branch, tag, external (for pull request builds), or unknown.
+rtd_version_type = os.environ.get("READTHEDOCS_VERSION_TYPE")
+
+# For local testing purposes we can force being on RTD and set the version.
+# on_rtd = True # useful for testing
+# rtd_version = "latest" # useful for testing
+# rtd_version = "stable" # useful for testing
+# rtd_version_type = "tag" # useful for testing
+# rtd_version = "my_branch" # useful for testing
+
if on_rtd:
autolog("Build running on READTHEDOCS server")
# list all the READTHEDOCS environment variables that may be of use
- # at some point
autolog("Listing all environment variables on the READTHEDOCS server...")
for item, value in os.environ.items():
autolog("[READTHEDOCS] {} = {}".format(item, value))
-# This is the rtd reference to the version, such as: latest, stable, v3.0.1 etc
-# For local testing purposes this could be explicitly set latest or stable.
-rtd_version = os.environ.get("READTHEDOCS_VERSION")
-
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
@@ -82,20 +96,11 @@ def autolog(message):
author = "Iris Developers"
# The version info for the project you're documenting, acts as replacement for
-# |version| and |release|, also used in various other places throughout the
-# built documents.
-
-# The short X.Y version.
-if iris.__version__ == "dev":
- version = "dev"
-else:
- # major.minor.patch-dev -> major.minor.patch
- version = ".".join(iris.__version__.split("-")[0].split(".")[:3])
-# The full version, including alpha/beta/rc tags.
-release = iris.__version__
-
-autolog("Iris Version = {}".format(version))
-autolog("Iris Release = {}".format(release))
+# |version|, also used in various other places throughout the built documents.
+version = get_version("scitools-iris")
+release = version
+autolog(f"Iris Version = {version}")
+autolog(f"Iris Release = {release}")
# -- General configuration ---------------------------------------------------
@@ -158,7 +163,6 @@ def _dotv(version):
"sphinx_gallery.gen_gallery",
"matplotlib.sphinxext.mathmpl",
"matplotlib.sphinxext.plot_directive",
- "image_test_output",
]
if skip_api == "1":
@@ -171,6 +175,7 @@ def _dotv(version):
# -- panels extension ---------------------------------------------------------
# See https://sphinx-panels.readthedocs.io/en/latest/
+panels_add_bootstrap_css = False
# -- Napoleon extension -------------------------------------------------------
# See https://sphinxcontrib-napoleon.readthedocs.io/en/latest/sphinxcontrib.napoleon.html
@@ -229,6 +234,7 @@ def _dotv(version):
"numpy": ("https://numpy.org/doc/stable/", None),
"python": ("https://docs.python.org/3/", None),
"scipy": ("https://docs.scipy.org/doc/scipy/", None),
+ "pandas": ("https://pandas.pydata.org/docs/", None),
}
# The name of the Pygments (syntax highlighting) style to use.
@@ -246,6 +252,10 @@ def _dotv(version):
extlinks = {
"issue": ("https://github.com/SciTools/iris/issues/%s", "Issue #"),
"pull": ("https://github.com/SciTools/iris/pull/%s", "PR #"),
+ "discussion": (
+ "https://github.com/SciTools/iris/discussions/%s",
+ "Discussion #",
+ ),
}
# -- Doctest ("make doctest")--------------------------------------------------
@@ -257,43 +267,68 @@ def _dotv(version):
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
-html_logo = "_static/iris-logo-title.png"
-html_favicon = "_static/favicon.ico"
-html_theme = "sphinx_rtd_theme"
+html_logo = "_static/iris-logo-title.svg"
+html_favicon = "_static/iris-logo.svg"
+html_theme = "pydata_sphinx_theme"
+
+# See https://pydata-sphinx-theme.readthedocs.io/en/latest/user_guide/configuring.html#configure-the-search-bar-position
+html_sidebars = {
+ "**": [
+ "custom_sidebar_logo_version",
+ "search-field",
+ "sidebar-nav-bs",
+ "sidebar-ethical-ads",
+ ]
+}
+# See https://pydata-sphinx-theme.readthedocs.io/en/latest/user_guide/configuring.html
html_theme_options = {
- "display_version": True,
- "style_external_links": True,
- "logo_only": "True",
+ "footer_items": ["copyright", "sphinx-version", "custom_footer"],
+ "collapse_navigation": True,
+ "navigation_depth": 3,
+ "show_prev_next": True,
+ "navbar_align": "content",
+ "github_url": "https://github.com/SciTools/iris",
+ "twitter_url": "https://twitter.com/scitools_iris",
+ # icons available: https://fontawesome.com/v5.15/icons?d=gallery&m=free
+ "icon_links": [
+ {
+ "name": "GitHub Discussions",
+ "url": "https://github.com/SciTools/iris/discussions",
+ "icon": "far fa-comments",
+ },
+ {
+ "name": "PyPI",
+ "url": "https://pypi.org/project/scitools-iris/",
+ "icon": "fas fa-box",
+ },
+ {
+ "name": "Conda",
+ "url": "https://anaconda.org/conda-forge/iris",
+ "icon": "fas fa-boxes",
+ },
+ ],
+ "use_edit_page_button": True,
+ "show_toc_level": 1,
}
+rev_parse = run(["git", "rev-parse", "--short", "HEAD"], capture_output=True)
+commit_sha = rev_parse.stdout.decode().strip()
+
html_context = {
+ # pydata_theme
+ "github_repo": "iris",
+ "github_user": "scitools",
+ "github_version": "main",
+ "doc_path": "docs/src",
+ # custom
+ "on_rtd": on_rtd,
"rtd_version": rtd_version,
+ "rtd_version_type": rtd_version_type,
"version": version,
"copyright_years": copyright_years,
"python_version": build_python_version,
- # menu_links and menu_links_name are used in _templates/layout.html
- # to include some nice icons. See http://fontawesome.io for a list of
- # icons (used in the sphinx_rtd_theme)
- "menu_links_name": "Support",
- "menu_links": [
- (
- ' Source Code',
- "https://github.com/SciTools/iris",
- ),
- (
- ' GitHub Discussions',
- "https://github.com/SciTools/iris/discussions",
- ),
- (
- ' StackOverflow for "How Do I?"',
- "https://stackoverflow.com/questions/tagged/python-iris",
- ),
- (
- ' Legacy Documentation',
- "https://scitools.org.uk/iris/docs/v2.4.0/index.html",
- ),
- ],
+ "commit_sha": commit_sha,
}
# Add any paths that contain custom static files (such as style sheets) here,
@@ -302,12 +337,24 @@ def _dotv(version):
html_static_path = ["_static"]
html_style = "theme_override.css"
+# this allows for using datatables: https://datatables.net/
+html_css_files = [
+ "https://cdn.datatables.net/1.10.23/css/jquery.dataTables.min.css",
+]
+
+html_js_files = [
+ "https://cdn.datatables.net/1.10.23/js/jquery.dataTables.min.js",
+]
+
# url link checker. Some links work but report as broken, lets ignore them.
# See https://www.sphinx-doc.org/en/1.2/config.html#options-for-the-linkcheck-builder
linkcheck_ignore = [
+ "http://catalogue.ceda.ac.uk/uuid/82adec1f896af6169112d09cc1174499",
"http://cfconventions.org",
"http://code.google.com/p/msysgit/downloads/list",
"http://effbot.org",
+ "https://help.github.com",
+ "https://docs.github.com",
"https://github.com",
"http://www.personal.psu.edu/cab38/ColorBrewer/ColorBrewer_updates.html",
"http://schacon.github.com/git",
@@ -316,6 +363,7 @@ def _dotv(version):
"https://software.ac.uk/how-cite-software",
"http://www.esrl.noaa.gov/psd/data/gridded/conventions/cdc_netcdf_standard.shtml",
"http://www.nationalarchives.gov.uk/doc/open-government-licence",
+ "https://www.metoffice.gov.uk/",
]
# list of sources to exclude from the build.
@@ -335,6 +383,11 @@ def _dotv(version):
"ignore_pattern": r"__init__\.py",
# force gallery building, unless overridden (see src/Makefile)
"plot_gallery": "'True'",
+ # force re-registering of nc-time-axis with matplotlib for each example,
+ # required for sphinx-gallery>=0.11.0
+ "reset_modules": (
+ lambda gallery_conf, fname: sys.modules.pop("nc_time_axis", None),
+ ),
}
# -----------------------------------------------------------------------------
diff --git a/docs/src/developers_guide/assets/developer-settings-github-apps.png b/docs/src/developers_guide/assets/developer-settings-github-apps.png
new file mode 100644
index 0000000000..a63994d087
Binary files /dev/null and b/docs/src/developers_guide/assets/developer-settings-github-apps.png differ
diff --git a/docs/src/developers_guide/assets/download-pem.png b/docs/src/developers_guide/assets/download-pem.png
new file mode 100644
index 0000000000..cbceb1304d
Binary files /dev/null and b/docs/src/developers_guide/assets/download-pem.png differ
diff --git a/docs/src/developers_guide/assets/generate-key.png b/docs/src/developers_guide/assets/generate-key.png
new file mode 100644
index 0000000000..ac894dc71b
Binary files /dev/null and b/docs/src/developers_guide/assets/generate-key.png differ
diff --git a/docs/src/developers_guide/assets/gha-token-example.png b/docs/src/developers_guide/assets/gha-token-example.png
new file mode 100644
index 0000000000..cba1cf6935
Binary files /dev/null and b/docs/src/developers_guide/assets/gha-token-example.png differ
diff --git a/docs/src/developers_guide/assets/install-app.png b/docs/src/developers_guide/assets/install-app.png
new file mode 100644
index 0000000000..31259de588
Binary files /dev/null and b/docs/src/developers_guide/assets/install-app.png differ
diff --git a/docs/src/developers_guide/assets/install-iris-actions.png b/docs/src/developers_guide/assets/install-iris-actions.png
new file mode 100644
index 0000000000..db16dee55b
Binary files /dev/null and b/docs/src/developers_guide/assets/install-iris-actions.png differ
diff --git a/docs/src/developers_guide/assets/installed-app.png b/docs/src/developers_guide/assets/installed-app.png
new file mode 100644
index 0000000000..ab87032393
Binary files /dev/null and b/docs/src/developers_guide/assets/installed-app.png differ
diff --git a/docs/src/developers_guide/assets/iris-actions-secret.png b/docs/src/developers_guide/assets/iris-actions-secret.png
new file mode 100644
index 0000000000..f32456d0f2
Binary files /dev/null and b/docs/src/developers_guide/assets/iris-actions-secret.png differ
diff --git a/docs/src/developers_guide/assets/iris-github-apps.png b/docs/src/developers_guide/assets/iris-github-apps.png
new file mode 100644
index 0000000000..50753532b7
Binary files /dev/null and b/docs/src/developers_guide/assets/iris-github-apps.png differ
diff --git a/docs/src/developers_guide/assets/iris-secrets-created.png b/docs/src/developers_guide/assets/iris-secrets-created.png
new file mode 100644
index 0000000000..19b0ba11dc
Binary files /dev/null and b/docs/src/developers_guide/assets/iris-secrets-created.png differ
diff --git a/docs/src/developers_guide/assets/iris-security-actions.png b/docs/src/developers_guide/assets/iris-security-actions.png
new file mode 100644
index 0000000000..7cbe3a7dc2
Binary files /dev/null and b/docs/src/developers_guide/assets/iris-security-actions.png differ
diff --git a/docs/src/developers_guide/assets/iris-settings.png b/docs/src/developers_guide/assets/iris-settings.png
new file mode 100644
index 0000000000..70714235c2
Binary files /dev/null and b/docs/src/developers_guide/assets/iris-settings.png differ
diff --git a/docs/src/developers_guide/assets/org-perms-members.png b/docs/src/developers_guide/assets/org-perms-members.png
new file mode 100644
index 0000000000..99fd8985e2
Binary files /dev/null and b/docs/src/developers_guide/assets/org-perms-members.png differ
diff --git a/docs/src/developers_guide/assets/repo-perms-contents.png b/docs/src/developers_guide/assets/repo-perms-contents.png
new file mode 100644
index 0000000000..4c325c334d
Binary files /dev/null and b/docs/src/developers_guide/assets/repo-perms-contents.png differ
diff --git a/docs/src/developers_guide/assets/repo-perms-pull-requests.png b/docs/src/developers_guide/assets/repo-perms-pull-requests.png
new file mode 100644
index 0000000000..812f5ef951
Binary files /dev/null and b/docs/src/developers_guide/assets/repo-perms-pull-requests.png differ
diff --git a/docs/src/developers_guide/assets/scitools-settings.png b/docs/src/developers_guide/assets/scitools-settings.png
new file mode 100644
index 0000000000..8d7e728ab5
Binary files /dev/null and b/docs/src/developers_guide/assets/scitools-settings.png differ
diff --git a/docs/src/developers_guide/assets/user-perms.png b/docs/src/developers_guide/assets/user-perms.png
new file mode 100644
index 0000000000..607c7dcdb6
Binary files /dev/null and b/docs/src/developers_guide/assets/user-perms.png differ
diff --git a/docs/src/developers_guide/assets/webhook-active.png b/docs/src/developers_guide/assets/webhook-active.png
new file mode 100644
index 0000000000..538362f335
Binary files /dev/null and b/docs/src/developers_guide/assets/webhook-active.png differ
diff --git a/docs/src/developers_guide/asv_example_images/commits.png b/docs/src/developers_guide/asv_example_images/commits.png
new file mode 100644
index 0000000000..4e0d695322
Binary files /dev/null and b/docs/src/developers_guide/asv_example_images/commits.png differ
diff --git a/docs/src/developers_guide/asv_example_images/comparison.png b/docs/src/developers_guide/asv_example_images/comparison.png
new file mode 100644
index 0000000000..e146d30696
Binary files /dev/null and b/docs/src/developers_guide/asv_example_images/comparison.png differ
diff --git a/docs/src/developers_guide/asv_example_images/scalability.png b/docs/src/developers_guide/asv_example_images/scalability.png
new file mode 100644
index 0000000000..260c3ef536
Binary files /dev/null and b/docs/src/developers_guide/asv_example_images/scalability.png differ
diff --git a/docs/src/developers_guide/ci_checks.png b/docs/src/developers_guide/ci_checks.png
old mode 100755
new mode 100644
index e088e03a66..54ab672b3c
Binary files a/docs/src/developers_guide/ci_checks.png and b/docs/src/developers_guide/ci_checks.png differ
diff --git a/docs/src/developers_guide/contributing_benchmarks.rst b/docs/src/developers_guide/contributing_benchmarks.rst
new file mode 100644
index 0000000000..65bc9635b6
--- /dev/null
+++ b/docs/src/developers_guide/contributing_benchmarks.rst
@@ -0,0 +1,62 @@
+.. include:: ../common_links.inc
+
+.. _contributing.benchmarks:
+
+Benchmarking
+============
+Iris includes architecture for benchmarking performance and other metrics of
+interest. This is done using the `Airspeed Velocity`_ (ASV) package.
+
+Full detail on the setup and how to run or write benchmarks is in
+`benchmarks/README.md`_ in the Iris repository.
+
+Continuous Integration
+----------------------
+The primary purpose of `Airspeed Velocity`_, and Iris' specific benchmarking
+setup, is to monitor for performance changes using statistical comparison
+between commits, and this forms part of Iris' continuous integration.
+
+Accurately assessing performance takes longer than functionality pass/fail
+tests, so the benchmark suite is not automatically run against open pull
+requests; instead it is **run overnight against each of the commits of the
+previous day** to check if any commit has introduced performance shifts.
+Detected shifts are reported in a new Iris GitHub issue.
+
+If a pull request author/reviewer suspects their changes may cause performance
+shifts, a convenience is available (currently via Nox) to replicate the
+overnight benchmark run but comparing the current ``HEAD`` with a requested
+branch (e.g. ``upstream/main``). Read more in `benchmarks/README.md`_.
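+
+For example, a sketch of such a local run (the Nox session is named
+``benchmarks``, but the exact arguments shown are illustrative; see the session
+docstring and `benchmarks/README.md`_ for the authoritative invocation)::
+
+    nox --session=benchmarks -- branch upstream/main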
+
+Other Uses
+----------
+Even when not statistically comparing commits, ASV's accurate execution time
+results - recorded using a sophisticated system of repeats - have other
+applications.
+
+* Absolute numbers can be interpreted, provided they are recorded on a
+ dedicated resource.
+* Results for a series of commits can be visualised for an intuitive
+ understanding of when and why changes occurred.
+
+ .. image:: asv_example_images/commits.png
+ :width: 300
+
+* Parameterised benchmarks make it easy to visualise:
+
+ * Comparisons
+
+ .. image:: asv_example_images/comparison.png
+ :width: 300
+
+ * Scalability
+
+ .. image:: asv_example_images/scalability.png
+ :width: 300
+
+Nor is this limited to execution times: ASV can also measure memory demand,
+and even arbitrary numbers (e.g. file size, regridding accuracy), although
+without the repetition logic that execution timing has.
+
+
+.. _Airspeed Velocity: https://github.com/airspeed-velocity/asv
+.. _benchmarks/README.md: https://github.com/SciTools/iris/blob/main/benchmarks/README.md
diff --git a/docs/src/developers_guide/contributing_ci_tests.rst b/docs/src/developers_guide/contributing_ci_tests.rst
index 0257ff7cff..1d06434843 100644
--- a/docs/src/developers_guide/contributing_ci_tests.rst
+++ b/docs/src/developers_guide/contributing_ci_tests.rst
@@ -13,51 +13,50 @@ The `Iris`_ GitHub repository is configured to run checks against all its
branches automatically whenever a pull-request is created, updated or merged.
The checks performed are:
-* :ref:`testing_cirrus`
+* :ref:`testing_gha`
* :ref:`testing_cla`
* :ref:`pre_commit_ci`
-.. _testing_cirrus:
+.. _testing_gha:
-Cirrus-CI
-*********
+GitHub Actions
+**************
Iris unit and integration tests are an essential mechanism to ensure
that the Iris code base is working as expected. :ref:`developer_running_tests`
may be performed manually by a developer locally. However Iris is configured to
-use the `cirrus-ci`_ service for automated Continuous Integration (CI) testing.
+use `GitHub Actions`_ (GHA) for automated Continuous Integration (CI) testing.
-The `cirrus-ci`_ configuration file `.cirrus.yml`_ in the root of the Iris repository
-defines the tasks to be performed by `cirrus-ci`_. For further details
-refer to the `Cirrus-CI Documentation`_. The tasks performed during CI include:
+The Iris GHA YAML configuration files in the ``.github/workflows`` directory
+define the CI tasks to be performed. For further details
+refer to the `GitHub Actions`_ documentation. The tasks performed during CI include:
-* linting the code base and ensuring it adheres to the `black`_ format
* running the system, integration and unit tests for Iris
* ensuring the documentation gallery builds successfully
* performing all doc-tests within the code base
* checking all URL references within the code base and documentation are valid
-The above `cirrus-ci`_ tasks are run automatically against all `Iris`_ branches
+The above GHA tasks are run automatically against all `Iris`_ branches
on GitHub whenever a pull-request is submitted, updated or merged. See the
-`Cirrus-CI Dashboard`_ for details of recent past and active Iris jobs.
+`Iris GitHub Actions`_ dashboard for details of recent past and active CI jobs.
-.. _cirrus_test_env:
+.. _gha_test_env:
-Cirrus CI Test environment
---------------------------
+GitHub Actions Test Environment
+-------------------------------
-The test environment on the Cirrus-CI service is determined from the requirement files
-in ``requirements/ci/py**.yml``. These are conda environment files that list the entire
-set of build, test and run requirements for Iris.
+The CI test environments for our GHA are determined from the requirement files
+in ``requirements/ci/pyXX.yml``. These are conda environment files that list the
+top-level package dependencies for running and testing Iris.
For reproducible test results, these environments are resolved for all their dependencies
-and stored as lock files in ``requirements/ci/nox.lock``. The test environments will not
-resolve the dependencies each time, instead they will use the lock file to reproduce the
-same exact environment each time.
+and stored as conda lock files in the ``requirements/ci/nox.lock`` directory. The test environments
+will not resolve the dependencies each time; instead they will use the lock files to reproduce the
+exact same environment each time.
-**If you have updated the requirement yaml files with new dependencies, you will need to
+**If you have updated the requirement YAML files with new dependencies, you will need to
generate new lock files.** To do this, run the command::
python tools/update_lockfiles.py -o requirements/ci/nox.lock requirements/ci/py*.yml
@@ -68,49 +67,22 @@ or simply::
and add the changed lockfiles to your pull request.
+.. note::
+
+   If your installation of conda runs through Artifactory or another similar
+   proxy then you will need to amend the lock files to use URLs that GitHub
+   Actions can access. A utility to strip out Artifactory exists in the
+ ``ssstack`` tool.
+
New lockfiles are generated automatically each week to ensure that Iris continues to be
tested against the latest available version of its dependencies.
Each week the yaml files in ``requirements/ci`` are resolved by a GitHub Action.
If the resolved environment has changed, a pull request is created with the new lock files.
-The CI test suite will run on this pull request and fixes for failed tests can be pushed to
-the ``auto-update-lockfiles`` branch to be included in the PR.
-Once a developer has pushed to this branch, the auto-update process will not run again until
-the PR is merged, to prevent overwriting developer commits.
-The auto-updater can still be invoked manually in this situation by going to the `GitHub Actions`_
-page for the workflow, and manually running using the "Run Workflow" button.
-By default, this will also not override developer commits. To force an update, you must
-confirm "yes" in the "Run Worflow" prompt.
-
-
-.. _skipping Cirrus-CI tasks:
-
-Skipping Cirrus-CI Tasks
-------------------------
-
-As a developer you may wish to not run all the CI tasks when you are actively
-developing e.g., you are writing documentation and there is no need for linting,
-or long running compute intensive testing tasks to be executed.
-
-As a convenience, it is possible to easily skip one or more tasks by setting
-the appropriate environment variable within the `.cirrus.yml`_ file to a
-**non-empty** string:
-
-* ``SKIP_LINT_TASK`` to skip `flake8`_ linting and `black`_ formatting
-* ``SKIP_TEST_MINIMAL_TASK`` to skip restricted unit and integration testing
-* ``SKIP_TEST_FULL_TASK`` to skip full unit and integration testing
-* ``SKIP_GALLERY_TASK`` to skip building the documentation gallery
-* ``SKIP_DOCTEST_TASK`` to skip running the documentation doc-tests
-* ``SKIP_LINKCHECK_TASK`` to skip checking for broken documentation URL references
-* ``SKIP_ALL_TEST_TASKS`` which is equivalent to setting ``SKIP_TEST_MINIMAL_TASK`` and ``SKIP_TEST_FULL_TASK``
-* ``SKIP_ALL_DOC_TASKS`` which is equivalent to setting ``SKIP_GALLERY_TASK``, ``SKIP_DOCTEST_TASK``, and ``SKIP_LINKCHECK_TASK``
-
-e.g., to skip the linting task, the following are all equivalent::
-
- SKIP_LINT_TASK: "1"
- SKIP_LINT_TASK: "true"
- SKIP_LINT_TASK: "false"
- SKIP_LINT_TASK: "skip"
- SKIP_LINT_TASK: "unicorn"
+The CI test suite will run on this pull request. If the tests fail, a developer
+will need to create a new branch based off the ``auto-update-lockfiles`` branch
+and add the required fixes to this new branch. If the fixes are made directly to
+the ``auto-update-lockfiles`` branch, they will be overwritten the next time the
+GitHub Action is run.
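+
+A sketch of that workflow with ``git`` (the remote and new branch names here
+are illustrative)::
+
+    git fetch upstream
+    git checkout -b lockfile-fixes upstream/auto-update-lockfiles
+    # ...add the fixes, then open a pull request from lockfile-fixes
+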
GitHub Checklist
@@ -146,9 +118,5 @@ pull-requests given the `Iris`_ GitHub repository `.pre-commit-config.yaml`_.
See the `pre-commit.ci dashboard`_ for details of recent past and active Iris jobs.
-
-.. _Cirrus-CI Dashboard: https://cirrus-ci.com/github/SciTools/iris
-.. _Cirrus-CI Documentation: https://cirrus-ci.org/guide/writing-tasks/
.. _.pre-commit-config.yaml: https://github.com/SciTools/iris/blob/main/.pre-commit-config.yaml
.. _pre-commit.ci dashboard: https://results.pre-commit.ci/repo/github/5312648
-.. _GitHub Actions: https://github.com/SciTools/iris/actions/workflows/refresh-lockfiles.yml
diff --git a/docs/src/developers_guide/contributing_codebase_index.rst b/docs/src/developers_guide/contributing_codebase_index.rst
index 88986c0c7a..b59a196ff0 100644
--- a/docs/src/developers_guide/contributing_codebase_index.rst
+++ b/docs/src/developers_guide/contributing_codebase_index.rst
@@ -1,7 +1,7 @@
.. _contributing.documentation.codebase:
-Contributing to the Code Base
-=============================
+Working with the Code Base
+==========================
.. toctree::
:maxdepth: 3
diff --git a/docs/src/developers_guide/contributing_deprecations.rst b/docs/src/developers_guide/contributing_deprecations.rst
index 1ecafdca9f..0b22e2cbd2 100644
--- a/docs/src/developers_guide/contributing_deprecations.rst
+++ b/docs/src/developers_guide/contributing_deprecations.rst
@@ -25,29 +25,29 @@ deprecation is accompanied by the introduction of a new public API.
Under these circumstances the following points apply:
- - Using the deprecated API must result in a concise deprecation warning which
- is an instance of :class:`iris.IrisDeprecation`.
- It is easiest to call
- :func:`iris._deprecation.warn_deprecated`, which is a
- simple wrapper to :func:`warnings.warn` with the signature
- `warn_deprecation(message, **kwargs)`.
- - Where possible, your deprecation warning should include advice on
- how to avoid using the deprecated API. For example, you might
- reference a preferred API, or more detailed documentation elsewhere.
- - You must update the docstring for the deprecated API to include a
- Sphinx deprecation directive:
-
- :literal:`.. deprecated:: `
-
- where you should replace `` with the major and minor version
- of Iris in which this API is first deprecated. For example: `1.8`.
-
- As with the deprecation warning, you should include advice on how to
- avoid using the deprecated API within the content of this directive.
- Feel free to include more detail in the updated docstring than in the
- deprecation warning.
- - You should check the documentation for references to the deprecated
- API and update them as appropriate.
+- Using the deprecated API must result in a concise deprecation warning which
+ is an instance of :class:`iris.IrisDeprecation`.
+ It is easiest to call
+ :func:`iris._deprecation.warn_deprecated`, which is a
+ simple wrapper to :func:`warnings.warn` with the signature
+ `warn_deprecation(message, **kwargs)`.
+- Where possible, your deprecation warning should include advice on
+ how to avoid using the deprecated API. For example, you might
+ reference a preferred API, or more detailed documentation elsewhere.
+- You must update the docstring for the deprecated API to include a
+ Sphinx deprecation directive:
+
+   :literal:`.. deprecated:: <version>`
+
+   where you should replace ``<version>`` with the major and minor version
+ of Iris in which this API is first deprecated. For example: `1.8`.
+
+ As with the deprecation warning, you should include advice on how to
+ avoid using the deprecated API within the content of this directive.
+ Feel free to include more detail in the updated docstring than in the
+ deprecation warning.
+- You should check the documentation for references to the deprecated
+ API and update them as appropriate.
Changing a Default
------------------
@@ -64,14 +64,14 @@ it causes the corresponding public API to use its new default behaviour.
The following points apply in addition to those for removing a public
API:
- - You should add a new boolean attribute to :data:`iris.FUTURE` (by
- modifying :class:`iris.Future`) that controls the default behaviour
- of the public API that needs updating. The initial state of the new
- boolean attribute should be `False`. You should name the new boolean
- attribute to indicate that setting it to `True` will select the new
- default behaviour.
- - You should include a reference to this :data:`iris.FUTURE` flag in your
- deprecation warning and corresponding Sphinx deprecation directive.
+- You should add a new boolean attribute to :data:`iris.FUTURE` (by
+ modifying :class:`iris.Future`) that controls the default behaviour
+ of the public API that needs updating. The initial state of the new
+ boolean attribute should be `False`. You should name the new boolean
+ attribute to indicate that setting it to `True` will select the new
+ default behaviour.
+- You should include a reference to this :data:`iris.FUTURE` flag in your
+ deprecation warning and corresponding Sphinx deprecation directive.
Removing a Deprecation
@@ -94,11 +94,11 @@ and/or example code should be removed/updated as appropriate.
Changing a Default
------------------
- - You should update the initial state of the relevant boolean attribute
- of :data:`iris.FUTURE` to `True`.
- - You should deprecate setting the relevant boolean attribute of
- :class:`iris.Future` in the same way as described in
- :ref:`removing-a-public-api`.
+- You should update the initial state of the relevant boolean attribute
+ of :data:`iris.FUTURE` to `True`.
+- You should deprecate setting the relevant boolean attribute of
+ :class:`iris.Future` in the same way as described in
+ :ref:`removing-a-public-api`.
.. rubric:: Footnotes
diff --git a/docs/src/developers_guide/contributing_documentation_full.rst b/docs/src/developers_guide/contributing_documentation_full.rst
index 77b898c0f3..ac62a67373 100755
--- a/docs/src/developers_guide/contributing_documentation_full.rst
+++ b/docs/src/developers_guide/contributing_documentation_full.rst
@@ -1,3 +1,4 @@
+.. include:: ../common_links.inc
.. _contributing.documentation_full:
@@ -31,7 +32,7 @@ The build can be run from the documentation directory ``docs/src``.
The build output for the html is found in the ``_build/html`` sub directory.
When updating the documentation ensure the html build has *no errors* or
-*warnings* otherwise it may fail the automated `cirrus-ci`_ build.
+*warnings*, otherwise it may fail the automated `Iris GitHub Actions`_ build.
Once the build is complete, if it is rerun it will only rebuild the impacted
build artefacts so should take less time.
@@ -66,21 +67,25 @@ This is useful for a final test before committing your changes.
have been promoted to be **errors** to ensure they are addressed.
This **only** applies when ``make html`` is run.
-.. _cirrus-ci: https://cirrus-ci.com/github/SciTools/iris
-
.. _contributing.documentation.testing:
Testing
~~~~~~~
-There are a ways to test various aspects of the documentation. The
-``make`` commands shown below can be run in the ``docs`` or
-``docs/src`` directory.
+There are various ways to test aspects of the documentation.
Each :ref:`contributing.documentation.gallery` entry has a corresponding test.
-To run the tests::
+To run all the gallery tests::
+
+ pytest -v docs/gallery_tests/test_gallery_examples.py
+
+To run a test for a single gallery example, use the ``pytest -k`` option for
+pattern matching, e.g.::
+
+ pytest -v -k plot_coriolis docs/gallery_tests/test_gallery_examples.py
- make gallerytest
+The ``make`` commands shown below can be run in the ``docs`` or ``docs/src``
+directory.
Many documentation pages includes python code itself that can be run to ensure
it is still valid or to demonstrate examples. To ensure these tests pass
@@ -115,7 +120,7 @@ or ignore the url.
``spelling_word_list_filename``.
-.. note:: In addition to the automated `cirrus-ci`_ build of all the
+.. note:: In addition to the automated `Iris GitHub Actions`_ build of all the
documentation build options above, the
https://readthedocs.org/ service is also used. The configuration
of this held in a file in the root of the
@@ -148,7 +153,7 @@ can exclude the module from the API documentation. Add the entry to the
Gallery
~~~~~~~
-The Iris :ref:`sphx_glr_generated_gallery` uses a sphinx extension named
+The Iris :ref:`gallery_index` uses a sphinx extension named
`sphinx-gallery `_
that auto generates reStructuredText (rst) files based upon a gallery source
directory that abides directory and filename convention.
diff --git a/docs/src/developers_guide/contributing_getting_involved.rst b/docs/src/developers_guide/contributing_getting_involved.rst
index f7bd4733a3..9ec6559114 100644
--- a/docs/src/developers_guide/contributing_getting_involved.rst
+++ b/docs/src/developers_guide/contributing_getting_involved.rst
@@ -1,8 +1,9 @@
.. include:: ../common_links.inc
.. _development_where_to_start:
+.. _developers_guide:
-Getting Involved
+Developers Guide
----------------
Iris_ is an Open Source project hosted on Github and as such anyone with a
@@ -17,7 +18,7 @@ The `Iris GitHub`_ project has been configured to use templates for each of
the above issue types when creating a `new issue`_ to ensure the appropriate
information is provided.
-Alternatively, **join the conversation** in `Iris GitHub Discussions`_, when
+Alternatively, **join the conversation** in Iris `GitHub Discussions`_, when
you would like the opinions of the Iris community.
A `pull request`_ may also be created by anyone who has become a
@@ -25,7 +26,7 @@ A `pull request`_ may also be created by anyone who has become a
``main`` branch are only given to **core developers** of Iris_, this is
to ensure a measure of control.
-To get started we suggest reading recent `issues`_, `discussions`_ and
+To get started we suggest reading recent `issues`_, `GitHub Discussions`_ and
`pull requests`_ for Iris.
If you are new to using GitHub we recommend reading the
@@ -36,5 +37,30 @@ If you are new to using GitHub we recommend reading the
`Governance `_
section of the `SciTools`_ ogranization web site.
-
.. _GitHub getting started: https://docs.github.com/en/github/getting-started-with-github
+
+
+.. toctree::
+ :maxdepth: 1
+ :caption: Developers Guide
+ :name: development_index
+ :hidden:
+
+ gitwash/index
+ contributing_documentation
+ contributing_codebase_index
+ contributing_changes
+ github_app
+ release
+
+
+.. toctree::
+ :maxdepth: 1
+ :caption: Reference
+ :hidden:
+
+ ../generated/api/iris
+ ../whatsnew/index
+ ../techpapers/index
+ ../copyright
+ ../voted_issues
diff --git a/docs/src/developers_guide/contributing_graphics_tests.rst b/docs/src/developers_guide/contributing_graphics_tests.rst
index 1268aa2686..7964c008c5 100644
--- a/docs/src/developers_guide/contributing_graphics_tests.rst
+++ b/docs/src/developers_guide/contributing_graphics_tests.rst
@@ -2,72 +2,17 @@
.. _testing.graphics:
-Graphics Tests
-**************
+Adding or Updating Graphics Tests
+=================================
-Iris may be used to create various forms of graphical output; to ensure
-the output is consistent, there are automated tests to check against
-known acceptable graphical output. See :ref:`developer_running_tests` for
-more information.
-
-At present graphical tests are used in the following areas of Iris:
-
-* Module ``iris.tests.test_plot``
-* Module ``iris.tests.test_quickplot``
-* :ref:`sphx_glr_generated_gallery` plots contained in
- ``docs/gallery_tests``.
-
-
-Challenges
-==========
-
-Iris uses many dependencies that provide functionality, an example that
-applies here is matplotlib_. For more information on the dependences, see
-:ref:`installing_iris`. When there are updates to the matplotlib_ or a
-dependency of matplotlib, this may result in a change in the rendered graphical
-output. This means that there may be no changes to Iris_, but due to an
-updated dependency any automated tests that compare a graphical output to a
-known acceptable output may fail. The failure may also not be visually
-perceived as it may be a simple pixel shift.
-
-
-Testing Strategy
-================
-
-The `Iris Cirrus-CI matrix`_ defines multiple test runs that use
-different versions of Python to ensure Iris is working as expected.
-
-To make this manageable, the ``iris.tests.IrisTest_nometa.check_graphic`` test
-routine tests against multiple alternative **acceptable** results. It does
-this using an image **hash** comparison technique which avoids storing
-reference images in the Iris repository itself.
-
-This consists of:
-
- * The ``iris.tests.IrisTest_nometa.check_graphic`` function uses a perceptual
- **image hash** of the outputs (see https://github.com/JohannesBuchner/imagehash)
- as the basis for checking test results.
-
- * The hashes of known **acceptable** results for each test are stored in a
- lookup dictionary, saved to the repo file
- ``lib/iris/tests/results/imagerepo.json``
- (`link `_) .
-
- * An actual reference image for each hash value is stored in a *separate*
- public repository https://github.com/SciTools/test-iris-imagehash.
-
- * The reference images allow human-eye assessment of whether a new output is
- judged to be close enough to the older ones, or not.
-
- * The utility script ``iris/tests/idiff.py`` automates checking, enabling the
- developer to easily compare proposed new **acceptable** result images
- against the existing accepted reference images, for each failing test.
+.. note::
-The acceptable images for each test can be viewed online. The :ref:`testing.imagehash_index` lists all the graphical tests in the test suite and
-shows the known acceptable result images for comparison.
+   If a large number of image tests are failing due to an update to the
+ libraries used for image hashing, follow the instructions on
+ :ref:`refresh-imagerepo`.
-Reviewing Failing Tests
-=======================
+Generating New Results
+----------------------
When you find that a graphics test in the Iris testing suite has failed,
following changes in Iris or the run dependencies, this is the process
@@ -76,14 +21,24 @@ you should follow:
#. Create a new, empty directory to store temporary image results, at the path
``lib/iris/tests/result_image_comparison`` in your Iris repository checkout.
-#. **In your Iris repo root directory**, run the relevant (failing) tests
- directly as python scripts, or by using a command such as::
+#. Run the relevant (failing) tests directly as python scripts, or using
+ ``pytest``.
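+
+   For example (the module here is purely illustrative; run whichever tests
+   are failing)::
+
+      pytest lib/iris/tests/test_plot.py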
+
+The results of the failing image tests will now be available in
+``lib/iris/tests/result_image_comparison``.
+
+.. note::
+
+ The ``result_image_comparison`` folder is covered by a project
+ ``.gitignore`` setting, so those files *will not show up* in a
+ ``git status`` check.
- python -m unittest discover paths/to/test/files
+Reviewing Failing Tests
+-----------------------
-#. In the ``iris/lib/iris/tests`` folder, run the command::
+#. Run ``iris/lib/iris/tests/graphics/idiff.py`` with Python, e.g.::
- python idiff.py
+ python idiff.py
This will open a window for you to visually inspect
side-by-side **old**, **new** and **difference** images for each failed
@@ -92,29 +47,28 @@ you should follow:
If the change is **accepted**:
- * the imagehash value of the new result image is added into the relevant
- set of 'valid result hashes' in the image result database file,
- ``tests/results/imagerepo.json``
+ * the imagehash value of the new result image is added into the relevant
+ set of 'valid result hashes' in the image result database file,
+ ``tests/results/imagerepo.json``
- * the relevant output file in ``tests/result_image_comparison`` is
- renamed according to the image hash value, as ``.png``.
- A copy of this new PNG file must then be added into the reference image
- repository at https://github.com/SciTools/test-iris-imagehash
- (See below).
+ * the relevant output file in ``tests/result_image_comparison`` is renamed
+ according to the test name. A copy of this new PNG file must then be added
+ into the ``iris-test-data`` repository, at
+ https://github.com/SciTools/iris-test-data (See below).
If a change is **skipped**:
- * no further changes are made in the repo.
+ * no further changes are made in the repo.
- * when you run ``iris/tests/idiff.py`` again, the skipped choice will be
- presented again.
+ * when you run ``iris/tests/idiff.py`` again, the skipped choice will be
+ presented again.
If a change is **rejected**:
- * the output image is deleted from ``result_image_comparison``.
+ * the output image is deleted from ``result_image_comparison``.
- * when you run ``iris/tests/idiff.py`` again, the skipped choice will not
- appear, unless the relevant failing test is re-run.
+ * when you run ``iris/tests/idiff.py`` again, the skipped choice will not
+ appear, unless the relevant failing test is re-run.
#. **Now re-run the tests**. The **new** result should now be recognised and the
relevant test should pass. However, some tests can perform *multiple*
@@ -123,46 +77,66 @@ you should follow:
re-run may encounter further (new) graphical test failures. If that
happens, simply repeat the check-and-accept process until all tests pass.
+#. You're now ready to :ref:`add-graphics-test-changes`.
-Add Your Changes to Iris
-========================
-To add your changes to Iris, you need to make two pull requests (PR).
+Adding a New Image Test
+-----------------------
-#. The first PR is made in the ``test-iris-imagehash`` repository, at
- https://github.com/SciTools/test-iris-imagehash.
+If you attempt to run ``idiff.py`` when there are new graphical tests for which
+no baseline yet exists, you will get a warning that ``idiff.py`` is ``Ignoring
+unregistered test result...``. In this case,
- * First, add all the newly-generated referenced PNG files into the
- ``images/v4`` directory. In your Iris repo, these files are to be found
- in the temporary results folder ``iris/tests/result_image_comparison``.
+#. rename the relevant images from ``iris/tests/result_image_comparison`` by
- * Then, to update the file which lists available images,
- ``v4_files_listing.txt``, run from the project root directory::
+ * removing the ``result-`` prefix
- python recreate_v4_files_listing.py
+ * fully qualifying the test name if it isn't already (i.e. it should start
+     ``iris.tests...`` or ``gallery_tests...``)
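+
+   For example (hypothetical file names)::
+
+      mv result-iris.tests.test_plot.TestContourf.test_xaxis.png \
+         iris.tests.test_plot.TestContourf.test_xaxis.png
+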
- * Create a PR proposing these changes, in the usual way.
+#. run the tests in the mode that lets them create missing data (see
+ :ref:`create-missing`). This will update ``imagerepo.json`` with the new
+ test name and image hash.
-#. The second PR is created in the Iris_ repository, and
- should only include the change to the image results database,
- ``tests/results/imagerepo.json``.
- The description box of this pull request should contain a reference to
- the matching one in ``test-iris-imagehash``.
+#. add them to the Iris test data as covered in
+ :ref:`add-graphics-test-changes`.
-.. note::
- The ``result_image_comparison`` folder is covered by a project
- ``.gitignore`` setting, so those files *will not show up* in a
- ``git status`` check.
+.. _refresh-imagerepo:
-.. important::
+Refreshing the Stored Hashes
+----------------------------
- The Iris pull-request will not test successfully in Cirrus-CI until the
- ``test-iris-imagehash`` pull request has been merged. This is because there
- is an Iris_ test which ensures the existence of the reference images (uris)
- for all the targets in the image results database. It will also fail
- if you forgot to run ``recreate_v4_files_listing.py`` to update the
- image-listing file in ``test-iris-imagehash``.
+From time to time, a new version of the image hashing library will cause all
+image hashes to change. The image hashes stored in
+``tests/results/imagerepo.json`` can be refreshed using the baseline images
+stored in the ``iris-test-data`` repository (at
+https://github.com/SciTools/iris-test-data) using the script
+``tests/graphics/recreate_imagerepo.py``. Use the ``--help`` argument to see
+the available command line options.
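+
+For example, to see the available options::
+
+    python tests/graphics/recreate_imagerepo.py --help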
-.. _Iris Cirrus-CI matrix: https://github.com/scitools/iris/blob/main/.cirrus.yml
+.. _add-graphics-test-changes:
+
+Add Your Changes to Iris
+------------------------
+
+To add your changes to Iris, you need to make two pull requests (PRs).
+
+#. The first PR is made in the ``iris-test-data`` repository, at
+ https://github.com/SciTools/iris-test-data.
+
+   * Add all the newly-generated reference PNG files into the
+ ``test_data/images`` directory. In your Iris repo, these files are to be found
+ in the temporary results folder ``iris/tests/result_image_comparison``.
+
+ * Create a PR proposing these changes, in the usual way.
+
+#. The second PR is the one that makes your intended changes to the Iris_ repository.
+ The description box of this pull request should contain a reference to
+ the matching one in ``iris-test-data``.
+
+ * This PR should include updating the version of the test data in
+ ``.github/workflows/ci-tests.yml`` and
+ ``.github/workflows/ci-docs-tests.yml`` to the new version created by the
+ merging of your ``iris-test-data`` PR.
diff --git a/docs/src/developers_guide/contributing_pull_request_checklist.rst b/docs/src/developers_guide/contributing_pull_request_checklist.rst
index 5afb461d68..57bc9fd728 100644
--- a/docs/src/developers_guide/contributing_pull_request_checklist.rst
+++ b/docs/src/developers_guide/contributing_pull_request_checklist.rst
@@ -16,8 +16,8 @@ is merged. Before submitting a pull request please consider this list.
#. **Provide a helpful description** of the Pull Request. This should include:
- * The aim of the change / the problem addressed / a link to the issue.
- * How the change has been delivered.
+ * The aim of the change / the problem addressed / a link to the issue.
+ * How the change has been delivered.
#. **Include a "What's New" entry**, if appropriate.
See :ref:`whats_new_contributions`.
@@ -31,10 +31,11 @@ is merged. Before submitting a pull request please consider this list.
#. **Check all new dependencies added to the** `requirements/ci/`_ **yaml
files.** If dependencies have been added then new nox testing lockfiles
- should be generated too, see :ref:`cirrus_test_env`.
+ should be generated too, see :ref:`gha_test_env`.
#. **Check the source documentation been updated to explain all new or changed
- features**. See :ref:`docstrings`.
+   features**. Note that we now use numpydoc docstrings; any touched code
+   should be updated to use this formatting. See :ref:`docstrings`.
#. **Include code examples inside the docstrings where appropriate**. See
:ref:`contributing.documentation.testing`.
@@ -42,8 +43,6 @@ is merged. Before submitting a pull request please consider this list.
#. **Check the documentation builds without warnings or errors**. See
:ref:`contributing.documentation.building`
-#. **Check for any new dependencies in the** `.cirrus.yml`_ **config file.**
-
#. **Check for any new dependencies in the** `readthedocs.yml`_ **file**. This
file is used to build the documentation that is served from
https://scitools-iris.readthedocs.io/en/latest/
@@ -51,12 +50,10 @@ is merged. Before submitting a pull request please consider this list.
#. **Check for updates needed for supporting projects for test or example
data**. For example:
- * `iris-test-data`_ is a github project containing all the data to support
- the tests.
- * `iris-sample-data`_ is a github project containing all the data to support
- the gallery and examples.
- * `test-iris-imagehash`_ is a github project containing reference plot
- images to support Iris :ref:`testing.graphics`.
+ * `iris-test-data`_ is a github project containing all the data to support
+ the tests.
+ * `iris-sample-data`_ is a github project containing all the data to support
+ the gallery and examples.
If new files are required by tests or code examples, they must be added to
the appropriate supporting project via a suitable pull-request. This pull
diff --git a/docs/src/developers_guide/contributing_running_tests.rst b/docs/src/developers_guide/contributing_running_tests.rst
index ab36172283..f60cedba05 100644
--- a/docs/src/developers_guide/contributing_running_tests.rst
+++ b/docs/src/developers_guide/contributing_running_tests.rst
@@ -5,13 +5,22 @@
Running the Tests
*****************
-Using setuptools for Testing Iris
-=================================
+There are two options for running the tests:
-.. warning:: The `setuptools`_ ``test`` command was deprecated in `v41.5.0`_. See :ref:`using nox`.
+* Use an environment you created yourself. This requires more manual steps to
+ set up, but gives you more flexibility. For example, you can run a subset of
+ the tests or use ``python`` interactively to investigate any issues. See
+ :ref:`test manual env`.
-A prerequisite of running the tests is to have the Python environment
-setup. For more information on this see :ref:`installing_from_source`.
+* Use ``nox``. This will automatically generate an environment and run test
+ sessions consistent with our GitHub continuous integration. See :ref:`using nox`.
+
+.. _test manual env:
+
+Testing Iris in a Manually Created Environment
+==============================================
+
+To create a suitable environment for running the tests, see :ref:`installing_from_source`.
Many Iris tests will use data that may be defined in the test itself, however
this is not always the case as sometimes example files may be used. Due to
@@ -32,81 +41,76 @@ The example command below uses ``~/projects`` as the parent directory::
git clone git@github.com:SciTools/iris-test-data.git
export OVERRIDE_TEST_DATA_REPOSITORY=~/projects/iris-test-data/test_data
-All the Iris tests may be run from the root ``iris`` project directory via::
+All the Iris tests may be run from the root ``iris`` project directory using
+``pytest``. For example::
- python setup.py test
-
-You can also run a specific test, the example below runs the tests for
-mapping::
+ pytest -n 2
- cd lib/iris/tests
- python test_mapping.py
+will run the tests across two processes. For more options, use the command
+``pytest -h``. Below is a trimmed example of the output::
-When running the test directly as above you can view the command line options
-using the commands ``python test_mapping.py -h`` or
-``python test_mapping.py --help``.
+ ============================= test session starts ==============================
+ platform linux -- Python 3.10.5, pytest-7.1.2, pluggy-1.0.0
+ rootdir: /path/to/git/clone/iris, configfile: pyproject.toml, testpaths: lib/iris
+ plugins: xdist-2.5.0, forked-1.4.0
+ gw0 I / gw1 I
+ gw0 [6361] / gw1 [6361]
-.. tip:: A useful command line option to use is ``-d``. This will display
- matplotlib_ figures as the tests are run. For example::
-
- python test_mapping.py -d
-
- You can also use the ``-d`` command line option when running all
- the tests but this will take a while to run and will require the
- manual closing of each of the figures for the tests to continue.
-
-The output from running the tests is verbose as it will run ~5000 separate
-tests. Below is a trimmed example of the output::
-
- running test
- Running test suite(s): default
-
- Running test discovery on iris.tests with 2 processors.
- test_circular_subset (iris.tests.experimental.regrid.test_regrid_area_weighted_rectilinear_src_and_grid.TestAreaWeightedRegrid) ... ok
- test_cross_section (iris.tests.experimental.regrid.test_regrid_area_weighted_rectilinear_src_and_grid.TestAreaWeightedRegrid) ... ok
- test_different_cs (iris.tests.experimental.regrid.test_regrid_area_weighted_rectilinear_src_and_grid.TestAreaWeightedRegrid) ... ok
- ...
+ ........................................................................ [ 1%]
+ ........................................................................ [ 2%]
+ ........................................................................ [ 3%]
...
- test_ellipsoid (iris.tests.unit.experimental.raster.test_export_geotiff.TestProjection) ... SKIP: Test requires 'gdal'.
- test_no_ellipsoid (iris.tests.unit.experimental.raster.test_export_geotiff.TestProjection) ... SKIP: Test requires 'gdal'.
+ .......................ssssssssssssssssss............................... [ 99%]
+ ........................ [100%]
+ =============================== warnings summary ===============================
...
+ -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
+ =========================== short test summary info ============================
+ SKIPPED [1] lib/iris/tests/experimental/test_raster.py:152: Test requires 'gdal'.
+ SKIPPED [1] lib/iris/tests/experimental/test_raster.py:155: Test requires 'gdal'.
...
- test_slice (iris.tests.test_util.TestAsCompatibleShape) ... ok
- test_slice_and_transpose (iris.tests.test_util.TestAsCompatibleShape) ... ok
- test_transpose (iris.tests.test_util.TestAsCompatibleShape) ... ok
-
- ----------------------------------------------------------------------
- Ran 4762 tests in 238.649s
-
- OK (SKIP=22)
+ ========= 6340 passed, 21 skipped, 1659 warnings in 193.57s (0:03:13) ==========
There may be some tests that have been **skipped**. This is due to a Python
decorator being present in the test script that will intentionally skip a test
if a certain condition is not met. In the example output above there are
-**22** skipped tests, at the point in time when this was run this was primarily
-due to an experimental dependency not being present.
-
+**21** skipped tests. At the time this example was run, these were due to an
+experimental dependency not being present.
.. tip::
The most common reason for tests to be skipped is when the directory for the
``iris-test-data`` has not been set which would shows output such as::
- test_coord_coord_map (iris.tests.test_plot.Test1dScatter) ... SKIP: Test(s) require external data.
- test_coord_coord (iris.tests.test_plot.Test1dScatter) ... SKIP: Test(s) require external data.
- test_coord_cube (iris.tests.test_plot.Test1dScatter) ... SKIP: Test(s) require external data.
-
+ SKIPPED [1] lib/iris/tests/unit/fileformats/test_rules.py:157: Test(s) require external data.
+ SKIPPED [1] lib/iris/tests/unit/fileformats/pp/test__interpret_field.py:97: Test(s) require external data.
+ SKIPPED [1] lib/iris/tests/unit/util/test_demote_dim_coord_to_aux_coord.py:29: Test(s) require external data.
+
All Python decorators that skip tests will be defined in
``lib/iris/tests/__init__.py`` with a function name with a prefix of
``skip_``.
+You can also run a specific test module. The example below runs the tests for
+mapping::
+
+ cd lib/iris/tests
+ python test_mapping.py
+
+When running the test directly as above you can view the command line options
+using the commands ``python test_mapping.py -h`` or
+``python test_mapping.py --help``.
+
+.. tip:: A useful command line option to use is ``-d``. This will display
+ matplotlib_ figures as the tests are run. For example::
+
+ python test_mapping.py -d
.. _using nox:
Using Nox for Testing Iris
==========================
-Iris has adopted the use of the `nox`_ tool for automated testing on `cirrus-ci`_
+The `nox`_ tool has been adopted for automated testing on `Iris GitHub Actions`_
and also locally on the command-line for developers.
`nox`_ is similar to `tox`_, but instead leverages the expressiveness and power of a Python
@@ -124,15 +128,12 @@ automates the process of:
* building the documentation and executing the doc-tests
* building the documentation gallery
* running the documentation URL link check
-* linting the code-base
-* ensuring the code-base style conforms to the `black`_ standard
-
You can perform all of these tasks manually yourself, however the onus is on you to first ensure
that all of the required package dependencies are installed and available in the testing environment.
`Nox`_ has been configured to automatically do this for you, and provides a means to easily replicate
-the remote testing behaviour of `cirrus-ci`_ locally for the developer.
+the remote testing behaviour of `Iris GitHub Actions`_ locally for the developer.
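+
+For example, to list the sessions that are available locally (``--list-sessions``
+is standard `nox`_ behaviour)::
+
+    nox --list-sessions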
Installing Nox
diff --git a/docs/src/developers_guide/contributing_testing.rst b/docs/src/developers_guide/contributing_testing.rst
index d0c96834a9..a65bcebd55 100644
--- a/docs/src/developers_guide/contributing_testing.rst
+++ b/docs/src/developers_guide/contributing_testing.rst
@@ -8,8 +8,8 @@ Test Categories
There are two main categories of tests within Iris:
- - :ref:`testing.unit_test`
- - :ref:`testing.integration`
+- :ref:`testing.unit_test`
+- :ref:`testing.integration`
Ideally, all code changes should be accompanied by one or more unit
tests, and by zero or more integration tests.
diff --git a/docs/src/developers_guide/contributing_testing_index.rst b/docs/src/developers_guide/contributing_testing_index.rst
index c5cf1b997b..2f5ae411e8 100644
--- a/docs/src/developers_guide/contributing_testing_index.rst
+++ b/docs/src/developers_guide/contributing_testing_index.rst
@@ -7,7 +7,8 @@ Testing
:maxdepth: 3
contributing_testing
+ testing_tools
contributing_graphics_tests
- imagehash_index
contributing_running_tests
contributing_ci_tests
+ contributing_benchmarks
diff --git a/docs/src/developers_guide/documenting/docstrings.rst b/docs/src/developers_guide/documenting/docstrings.rst
index 8a06024ee2..eeefc71e40 100644
--- a/docs/src/developers_guide/documenting/docstrings.rst
+++ b/docs/src/developers_guide/documenting/docstrings.rst
@@ -8,10 +8,10 @@ Every public object in the Iris package should have an appropriate docstring.
This is important as the docstrings are used by developers to understand
the code and may be read directly in the source or via the :ref:`Iris`.
-This document has been influenced by the following PEP's,
-
- * Attribute Docstrings :pep:`224`
- * Docstring Conventions :pep:`257`
+.. note::
+   As of April 2022 we are looking to adopt `numpydoc`_ docstrings as standard.
+   We aim to complete the adoption over time, as we make changes to the codebase.
+   For examples of use, see `numpydoc`_ and `sphinxcontrib-napoleon`_.
For consistency always use:
@@ -20,91 +20,14 @@ For consistency always use:
docstrings.
* ``u"""Unicode triple-quoted string"""`` for Unicode docstrings
-All docstrings should be written in reST (reStructuredText) markup. See the
-:ref:`reST_quick_start` for more detail.
-
-There are two forms of docstrings: **single-line** and **multi-line**
-docstrings.
-
-
-Single-Line Docstrings
-======================
-
-The single line docstring of an object must state the **purpose** of that
-object, known as the **purpose section**. This terse overview must be on one
-line and ideally no longer than 80 characters.
-
-
-Multi-Line Docstrings
-=====================
-
-Multi-line docstrings must consist of at least a purpose section akin to the
-single-line docstring, followed by a blank line and then any other content, as
-described below. The entire docstring should be indented to the same level as
-the quotes at the docstring's first line.
-
-
-Description
------------
-
-The multi-line docstring *description section* should expand on what was
-stated in the one line *purpose section*. The description section should try
-not to document *argument* and *keyword argument* details. Such information
-should be documented in the following *arguments and keywords section*.
-
-
-Sample Multi-Line Docstring
----------------------------
-
-Here is a simple example of a standard docstring:
-
-.. literalinclude:: docstrings_sample_routine.py
-
-This would be rendered as:
-
- .. currentmodule:: documenting.docstrings_sample_routine
-
- .. automodule:: documenting.docstrings_sample_routine
- :members:
- :undoc-members:
-
-Additionally, a summary can be extracted automatically, which would result in:
-
- .. autosummary::
-
- documenting.docstrings_sample_routine.sample_routine
-
-
-Documenting Classes
-===================
-
-The class constructor should be documented in the docstring for its
-``__init__`` or ``__new__`` method. Methods should be documented by their own
-docstring, not in the class header itself.
-
-If a class subclasses another class and its behaviour is mostly inherited from
-that class, its docstring should mention this and summarise the differences.
-Use the verb "override" to indicate that a subclass method replaces a
-superclass method and does not call the superclass method; use the verb
-"extend" to indicate that a subclass method calls the superclass method
-(in addition to its own behaviour).
-
-
-Attribute and Property Docstrings
----------------------------------
-
-Here is a simple example of a class containing an attribute docstring and a
-property docstring:
-
-.. literalinclude:: docstrings_attribute.py
+All docstrings can use reST (reStructuredText) markup to augment the
+rendered formatting. See the :ref:`reST_quick_start` for more detail.
-This would be rendered as:
+For more information, including examples, please see:
- .. currentmodule:: documenting.docstrings_attribute
+* `numpydoc`_
+* `sphinxcontrib-napoleon`_
- .. automodule:: documenting.docstrings_attribute
- :members:
- :undoc-members:
-.. note:: The purpose section of the property docstring **must** state whether
- the property is read-only.
+.. _numpydoc: https://numpydoc.readthedocs.io/en/latest/format.html#style-guide
+.. _sphinxcontrib-napoleon: https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_numpy.html
\ No newline at end of file
diff --git a/docs/src/developers_guide/documenting/rest_guide.rst b/docs/src/developers_guide/documenting/rest_guide.rst
index 4845132b15..c4330b1e63 100644
--- a/docs/src/developers_guide/documenting/rest_guide.rst
+++ b/docs/src/developers_guide/documenting/rest_guide.rst
@@ -14,8 +14,8 @@ reST is a lightweight markup language intended to be highly readable in
source format. This guide will cover some of the more frequently used advanced
reST markup syntaxes, for the basics of reST the following links may be useful:
- * https://www.sphinx-doc.org/en/master/usage/restructuredtext/
- * http://packages.python.org/an_example_pypi_project/sphinx.html
+* https://www.sphinx-doc.org/en/master/usage/restructuredtext/
+* http://packages.python.org/an_example_pypi_project/sphinx.html
Reference documentation for reST can be found at http://docutils.sourceforge.net/rst.html.
diff --git a/docs/src/developers_guide/documenting/whats_new_contributions.rst b/docs/src/developers_guide/documenting/whats_new_contributions.rst
index 576fc5f6a6..aa19722a69 100644
--- a/docs/src/developers_guide/documenting/whats_new_contributions.rst
+++ b/docs/src/developers_guide/documenting/whats_new_contributions.rst
@@ -1,24 +1,21 @@
+.. include:: ../../common_links.inc
+
.. _whats_new_contributions:
=================================
Contributing a "What's New" Entry
=================================
-Iris uses a file named ``dev.rst`` to keep a draft of upcoming development changes
-that will form the next stable release. Contributions to the :ref:`iris_whatsnew`
-document are written by the developer most familiar with the change made.
-The contribution should be included as part of the Iris Pull Request that
-introduces the change.
+Iris uses a file named ``latest.rst`` to keep a draft of upcoming development
+changes that will form the next stable release. Contributions to the
+:ref:`iris_whatsnew` document are written by the developer most familiar
+with the change made. The contribution should be included as part of
+the Iris Pull Request that introduces the change.
-The ``dev.rst`` and the past release notes are kept in the
+The ``latest.rst`` and the past release notes are kept in the
``docs/src/whatsnew/`` directory. If you are writing the first contribution after
-an Iris release: **create the new** ``dev.rst`` by copying the content from
-``dev.rst.template`` in the same directory.
-
-.. note::
-
- Ensure that the symbolic link ``latest.rst`` references the ``dev.rst`` file
- within the ``docs/src/whatsnew`` directory.
+an Iris release: **create the new** ``latest.rst`` by copying the content from
+``latest.rst.template`` in the same directory.
Since the `Contribution categories`_ include Internal changes, **all** Iris
Pull Requests should be accompanied by a "What's New" contribution.
@@ -27,7 +24,7 @@ Pull Requests should be accompanied by a "What's New" contribution.
Git Conflicts
=============
-If changes to ``dev.rst`` are being suggested in several simultaneous
+If changes to ``latest.rst`` are being suggested in several simultaneous
Iris Pull Requests, Git will likely encounter merge conflicts. If this
situation is thought likely (large PR, high repo activity etc.):
@@ -38,17 +35,17 @@ situation is thought likely (large PR, high repo activity etc.):
a **new pull request** be created specifically for the "What's New" entry,
which references the main pull request and titled (e.g. for PR#9999):
- What's New for #9999
+ What's New for #9999
* PR author: create the "What's New" pull request
* PR reviewer: once the "What's New" PR is created, **merge the main PR**.
- (this will fix any `cirrus-ci`_ linkcheck errors where the links in the
+ (this will fix any `Iris GitHub Actions`_ linkcheck errors where the links in the
"What's New" PR reference new features introduced in the main PR)
* PR reviewer: review the "What's New" PR, merge once acceptable
-These measures should mean the suggested ``dev.rst`` changes are outstanding
+These measures should mean the suggested ``latest.rst`` changes are outstanding
for the minimum time, minimising conflicts and minimising the need to rebase or
merge from trunk.
@@ -74,6 +71,9 @@ The required content, in order, is as follows:
user name. Link the name to their GitHub profile. E.g.
```@tkknight <https://github.com/tkknight>`_ changed...``
+ * Bigger changes take a lot of effort to review, too! Make sure you credit
+ the reviewer(s) where appropriate.
+
* The new/changed behaviour
* Context to the change. Possible examples include: what this fixes, why
@@ -87,8 +87,9 @@ The required content, in order, is as follows:
For example::
- #. `@tkknight `_ changed changed argument ``x``
- to be optional in :class:`~iris.module.class` and
+ #. `@tkknight <https://github.com/tkknight>`_ and
+ `@trexfeathers <https://github.com/trexfeathers>`_ (reviewer) changed
+ argument ``x`` to be optional in :class:`~iris.module.class` and
:meth:`iris.module.method`. This allows greater flexibility as requested in
:issue:`9999`. (:pull:`1111`, :pull:`9999`)
@@ -98,13 +99,11 @@ links to code. For more inspiration on possible content and references, please
examine past what's :ref:`iris_whatsnew` entries.
.. note:: The reStructuredText syntax will be checked as part of building
- the documentation. Any warnings should be corrected.
- `cirrus-ci`_ will automatically build the documentation when
+ the documentation. Any warnings should be corrected. The
+ `Iris GitHub Actions`_ will automatically build the documentation when
creating a pull request, however you can also manually
:ref:`build ` the documentation.
-.. _cirrus-ci: https://cirrus-ci.com/github/SciTools/iris
-
Contribution Categories
=======================
diff --git a/docs/src/developers_guide/github_app.rst b/docs/src/developers_guide/github_app.rst
new file mode 100644
index 0000000000..402cfe0c75
--- /dev/null
+++ b/docs/src/developers_guide/github_app.rst
@@ -0,0 +1,281 @@
+.. include:: ../common_links.inc
+
+Token GitHub App
+----------------
+
+.. note::
+
+ This section of the documentation is applicable only to GitHub `SciTools`_
+ Organisation **owners** and **administrators**.
+
+.. note::
+
+ The ``iris-actions`` GitHub App has been rebranded with the more generic
+ name ``scitools-ci``, as the app can be used for any `SciTools`_ repository,
+ not just ``iris`` specifically.
+
+ All of the following instructions are still applicable.
+
+
+This section describes how to create, configure, install and use our `SciTools`_
+GitHub App for generating tokens for use with *GitHub Actions* (GHA).
+
+
+Background
+^^^^^^^^^^
+
+Our GitHub *Continuous Integration* (CI) workflows require fully reproducible
+`conda`_ environments to test ``iris`` and build our documentation.
+
+The ``iris`` `refresh-lockfiles`_ GHA workflow uses the `conda-lock`_ package to routinely
+generate a platform specific ``lockfile`` containing all the package dependencies
+required by ``iris`` for a specific version of ``python``.
+
+The environment lockfiles created by the `refresh-lockfiles`_ GHA are contributed
+back to ``iris`` though a pull-request that is automatically generated using the
+third-party `create-pull-request`_ GHA. By default, pull-requests created by such an
+action using the standard ``GITHUB_TOKEN`` **cannot** trigger other workflows, such
+as our CI.
+
+As a result, we use a dedicated authentication **GitHub App** to securely generate tokens
+for the `create-pull-request`_ GHA, which then permits our full suite of CI testing workflows
+to be triggered against the lockfiles pull-request. Ensuring that the CI is triggered gives us
+confidence that the proposed new lockfiles have not introduced a package level incompatibility
+or issue within ``iris``. See :ref:`use gha`.
+
+
+Create GitHub App
+^^^^^^^^^^^^^^^^^
+
+The **GitHub App** is created for the sole purpose of generating tokens for use with actions,
+and **must** be owned by the `SciTools`_ organisation.
+
+To create a minimal `GitHub App`_ for this purpose, perform the following steps:
+
+1. Click the `SciTools`_ organisation ``⚙️ Settings`` option.
+
+.. figure:: assets/scitools-settings.png
+ :alt: SciTools organisation Settings option
+ :align: center
+ :width: 75%
+
+2. Click the ``GitHub Apps`` option from the ``<> Developer settings``
+ section in the left hand sidebar.
+
+.. figure:: assets/developer-settings-github-apps.png
+ :alt: Developer settings, GitHub Apps option
+ :align: center
+ :width: 25%
+
+3. Now click the ``New GitHub App`` button to display the ``Register new GitHub App``
+ form.
+
+Within the ``Register new GitHub App`` form, complete the following fields:
+
+4. Set the **mandatory** ``GitHub App name`` field to be ``iris-actions``.
+5. Set the **mandatory** ``Homepage URL`` field to be ``https://github.com/SciTools/iris``.
+6. Under the ``Webhook`` section, **uncheck** the ``Active`` checkbox.
+ Note that **no** ``Webhook URL`` is required.
+
+.. figure:: assets/webhook-active.png
+ :alt: Webhook active checkbox
+ :align: center
+ :width: 75%
+
+7. Under the ``Repository permissions`` section, set the ``Contents`` field to
+ be ``Access: Read and write``.
+
+.. figure:: assets/repo-perms-contents.png
+ :alt: Repository permissions Contents option
+ :align: center
+ :width: 75%
+
+8. Under the ``Repository permissions`` section, set the ``Pull requests`` field
+ to be ``Access: Read and write``.
+
+.. figure:: assets/repo-perms-pull-requests.png
+ :alt: Repository permissions Pull requests option
+ :align: center
+ :width: 75%
+
+9. Under the ``Organization permissions`` section, set the ``Members`` field to
+ be ``Access: Read-only``.
+
+.. figure:: assets/org-perms-members.png
+ :alt: Organization permissions Members
+ :align: center
+ :width: 75%
+
+10. Under the ``User permissions`` section, for the ``Where can this GitHub App be installed?``
+ field, **check** the ``Only on this account`` radio-button, i.e., only allow
+ this GitHub App to be installed on the **SciTools** account.
+
+.. figure:: assets/user-perms.png
+ :alt: User permissions
+ :align: center
+ :width: 75%
+
+11. Finally, click the ``Create GitHub App`` button.
+
+
+Configure GitHub App
+^^^^^^^^^^^^^^^^^^^^
+
+Creating the GitHub App will automatically redirect you to the ``SciTools settings / iris-actions``
+form for the newly created app.
+
+Perform the following GitHub App configuration steps:
+
+.. _app id:
+
+1. Under the ``About`` section, make a note of the GitHub ``App ID``, as this
+   value is required later. See :ref:`gha secrets`.
+2. Under the ``Display information`` section, optionally upload the ``iris`` logo
+ as a ``png`` image.
+3. Under the ``Private keys`` section, click the ``Generate a private key`` button.
+
+.. figure:: assets/generate-key.png
+ :alt: Private keys Generate a private key
+ :align: center
+ :width: 75%
+
+.. _private key:
+
+GitHub will automatically generate a private key to sign access token requests
+for the app. A separate browser pop-up window will also appear, containing the
+GitHub App private key in ``OpenSSL PEM`` format.
+
+.. figure:: assets/download-pem.png
+ :alt: Download OpenSSL PEM file
+ :align: center
+ :width: 50%
+
+.. important::
+
+ Please ensure that you save the ``OpenSSL PEM`` file and **securely** archive
+ its contents. The private key within this file is required later.
+ See :ref:`gha secrets`.
+
+
+Install GitHub App
+^^^^^^^^^^^^^^^^^^
+
+To install the GitHub App:
+
+1. Select the ``Install App`` option from the top left menu of the
+ ``SciTools settings / iris-actions`` form, then click the ``Install`` button.
+
+.. figure:: assets/install-app.png
+ :alt: Install App option
+ :align: center
+ :width: 75%
+
+2. Select the ``Only select repositories`` radio-button from the ``Install iris-actions``
+ form, and choose the ``SciTools/iris`` repository.
+
+.. figure:: assets/install-iris-actions.png
+ :alt: Install iris-actions GitHub App
+ :align: center
+ :width: 75%
+
+3. Click the ``Install`` button.
+
+ The successfully installed ``iris-actions`` GitHub App is now available under
+ the ``GitHub Apps`` option in the ``Integrations`` section of the `SciTools`_
+ organisation ``Settings``. Note that to reconfigure the installed app,
+ click the ``⚙️ App settings`` option.
+
+.. figure:: assets/installed-app.png
+ :alt: Installed GitHub App
+ :align: center
+ :width: 80%
+
+4. Finally, confirm that the ``iris-actions`` GitHub App is now available within
+ the `SciTools/iris`_ repository by clicking the ``GitHub apps`` option in the
+ ``⚙️ Settings`` section.
+
+.. figure:: assets/iris-github-apps.png
+ :alt: Iris installed GitHub App
+ :align: center
+ :width: 80%
+
+
+.. _gha secrets:
+
+Create Repository Secrets
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The GitHub Action that requests an access token from the ``iris-actions``
+GitHub App must be configured with the following information:
+
+* the ``App ID``, and
+* the ``OpenSSL PEM`` private key
+
+associated with the ``iris-actions`` GitHub App. This **sensitive** information is
+made **securely** available by creating `SciTools/iris`_ repository secrets:
+
+1. Click the `SciTools/iris`_ repository ``⚙️ Settings`` option.
+
+.. figure:: assets/iris-settings.png
+ :alt: Iris Settings
+ :align: center
+ :width: 75%
+
+2. Click the ``Actions`` option from the ``Security`` section in the left hand
+ sidebar.
+
+.. figure:: assets/iris-security-actions.png
+ :alt: Iris Settings Security Actions
+ :align: center
+ :width: 25%
+
+3. Click the ``New repository secret`` button.
+
+.. figure:: assets/iris-actions-secret.png
+ :alt: Iris Actions Secret
+ :align: center
+ :width: 75%
+
+4. Complete the ``Actions secrets / New secret`` form for the ``App ID``:
+
+ * Set the ``Name`` field to be ``AUTH_APP_ID``.
+ * Set the ``Value`` field to be the numerical ``iris-actions`` GitHub ``App ID``.
+ See :ref:`here <app id>`.
+ * Click the ``Add secret`` button.
+
+5. Click the ``New repository secret`` button again, and complete the form
+ for the ``OpenSSL PEM``:
+
+ * Set the ``Name`` field to be ``AUTH_APP_PRIVATE_KEY``.
+ * Set the ``Value`` field to be the entire contents of the ``OpenSSL PEM`` file.
+ See :ref:`here <private key>`.
+ * Click the ``Add secret`` button.
+
+A summary of the newly created `SciTools/iris`_ repository secrets is now available:
+
+.. figure:: assets/iris-secrets-created.png
+ :alt: Iris Secrets created
+ :align: center
+ :width: 75%
+
+
+.. _use gha:
+
+Use GitHub App
+^^^^^^^^^^^^^^
+
+The following example workflow shows how to use the `github-app-token`_ GHA
+to generate a token for use with the `create-pull-request`_ GHA:
+
+.. figure:: assets/gha-token-example.png
+ :alt: GitHub Action token example
+ :align: center
+ :width: 50%
+
+
+.. _GitHub App: https://docs.github.com/en/developers/apps/building-github-apps/creating-a-github-app
+.. _SciTools/iris: https://github.com/SciTools/iris
+.. _conda-lock: https://github.com/conda-incubator/conda-lock
+.. _create-pull-request: https://github.com/peter-evans/create-pull-request
+.. _github-app-token: https://github.com/tibdex/github-app-token
+.. _refresh-lockfiles: https://github.com/SciTools/iris/blob/main/.github/workflows/refresh-lockfiles.yml
diff --git a/docs/src/developers_guide/gitwash/development_workflow.rst b/docs/src/developers_guide/gitwash/development_workflow.rst
index 0536ebfb62..b086922d5b 100644
--- a/docs/src/developers_guide/gitwash/development_workflow.rst
+++ b/docs/src/developers_guide/gitwash/development_workflow.rst
@@ -25,7 +25,7 @@ In what follows we'll refer to the upstream iris ``main`` branch, as
* If you can possibly avoid it, avoid merging trunk or any other branches into
your feature branch while you are working.
* If you do find yourself merging from trunk, consider :ref:`rebase-on-trunk`
-* Ask on the `Iris GitHub Discussions`_ if you get stuck.
+* Ask on the Iris `GitHub Discussions`_ if you get stuck.
* Ask for code review!
This way of working helps to keep work well organized, with readable history.
@@ -157,7 +157,7 @@ Ask for Your Changes to be Reviewed or Merged
When you are ready to ask for someone to review your code and consider a merge:
#. Go to the URL of your forked repo, say
- ``http://github.com/your-user-name/iris``.
+ ``https://github.com/your-user-name/iris``.
#. Use the 'Switch Branches' dropdown menu near the top left of the page to
select the branch with your changes:
@@ -190,7 +190,7 @@ Delete a Branch on Github
git push origin :my-unwanted-branch
Note the colon ``:`` before ``my-unwanted-branch``. See also:
-http://github.com/guides/remove-a-remote-branch
+https://github.com/guides/remove-a-remote-branch
Several People Sharing a Single Repository
@@ -203,7 +203,7 @@ share it via github.
First fork iris into your account, as from :ref:`forking`.
Then, go to your forked repository github page, say
-``http://github.com/your-user-name/iris``, select :guilabel:`Settings`,
+``https://github.com/your-user-name/iris``, select :guilabel:`Settings`,
:guilabel:`Manage Access` and then :guilabel:`Invite collaborator`.
.. note:: For more information on sharing your repository see the
diff --git a/docs/src/developers_guide/gitwash/forking.rst b/docs/src/developers_guide/gitwash/forking.rst
index 161847ed79..247e3cf678 100644
--- a/docs/src/developers_guide/gitwash/forking.rst
+++ b/docs/src/developers_guide/gitwash/forking.rst
@@ -7,7 +7,7 @@ Making Your own Copy (fork) of Iris
===================================
You need to do this only once. The instructions here are very similar
-to the instructions at http://help.github.com/forking/, please see
+to the instructions at https://help.github.com/forking/, please see
that page for more detail. We're repeating some of it here just to give the
specifics for the `Iris`_ project, and to suggest some default names.
diff --git a/docs/src/developers_guide/gitwash/git_links.inc b/docs/src/developers_guide/gitwash/git_links.inc
index 9a87b55d4d..11d037ccf4 100644
--- a/docs/src/developers_guide/gitwash/git_links.inc
+++ b/docs/src/developers_guide/gitwash/git_links.inc
@@ -9,8 +9,8 @@
nipy, NIPY, Nipy, etc...
.. _git: http://git-scm.com/
-.. _github: http://github.com
-.. _github help: http://help.github.com
+.. _github: https://github.com
+.. _github help: https://help.github.com
.. _git documentation: https://git-scm.com/docs
.. _git clone: http://schacon.github.com/git/git-clone.html
diff --git a/docs/src/developers_guide/imagehash_index.rst b/docs/src/developers_guide/imagehash_index.rst
deleted file mode 100644
index a11ae8a531..0000000000
--- a/docs/src/developers_guide/imagehash_index.rst
+++ /dev/null
@@ -1,20 +0,0 @@
-.. include:: ../common_links.inc
-
-.. _testing.imagehash_index:
-
-Graphical Test Hash Index
-*************************
-
-The iris test suite produces plots of data using matplotlib and cartopy.
-The images produced are compared to known "good" output, the images for
-which are kept in `scitools/test-iris-imagehash `_.
-
-For an overview of iris' graphics tests, see :ref:`testing.graphics`
-
-Typically running the iris test suite will output the rendered
-images to ``$PROJECT_DIR/iris_image_test_output``.
-The known good output for each test can be seen at the links below
-for comparison.
-
-
-.. imagetest-list::
\ No newline at end of file
diff --git a/docs/src/developers_guide/release.rst b/docs/src/developers_guide/release.rst
index f4d44781fc..25a426e20b 100644
--- a/docs/src/developers_guide/release.rst
+++ b/docs/src/developers_guide/release.rst
@@ -19,7 +19,8 @@ A Release Manager will be nominated for each release of Iris. This role involves
* deciding which features and bug fixes should be included in the release
* managing the project board for the release
-* using a `GitHub Releases Discussion Forum`_ for documenting intent and capturing any
+* using :discussion:`GitHub Discussion releases category <categories/releases>`
+ for documenting intent and capturing any
discussion about the release
The Release Manager will make the release, ensuring that all the steps outlined
@@ -99,12 +100,14 @@ Steps to achieve this can be found in the :ref:`iris_development_releases_steps`
The Release
-----------
-The final steps of the release are to change the version string ``__version__``
-in the source of :literal:`iris.__init__.py` and ensure the release date and details
+The final steps of the release are to ensure that the release date and details
are correct in the relevant ``whatsnew`` page within the documentation.
-Once all checks are complete, the release is cut by the creation of a new tag
-in the ``SciTools/iris`` repository.
+There is no need to update the ``iris.__version__``, as this is managed
+automatically by `setuptools-scm`_.
+
+Once all checks are complete, the release is published on GitHub by
+creating a new tag in the ``SciTools/iris`` repository.
Update conda-forge
@@ -120,6 +123,14 @@ conda package on the `conda-forge Anaconda channel`_.
Update PyPI
-----------
+.. note::
+
+ As part of our Continuous-Integration (CI), the building and publishing of
+ PyPI artifacts is now automated by a dedicated GitHub Action.
+
+ The following instructions **no longer** need to be performed manually,
+ but remain part of the documentation for reference purposes only.
+
Update the `scitools-iris`_ project on PyPI with the latest Iris release.
To do this perform the following steps.
@@ -178,14 +189,14 @@ For further details on how to test Iris, see :ref:`developer_running_tests`.
Merge Back
----------
-After the release is cut, the changes from the release branch should be merged
+After the release is published, the changes from the release branch should be merged
back onto the ``SciTools/iris`` ``main`` branch.
To achieve this, first cut a local branch from the latest ``main`` branch,
and `git merge` the :literal:`.x` release branch into it. Ensure that the
-``iris.__version__``, ``docs/src/whatsnew/index.rst``, ``docs/src/whatsnew/dev.rst``,
-and ``docs/src/whatsnew/latest.rst`` are correct, before committing these changes
-and then proposing a pull-request on the ``main`` branch of ``SciTools/iris``.
+``docs/src/whatsnew/index.rst`` and ``docs/src/whatsnew/latest.rst`` are
+correct, before committing these changes and then proposing a pull-request
+on the ``main`` branch of ``SciTools/iris``.
Point Releases
@@ -218,24 +229,24 @@ Release Steps
#. Update the ``iris.__init__.py`` version string e.g., to ``1.9.0``
#. Update the ``whatsnew`` for the release:
- * Use ``git`` to rename ``docs/src/whatsnew/dev.rst`` to the release
- version file ``v1.9.rst``
- * Update the symbolic link ``latest.rst`` to reference the latest
- whatsnew ``v1.9.rst``
- * Use ``git`` to delete the ``docs/src/whatsnew/dev.rst.template`` file
- * In ``v1.9.rst`` remove the ``[unreleased]`` caption from the page title.
- Note that, the Iris version and release date are updated automatically
- when the documentation is built
- * Review the file for correctness
- * Work with the development team to populate the ``Release Highlights``
- dropdown at the top of the file, which provides extra detail on notable
- changes
- * Use ``git`` to add and commit all changes, including removal of
- ``dev.rst.template`` and update to the ``latest.rst`` symbolic link.
+ * Use ``git`` to rename ``docs/src/whatsnew/latest.rst`` to the release
+ version file ``v1.9.rst``
+ * Update ``docs/src/whatsnew/index.rst`` to rename ``latest.rst`` in the
+ include statement and toctree.
+ * Use ``git`` to delete the ``docs/src/whatsnew/latest.rst.template`` file
+ * In ``v1.9.rst`` remove the ``[unreleased]`` caption from the page title.
+ Note that the Iris version and release date are updated automatically
+ when the documentation is built.
+ * Review the file for correctness
+ * Work with the development team to populate the ``Release Highlights``
+ dropdown at the top of the file, which provides extra detail on notable
+ changes
+ * Use ``git`` to add and commit all changes, including removal of
+ ``latest.rst.template``.
#. Update the ``whatsnew`` index ``docs/src/whatsnew/index.rst``
- * Remove the reference to ``dev.rst``
+ * Remove the reference to ``latest.rst``
* Add a reference to ``v1.9.rst`` to the top of the list
#. Check your changes by building the documentation and reviewing
@@ -256,12 +267,14 @@ Post Release Steps
`Read The Docs`_ to ensure that the appropriate versions are ``Active``
and/or ``Hidden``. To do this ``Edit`` the appropriate version e.g.,
see `Editing v3.0.0rc0`_ (must be logged into Read the Docs).
+#. Make a new ``latest.rst`` from ``latest.rst.template`` and update the include
+ statement and the toctree in ``index.rst`` to point at the new
+ ``latest.rst``.
#. Merge back to ``main``
.. _SciTools/iris: https://github.com/SciTools/iris
.. _tag on the SciTools/Iris: https://github.com/SciTools/iris/releases
-.. _GitHub Releases Discussion Forum: https://github.com/SciTools/iris/discussions/categories/releases
.. _conda-forge Anaconda channel: https://anaconda.org/conda-forge/iris
.. _conda-forge iris-feedstock: https://github.com/conda-forge/iris-feedstock
.. _CFEP-05: https://github.com/conda-forge/cfep/blob/master/cfep-05.md
@@ -271,4 +284,5 @@ Post Release Steps
.. _rc_iris: https://anaconda.org/conda-forge/iris/labels
.. _Generating Distribution Archives: https://packaging.python.org/tutorials/packaging-projects/#generating-distribution-archives
.. _Packaging Your Project: https://packaging.python.org/guides/distributing-packages-using-setuptools/#packaging-your-project
-.. _latest CF standard names: http://cfconventions.org/standard-names.html
\ No newline at end of file
+.. _latest CF standard names: http://cfconventions.org/standard-names.html
+.. _setuptools-scm: https://github.com/pypa/setuptools_scm
\ No newline at end of file
diff --git a/docs/src/developers_guide/testing_tools.rst b/docs/src/developers_guide/testing_tools.rst
new file mode 100755
index 0000000000..dd628d37fc
--- /dev/null
+++ b/docs/src/developers_guide/testing_tools.rst
@@ -0,0 +1,80 @@
+.. include:: ../common_links.inc
+
+.. _testing_tools:
+
+Testing tools
+*************
+
+Iris has various internal convenience functions and utilities available to
+support writing tests. Using these makes tests quicker and easier to write, and
+also consistent with the rest of Iris (which makes it easier to work with the
+code). Most of these conveniences are accessed through the
+:class:`iris.tests.IrisTest` class, from
+which Iris' test classes then inherit.
+
+.. tip::
+
+ All functions listed on this page are defined within
+ :mod:`iris.tests.__init__.py` as methods of
+ :class:`iris.tests.IrisTest_nometa` (which :class:`iris.tests.IrisTest`
+ inherits from). They can be accessed within a test using
+ ``self.exampleFunction``.
+
+Custom assertions
+=================
+
+:class:`iris.tests.IrisTest` supports a variety of custom unittest-style
+assertions, such as :meth:`~iris.tests.IrisTest_nometa.assertArrayEqual`,
+and :meth:`~iris.tests.IrisTest_nometa.assertArrayAlmostEqual`.
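+
+As a purely illustrative sketch (the test class and values are hypothetical),
+these assertions are used like any other ``unittest`` assertion:
+
+.. code-block:: python
+
+    import numpy as np
+
+    import iris.tests as tests
+
+
+    class TestArithmetic(tests.IrisTest):
+        def test_doubling(self):
+            result = np.arange(3) * 2
+            # Exact element-wise comparison.
+            self.assertArrayEqual(result, np.array([0, 2, 4]))
+            # Approximate comparison, to a given number of decimal places.
+            self.assertArrayAlmostEqual(
+                result / 3.0, np.array([0.0, 0.6667, 1.3333]), decimal=4
+            )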
+
+.. _create-missing:
+
+Saving results
+--------------
+
+Some tests compare the generated output to the expected result contained in a
+file. Custom assertions for this include
+:meth:`~iris.tests.IrisTest_nometa.assertCMLApproxData`,
+:meth:`~iris.tests.IrisTest_nometa.assertCDL`,
+:meth:`~iris.tests.IrisTest_nometa.assertCML` and
+:meth:`~iris.tests.IrisTest_nometa.assertTextFile`. See docstrings for more
+information.
+
+.. note::
+
+ Sometimes code changes alter the results expected from a test containing the
+ above methods. These can be updated by removing the existing result files
+ and then running the file containing the test with a ``--create-missing``
+ command line argument, or setting the ``IRIS_TEST_CREATE_MISSING``
+ environment variable to anything non-zero. This will create the files rather
+ than erroring, allowing you to commit the updated results.
+
+Context managers
+================
+
+Capturing exceptions and logging
+--------------------------------
+
+:class:`iris.tests.IrisTest` includes several context managers that can be used
+to make test code tidier and easier to read. These include
+:meth:`~iris.tests.IrisTest_nometa.assertWarnsRegexp` and
+:meth:`~iris.tests.IrisTest_nometa.assertLogs`.
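+
+For example, a minimal sketch (the test class and warning are hypothetical):
+
+.. code-block:: python
+
+    import warnings
+
+    import iris.tests as tests
+
+
+    class TestWarnings(tests.IrisTest):
+        def test_deprecation_warning(self):
+            # Passes only if a warning matching the regular expression is
+            # raised within the context.
+            with self.assertWarnsRegexp("deprecated"):
+                warnings.warn("this usage is deprecated")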
+
+Temporary files
+---------------
+
+It's also possible to generate temporary files in a concise fashion with
+:meth:`~iris.tests.IrisTest_nometa.temp_filename`.
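+
+For example, a minimal sketch (the cube being saved is hypothetical):
+
+.. code-block:: python
+
+    import os.path
+
+    import iris
+    from iris.cube import Cube
+    import iris.tests as tests
+
+
+    class TestSaving(tests.IrisTest):
+        def test_save_netcdf(self):
+            cube = Cube([0.0, 1.0, 2.0], long_name="example")
+            # The temporary file is removed when the context exits.
+            with self.temp_filename(suffix=".nc") as filename:
+                iris.save(cube, filename)
+                self.assertTrue(os.path.exists(filename))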
+
+Patching
+========
+
+:meth:`~iris.tests.IrisTest_nometa.patch` is a wrapper around ``unittest.mock.patch``
+that will be automatically cleaned up at the end of the test.
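+
+For example, a minimal sketch (the patched target and test are hypothetical):
+
+.. code-block:: python
+
+    import iris
+    import iris.tests as tests
+
+
+    class TestPatching(tests.IrisTest):
+        def test_patched_load(self):
+            # Replace iris.load_cube with a mock for this test only; the
+            # patch is undone automatically at the end of the test.
+            mocked = self.patch("iris.load_cube", return_value="stub")
+            self.assertEqual(iris.load_cube("made_up.nc"), "stub")
+            mocked.assert_called_once()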
+
+Graphic tests
+=============
+
+As a package capable of generating graphical outputs, Iris has utilities for
+creating and updating graphical tests - see :ref:`testing.graphics` for more
+information.
\ No newline at end of file
diff --git a/docs/src/further_topics/index.rst b/docs/src/further_topics/index.rst
deleted file mode 100644
index 81bff2f764..0000000000
--- a/docs/src/further_topics/index.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-.. _further topics:
-
-Introduction
-============
-
-Some specific areas of Iris may require further explanation or a deep dive
-into additional detail above and beyond that offered by the
-:ref:`User Guide `.
-
-This section provides a collection of additional material on focused topics
-that may be of interest to the more advanced or curious user.
-
-.. hint::
-
- If you wish further documentation on any specific topics or areas of Iris
- that are missing, then please let us know by raising a :issue:`GitHub Documentation Issue`
- on `SciTools/Iris`_.
-
-
-* :doc:`metadata`
-* :doc:`lenient_metadata`
-* :doc:`lenient_maths`
-* :ref:`ugrid`
-
-
-.. _SciTools/iris: https://github.com/SciTools/iris
diff --git a/docs/src/further_topics/metadata.rst b/docs/src/further_topics/metadata.rst
index 1b81f7055c..de1afb15af 100644
--- a/docs/src/further_topics/metadata.rst
+++ b/docs/src/further_topics/metadata.rst
@@ -1,3 +1,4 @@
+.. _further topics:
.. _metadata:
Metadata
@@ -63,25 +64,26 @@ For example, the collective metadata used to define an
``var_name``, ``units``, and ``attributes`` members. Note that, these are the
actual `data attribute`_ names of the metadata members on the Iris class.
+
.. _metadata members table:
-.. table:: - Iris classes that model `CF Conventions`_ metadata
+.. table:: Iris classes that model `CF Conventions`_ metadata
:widths: auto
:align: center
- =================== ======================================= ============================== ========================================== ================================= ======================== ============================== ===================
- Metadata Members :class:`~iris.coords.AncillaryVariable` :class:`~iris.coords.AuxCoord` :class:`~iris.aux_factory.AuxCoordFactory` :class:`~iris.coords.CellMeasure` :class:`~iris.cube.Cube` :class:`~iris.coords.DimCoord` Metadata Members
- =================== ======================================= ============================== ========================================== ================================= ======================== ============================== ===================
- ``standard_name`` ✔ ✔ ✔ ✔ ✔ ✔ ``standard_name``
- ``long_name`` ✔ ✔ ✔ ✔ ✔ ✔ ``long_name``
- ``var_name`` ✔ ✔ ✔ ✔ ✔ ✔ ``var_name``
- ``units`` ✔ ✔ ✔ ✔ ✔ ✔ ``units``
- ``attributes`` ✔ ✔ ✔ ✔ ✔ ✔ ``attributes``
- ``coord_system`` ✔ ✔ ✔ ``coord_system``
- ``climatological`` ✔ ✔ ✔ ``climatological``
- ``measure`` ✔ ``measure``
- ``cell_methods`` ✔ ``cell_methods``
- ``circular`` ✔ ``circular``
- =================== ======================================= ============================== ========================================== ================================= ======================== ============================== ===================
+ =================== ======================================= ============================== ========================================== ================================= ======================== ==============================
+ Metadata Members :class:`~iris.coords.AncillaryVariable` :class:`~iris.coords.AuxCoord` :class:`~iris.aux_factory.AuxCoordFactory` :class:`~iris.coords.CellMeasure` :class:`~iris.cube.Cube` :class:`~iris.coords.DimCoord`
+ =================== ======================================= ============================== ========================================== ================================= ======================== ==============================
+ ``standard_name`` ✔ ✔ ✔ ✔ ✔ ✔
+ ``long_name`` ✔ ✔ ✔ ✔ ✔ ✔
+ ``var_name`` ✔ ✔ ✔ ✔ ✔ ✔
+ ``units`` ✔ ✔ ✔ ✔ ✔ ✔
+ ``attributes`` ✔ ✔ ✔ ✔ ✔ ✔
+ ``coord_system`` ✔ ✔ ✔
+ ``climatological`` ✔ ✔ ✔
+ ``measure`` ✔
+ ``cell_methods`` ✔
+ ``circular`` ✔
+ =================== ======================================= ============================== ========================================== ================================= ======================== ==============================
.. note::
diff --git a/docs/src/further_topics/ugrid/data_model.rst b/docs/src/further_topics/ugrid/data_model.rst
index 4a2f64f627..cc3cc7b793 100644
--- a/docs/src/further_topics/ugrid/data_model.rst
+++ b/docs/src/further_topics/ugrid/data_model.rst
@@ -52,7 +52,7 @@ example.
.. _data_structured_grid:
.. figure:: images/data_structured_grid.svg
:alt: Diagram of how data is represented on a structured grid
- :align: right
+ :align: left
:width: 1280
Data on a structured grid.
@@ -131,7 +131,7 @@ example of what is described above.
.. _data_ugrid_mesh:
.. figure:: images/data_ugrid_mesh.svg
:alt: Diagram of how data is represented on an unstructured mesh
- :align: right
+ :align: left
:width: 1280
Data on an unstructured mesh
@@ -157,7 +157,7 @@ elements. See :numref:`ugrid_element_centres` for a visualised example.
.. _ugrid_element_centres:
.. figure:: images/ugrid_element_centres.svg
:alt: Diagram demonstrating mesh face-centred data.
- :align: right
+ :align: left
:width: 1280
Data can be assigned to mesh edge/face/volume 'centres'
@@ -180,7 +180,7 @@ Every node is completely independent - every one can have unique X and Y (and Z)
.. _ugrid_node_independence:
.. figure:: images/ugrid_node_independence.svg
:alt: Diagram demonstrating the independence of each mesh node
- :align: right
+ :align: left
:width: 300
Every mesh node is completely independent
@@ -199,7 +199,7 @@ array. See :numref:`ugrid_variable_faces`.
.. _ugrid_variable_faces:
.. figure:: images/ugrid_variable_faces.svg
:alt: Diagram demonstrating mesh faces with variable node counts
- :align: right
+ :align: left
:width: 300
Mesh faces can have different node counts (using masking)
@@ -216,7 +216,7 @@ areas (faces). See :numref:`ugrid_edge_data`.
.. _ugrid_edge_data:
.. figure:: images/ugrid_edge_data.svg
:alt: Diagram demonstrating data assigned to mesh edges
- :align: right
+ :align: left
:width: 300
Data can be assigned to mesh edges
@@ -405,6 +405,9 @@ the :class:`~iris.cube.Cube`\'s unstructured dimension.
Mesh coordinates:
latitude x -
longitude x -
+ Mesh:
+ name my_mesh
+ location edge
>>> print(edge_cube.location)
edge
diff --git a/docs/src/further_topics/ugrid/images/fesom_mesh.png b/docs/src/further_topics/ugrid/images/fesom_mesh.png
new file mode 100644
index 0000000000..283899a94b
Binary files /dev/null and b/docs/src/further_topics/ugrid/images/fesom_mesh.png differ
diff --git a/docs/src/further_topics/ugrid/images/smc_mesh.png b/docs/src/further_topics/ugrid/images/smc_mesh.png
new file mode 100644
index 0000000000..8c5a9d86eb
Binary files /dev/null and b/docs/src/further_topics/ugrid/images/smc_mesh.png differ
diff --git a/docs/src/further_topics/ugrid/index.rst b/docs/src/further_topics/ugrid/index.rst
index 81ba24428a..c45fd271a2 100644
--- a/docs/src/further_topics/ugrid/index.rst
+++ b/docs/src/further_topics/ugrid/index.rst
@@ -38,6 +38,7 @@ Read on to find out more...
* :doc:`data_model` - learn why the mesh experience is so different.
* :doc:`partner_packages` - meet some optional dependencies that provide powerful mesh operations.
* :doc:`operations` - experience how your workflows will look when written for mesh data.
+* :doc:`other_meshes` - check out some examples of converting various mesh formats into Iris' mesh format.
..
Need an actual TOC to get Sphinx working properly, but have hidden it in
@@ -50,5 +51,6 @@ Read on to find out more...
data_model
partner_packages
operations
+ other_meshes
__ CF-UGRID_
diff --git a/docs/src/further_topics/ugrid/operations.rst b/docs/src/further_topics/ugrid/operations.rst
index f96e3e406c..a4e0e593d7 100644
--- a/docs/src/further_topics/ugrid/operations.rst
+++ b/docs/src/further_topics/ugrid/operations.rst
@@ -189,6 +189,9 @@ Creating a :class:`~iris.cube.Cube` is unchanged; the
Mesh coordinates:
latitude x -
longitude x -
+ Mesh:
+ name my_mesh
+ location edge
Save
@@ -392,6 +395,9 @@ etcetera:
Mesh coordinates:
latitude x -
longitude x -
+ Mesh:
+ name my_mesh
+ location face
Attributes:
Conventions 'CF-1.7'
@@ -620,6 +626,9 @@ the link between :class:`~iris.cube.Cube` and
Mesh coordinates:
latitude x -
longitude x -
+ Mesh:
+ name my_mesh
+ location edge
# Sub-setted MeshCoords have become AuxCoords.
>>> print(edge_cube[:-1])
@@ -976,13 +985,26 @@ on dimensions other than the :meth:`~iris.cube.Cube.mesh_dim`, since such
Arithmetic
----------
-.. |tagline: arithmetic| replace:: |pending|
+.. |tagline: arithmetic| replace:: |unchanged|
.. rubric:: |tagline: arithmetic|
-:class:`~iris.cube.Cube` Arithmetic (described in :doc:`/userguide/cube_maths`)
-has not yet been adapted to handle :class:`~iris.cube.Cube`\s that include
-:class:`~iris.experimental.ugrid.MeshCoord`\s.
+Cube Arithmetic (described in :doc:`/userguide/cube_maths`)
+has been extended to handle :class:`~iris.cube.Cube`\s that include
+:class:`~iris.experimental.ugrid.MeshCoord`\s, and hence have a ``cube.mesh``.
+
+Cubes with meshes can be combined in arithmetic operations just like
+"ordinary" cubes. They can be combined with other cubes that lack that mesh
+(and its dimension), or with cubes carrying a matching mesh, which may be on
+a different dimension.
+Arithmetic can also be performed between a cube with a mesh and a mesh
+coordinate with a matching mesh.
+
+In all cases, the result will have the same mesh as the input cubes.
+
+Meshes only match if they are fully equal -- i.e. they contain all the same
+coordinates and connectivities, with identical names, units, attributes and
+data content.
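+
+For example, a minimal sketch, assuming ``cube_a`` and ``cube_b`` are cubes
+carrying identical meshes:
+
+.. code-block:: python
+
+    # Ordinary arithmetic syntax; the shared mesh is preserved on the result.
+    difference = cube_a - cube_b
+    print(difference.mesh == cube_a.mesh)  # True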
.. todo:
diff --git a/docs/src/further_topics/ugrid/other_meshes.rst b/docs/src/further_topics/ugrid/other_meshes.rst
new file mode 100644
index 0000000000..e6f477624e
--- /dev/null
+++ b/docs/src/further_topics/ugrid/other_meshes.rst
@@ -0,0 +1,225 @@
+.. _other_meshes:
+
+Converting Other Mesh Formats
+*****************************
+
+Iris' Mesh Data Model is based primarily on the CF-UGRID conventions (see
+:doc:`data_model`), but other mesh formats can be converted to fit into this
+model, **enabling use of Iris' specialised mesh support**. Below are some
+examples demonstrating how this works for various mesh formats.
+
+.. contents::
+ :local:
+
+`FESOM 1.4`_ Voronoi Polygons
+-----------------------------
+.. figure:: images/fesom_mesh.png
+ :width: 300
+ :alt: Sample of FESOM mesh voronoi polygons, with variable numbers of sides.
+
+A FESOM mesh encoded in a NetCDF file includes:
+
+* X+Y point coordinates
+* X+Y corners coordinates of the Voronoi Polygons around these points -
+ represented as the bounds of the coordinates
+
+To represent the Voronoi Polygons as faces, the corner coordinates will be used
+as the **nodes** when creating the Iris
+:class:`~iris.experimental.ugrid.mesh.Mesh`.
+
+.. dropdown:: :opticon:`code`
+
+ .. code-block:: python
+
+ >>> import iris
+ >>> from iris.experimental.ugrid import Mesh
+
+
+ >>> temperature_cube = iris.load_cube("my_file.nc", "sea_surface_temperature")
+ >>> print(temperature_cube)
+ sea_surface_temperature / (degC) (time: 12; -- : 126859)
+ Dimension coordinates:
+ time x -
+ Auxiliary coordinates:
+ latitude - x
+ longitude - x
+ Cell methods:
+ mean where sea area
+ mean time
+ Attributes:
+ grid 'FESOM 1.4 (unstructured grid in the horizontal with 126859 wet nodes;...
+ ...
+
+    >>> print(temperature_cube.coord("longitude"))
+    AuxCoord :  longitude / (degrees)
+        points: <lazy>
+        bounds: <lazy>
+        shape: (126859,)  bounds(126859, 18)
+        dtype: float64
+        standard_name: 'longitude'
+        var_name: 'lon'
+
+ # Use a Mesh to represent the Cube's horizontal geography, by replacing
+ # the existing face AuxCoords with new MeshCoords.
+ >>> fesom_mesh = Mesh.from_coords(temperature_cube.coord('longitude'),
+ ... temperature_cube.coord('latitude'))
+ >>> for new_coord in fesom_mesh.to_MeshCoords("face"):
+ ... old_coord = temperature_cube.coord(new_coord.name())
+ ... unstructured_dim, = old_coord.cube_dims(temperature_cube)
+ ... temperature_cube.remove_coord(old_coord)
+ ... temperature_cube.add_aux_coord(new_coord, unstructured_dim)
+
+ >>> print(temperature_cube)
+ sea_surface_temperature / (degC) (time: 12; -- : 126859)
+ Dimension coordinates:
+ time x -
+ Mesh coordinates:
+ latitude - x
+ longitude - x
+ Cell methods:
+ mean where sea area
+ mean time
+ Attributes:
+ grid 'FESOM 1.4 (unstructured grid in the horizontal with 126859 wet nodes;...
+ ...
+
+    >>> print(temperature_cube.mesh)
+    Mesh : 'unknown'
+        topology_dimension: 2
+        node
+            node_dimension: 'Mesh2d_node'
+            node coordinates
+                <AuxCoord: longitude / (degrees)  <lazy>  shape(2283462,)>
+                <AuxCoord: latitude / (degrees)  <lazy>  shape(2283462,)>
+        face
+            face_dimension: 'Mesh2d_face'
+            face_node_connectivity: <Connectivity: face_node_connectivity / (unknown)  <lazy>  shape(126859, 18)>
+            face coordinates
+                <AuxCoord: longitude / (degrees)  <lazy>  shape(126859,)>
+                <AuxCoord: latitude / (degrees)  <lazy>  shape(126859,)>
+
+`WAVEWATCH III`_ Spherical Multi-Cell (SMC) WAVE Quad Grid
+----------------------------------------------------------
+.. figure:: images/smc_mesh.png
+ :width: 300
+ :alt: Sample of an SMC mesh, with decreasing quad sizes at the coastlines.
+
+An SMC grid encoded in a NetCDF file includes:
+
+* X+Y face centre coordinates
+* X+Y base face sizes
+* X+Y face size factors
+
+From this information we can derive face corner coordinates, which will be used
+as the **nodes** when creating the Iris
+:class:`~iris.experimental.ugrid.mesh.Mesh`.
+
+
+.. dropdown:: :opticon:`code`
+
+ .. code-block:: python
+
+ >>> import iris
+ >>> from iris.experimental.ugrid import Mesh
+ >>> import numpy as np
+
+
+ >>> wave_cube = iris.load_cube("my_file.nc", "sea_surface_wave_significant_height")
+ >>> print(wave_cube)
+ sea_surface_wave_significant_height / (m) (time: 7; -- : 666328)
+ Dimension coordinates:
+ time x -
+ Auxiliary coordinates:
+ forecast_period x -
+ latitude - x
+ latitude cell size factor - x
+ longitude - x
+ longitude cell size factor - x
+ Scalar coordinates:
+ forecast_reference_time 2021-12-05 00:00:00
+ Attributes:
+ SIN4 namelist parameter BETAMAX 1.39
+ SMC_grid_type 'seapoint'
+ WAVEWATCH_III_switches 'NOGRB SHRD PR2 UNO SMC FLX0 LN1 ST4 NL1 BT1 DB1 TR0 BS0 IC0 IS0 REF0 WNT1...
+ WAVEWATCH_III_version_number '7.13'
+ altitude_resolution 'n/a'
+ area 'Global wave model GS512L4EUK'
+ base_lat_size 0.029296871
+ base_lon_size 0.043945305
+ ...
+
+ >>> faces_x = wave_cube.coord("longitude")
+ >>> faces_y = wave_cube.coord("latitude")
+ >>> face_size_factor_x = wave_cube.coord("longitude cell size factor")
+ >>> face_size_factor_y = wave_cube.coord("latitude cell size factor")
+ >>> base_x_size = wave_cube.attributes["base_lon_size"]
+ >>> base_y_size = wave_cube.attributes["base_lat_size"]
+
+ # Calculate face corners from face centres and face size factors.
+ >>> face_centres_x = faces_x.points
+ >>> face_centres_y = faces_y.points
+ >>> face_size_x = face_size_factor_x.points * base_x_size
+ >>> face_size_y = face_size_factor_y.points * base_y_size
+
+ >>> x_mins = (face_centres_x - 0.5 * face_size_x).reshape(-1, 1)
+ >>> x_maxs = (face_centres_x + 0.5 * face_size_x).reshape(-1, 1)
+ >>> y_mins = (face_centres_y - 0.5 * face_size_y).reshape(-1, 1)
+ >>> y_maxs = (face_centres_y + 0.5 * face_size_y).reshape(-1, 1)
+
+ >>> face_corners_x = np.hstack([x_mins, x_maxs, x_maxs, x_mins])
+ >>> face_corners_y = np.hstack([y_mins, y_mins, y_maxs, y_maxs])
+
+ # Add face corners as coordinate bounds.
+ >>> faces_x.bounds = face_corners_x
+ >>> faces_y.bounds = face_corners_y
+
+ # Use a Mesh to represent the Cube's horizontal geography, by replacing
+ # the existing face AuxCoords with new MeshCoords.
+ >>> smc_mesh = Mesh.from_coords(faces_x, faces_y)
+ >>> for new_coord in smc_mesh.to_MeshCoords("face"):
+ ... old_coord = wave_cube.coord(new_coord.name())
+ ... unstructured_dim, = old_coord.cube_dims(wave_cube)
+ ... wave_cube.remove_coord(old_coord)
+ ... wave_cube.add_aux_coord(new_coord, unstructured_dim)
+
+ >>> print(wave_cube)
+ sea_surface_wave_significant_height / (m) (time: 7; -- : 666328)
+ Dimension coordinates:
+ time x -
+ Mesh coordinates:
+ latitude - x
+ longitude - x
+ Auxiliary coordinates:
+ forecast_period x -
+ latitude cell size factor - x
+ longitude cell size factor - x
+ Scalar coordinates:
+ forecast_reference_time 2021-12-05 00:00:00
+ Attributes:
+ SIN4 namelist parameter BETAMAX 1.39
+ SMC_grid_type 'seapoint'
+ WAVEWATCH_III_switches 'NOGRB SHRD PR2 UNO SMC FLX0 LN1 ST4 NL1 BT1 DB1 TR0 BS0 IC0 IS0 REF0 WNT1...
+ WAVEWATCH_III_version_number '7.13'
+ altitude_resolution 'n/a'
+ area 'Global wave model GS512L4EUK'
+ base_lat_size 0.029296871
+ base_lon_size 0.043945305
+ ...
+
+    >>> print(wave_cube.mesh)
+    Mesh : 'unknown'
+        topology_dimension: 2
+        node
+            node_dimension: 'Mesh2d_node'
+            node coordinates
+                <AuxCoord: longitude / (degrees)  [...]  shape(2665312,)>
+                <AuxCoord: latitude / (degrees)  [...]  shape(2665312,)>
+        face
+            face_dimension: 'Mesh2d_face'
+            face_node_connectivity: <Connectivity: face_node_connectivity / (unknown)  [...]  shape(666328, 4)>
+            face coordinates
+                <AuxCoord: longitude / (degrees)  [...]  shape(666328,)>
+                <AuxCoord: latitude / (degrees)  [...]  shape(666328,)>
+
+.. _WAVEWATCH III: https://github.com/NOAA-EMC/WW3
+.. _FESOM 1.4: https://fesom.de/models/fesom14/
diff --git a/docs/src/getting_started.rst b/docs/src/getting_started.rst
new file mode 100644
index 0000000000..24299a4060
--- /dev/null
+++ b/docs/src/getting_started.rst
@@ -0,0 +1,15 @@
+.. _getting_started_index:
+
+Getting Started
+===============
+
+To get started with Iris we recommend reading :ref:`why_iris` was created and
+exploring the examples in the :ref:`gallery_index` after :ref:`installing_iris`
+Iris.
+
+.. toctree::
+ :maxdepth: 1
+
+ why_iris
+ installing
+ generated/gallery/index
\ No newline at end of file
diff --git a/docs/src/index.rst b/docs/src/index.rst
index e6a787a220..b9f7faaa03 100644
--- a/docs/src/index.rst
+++ b/docs/src/index.rst
@@ -1,7 +1,9 @@
+.. include:: common_links.inc
.. _iris_docs:
-Iris |version|
-========================
+
+Iris
+====
**A powerful, format-agnostic, community-driven Python package for analysing
and visualising Earth science data.**
@@ -11,157 +13,137 @@ giving you a powerful, format-agnostic interface for working with your data.
It excels when working with multi-dimensional Earth Science data, where tabular
representations become unwieldy and inefficient.
-`CF Standard names `_,
-`units `_, and coordinate metadata
-are built into Iris, giving you a rich and expressive interface for maintaining
-an accurate representation of your data. Its treatment of data and
-associated metadata as first-class objects includes:
-
-* visualisation interface based on `matplotlib `_ and
- `cartopy `_,
-* unit conversion,
-* subsetting and extraction,
-* merge and concatenate,
-* aggregations and reductions (including min, max, mean and weighted averages),
-* interpolation and regridding (including nearest-neighbor, linear and
- area-weighted), and
-* operator overloads (``+``, ``-``, ``*``, ``/``, etc.).
-
-A number of file formats are recognised by Iris, including CF-compliant NetCDF,
-GRIB, and PP, and it has a plugin architecture to allow other formats to be
-added seamlessly.
-
-Building upon `NumPy `_ and
-`dask `_, Iris scales from efficient
-single-machine workflows right through to multi-core clusters and HPC.
-Interoperability with packages from the wider scientific Python ecosystem comes
-from Iris' use of standard NumPy/dask arrays as its underlying data storage.
-
-Iris is part of SciTools, for more information see https://scitools.org.uk/.
-For **Iris 2.4** and earlier documentation please see the
-:link-badge:`https://scitools.org.uk/iris/docs/v2.4.0/,"legacy documentation",cls=badge-info text-white`.
-
+For more information see :ref:`why_iris`.
.. panels::
:container: container-lg pb-3
- :column: col-lg-4 col-md-4 col-sm-6 col-xs-12 p-2
+ :column: col-lg-4 col-md-4 col-sm-6 col-xs-12 p-2 text-center
+ :img-top-cls: w-50 m-auto px-1 py-2
- Install Iris as a user or developer.
- +++
- .. link-button:: installing_iris
- :type: ref
- :text: Installing Iris
- :classes: btn-outline-primary btn-block
---
- Example code to create a variety of plots.
+ :img-top: _static/icon_shuttle.svg
+
+ Information on Iris, how to install and a gallery of examples that
+ create plots.
+++
- .. link-button:: sphx_glr_generated_gallery
+ .. link-button:: getting_started
:type: ref
- :text: Gallery
- :classes: btn-outline-primary btn-block
+ :text: Getting Started
+ :classes: btn-outline-info btn-block
+
+
---
- Find out what has recently changed in Iris.
+ :img-top: _static/icon_instructions.svg
+
+ Learn how to use Iris, including loading, navigating, saving,
+ plotting and more.
+++
- .. link-button:: iris_whatsnew
+ .. link-button:: user_guide_index
:type: ref
- :text: What's New
- :classes: btn-outline-primary btn-block
+ :text: User Guide
+ :classes: btn-outline-info btn-block
+
---
- Learn how to use Iris.
+ :img-top: _static/icon_development.svg
+
+ As a developer you can contribute to Iris.
+++
- .. link-button:: user_guide_index
+ .. link-button:: development_where_to_start
:type: ref
- :text: User Guide
- :classes: btn-outline-primary btn-block
+ :text: Developers Guide
+ :classes: btn-outline-info btn-block
+
---
+ :img-top: _static/icon_api.svg
+
Browse full Iris functionality by module.
+++
.. link-button:: Iris
:type: ref
:text: Iris API
- :classes: btn-outline-primary btn-block
+ :classes: btn-outline-info btn-block
+
---
- As a developer you can contribute to Iris.
+ :img-top: _static/icon_new_product.svg
+
+ Find out what has recently changed in Iris.
+++
- .. link-button:: development_where_to_start
+ .. link-button:: iris_whatsnew
+ :type: ref
+ :text: What's New
+ :classes: btn-outline-info btn-block
+
+ ---
+ :img-top: _static/icon_thumb.png
+
+ Raise the profile of issues by voting on them.
+ +++
+ .. link-button:: voted_issues_top
:type: ref
- :text: Getting Involved
- :classes: btn-outline-primary btn-block
+ :text: Voted Issues
+ :classes: btn-outline-info btn-block
-.. toctree::
- :maxdepth: 1
- :caption: Getting Started
- :hidden:
+Icons made by `FreePik <https://www.freepik.com>`_ from
+`Flaticon <https://www.flaticon.com/>`_
+
+
+Support
+~~~~~~~
+
+We, the Iris developers, have adopted `GitHub Discussions`_ to capture any
+discussions or support questions related to Iris.
+
+See also `StackOverflow for "How Do I?" <https://stackoverflow.com/questions/tagged/python-iris>`_,
+which may be useful, although we do not actively monitor it.
+
+The legacy support resources:
- installing
- generated/gallery/index
+* `Users Google Group `_
+* `Developers Google Group `_
+* `Legacy Documentation`_ (Iris 2.4 or earlier)
.. toctree::
+ :caption: Getting Started
:maxdepth: 1
- :caption: What's New in Iris
:hidden:
- whatsnew/latest
- Archive
+ getting_started
.. toctree::
- :maxdepth: 1
:caption: User Guide
+ :maxdepth: 1
:name: userguide_index
:hidden:
userguide/index
- userguide/iris_cubes
- userguide/loading_iris_cubes
- userguide/saving_iris_cubes
- userguide/navigating_a_cube
- userguide/subsetting_a_cube
- userguide/real_and_lazy_data
- userguide/plotting_a_cube
- userguide/interpolation_and_regridding
- userguide/merge_and_concat
- userguide/cube_statistics
- userguide/cube_maths
- userguide/citation
- userguide/code_maintenance
-
-
-.. _developers_guide:
+
.. toctree::
+ :caption: Developers Guide
:maxdepth: 1
- :caption: Further Topics
+ :name: developers_index
:hidden:
- further_topics/index
- further_topics/metadata
- further_topics/lenient_metadata
- further_topics/lenient_maths
- further_topics/ugrid/index
+ developers_guide/contributing_getting_involved
.. toctree::
- :maxdepth: 2
- :caption: Developers Guide
- :name: development_index
+ :caption: Iris API
+ :maxdepth: 1
:hidden:
- developers_guide/contributing_getting_involved
- developers_guide/gitwash/index
- developers_guide/contributing_documentation
- developers_guide/contributing_codebase_index
- developers_guide/contributing_changes
- developers_guide/release
+ generated/api/iris
.. toctree::
+ :caption: What's New in Iris
:maxdepth: 1
- :caption: Reference
+ :name: whats_new_index
:hidden:
- generated/api/iris
- techpapers/index
- copyright
+ whatsnew/index
+
+.. todolist::
\ No newline at end of file
diff --git a/docs/src/installing.rst b/docs/src/installing.rst
index 37a8942ab3..6a2d2f6131 100644
--- a/docs/src/installing.rst
+++ b/docs/src/installing.rst
@@ -1,7 +1,7 @@
.. _installing_iris:
-Installing Iris
-===============
+Installing
+==========
Iris is available using conda for the following platforms:
@@ -119,9 +119,9 @@ Running the Tests
To ensure your setup is configured correctly you can run the test suite using
the command::
- python setup.py test
+ pytest
-For more information see :ref:`developer_running_tests`.
+For more information see :ref:`test manual env`.
Custom Site Configuration
diff --git a/docs/src/sphinxext/image_test_output.py b/docs/src/sphinxext/image_test_output.py
deleted file mode 100644
index 9e492a5be9..0000000000
--- a/docs/src/sphinxext/image_test_output.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright Iris contributors
-#
-# This file is part of Iris and is released under the LGPL license.
-# See COPYING and COPYING.LESSER in the root of the repository for full
-# licensing details.
-
-import json
-import re
-from typing import Dict, List
-
-from docutils import nodes
-from sphinx.application import Sphinx
-from sphinx.util.docutils import SphinxDirective
-
-ImageRepo = Dict[str, List[str]]
-
-HASH_MATCH = re.compile(r"([^\/]+)\.png$")
-
-
-def hash_from_url(url: str) -> str:
- match = HASH_MATCH.search(url)
- if not match:
- raise ValueError(f"url {url} does not match form `http...hash.png`")
- else:
- return match.groups()[0]
-
-
-class ImageTestDirective(SphinxDirective):
- def run(self):
- with open(self.config["image_test_json"], "r") as fh:
- imagerepo = json.load(fh)
- enum_list = nodes.enumerated_list()
- nodelist = []
- nodelist.append(enum_list)
- for test in sorted(imagerepo):
- link_node = nodes.raw(
- "",
- f'{test}',
- format="html",
- )
- li_node = nodes.list_item("")
- li_node += link_node
- enum_list += li_node
- return nodelist
-
-
-def collect_imagehash_pages(app: Sphinx):
- """Generate pages for each entry in the imagerepo.json"""
- with open(app.config["image_test_json"], "r") as fh:
- imagerepo: ImageRepo = json.load(fh)
- pages = []
- for test, hashfiles in imagerepo.items():
- hashstrs = [hash_from_url(h) for h in hashfiles]
- pages.append(
- (
- f"generated/image_test/{test}",
- {"test": test, "hashfiles": zip(hashstrs, hashfiles)},
- "imagehash.html",
- )
- )
- return pages
-
-
-def setup(app: Sphinx):
- app.add_config_value(
- "image_test_json",
- "../../lib/iris/tests/results/imagerepo.json",
- "html",
- )
-
- app.add_directive("imagetest-list", ImageTestDirective)
- app.connect("html-collect-pages", collect_imagehash_pages)
-
- return {
- "version": "0.1",
- "parallel_read_safe": True,
- "parallel_write_safe": True,
- }
diff --git a/docs/src/techpapers/um_files_loading.rst b/docs/src/techpapers/um_files_loading.rst
index 72d34962ce..f8c94cab08 100644
--- a/docs/src/techpapers/um_files_loading.rst
+++ b/docs/src/techpapers/um_files_loading.rst
@@ -350,7 +350,7 @@ information is contained in the :attr:`~iris.coords.Coord.units` property.
always 1st Jan 1970 (times before this are represented as negative values).
The units.calendar property of time coordinates is set from the lowest decimal
-digit of LBTIM, known as LBTIM.IC. Note that the non-gregorian calendars (e.g.
+digit of LBTIM, known as LBTIM.IC. Note that the non-standard calendars (e.g.
360-day 'model' calendar) are defined in CF, not udunits.
There are a number of different time encoding methods used in UM data, but the
diff --git a/docs/src/userguide/code_maintenance.rst b/docs/src/userguide/code_maintenance.rst
index b2b498bc80..c01c1975a7 100644
--- a/docs/src/userguide/code_maintenance.rst
+++ b/docs/src/userguide/code_maintenance.rst
@@ -12,17 +12,17 @@ In practice, as Iris develops, most users will want to periodically upgrade
their installed version to access new features or at least bug fixes.
This is obvious if you are still developing other code that uses Iris, or using
-code from other sources.
+code from other sources.
However, even if you have only legacy code that remains untouched, some code
maintenance effort is probably still necessary:
- * On the one hand, *in principle*, working code will go on working, as long
- as you don't change anything else.
+* On the one hand, *in principle*, working code will go on working, as long
+ as you don't change anything else.
- * However, such "version stasis" can easily become a growing burden, if you
- are simply waiting until an update becomes unavoidable, often that will
- eventually occur when you need to update some other software component,
- for some completely unconnected reason.
+* However, such "version stasis" can easily become a growing burden if you
+  are simply waiting until an update becomes unavoidable; often that will
+ eventually occur when you need to update some other software component,
+ for some completely unconnected reason.
Principles of Change Management
@@ -35,13 +35,13 @@ In Iris, however, we aim to reduce code maintenance problems to an absolute
minimum by following defined change management rules.
These ensure that, *within a major release number* :
- * you can be confident that your code will still work with subsequent minor
- releases
+* you can be confident that your code will still work with subsequent minor
+ releases
- * you will be aware of future incompatibility problems in advance
+* you will be aware of future incompatibility problems in advance
- * you can defer making code compatibility changes for some time, until it
- suits you
+* you can defer making code compatibility changes for some time, until it
+ suits you
The above applies to minor version upgrades : e.g. code that works with version
"1.4.2" should still work with a subsequent minor release such as "1.5.0" or
diff --git a/docs/src/userguide/cube_maths.rst b/docs/src/userguide/cube_maths.rst
index e8a1744a44..fe9a5d63d2 100644
--- a/docs/src/userguide/cube_maths.rst
+++ b/docs/src/userguide/cube_maths.rst
@@ -38,7 +38,7 @@ Let's load some air temperature which runs from 1860 to 2100::
air_temp = iris.load_cube(filename, 'air_temperature')
We can now get the first and last time slices using indexing
-(see :ref:`subsetting_a_cube` for a reminder)::
+(see :ref:`cube_indexing` for a reminder)::
t_first = air_temp[0, :, :]
t_last = air_temp[-1, :, :]
diff --git a/docs/src/userguide/index.rst b/docs/src/userguide/index.rst
index 2a3b32fe11..08923e7662 100644
--- a/docs/src/userguide/index.rst
+++ b/docs/src/userguide/index.rst
@@ -1,31 +1,47 @@
.. _user_guide_index:
.. _user_guide_introduction:
-Introduction
-============
+User Guide
+==========
-If you are reading this user guide for the first time it is strongly recommended that you read the user guide
-fully before experimenting with your own data files.
+If you are reading this user guide for the first time, it is strongly
+recommended that you read it fully before experimenting with your
+own data files.
-
-Much of the content has supplementary links to the reference documentation; you will not need to follow these
-links in order to understand the guide but they may serve as a useful reference for future exploration.
+Much of the content has supplementary links to the reference documentation;
+you will not need to follow these links in order to understand the guide but
+they may serve as a useful reference for future exploration.
.. only:: html
- Since later pages depend on earlier ones, try reading this user guide sequentially using the ``next`` and ``previous`` links.
-
-
-* :doc:`iris_cubes`
-* :doc:`loading_iris_cubes`
-* :doc:`saving_iris_cubes`
-* :doc:`navigating_a_cube`
-* :doc:`subsetting_a_cube`
-* :doc:`real_and_lazy_data`
-* :doc:`plotting_a_cube`
-* :doc:`interpolation_and_regridding`
-* :doc:`merge_and_concat`
-* :doc:`cube_statistics`
-* :doc:`cube_maths`
-* :doc:`citation`
-* :doc:`code_maintenance`
+ Since later pages depend on earlier ones, try reading this user guide
+ sequentially using the ``next`` and ``previous`` links at the bottom
+ of each page.
+
+
+.. toctree::
+ :maxdepth: 2
+
+ iris_cubes
+ loading_iris_cubes
+ saving_iris_cubes
+ navigating_a_cube
+ subsetting_a_cube
+ real_and_lazy_data
+ plotting_a_cube
+ interpolation_and_regridding
+ merge_and_concat
+ cube_statistics
+ cube_maths
+ citation
+ code_maintenance
+
+
+.. toctree::
+ :maxdepth: 2
+ :caption: Further Topics
+
+ ../further_topics/metadata
+ ../further_topics/lenient_metadata
+ ../further_topics/lenient_maths
+ ../further_topics/ugrid/index
diff --git a/docs/src/userguide/interpolation_and_regridding.rst b/docs/src/userguide/interpolation_and_regridding.rst
index f590485606..deae4427ed 100644
--- a/docs/src/userguide/interpolation_and_regridding.rst
+++ b/docs/src/userguide/interpolation_and_regridding.rst
@@ -19,14 +19,14 @@ In Iris we refer to the available types of interpolation and regridding as
`schemes`. The following are the interpolation schemes that are currently
available in Iris:
- * linear interpolation (:class:`iris.analysis.Linear`), and
- * nearest-neighbour interpolation (:class:`iris.analysis.Nearest`).
+* linear interpolation (:class:`iris.analysis.Linear`), and
+* nearest-neighbour interpolation (:class:`iris.analysis.Nearest`).
The following are the regridding schemes that are currently available in Iris:
- * linear regridding (:class:`iris.analysis.Linear`),
- * nearest-neighbour regridding (:class:`iris.analysis.Nearest`), and
- * area-weighted regridding (:class:`iris.analysis.AreaWeighted`, first-order conservative).
+* linear regridding (:class:`iris.analysis.Linear`),
+* nearest-neighbour regridding (:class:`iris.analysis.Nearest`), and
+* area-weighted regridding (:class:`iris.analysis.AreaWeighted`, first-order conservative).
The linear, nearest-neighbor, and area-weighted regridding schemes support
lazy regridding, i.e. if the source cube has lazy data, the resulting cube
@@ -42,8 +42,8 @@ Interpolation
Interpolating a cube is achieved with the :meth:`~iris.cube.Cube.interpolate`
method. This method expects two arguments:
- #. the sample points to interpolate, and
- #. the interpolation scheme to use.
+#. the sample points to interpolate, and
+#. the interpolation scheme to use.
The result is a new cube, interpolated at the sample points.
@@ -51,9 +51,9 @@ Sample points must be defined as an iterable of ``(coord, value(s))`` pairs.
The `coord` argument can be either a coordinate name or coordinate instance.
The specified coordinate must exist on the cube being interpolated! For example:
- * coordinate names and scalar sample points: ``[('latitude', 51.48), ('longitude', 0)]``,
- * a coordinate instance and a scalar sample point: ``[(cube.coord('latitude'), 51.48)]``, and
- * a coordinate name and a NumPy array of sample points: ``[('longitude', np.linspace(-11, 2, 14))]``
+* coordinate names and scalar sample points: ``[('latitude', 51.48), ('longitude', 0)]``,
+* a coordinate instance and a scalar sample point: ``[(cube.coord('latitude'), 51.48)]``, and
+* a coordinate name and a NumPy array of sample points: ``[('longitude', np.linspace(-11, 2, 14))]``
are all examples of valid sample points.
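+
+For instance, a minimal sketch combining these forms (assuming the
+``air_temp`` sample cube used elsewhere in this guide)::
+
+    import numpy as np
+    import iris
+    from iris.analysis import Linear
+
+    air_temp = iris.load_cube(iris.sample_data_path('air_temp.pp'))
+    sample_points = [('latitude', 51.48),
+                     ('longitude', np.linspace(-11, 2, 14))]
+    result = air_temp.interpolate(sample_points, Linear())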
@@ -175,11 +175,11 @@ The extrapolation mode is controlled by the ``extrapolation_mode`` keyword.
For the available interpolation schemes available in Iris, the ``extrapolation_mode``
keyword must be one of:
- * ``extrapolate`` -- the extrapolation points will be calculated by extending the gradient of the closest two points,
- * ``error`` -- a ValueError exception will be raised, notifying an attempt to extrapolate,
- * ``nan`` -- the extrapolation points will be be set to NaN,
- * ``mask`` -- the extrapolation points will always be masked, even if the source data is not a MaskedArray, or
- * ``nanmask`` -- if the source data is a MaskedArray the extrapolation points will be masked. Otherwise they will be set to NaN.
+* ``extrapolate`` -- the extrapolation points will be calculated by extending the gradient of the closest two points,
+* ``error`` -- a ValueError exception will be raised, notifying an attempt to extrapolate,
+* ``nan`` -- the extrapolation points will be set to NaN,
+* ``mask`` -- the extrapolation points will always be masked, even if the source data is not a MaskedArray, or
+* ``nanmask`` -- if the source data is a MaskedArray the extrapolation points will be masked. Otherwise they will be set to NaN.
Using an extrapolation mode is achieved by constructing an interpolation scheme
with the extrapolation mode keyword set as required. The constructed scheme
@@ -206,8 +206,8 @@ intensive part of an interpolation is setting up the interpolator.
To cache an interpolator you must set up an interpolator scheme and call the
scheme's interpolator method. The interpolator method takes as arguments:
- #. a cube to be interpolated, and
- #. an iterable of coordinate names or coordinate instances of the coordinates that are to be interpolated over.
+#. a cube to be interpolated, and
+#. an iterable of coordinate names or coordinate instances of the coordinates that are to be interpolated over.
For example:
@@ -244,8 +244,8 @@ regridding is based on the **horizontal** grid of *another cube*.
Regridding a cube is achieved with the :meth:`cube.regrid() <iris.cube.Cube.regrid>` method.
This method expects two arguments:
- #. *another cube* that defines the target grid onto which the cube should be regridded, and
- #. the regridding scheme to use.
+#. *another cube* that defines the target grid onto which the cube should be regridded, and
+#. the regridding scheme to use.
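+
+For instance, a minimal sketch (borrowing the ``rotated_psl`` and
+``global_air_temp`` cube names introduced later in this section)::
+
+    regridded_psl = rotated_psl.regrid(global_air_temp, iris.analysis.Linear())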
.. note::
@@ -278,15 +278,15 @@ mode when defining the regridding scheme.
For the available regridding schemes in Iris, the ``extrapolation_mode`` keyword
must be one of:
- * ``extrapolate`` --
+* ``extrapolate`` --
- * for :class:`~iris.analysis.Linear` the extrapolation points will be calculated by extending the gradient of the closest two points.
- * for :class:`~iris.analysis.Nearest` the extrapolation points will take their value from the nearest source point.
+ * for :class:`~iris.analysis.Linear` the extrapolation points will be calculated by extending the gradient of the closest two points.
+ * for :class:`~iris.analysis.Nearest` the extrapolation points will take their value from the nearest source point.
- * ``nan`` -- the extrapolation points will be be set to NaN.
- * ``error`` -- a ValueError exception will be raised, notifying an attempt to extrapolate.
- * ``mask`` -- the extrapolation points will always be masked, even if the source data is not a MaskedArray.
- * ``nanmask`` -- if the source data is a MaskedArray the extrapolation points will be masked. Otherwise they will be set to NaN.
+* ``nan`` -- the extrapolation points will be set to NaN.
+* ``error`` -- a ValueError exception will be raised, notifying an attempt to extrapolate.
+* ``mask`` -- the extrapolation points will always be masked, even if the source data is not a MaskedArray.
+* ``nanmask`` -- if the source data is a MaskedArray the extrapolation points will be masked. Otherwise they will be set to NaN.
The ``rotated_psl`` cube is defined on a limited area rotated pole grid. If we regridded
the ``rotated_psl`` cube onto the global grid as defined by the ``global_air_temp`` cube
@@ -395,8 +395,8 @@ intensive part of a regrid is setting up the regridder.
To cache a regridder you must set up a regridder scheme and call the
scheme's regridder method. The regridder method takes as arguments:
- #. a cube (that is to be regridded) defining the source grid, and
- #. a cube defining the target grid to regrid the source cube to.
+#. a cube (that is to be regridded) defining the source grid, and
+#. a cube defining the target grid to regrid the source cube to.
For example:
diff --git a/docs/src/userguide/iris_cubes.rst b/docs/src/userguide/iris_cubes.rst
index d13dee369c..29d8f3cefc 100644
--- a/docs/src/userguide/iris_cubes.rst
+++ b/docs/src/userguide/iris_cubes.rst
@@ -4,82 +4,105 @@
Iris Data Structures
====================
-The top level object in Iris is called a cube. A cube contains data and metadata about a phenomenon.
+The top level object in Iris is called a cube. A cube contains data and
+metadata about a phenomenon.
-In Iris, a cube is an interpretation of the *Climate and Forecast (CF) Metadata Conventions* whose purpose is to:
+In Iris, a cube is an interpretation of the *Climate and Forecast (CF)
+Metadata Conventions* whose purpose is to:
- *require conforming datasets to contain sufficient metadata that they are self-describing... including physical
- units if appropriate, and that each value can be located in space (relative to earth-based coordinates) and time.*
+.. panels::
+ :container: container-lg pb-3
+ :column: col-lg-12 p-2
-Whilst the CF conventions are often mentioned alongside NetCDF, Iris implements several major format importers which can take
-files of specific formats and turn them into Iris cubes. Additionally, a framework is provided which allows users
-to extend Iris' import capability to cater for specialist or unimplemented formats.
+ *require conforming datasets to contain sufficient metadata that they are
+ self-describing... including physical units if appropriate, and that each
+ value can be located in space (relative to earth-based coordinates) and
+ time.*
-A single cube describes one and only one phenomenon, always has a name, a unit and
-an n-dimensional data array to represents the cube's phenomenon. In order to locate the
-data spatially, temporally, or in any other higher-dimensional space, a collection of *coordinates*
-exist on the cube.
+
+Whilst the CF conventions are often mentioned alongside NetCDF, Iris implements
+several major format importers which can take files of specific formats and
+turn them into Iris cubes. Additionally, a framework is provided which allows
+users to extend Iris' import capability to cater for specialist or
+unimplemented formats.
+
+A single cube describes one and only one phenomenon, always has a name, a unit
+and an n-dimensional data array that represents the cube's phenomenon. In order
+to locate the data spatially, temporally, or in any other higher-dimensional
+space, a collection of *coordinates* exists on the cube.
Coordinates
===========
-A coordinate is a container to store metadata about some dimension(s) of a cube's data array and therefore,
-by definition, its phenomenon.
-
- * Each coordinate has a name and a unit.
- * When a coordinate is added to a cube, the data dimensions that it represents are also provided.
-
- * The shape of a coordinate is always the same as the shape of the associated data dimension(s) on the cube.
- * A dimension not explicitly listed signifies that the coordinate is independent of that dimension.
- * Each dimension of a coordinate must be mapped to a data dimension. The only coordinates with no mapping are
- scalar coordinates.
-
- * Depending on the underlying data that the coordinate is representing, its values may be discrete points or be
- bounded to represent interval extents (e.g. temperature at *point x* **vs** rainfall accumulation *between 0000-1200 hours*).
- * Coordinates have an attributes dictionary which can hold arbitrary extra metadata, excluding certain restricted CF names
- * More complex coordinates may contain a coordinate system which is necessary to fully interpret the values
- contained within the coordinate.
-
+A coordinate is a container to store metadata about some dimension(s) of a
+cube's data array and therefore, by definition, its phenomenon.
+
+* Each coordinate has a name and a unit.
+* When a coordinate is added to a cube, the data dimensions that it
+ represents are also provided.
+
+ * The shape of a coordinate is always the same as the shape of the
+ associated data dimension(s) on the cube.
+ * A dimension not explicitly listed signifies that the coordinate is
+ independent of that dimension.
+ * Each dimension of a coordinate must be mapped to a data dimension. The
+ only coordinates with no mapping are scalar coordinates.
+
+* Depending on the underlying data that the coordinate is representing, its
+ values may be discrete points or be bounded to represent interval extents
+ (e.g. temperature at *point x* **vs** rainfall accumulation *between
+ 0000-1200 hours*).
+* Coordinates have an attributes dictionary which can hold arbitrary extra
+  metadata, excluding certain restricted CF names.
+* More complex coordinates may contain a coordinate system which is
+ necessary to fully interpret the values contained within the coordinate.
+
There are two classes of coordinates:
- **DimCoord**
-
- * Numeric
- * Monotonic
- * Representative of, at most, a single data dimension (1d)
+**DimCoord**
+
+* Numeric
+* Monotonic
+* Representative of, at most, a single data dimension (1d)
+
+**AuxCoord**
+
+* May be of any type, including strings
+* May represent multiple data dimensions (n-dimensional)
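+
+For illustration, a minimal sketch of constructing one of each (the names and
+values here are invented)::
+
+    import numpy as np
+    from iris.coords import AuxCoord, DimCoord
+
+    # Numeric, monotonic and 1d - suitable as a dimension coordinate.
+    latitude = DimCoord(np.array([-45.0, 0.0, 45.0]),
+                        standard_name='latitude', units='degrees')
+
+    # Strings are allowed for an auxiliary coordinate.
+    season = AuxCoord(['djf', 'mam', 'jja'], long_name='season')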
- **AuxCoord**
-
- * May be of any type, including strings
- * May represent multiple data dimensions (n-dimensional)
-
Cube
====
A cube consists of:
- * a standard name and/or a long name and an appropriate unit
- * a data array who's values are representative of the phenomenon
- * a collection of coordinates and associated data dimensions on the cube's data array, which are split into two separate lists:
+* a standard name and/or a long name and an appropriate unit
+* a data array whose values are representative of the phenomenon
+* a collection of coordinates and associated data dimensions on the cube's
+ data array, which are split into two separate lists:
+
+ * *dimension coordinates* - DimCoords which uniquely map to exactly one
+ data dimension, ordered by dimension.
+ * *auxiliary coordinates* - DimCoords or AuxCoords which map to as many
+    data dimensions as the coordinate has dimensions.
+
- * *dimension coordinates* - DimCoords which uniquely map to exactly one data dimension, ordered by dimension.
- * *auxiliary coordinates* - DimCoords or AuxCoords which map to as many data dimensions as the coordinate has dimensions.
-
- * an attributes dictionary which, other than some protected CF names, can hold arbitrary extra metadata.
- * a list of cell methods to represent operations which have already been applied to the data (e.g. "mean over time")
- * a list of coordinate "factories" used for deriving coordinates from the values of other coordinates in the cube
+* an attributes dictionary which, other than some protected CF names, can
+ hold arbitrary extra metadata.
+* a list of cell methods to represent operations which have already been
+ applied to the data (e.g. "mean over time")
+* a list of coordinate "factories" used for deriving coordinates from the
+ values of other coordinates in the cube
Cubes in Practice
-----------------
-
A Simple Cube Example
=====================
-Suppose we have some gridded data which has 24 air temperature readings (in Kelvin) which is located at
-4 different longitudes, 2 different latitudes and 3 different heights. Our data array can be represented pictorially:
+Suppose we have some gridded data of 24 air temperature readings
+(in Kelvin), located at 4 different longitudes, 2 different latitudes
+and 3 different heights. Our data array can be represented pictorially:
.. image:: multi_array.png
@@ -87,61 +110,66 @@ Where dimensions 0, 1, and 2 have lengths 3, 2 and 4 respectively.
The Iris cube to represent this data would consist of:
- * a standard name of ``air_temperature`` and a unit of ``kelvin``
- * a data array of shape ``(3, 2, 4)``
- * a coordinate, mapping to dimension 0, consisting of:
-
- * a standard name of ``height`` and unit of ``meters``
- * an array of length 3 representing the 3 ``height`` points
-
- * a coordinate, mapping to dimension 1, consisting of:
-
- * a standard name of ``latitude`` and unit of ``degrees``
- * an array of length 2 representing the 2 latitude points
- * a coordinate system such that the ``latitude`` points could be fully located on the globe
-
- * a coordinate, mapping to dimension 2, consisting of:
-
- * a standard name of ``longitude`` and unit of ``degrees``
- * an array of length 4 representing the 4 longitude points
- * a coordinate system such that the ``longitude`` points could be fully located on the globe
-
+* a standard name of ``air_temperature`` and a unit of ``kelvin``
+* a data array of shape ``(3, 2, 4)``
+* a coordinate, mapping to dimension 0, consisting of:
+
+ * a standard name of ``height`` and unit of ``meters``
+  * an array of length 3 representing the 3 ``height`` points
+
+* a coordinate, mapping to dimension 1, consisting of:
+
+  * a standard name of ``latitude`` and unit of ``degrees``
+ * an array of length 2 representing the 2 latitude points
+ * a coordinate system such that the ``latitude`` points could be fully
+    located on the globe
+
-Pictorially the cube has taken on more information than a simple array:
+* a coordinate, mapping to dimension 2, consisting of:
+
+ * a standard name of ``longitude`` and unit of ``degrees``
+ * an array of length 4 representing the 4 longitude points
+ * a coordinate system such that the ``longitude`` points could be fully
+ located on the globe
+
+Pictorially the cube has taken on more information than a simple array:
.. image:: multi_array_to_cube.png
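+
+As an illustrative sketch (the coordinate values here are invented), such a
+cube could be constructed directly::
+
+    import numpy as np
+    import iris
+    from iris.coords import DimCoord
+
+    data = np.zeros((3, 2, 4), dtype=np.float32)  # the 24 readings
+    height = DimCoord([5.0, 10.0, 15.0], standard_name='height',
+                      units='meters')
+    latitude = DimCoord([-30.0, 30.0], standard_name='latitude',
+                        units='degrees')
+    longitude = DimCoord([0.0, 90.0, 180.0, 270.0],
+                         standard_name='longitude', units='degrees')
+    cube = iris.cube.Cube(data, standard_name='air_temperature',
+                          units='kelvin',
+                          dim_coords_and_dims=[(height, 0), (latitude, 1),
+                                               (longitude, 2)])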
-Additionally further information may be optionally attached to the cube.
-For example, it is possible to attach any of the following:
-
- * a coordinate, not mapping to any data dimensions, consisting of:
-
- * a standard name of ``time`` and unit of ``days since 2000-01-01 00:00``
- * a data array of length 1 representing the time that the data array is valid for
-
- * an auxiliary coordinate, mapping to dimensions 1 and 2, consisting of:
-
- * a long name of ``place name`` and no unit
- * a 2d string array of shape ``(2, 4)`` with the names of the 8 places that the lat/lons correspond to
-
- * an auxiliary coordinate "factory", which can derive its own mapping, consisting of:
-
- * a standard name of ``height`` and a unit of ``feet``
- * knowledge of how data values for this coordinate can be calculated given the ``height in meters`` coordinate
-
- * a cell method of "mean" over "ensemble" to indicate that the data has been meaned over
- a collection of "ensembles" (i.e. multiple model runs).
+Additionally, further information may optionally be attached to the cube.
+For example, it is possible to attach any of the following:
+
+* a coordinate, not mapping to any data dimensions, consisting of:
+
+ * a standard name of ``time`` and unit of ``days since 2000-01-01 00:00``
+ * a data array of length 1 representing the time that the data array is
+ valid for
+
+* an auxiliary coordinate, mapping to dimensions 1 and 2, consisting of:
+
+ * a long name of ``place name`` and no unit
+ * a 2d string array of shape ``(2, 4)`` with the names of the 8 places
+ that the lat/lons correspond to
+
+* an auxiliary coordinate "factory", which can derive its own mapping,
+ consisting of:
+
+ * a standard name of ``height`` and a unit of ``feet``
+ * knowledge of how data values for this coordinate can be calculated
+ given the ``height in meters`` coordinate
+
+* a cell method of "mean" over "ensemble" to indicate that the data has been
+ meaned over a collection of "ensembles" (i.e. multiple model runs).
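+
+Extending the construction sketch above, attaching one of these optional items
+might look like this (the shape ``(2, 4)`` matches dimensions 1 and 2)::
+
+    from iris.coords import AuxCoord
+
+    place_name = AuxCoord([['a', 'b', 'c', 'd'],
+                           ['e', 'f', 'g', 'h']],
+                          long_name='place name')
+    cube.add_aux_coord(place_name, (1, 2))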
Printing a Cube
===============
-Every Iris cube can be printed to screen as you will see later in the user guide. It is worth familiarising yourself with the
-output as this is the quickest way of inspecting the contents of a cube. Here is the result of printing a real life cube:
+Every Iris cube can be printed to screen as you will see later in the user
+guide. It is worth familiarising yourself with the output as this is the
+quickest way of inspecting the contents of a cube. Here is the result of
+printing a real life cube:
.. _hybrid_cube_printout:
@@ -150,7 +178,7 @@ output as this is the quickest way of inspecting the contents of a cube. Here is
import iris
filename = iris.sample_data_path('uk_hires.pp')
- # NOTE: Every time the output of this cube changes, the full list of deductions below should be re-assessed.
+ # NOTE: Every time the output of this cube changes, the full list of deductions below should be re-assessed.
print(iris.load_cube(filename, 'air_potential_temperature'))
.. testoutput::
@@ -178,16 +206,22 @@ output as this is the quickest way of inspecting the contents of a cube. Here is
Using this output we can deduce that:
- * The cube represents air potential temperature.
- * There are 4 data dimensions, and the data has a shape of ``(3, 7, 204, 187)``
- * The 4 data dimensions are mapped to the ``time``, ``model_level_number``,
- ``grid_latitude``, ``grid_longitude`` coordinates respectively
- * There are three 1d auxiliary coordinates and one 2d auxiliary (``surface_altitude``)
- * There is a single ``altitude`` derived coordinate, which spans 3 data dimensions
- * There are 7 distinct values in the "model_level_number" coordinate. Similar inferences can
- be made for the other dimension coordinates.
- * There are 7, not necessarily distinct, values in the ``level_height`` coordinate.
- * There is a single ``forecast_reference_time`` scalar coordinate representing the entire cube.
- * The cube has one further attribute relating to the phenomenon.
- In this case the originating file format, PP, encodes information in a STASH code which in some cases can
- be useful for identifying advanced experiment information relating to the phenomenon.
+* The cube represents air potential temperature.
+* There are 4 data dimensions, and the data has a shape of ``(3, 7, 204, 187)``
+* The 4 data dimensions are mapped to the ``time``, ``model_level_number``,
+ ``grid_latitude``, ``grid_longitude`` coordinates respectively
+* There are three 1d auxiliary coordinates and one 2d auxiliary
+ (``surface_altitude``)
+* There is a single ``altitude`` derived coordinate, which spans 3 data
+ dimensions
+* There are 7 distinct values in the ``model_level_number`` coordinate. Similar
+  inferences can be made for the other dimension coordinates.
+* There are 7, not necessarily distinct, values in the ``level_height``
+ coordinate.
+* There is a single ``forecast_reference_time`` scalar coordinate representing
+ the entire cube.
+* The cube has one further attribute relating to the phenomenon.
+ In this case the originating file format, PP, encodes information in a STASH
+ code which in some cases can be useful for identifying advanced experiment
+ information relating to the phenomenon.
diff --git a/docs/src/userguide/loading_iris_cubes.rst b/docs/src/userguide/loading_iris_cubes.rst
index fb938975e8..33ad932d70 100644
--- a/docs/src/userguide/loading_iris_cubes.rst
+++ b/docs/src/userguide/loading_iris_cubes.rst
@@ -39,15 +39,15 @@ This shows that there were 2 cubes as a result of loading the file, they were:
The ``surface_altitude`` cube was 2 dimensional with:
- * the two dimensions have extents of 204 and 187 respectively and are
- represented by the ``grid_latitude`` and ``grid_longitude`` coordinates.
+* the two dimensions have extents of 204 and 187 respectively and are
+ represented by the ``grid_latitude`` and ``grid_longitude`` coordinates.
The ``air_potential_temperature`` cubes were 4 dimensional with:
- * the same length ``grid_latitude`` and ``grid_longitude`` dimensions as
- ``surface_altitide``
- * a ``time`` dimension of length 3
- * a ``model_level_number`` dimension of length 7
+* the same length ``grid_latitude`` and ``grid_longitude`` dimensions as
+  ``surface_altitude``
+* a ``time`` dimension of length 3
+* a ``model_level_number`` dimension of length 7
.. note::
@@ -55,7 +55,7 @@ The ``air_potential_temperature`` cubes were 4 dimensional with:
(even if it only contains one :class:`iris.cube.Cube` - see
:ref:`strict-loading`). Anything that can be done with a Python
:class:`list` can be done with an :class:`iris.cube.CubeList`.
-
+
The order of this list should not be relied upon. Ways of loading a
specific cube or cubes are covered in :ref:`constrained-loading` and
:ref:`strict-loading`.
@@ -206,241 +206,8 @@ a specific ``model_level_number``::
level_10 = iris.Constraint(model_level_number=10)
cubes = iris.load(filename, level_10)
-Constraints can be combined using ``&`` to represent a more restrictive
-constraint to ``load``::
-
- filename = iris.sample_data_path('uk_hires.pp')
- forecast_6 = iris.Constraint(forecast_period=6)
- level_10 = iris.Constraint(model_level_number=10)
- cubes = iris.load(filename, forecast_6 & level_10)
-
-.. note::
-
- Whilst ``&`` is supported, the ``|`` that might reasonably be expected is
- not. Explanation as to why is in the :class:`iris.Constraint` reference
- documentation.
-
- For an example of constraining to multiple ranges of the same coordinate to
- generate one cube, see the :class:`iris.Constraint` reference documentation.
-
- To generate multiple cubes, each constrained to a different range of the
- same coordinate, use :py:func:`iris.load_cubes`.
-
-As well as being able to combine constraints using ``&``,
-the :class:`iris.Constraint` class can accept multiple arguments,
-and a list of values can be given to constrain a coordinate to one of
-a collection of values::
-
- filename = iris.sample_data_path('uk_hires.pp')
- level_10_or_16_fp_6 = iris.Constraint(model_level_number=[10, 16], forecast_period=6)
- cubes = iris.load(filename, level_10_or_16_fp_6)
-
-A common requirement is to limit the value of a coordinate to a specific range,
-this can be achieved by passing the constraint a function::
-
- def bottom_16_levels(cell):
- # return True or False as to whether the cell in question should be kept
- return cell <= 16
-
- filename = iris.sample_data_path('uk_hires.pp')
- level_lt_16 = iris.Constraint(model_level_number=bottom_16_levels)
- cubes = iris.load(filename, level_lt_16)
-
-.. note::
-
- As with many of the examples later in this documentation, the
- simple function above can be conveniently written as a lambda function
- on a single line::
-
- bottom_16_levels = lambda cell: cell <= 16
-
-
-Note also the :ref:`warning on equality constraints with floating point coordinates `.
-
-
-Cube attributes can also be part of the constraint criteria. Supposing a
-cube attribute of ``STASH`` existed, as is the case when loading ``PP`` files,
-then specific STASH codes can be filtered::
-
- filename = iris.sample_data_path('uk_hires.pp')
- level_10_with_stash = iris.AttributeConstraint(STASH='m01s00i004') & iris.Constraint(model_level_number=10)
- cubes = iris.load(filename, level_10_with_stash)
-
-.. seealso::
-
- For advanced usage there are further examples in the
- :class:`iris.Constraint` reference documentation.
-
-
-Constraining a Circular Coordinate Across its Boundary
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Occasionally you may need to constrain your cube with a region that crosses the
-boundary of a circular coordinate (this is often the meridian or the dateline /
-antimeridian). An example use-case of this is to extract the entire Pacific Ocean
-from a cube whose longitudes are bounded by the dateline.
-
-This functionality cannot be provided reliably using constraints. Instead you should use the
-functionality provided by :meth:`cube.intersection `
-to extract this region.
-
-
-.. _using-time-constraints:
-
-Constraining on Time
-^^^^^^^^^^^^^^^^^^^^
-Iris follows NetCDF-CF rules in representing time coordinate values as normalised,
-purely numeric, values which are normalised by the calendar specified in the coordinate's
-units (e.g. "days since 1970-01-01").
-However, when constraining by time we usually want to test calendar-related
-aspects such as hours of the day or months of the year, so Iris
-provides special features to facilitate this:
-
-Firstly, when Iris evaluates Constraint expressions, it will convert time-coordinate
-values (points and bounds) from numbers into :class:`~datetime.datetime`-like objects
-for ease of calendar-based testing.
-
- >>> filename = iris.sample_data_path('uk_hires.pp')
- >>> cube_all = iris.load_cube(filename, 'air_potential_temperature')
- >>> print('All times :\n' + str(cube_all.coord('time')))
- All times :
- DimCoord : time / (hours since 1970-01-01 00:00:00, gregorian calendar)
- points: [2009-11-19 10:00:00, 2009-11-19 11:00:00, 2009-11-19 12:00:00]
- shape: (3,)
- dtype: float64
- standard_name: 'time'
- >>> # Define a function which accepts a datetime as its argument (this is simplified in later examples).
- >>> hour_11 = iris.Constraint(time=lambda cell: cell.point.hour == 11)
- >>> cube_11 = cube_all.extract(hour_11)
- >>> print('Selected times :\n' + str(cube_11.coord('time')))
- Selected times :
- DimCoord : time / (hours since 1970-01-01 00:00:00, gregorian calendar)
- points: [2009-11-19 11:00:00]
- shape: (1,)
- dtype: float64
- standard_name: 'time'
-
-Secondly, the :class:`iris.time` module provides flexible time comparison
-facilities. An :class:`iris.time.PartialDateTime` object can be compared to
-objects such as :class:`datetime.datetime` instances, and this comparison will
-then test only those 'aspects' which the PartialDateTime instance defines:
-
- >>> import datetime
- >>> from iris.time import PartialDateTime
- >>> dt = datetime.datetime(2011, 3, 7)
- >>> print(dt > PartialDateTime(year=2010, month=6))
- True
- >>> print(dt > PartialDateTime(month=6))
- False
- >>>
-
-These two facilities can be combined to provide straightforward calendar-based
-time selections when loading or extracting data.
-
-The previous constraint example can now be written as:
-
- >>> the_11th_hour = iris.Constraint(time=iris.time.PartialDateTime(hour=11))
- >>> print(iris.load_cube(
- ... iris.sample_data_path('uk_hires.pp'),
- ... 'air_potential_temperature' & the_11th_hour).coord('time'))
- DimCoord : time / (hours since 1970-01-01 00:00:00, gregorian calendar)
- points: [2009-11-19 11:00:00]
- shape: (1,)
- dtype: float64
- standard_name: 'time'
-
-It is common that a cube will need to be constrained between two given dates.
-In the following example we construct a time sequence representing the first
-day of every week for many years:
-
-.. testsetup:: timeseries_range
-
- import datetime
- import numpy as np
- from iris.time import PartialDateTime
- long_ts = iris.cube.Cube(np.arange(150), long_name='data', units='1')
- _mondays = iris.coords.DimCoord(7 * np.arange(150), standard_name='time', units='days since 2007-04-09')
- long_ts.add_dim_coord(_mondays, 0)
-
-
-.. doctest:: timeseries_range
- :options: +NORMALIZE_WHITESPACE, +ELLIPSIS
-
- >>> print(long_ts.coord('time'))
- DimCoord : time / (days since 2007-04-09, gregorian calendar)
- points: [
- 2007-04-09 00:00:00, 2007-04-16 00:00:00, ...,
- 2010-02-08 00:00:00, 2010-02-15 00:00:00]
- shape: (150,)
- dtype: int64
- standard_name: 'time'
-
-Given two dates in datetime format, we can select all points between them.
-
-.. doctest:: timeseries_range
- :options: +NORMALIZE_WHITESPACE, +ELLIPSIS
-
- >>> d1 = datetime.datetime.strptime('20070715T0000Z', '%Y%m%dT%H%MZ')
- >>> d2 = datetime.datetime.strptime('20070825T0000Z', '%Y%m%dT%H%MZ')
- >>> st_swithuns_daterange_07 = iris.Constraint(
- ... time=lambda cell: d1 <= cell.point < d2)
- >>> within_st_swithuns_07 = long_ts.extract(st_swithuns_daterange_07)
- >>> print(within_st_swithuns_07.coord('time'))
- DimCoord : time / (days since 2007-04-09, gregorian calendar)
- points: [
- 2007-07-16 00:00:00, 2007-07-23 00:00:00, 2007-07-30 00:00:00,
- 2007-08-06 00:00:00, 2007-08-13 00:00:00, 2007-08-20 00:00:00]
- shape: (6,)
- dtype: int64
- standard_name: 'time'
-
-Alternatively, we may rewrite this using :class:`iris.time.PartialDateTime`
-objects.
-
-.. doctest:: timeseries_range
- :options: +NORMALIZE_WHITESPACE, +ELLIPSIS
-
- >>> pdt1 = PartialDateTime(year=2007, month=7, day=15)
- >>> pdt2 = PartialDateTime(year=2007, month=8, day=25)
- >>> st_swithuns_daterange_07 = iris.Constraint(
- ... time=lambda cell: pdt1 <= cell.point < pdt2)
- >>> within_st_swithuns_07 = long_ts.extract(st_swithuns_daterange_07)
- >>> print(within_st_swithuns_07.coord('time'))
- DimCoord : time / (days since 2007-04-09, gregorian calendar)
- points: [
- 2007-07-16 00:00:00, 2007-07-23 00:00:00, 2007-07-30 00:00:00,
- 2007-08-06 00:00:00, 2007-08-13 00:00:00, 2007-08-20 00:00:00]
- shape: (6,)
- dtype: int64
- standard_name: 'time'
-
-A more complex example might require selecting points over an annually repeating
-date range. We can select points within a certain part of the year, in this case
-between the 15th of July through to the 25th of August. By making use of
-PartialDateTime this becomes simple:
-
-.. doctest:: timeseries_range
-
- >>> st_swithuns_daterange = iris.Constraint(
- ... time=lambda cell: PartialDateTime(month=7, day=15) <= cell < PartialDateTime(month=8, day=25))
- >>> within_st_swithuns = long_ts.extract(st_swithuns_daterange)
- ...
- >>> # Note: using summary(max_values) to show more of the points
- >>> print(within_st_swithuns.coord('time').summary(max_values=100))
- DimCoord : time / (days since 2007-04-09, gregorian calendar)
- points: [
- 2007-07-16 00:00:00, 2007-07-23 00:00:00, 2007-07-30 00:00:00,
- 2007-08-06 00:00:00, 2007-08-13 00:00:00, 2007-08-20 00:00:00,
- 2008-07-21 00:00:00, 2008-07-28 00:00:00, 2008-08-04 00:00:00,
- 2008-08-11 00:00:00, 2008-08-18 00:00:00, 2009-07-20 00:00:00,
- 2009-07-27 00:00:00, 2009-08-03 00:00:00, 2009-08-10 00:00:00,
- 2009-08-17 00:00:00, 2009-08-24 00:00:00]
- shape: (17,)
- dtype: int64
- standard_name: 'time'
-
-Notice how the dates printed are between the range specified in the ``st_swithuns_daterange``
-and that they span multiple years.
+Further details on using :class:`iris.Constraint` are
+discussed later in :ref:`cube_extraction`.
.. _strict-loading:
diff --git a/docs/src/userguide/merge_and_concat.rst b/docs/src/userguide/merge_and_concat.rst
index e8425df5ec..08c3ce9711 100644
--- a/docs/src/userguide/merge_and_concat.rst
+++ b/docs/src/userguide/merge_and_concat.rst
@@ -22,14 +22,14 @@ result in fewer cubes as output. The following diagram illustrates the two proce
There is one major difference between the ``merge`` and ``concatenate`` processes.
- * The ``merge`` process combines multiple input cubes into a
- single resultant cube with new dimensions created from the
- *scalar coordinate values* of the input cubes.
-
- * The ``concatenate`` process combines multiple input cubes into a
- single resultant cube with the same *number of dimensions* as the input cubes,
- but with the length of one or more dimensions extended by *joining together
- sequential dimension coordinates*.
+* The ``merge`` process combines multiple input cubes into a
+ single resultant cube with new dimensions created from the
+ *scalar coordinate values* of the input cubes.
+
+* The ``concatenate`` process combines multiple input cubes into a
+ single resultant cube with the same *number of dimensions* as the input cubes,
+ but with the length of one or more dimensions extended by *joining together
+ sequential dimension coordinates*.
Let's imagine 28 individual cubes representing the
temperature at a location ``(y, x)``; one cube for each day of February. We can use
diff --git a/docs/src/userguide/plotting_a_cube.rst b/docs/src/userguide/plotting_a_cube.rst
index cfb3445d9b..a2334367c5 100644
--- a/docs/src/userguide/plotting_a_cube.rst
+++ b/docs/src/userguide/plotting_a_cube.rst
@@ -101,15 +101,15 @@ see :py:func:`matplotlib.pyplot.savefig`).
Some of the formats which are supported by **plt.savefig**:
- ====== ====== ======================================================================
- Format Type Description
- ====== ====== ======================================================================
- EPS Vector Encapsulated PostScript
- PDF Vector Portable Document Format
- PNG Raster Portable Network Graphics, a format with a lossless compression method
- PS Vector PostScript, ideal for printer output
- SVG Vector Scalable Vector Graphics, XML based
- ====== ====== ======================================================================
+====== ====== ======================================================================
+Format Type Description
+====== ====== ======================================================================
+EPS Vector Encapsulated PostScript
+PDF Vector Portable Document Format
+PNG Raster Portable Network Graphics, a format with a lossless compression method
+PS Vector PostScript, ideal for printer output
+SVG Vector Scalable Vector Graphics, XML based
+====== ====== ======================================================================
******************
Iris Cube Plotting
@@ -125,12 +125,12 @@ wrapper functions.
As a rule of thumb:
- * if you wish to do a visualisation with a cube, use ``iris.plot`` or
- ``iris.quickplot``.
- * if you wish to show, save or manipulate **any** visualisation,
- including ones created with Iris, use ``matplotlib.pyplot``.
- * if you wish to create a non cube visualisation, also use
- ``matplotlib.pyplot``.
+* if you wish to do a visualisation with a cube, use ``iris.plot`` or
+ ``iris.quickplot``.
+* if you wish to show, save or manipulate **any** visualisation,
+ including ones created with Iris, use ``matplotlib.pyplot``.
+* if you wish to create a non-cube visualisation, also use
+  ``matplotlib.pyplot``.
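+
+For instance, a minimal sketch following these rules (using a sample file
+featured elsewhere in this guide)::
+
+    import matplotlib.pyplot as plt
+    import iris
+    import iris.quickplot as qplt
+
+    cube = iris.load_cube(iris.sample_data_path('air_temp.pp'))
+    qplt.contourf(cube)          # cube-aware plotting
+    plt.savefig('air_temp.png')  # generic matplotlib for saving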
The ``iris.quickplot`` module is exactly the same as the ``iris.plot`` module,
except that ``quickplot`` will add a title, x and y labels and a colorbar
diff --git a/docs/src/userguide/real_and_lazy_data.rst b/docs/src/userguide/real_and_lazy_data.rst
index 0bc1846457..9d66a2f086 100644
--- a/docs/src/userguide/real_and_lazy_data.rst
+++ b/docs/src/userguide/real_and_lazy_data.rst
@@ -140,11 +140,11 @@ Core Data
Cubes have the concept of "core data". This returns the cube's data in its
current state:
- * If a cube has lazy data, calling the cube's :meth:`~iris.cube.Cube.core_data` method
- will return the cube's lazy dask array. Calling the cube's
- :meth:`~iris.cube.Cube.core_data` method **will never realise** the cube's data.
- * If a cube has real data, calling the cube's :meth:`~iris.cube.Cube.core_data` method
- will return the cube's real NumPy array.
+* If a cube has lazy data, calling the cube's :meth:`~iris.cube.Cube.core_data` method
+ will return the cube's lazy dask array. Calling the cube's
+ :meth:`~iris.cube.Cube.core_data` method **will never realise** the cube's data.
+* If a cube has real data, calling the cube's :meth:`~iris.cube.Cube.core_data` method
+ will return the cube's real NumPy array.
For example::
@@ -174,14 +174,14 @@ In the same way that Iris cubes contain a data array, Iris coordinates contain a
points array and an optional bounds array.
Coordinate points and bounds arrays can also be real or lazy:
- * A :class:`~iris.coords.DimCoord` will only ever have **real** points and bounds
- arrays because of monotonicity checks that realise lazy arrays.
- * An :class:`~iris.coords.AuxCoord` can have **real or lazy** points and bounds.
- * An :class:`~iris.aux_factory.AuxCoordFactory` (or derived coordinate)
- can have **real or lazy** points and bounds. If all of the
- :class:`~iris.coords.AuxCoord` instances used to construct the derived coordinate
- have real points and bounds then the derived coordinate will have real points
- and bounds, otherwise the derived coordinate will have lazy points and bounds.
+* A :class:`~iris.coords.DimCoord` will only ever have **real** points and bounds
+ arrays because of monotonicity checks that realise lazy arrays.
+* An :class:`~iris.coords.AuxCoord` can have **real or lazy** points and bounds.
+* An :class:`~iris.aux_factory.AuxCoordFactory` (or derived coordinate)
+ can have **real or lazy** points and bounds. If all of the
+ :class:`~iris.coords.AuxCoord` instances used to construct the derived coordinate
+ have real points and bounds then the derived coordinate will have real points
+ and bounds, otherwise the derived coordinate will have lazy points and bounds.
Iris cubes and coordinates have very similar interfaces, which extends to accessing
coordinates' lazy points and bounds:
diff --git a/docs/src/userguide/subsetting_a_cube.rst b/docs/src/userguide/subsetting_a_cube.rst
index 5112d9689a..c4f55490af 100644
--- a/docs/src/userguide/subsetting_a_cube.rst
+++ b/docs/src/userguide/subsetting_a_cube.rst
@@ -10,9 +10,10 @@ However it is often necessary to reduce the dimensionality of a cube down to som
Iris provides several ways of reducing both the amount of data and/or the number of dimensions in your cube depending on the circumstance.
In all cases **the subset of a valid cube is itself a valid cube**.
+.. _cube_extraction:
Cube Extraction
-^^^^^^^^^^^^^^^^
+---------------
A subset of a cube can be "extracted" from a multi-dimensional cube in order to reduce its dimensionality:
>>> import iris
@@ -34,15 +35,14 @@ A subset of a cube can be "extracted" from a multi-dimensional cube in order to
In this example we start with a 3 dimensional cube, with dimensions of ``height``, ``grid_latitude`` and ``grid_longitude``,
-and extract every point where the latitude is 0, resulting in a 2d cube with axes of ``height`` and ``grid_longitude``.
-
+and use :class:`iris.Constraint` to extract every point where the latitude is 0, resulting in a 2d cube with axes of ``height`` and ``grid_longitude``.
.. _floating-point-warning:
.. warning::
Caution is required when using equality constraints with floating point coordinates such as ``grid_latitude``.
Printing the points of a coordinate does not necessarily show the full precision of the underlying number and it
- is very easy return no matches to a constraint when one was expected.
+ is very easy to return no matches to a constraint when one was expected.
This can be avoided by using a function as the argument to the constraint::
def near_zero(cell):
@@ -68,6 +68,33 @@ The two steps required to get ``height`` of 9000 m at the equator can be simplif
equator_height_9km_slice = cube.extract(iris.Constraint(grid_latitude=0, height=9000))
print(equator_height_9km_slice)
+Alternatively, constraints can be combined using ``&``::
+
+ cube = iris.load_cube(filename, 'electron density')
+ equator_constraint = iris.Constraint(grid_latitude=0)
+ height_constraint = iris.Constraint(height=9000)
+ equator_height_9km_slice = cube.extract(equator_constraint & height_constraint)
+
+.. note::
+
+ Whilst ``&`` is supported, the ``|`` that might reasonably be expected is
+ not. Explanation as to why is in the :class:`iris.Constraint` reference
+ documentation.
+
+ For an example of constraining to multiple ranges of the same coordinate to
+ generate one cube, see the :class:`iris.Constraint` reference documentation.
+
+A common requirement is to limit the value of a coordinate to a specific
+range; this can be achieved by passing the constraint a function::
+
+ def below_9km(cell):
+ # return True or False as to whether the cell in question should be kept
+ return cell <= 9000
+
+ cube = iris.load_cube(filename, 'electron density')
+ height_below_9km = iris.Constraint(height=below_9km)
+ below_9km_slice = cube.extract(height_below_9km)
+
As we saw in :doc:`loading_iris_cubes` the result of :func:`iris.load` is a :class:`CubeList `.
The ``extract`` method also exists on a :class:`CubeList ` and behaves in exactly the
same way as loading with constraints:
@@ -100,9 +127,203 @@ same way as loading with constraints:
source 'Data from Met Office Unified Model'
um_version '7.3'
+Cube attributes can also be part of the constraint criteria. Supposing a
+cube attribute of ``STASH`` existed, as is the case when loading ``PP`` files,
+then specific STASH codes can be filtered::
+
+ filename = iris.sample_data_path('uk_hires.pp')
+ level_10_with_stash = iris.AttributeConstraint(STASH='m01s00i004') & iris.Constraint(model_level_number=10)
+ cubes = iris.load(filename).extract(level_10_with_stash)
+
+.. seealso::
+
+ For advanced usage there are further examples in the
+ :class:`iris.Constraint` reference documentation.
+
+Constraining a Circular Coordinate Across its Boundary
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Occasionally you may need to constrain your cube with a region that crosses the
+boundary of a circular coordinate (this is often the meridian or the dateline /
+antimeridian). An example use-case of this is to extract the entire Pacific Ocean
+from a cube whose longitudes are bounded by the dateline.
+
+This functionality cannot be provided reliably using constraints. Instead you should use the
+functionality provided by :meth:`cube.intersection <iris.cube.Cube.intersection>`
+to extract this region.
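+
+For example, given a global cube whose ``longitude`` coordinate is circular, a
+sketch of extracting a dateline-spanning region (the range here is
+illustrative)::
+
+    pacific = cube.intersection(longitude=(120, 300))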
+
+
+.. _using-time-constraints:
+
+Constraining on Time
+^^^^^^^^^^^^^^^^^^^^
+Iris follows NetCDF-CF rules in representing time coordinate values as purely
+numeric values, normalised by the calendar specified in the coordinate's
+units (e.g. "days since 1970-01-01").
+However, when constraining by time we usually want to test calendar-related
+aspects such as hours of the day or months of the year, so Iris
+provides special features to facilitate this.
+
+Firstly, when Iris evaluates :class:`iris.Constraint` expressions, it will convert
+time-coordinate values (points and bounds) from numbers into :class:`~datetime.datetime`-like
+objects for ease of calendar-based testing.
+
+ >>> filename = iris.sample_data_path('uk_hires.pp')
+ >>> cube_all = iris.load_cube(filename, 'air_potential_temperature')
+ >>> print('All times :\n' + str(cube_all.coord('time')))
+ All times :
+ DimCoord : time / (hours since 1970-01-01 00:00:00, standard calendar)
+ points: [2009-11-19 10:00:00, 2009-11-19 11:00:00, 2009-11-19 12:00:00]
+ shape: (3,)
+ dtype: float64
+ standard_name: 'time'
+ >>> # Define a function which accepts a datetime as its argument (this is simplified in later examples).
+ >>> hour_11 = iris.Constraint(time=lambda cell: cell.point.hour == 11)
+ >>> cube_11 = cube_all.extract(hour_11)
+ >>> print('Selected times :\n' + str(cube_11.coord('time')))
+ Selected times :
+ DimCoord : time / (hours since 1970-01-01 00:00:00, standard calendar)
+ points: [2009-11-19 11:00:00]
+ shape: (1,)
+ dtype: float64
+ standard_name: 'time'
+
+Secondly, the :class:`iris.time` module provides flexible time comparison
+facilities. An :class:`iris.time.PartialDateTime` object can be compared to
+objects such as :class:`datetime.datetime` instances, and this comparison will
+then test only those 'aspects' which the PartialDateTime instance defines:
+
+ >>> import datetime
+ >>> from iris.time import PartialDateTime
+ >>> dt = datetime.datetime(2011, 3, 7)
+ >>> print(dt > PartialDateTime(year=2010, month=6))
+ True
+ >>> print(dt > PartialDateTime(month=6))
+ False
+
+These two facilities can be combined to provide straightforward calendar-based
+time selections when loading or extracting data.
+
+The previous constraint example can now be written as:
+
+ >>> the_11th_hour = iris.Constraint(time=iris.time.PartialDateTime(hour=11))
+ >>> print(iris.load_cube(
+ ... iris.sample_data_path('uk_hires.pp'),
+ ... 'air_potential_temperature' & the_11th_hour).coord('time'))
+ DimCoord : time / (hours since 1970-01-01 00:00:00, standard calendar)
+ points: [2009-11-19 11:00:00]
+ shape: (1,)
+ dtype: float64
+ standard_name: 'time'
+
+It is common that a cube will need to be constrained between two given dates.
+In the following example we construct a time sequence representing the first
+day of every week for many years:
+
+.. testsetup:: timeseries_range
+
+ import datetime
+ import numpy as np
+ from iris.time import PartialDateTime
+ long_ts = iris.cube.Cube(np.arange(150), long_name='data', units='1')
+ _mondays = iris.coords.DimCoord(7 * np.arange(150), standard_name='time', units='days since 2007-04-09')
+ long_ts.add_dim_coord(_mondays, 0)
+
+
+.. doctest:: timeseries_range
+ :options: +NORMALIZE_WHITESPACE, +ELLIPSIS
+
+ >>> print(long_ts.coord('time'))
+ DimCoord : time / (days since 2007-04-09, standard calendar)
+ points: [
+ 2007-04-09 00:00:00, 2007-04-16 00:00:00, ...,
+ 2010-02-08 00:00:00, 2010-02-15 00:00:00]
+ shape: (150,)
+ dtype: int64
+ standard_name: 'time'
+
+Given two dates in datetime format, we can select all points between them.
+Instead of constraining at load time, we can constrain the existing time
+coordinate using :meth:`iris.cube.Cube.extract`:
+
+.. doctest:: timeseries_range
+ :options: +NORMALIZE_WHITESPACE, +ELLIPSIS
+
+ >>> d1 = datetime.datetime.strptime('20070715T0000Z', '%Y%m%dT%H%MZ')
+ >>> d2 = datetime.datetime.strptime('20070825T0000Z', '%Y%m%dT%H%MZ')
+ >>> st_swithuns_daterange_07 = iris.Constraint(
+ ... time=lambda cell: d1 <= cell.point < d2)
+ >>> within_st_swithuns_07 = long_ts.extract(st_swithuns_daterange_07)
+ >>> print(within_st_swithuns_07.coord('time'))
+ DimCoord : time / (days since 2007-04-09, standard calendar)
+ points: [
+ 2007-07-16 00:00:00, 2007-07-23 00:00:00, 2007-07-30 00:00:00,
+ 2007-08-06 00:00:00, 2007-08-13 00:00:00, 2007-08-20 00:00:00]
+ shape: (6,)
+ dtype: int64
+ standard_name: 'time'
+
+Alternatively, we may rewrite this using :class:`iris.time.PartialDateTime`
+objects.
+
+.. doctest:: timeseries_range
+ :options: +NORMALIZE_WHITESPACE, +ELLIPSIS
+
+ >>> pdt1 = PartialDateTime(year=2007, month=7, day=15)
+ >>> pdt2 = PartialDateTime(year=2007, month=8, day=25)
+ >>> st_swithuns_daterange_07 = iris.Constraint(
+ ... time=lambda cell: pdt1 <= cell.point < pdt2)
+ >>> within_st_swithuns_07 = long_ts.extract(st_swithuns_daterange_07)
+ >>> print(within_st_swithuns_07.coord('time'))
+ DimCoord : time / (days since 2007-04-09, standard calendar)
+ points: [
+ 2007-07-16 00:00:00, 2007-07-23 00:00:00, 2007-07-30 00:00:00,
+ 2007-08-06 00:00:00, 2007-08-13 00:00:00, 2007-08-20 00:00:00]
+ shape: (6,)
+ dtype: int64
+ standard_name: 'time'
+
+A more complex example might require selecting points over an annually repeating
+date range. We can select points within a certain part of the year, in this case
+between the 15th of July through to the 25th of August. By making use of
+PartialDateTime this becomes simple:
+
+.. doctest:: timeseries_range
+
+ >>> st_swithuns_daterange = iris.Constraint(
+ ... time=lambda cell: PartialDateTime(month=7, day=15) <= cell.point < PartialDateTime(month=8, day=25))
+ >>> within_st_swithuns = long_ts.extract(st_swithuns_daterange)
+ ...
+ >>> # Note: using summary(max_values) to show more of the points
+ >>> print(within_st_swithuns.coord('time').summary(max_values=100))
+ DimCoord : time / (days since 2007-04-09, standard calendar)
+ points: [
+ 2007-07-16 00:00:00, 2007-07-23 00:00:00, 2007-07-30 00:00:00,
+ 2007-08-06 00:00:00, 2007-08-13 00:00:00, 2007-08-20 00:00:00,
+ 2008-07-21 00:00:00, 2008-07-28 00:00:00, 2008-08-04 00:00:00,
+ 2008-08-11 00:00:00, 2008-08-18 00:00:00, 2009-07-20 00:00:00,
+ 2009-07-27 00:00:00, 2009-08-03 00:00:00, 2009-08-10 00:00:00,
+ 2009-08-17 00:00:00, 2009-08-24 00:00:00]
+ shape: (17,)
+ dtype: int64
+ standard_name: 'time'
+
+Notice how the dates printed are within the range specified in ``st_swithuns_daterange``
+and that they span multiple years.
+
+The above examples involve constraining on the points of the time coordinate. Constraining
+on bounds can be done in the following way::
+
+    import datetime
+
+    filename = iris.sample_data_path('ostia_monthly.nc')
+    cube = iris.load_cube(filename, 'surface_temperature')
+    dtmin = datetime.datetime(2008, 1, 1)
+    cube_after_2008 = cube.extract(iris.Constraint(
+        time=lambda cell: any(bound > dtmin for bound in cell.bound)))
+
+The above example constrains to cells where either the upper or lower bound occurs
+after 1st January 2008.
Cube Iteration
-^^^^^^^^^^^^^^^
+--------------
It is not possible to directly iterate over an Iris cube. That is, you cannot use code such as
``for x in cube:``. However, you can iterate over cube slices, as this section details.
@@ -151,9 +372,10 @@ slicing the 3 dimensional cube (15, 100, 100) by longitude (i starts at 0 and 15
Once your code can handle a 2d slice, it is then an easy step to loop over **all** 2d slices within the bigger
cube using the slices method.
+.. _cube_indexing:
Cube Indexing
-^^^^^^^^^^^^^
+-------------
In the same way that you would expect a numeric multidimensional array to be **indexed** to take a subset of your
original array, you can **index** a Cube for the same purpose.
diff --git a/docs/src/voted_issues.rst b/docs/src/voted_issues.rst
new file mode 100644
index 0000000000..7d983448b9
--- /dev/null
+++ b/docs/src/voted_issues.rst
@@ -0,0 +1,56 @@
+.. include:: common_links.inc
+
+.. _voted_issues_top:
+
+Voted Issues
+============
+
+You can help us to prioritise development of new features by leaving a 👍
+reaction on the opening post (not on subsequent comments) of any issue.
+
+.. tip:: We suggest you subscribe to the issue so you will be updated.
+ When viewing the issue there is a **Notifications**
+ section where you can select to subscribe.
+
+Below is a sorted table of all issues in our GitHub project that have 1 or
+more 👍. Please note that there is more development activity underway than
+is reflected in this table.
+
+.. _voted-issues.json: https://github.com/scitools/voted_issues/blob/main/voted-issues.json
+
+.. raw:: html
+
+   <!-- An interactive HTML table (columns: 👍, Issue, Author, Title),
+        populated from voted-issues.json, is rendered here. -->
+.. note:: The data in this table is updated every 30 minutes and is sourced
+          from `voted-issues.json`_.
+          For the latest data please see the `issues on GitHub`_.
+          Note that the list on GitHub does not show the number of 👍 votes,
+          only the total number of comments for the whole issue.
\ No newline at end of file
diff --git a/docs/src/whatsnew/3.0.rst b/docs/src/whatsnew/3.0.rst
index 771a602954..223ef60011 100644
--- a/docs/src/whatsnew/3.0.rst
+++ b/docs/src/whatsnew/3.0.rst
@@ -97,9 +97,8 @@ v3.0.2 (27 May 2021)
from collaborators targeting the Iris ``master`` branch. (:pull:`4007`)
[``pre-v3.1.0``]
- #. `@bjlittle`_ added conditional task execution to `.cirrus.yml`_ to allow
- developers to easily disable `cirrus-ci`_ tasks. See
- :ref:`skipping Cirrus-CI tasks`. (:pull:`4019`) [``pre-v3.1.0``]
+ #. `@bjlittle`_ added conditional task execution to ``.cirrus.yml`` to allow
+ developers to easily disable `cirrus-ci`_ tasks. (:pull:`4019`) [``pre-v3.1.0``]
#. `@pp-mo`_ adjusted the use of :func:`dask.array.from_array` in :func:`iris._lazy_data.as_lazy_data`,
to avoid the dask 'test access'. This makes loading of netcdf files with a
diff --git a/docs/src/whatsnew/3.1.rst b/docs/src/whatsnew/3.1.rst
index bd046a0a24..1f076572bc 100644
--- a/docs/src/whatsnew/3.1.rst
+++ b/docs/src/whatsnew/3.1.rst
@@ -227,9 +227,8 @@ This document explains the changes made to Iris for this release
#. `@akuhnregnier`_ replaced `deprecated numpy 1.20 aliases for builtin types`_.
(:pull:`3997`)
-#. `@bjlittle`_ added conditional task execution to `.cirrus.yml`_ to allow
- developers to easily disable `cirrus-ci`_ tasks. See
- :ref:`skipping Cirrus-CI tasks`. (:pull:`4019`)
+#. `@bjlittle`_ added conditional task execution to ``.cirrus.yml`` to allow
+ developers to easily disable `cirrus-ci`_ tasks. (:pull:`4019`)
#. `@bjlittle`_ and `@jamesp`_ addressed a regression in behaviour when using
`conda`_ 4.10.0 within `cirrus-ci`_. (:pull:`4084`)
@@ -291,9 +290,8 @@ This document explains the changes made to Iris for this release
#. `@bjlittle`_ enabled `cirrus-ci`_ compute credits for non-draft pull-requests
from collaborators targeting the Iris ``master`` branch. (:pull:`4007`)
-#. `@bjlittle`_ added conditional task execution to `.cirrus.yml`_ to allow
- developers to easily disable `cirrus-ci`_ tasks. See
- :ref:`skipping Cirrus-CI tasks`. (:pull:`4019`)
+#. `@bjlittle`_ added conditional task execution to ``.cirrus.yml`` to allow
+ developers to easily disable `cirrus-ci`_ tasks. (:pull:`4019`)
diff --git a/docs/src/whatsnew/dev.rst b/docs/src/whatsnew/3.2.rst
similarity index 92%
rename from docs/src/whatsnew/dev.rst
rename to docs/src/whatsnew/3.2.rst
index e2d4c2bc0b..723f26345e 100644
--- a/docs/src/whatsnew/dev.rst
+++ b/docs/src/whatsnew/3.2.rst
@@ -1,13 +1,13 @@
.. include:: ../common_links.inc
-|iris_version| |build_date| [unreleased]
-****************************************
+v3.2 (15 Feb 2022)
+******************
This document explains the changes made to Iris for this release
(:doc:`View all changes <index>`.)
-.. dropdown:: :opticon:`report` |iris_version| Release Highlights
+.. dropdown:: :opticon:`report` v3.2.0 Release Highlights
:container: + shadow
:title: text-primary text-center font-weight-bold
:body: bg-light
@@ -18,14 +18,37 @@ This document explains the changes made to Iris for this release
* We've added experimental support for
:ref:`Meshes <ugrid>`, which can now be loaded and
- attached to a cube. Mesh support is based on the based on `CF-UGRID`_
- model.
+ attached to a cube. Mesh support is based on the `CF-UGRID`_ model.
* We've also dropped support for ``Python 3.7``.
And finally, get in touch with us on :issue:`GitHub` if you have
any issues or feature requests for improving Iris. Enjoy!
+v3.2.1 (11 Mar 2022)
+====================
+
+.. dropdown:: :opticon:`alert` v3.2.1 Patches
+ :container: + shadow
+ :title: text-primary text-center font-weight-bold
+ :body: bg-light
+ :animate: fade-in
+
+ 📢 **Welcome** to `@dennissergeev`_, who made his first contribution to Iris. Nice work!
+
+ The patches in this release of Iris include:
+
+ 🐛 **Bugs Fixed**
+
+ #. `@dennissergeev`_ changed ``_crs_distance_differentials()`` so that it uses the ``Globe``
+ attribute from a given CRS instead of creating a new ``ccrs.Globe()`` object.
+ Iris can now handle non-Earth semi-major axes, as discussed in :issue:`4582` (:pull:`4605`).
+
+ #. `@trexfeathers`_ avoided a dimensionality mismatch when streaming the
+ :attr:`~iris.coords.Coord.bounds` array for a scalar
+ :class:`~iris.coords.Coord`. (:pull:`4610`).
+
+
📢 Announcements
================
@@ -103,7 +126,7 @@ This document explains the changes made to Iris for this release
of Iris (:issue:`4523`).
#. `@pp-mo`_ removed broken tooling for deriving Iris metadata translations
- from `Metarelate`_. From now we intend to manage phenonemon translation
+ from ``Metarelate``. From now on, we intend to manage phenomenon translation
in Iris itself. (:pull:`4484`)
#. `@pp-mo`_ improved printout of various cube data component objects:
@@ -175,9 +198,12 @@ This document explains the changes made to Iris for this release
from assuming the globe to be the Earth (:issue:`4408`, :pull:`4497`)
#. `@rcomer`_ corrected the ``long_name`` mapping from UM stash code ``m01s09i215``
- to indicate cloud fraction greater than 7.9 oktas, rather than 7.5
+ to indicate cloud fraction greater than 7.9 oktas, rather than 7.5
(:issue:`3305`, :pull:`4535`)
+#. `@lbdreyer`_ fixed a bug in :class:`iris.io.load_http` which was missing an import
+ (:pull:`4580`)
+
💣 Incompatible Changes
=======================
@@ -263,7 +289,7 @@ This document explains the changes made to Iris for this release
#. `@rcomer`_ updated the "Plotting Wind Direction Using Quiver" Gallery
example. (:pull:`4120`)
-#. `@trexfeathers`_ included `Iris GitHub Discussions`_ in
+#. `@trexfeathers`_ included Iris `GitHub Discussions`_ in
:ref:`get involved `. (:pull:`4307`)
#. `@wjbenfold`_ improved readability in :ref:`userguide interpolation
@@ -349,7 +375,7 @@ This document explains the changes made to Iris for this release
#. `@lbdreyer`_ corrected the license PyPI classifier. (:pull:`4435`)
-#. `@aaronspring `_ exchanged ``dask`` with
+#. `@aaronspring`_ exchanged ``dask`` with
``dask-core`` in testing environments reducing the number of dependencies
installed for testing. (:pull:`4434`)
@@ -366,6 +392,7 @@ This document explains the changes made to Iris for this release
.. _@aaronspring: https://github.com/aaronspring
.. _@akuhnregnier: https://github.com/akuhnregnier
.. _@bsherratt: https://github.com/bsherratt
+.. _@dennissergeev: https://github.com/dennissergeev
.. _@larsbarring: https://github.com/larsbarring
.. _@pdearnshaw: https://github.com/pdearnshaw
.. _@SimonPeatman: https://github.com/SimonPeatman
@@ -375,7 +402,6 @@ This document explains the changes made to Iris for this release
Whatsnew resources in alphabetical order:
.. _NEP-29: https://numpy.org/neps/nep-0029-deprecation_policy.html
-.. _Metarelate: http://www.metarelate.net/
.. _UGRID: http://ugrid-conventions.github.io/ugrid-conventions/
.. _iris-emsf-regrid: https://github.com/SciTools-incubator/iris-esmf-regrid
.. _faster documentation building: https://docs.readthedocs.io/en/stable/guides/conda.html#making-builds-faster-with-mamba
diff --git a/docs/src/whatsnew/3.3.rst b/docs/src/whatsnew/3.3.rst
new file mode 100644
index 0000000000..5812b79860
--- /dev/null
+++ b/docs/src/whatsnew/3.3.rst
@@ -0,0 +1,341 @@
+.. include:: ../common_links.inc
+
+v3.3 (1 Sep 2022)
+*****************
+
+This document explains the changes made to Iris for this release
+(:doc:`View all changes <index>`.)
+
+
+.. dropdown:: :opticon:`report` v3.3.0 Release Highlights
+ :container: + shadow
+ :title: text-primary text-center font-weight-bold
+ :body: bg-light
+ :animate: fade-in
+ :open:
+
+ The highlights for this minor release of Iris include:
+
+ * We've added support for datums, loading them from NetCDF when the
+ :obj:`iris.FUTURE.datum_support` flag is set.
+ * We've greatly improved the speed of linear interpolation.
+ * We've added the function :func:`iris.pandas.as_cubes` for richer
+ conversion from Pandas.
+ * We've improved the functionality of :func:`iris.util.mask_cube`.
+ * We've improved the functionality and performance of the
+ :obj:`iris.analysis.PERCENTILE` aggregator.
+ * We've completed implementation of our :ref:`contributing.benchmarks`
+ infrastructure.
+
+ And finally, get in touch with us on :issue:`GitHub` if you have
+ any issues or feature requests for improving Iris. Enjoy!
+
+
+📢 Announcements
+================
+
+#. Welcome to `@krikru`_ who made their first contribution to Iris 🎉
+
+
+✨ Features
+===========
+
+#. `@schlunma`_ added weighted aggregation over "group coordinates":
+ :meth:`~iris.cube.Cube.aggregated_by` now accepts the keyword `weights` if a
+ :class:`~iris.analysis.WeightedAggregator` is used. (:issue:`4581`,
+ :pull:`4589`)
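+
+   A minimal sketch of the new keyword (the coordinate names and weight
+   values are illustrative only):
+
+   .. code-block:: python
+
+      import numpy as np
+      import iris.analysis
+      from iris.coords import AuxCoord, DimCoord
+      from iris.cube import Cube
+
+      time = DimCoord(np.arange(4.0), standard_name="time",
+                      units="days since 2000-01-01")
+      month = AuxCoord(["jan", "jan", "feb", "feb"], long_name="month")
+      cube = Cube(np.arange(4.0), long_name="example",
+                  dim_coords_and_dims=[(time, 0)],
+                  aux_coords_and_dims=[(month, 0)])
+
+      # MEAN is a WeightedAggregator, so per-point weights are now accepted.
+      weights = np.array([1.0, 3.0, 1.0, 3.0])
+      result = cube.aggregated_by("month", iris.analysis.MEAN, weights=weights)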
+
+#. `@wjbenfold`_ added support for ``false_easting`` and ``false_northing`` to
+ :class:`~iris.coord_systems.Mercator`. (:issue:`3107`, :pull:`4524`)
+
+#. `@rcomer`_ and `@wjbenfold`_ (reviewer) implemented lazy aggregation for the
+ :obj:`iris.analysis.PERCENTILE` aggregator. (:pull:`3901`)
+
+#. `@pp-mo`_ fixed cube arithmetic operation for cubes with meshes.
+ (:issue:`4454`, :pull:`4651`)
+
+#. `@wjbenfold`_ added support for CF-compliant treatment of
+ ``standard_parallel`` and ``scale_factor_at_projection_origin`` to
+ :class:`~iris.coord_systems.Mercator`. (:issue:`3844`, :pull:`4609`)
+
+#. `@wjbenfold`_ added support for datums associated with coordinate systems
+   (e.g. :class:`~iris.coord_systems.GeogCS` and other subclasses of
+   :class:`~iris.coord_systems.CoordSystem`). Loading of datum information from
+   a netCDF file only happens when the :obj:`iris.FUTURE.datum_support` flag is
+   set. (:issue:`4619`, :pull:`4704`)
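+
+   A minimal sketch of opting in (the filename is hypothetical):
+
+   .. code-block:: python
+
+      import iris
+
+      # Datum information is only loaded when this FUTURE flag is set.
+      iris.FUTURE.datum_support = True
+      cube = iris.load_cube("projected_data.nc")
+      print(cube.coord_system())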
+
+#. `@wjbenfold`_ and `@stephenworsley`_ (reviewer) added a maximum run length
+ aggregator (:class:`~iris.analysis.MAX_RUN`). (:pull:`4676`)
+
+#. `@wjbenfold`_ and `@rcomer`_ (reviewer) added a ``climatological`` keyword to
+ :meth:`~iris.cube.Cube.aggregated_by` that causes the climatological flag to
+ be set and the point for each cell to equal its first bound, thereby
+ preserving the time of year. (:issue:`1422`, :issue:`4098`, :issue:`4665`,
+ :pull:`4723`)
+
+#. `@wjbenfold`_ and `@pp-mo`_ (reviewer) implemented the
+ :class:`~iris.coord_systems.PolarStereographic` CRS. (:issue:`4770`,
+ :pull:`4773`)
+
+#. `@rcomer`_ and `@wjbenfold`_ (reviewer) enabled passing of the
+ :func:`numpy.percentile` keywords through the :obj:`~iris.analysis.PERCENTILE`
+ aggregator. (:pull:`4791`)
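+
+   For example, a sketch with a cube constructed inline for illustration:
+
+   .. code-block:: python
+
+      import numpy as np
+      import iris.analysis
+      from iris.coords import DimCoord
+      from iris.cube import Cube
+
+      time = DimCoord(np.arange(10.0), standard_name="time",
+                      units="days since 2000-01-01")
+      cube = Cube(np.arange(10.0), long_name="example",
+                  dim_coords_and_dims=[(time, 0)])
+
+      # Collapse to the 5th and 95th percentiles; additional
+      # numpy.percentile keywords can now be passed through as well.
+      result = cube.collapsed("time", iris.analysis.PERCENTILE,
+                              percent=[5, 95])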
+
+#. `@wjbenfold`_ and `@bjlittle`_ (reviewer) implemented
+ :func:`iris.plot.fill_between` and :func:`iris.quickplot.fill_between`.
+ (:issue:`3493`, :pull:`4647`)
+
+#. `@rcomer`_ and `@bjlittle`_ (reviewer) re-wrote :func:`iris.util.mask_cube`
+ to provide lazy evaluation and greater flexibility with respect to input types.
+ (:issue:`3936`, :pull:`4889`)
+
+#. `@stephenworsley`_ and `@lbdreyer`_ added a new kwarg ``expand_extras`` to
+ :func:`iris.util.new_axis` which can be used to specify instances of
+ :class:`~iris.coords.AuxCoord`, :class:`~iris.coords.CellMeasure` and
+ :class:`~iris.coords.AncillaryVariable` which should also be expanded to map
+ to the new axis. (:pull:`4896`)
+
+#. `@stephenworsley`_ updated to the latest CF Standard Names Table ``v79``
+ (19 March 2022). (:pull:`4910`)
+
+#. `@trexfeathers`_ and `@lbdreyer`_ (reviewer) added
+ :func:`iris.pandas.as_cubes`, which provides richer conversion from
+ Pandas :class:`~pandas.Series` / :class:`~pandas.DataFrame`\s to one or more
+ :class:`~iris.cube.Cube`\s. This includes: n-dimensional datasets,
+ :class:`~iris.coords.AuxCoord`\s, :class:`~iris.coords.CellMeasure`\s,
+ :class:`~iris.coords.AncillaryVariable`\s, and multi-dimensional
+ coordinates. (:pull:`4890`)
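+
+   A minimal sketch of the new conversion (column and index names are
+   illustrative only):
+
+   .. code-block:: python
+
+      import pandas as pd
+      from iris.pandas import as_cubes
+
+      df = pd.DataFrame(
+          {"temperature": [280.0, 281.5, 283.2],
+           "pressure": [1000.0, 990.0, 985.0]},
+          index=pd.RangeIndex(3, name="x"),
+      )
+      # Each column becomes a cube; the index becomes a dimension coordinate.
+      cubes = as_cubes(df)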
+
+
+🐛 Bugs Fixed
+=============
+
+#. `@rcomer`_ reverted part of the change from :pull:`3906` so that
+ :func:`iris.plot.plot` no longer defaults to placing a "Y" coordinate (e.g.
+ latitude) on the y-axis of the plot. (:issue:`4493`, :pull:`4601`)
+
+#. `@rcomer`_ enabled passing of scalar objects to :func:`~iris.plot.plot` and
+ :func:`~iris.plot.scatter`. (:pull:`4616`)
+
+#. `@rcomer`_ fixed :meth:`~iris.cube.Cube.aggregated_by` with `mdtol` for 1D
+ cubes where an aggregated section is entirely masked, reported at
+ :issue:`3190`. (:pull:`4246`)
+
+#. `@rcomer`_ ensured that a :class:`matplotlib.axes.Axes`'s position is preserved
+ when Iris replaces it with a :class:`cartopy.mpl.geoaxes.GeoAxes`, fixing
+ :issue:`1157`. (:pull:`4273`)
+
+#. `@rcomer`_ fixed :meth:`~iris.coords.Coord.nearest_neighbour_index` for edge
+ cases where the requested point is float and the coordinate has integer
+ bounds, reported at :issue:`2969`. (:pull:`4245`)
+
+#. `@rcomer`_ modified bounds setting on :obj:`~iris.coords.DimCoord` instances
+ so that the order of the cell bounds is automatically reversed
+ to match the coordinate's direction if necessary. This is consistent with
+ the `Bounds for 1-D coordinate variables` subsection of the `Cell Boundaries`_
+ section of the CF Conventions and ensures that contiguity is preserved if a
+ coordinate's direction is reversed. (:issue:`3249`, :issue:`423`,
+ :issue:`4078`, :issue:`3756`, :pull:`4466`)
+
+#. `@wjbenfold`_ and `@evertrol`_ prevented an ``AttributeError`` being logged
+ to ``stderr`` when a :class:`~iris.fileformats.cf.CFReader` that fails to
+ initialise is garbage collected. (:issue:`3312`, :pull:`4646`)
+
+#. `@wjbenfold`_ fixed plotting of circular coordinates to extend kwarg arrays
+ as well as the data. (:issue:`466`, :pull:`4649`)
+
+#. `@wjbenfold`_ and `@rcomer`_ (reviewer) corrected the axis on which masking
+ is applied when an aggregator adds a trailing dimension. (:pull:`4755`)
+
+#. `@rcomer`_ and `@pp-mo`_ ensured that all methods to create or modify a
+ :class:`iris.cube.CubeList` check that it only contains cubes. According to
+ code comments, this was supposedly already the case, but there were several bugs
+ and loopholes. (:issue:`1897`, :pull:`4767`)
+
+#. `@rcomer`_ modified cube arithmetic to handle mismatches in the cube's data
+ array type. This prevents masks being lost in some cases and therefore
+ resolves :issue:`2987`. (:pull:`3790`)
+
+#. `@krikru`_ and `@rcomer`_ updated :mod:`iris.quickplot` such that the
+ colorbar is added to the correct ``axes`` when specified as a keyword
+ argument to a plotting routine. Otherwise, by default the colorbar will be
+ added to the current axes of the current figure. (:pull:`4894`)
+
+#. `@rcomer`_ and `@bjlittle`_ (reviewer) modified :func:`iris.util.mask_cube` so it
+ either works in place or returns a new cube (:issue:`3717`, :pull:`4889`)
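+
+   A minimal sketch of the two modes (the mask values are illustrative):
+
+   .. code-block:: python
+
+      import numpy as np
+      import iris.util
+      from iris.cube import Cube
+
+      cube = Cube(np.arange(4.0), long_name="example")
+      mask = np.array([False, True, False, True])
+
+      # Default: the original cube is untouched and a masked copy is returned.
+      masked = iris.util.mask_cube(cube, mask)
+      # Or mask the cube's own data instead:
+      iris.util.mask_cube(cube, mask, in_place=True)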
+
+
+💣 Incompatible Changes
+=======================
+
+#. `@rcomer`_ and `@bjlittle`_ (reviewer) updated Iris's calendar handling to be
+ consistent with ``cf-units`` version 3.1. In line with the `Calendar`_
+ section in version 1.9 of the CF Conventions, we now use "standard" rather
+ than the deprecated "gregorian" label for the default calendar. Units may
+ still be instantiated with ``calendar="gregorian"`` but their calendar
+ attribute will be silently changed to "standard". This may cause failures in
+ code that explicitly checks the calendar attribute. (:pull:`4847`)
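+
+   A minimal sketch of the behaviour change:
+
+   .. code-block:: python
+
+      import cf_units
+
+      unit = cf_units.Unit("days since 1970-01-01", calendar="gregorian")
+      # With cf-units >= 3.1 the deprecated label is silently translated:
+      print(unit.calendar)  # "standard"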
+
+
+🚀 Performance
+==============
+
+#. `@wjbenfold`_ added caching to the calculation of the points array in a
+ :class:`~iris.coords.DimCoord` created using
+ :meth:`~iris.coords.DimCoord.from_regular`. (:pull:`4698`)
+
+#. `@wjbenfold`_ introduced caching in :func:`_lazy_data._optimum_chunksize` and
+ :func:`iris.fileformats.pp_load_rules._epoch_date_hours` to reduce time spent
+ repeating calculations. (:pull:`4716`)
+
+#. `@pp-mo`_ made :meth:`~iris.cube.Cube.add_aux_factory` faster.
+ (:pull:`4718`)
+
+#. `@wjbenfold`_ and `@rcomer`_ (reviewer) permitted the fast percentile
+ aggregation method to be used on masked data when the missing data tolerance
+ is set to 0. (:issue:`4735`, :pull:`4755`)
+
+#. `@wjbenfold`_ improved the speed of linear interpolation using
+ :meth:`iris.analysis.trajectory.interpolate` (:pull:`4366`)
+
+#. NumPy ``v1.23`` behaviour changes mean that
+ :func:`iris.experimental.ugrid.utils.recombine_submeshes` now uses ~3x as
+ much memory; testing shows a ~16-million point mesh will now use ~600MB.
+ Investigated by `@pp-mo`_ and `@trexfeathers`_. (:issue:`4845`)
+
+
+🔥 Deprecations
+===============
+
+#. `@trexfeathers`_ and `@lbdreyer`_ (reviewer) deprecated
+ :func:`iris.pandas.as_cube` in favour of the new
+ :func:`iris.pandas.as_cubes` - see `✨ Features`_ for more details.
+ (:pull:`4890`)
+
+
+🔗 Dependencies
+===============
+
+#. `@rcomer`_ introduced the ``nc-time-axis >=1.4`` minimum pin, reflecting that
+ we no longer use the deprecated :class:`nc_time_axis.CalendarDateTime`
+ when plotting against time coordinates. (:pull:`4584`)
+
+#. `@wjbenfold`_ and `@bjlittle`_ (reviewer) unpinned ``pillow``. (:pull:`4826`)
+
+#. `@rcomer`_ introduced the ``cf-units >=3.1`` minimum pin, reflecting the
+ alignment of calendar behaviour in the two packages (see Incompatible Changes).
+ (:pull:`4847`)
+
+#. `@bjlittle`_ introduced the ``sphinx-gallery >=0.11.0`` minimum pin.
+ (:pull:`4885`)
+
+#. `@trexfeathers`_ updated the install process to work with setuptools
+ ``>=v64``, making ``v64`` the minimum compatible version. (:pull:`4903`)
+
+#. `@stephenworsley`_ and `@trexfeathers`_ introduced the ``shapely !=1.8.3``
+ pin, avoiding a bug caused by its interaction with cartopy.
+ (:pull:`4911`, :pull:`4917`)
+
+
+📚 Documentation
+================
+
+#. `@tkknight`_ added a page to show the issues that have been voted for. See
+ :ref:`voted_issues_top`. (:issue:`3307`, :pull:`4617`)
+
+#. `@wjbenfold`_ added a note about fixing proxy URLs in lockfiles generated
+ because dependencies have changed. (:pull:`4666`)
+
+#. `@lbdreyer`_ moved most of the User Guide's :class:`iris.Constraint` examples
+ from :ref:`loading_iris_cubes` to :ref:`cube_extraction` and added an
+ example of constraining on bounded time. (:pull:`4656`)
+
+#. `@tkknight`_ adopted the `PyData Sphinx Theme`_ for the documentation.
+ (:discussion:`4344`, :pull:`4661`)
+
+#. `@tkknight`_ updated our developers guidance to show our intent to adopt
+ numpydoc strings and fixed some API documentation rendering.
+ See :ref:`docstrings`. (:issue:`4657`, :pull:`4689`)
+
+#. `@trexfeathers`_ and `@lbdreyer`_ added a page with examples of converting
+ various mesh formats into the Iris Mesh Data Model. (:pull:`4739`)
+
+#. `@rcomer`_ updated the "Load a Time Series of Data From the NEMO Model"
+ gallery example. (:pull:`4741`)
+
+#. `@wjbenfold`_ added developer documentation to highlight some of the
+ utilities offered by :class:`iris.IrisTest` and how to update CML and other
+ output files. (:issue:`4544`, :pull:`4600`)
+
+#. `@trexfeathers`_ and `@abooton`_ modernised the Iris logo to be SVG format.
+ (:pull:`3935`)
+
+
+💼 Internal
+===========
+
+#. `@trexfeathers`_ and `@pp-mo`_ finished implementing a mature benchmarking
+ infrastructure (see :ref:`contributing.benchmarks`), building on 2 hard
+ years of lessons learned 🎉. (:pull:`4477`, :pull:`4562`, :pull:`4571`,
+ :pull:`4583`, :pull:`4621`)
+
+#. `@wjbenfold`_ used the aforementioned benchmarking infrastructure to
+ introduce deep (large 3rd dimension) loading and realisation benchmarks.
+ (:pull:`4654`)
+
+#. `@wjbenfold`_ made :func:`iris.tests.stock.simple_1d` respect the
+ ``with_bounds`` argument. (:pull:`4658`)
+
+#. `@lbdreyer`_ replaced `nose`_ with `pytest`_ as Iris' test runner.
+ (:pull:`4734`)
+
+#. `@bjlittle`_ and `@trexfeathers`_ (reviewer) migrated to GitHub Actions
+ for Continuous-Integration. (:pull:`4503`)
+
+#. `@pp-mo`_ made tests run certain Linux executables from the Python env,
+   specifically ``ncdump`` and ``ncgen``. These could otherwise fail when run
+   in IDEs such as PyCharm and Eclipse, which don't automatically include the
+   Python env ``bin`` directory in the system PATH. (:pull:`4794`)
+
+#. `@trexfeathers`_ and `@pp-mo`_ improved generation of stock NetCDF files.
+ (:pull:`4827`, :pull:`4836`)
+
+#. `@rcomer`_ removed some now redundant testing functions. (:pull:`4838`,
+ :pull:`4878`)
+
+#. `@bjlittle`_ and `@jamesp`_ (reviewer) and `@lbdreyer`_ (reviewer) extended
+ the GitHub Continuous-Integration to cover testing on ``py38``, ``py39``,
+ and ``py310``. (:pull:`4840`)
+
+#. `@bjlittle`_ and `@trexfeathers`_ (reviewer) adopted `setuptools-scm`_ for
+ automated ``iris`` package versioning. (:pull:`4841`)
+
+#. `@bjlittle`_ and `@trexfeathers`_ (reviewer) added building, testing and
+ publishing of ``iris`` PyPI ``sdist`` and binary ``wheels`` as part of
+ our GitHub Continuous-Integration. (:pull:`4849`)
+
+#. `@rcomer`_ and `@wjbenfold`_ (reviewer) used ``pytest`` parametrization to
+ streamline the gallery test code. (:pull:`4792`)
+
+#. `@trexfeathers`_ improved settings to work better with
+   ``setuptools_scm``. (:pull:`4925`)
+
+
+.. comment
+ Whatsnew author names (@github name) in alphabetical order. Note that,
+ core dev names are automatically included by the common_links.inc:
+
+.. _@evertrol: https://github.com/evertrol
+.. _@krikru: https://github.com/krikru
+
+
+.. comment
+ Whatsnew resources in alphabetical order:
+
+.. _Calendar: https://cfconventions.org/Data/cf-conventions/cf-conventions-1.9/cf-conventions.html#calendar
+.. _Cell Boundaries: https://cfconventions.org/Data/cf-conventions/cf-conventions-1.9/cf-conventions.html#cell-boundaries
+.. _nose: https://nose.readthedocs.io
+.. _PyData Sphinx Theme: https://pydata-sphinx-theme.readthedocs.io/en/stable/index.html
+.. _pytest: https://docs.pytest.org
+.. _setuptools-scm: https://github.com/pypa/setuptools_scm
diff --git a/docs/src/whatsnew/index.rst b/docs/src/whatsnew/index.rst
index 51f03e8d8f..8cff21f32f 100644
--- a/docs/src/whatsnew/index.rst
+++ b/docs/src/whatsnew/index.rst
@@ -1,16 +1,19 @@
+.. include:: ../common_links.inc
+
.. _iris_whatsnew:
What's New in Iris
-******************
-
-These "What's new" pages describe the important changes between major
-Iris versions.
+------------------
+.. include:: latest.rst
.. toctree::
:maxdepth: 1
+ :hidden:
- dev.rst
+ latest.rst
+ 3.3.rst
+ 3.2.rst
3.1.rst
3.0.rst
2.4.rst
diff --git a/docs/src/whatsnew/latest.rst b/docs/src/whatsnew/latest.rst
deleted file mode 120000
index 56aebe92dd..0000000000
--- a/docs/src/whatsnew/latest.rst
+++ /dev/null
@@ -1 +0,0 @@
-dev.rst
\ No newline at end of file
diff --git a/docs/src/whatsnew/latest.rst b/docs/src/whatsnew/latest.rst
new file mode 100644
index 0000000000..a420494157
--- /dev/null
+++ b/docs/src/whatsnew/latest.rst
@@ -0,0 +1,120 @@
+.. include:: ../common_links.inc
+
+|iris_version| |build_date| [unreleased]
+****************************************
+
+This document explains the changes made to Iris for this release
+(:doc:`View all changes <index>`.)
+
+
+.. dropdown:: :opticon:`report` |iris_version| Release Highlights
+ :container: + shadow
+ :title: text-primary text-center font-weight-bold
+ :body: bg-light
+ :animate: fade-in
+ :open:
+
+ The highlights for this major/minor release of Iris include:
+
+ * N/A
+
+ And finally, get in touch with us on :issue:`GitHub` if you have
+ any issues or feature requests for improving Iris. Enjoy!
+
+
+📢 Announcements
+================
+
+#. Welcome to `@ESadek-MO`_ and `@TTV-Intrepid`_ who made their first contributions to Iris 🎉
+
+
+✨ Features
+===========
+
+#. `@ESadek-MO`_ edited :func:`~iris.io.expand_filespecs` to allow expansion of
+ non-existing paths, and added expansion functionality to :func:`~iris.io.save`.
+ (:issue:`4772`, :pull:`4913`)
+
+
+🐛 Bugs Fixed
+=============
+
+#. `@rcomer`_ and `@pp-mo`_ (reviewer) factored masking into the returned
+ sum-of-weights calculation from :obj:`~iris.analysis.SUM`. (:pull:`4905`)
+
+#. `@schlunma`_ fixed a bug which prevented using
+ :meth:`iris.cube.Cube.collapsed` on coordinates whose number of bounds
+ differs from 0 or 2. This enables the use of this method on mesh
+ coordinates. (:issue:`4672`, :pull:`4870`)
+
+#. `@bjlittle`_ and `@lbdreyer`_ (reviewer) fixed the building of the CF
+ Standard Names module ``iris.std_names`` for the ``setup.py`` commands
+ ``develop`` and ``std_names``. (:issue:`4951`, :pull:`4952`)
+
+#. `@lbdreyer`_ and `@pp-mo`_ (reviewer) fixed the cube print out such that
+ scalar ancillary variables are displayed in a dedicated section rather than
+ being added to the vector ancillary variables section. Further, ancillary
+ variables and cell measures that map to a cube dimension of length 1 are now
+ included in the respective vector sections. (:pull:`4945`)
+
+
+💣 Incompatible Changes
+=======================
+
+#. N/A
+
+
+🚀 Performance Enhancements
+===========================
+
+#. `@rcomer`_ and `@pp-mo`_ (reviewer) increased aggregation speed for
+ :obj:`~iris.analysis.SUM`, :obj:`~iris.analysis.COUNT` and
+ :obj:`~iris.analysis.PROPORTION` on real data. (:pull:`4905`)
+
+
+🔥 Deprecations
+===============
+
+#. N/A
+
+
+🔗 Dependencies
+===============
+
+#. `@rcomer`_ introduced the ``dask >=2.26`` minimum pin, so that Iris can benefit
+ from Dask's support for `NEP13`_ and `NEP18`_. (:pull:`4905`)
+
+
+📚 Documentation
+================
+
+#. `@ESadek-MO`_, `@TTV-Intrepid`_ and `@trexfeathers`_ added a gallery example for zonal
+ means plotted parallel to a cartographic plot. (:pull:`4871`)
+
+
+💼 Internal
+===========
+
+#. `@rcomer`_ removed the obsolete ``setUpClass`` method from Iris testing.
+ (:pull:`4927`)
+
+#. `@bjlittle`_ and `@lbdreyer`_ (reviewer) removed support for
+ ``python setup.py test``, which is a deprecated approach to executing
+ package tests, see `pypa/setuptools#1684`_. Also performed assorted
+ ``setup.py`` script hygiene. (:pull:`4948`, :pull:`4949`, :pull:`4950`)
+
+
+.. comment
+ Whatsnew author names (@github name) in alphabetical order. Note that,
+ core dev names are automatically included by the common_links.inc:
+
+.. _@TTV-Intrepid: https://github.com/TTV-Intrepid
+
+
+
+.. comment
+ Whatsnew resources in alphabetical order:
+
+.. _NEP13: https://numpy.org/neps/nep-0013-ufunc-overrides.html
+.. _NEP18: https://numpy.org/neps/nep-0018-array-function-protocol.html
+.. _pypa/setuptools#1684: https://github.com/pypa/setuptools/issues/1684
\ No newline at end of file
diff --git a/docs/src/whatsnew/dev.rst.template b/docs/src/whatsnew/latest.rst.template
similarity index 99%
rename from docs/src/whatsnew/dev.rst.template
rename to docs/src/whatsnew/latest.rst.template
index 79c578ca65..661ee47f50 100644
--- a/docs/src/whatsnew/dev.rst.template
+++ b/docs/src/whatsnew/latest.rst.template
@@ -42,7 +42,7 @@ v3.X.X (DD MMM YYYY)
NOTE: section above is a template for bugfix patches
====================================================
(Please remove this section when creating an initial 'latest.rst')
-
+
📢 Announcements
diff --git a/docs/src/why_iris.rst b/docs/src/why_iris.rst
new file mode 100644
index 0000000000..63a515f68e
--- /dev/null
+++ b/docs/src/why_iris.rst
@@ -0,0 +1,44 @@
+.. _why_iris:
+
+Why Iris
+========
+
+**A powerful, format-agnostic, community-driven Python package for analysing
+and visualising Earth science data.**
+
+Iris implements a data model based on the `CF conventions <https://cfconventions.org>`_
+giving you a powerful, format-agnostic interface for working with your data.
+It excels when working with multi-dimensional Earth Science data, where tabular
+representations become unwieldy and inefficient.
+
+`CF Standard names <https://cfconventions.org/standard-names.html>`_,
+`units `_, and coordinate metadata
+are built into Iris, giving you a rich and expressive interface for maintaining
+an accurate representation of your data. Its treatment of data and
+associated metadata as first-class objects includes:
+
+.. rst-class:: squarelist
+
+* visualisation interface based on `matplotlib <https://matplotlib.org/>`_ and
+  `cartopy <https://scitools.org.uk/cartopy/docs/latest/>`_,
+* unit conversion,
+* subsetting and extraction,
+* merge and concatenate,
+* aggregations and reductions (including min, max, mean and weighted averages),
+* interpolation and regridding (including nearest-neighbour, linear and
+ area-weighted), and
+* operator overloads (``+``, ``-``, ``*``, ``/``, etc.).
+
+A number of file formats are recognised by Iris, including CF-compliant NetCDF,
+GRIB, and PP, and it has a plugin architecture to allow other formats to be
+added seamlessly.
+
+Building upon `NumPy <https://numpy.org/>`_ and
+`dask <https://dask.org/>`_, Iris scales from efficient
+single-machine workflows right through to multi-core clusters and HPC.
+Interoperability with packages from the wider scientific Python ecosystem comes
+from Iris' use of standard NumPy/dask arrays as its underlying data storage.
+
+Iris is part of SciTools; for more information see https://scitools.org.uk/.
+For **Iris 2.4** and earlier documentation please see the
+:link-badge:`https://scitools.org.uk/iris/docs/v2.4.0/,"legacy documentation",cls=badge-info text-white`.
diff --git a/etc/cf-standard-name-table.xml b/etc/cf-standard-name-table.xml
index bd76168192..9c5fcd9cf0 100644
--- a/etc/cf-standard-name-table.xml
+++ b/etc/cf-standard-name-table.xml
@@ -1,7 +1,7 @@
- 78
- 2021-09-21T11:55:06Z
+ 79
+ 2022-03-19T15:25:54Z
Centre for Environmental Data Analysis
support@ceda.ac.uk
@@ -8014,6 +8014,20 @@
The phrase "magnitude_of_X" means magnitude of a vector X. The surface called "surface" means the lower boundary of the atmosphere. "Surface stress" means the shear stress (force per unit area) exerted by the wind at the surface. A downward stress is a downward flux of momentum. Over large bodies of water, wind stress can drive near-surface currents. "Downward" indicates a vector component which is positive when directed downward (negative upward).
+
+ kg m-3
+
+
+ "Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". The chemical formula of 19’-butanoyloxyfucoxanthin is C46H64O8. The equivalent term in the NERC P01 Parameter Usage Vocabulary may be found at http://vocab.nerc.ac.uk/collection/P01/current/BUTAXXXX/1/.
+
+
+
+ kg m-3
+
+
+ "Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". The chemical formula of 19'-hexanoyloxyfucoxanthin is C48H68O8. The equivalent term in the NERC P01 Parameter Usage Vocabulary may be found at http://vocab.nerc.ac.uk/collection/P01/current/HEXAXXXX/2/.
+
+
kg m-3
@@ -8028,6 +8042,13 @@
Mass concentration means mass per unit volume and is used in the construction mass_concentration_of_X_in_Y, where X is a material constituent of Y. A chemical species denoted by X may be described by a single term such as 'nitrogen' or a phrase such as 'nox_expressed_as_nitrogen'. The chemical formula for aceto-nitrile is CH3CN. The IUPAC name for aceto-nitrile is ethanenitrile.
+
+ kg m-3
+
+
+ "Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". The equivalent term in the NERC P01 Parameter Usage Vocabulary may be found at http://vocab.nerc.ac.uk/collection/P01/current/ATPXZZDZ/2/.
+
+
kg m-3
@@ -8042,6 +8063,13 @@
Mass concentration means mass per unit volume and is used in the construction mass_concentration_of_X_in_Y, where X is a material constituent of Y. A chemical species denoted by X may be described by a single term such as 'nitrogen' or a phrase such as 'nox_expressed_as_nitrogen'. Alkenes are unsaturated hydrocarbons as they contain chemical double bonds between adjacent carbon atoms. Alkenes contain only hydrogen and carbon combined in the general proportions C(n)H(2n); "alkenes" is the term used in standard names to describe the group of chemical species having this common structure that are represented within a given model. The list of individual species that are included in a quantity having a group chemical standard name can vary between models. Where possible, the data variable should be accompanied by a complete description of the species represented, for example, by using a comment attribute. Standard names exist for some individual alkene species, e.g., ethene and propene.
+
+ kg m-3
+
+
+ "Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". The chemical formula of alpha-carotene is C40H56. The equivalent term in the NERC P01 Parameter Usage Vocabulary may be found at http://vocab.nerc.ac.uk/collection/P01/current/BECAXXP1/2/.
+
+
kg m-3
@@ -8112,6 +8140,13 @@
Mass concentration means mass per unit volume and is used in the construction mass_concentration_of_X_in_Y, where X is a material constituent of Y. A chemical species denoted by X may be described by a single term such as 'nitrogen' or a phrase such as 'nox_expressed_as_nitrogen'. The chemical formula for benzene is C6H6. Benzene is the simplest aromatic hydrocarbon and has a ring structure consisting of six carbon atoms joined by alternating single and double chemical bonds. Each carbon atom is additionally bonded to one hydrogen atom. There are standard names that refer to aromatic_compounds as a group, as well as those for individual species.
+
+ kg m-3
+
+
+ "Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". The chemical formula of beta-carotene is C40H56. The equivalent term in the NERC P01 Parameter Usage Vocabulary may be found at http://vocab.nerc.ac.uk/collection/P01/current/BBCAXXP1/2/.
+
+
kg m-3
@@ -8217,6 +8252,13 @@
Mass concentration means mass per unit volume and is used in the construction mass_concentration_of_X_in_Y, where X is a material constituent of Y. A chemical species denoted by X may be described by a single term such as 'nitrogen' or a phrase such as 'nox_expressed_as_nitrogen'. The chemical formula of carbon tetrachloride is CCl4. The IUPAC name for carbon tetrachloride is tetrachloromethane.
+
+ kg m-3
+
+
+ "Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". "Carotene" refers to the sum of all forms of the carotenoid pigment carotene. The equivalent term in the NERC P01 Parameter Usage Vocabulary may be found at http://vocab.nerc.ac.uk/collection/P01/current/CAROXXXX/1/.
+
+
kg m-3
@@ -8287,6 +8329,41 @@
'Mass concentration' means mass per unit volume and is used in the construction mass_concentration_of_X_in_Y, where X is a material constituent of Y. A chemical or biological species denoted by X may be described by a single term such as 'nitrogen' or a phrase such as 'nox_expressed_as_nitrogen'. Chlorophylls are the green pigments found in most plants, algae and cyanobacteria; their presence is essential for photosynthesis to take place. There are several different forms of chlorophyll that occur naturally. All contain a chlorin ring (chemical formula C20H16N4) which gives the green pigment and a side chain whose structure varies. The naturally occurring forms of chlorophyll contain between 35 and 55 carbon atoms. Chlorophyll-a is the most commonly occurring form of natural chlorophyll. The chemical formula of chlorophyll-a is C55H72O5N4Mg.
+
+ kg m-3
+
+
+ "Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". Chlorophylls are the green pigments found in most plants, algae and cyanobacteria; their presence is essential for photosynthesis to take place. There are several different forms of chlorophyll that occur naturally. All contain a chlorin ring (chemical formula C20H16N4) which gives the green pigment and a side chain whose structure varies. The naturally occurring forms of chlorophyll contain between 35 and 55 carbon atoms. The equivalent term in the NERC P01 Parameter Usage Vocabulary may be found at http://vocab.nerc.ac.uk/collection/P01/current/CHLBXXPX/2/.
+
+
+
+ kg m-3
+
+
+ "Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". Chlorophylls are the green pigments found in most plants, algae and cyanobacteria; their presence is essential for photosynthesis to take place. There are several different forms of chlorophyll that occur naturally. All contain a chlorin ring (chemical formula C20H16N4) which gives the green pigment and a side chain whose structure varies. The naturally occurring forms of chlorophyll contain between 35 and 55 carbon atoms. Chlorophyll c1c2 (sometimes written c1-c2 or c1+c2) means the sum of chlorophyll c1 and chlorophyll c2. The chemical formula of chlorophyll c1 is C35H30MgN4O5, and chlorophyll c2 is C35H28MgN4O5. The equivalent term in the NERC P01 Parameter Usage Vocabulary may be found at http://vocab.nerc.ac.uk/collection/P01/current/CHLC12PX/3/.
+
+
+
+ kg m-3
+
+
+ "Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". Chlorophylls are the green pigments found in most plants, algae and cyanobacteria; their presence is essential for photosynthesis to take place. There are several different forms of chlorophyll that occur naturally. All contain a chlorin ring (chemical formula C20H16N4) which gives the green pigment and a side chain whose structure varies. The naturally occurring forms of chlorophyll contain between 35 and 55 carbon atoms. The chemical formula of chlorophyll c3 is C36H44MgN4O7. The equivalent term in the NERC P01 Parameter Usage Vocabulary may be found at http://vocab.nerc.ac.uk/collection/P01/current/CHLC03PX/2/.
+
+
+
+ kg m-3
+
+
+ "Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". Chlorophylls are the green pigments found in most plants, algae and cyanobacteria; their presence is essential for photosynthesis to take place. There are several different forms of chlorophyll that occur naturally. All contain a chlorin ring (chemical formula C20H16N4) which gives the green pigment and a side chain whose structure varies. The naturally occurring forms of chlorophyll contain between 35 and 55 carbon atoms. Chlorophyll-c means chlorophyll c1+c2+c3. The chemical formula of chlorophyll c1 is C35H30MgN4O5, and chlorophyll c2 is C35H28MgN4O5. The chemical formula of chlorophyll c3 is C36H44MgN4O7.
+
+
+
+ kg m-3
+
+
+ "Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". The chemical formula of chlorophyllide-a is C35H34MgN4O5.
+
+
kg m-3
@@ -8322,6 +8399,13 @@
Mass concentration means mass per unit volume and is used in the construction mass_concentration_of_X_in_Y, where X is a material constituent of Y. Condensed water means liquid and ice.
+
+ kg m-3
+
+
+ "Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". The chemical formula of diadinoxanthin is C40H54O3. The equivalent term in the NERC P01 Parameter Usage Vocabulary may be found at http://vocab.nerc.ac.uk/collection/P01/current/DIADXXXX/2/.
+
+
kg m-3
@@ -8378,6 +8462,13 @@
Mass concentration means mass per unit volume and is used in the construction mass_concentration_of_X_in_Y, where X is a material constituent of Y. A chemical species denoted by X may be described by a single term such as 'nitrogen' or a phrase such as 'nox_expressed_as_nitrogen'. The chemical formula for dinitrogen pentoxide is N2O5.
+
+ kg m-3
+
+
+ "Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen".
+
+
kg m-3
@@ -8455,6 +8546,13 @@
Mass concentration means mass per unit volume and is used in the construction mass_concentration_of_X_in_Y, where X is a material constituent of Y. A chemical species denoted by X may be described by a single term such as 'nitrogen' or a phrase such as 'nox_expressed_as_nitrogen'. The chemical formula for formic acid is HCOOH. The IUPAC name for formic acid is methanoic acid.
+
+ kg m-3
+
+
+ "Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". The chemical formula of fucoxanthin is C42H58O6. The equivalent term in the NERC P01 Parameter Usage Vocabulary may be found at http://vocab.nerc.ac.uk/collection/P01/current/FUCXZZZZ/2/.
+
+
kg m-3
@@ -8637,6 +8735,13 @@
Mass concentration means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". The mass concentration of liquid water takes into account all cloud droplets and liquid precipitation regardless of drop size or fall speed.
+
+ kg m-3
+
+
+ "Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". The chemical formula of lutein is C40H56O2.
+
+
kg m-3
@@ -8707,6 +8812,13 @@
Mass concentration means mass per unit volume and is used in the construction mass_concentration_of_X_in_Y, where X is a material constituent of Y. A chemical species denoted by X may be described by a single term such as 'nitrogen' or a phrase such as 'nox_expressed_as_nitrogen'. The chemical formula for molecular hydrogen is H2.
+
+ kg m-3
+
+
+ "Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen".
+
+
kg m-3
@@ -8833,6 +8945,13 @@
Mass concentration means mass per unit volume and is used in the construction mass_concentration_of_X_in_Y, where X is a material constituent of Y. A chemical species denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". "Aerosol" means the system of suspended liquid or solid particles in air (except cloud droplets) and their carrier gas, the air itself. Aerosol takes up ambient water (a process known as hygroscopic growth) depending on the relative humidity and the composition of the aerosol. "Dry aerosol particles" means aerosol particles without any water uptake. The term "particulate_organic_matter_dry_aerosol" means all particulate organic matter dry aerosol except elemental carbon. It is the sum of primary_particulate_organic_matter_dry_aerosol and secondary_particulate_organic_matter_dry_aerosol.
+
+ kg m-3
+
+
+ "Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". The equivalent term in the NERC P01 Parameter Usage Vocabulary may be found at http://vocab.nerc.ac.uk/collection/P01/current/PERDXXXX/2/.
+
+
kg m-3
@@ -8861,6 +8980,13 @@
Mass concentration means mass per unit volume and is used in the construction mass_concentration_of_X_in_Y, where X is a material constituent of Y. It means the ratio of the mass of X to the mass of Y (including X). A chemical species denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". Petroleum hydrocarbons are compounds containing just carbon and hydrogen originating from the fossil fuel crude oil.
+
+ kg m-3
+
+
+ Concentration of phaeopigment per unit volume of the water body, where the filtration size or collection method is unspecified (equivalent term in the NERC P01 Parameter Usage Vocabulary may be found at http://vocab.nerc.ac.uk/collection/P01/current/. "Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". Phaeopigments are a group of non-photosynthetic pigments that are the degradation product of algal chlorophyll pigments. Phaeopigments contain phaeophytin, which fluoresces in response to excitation light, and phaeophorbide, which is colorless and does not fluoresce (source: https://academic.oup.com/plankt/article/24/11/1221/1505482). Phaeopigment concentration commonly increases during the development phase of marine phytoplankton blooms, and declines in the post bloom stage (source: https://www.sciencedirect.com/science/article/pii/0967063793901018).
+
+
kg m-3
@@ -8931,6 +9057,13 @@
Mass concentration means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". "Aerosol" means the system of suspended liquid or solid particles in air (except cloud droplets) and their carrier gas, the air itself. Aerosol particles take up ambient water (a process known as hygroscopic growth) depending on the relative humidity and the composition of the particles. "Dry aerosol particles" means aerosol particles without any water uptake. "Pm2p5 aerosol" means atmospheric particulate compounds with an aerodynamic diameter of less than or equal to 2.5 micrometers.
+
+ kg m-3
+
+
+ "Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". The chemical formula of prasinoxanthin is C40H56O4. The equivalent term in the NERC P01 Parameter Usage Vocabulary may be found at http://vocab.nerc.ac.uk/collection/P01/current/PXAPXXXX/2/.
+
+
kg m-3
@@ -9036,6 +9169,13 @@
"Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical or biological species denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". The chemical formula for toluene is C6H5CH3. Toluene has the same structure as benzene, except that one of the hydrogen atoms is replaced by a methyl group. The IUPAC name for toluene is methylbenzene.
+
+ kg m-3
+
+
+ "Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". The chemical formula of violaxanthin is C40H56O4.
+
+
kg m-3
@@ -9064,6 +9204,13 @@
Mass concentration means mass per unit volume and is used in the construction mass_concentration_of_X_in_Y, where X is a material constituent of Y. A chemical species denoted by X may be described by a single term such as 'nitrogen' or a phrase such as 'nox_expressed_as_nitrogen'. The chemical formula for xylene is C6H4C2H6. In chemistry, xylene is a generic term for a group of three isomers of dimethylbenzene. The IUPAC names for the isomers are 1,2-dimethylbenzene, 1,3-dimethylbenzene and 1,4-dimethylbenzene. Xylene is an aromatic hydrocarbon. There are standard names that refer to aromatic_compounds as a group, as well as those for individual species.
+
+ kg m-3
+
+
+ "Mass concentration" means mass per unit volume and is used in the construction "mass_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". The chemical formula of zeaxanthin is C40H56O2. The equivalent term in the NERC P01 Parameter Usage Vocabulary may be found at http://vocab.nerc.ac.uk/collection/P01/current/ZEAXXXXX/2/.
+
+
kg m-3
@@ -10737,6 +10884,13 @@
Mole concentration means number of moles per unit volume, also called "molarity", and is used in the construction mole_concentration_of_X_in_Y, where X is a material constituent of Y. A chemical species denoted by X may be described by a single term such as 'nitrogen' or a phrase such as 'nox_expressed_as_nitrogen'. The chemical formula for aceto-nitrile is CH3CN. The IUPAC name for aceto-nitrile is ethanenitrile.
+
+ mol m-3
+
+
+ "Mole concentration" means number of moles per unit volume, also called "molarity", and is used in the construction "mole_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". The equivalent term in the NERC P01 Parameter Usage Vocabulary may be found at http://vocab.nerc.ac.uk/collection/P01/current/ATPXZZDZ/2/.
+
+
mol m-3
@@ -11185,6 +11339,13 @@
Mole concentration means number of moles per unit volume, also called "molarity", and is used in the construction mole_concentration_of_X_in_Y, where X is a material constituent of Y. A chemical or biological species denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". The concentration of any chemical species, whether particulate or dissolved, may vary with depth in the ocean. A depth profile may go through one or more local minima in concentration. The mole_concentration_of_molecular_oxygen_in_sea_water_at_shallowest_local_minimum_in_vertical_profile is the mole concentration of oxygen at the local minimum in the concentration profile that occurs closest to the sea surface. The chemical formula for molecular oxygen is O2.
+
+ mol m-3
+
+
+ "Mole concentration" means number of moles per unit volume, also called "molarity", and is used in the construction "mole_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". "Dissolved nitrogen" means the sum of all nitrogen in solution: inorganic nitrogen (nitrite, nitrate and ammonium) plus nitrogen in carbon compounds.
+
+
mol m-3
@@ -11199,6 +11360,20 @@
"Mole concentration" means number of moles per unit volume, also called "molarity", and is used in the construction "mole_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". "Dissolved organic nitrogen" describes the nitrogen held in carbon compounds in solution. These are mostly generated by plankton excretion and decay.
+
+ mol m-3
+
+
+ "Mole concentration" means number of moles per unit volume, also called "molarity", and is used in the construction "mole_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical or biological species denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". "Organic phosphorus" means phosphorus in carbon compounds. The equivalent term in the NERC P01 Parameter Usage Vocabulary may be found at http://vocab.nerc.ac.uk/collection/P01/current/ORGPDSZZ/4/.
+
+
+
+ mol m-3
+
+
+ "Mole concentration" means number of moles per unit volume, also called "molarity", and is used in the construction "mole_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". Phosphorus means phosphorus in all chemical forms, commonly referred to as "total phosphorus". The equivalent term in the NERC P01 Parameter Usage Vocabulary may be found at http://vocab.nerc.ac.uk/collection/P01/current/TPHSDSZZ/6/.
+
+
mol m-3
@@ -11626,6 +11801,13 @@
Mole concentration means number of moles per unit volume, also called "molarity", and is used in the construction mole_concentration_of_X_in_Y, where X is a material constituent of Y. A chemical species denoted by X may be described by a single term such as 'nitrogen' or a phrase such as 'nox_expressed_as_nitrogen'. The chemical formula for ozone is O3.
+
+ mol m-3
+
+
+ "Mole concentration" means number of moles per unit volume, also called "molarity", and is used in the construction "mole_concentration_of_X_in_Y", where X is a material constituent of Y. A chemical species or biological group denoted by X may be described by a single term such as "nitrogen" or a phrase such as "nox_expressed_as_nitrogen". The phrase "expressed_as" is used in the construction "A_expressed_as_B", where B is a chemical constituent of A. It means that the quantity indicated by the standard name is calculated solely with respect to the B contained in A, neglecting all other chemical constituents of A.
+
+
mol m-3
@@ -18595,21 +18777,21 @@
Pa
- "Sea surface wave radiation stress" describes the excess momentum flux caused by sea surface waves. Radiation stresses behave as a second-order tensor. "xx" indicates the component of the tensor along the grid x_ axis.
+ "Sea surface wave radiation stress" describes the excess momentum flux caused by sea surface waves. Radiation stresses behave as a second-order tensor. "xx" indicates the component of the tensor along the grid x_ axis.Pa
- "Sea surface wave radiation stress" describes the excess momentum flux caused by sea surface waves. Radiation stresses behave as a second-order tensor. "xy" indicates the lateral contributions to x_ and y_ components of the tensor.
+ "Sea surface wave radiation stress" describes the excess momentum flux caused by sea surface waves. Radiation stresses behave as a second-order tensor. "xy" indicates the lateral contributions to x_ and y_ components of the tensor.Pa
- "Sea surface wave radiation stress" describes the excess momentum flux caused by sea surface waves. Radiation stresses behave as a second-order tensor. "yy" indicates the component of the tensor along the grid y_ axis.
+ "Sea surface wave radiation stress" describes the excess momentum flux caused by sea surface waves. Radiation stresses behave as a second-order tensor. "yy" indicates the component of the tensor along the grid y_ axis.
@@ -31472,16 +31654,12 @@
-
- biological_taxon_lsid
-
-
temperature_in_ground
-
- surface_snow_density
+
+ biological_taxon_lsid
@@ -31516,14 +31694,18 @@
tendency_of_atmosphere_mass_content_of_water_vapor_due_to_sublimation_of_surface_snow_and_ice
-
- atmosphere_upward_absolute_vorticity
+
+ surface_snow_density
atmosphere_upward_relative_vorticity
+
+ atmosphere_upward_absolute_vorticity
+
+
area_type
@@ -31532,34 +31714,46 @@
area_type
-
- iron_growth_limitation_of_diazotrophic_phytoplankton
+
+ mass_fraction_of_liquid_precipitation_in_air
-
- growth_limitation_of_diazotrophic_phytoplankton_due_to_solar_irradiance
+
+ mass_fraction_of_liquid_precipitation_in_air
tendency_of_mole_concentration_of_particulate_organic_matter_expressed_as_carbon_in_sea_water_due_to_net_primary_production_by_diazotrophic_phytoplankton
-
- mole_concentration_of_diazotrophic_phytoplankton_expressed_as_carbon_in_sea_water
+
+ nitrogen_growth_limitation_of_diazotrophic_phytoplankton
-
- mass_fraction_of_liquid_precipitation_in_air
+
+ net_primary_mole_productivity_of_biomass_expressed_as_carbon_by_diazotrophic_phytoplankton
-
- mass_fraction_of_liquid_precipitation_in_air
+
+ net_primary_mole_productivity_of_biomass_expressed_as_carbon_by_diazotrophic_phytoplankton
+
+
+
+ mole_concentration_of_diazotrophic_phytoplankton_expressed_as_carbon_in_sea_water
mass_concentration_of_diazotrophic_phytoplankton_expressed_as_chlorophyll_in_sea_water
+
+ iron_growth_limitation_of_diazotrophic_phytoplankton
+
+
+
+ growth_limitation_of_diazotrophic_phytoplankton_due_to_solar_irradiance
+
+
air_pseudo_equivalent_potential_temperature
@@ -31576,64 +31770,300 @@
tendency_of_mass_fraction_of_stratiform_cloud_ice_in_air_due_to_riming_from_cloud_liquid_water
-
- nitrogen_growth_limitation_of_diazotrophic_phytoplankton
+
+ sea_water_velocity_from_direction
-
- net_primary_mole_productivity_of_biomass_expressed_as_carbon_by_diazotrophic_phytoplankton
+
+ sea_water_velocity_to_direction
-
- net_primary_mole_productivity_of_biomass_expressed_as_carbon_by_diazotrophic_phytoplankton
+
+ sea_water_velocity_to_direction
-
- air_pseudo_equivalent_temperature
+
+ integral_wrt_depth_of_product_of_salinity_and_sea_water_density
-
- air_equivalent_temperature
+
+ integral_wrt_depth_of_product_of_conservative_temperature_and_sea_water_density
-
- atmosphere_mass_content_of_convective_cloud_liquid_water
+
+ integral_wrt_depth_of_product_of_potential_temperature_and_sea_water_density
-
- effective_radius_of_cloud_liquid_water_particles_at_liquid_water_cloud_top
+
+ volume_fraction_of_condensed_water_in_soil_at_wilting_point
-
- northward_heat_flux_in_air_due_to_eddy_advection
+
+ volume_fraction_of_condensed_water_in_soil_at_field_capacity
-
- northward_eliassen_palm_flux_in_air
+
+ volume_fraction_of_condensed_water_in_soil_at_critical_point
-
- net_primary_productivity_of_biomass_expressed_as_carbon_accumulated_in_wood
+
+ volume_fraction_of_condensed_water_in_soil
-
- net_primary_productivity_of_biomass_expressed_as_carbon_accumulated_in_leaves
+
+ product_of_lagrangian_tendency_of_air_pressure_and_specific_humidity
-
- net_primary_productivity_of_biomass_expressed_as_carbon
+
+ product_of_lagrangian_tendency_of_air_pressure_and_specific_humidity
-
- mole_concentration_of_nitric_acid_trihydrate_ambient_aerosol_particles_in_air
+
+ product_of_lagrangian_tendency_of_air_pressure_and_geopotential_height
-
- mole_concentration_of_microzooplankton_expressed_as_nitrogen_in_sea_water
+
+ product_of_lagrangian_tendency_of_air_pressure_and_air_temperature
-
- mole_concentration_of_mesozooplankton_expressed_as_nitrogen_in_sea_water
+
+ product_of_lagrangian_tendency_of_air_pressure_and_air_temperature
+
+
+
+ tendency_of_sea_water_salinity_expressed_as_salt_content_due_to_parameterized_dianeutral_mixing
+
+
+
+ tendency_of_sea_water_potential_temperature_expressed_as_heat_content_due_to_parameterized_dianeutral_mixing
+
+
+
+ tendency_of_sea_water_conservative_temperature_expressed_as_heat_content_due_to_parameterized_dianeutral_mixing
+
+
+
+ effective_radius_of_stratiform_cloud_snow_particles
+
+
+
+ tendency_of_atmosphere_moles_of_cfc11
+
+
+
+ moles_of_cfc11_per_unit_mass_in_sea_water
+
+
+
+ atmosphere_moles_of_cfc11
+
+
+
+ tendency_of_atmosphere_moles_of_cfc113
+
+
+
+ atmosphere_moles_of_cfc113
+
+
+
+ tendency_of_atmosphere_moles_of_cfc114
+
+
+
+ atmosphere_moles_of_cfc114
+
+
+
+ tendency_of_atmosphere_moles_of_cfc115
+
+
+
+ atmosphere_moles_of_cfc115
+
+
+
+ tendency_of_atmosphere_moles_of_cfc12
+
+
+
+ atmosphere_moles_of_cfc12
+
+
+
+ tendency_of_atmosphere_moles_of_halon1202
+
+
+
+ atmosphere_moles_of_halon1202
+
+
+
+ tendency_of_atmosphere_moles_of_halon1211
+
+
+
+ atmosphere_moles_of_halon1211
+
+
+
+ tendency_of_atmosphere_moles_of_halon1301
+
+
+
+ atmosphere_moles_of_halon1301
+
+
+
+ tendency_of_atmosphere_moles_of_halon2402
+
+
+
+ atmosphere_moles_of_halon2402
+
+
+
+ tendency_of_atmosphere_moles_of_hcc140a
+
+
+
+ atmosphere_moles_of_hcc140a
+
+
+
+ tendency_of_troposphere_moles_of_hcc140a
+
+
+
+ tendency_of_middle_atmosphere_moles_of_hcc140a
+
+
+
+ tendency_of_troposphere_moles_of_hcfc22
+
+
+
+ tendency_of_atmosphere_moles_of_hcfc22
+
+
+
+ atmosphere_moles_of_hcfc22
+
+
+
+ tendency_of_atmosphere_number_content_of_aerosol_particles_due_to_turbulent_deposition
+
+
+
+ lagrangian_tendency_of_atmosphere_sigma_coordinate
+
+
+
+ lagrangian_tendency_of_atmosphere_sigma_coordinate
+
+
+
+ electrical_mobility_diameter_of_ambient_aerosol_particles
+
+
+
+ diameter_of_ambient_aerosol_particles
+
+
+
+ mass_concentration_of_biomass_burning_dry_aerosol_particles_in_air
+
+
+
+ effective_radius_of_stratiform_cloud_rain_particles
+
+
+
+ effective_radius_of_stratiform_cloud_ice_particles
+
+
+
+ effective_radius_of_stratiform_cloud_graupel_particles
+
+
+
+ effective_radius_of_convective_cloud_snow_particles
+
+
+
+ effective_radius_of_convective_cloud_rain_particles
+
+
+
+ effective_radius_of_convective_cloud_ice_particles
+
+
+
+ histogram_of_backscattering_ratio_in_air_over_height_above_reference_ellipsoid
+
+
+
+ backscattering_ratio_in_air
+
+
+
+ product_of_northward_wind_and_lagrangian_tendency_of_air_pressure
+
+
+
+ product_of_eastward_wind_and_lagrangian_tendency_of_air_pressure
+
+
+
+ carbon_mass_flux_into_litter_and_soil_due_to_anthropogenic_land_use_or_land_cover_change
+
+
+
+ floating_ice_shelf_area_fraction
+
+
+
+ atmosphere_moles_of_carbon_tetrachloride
+
+
+
+ mole_fraction_of_methylglyoxal_in_air
+
+
+
+ mole_fraction_of_dichlorine_peroxide_in_air
+
+
+
+ atmosphere_mass_content_of_convective_cloud_liquid_water
+
+
+
+ effective_radius_of_cloud_liquid_water_particles_at_liquid_water_cloud_top
+
+
+
+ air_equivalent_temperature
+
+
+
+ air_pseudo_equivalent_temperature
+
+
+
+ mass_content_of_cloud_liquid_water_in_atmosphere_layer
+
+
+
+ air_equivalent_potential_temperature
+
+
+
+ number_concentration_of_stratiform_cloud_liquid_water_particles_at_stratiform_liquid_water_cloud_top
+
+
+
+ number_concentration_of_convective_cloud_liquid_water_particles_at_convective_liquid_water_cloud_top
@@ -31660,360 +32090,104 @@
atmosphere_mass_content_of_cloud_liquid_water
-
- mass_fraction_of_sulfate_dry_aerosol_particles_in_air
-
-
-
- mass_fraction_of_nitric_acid_trihydrate_ambient_aerosol_particles_in_air
-
-
-
- mass_fraction_of_ammonium_dry_aerosol_particles_in_air
-
-
-
- tendency_of_mass_content_of_water_vapor_in_atmosphere_layer_due_to_shallow_convection
-
-
-
- tendency_of_mass_content_of_water_vapor_in_atmosphere_layer
-
-
-
- mass_content_of_cloud_ice_in_atmosphere_layer
-
-
-
- mass_concentration_of_secondary_particulate_organic_matter_dry_aerosol_particles_in_air
-
-
-
- mass_concentration_of_mercury_dry_aerosol_particles_in_air
-
-
-
- mass_concentration_of_coarse_mode_ambient_aerosol_particles_in_air
-
-
-
- sea_water_velocity_to_direction
-
-
-
- sea_water_velocity_to_direction
-
-
-
- gross_primary_productivity_of_biomass_expressed_as_carbon
-
-
-
- eastward_water_vapor_flux_in_air
-
-
-
- atmosphere_moles_of_nitric_acid_trihydrate_ambient_aerosol_particles
-
-
-
- tendency_of_middle_atmosphere_moles_of_carbon_monoxide
-
-
-
- tendency_of_atmosphere_mass_content_of_water_vapor_due_to_advection
-
-
-
- tendency_of_atmosphere_mass_content_of_water_vapor
-
-
-
- lwe_thickness_of_atmosphere_mass_content_of_water_vapor
-
-
-
- change_over_time_in_atmosphere_mass_content_of_water_due_to_advection
-
-
-
- change_over_time_in_atmosphere_mass_content_of_water_due_to_advection
-
-
-
- atmosphere_mass_content_of_water_vapor
-
-
-
- tendency_of_atmosphere_mass_content_of_sulfate_dry_aerosol_particles_expressed_as_sulfur_due_to_gravitational_settling
-
-
-
- tendency_of_atmosphere_mass_content_of_sulfate_dry_aerosol_particles_expressed_as_sulfur_due_to_gravitational_settling
-
-
-
- tendency_of_atmosphere_mass_content_of_sulfate_dry_aerosol_particles_expressed_as_sulfur_due_to_dry_deposition
-
-
-
- tendency_of_atmosphere_mass_content_of_sulfate_dry_aerosol_particles_expressed_as_sulfur_due_to_dry_deposition
-
-
-
- tendency_of_middle_atmosphere_moles_of_methyl_bromide
-
-
-
- atmosphere_mass_content_of_sulfate_dry_aerosol_particles_expressed_as_sulfur
-
-
-
- atmosphere_mass_content_of_sulfate_dry_aerosol_particles_expressed_as_sulfur
-
-
-
- atmosphere_mass_content_of_sulfate
-
-
-
- atmosphere_mass_content_of_sulfate
-
-
-
- tendency_of_atmosphere_mass_content_of_secondary_particulate_organic_matter_dry_aerosol_particles_due_to_wet_deposition
-
-
-
- tendency_of_atmosphere_mass_content_of_secondary_particulate_organic_matter_dry_aerosol_particles_due_to_net_chemical_production
-
-
-
- tendency_of_atmosphere_mass_content_of_secondary_particulate_organic_matter_dry_aerosol_particles_due_to_net_chemical_production
-
-
-
- tendency_of_atmosphere_mass_content_of_secondary_particulate_organic_matter_dry_aerosol_particles_due_to_dry_deposition
-
-
-
- atmosphere_mass_content_of_secondary_particulate_organic_matter_dry_aerosol_particles
-
-
-
- tendency_of_atmosphere_mass_content_of_water_vapor_due_to_deep_convection
-
-
-
- tendency_of_atmosphere_mass_content_of_water_vapor_due_to_convection
-
-
-
- atmosphere_mass_content_of_primary_particulate_organic_matter_dry_aerosol_particles
-
-
-
- mass_content_of_cloud_liquid_water_in_atmosphere_layer
-
-
-
- air_equivalent_potential_temperature
-
-
-
- number_concentration_of_stratiform_cloud_liquid_water_particles_at_stratiform_liquid_water_cloud_top
-
-
-
- number_concentration_of_convective_cloud_liquid_water_particles_at_convective_liquid_water_cloud_top
-
-
-
- wave_frequency
-
-
-
- upward_eastward_momentum_flux_in_air_due_to_nonorographic_eastward_gravity_waves
-
-
-
- tendency_of_troposphere_moles_of_carbon_monoxide
-
-
-
- tendency_of_atmosphere_moles_of_sulfate_dry_aerosol_particles
-
-
-
- tendency_of_atmosphere_mass_content_of_nitrate_dry_aerosol_particles_due_to_dry_deposition
-
-
-
- tendency_of_atmosphere_mass_content_of_particulate_organic_matter_dry_aerosol_particles_expressed_as_carbon_due_to_emission_from_waste_treatment_and_disposal
-
-
-
- tendency_of_atmosphere_mass_content_of_particulate_organic_matter_dry_aerosol_particles_expressed_as_carbon_due_to_emission_from_savanna_and_grassland_fires
-
-
-
- tendency_of_atmosphere_mass_content_of_particulate_organic_matter_dry_aerosol_particles_expressed_as_carbon_due_to_emission_from_maritime_transport
-
-
-
- tendency_of_atmosphere_mass_content_of_particulate_organic_matter_dry_aerosol_particles_expressed_as_carbon_due_to_emission_from_land_transport
-
-
-
- tendency_of_atmosphere_mass_content_of_particulate_organic_matter_dry_aerosol_particles_expressed_as_carbon_due_to_emission_from_forest_fires
-
-
-
- tendency_of_atmosphere_mass_content_of_particulate_organic_matter_dry_aerosol_particles_expressed_as_carbon_due_to_emission_from_agricultural_waste_burning
-
-
-
- tendency_of_atmosphere_mass_content_of_primary_particulate_organic_matter_dry_aerosol_particles_due_to_wet_deposition
-
-
-
- tendency_of_atmosphere_mass_content_of_particulate_organic_matter_dry_aerosol_particles_due_to_wet_deposition
-
-
-
- tendency_of_atmosphere_mass_content_of_particulate_organic_matter_dry_aerosol_particles_due_to_turbulent_deposition
-
-
-
- tendency_of_atmosphere_mass_content_of_particulate_organic_matter_dry_aerosol_particles_due_to_net_chemical_production_and_emission
-
-
-
- tendency_of_atmosphere_mass_content_of_particulate_organic_matter_dry_aerosol_particles_due_to_net_chemical_production_and_emission
-
-
-
- tendency_of_atmosphere_mass_content_of_particulate_organic_matter_dry_aerosol_particles_due_to_gravitational_settling
+
+ mole_fraction_of_noy_expressed_as_nitrogen_in_air
-
- tendency_of_atmosphere_mass_content_of_particulate_organic_matter_dry_aerosol_particles_due_to_dry_deposition
+
+ tendency_of_atmosphere_moles_of_methane
-
- atmosphere_mass_content_of_particulate_organic_matter_dry_aerosol_particles
+
+ rate_of_hydroxyl_radical_destruction_due_to_reaction_with_nmvoc
-
- integral_wrt_depth_of_product_of_conservative_temperature_and_sea_water_density
+
+ net_primary_mole_productivity_of_biomass_expressed_as_carbon_by_miscellaneous_phytoplankton
-
- integral_wrt_depth_of_product_of_salinity_and_sea_water_density
+
+ mole_fraction_of_inorganic_bromine_in_air
-
- tendency_of_atmosphere_moles_of_methyl_bromide
+
+ water_vapor_saturation_deficit_in_air
-
- integral_wrt_depth_of_product_of_potential_temperature_and_sea_water_density
+
+ tendency_of_atmosphere_mass_content_of_elemental_carbon_dry_aerosol_particles_due_to_emission_from_agricultural_waste_burning
-
- atmosphere_moles_of_methyl_bromide
+
+ tendency_of_atmosphere_moles_of_carbon_tetrachloride
-
- product_of_lagrangian_tendency_of_air_pressure_and_specific_humidity
+
+ tendency_of_atmosphere_moles_of_carbon_monoxide
-
- product_of_lagrangian_tendency_of_air_pressure_and_specific_humidity
+
+ platform_yaw
-
- tendency_of_sea_water_potential_temperature_expressed_as_heat_content_due_to_parameterized_dianeutral_mixing
+
+ platform_pitch
-
- tendency_of_sea_water_conservative_temperature_expressed_as_heat_content_due_to_parameterized_dianeutral_mixing
+
+ platform_roll
-
- volume_fraction_of_condensed_water_in_soil_at_wilting_point
+
+ tendency_of_specific_humidity_due_to_stratiform_precipitation
-
- volume_fraction_of_condensed_water_in_soil_at_field_capacity
+
+ tendency_of_air_temperature_due_to_stratiform_precipitation
-
- volume_fraction_of_condensed_water_in_soil_at_critical_point
+
+ stratiform_precipitation_flux
-
- volume_fraction_of_condensed_water_in_soil
+
+ stratiform_precipitation_amount
-
- product_of_lagrangian_tendency_of_air_pressure_and_geopotential_height
+
+ lwe_thickness_of_stratiform_precipitation_amount
-
- product_of_lagrangian_tendency_of_air_pressure_and_air_temperature
+
+ lwe_stratiform_precipitation_rate
-
- product_of_lagrangian_tendency_of_air_pressure_and_air_temperature
+
+ water_evaporation_amount_from_canopy
-
- tendency_of_sea_water_salinity_expressed_as_salt_content_due_to_parameterized_dianeutral_mixing
+
+ water_evaporation_flux_from_canopy
-
- atmosphere_moles_of_methane
+
+ precipitation_flux_onto_canopy
-
- electrical_mobility_diameter_of_ambient_aerosol_particles
+
+ outgoing_water_volume_transport_along_river_channel
-
- histogram_of_backscattering_ratio_in_air_over_height_above_reference_ellipsoid
+
+ tendency_of_sea_ice_amount_due_to_conversion_of_snow_to_sea_ice
+
+ tendency_of_atmosphere_mass_content_of_mercury_dry_aerosol_particles_due_to_emission
-
- effective_radius_of_stratiform_cloud_snow_particles
-
-
-
- mass_concentration_of_biomass_burning_dry_aerosol_particles_in_air
-
-
-
- atmosphere_mass_content_of_nitric_acid_trihydrate_ambient_aerosol_particles
-
-
-
- atmosphere_mass_content_of_nitrate_dry_aerosol_particles
-
-
-
- atmosphere_mass_content_of_mercury_dry_aerosol_particles
-
-
-
- backscattering_ratio_in_air
-
-
-
- product_of_northward_wind_and_lagrangian_tendency_of_air_pressure
+
+ mass_fraction_of_mercury_dry_aerosol_particles_in_air
@@ -32024,256 +32198,224 @@
tendency_of_atmosphere_mass_content_of_sulfate_dry_aerosol_particles_expressed_as_sulfur_due_to_wet_deposition
-
- tendency_of_atmosphere_moles_of_cfc11
-
-
-
- moles_of_cfc11_per_unit_mass_in_sea_water
-
-
-
- atmosphere_moles_of_cfc11
-
-
-
- tendency_of_atmosphere_moles_of_hcc140a
-
-
-
- effective_radius_of_convective_cloud_rain_particles
-
-
-
- tendency_of_troposphere_moles_of_hcc140a
-
-
-
- tendency_of_middle_atmosphere_moles_of_hcc140a
-
-
-
- tendency_of_troposphere_moles_of_hcfc22
-
-
-
- tendency_of_atmosphere_moles_of_hcfc22
+
+ stratiform_cloud_area_fraction
-
- atmosphere_moles_of_hcfc22
+
+ magnitude_of_sea_ice_displacement
-
- tendency_of_atmosphere_number_content_of_aerosol_particles_due_to_turbulent_deposition
+
+ surface_downwelling_spherical_irradiance_per_unit_wavelength_in_sea_water
-
- lagrangian_tendency_of_atmosphere_sigma_coordinate
+
+ surface_downwelling_shortwave_flux_in_air_assuming_clear_sky_and_no_aerosol
-
- lagrangian_tendency_of_atmosphere_sigma_coordinate
+
+ surface_downwelling_shortwave_flux_in_air_assuming_clear_sky
-
- diameter_of_ambient_aerosol_particles
+
+ surface_downwelling_shortwave_flux_in_air
-
- effective_radius_of_stratiform_cloud_ice_particles
+
+ surface_downwelling_radiative_flux_per_unit_wavelength_in_sea_water
-
- effective_radius_of_convective_cloud_ice_particles
+
+ surface_downwelling_radiative_flux_per_unit_wavelength_in_air
-
- effective_radius_of_stratiform_cloud_graupel_particles
+
+ surface_downwelling_radiance_per_unit_wavelength_in_sea_water
-
- effective_radius_of_stratiform_cloud_rain_particles
+
+ surface_downwelling_photon_spherical_irradiance_per_unit_wavelength_in_sea_water
-
- effective_radius_of_convective_cloud_snow_particles
+
+ surface_downwelling_photon_radiance_per_unit_wavelength_in_sea_water
-
- product_of_eastward_wind_and_lagrangian_tendency_of_air_pressure
+
+ surface_downwelling_photon_flux_per_unit_wavelength_in_sea_water
-
- carbon_mass_flux_into_litter_and_soil_due_to_anthropogenic_land_use_or_land_cover_change
+
+ surface_downwelling_longwave_flux_in_air
-
- stratiform_cloud_area_fraction
+
+ integral_wrt_time_of_surface_downwelling_shortwave_flux_in_air
-
- sea_water_velocity_from_direction
+
+ integral_wrt_time_of_surface_downwelling_longwave_flux_in_air
-
- thickness_of_stratiform_snowfall_amount
+
+ downwelling_spherical_irradiance_per_unit_wavelength_in_sea_water
-
- optical_thickness_of_atmosphere_layer_due_to_ambient_aerosol_particles
+
+ downwelling_shortwave_flux_in_air_assuming_clear_sky_and_no_aerosol
-
- optical_thickness_of_atmosphere_layer_due_to_ambient_aerosol_particles
+
+ downwelling_radiative_flux_per_unit_wavelength_in_sea_water
-
- lwe_thickness_of_stratiform_snowfall_amount
+
+ downwelling_radiative_flux_per_unit_wavelength_in_air
-
- equivalent_thickness_at_stp_of_atmosphere_ozone_content
+
+ downwelling_radiance_per_unit_wavelength_in_sea_water
-
- atmosphere_optical_thickness_due_to_water_in_ambient_aerosol_particles
+
+ downwelling_radiance_per_unit_wavelength_in_air
-
- atmosphere_optical_thickness_due_to_dust_dry_aerosol_particles
+
+ downwelling_photon_spherical_irradiance_per_unit_wavelength_in_sea_water
-
- atmosphere_optical_thickness_due_to_dust_ambient_aerosol_particles
+
+ downwelling_photon_radiance_per_unit_wavelength_in_sea_water
-
- atmosphere_optical_thickness_due_to_ambient_aerosol_particles
+
+ downwelling_photon_flux_per_unit_wavelength_in_sea_water
-
- atmosphere_optical_thickness_due_to_ambient_aerosol_particles
+
+ surface_upwelling_shortwave_flux_in_air_assuming_clear_sky
-
- atmosphere_net_upward_convective_mass_flux
+
+ surface_upwelling_longwave_flux_in_air_assuming_clear_sky
-
- mass_fraction_of_mercury_dry_aerosol_particles_in_air
+
+ upwelling_shortwave_flux_in_air_assuming_clear_sky_and_no_aerosol
-
- atmosphere_moles_of_hcc140a
+
+ upwelling_radiative_flux_per_unit_wavelength_in_sea_water
-
- floating_ice_shelf_area_fraction
+
+ upwelling_radiative_flux_per_unit_wavelength_in_air
-
- atmosphere_moles_of_carbon_tetrachloride
+
+ upwelling_radiance_per_unit_wavelength_in_air
-
- mole_fraction_of_methylglyoxal_in_air
+
+ surface_upwelling_shortwave_flux_in_air_assuming_clear_sky_and_no_aerosol
-
- mole_fraction_of_dichlorine_peroxide_in_air
+
+ surface_upwelling_shortwave_flux_in_air
-
- mole_fraction_of_noy_expressed_as_nitrogen_in_air
+
+ surface_upwelling_radiative_flux_per_unit_wavelength_in_sea_water
-
- net_primary_mole_productivity_of_biomass_expressed_as_carbon_by_miscellaneous_phytoplankton
+
+ surface_upwelling_radiative_flux_per_unit_wavelength_in_air
-
- mole_fraction_of_inorganic_bromine_in_air
+
+ surface_upwelling_radiance_per_unit_wavelength_in_sea_water
-
- water_vapor_saturation_deficit_in_air
+
+ volume_scattering_coefficient_of_radiative_flux_in_air_due_to_ambient_aerosol_particles
-
- tendency_of_atmosphere_mass_content_of_elemental_carbon_dry_aerosol_particles_due_to_emission_from_agricultural_waste_burning
+
+ volume_scattering_coefficient_of_radiative_flux_in_air_due_to_dried_aerosol_particles
-
- tendency_of_atmosphere_moles_of_carbon_tetrachloride
+
+ soil_mass_content_of_carbon
-
- tendency_of_atmosphere_moles_of_carbon_monoxide
+
+ slow_soil_pool_mass_content_of_carbon
-
- tendency_of_atmosphere_moles_of_cfc113
+
+ root_mass_content_of_carbon
-
- atmosphere_moles_of_cfc113
+
+ miscellaneous_living_matter_mass_content_of_carbon
-
- tendency_of_atmosphere_moles_of_cfc114
+
+ fast_soil_pool_mass_content_of_carbon
-
- atmosphere_moles_of_cfc114
+
+ medium_soil_pool_mass_content_of_carbon
-
- tendency_of_atmosphere_moles_of_cfc115
+
+ leaf_mass_content_of_carbon
-
- atmosphere_moles_of_cfc115
+
+ carbon_mass_content_of_forestry_and_agricultural_products
-
- tendency_of_atmosphere_moles_of_cfc12
+
+ carbon_mass_content_of_forestry_and_agricultural_products
-
- atmosphere_moles_of_cfc12
+
+ surface_upward_mass_flux_of_carbon_dioxide_expressed_as_carbon_due_to_plant_respiration_for_biomass_maintenance
-
- tendency_of_atmosphere_moles_of_halon1202
+
+ surface_upward_mass_flux_of_carbon_dioxide_expressed_as_carbon_due_to_plant_respiration_for_biomass_growth
-
- atmosphere_moles_of_halon1202
+
+ surface_upward_mass_flux_of_carbon_dioxide_expressed_as_carbon_due_to_plant_respiration
-
- tendency_of_atmosphere_moles_of_halon1211
+
+ surface_upward_mass_flux_of_carbon_dioxide_expressed_as_carbon_due_to_respiration_in_soil
-
- atmosphere_moles_of_halon1211
+
+ surface_upward_mass_flux_of_carbon_dioxide_expressed_as_carbon_due_to_heterotrophic_respiration
-
- tendency_of_atmosphere_moles_of_halon1301
+
+ northward_transformed_eulerian_mean_air_velocity
-
- atmosphere_moles_of_halon1301
+
+ eastward_transformed_eulerian_mean_air_velocity
-
- tendency_of_atmosphere_moles_of_halon2402
+
+ surface_litter_mass_content_of_carbon
-
- atmosphere_moles_of_halon2402
+
+ litter_mass_content_of_carbon
@@ -32308,14 +32450,14 @@
mole_concentration_of_diatoms_expressed_as_nitrogen_in_sea_water
-
- tendency_of_mole_concentration_of_dissolved_inorganic_phosphorus_in_sea_water_due_to_biological_processes
-
-
tendency_of_mole_concentration_of_dissolved_inorganic_silicon_in_sea_water_due_to_biological_processes
+
+ tendency_of_mole_concentration_of_dissolved_inorganic_phosphorus_in_sea_water_due_to_biological_processes
+
+
tendency_of_atmosphere_mole_concentration_of_carbon_monoxide_due_to_chemical_destruction
@@ -32324,56 +32466,64 @@
volume_extinction_coefficient_in_air_due_to_ambient_aerosol_particles
-
- atmosphere_mass_content_of_convective_cloud_condensed_water
+
+ water_vapor_partial_pressure_in_air
-
- water_evaporation_flux_from_canopy
+
+ platform_name
-
- precipitation_flux_onto_canopy
+
+ platform_id
-
- surface_downwelling_shortwave_flux_in_air_assuming_clear_sky
+
+ mass_flux_of_carbon_into_litter_from_vegetation
-
- surface_downwelling_radiance_per_unit_wavelength_in_sea_water
+
+ subsurface_litter_mass_content_of_carbon
-
- upwelling_radiative_flux_per_unit_wavelength_in_sea_water
+
+ stem_mass_content_of_carbon
-
- downwelling_photon_flux_per_unit_wavelength_in_sea_water
+
+ mole_concentration_of_dissolved_inorganic_14C_in_sea_water
-
- downwelling_radiance_per_unit_wavelength_in_sea_water
+
+ surface_downward_mass_flux_of_14C_dioxide_abiotic_analogue_expressed_as_carbon
-
- surface_downwelling_photon_radiance_per_unit_wavelength_in_sea_water
+
+ surface_downward_mass_flux_of_13C_dioxide_abiotic_analogue_expressed_as_13C
-
- surface_downwelling_spherical_irradiance_per_unit_wavelength_in_sea_water
+
+ mole_concentration_of_dissolved_inorganic_13C_in_sea_water
-
- surface_upwelling_radiative_flux_per_unit_wavelength_in_sea_water
+
+ surface_upwelling_radiance_per_unit_wavelength_in_air_reflected_by_sea_water
-
- surface_downwelling_shortwave_flux_in_air
+
+ surface_upwelling_radiance_per_unit_wavelength_in_air_emerging_from_sea_water
-
- tendency_of_sea_ice_amount_due_to_conversion_of_snow_to_sea_ice
+
+ surface_upwelling_radiance_per_unit_wavelength_in_air
+
+
+
+ surface_upwelling_longwave_flux_in_air
+
+
+
+ incoming_water_volume_transport_along_river_channel
@@ -32392,792 +32542,820 @@
sea_ice_temperature_expressed_as_heat_content
-
- outgoing_water_volume_transport_along_river_channel
+
+ water_evapotranspiration_flux
-
- lwe_thickness_of_stratiform_precipitation_amount
+
+ surface_water_evaporation_flux
-
- tendency_of_atmosphere_moles_of_methane
+
+ water_volume_transport_into_sea_water_from_rivers
-
- rate_of_hydroxyl_radical_destruction_due_to_reaction_with_nmvoc
+
+ stratiform_graupel_flux
-
- magnitude_of_sea_ice_displacement
+
+ wood_debris_mass_content_of_carbon
-
- surface_downwelling_radiative_flux_per_unit_wavelength_in_sea_water
+
+ toa_outgoing_shortwave_flux_assuming_clear_sky_and_no_aerosol
-
- surface_downwelling_radiative_flux_per_unit_wavelength_in_air
+
+ water_flux_into_sea_water_from_rivers
-
- surface_downwelling_shortwave_flux_in_air_assuming_clear_sky_and_no_aerosol
+
+ integral_wrt_height_of_product_of_northward_wind_and_specific_humidity
-
- surface_downwelling_photon_spherical_irradiance_per_unit_wavelength_in_sea_water
+
+ integral_wrt_height_of_product_of_eastward_wind_and_specific_humidity
-
- surface_downwelling_photon_flux_per_unit_wavelength_in_sea_water
+
+ integral_wrt_depth_of_sea_water_temperature
-
- surface_downwelling_longwave_flux_in_air
+
+ integral_wrt_depth_of_sea_water_temperature
-
- integral_wrt_time_of_surface_downwelling_shortwave_flux_in_air
+
+ integral_wrt_depth_of_sea_water_temperature
-
- integral_wrt_time_of_surface_downwelling_longwave_flux_in_air
+
+ integral_wrt_depth_of_sea_water_temperature
-
- downwelling_spherical_irradiance_per_unit_wavelength_in_sea_water
+
+ integral_wrt_depth_of_sea_water_practical_salinity
-
- downwelling_radiative_flux_per_unit_wavelength_in_sea_water
+
+ northward_ocean_heat_transport_due_to_parameterized_eddy_advection
-
- downwelling_radiative_flux_per_unit_wavelength_in_air
+
+ tendency_of_ocean_eddy_kinetic_energy_content_due_to_parameterized_eddy_advection
-
- downwelling_shortwave_flux_in_air_assuming_clear_sky_and_no_aerosol
+
+ ocean_tracer_laplacian_diffusivity_due_to_parameterized_mesoscale_eddy_advection
-
- downwelling_photon_spherical_irradiance_per_unit_wavelength_in_sea_water
+
+ ocean_tracer_biharmonic_diffusivity_due_to_parameterized_mesoscale_eddy_advection
-
- downwelling_radiance_per_unit_wavelength_in_air
+
+ upward_sea_water_velocity_due_to_parameterized_mesoscale_eddies
-
- downwelling_photon_radiance_per_unit_wavelength_in_sea_water
+
+ sea_water_y_velocity_due_to_parameterized_mesoscale_eddies
-
- surface_upwelling_shortwave_flux_in_air_assuming_clear_sky
+
+ sea_water_x_velocity_due_to_parameterized_mesoscale_eddies
-
- surface_upwelling_longwave_flux_in_air_assuming_clear_sky
+
+ eastward_sea_water_velocity_due_to_parameterized_mesoscale_eddies
-
- upwelling_shortwave_flux_in_air_assuming_clear_sky_and_no_aerosol
+
+ northward_sea_water_velocity_due_to_parameterized_mesoscale_eddies
-
- upwelling_radiative_flux_per_unit_wavelength_in_air
+
+ tendency_of_sea_water_temperature_due_to_parameterized_eddy_advection
-
- upwelling_radiance_per_unit_wavelength_in_air
+
+ tendency_of_sea_water_salinity_due_to_parameterized_eddy_advection
-
- surface_upwelling_shortwave_flux_in_air_assuming_clear_sky_and_no_aerosol
+
+ ocean_y_overturning_mass_streamfunction_due_to_parameterized_eddy_advection
-
- surface_upwelling_shortwave_flux_in_air
+
+ ocean_meridional_overturning_mass_streamfunction_due_to_parameterized_eddy_advection
-
- surface_upwelling_radiance_per_unit_wavelength_in_sea_water
+
+ ocean_mass_y_transport_due_to_advection_and_parameterized_eddy_advection
-
- incoming_water_volume_transport_along_river_channel
+
+ ocean_mass_x_transport_due_to_advection_and_parameterized_eddy_advection
-
- surface_upwelling_longwave_flux_in_air
+
+ ocean_heat_y_transport_due_to_parameterized_eddy_advection
-
- surface_upwelling_radiance_per_unit_wavelength_in_air_emerging_from_sea_water
+
+ ocean_heat_x_transport_due_to_parameterized_eddy_advection
-
- surface_upwelling_radiative_flux_per_unit_wavelength_in_air
+
+ northward_ocean_salt_transport_due_to_parameterized_eddy_advection
-
- surface_upwelling_radiance_per_unit_wavelength_in_air
+
+ northward_ocean_freshwater_transport_due_to_parameterized_eddy_advection
-
- surface_upwelling_radiance_per_unit_wavelength_in_air_reflected_by_sea_water
+
+ integral_wrt_time_of_toa_outgoing_longwave_flux
-
- wood_debris_mass_content_of_carbon
+
+ integral_wrt_time_of_toa_net_downward_shortwave_flux
-
- water_flux_into_sea_water_from_rivers
+
+ integral_wrt_time_of_surface_net_downward_shortwave_flux
-
- integral_wrt_depth_of_sea_water_temperature
+
+ integral_wrt_time_of_surface_net_downward_longwave_flux
-
- integral_wrt_depth_of_sea_water_temperature
+
+ integral_wrt_time_of_surface_downward_sensible_heat_flux
-
- integral_wrt_depth_of_sea_water_temperature
+
+ integral_wrt_time_of_surface_downward_latent_heat_flux
-
- integral_wrt_depth_of_sea_water_temperature
+
+ integral_wrt_time_of_air_temperature_excess
-
- volume_scattering_coefficient_of_radiative_flux_in_air_due_to_ambient_aerosol_particles
+
+ integral_wrt_time_of_air_temperature_deficit
-
- volume_scattering_coefficient_of_radiative_flux_in_air_due_to_dried_aerosol_particles
+
+ tendency_of_mass_concentration_of_elemental_carbon_dry_aerosol_particles_in_air_due_to_emission_from_aviation
-
- integral_wrt_height_of_product_of_northward_wind_and_specific_humidity
+
+ tendency_of_atmosphere_mass_content_of_elemental_carbon_dry_aerosol_particles_due_to_wet_deposition
-
- integral_wrt_depth_of_sea_water_practical_salinity
+
+ tendency_of_atmosphere_mass_content_of_elemental_carbon_dry_aerosol_particles_due_to_turbulent_deposition
-
- integral_wrt_height_of_product_of_eastward_wind_and_specific_humidity
+
+ tendency_of_atmosphere_mass_content_of_elemental_carbon_dry_aerosol_particles_due_to_gravitational_settling
-
- platform_yaw
+
+ tendency_of_atmosphere_mass_content_of_elemental_carbon_dry_aerosol_particles_due_to_emission_from_waste_treatment_and_disposal
-
- platform_roll
+
+ tendency_of_atmosphere_mass_content_of_elemental_carbon_dry_aerosol_particles_due_to_emission_from_savanna_and_grassland_fires
-
- water_vapor_partial_pressure_in_air
+
+ tendency_of_atmosphere_mass_content_of_elemental_carbon_dry_aerosol_particles_due_to_emission_from_residential_and_commercial_combustion
-
- platform_name
+
+ tendency_of_atmosphere_mass_content_of_elemental_carbon_dry_aerosol_particles_due_to_emission_from_maritime_transport
-
- platform_id
+
+ tendency_of_atmosphere_mass_content_of_elemental_carbon_dry_aerosol_particles_due_to_emission_from_land_transport
-
- platform_pitch
+
+ tendency_of_atmosphere_mass_content_of_elemental_carbon_dry_aerosol_particles_due_to_emission_from_industrial_processes_and_combustion
-
- tendency_of_specific_humidity_due_to_stratiform_precipitation
+
+ tendency_of_atmosphere_mass_content_of_elemental_carbon_dry_aerosol_particles_due_to_emission_from_forest_fires
-
- tendency_of_air_temperature_due_to_stratiform_precipitation
+
+ tendency_of_atmosphere_mass_content_of_elemental_carbon_dry_aerosol_particles_due_to_emission_from_energy_production_and_distribution
-
- water_evaporation_amount_from_canopy
+
+ tendency_of_atmosphere_mass_content_of_elemental_carbon_dry_aerosol_particles_due_to_emission
-
- tendency_of_atmosphere_mass_content_of_dust_dry_aerosol_particles_due_to_turbulent_deposition
+
+ tendency_of_atmosphere_mass_content_of_elemental_carbon_dry_aerosol_particles_due_to_dry_deposition
-
- tendency_of_atmosphere_mass_content_of_dust_dry_aerosol_particles_due_to_gravitational_settling
+
+ mass_fraction_of_elemental_carbon_dry_aerosol_particles_in_air
-
- tendency_of_atmosphere_mass_content_of_dust_dry_aerosol_particles_due_to_emission
+
+ atmosphere_mass_content_of_elemental_carbon_dry_aerosol_particles
-
- atmosphere_mass_content_of_cloud_ice
+
+ mass_concentration_of_elemental_carbon_dry_aerosol_particles_in_air
-
- stratiform_precipitation_amount
+
+ lagrangian_tendency_of_air_pressure
-
- tendency_of_atmosphere_moles_of_nitrous_oxide
+
+ lagrangian_tendency_of_air_pressure
-
- tendency_of_atmosphere_mass_content_of_dust_dry_aerosol_particles_due_to_dry_deposition
+
+ air_pressure_at_mean_sea_level
-
- medium_soil_pool_mass_content_of_carbon
+
+ sea_floor_depth_below_geoid
-
- surface_upward_mass_flux_of_carbon_dioxide_expressed_as_carbon_due_to_plant_respiration
+
+ sea_surface_height_above_geoid
-
- surface_downward_mass_flux_of_14C_dioxide_abiotic_analogue_expressed_as_carbon
+
+ sea_surface_height_above_geoid
-
- mole_concentration_of_dissolved_inorganic_13C_in_sea_water
+
+ tendency_of_atmosphere_mass_content_of_sea_salt_dry_aerosol_particles_due_to_emission
-
- surface_litter_mass_content_of_carbon
+
+ tendency_of_atmosphere_mass_content_of_sea_salt_dry_aerosol_particles_due_to_emission
-
- surface_upward_mass_flux_of_carbon_dioxide_expressed_as_carbon_due_to_heterotrophic_respiration
+
+ atmosphere_absorption_optical_thickness_due_to_sea_salt_ambient_aerosol_particles
-
- fast_soil_pool_mass_content_of_carbon
+
+ atmosphere_absorption_optical_thickness_due_to_sea_salt_ambient_aerosol_particles
-
- soil_mass_content_of_carbon
+
+ tendency_of_atmosphere_mass_content_of_nitrogen_compounds_expressed_as_nitrogen_due_to_deposition
-
- slow_soil_pool_mass_content_of_carbon
+
+ tendency_of_atmosphere_mass_content_of_nitrogen_compounds_expressed_as_nitrogen_due_to_dry_deposition
-
- root_mass_content_of_carbon
+
+ surface_geostrophic_eastward_sea_water_velocity
-
- miscellaneous_living_matter_mass_content_of_carbon
+
+ surface_geostrophic_northward_sea_water_velocity
-
- carbon_mass_content_of_forestry_and_agricultural_products
+
+ tendency_of_sea_surface_height_above_mean_sea_level
+
+
+
+ surface_geostrophic_sea_water_y_velocity_assuming_mean_sea_level_for_geoid
+
+
+
+ surface_geostrophic_sea_water_x_velocity_assuming_mean_sea_level_for_geoid
+
+
+
+ surface_geostrophic_northward_sea_water_velocity_assuming_mean_sea_level_for_geoid
+
+
+
+ surface_geostrophic_northward_sea_water_velocity_assuming_mean_sea_level_for_geoid
+
+
+
+ surface_geostrophic_eastward_sea_water_velocity_assuming_mean_sea_level_for_geoid
+
+
+
+ surface_geostrophic_eastward_sea_water_velocity_assuming_mean_sea_level_for_geoid
+
+
+
+ sea_surface_height_above_mean_sea_level
-
- carbon_mass_content_of_forestry_and_agricultural_products
+
+ sea_surface_height_above_mean_sea_level
-
- surface_upward_mass_flux_of_carbon_dioxide_expressed_as_carbon_due_to_plant_respiration_for_biomass_maintenance
+
+ sea_floor_depth_below_mean_sea_level
-
- surface_upward_mass_flux_of_carbon_dioxide_expressed_as_carbon_due_to_plant_respiration_for_biomass_growth
+
+