Add code coverage to CI #106

Merged: 2 commits, Jul 13, 2023
2 changes: 1 addition & 1 deletion .conda/benchcab-dev.yaml

@@ -7,5 +7,5 @@ dependencies:
 - f90nml
 - netcdf4
 - numpy
-- pytest
+- pytest-cov
 - pyyaml
7 changes: 6 additions & 1 deletion .github/workflows/pytest.yaml

@@ -23,4 +23,9 @@ jobs:
           environment-file: .conda/benchcab-dev.yaml
       - name: Test with pytest
         run: |
-          TMPDIR=${{ runner.temp }} pytest
+          pytest --cov=./ --cov-report=xml
+      - name: Upload coverage reports to Codecov
+        uses: codecov/codecov-action@v3
+        env:
+          CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
+          files: ./coverage.xml
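
With this change, the "Test with pytest" step writes an XML coverage report (coverage.xml) that the Codecov step then uploads. The same report can be generated locally before pushing; below is a minimal sketch, assuming pytest and pytest-cov are installed in the active environment (the run_coverage helper is illustrative and not part of this PR):

```python
"""Reproduce the CI coverage run locally (illustrative sketch)."""

import sys

import pytest


def run_coverage() -> int:
    """Run the test suite under coverage, mirroring the CI invocation.

    pytest.main() accepts the same arguments as the command line, so this
    is equivalent to running `pytest --cov=./ --cov-report=xml`.
    """
    return pytest.main(["--cov=./", "--cov-report=xml"])


if __name__ == "__main__":
    # Exit with pytest's return code so shells and CI can detect failures.
    sys.exit(run_coverage())
```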
2 changes: 2 additions & 0 deletions README.md

@@ -1,5 +1,7 @@
 # CABLE benchmarking
 
+[![codecov](https://codecov.io/gh/CABLE-LSM/benchcab/branch/master/graph/badge.svg?token=JJYE1YZDXQ)](https://codecov.io/gh/CABLE-LSM/benchcab)
+
 Repository to benchmark CABLE. The benchmark will run the exact same configurations on two CABLE branches specified by the user, e.g. a user branch (with personal changes) against the head of the trunk. The results should be attached to all new [tickets](https://trac.nci.org.au/trac/cable/report/1).
 
 The code will: (i) check out; (ii) build; and (iii) run branches across N standard science configurations. It is possible to produce some plots locally, but the outputs should be uploaded to [the modelevaluation website](https://modelevaluation.org/) for further benchmarking and evaluation.
4 changes: 2 additions & 2 deletions tests/common.py

@@ -1,9 +1,9 @@
 """Helper functions for `pytest`."""
 
-import os
+import tempfile
 from pathlib import Path
 
-TMP_DIR = Path(os.environ["TMPDIR"], "benchcab_tests")
+TMP_DIR = Path(tempfile.mkdtemp(prefix="benchcab_tests"))
 
 
 def make_barebones_config() -> dict:
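
Replacing os.environ["TMPDIR"] with tempfile.mkdtemp() means the test helpers no longer depend on TMPDIR being set in the environment (previously supplied as ${{ runner.temp }} in the workflow): mkdtemp creates a fresh, uniquely named directory under the platform's default temporary location. A minimal sketch of the pattern follows; the remove_tmp_dir cleanup helper is hypothetical and not part of this PR:

```python
"""Sketch of the tempfile-based test directory pattern."""

import shutil
import tempfile
from pathlib import Path

# mkdtemp() creates a unique directory such as /tmp/benchcab_testsXXXXXXXX
# and returns its absolute path, so no TMPDIR environment variable is needed.
TMP_DIR = Path(tempfile.mkdtemp(prefix="benchcab_tests"))


def remove_tmp_dir() -> None:
    """Delete the per-run test directory (hypothetical helper, not in the PR)."""
    shutil.rmtree(TMP_DIR, ignore_errors=True)


if __name__ == "__main__":
    print(f"Test files will be written under: {TMP_DIR}")
    remove_tmp_dir()
```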