Merge branch 'develop' into doc_sacess
dweindl committed Jan 4, 2024
2 parents e724389 + c350f65 commit 4424082
Showing 19 changed files with 327 additions and 157 deletions.
26 changes: 14 additions & 12 deletions .github/workflows/ci.yml
@@ -5,6 +5,7 @@ on:
push:
branches:
- main
- develop
pull_request:
workflow_dispatch:
schedule:
@@ -14,6 +15,8 @@ on:
env:
# use all available cores for compiling amici models
AMICI_PARALLEL_COMPILE: ""
# non-interactive backend for matplotlib
MPLBACKEND: "agg"

# jobs
jobs:
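The new `MPLBACKEND: "agg"` entry above pins matplotlib to a non-interactive backend, so test runs on headless CI workers never try to open a display, while figures can still be written to files. A quick illustration of the effect (not part of this diff; assumes matplotlib is installed):

```python
import os

# Mirror the CI environment before matplotlib is imported.
os.environ.setdefault("MPLBACKEND", "agg")

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
fig.savefig("plot.png")  # file output works without any display server
plt.close(fig)
```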
Expand All @@ -36,7 +39,7 @@ jobs:
uses: actions/cache@v3
with:
path: |
~/.cache
~/.cache/pip
.tox/
key: ${{ runner.os }}-${{ matrix.python-version }}-ci-${{ github.job }}

@@ -75,9 +78,8 @@ jobs:
uses: actions/cache@v3
with:
path: |
~/.cache
~/.cache/pip
.tox/
~/Library/Caches/Homebrew
key: ${{ runner.os }}-${{ matrix.python-version }}-ci

- name: Install dependencies
@@ -145,7 +147,7 @@ jobs:
uses: actions/cache@v3
with:
path: |
~/.cache
~/.cache/pip
.tox/
key: ${{ runner.os }}-${{ matrix.python-version }}-ci-${{ github.job }}

@@ -189,7 +191,7 @@ jobs:
uses: actions/cache@v3
with:
path: |
~/.cache
~/.cache/pip
.tox/
key: ${{ runner.os }}-${{ matrix.python-version }}-ci-${{ github.job }}

@@ -239,7 +241,7 @@ jobs:
uses: actions/cache@v3
with:
path: |
~/.cache
~/.cache/pip
.tox/
key: ${{ runner.os }}-${{ matrix.python-version }}-ci-${{ github.job }}

@@ -275,7 +277,7 @@ jobs:
uses: actions/cache@v3
with:
path: |
~/.cache
~/.cache/pip
.tox/
key: ${{ runner.os }}-${{ matrix.python-version }}-ci-${{ github.job }}

@@ -311,7 +313,7 @@ jobs:
uses: actions/cache@v3
with:
path: |
~/.cache
~/.cache/pip
.tox/
key: ${{ runner.os }}-${{ matrix.python-version }}-ci-${{ github.job }}

@@ -347,7 +349,7 @@ jobs:
uses: actions/cache@v3
with:
path: |
~/.cache
~/.cache/pip
.tox/
key: ${{ runner.os }}-${{ matrix.python-version }}-ci-${{ github.job }}

@@ -379,7 +381,7 @@ jobs:
uses: actions/cache@v3
with:
path: |
~/.cache
~/.cache/pip
.tox/
key: ${{ runner.os }}-${{ matrix.python-version }}-ci-${{ github.job }}

@@ -412,7 +414,7 @@ jobs:
uses: actions/cache@v3
with:
path: |
~/.cache
~/.cache/pip
.tox/
key: ${{ runner.os }}-${{ matrix.python-version }}-ci-${{ github.job }}

@@ -442,7 +444,7 @@ jobs:
uses: actions/cache@v3
with:
path: |
~/.cache
~/.cache/pip
.tox/
key: ${{ runner.os }}-${{ matrix.python-version }}-ci-${{ github.job }}

31 changes: 31 additions & 0 deletions .github/workflows/clear-cache.yml
@@ -0,0 +1,31 @@
# Save cache space by deleting cache entries from a PR after it was merged.
# https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#force-deleting-cache-entries
name: Delete after PR merge
on:
pull_request:
types:
- closed

jobs:
cleanup:
runs-on: ubuntu-latest
steps:
- name: Cleanup
run: |
gh extension install actions/gh-actions-cache
echo "Fetching list of cache key"
cacheKeysForPR=$(gh actions-cache list -R $REPO -B $BRANCH -L 100 | cut -f 1 )
## Don't fail the workflow if a cache key can't be deleted.
set +e
echo "Deleting caches..."
for cacheKey in $cacheKeysForPR
do
gh actions-cache delete $cacheKey -R $REPO -B $BRANCH --confirm
done
echo "Done"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
REPO: ${{ github.repository }}
BRANCH: refs/pull/${{ github.event.pull_request.number }}/merge
4 changes: 2 additions & 2 deletions .github/workflows/install_deps.sh
@@ -10,7 +10,7 @@ pip install wheel setuptools
pip install tox

# Update package lists
if [ "$(uname)" == "Darwin" ]; then
if [ "$(uname)" = "Darwin" ]; then
# MacOS
:
else
@@ -28,7 +28,7 @@ for par in "$@"; do

amici)
# for amici
if [ "$(uname)" == "Darwin" ]; then
if [ "$(uname)" = "Darwin" ]; then
brew install swig hdf5 libomp
else
sudo apt-get install \
44 changes: 41 additions & 3 deletions doc/example/hierarchical.ipynb
@@ -1,6 +1,7 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -57,6 +58,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -87,10 +89,22 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The PEtab observable table contains placeholders for scaling parameters $s$ (`observableParameter1_{pSTAT5A_rel,pSTAT5B_rel,rSTAT5A_rel}`), offsets $b$ (`observableParameter2_{pSTAT5A_rel,pSTAT5B_rel,rSTAT5A_rel}`), and noise parameters $\\sigma^2$ (`noiseParameter1_{pSTAT5A_rel,pSTAT5B_rel,rSTAT5A_rel}`) that are overridden by the `{observable,noise}Parameters` column in the measurement table. When using hierarchical optimization, the nine overriding parameters `{offset,scaling,sd}_{pSTAT5A_rel,pSTAT5B_rel,rSTAT5A_rel}` are to be estimated in the inner problem."
"To convert a non-hierarchical PEtab model to a hierarchical one, the observable_df, the measurement_df and the parameter_df have to be changed accordingly.\n",
"The PEtab **observable table contains** placeholders for scaling parameters $s$ (`observableParameter1_{pSTAT5A_rel,pSTAT5B_rel,rSTAT5A_rel}`), offsets $b$ (`observableParameter2_{pSTAT5A_rel,pSTAT5B_rel,rSTAT5A_rel}`), and noise parameters $\\sigma^2$ (`noiseParameter1_{pSTAT5A_rel,pSTAT5B_rel,rSTAT5A_rel}`) that are overridden by the `{observable,noise}Parameters` column in the **measurement table**.\n",
"\n",
"N.B.: in general, the inner parameters can appear in observable formulae directly. For example, the first observable formula in this table could be changed from `observableParameter2_pSTAT5A_rel + observableParameter1_pSTAT5A_rel * (100 * pApB + 200 * pApA * specC17) / (pApB + STAT5A * specC17 + 2 * pApA * specC17)` to `offset_pSTAT5A_rel + scaling_pSTAT5A_rel * (100 * pApB + 200 * pApA * specC17) / (pApB + STAT5A * specC17 + 2 * pApA * specC17)`."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Observable DF:"
]
},
{
@@ -197,14 +211,35 @@
"from pandas import option_context\n",
"\n",
"with option_context('display.max_colwidth', 400):\n",
" display(petab_problem.observable_df)"
" display(petab_problem_hierarchical.observable_df)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Measurement DF:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from pandas import option_context\n",
"\n",
"with option_context('display.max_colwidth', 400):\n",
" display(petab_problem_hierarchical.measurement_df)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Parameters to be optimized in the inner problem are selected via the PEtab parameter table by setting a value in the non-standard column `parameterType` (`offset` for offset parameters, `scaling` for scaling parameters, and `sigma` for sigma parameters):"
"Parameters to be optimized in the inner problem are specified via the PEtab parameter table by setting a value in the non-standard column `parameterType` (`offset` for offset parameters, `scaling` for scaling parameters, and `sigma` for sigma parameters). When using hierarchical optimization, the nine overriding parameters {offset,scaling,sd}_{pSTAT5A_rel,pSTAT5B_rel,rSTAT5A_rel} are to estimated in the inner problem."
]
},
{
@@ -529,6 +564,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -688,6 +724,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -788,6 +825,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
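The conversion the notebook describes can also be scripted. Below is a minimal, hypothetical sketch (the YAML file name is an assumption, not part of this diff; `petab.Problem.from_yaml` and the pandas operations are standard) of tagging the overriding parameters via the non-standard `parameterType` column:

```python
import petab

# Hypothetical input: the non-hierarchical Boehm problem from the notebook.
petab_problem = petab.Problem.from_yaml("Boehm_JProteomeRes2014.yaml")

# Tag the nine overriding parameters as inner parameters, as described above.
for prefix, parameter_type in [
    ("offset_", "offset"),
    ("scaling_", "scaling"),
    ("sd_", "sigma"),
]:
    mask = petab_problem.parameter_df.index.str.startswith(prefix)
    petab_problem.parameter_df.loc[mask, "parameterType"] = parameter_type
```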
6 changes: 5 additions & 1 deletion pypesto/hierarchical/calculator.py
@@ -123,6 +123,7 @@ def __call__(
inner_result[HESS] = np.full(
shape=(dim, dim), fill_value=np.nan
)
inner_result[INNER_PARAMETERS] = None
return inner_result

inner_parameters = self.inner_solver.solve(
@@ -155,7 +156,10 @@ def __call__(
parameter_mapping=parameter_mapping,
fim_for_hess=fim_for_hess,
)
result[INNER_PARAMETERS] = inner_parameters
# Return inner parameters in order of inner problem x_ids
result[INNER_PARAMETERS] = np.array(
[inner_parameters[x_id] for x_id in self.inner_problem.get_x_ids()]
)
result[INNER_RDATAS] = inner_rdatas

return result
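The change above replaces the raw solver dictionary with a numpy array whose order is fixed by the inner problem's parameter ids. A standalone illustration with made-up values:

```python
import numpy as np

# Made-up values: the inner solver returns a dict, but consumers need a
# deterministic order, so the result follows the inner problem's x_ids.
inner_parameters = {"sd_A": 0.1, "offset_A": 2.0, "scaling_A": 0.5}
x_ids = ["offset_A", "scaling_A", "sd_A"]  # order defined by the inner problem

result = np.array([inner_parameters[x_id] for x_id in x_ids])
print(result)  # [2.  0.5 0.1]
```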
10 changes: 7 additions & 3 deletions pypesto/hierarchical/inner_calculator_collector.py
@@ -293,7 +293,7 @@ def __call__(
sensi_orders, mode, dim
)
all_inner_pars = {}
interpretable_inner_pars = {}
interpretable_inner_pars = []

# set order in solver
sensi_order = 0
@@ -385,7 +385,7 @@ def __call__(

all_inner_pars.update(inner_result[X_INNER_OPT])
if INNER_PARAMETERS in inner_result:
interpretable_inner_pars.update(inner_result[INNER_PARAMETERS])
interpretable_inner_pars.extend(inner_result[INNER_PARAMETERS])

# add result for quantitative data
if self.quantitative_data_mask is not None:
@@ -418,7 +418,11 @@ def __call__(
# only if the objective value improved.
if ret[FVAL] < self.best_fval:
ret[X_INNER_OPT] = all_inner_pars
ret[INNER_PARAMETERS] = interpretable_inner_pars
ret[INNER_PARAMETERS] = (
interpretable_inner_pars
if len(interpretable_inner_pars) > 0
else None
)
self.best_fval = ret[FVAL]

return filter_return_dict(ret)
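Condensed, the collector now accumulates inner parameters from each per-condition result into a flat list and stores `None` when nothing was estimated. A self-contained sketch of the pattern (the surrounding names and dummy data are assumptions):

```python
INNER_PARAMETERS = "inner_parameters"  # stand-in for the pypesto constant

# Dummy per-calculator results; in pypesto these come from the inner solvers.
inner_results = [{INNER_PARAMETERS: [0.5, 2.0]}, {}]
ret: dict = {}

interpretable_inner_pars: list[float] = []
for inner_result in inner_results:
    if INNER_PARAMETERS in inner_result:
        interpretable_inner_pars.extend(inner_result[INNER_PARAMETERS])

# An empty list signals "no inner parameters"; downstream code checks for None.
ret[INNER_PARAMETERS] = (
    interpretable_inner_pars if interpretable_inner_pars else None
)
```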
6 changes: 6 additions & 0 deletions pypesto/hierarchical/problem.py
@@ -117,6 +117,12 @@ def is_empty(self) -> bool:
"""
return len(self.xs) == 0

def get_bounds(self) -> Tuple[List[float], List[float]]:
"""Get bounds of inner parameters."""
lb = [x.lb for x in self.xs.values()]
ub = [x.ub for x in self.xs.values()]
return lb, ub


class AmiciInnerProblem(InnerProblem):
"""
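A possible consumer of the new `get_bounds` helper; a usage sketch that assumes an `inner_problem` instance is at hand:

```python
# Usage sketch (assumption: `inner_problem` is an InnerProblem from
# pypesto.hierarchical with at least one inner parameter).
lb, ub = inner_problem.get_bounds()

# Pair the bounds up, e.g. for scipy.optimize.minimize(..., bounds=...).
bounds = list(zip(lb, ub))
```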
2 changes: 1 addition & 1 deletion pypesto/hierarchical/solver.py
@@ -338,7 +338,7 @@ def fun(x):
)
):
raise RuntimeError(
f"An optimal inner parameter is on the defualt dummy bound of numerical optimization. "
f"An optimal inner parameter is on the default dummy bound of numerical optimization. "
f"This means the optimal inner parameter is either extremely large (>={self.dummy_ub})"
f"or extremely small (<={self.dummy_lb}). Consider changing the inner parameter bounds."
)
Expand Down
2 changes: 1 addition & 1 deletion pypesto/hierarchical/spline_approximation/calculator.py
@@ -212,7 +212,7 @@ def __call__(

inner_result[
INNER_PARAMETERS
] = self.inner_problem.get_inner_noise_parameter_dictionary()
] = self.inner_problem.get_inner_noise_parameters()

# Calculate analytical gradients if requested
if sensi_order > 0:
11 changes: 5 additions & 6 deletions pypesto/hierarchical/spline_approximation/problem.py
@@ -184,12 +184,11 @@ def get_inner_parameter_dictionary(self) -> Dict:
inner_par_dict[x_id] = x.value
return inner_par_dict

def get_inner_noise_parameter_dictionary(self) -> Dict:
"""Get a dictionary with all noise inner parameter ids and their values."""
inner_par_dict = {}
for x in self.get_xs_for_type(InnerParameterType.SIGMA):
inner_par_dict[x.inner_parameter_id] = x.value
return inner_par_dict
def get_inner_noise_parameters(self) -> list[float]:
"""Get a list with all noise parameter values."""
return [
x.value for x in self.get_xs_for_type(InnerParameterType.SIGMA)
]

def get_measurements_for_group(self, gr) -> np.ndarray:
"""Get measurements for a group."""
4 changes: 2 additions & 2 deletions pypesto/objective/amici/amici.py
@@ -215,7 +215,7 @@ def __init__(
self.custom_timepoints = None

# Initialize storage for inner parameters.
self.inner_parameters: Dict[str, float] = {}
self.inner_parameters: list[float] = None

def get_config(self) -> dict:
"""Return basic information of the objective configuration."""
@@ -456,7 +456,7 @@ def call_unprocessed(

nllh = ret[FVAL]
rdatas = ret[RDATAS]
if INNER_PARAMETERS in ret and ret[INNER_PARAMETERS]:
if ret.get(INNER_PARAMETERS, None) is not None:
self.inner_parameters = ret[INNER_PARAMETERS]

# check whether we should update data for preequilibration guesses
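The new `ret.get(INNER_PARAMETERS, None) is not None` guard is presumably needed because inner parameters may now arrive as a numpy array (see the calculator change above), and truth-testing a multi-element array raises. A standalone illustration:

```python
import numpy as np

ret = {"inner_parameters": np.array([1.0, 2.0])}

# `if ret["inner_parameters"]:` would raise:
#   ValueError: The truth value of an array with more than one element is
#   ambiguous. Use a.any() or a.all()

# The explicit None check is safe for arrays and for a missing key alike:
if ret.get("inner_parameters") is not None:
    print("inner parameters available")
```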