This repository has been archived by the owner on Oct 11, 2024. It is now read-only.

robertgshaw2-neuralmagic triggered nightly on refs/heads/expand-lm-eval-testing #21
Workflow file for this run

name: nm nightly
run-name: ${{ github.actor }} triggered nightly on ${{ github.ref }}
on:
  schedule:
    # * is a special character in YAML so you have to quote this string
    - cron: '0 1 * * 1-6' # nightly run (Mon-Sat)
  workflow_dispatch:
    inputs:
      push_benchmark_results_to_gh_pages:
        description: "When set to true, the workflow pushes all benchmarking results to gh-pages UI "
        type: choice
        options:
          - 'true'
          - 'false'
        default: 'false'

jobs:
  PYTHON-3-8:
    uses: ./.github/workflows/nm-build-test.yml
    with:
      wf_category: NIGHTLY
      python: 3.8.17
      gitref: ${{ github.ref }}
      test_label_solo: gcp-k8s-l4-solo
      test_label_multi: ignore
      test_timeout: 480
      test_skip_env_vars: neuralmagic/tests/test_skip_env_vars/full.txt
      benchmark_label: gcp-k8s-l4-solo
      benchmark_config_list_file: ./.github/data/nm_benchmark_remote_push_configs_list.txt
      benchmark_timeout: 720
      push_benchmark_results_to_gh_pages: "${{ github.event_name == 'schedule' || inputs.push_benchmark_results_to_gh_pages }}"
      lm_eval_label_solo: gcp-k8s-l4-solo
      lm_eval_label_mulit: gcp-k8s-l4-duo

Check failure on line 38 in .github/workflows/nm-nightly.yml

GitHub Actions / nm nightly

Invalid workflow file

The workflow is not valid. .github/workflows/nm-nightly.yml (Line: 38, Col: 34): Invalid input, lm_eval_label_mulit is not defined in the referenced workflow.

A sketch of the likely fix follows the full workflow file below.
      lm_eval_configuration: ./.github/lm-eval-configs/full-small-models.yaml
      lm_eval_timeout: 60
    secrets: inherit

  PYTHON-3-9:
    uses: ./.github/workflows/nm-build-test.yml
    with:
      wf_category: NIGHTLY
      python: 3.9.17
      gitref: ${{ github.ref }}
      test_label_solo: gcp-k8s-l4-solo
      test_label_multi: ignore
      test_timeout: 480
      test_skip_env_vars: neuralmagic/tests/test_skip_env_vars/full.txt
      benchmark_label: gcp-k8s-l4-solo
      benchmark_config_list_file: ./.github/data/nm_benchmark_remote_push_configs_list.txt
      benchmark_timeout: 720
      push_benchmark_results_to_gh_pages: "${{ github.event_name == 'schedule' || inputs.push_benchmark_results_to_gh_pages }}"
      lm_eval_label_solo: gcp-k8s-l4-solo
      lm_eval_label_mulit: gcp-k8s-l4-duo
      lm_eval_configuration: ./.github/lm-eval-configs/full-small-models.yaml
      lm_eval_timeout: 60
    secrets: inherit

  PYTHON-3-10:
    uses: ./.github/workflows/nm-build-test.yml
    with:
      wf_category: NIGHTLY
      python: 3.10.12
      gitref: ${{ github.ref }}
      test_label_solo: gcp-k8s-l4-solo
      test_label_multi: ignore
      test_timeout: 480
      test_skip_env_vars: neuralmagic/tests/test_skip_env_vars/full.txt
      benchmark_label: gcp-k8s-l4-solo
      benchmark_config_list_file: ./.github/data/nm_benchmark_remote_push_configs_list.txt
      benchmark_timeout: 720
      push_benchmark_results_to_gh_pages: "${{ github.event_name == 'schedule' || inputs.push_benchmark_results_to_gh_pages }}"
      lm_eval_label_solo: gcp-k8s-l4-solo
      lm_eval_label_mulit: gcp-k8s-l4-duo
      lm_eval_configuration: ./.github/lm-eval-configs/full-small-models.yaml
      lm_eval_timeout: 60
    secrets: inherit

  PYTHON-3-11:
    uses: ./.github/workflows/nm-build-test.yml
    with:
      wf_category: NIGHTLY
      python: 3.11.4
      gitref: ${{ github.ref }}
      test_label_solo: gcp-k8s-l4-solo
      test_label_multi: ignore
      test_timeout: 480
      test_skip_env_vars: neuralmagic/tests/test_skip_env_vars/full.txt
      benchmark_label: gcp-k8s-l4-solo
      benchmark_config_list_file: ./.github/data/nm_benchmark_remote_push_configs_list.txt
      benchmark_timeout: 720
      push_benchmark_results_to_gh_pages: "${{ github.event_name == 'schedule' || inputs.push_benchmark_results_to_gh_pages }}"
      lm_eval_label_solo: gcp-k8s-l4-solo
      lm_eval_label_mulit: gcp-k8s-l4-duo
      lm_eval_configuration: ./.github/lm-eval-configs/full-small-models.yaml
      lm_eval_timeout: 60
    secrets: inherit
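
The check failure above most likely comes from a misspelled input name: every job passes lm_eval_label_mulit, while the referenced reusable workflow nm-build-test.yml presumably declares the input as lm_eval_label_multi, mirroring test_label_multi. The sketch below is an assumption, not the actual contents of nm-build-test.yml (which is not shown in this run); both the input name and its declaration are inferred from the error message and the naming pattern of the other inputs.

# Hypothetical sketch only -- assumes the reusable workflow declares the
# multi-GPU lm-eval runner input as `lm_eval_label_multi`.

# .github/workflows/nm-build-test.yml (assumed workflow_call declaration)
on:
  workflow_call:
    inputs:
      lm_eval_label_multi:
        description: "runner label for multi-GPU lm-eval jobs (assumed name)"
        type: string
        required: false

# .github/workflows/nm-nightly.yml (corrected input name in each job's `with:` block)
    with:
      # ...other inputs unchanged...
      lm_eval_label_solo: gcp-k8s-l4-solo
      lm_eval_label_multi: gcp-k8s-l4-duo   # was: lm_eval_label_mulit

If the reusable workflow actually uses a different name for this input, only the caller side needs to change to match whatever workflow_call declares; inputs passed through `with:` must match the declared names exactly, which is why the run fails at workflow-file validation rather than at job execution.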