best-practices devops: add nightly_gpu_tests pipeline #823

Merged: 5 commits, Jun 12, 2019 (changes shown below are from 3 of the commits)
108 changes: 108 additions & 0 deletions tests/ci/nightly_gpu.yml
@@ -0,0 +1,108 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
Collaborator:
do you think you will have to change the name to avoid a conflict?

Collaborator (author):
The conflict is with the pipeline name, not the yml. When the PR is included in staging, the pipeline name will be changed to bp-nightly_gpu and the yml file will remain the same.

# Licensed under the MIT License.
#
variables:
  test: 'tests/ci/run_pytest.py'
  maxnodes : 4
  reponame : 'Recommenders'
  branch : 'azure-pipelines-bz'
Collaborator:
this should be in staging (or master, when it is computed in master), right?

Collaborator (author):
Currently, it's only a string that is tagged in an AML experiment for correlation. Eventually, adding code to use the actual pipeline would be useful.

I changed it to something more generic for now (one possible approach is sketched after the variables block below).

  rg : 'recommender'
  wsname : 'RecoWS'
  # GPU
  vmsize : 'STANDARD_NC6'
  dockerproc : 'gpu'
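A minimal sketch of a more generic option, not something this PR ships: Azure Pipelines exposes the predefined Build.SourceBranchName variable, so the hardcoded branch above could eventually be replaced with something like

variables:
  # tag the AML experiment with whatever branch the pipeline actually ran on
  branch : $(Build.SourceBranchName)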

trigger: none

pr:
Collaborator:
Nightly tests should be triggered every day, not when someone opens a PR (as opposed to unit tests).

Collaborator (author), @bethz, Jun 10, 2019:

The pr trigger has been removed from the yml. The nightly trigger will be set in the DevOps pipeline trigger settings after the PR is in staging (a YAML alternative is sketched after the pr block below).

- staging
- master
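If the nightly schedule were kept in YAML rather than configured in the DevOps UI, it could look roughly like the following; the cron expression and branch filter here are illustrative assumptions, not part of this PR:

schedules:
- cron: '0 0 * * *'          # once a day at 00:00 UTC
  displayName: 'Nightly GPU tests'
  branches:
    include:
    - staging
  always: true               # run even when there are no new commits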

jobs:
- job : Smoke
  displayName : 'Smoke: Nightly_gpu_test'
  pool:
    vmImage: 'ubuntu-16.04'

  # vars specific to this job
  variables:
  - group: AzureKeyVaultSecrets
  - name : 'testfolder'
    value : './tests/smoke'
  - name : 'testmarkers'
    value : '"smoke and not spark and gpu"'
  - name : 'junitxml'
    value : 'reports/test-smoke.xml'
  - name : 'clustername'
    value : 'reco-nightly-gpu'
  - name : 'expname'
    value : 'nightly_smoke_gpu'

  steps:

  - script: |
      az login --service-principal -u $(ClientID) -p $(ClientSecret) --tenant $(TenantID)
    displayName: 'Login to Azure'

  - template: env-setup.yml # template reference

  - script:
      python scripts/generate_conda_file.py --gpu --name reco
    displayName: ' generate_conda_file.py'

  - script:
      python tests/ci/submit_azureml_pytest.py --subid $(SubscriptionID) --testfolder $(testfolder) --testmarkers $(testmarkers) --clustername $(clustername) --expname $(expname) --dockerproc $(dockerproc) --junitxml $(junitxml) --reponame $(reponame) --branch $(branch)
    displayName: 'submit_azureml_pytest'

  - task: PublishTestResults@2
    displayName: 'Publish Test Results **/test-*.xml'
    inputs:
      testResultsFiles: '**/test-*.xml'
      failTaskOnFailedTests: true
    condition: succeededOrFailed()

- job : Integration
  dependsOn: Smoke
  condition: succeeded('Smoke')
  timeoutInMinutes: 90
  displayName : 'Integration: Nightly_gpu_test'

  pool:
    vmImage: 'ubuntu-16.04'

  # vars specific to this job
  variables:
  - group: AzureKeyVaultSecrets
  - name : 'testfolder'
    value : './tests/integration'
  - name : 'testmarkers'
    value : '"integration and not spark and gpu"'
  - name : 'junitxml'
    value : 'reports/test-integration.xml'
  - name : 'clustername'
    value : 'reco-nightly-gpu'
  - name : 'expname'
    value : 'nightly_integration_gpu'

  steps:

  - script: |
      az login --service-principal -u $(ClientID) -p $(ClientSecret) --tenant $(TenantID)
    displayName: 'Login to Azure'

  - template: env-setup.yml # template reference

  - script:
      python scripts/generate_conda_file.py --gpu --name reco
Collaborator:
There is no need to reinstall the libraries again; the smoke and integration tests in the pipeline that we have in DevOps use the same environment.

Collaborator (author):

Smoke and integration are currently set up as separate DevOps jobs, which means that if there is a bug in one of the integration tests, this setup makes it easy to rerun just the integration tests without needing to run smoke. Smoke tests take about 22 minutes, integration takes just under 1.5 hours, and the lib install is about 1.5 minutes.

I'm currently testing a version where I remove those steps from the integration job. I'll update the PR once it runs.

Collaborator (author):

The integration test and smoke test are separate jobs in the same pipeline. The smoke job must run successfully before the integration job will run, but the integration job does not share setup with the smoke job, since they are separate jobs. I removed the installs from the integration job and ran it after the smoke job: the integration test failed when env-setup.yml was not run (I tested this once), and the az login was also required. (A sketch of factoring the shared setup steps into a template is at the end of this thread.)

    displayName: ' generate_conda_file.py'

  - script:
      python tests/ci/submit_azureml_pytest.py --subid $(SubscriptionID) --testfolder $(testfolder) --testmarkers $(testmarkers) --clustername $(clustername) --expname $(expname) --dockerproc $(dockerproc) --junitxml $(junitxml) --reponame $(reponame) --branch $(branch)
    displayName: 'submit_azureml_pytest'

  - task: PublishTestResults@2
    displayName: 'Publish Test Results **/test-*.xml'
    inputs:
      testResultsFiles: '**/test-*.xml'
      failTaskOnFailedTests: true
    condition: succeededOrFailed()
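One way the duplicated setup discussed above could eventually be shared is a parameterized steps template; the file name azureml-test-steps.yml and its parameters below are illustrative assumptions, not something this PR adds:

# azureml-test-steps.yml (hypothetical steps template)
parameters:
  testfolder: ''
  testmarkers: ''
  expname: ''
  junitxml: ''

steps:
- script: |
    az login --service-principal -u $(ClientID) -p $(ClientSecret) --tenant $(TenantID)
  displayName: 'Login to Azure'
- template: env-setup.yml
- script:
    python scripts/generate_conda_file.py --gpu --name reco
  displayName: 'generate_conda_file.py'
- script:
    python tests/ci/submit_azureml_pytest.py --subid $(SubscriptionID) --testfolder ${{ parameters.testfolder }} --testmarkers ${{ parameters.testmarkers }} --clustername $(clustername) --expname ${{ parameters.expname }} --dockerproc $(dockerproc) --junitxml ${{ parameters.junitxml }} --reponame $(reponame) --branch $(branch)
  displayName: 'submit_azureml_pytest'

Each job would then keep its own variables, replace the duplicated steps with a single template reference plus parameters, and still publish results with PublishTestResults@2 as above.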