
Add fp16 flag to test runner to check models quantized to fp16 #2182

Merged — 3 commits merged into develop from test-runner-fp16 on Sep 15, 2023

Conversation

@pfultz2 (Collaborator) commented Sep 13, 2023

Also, added flags to adjust the tolerance.
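The PR description mentions an fp16 flag plus flags to adjust the comparison tolerance. As a rough illustration of what such a test-runner interface can look like, here is a minimal sketch; the argument names (`--fp16`, `--atol`, `--rtol`) and the helper `outputs_match` are hypothetical and are not the actual MIGraphX test-runner API:

```python
# Hypothetical sketch of an fp16 flag and tolerance flags for a model
# accuracy test runner. Flag names and defaults are illustrative only.
import argparse
import math


def make_parser():
    parser = argparse.ArgumentParser(
        description="model accuracy test runner (sketch)")
    parser.add_argument("model", help="path to the model to verify")
    parser.add_argument("--fp16", action="store_true",
                        help="quantize the model to fp16 before verification")
    # fp16 runs typically need looser tolerances than fp32, since half
    # precision carries only about 3 decimal digits of precision.
    parser.add_argument("--atol", type=float, default=1e-3,
                        help="absolute tolerance for output comparison")
    parser.add_argument("--rtol", type=float, default=1e-3,
                        help="relative tolerance for output comparison")
    return parser


def outputs_match(reference, candidate, atol, rtol):
    """Elementwise comparison of candidate outputs against a reference."""
    return all(
        math.isclose(c, r, rel_tol=rtol, abs_tol=atol)
        for r, c in zip(reference, candidate)
    )
```

The key design point is that the tolerances are parameters rather than hard-coded constants, so the same runner can verify both fp32 and fp16-quantized models by passing looser bounds for the latter.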

@pfultz2 pfultz2 requested a review from umangyadav September 13, 2023 18:56
@codecov bot commented Sep 13, 2023

Codecov Report

Merging #2182 (e9a9af6) into develop (752f13c) will not change coverage.
The report is 1 commit behind head on develop.
The diff coverage is n/a.

❗ Current head e9a9af6 differs from the pull request's most recent head 702f12f. Consider uploading reports for commit 702f12f to get more accurate results.

@@           Coverage Diff            @@
##           develop    #2182   +/-   ##
========================================
  Coverage    91.48%   91.48%           
========================================
  Files          426      426           
  Lines        15927    15927           
========================================
  Hits         14571    14571           
  Misses        1356     1356           

@pfultz2 pfultz2 changed the title Add fp16 flag to test runenr to check models quantized to fp16 Add fp16 flag to test runner to check models quantized to fp16 Sep 13, 2023
@TedThemistokleous TedThemistokleous added the Continous Integration Pull request updates parts of continous integration pipeline label Sep 13, 2023
@TedThemistokleous (Collaborator)

Is this linked to a ticket in our project board?

@migraphx-bot (Collaborator)

Test | Batch | Rate new (e24f19) | Rate old (28fad2) | Diff
torchvision-resnet50 64 2,281.15 2,282.65 -0.07%
torchvision-resnet50_fp16 64 5,337.43 5,356.46 -0.36%
torchvision-densenet121 32 1,823.03 1,829.72 -0.37%
torchvision-densenet121_fp16 32 3,392.16 3,385.91 0.18%
torchvision-inceptionv3 32 1,341.89 1,342.12 -0.02%
torchvision-inceptionv3_fp16 32 2,591.37 2,584.17 0.28%
cadene-inceptionv4 16 678.92 680.65 -0.25%
cadene-resnext64x4 16 589.85 591.36 -0.25%
slim-mobilenet 64 7,218.86 7,223.75 -0.07%
slim-nasnetalarge 64 236.77 237.11 -0.14%
slim-resnet50v2 64 2,525.54 2,527.89 -0.09%
bert-mrpc-onnx 8 720.90 721.16 -0.04%
bert-mrpc-tf 1 390.88 389.70 0.30%
pytorch-examples-wlang-gru 1 302.51 301.30 0.40%
pytorch-examples-wlang-lstm 1 307.49 315.88 -2.66%
torchvision-resnet50_1 1 556.33 555.32 0.18%
torchvision-inceptionv3_1 1 306.90 308.88 -0.64%
cadene-dpn92_1 1 350.19 352.69 -0.71%
cadene-resnext101_1 1 220.51 220.74 -0.10%
slim-vgg16_1 1 224.26 224.38 -0.05%
slim-mobilenet_1 1 1,462.52 1,495.73 -2.22%
slim-inceptionv4_1 1 220.69 220.61 0.04%
onnx-taau-downsample 1 323.08 322.60 0.15%
dlrm-criteoterabyte 1 21.67 21.68 -0.05%
dlrm-criteoterabyte_fp16 1 40.61 40.45 0.38%
agentmodel 1 5,830.50 5,812.23 0.31%
unet_fp16 2 55.06 55.07 -0.01%

This build is OK for merge ✅

@migraphx-bot (Collaborator)

✅ bert-mrpc-onnx: PASSED: MIGraphX meets tolerance
✅ bert-mrpc-tf: PASSED: MIGraphX meets tolerance
✅ pytorch-examples-wlang-gru: PASSED: MIGraphX meets tolerance
✅ pytorch-examples-wlang-lstm: PASSED: MIGraphX meets tolerance
✅ torchvision-resnet50_1: PASSED: MIGraphX meets tolerance
🔴 torchvision-inceptionv3_1: FAILED: MIGraphX is not within tolerance - check verbose output
🔴 cadene-dpn92_1: FAILED: MIGraphX is not within tolerance - check verbose output
✅ cadene-resnext101_1: PASSED: MIGraphX meets tolerance
✅ slim-vgg16_1: PASSED: MIGraphX meets tolerance
✅ slim-mobilenet_1: PASSED: MIGraphX meets tolerance
🔴 slim-inceptionv4_1: FAILED: MIGraphX is not within tolerance - check verbose output
✅ dlrm-criteoterabyte: PASSED: MIGraphX meets tolerance
✅ agentmodel: PASSED: MIGraphX meets tolerance
✅ unet: PASSED: MIGraphX meets tolerance

@causten causten merged commit 74ba964 into develop Sep 15, 2023
@causten causten deleted the test-runner-fp16 branch September 15, 2023 13:10
Labels
Continous Integration Pull request updates parts of continous integration pipeline high priority A PR with high priority for review and merging.
Development

Successfully merging this pull request may close these issues.

When running verification of the resnet50 model with the fp16 quantization option, the verification fails
5 participants