
[IE CLDNN] Fallback to FP16 for non-quantized layers in quantized FP16+INT8 IR #941

Conversation

vladimir-paramuzov (Contributor)

No description provided.
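Since the author provided no description, here is a minimal, hypothetical C++ sketch of the idea the title refers to. This is not the clDNN plugin's actual code; the `Layer` struct, the `quantized` flag, and `assign_precisions` are illustrative assumptions. The point it shows: in a quantized FP16+INT8 IR, layers inside quantized regions run in INT8, while the remaining layers fall back to the IR's FP16 precision instead of being promoted to FP32.

```cpp
// Hypothetical illustration only, not the actual clDNN plugin code.
// Idea: in a mixed FP16+INT8 IR, layers that belong to a quantized region
// keep INT8, and every other layer falls back to the IR's floating-point
// precision (FP16) instead of being promoted to FP32.
#include <iostream>
#include <string>
#include <vector>

enum class Precision { FP32, FP16, INT8 };

struct Layer {
    std::string name;
    bool quantized;                        // inside a quantized region?
    Precision precision = Precision::FP32; // precision chosen for execution
};

// Assign execution precision: INT8 for quantized layers, otherwise the
// IR's floating-point precision (FP16 for an FP16+INT8 IR).
void assign_precisions(std::vector<Layer>& layers, Precision fp_precision) {
    for (auto& layer : layers) {
        layer.precision = layer.quantized ? Precision::INT8 : fp_precision;
    }
}

const char* to_string(Precision p) {
    switch (p) {
        case Precision::FP32: return "FP32";
        case Precision::FP16: return "FP16";
        case Precision::INT8: return "INT8";
    }
    return "unknown";
}

int main() {
    std::vector<Layer> layers = {
        {"conv1", true}, {"softmax", false}, {"conv2", true}, {"detection_out", false}};
    assign_precisions(layers, Precision::FP16);   // FP16+INT8 IR
    for (const auto& l : layers) {
        std::cout << l.name << " -> " << to_string(l.precision) << "\n";
    }
    return 0;
}
```

In the real plugin the quantized/non-quantized split would presumably be derived from the IR's FakeQuantize subgraphs and the plugin's precision propagation; the sketch only illustrates the fallback rule itself.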

vladimir-paramuzov added the category: GPU (OpenVINO GPU plugin) label on Jun 15, 2020
vladimir-paramuzov self-assigned this on Jun 15, 2020
vladimir-paramuzov added this to the 2021.1 milestone on Jun 16, 2020
vladimir-paramuzov force-pushed the private/vparamuz/fp32_fp16_quantized_models branch from 774c6c4 to 40d75eb on June 16, 2020
vladimir-paramuzov force-pushed the private/vparamuz/fp32_fp16_quantized_models branch 3 times, most recently from e04bff8 to 9eb46f5 on July 2, 2020
vladimir-paramuzov force-pushed the private/vparamuz/fp32_fp16_quantized_models branch from 9eb46f5 to 0f4294f on July 2, 2020
vladimir-paramuzov force-pushed the private/vparamuz/fp32_fp16_quantized_models branch from 0f4294f to 1930e06 on July 14, 2020
vladimir-paramuzov force-pushed the private/vparamuz/fp32_fp16_quantized_models branch 2 times, most recently from 662cc9b to aaee4ea on July 27, 2020
vladimir-paramuzov force-pushed the private/vparamuz/fp32_fp16_quantized_models branch 2 times, most recently from 25b7ea8 to 4e8be43 on August 11, 2020
vladimir-paramuzov force-pushed the private/vparamuz/fp32_fp16_quantized_models branch from 4e8be43 to 06e82de on August 18, 2020
vladimir-paramuzov force-pushed the private/vparamuz/fp32_fp16_quantized_models branch 3 times, most recently from f6f4486 to d4696ac on August 24, 2020
vladimir-paramuzov marked this pull request as ready for review on August 25, 2020
vladimir-paramuzov requested review from a team as code owners on August 25, 2020
vladimir-paramuzov force-pushed the private/vparamuz/fp32_fp16_quantized_models branch from d4696ac to aa96de5 on August 26, 2020
vladimir-paramuzov force-pushed the private/vparamuz/fp32_fp16_quantized_models branch from aa96de5 to 4c85a1d on September 2, 2020
sshlyapn (Contributor) left a comment

LGTM

vladimir-paramuzov force-pushed the private/vparamuz/fp32_fp16_quantized_models branch from 4c85a1d to ddbba22 on September 3, 2020
vladimir-paramuzov merged commit b976782 into openvinotoolkit:master on September 3, 2020
vladimir-paramuzov deleted the private/vparamuz/fp32_fp16_quantized_models branch on September 3, 2020
Labels
category: GPU (OpenVINO GPU plugin)
Projects
None yet
3 participants