
Accuracy average #166

Merged
merged 23 commits into from
Apr 16, 2021
Conversation

arv-77
Contributor

@arv-77 arv-77 commented Apr 9, 2021

Before submitting

  • Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure to update the docs?
  • Did you write any new necessary tests?

What does this PR do?

Fixes #99
Closes #155
I have added the average argument with all other required arguments. For subset_accuracy I have retained some sections of the previous code.
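For context, here is a minimal pure-Python sketch of what an `average` argument typically controls for an accuracy metric. This illustrates micro vs. macro vs. per-class ("none") averaging only; it is not the torchmetrics implementation, and the function name and input shapes are assumptions:

```python
def accuracy(preds, target, num_classes, average="micro"):
    """Illustrative accuracy with 'micro', 'macro', or 'none' averaging.

    preds / target: sequences of integer class labels.
    """
    if average == "micro":
        # Global fraction of correct predictions across all samples.
        return sum(p == t for p, t in zip(preds, target)) / len(target)
    # Per-class accuracy: fraction correct among samples of each true class.
    per_class = []
    for c in range(num_classes):
        hits = [p == t for p, t in zip(preds, target) if t == c]
        per_class.append(sum(hits) / len(hits) if hits else 0.0)
    if average == "macro":
        # Unweighted mean over classes: rare classes count as much as common ones.
        return sum(per_class) / num_classes
    if average == "none":
        # Return the per-class scores unreduced.
        return per_class
    raise ValueError(f"unsupported average: {average!r}")

print(accuracy([0, 1, 1, 2], [0, 1, 2, 2], num_classes=3, average="micro"))  # 0.75
```

With the same inputs, `average="macro"` yields (1.0 + 1.0 + 0.5) / 3 ≈ 0.833, because the one misclassified sample belongs to class 2 and macro averaging weights each class equally.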

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

@pep8speaks

pep8speaks commented Apr 9, 2021

Hello @arvindmuralie77! Thanks for updating this PR.

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2021-04-16 13:28:34 UTC

@Borda
Member

Borda commented Apr 10, 2021

@arvindmuralie77 how is it going here, do you think we can squeeze it to 0.3.0?

@arv-77
Contributor Author

arv-77 commented Apr 10, 2021

do you think we can squeeze it to 0.3.0?

Dear @Borda, what exactly do you mean by this?

@arv-77 arv-77 closed this Apr 10, 2021
@Borda
Member

Borda commented Apr 10, 2021

@arvindmuralie77 I was just asking what your estimate is for finishing this PR :]

@Borda Borda reopened this Apr 10, 2021
@codecov

codecov bot commented Apr 11, 2021

Codecov Report

Merging #166 (7457f54) into master (953b621) will decrease coverage by 0.17%.
The diff coverage is 97.72%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #166      +/-   ##
==========================================
- Coverage   96.18%   96.01%   -0.18%     
==========================================
  Files         180       90      -90     
  Lines        5582     2859    -2723     
==========================================
- Hits         5369     2745    -2624     
+ Misses        213      114      -99     
Flag Coverage Δ
Linux 79.18% <68.18%> (-0.61%) ⬇️
Windows 79.18% <68.18%> (-0.61%) ⬇️
cpu 96.01% <97.72%> (-0.16%) ⬇️
gpu ?
macOS 96.01% <97.72%> (-0.16%) ⬇️
pytest 96.01% <97.72%> (-0.18%) ⬇️
python3.6 95.99% <97.72%> (-0.16%) ⬇️
python3.8 96.01% <97.72%> (-0.16%) ⬇️
python3.9 95.90% <97.72%> (-0.16%) ⬇️
torch1.3.1 94.94% <97.72%> (-0.13%) ⬇️
torch1.4.0 95.06% <97.72%> (-0.14%) ⬇️
torch1.8.1 95.90% <97.72%> (-0.16%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
torchmetrics/utilities/checks.py 92.43% <80.00%> (-0.35%) ⬇️
torchmetrics/functional/classification/accuracy.py 93.93% <97.95%> (-6.07%) ⬇️
torchmetrics/classification/accuracy.py 96.15% <100.00%> (-3.85%) ⬇️
...torchmetrics/functional/classification/accuracy.py
...etrics/functional/regression/mean_squared_error.py
.../1/s/torchmetrics/retrieval/retrieval_precision.py
...s/torchmetrics/classification/average_precision.py
...trics/functional/regression/mean_absolute_error.py
...rics/functional/classification/confusion_matrix.py
... and 84 more

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 953b621...7457f54. Read the comment docs.

@arv-77
Contributor Author

arv-77 commented Apr 11, 2021

Dear @Borda, I didn't know there was a deadline for 0.3.0. Everything is complete: if the default value of mdmc_average is set to "global", the code passes all previously existing test cases. Now only new test cases need to be created, since new parameters have been added to accuracy.
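To make the distinction concrete, here is a small pure-Python sketch of the two reduction modes an `mdmc_average` parameter of "global" vs. "samplewise" typically implies for multi-dimensional multi-class inputs. This is an illustration under assumed names and shapes, not the torchmetrics implementation:

```python
def mdmc_accuracy(preds, target, mdmc_average="global"):
    """Illustrative multi-dim multi-class accuracy.

    preds / target: lists of per-sample label sequences,
    e.g. one list of pixel labels per image.
    """
    if mdmc_average == "global":
        # Flatten the extra dimension: every element counts as one sample.
        flat_p = [p for sample in preds for p in sample]
        flat_t = [t for sample in target for t in sample]
        return sum(p == t for p, t in zip(flat_p, flat_t)) / len(flat_t)
    if mdmc_average == "samplewise":
        # Compute accuracy within each sample, then average over samples.
        per_sample = [
            sum(p == t for p, t in zip(ps, ts)) / len(ts)
            for ps, ts in zip(preds, target)
        ]
        return sum(per_sample) / len(per_sample)
    raise ValueError(f"unsupported mdmc_average: {mdmc_average!r}")

preds = [[0, 1], [1, 1, 1]]
target = [[0, 0], [1, 1, 0]]
print(mdmc_accuracy(preds, target, "global"))      # 0.6
print(mdmc_accuracy(preds, target, "samplewise"))
```

The two modes disagree whenever samples differ in size or difficulty: "global" weights every element equally, while "samplewise" weights every sample equally.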

@Borda
Member

Borda commented Apr 11, 2021

I didn't know there was a deadline for 0.3.0

there is no deadline, I just thought it would be nice to have it there, so I was just asking how you feel about it :]

arv-77 added 2 commits April 13, 2021 02:48
…o the accuracy class, created a new function in utilities for squeezing tensors
@arv-77
Contributor Author

arv-77 commented Apr 12, 2021

Hey @Borda, I have added some new test cases for the updated class. I was having some issues understanding the multi-dimension multilabel case. Apart from that, I think I have covered all other cases. Should I convert this draft to a PR so that you can review it?

@Borda Borda requested a review from ananyahjha93 April 13, 2021 17:36
@Borda Borda marked this pull request as ready for review April 13, 2021 17:36
@Borda Borda self-assigned this Apr 13, 2021
Member

@SkafteNicki SkafteNicki left a comment


smaller comments

tests/classification/test_accuracy.py (outdated, resolved)
torchmetrics/classification/accuracy.py (resolved)
torchmetrics/classification/accuracy.py (resolved)
@arv-77
Contributor Author

arv-77 commented Apr 14, 2021

@SkafteNicki, I have corrected the mentioned issues.

@SkafteNicki SkafteNicki added enhancement New feature or request ready labels Apr 15, 2021
@arv-77
Contributor Author

arv-77 commented Apr 15, 2021

Done @Borda :]

Member

@Borda Borda left a comment


a few more tests please and we are ready to land :]

torchmetrics/classification/accuracy.py (resolved)
torchmetrics/classification/accuracy.py (resolved)
torchmetrics/classification/accuracy.py (resolved)
torchmetrics/classification/accuracy.py (resolved)
@arv-77
Contributor Author

arv-77 commented Apr 16, 2021

I have added more test cases as mentioned.

@Borda Borda merged commit a31d619 into Lightning-AI:master Apr 16, 2021
Labels
enhancement New feature or request ready
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Allow Accuracy to return metric per class
4 participants