Document default trainer metrics #1914
Conversation
Looks fantastic, maybe @robmarkcole can review?
Should we do this for all of our LightningModules? Basically just document the default metrics we use.
I feel this is a particularly nuanced issue so have expanded slightly:
@jdilger do you think you can produce a similar level of detailed descriptions for the accuracy metrics used in other trainers? If not, maybe @robmarkcole and I can help.
Thanks @robmarkcole, the more nuanced version looks great. @adamjstewart Sure thing. I can take a first pass at it.
We're likely going to release v0.5.2 tomorrow or Saturday, so if you can add these today that would be great. If not, we may merge this anyway and add more details for more trainers in a follow-up PR. Thanks again for your help!
LGTM. Only a minor suggestion for consistency: "Sets up segmentation accuracy metrics" >> "Initialize the metrics".
I have time to work on this today, but I went ahead and pushed the suggested change from robmarkcole. I noticed that the metrics for the classification and regression trainers are set up with dicts while segmentation and detection use lists. I'm not sure it needs to be addressed now, but defining some standard names might be helpful. For instance, in the segmentation trainer it's using
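As a hedged sketch of the naming inconsistency mentioned above (a minimal stand-in written in plain Python, not the actual torchmetrics `MetricCollection` implementation), the dict form lets the caller choose the metric names, while the list form falls back to class names:

```python
# Hypothetical stand-in illustrating why dict-based metric setup yields
# explicit names while list-based setup falls back to class names.
# Class and function names here are illustrative, not torchgeo's API.
class Metric:
    pass

class OverallAccuracy(Metric):
    pass

class JaccardIndex(Metric):
    pass

def collection_keys(metrics):
    if isinstance(metrics, dict):
        # Dict form: the caller controls the logged metric names.
        return list(metrics)
    # List form: names default to the metric class names.
    return [m.__class__.__name__ for m in metrics]

print(collection_keys([OverallAccuracy(), JaccardIndex()]))
# ['OverallAccuracy', 'JaccardIndex']
print(collection_keys({"OverallAccuracy": OverallAccuracy()}))
# ['OverallAccuracy']
```

Standardizing on one of these forms across trainers would make the logged metric names consistent.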
There are some formatting issues with lists that are causing the documentation to fail to build. We should be using the https://sublime-and-sphinx-guide.readthedocs.io/en/latest/lists.html#unordered-lists format and indenting lines that stretch across multiple lines. Also, we could be using a note: https://sublime-and-sphinx-guide.readthedocs.io/en/latest/notes_warnings.html.
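A hedged sketch of the rST conventions described above, as they might appear in a trainer docstring (the function name and metric descriptions are illustrative, not the actual torchgeo code):

```python
# Illustrative docstring showing an rST unordered list with indented
# continuation lines, plus a ".. note::" directive, as discussed above.
def configure_metrics():
    """Initialize the metrics.

    * Overall Accuracy (OA): ratio of correctly classified pixels to the
      total number of pixels (continuation lines are indented to align
      with the item text).
    * Jaccard Index (IoU): intersection over union of the predicted and
      ground-truth masks.

    .. note::
       'Macro' averaging gives equal weight to every class, which matters
       for imbalanced datasets.
    """

print(".. note::" in configure_metrics.__doc__)  # True
```

Misaligned continuation lines are a common cause of Sphinx build failures, since rST treats indentation as structure.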
Also I'm happy to handle the formatting stuff myself if you want to focus on adding something to the other trainers. For better or for worse, I know most of the quirks of Sphinx/rST now. |
@adamjstewart I think I figured it out, but do double-check 😄 . If there are still issues I'm happy to let you take it over. I don't have too much experience with Sphinx. Side note: In |
Hmm, quotes shouldn't be required for pip. What error message did you get? What shell are you using, bash? |
Ah, you're 100% right. I'm using zsh which gave the error |
Formatting looks good now. If we can do this for all other trainers by tomorrow, great. If not, we'll still merge and add more details after the release.
Added a first pass for the other configuration metrics. The formatting should be correct but please double-check the content :).
It looks like zsh uses square brackets for pattern matching. Feel free to open a PR to add quotes around all use of square brackets in Installation and Contributing.
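A hedged sketch of the quoting fix (the extra name `datasets` is illustrative): in zsh, unquoted square brackets trigger filename globbing, so `pip install torchgeo[datasets]` fails with `no matches found: torchgeo[datasets]` before pip even runs. Quoting passes the argument through literally in both bash and zsh:

```shell
# Quoted form is safe in both bash and zsh; unquoted brackets are
# treated as a glob pattern by zsh.
pip_arg='torchgeo[datasets]'
printf '%s\n' "$pip_arg"
# torchgeo[datasets]
```

The same applies to any `pip install 'pkg[extra]'` invocation in the docs.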
@robmarkcole last chance to review
For the classification metrics, are they computed per pixel or per image?
Classification is per-image. I'm not actually sure if micro and macro are different in that context.
I think we expect them to be different for imbalanced datasets. The metrics are computed not at the pixel level but at the image level; the terms total true positives, false negatives, and false positives should be used for classification.
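As a hedged illustration of the point above (plain Python, not the torchmetrics implementation), micro and macro accuracy do diverge on an imbalanced per-image classification problem: micro pools all samples, while macro averages per-class accuracy, so a missed rare class drags macro down much more:

```python
# Toy imbalanced per-image classification: class 1 is rare, and the
# model misses one of its two instances. Labels are illustrative.
preds  = [0, 0, 0, 0, 0, 0, 0, 0, 1, 0]
target = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]

# Micro: pool every sample, then divide correct by total.
micro = sum(p == t for p, t in zip(preds, target)) / len(target)

# Macro: per-class accuracy (recall), averaged over classes.
def class_acc(c):
    idx = [i for i, t in enumerate(target) if t == c]
    return sum(preds[i] == target[i] for i in idx) / len(idx)

macro = (class_acc(0) + class_acc(1)) / 2

print(micro)  # 0.9  -- dominated by the majority class
print(macro)  # 0.75 -- penalized by the missed rare class
```

On a perfectly balanced dataset the two coincide, which is why the distinction is easy to overlook until an imbalanced benchmark comes along.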
Force-pushed from 96e62e2 to 11cc51e (Co-authored-by: Adam J. Stewart <[email protected]>)
Want to squeeze this into the next release, but feel free to open another PR to update these docs or change the default metrics we use after the release!
* Expand metrics documentation
* typo
* Update documentation following robmarkcole suggestion
* Update summary to be consistent with other trainers
* Move to unordered lists, fix indentation, use note
* Update configure metrics for other trainers
* typo
* Update torchgeo/trainers/classification.py (Co-authored-by: Adam J. Stewart <[email protected]>)
* Add detail on wanted values, reword macro note
* Remove redundant paragraph
* Add acronyms, clarify regression metrics

Co-authored-by: Adam J. Stewart <[email protected]>
Expanded description of averaging reduction method for Segmentation metrics. Let me know if you'd like me to change anything, or if I should add the details on macro accuracy.
Closes #1874