Model Verification in Trainer #1237
Comments
Hi! Thanks for your contribution, great first issue!
I like the idea. Btw, your second bullet point can already be done in PL with `fast_dev_run`.
That sounds like a great place for it, thank you for pointing me there! I will see if I can start by integrating it there first. My only concern is that since `fast_dev_run` isn't on by default, many people who are not aware of `fast_dev_run` may continue running code that doesn't respect the batch dimension. Would it be better to add another flag to Trainer, e.g. `check_batch_dimension: Optional[int] = 0` (batch dim 0 as the default by PyTorch convention, `None` disables the check and warnings entirely)? If I understand correctly, to prevent breaking changes this would not be an assertion, but rather a loud warning?
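For illustration, the flag proposed above might look something like this (a sketch of the suggestion, not an existing Trainer argument):

```python
from pytorch_lightning import Trainer

# Hypothetical flag sketched from the comment above - not a real Trainer argument.
trainer = Trainer(check_batch_dimension=0)     # check along dim 0 (proposed default)
trainer = Trainer(check_batch_dimension=None)  # disable the check and warnings
```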
@TylerYep would love a PR for this!
@TylerYep how is it going here? 🐰
Struggling a lot to understand the codebase and figure out how to fit this feature in. I tried to fit it into the `_evaluate()` function in `evaluation_loop.py`, but I wasn't sure how to proceed - calling `evaluation_forward()` doesn't seem to contain the model outputs for the batch, and as written I'm not sure how to set `requires_grad` for the batch and disable it afterwards without creating a completely separate copy of `_evaluate()`. If you would like, I can make an in-progress PR, but I haven't made it very far, unfortunately.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@TylerYep can we help you? Or wait a bit until we finish the refactoring...
I have actually implemented this in a separate class myself to verify my models and have used it many times. It is a great sanity test. Maybe I can send a PR or a Google Colab, and @TylerYep can help me test it. We can also come up with more verification tests.
this is prime for a callback
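A rough sketch of what such a callback could look like - the hook signature comes from the `pytorch_lightning.Callback` interface, while the class name, the way the batch is fetched, and the check itself are all assumptions (device handling omitted for brevity):

```python
import torch
from pytorch_lightning import Callback


class BatchGradientVerification(Callback):
    """Hypothetical callback: run the batch-mixing gradient check once at train start."""

    def __init__(self, sample_idx: int = 0):
        self.sample_idx = sample_idx

    def on_train_start(self, trainer, pl_module):
        # Assumes batches of the form (input, target) with batch dimension 0.
        x, _ = next(iter(trainer.train_dataloader))
        x = x.detach().clone().requires_grad_(True)
        # Trivial loss: the sum of the outputs for a single sample.
        pl_module(x)[self.sample_idx].sum().backward()
        grad = x.grad.detach()
        # Gradients on every other sample's input should be exactly zero.
        others = torch.cat([grad[: self.sample_idx], grad[self.sample_idx + 1 :]])
        if others.abs().sum() > 0:
            raise RuntimeError("Model mixes data across the batch dimension.")
```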
Yeah, I would love to help test it! I haven't had the chance to work on this for a while, but if someone with more experience can lead the effort, that would be great :)
Draft here in this repo
@awaelchli looks like the repo is private
Thanks, I changed it to public now!
@awaelchli keep this open? do we want to include your callback in lightning?
@edenlightning It would be great. I made an issue in bolts Lightning-Universe/lightning-bolts#194
What's the distinction between callbacks in bolts and callbacks in the main repo? Optimistically, a lot of these checks (e.g. batch verification) will fit well in the majority of existing Lightning workflows, whereas bolts seems like a better fit for utilities that are a bit more niche or application-specific. Thoughts?
This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, PyTorch Lightning Team!
🚀 Feature
Verifies that the provided model code does not mix up data across the batch dimension. We do this by setting the loss to be something trivial (e.g. the sum of all outputs of example i), running the backward pass all the way to the input, and ensuring that we only get a non-zero gradient on the i-th input.
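In code, the idea could look roughly like this. `check_batch_mixing` is a hypothetical helper name, and the batch dimension is assumed to be 0:

```python
import warnings

import torch


def check_batch_mixing(model, batch, sample_idx=0):
    """Verify the output for `sample_idx` depends only on that sample's input
    (batch dimension assumed to be 0)."""
    x = batch.detach().clone().requires_grad_(True)
    out = model(x)
    # Trivial loss: the sum of all outputs of example `sample_idx`.
    out[sample_idx].sum().backward()
    grad = x.grad.detach()
    # Gradients on every other sample's input should be exactly zero.
    others = torch.cat([grad[:sample_idx], grad[sample_idx + 1:]])
    if others.abs().sum() > 0:
        warnings.warn(
            f"Output for sample {sample_idx} has nonzero gradients with respect "
            "to other samples - the model appears to mix data across the batch "
            "dimension."
        )
```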
Motivation
First of all, I would like to say thank you for the fantastic work being done on this project. Recently, I was working on a side project that has almost the exact same goal as this one, which I used as motivation to learn more about PyTorch and how to make Deep Learning easier. Clearly, this project is a lot more thought-out than mine :^), but I wanted to see if there were any ideas I developed independently that might be useful in this project.
One of the most useful utils I've implemented is a verification step before the model runs. In my project, this verification step performs checks such as:
- verifying that the model does not mix up data across the batch dimension (as described above)
Since I am very new to this project, I thought that the first bullet point might be a good place to start.
Pitch
Given the introductory example in the documentation, assume we had written some poor tensor operations in our model, like so:
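A minimal stand-in for such a mistake (the class, layer names, and sizes here are illustrative, based loosely on the MNIST intro example):

```python
import torch
from torch import nn


class LitMNIST(nn.Module):  # stands in for the LightningModule from the docs
    def __init__(self):
        super().__init__()
        self.layer_1 = nn.Linear(28 * 28, 128)
        self.layer_2 = nn.Linear(128, 10)

    def forward(self, x):
        # BUG: this should be x.view(x.size(0), -1). The shapes still line up,
        # so nothing crashes, but pixels from different images are now
        # shuffled across the batch dimension.
        x = x.view(28 * 28, -1).t()
        x = torch.relu(self.layer_1(x))
        return self.layer_2(x)
```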
When we start training, everything appears to run smoothly. However, this code is clearly wrong - we are crossing image data from separate datapoints in our batch.
It would be helpful if Lightning gave us a warning when this happens. For example:
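For instance, running the hypothetical `check_batch_mixing` helper from the Feature section against the buggy model above might produce something like this (the function name and warning wording are both illustrative):

```python
import torch

model = LitMNIST()
batch = torch.rand(8, 1, 28, 28)
check_batch_mixing(model, batch)
# UserWarning: Output for sample 0 has nonzero gradients with respect to other
# samples - the model appears to mix data across the batch dimension.
```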
This check verifies that only a single datapoint in the batch has a nonzero gradient. It has saved me countless times from running a poorly written model. :)
Implementation-wise, I am looking for advice on whether this is a useful effort, whether it fits the intended goals of Lightning, and what difficulties might arise.
Alternatives
It is clear that the feature as it stands will not work for all models, as some variants of LSTMs and the like use a different dimension as their batch dimension (maybe this can be a parameter). There also might be issues if the batch is split up somewhere - I'm not quite certain how everything in this project works, particularly around gradient accumulation.
However, I would expect this to be useful for almost all models. I advocate making it a default warning, while allowing well-informed users to pass some sort of flag to disable the verification step.
I also realize there needs to be some cleanup after this step to reset the model to its previous state. Any insights here would be great as well.
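One possible shape for that cleanup, sketched as a wrapper (an assumption, not Lightning API): snapshot the train/eval mode, run the check, then clear the gradients it produced and restore the mode.

```python
import torch


def run_check_safely(model, batch, check):
    was_training = model.training
    model.eval()
    try:
        with torch.enable_grad():  # ensure autograd is on even under no_grad()
            check(model, batch)
    finally:
        model.zero_grad(set_to_none=True)  # drop grads left by the trivial loss
        model.train(was_training)
```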
Additional context
None