
Request: Allow not using AMP even if it's available #90

Closed
carbocation opened this issue Jan 12, 2024 · 5 comments

@carbocation

I'd like to be able to use this without AMP, even when AMP is available. From what I can tell, if AMP is available this library uses it. It would be nice to have an override in the constructor where I can choose to disable AMP even if AMP is available on my system.

@NaleRaphael
Contributor

If you are in a hurry, you can work around it for now by forcing IS_AMP_AVAILABLE = False in the following lines:

try:
    from apex import amp
    IS_AMP_AVAILABLE = True
except ImportError:
    IS_AMP_AVAILABLE = False
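
Alternatively, you can overwrite the flag at runtime instead of editing the installed source. This is only a minimal sketch and assumes the flag is a module-level variable in torch_lr_finder.lr_finder; the module path might differ in your version:

import torch_lr_finder.lr_finder as lr_finder_module

# Force the apex.amp code path off, even if apex is importable.
# (Assumption: IS_AMP_AVAILABLE is a module-level flag in torch_lr_finder.lr_finder.)
lr_finder_module.IS_AMP_AVAILABLE = False

from torch_lr_finder import LRFinder
# Any LRFinder created afterwards will always take the plain loss.backward() path.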

I'll submit a PR in the next few days, since I need to take a few things into consideration, including the earlier issue #67 in which you also commented. Sorry for the inactivity and the inconvenience.

@NaleRaphael
Contributor

Hi @carbocation. A question just came to mind: are you trying to use LRFinder with AMP disabled while the model and optimizer are wrapped by apex.amp.initialize()?
I'm curious about this use case, because apex.amp.scale_loss() should do nothing if the optimizer is not wrapped by apex.amp.initialize() (see also the code below).

# Backward pass
if IS_AMP_AVAILABLE and hasattr(self.optimizer, "_amp_stash"):
    # For minor performance optimization, see also:
    # https://nvidia.github.io/apex/advanced.html#gradient-accumulation-across-iterations
    delay_unscale = ((i + 1) % accumulation_steps) != 0

    with amp.scale_loss(loss, self.optimizer, delay_unscale=delay_unscale) as scaled_loss:
        scaled_loss.backward()
else:
    loss.backward()

Anyway, I'm working on integrating torch.amp and apex.amp, so that we can determine which one to use by explicitly specifying the backend. And thanks for bringing this issue up.
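Roughly, the usage I have in mind looks like the sketch below. The amp_backend argument here is only an illustration of the direction; it is not a finalized or released API, and the final parameter names may differ.

import torch
from torch import nn
from torch_lr_finder import LRFinder

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Hypothetical: explicitly pick the AMP backend, or disable AMP entirely.
#   amp_backend=None    -> never use AMP, even if apex / torch.amp are importable
#   amp_backend="torch" -> use the built-in torch.cuda.amp
#   amp_backend="apex"  -> use NVIDIA apex.amp
lr_finder = LRFinder(model, optimizer, criterion, device="cpu", amp_backend=None)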

@carbocation
Author

Apologies, I might have misunderstood the code. I am not wrapping my model or optimizer with any apex.amp initializer. (I sometimes use the built-in PyTorch AMP, but that does not require wrapping the model.) So perhaps I had it backwards: I thought AMP was always enabled by default and was hoping for a way to disable it, but it seems it was never enabled for me in the first place, since I don't use apex.
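
For concreteness, my training loop looks roughly like this (a minimal sketch of native PyTorch AMP, assuming a CUDA device; versions may vary):

import torch
from torch import nn

model = nn.Linear(10, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()

inputs = torch.randn(8, 10, device="cuda")
targets = torch.randint(0, 2, (8,), device="cuda")

# Native AMP never wraps the model or optimizer, so the optimizer has no
# `_amp_stash` attribute and LRFinder's apex-specific branch is never taken.
with torch.cuda.amp.autocast():
    loss = criterion(model(inputs), targets)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()

# apex.amp, by contrast, rewrites the model/optimizer via
#   model, optimizer = apex.amp.initialize(model, optimizer, opt_level="O1")
# and that wrapping is what attaches `_amp_stash` to the optimizer.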

@NaleRaphael
Contributor

No worries. Hopefully this helps clarify things if you run into problems while using AMP.

@davidtvs
Owner

davidtvs commented Feb 3, 2024

Closing; I believe #91 addressed this issue.

@davidtvs davidtvs closed this as completed Feb 3, 2024