Weird logging to console behavior. #4621
from @awaelchli: this might be a platform issue. Behaves differently on Linux vs Windows. |
We couldn't find any culprit so far. |
Even more minimal reproduction:

```python
import logging

import pytorch_lightning as pl
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim


class Model(pl.LightningModule):
    def __init__(self, in_size, hid_size, out_size):
        super().__init__()
        self.fc1 = nn.Linear(in_size, hid_size)
        self.fc2 = nn.Linear(hid_size, out_size)

    def forward(self, x):
        return self.fc2(self.fc1(x))

    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self(x)
        loss = F.cross_entropy(logits, y)
        return loss

    def configure_optimizers(self):
        return optim.Adam(self.parameters(), lr=1e-3)


def main(bug):
    logging.getLogger("lightning").setLevel(logging.INFO)
    if bug:
        logging.info("Hoi")
    model = Model(5, 10, 2)
    trainer = pl.Trainer()


if __name__ == "__main__":
    main(bug=False)
```

Calling it with `bug=True` prints every Lightning message twice:

```
GPU available: True, used: False
INFO:lightning:GPU available: True, used: False
TPU available: None, using: 0 TPU cores
INFO:lightning:TPU available: None, using: 0 TPU cores
```

while leaving `bug=False` prints each message only once:

```
GPU available: True, used: False
TPU available: None, using: 0 TPU cores
```
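The duplication pattern matches how the standard library treats an unconfigured root logger. The sketch below is an illustration, not code from this repository; it assumes pytorch_lightning of this era attaches its own StreamHandler to the "lightning" logger and leaves propagation enabled. A bare `logging.info()` call implicitly configures the root logger, after which every Lightning record is emitted twice: once plain by Lightning's handler and once, with the `INFO:lightning:` prefix, by the root handler it propagates to.

```python
import logging

import pytorch_lightning as pl

pl_logger = logging.getLogger("lightning")
root = logging.getLogger()

print(pl_logger.handlers)   # assumption: Lightning attached its own StreamHandler here
print(pl_logger.propagate)  # True by default, so records are also offered to the root logger
print(root.handlers)        # [] before anything configures the root logger

# A bare module-level logging call implicitly runs logging.basicConfig(),
# which installs a StreamHandler with the "LEVEL:name:message" format on root.
logging.info("Hoi")
print(root.handlers)        # now holds one StreamHandler

# From this point on, each record from the "lightning" logger is printed twice:
# once plain by its own handler and once as "INFO:lightning:..." by the root handler.
```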
Also leads to duplicate logging with Hydra |
Same issue on Ubuntu 20.04. |
Also I didn't find any way to suppress internal logging such as:
|
@toliz I think you have to do

```python
my_logger = logging.getLogger("lightning")
my_logger.setLevel(logging.INFO)
my_logger.info("Hoi")
```

I spent some time on this "duplicated logging" problem a few weeks ago, but it behaves differently on different platforms, which can drive a human crazy. I will try again with your sample and env and see if I can go any further this time. Thanks for reporting. |
@awaelchli For this to suppress the Trainer's logging I have to set the logging level to
What if you switch the following to
|
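For reference, this is the kind of adjustment being discussed; the exact level mentioned in the truncated comment above is not preserved, so WARNING below is only an example threshold I chose. Raising the level on the "lightning" logger hides the Trainer's INFO banners, at the cost of also hiding your own INFO messages sent through that logger.

```python
import logging

import pytorch_lightning as pl

# Example threshold only: WARNING silences the Trainer's INFO banners
# (GPU/TPU availability, etc.) while still showing warnings and errors.
logging.getLogger("lightning").setLevel(logging.WARNING)

trainer = pl.Trainer()
```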
Minimal example:

```python
import pytorch_lightning as pl
import logging

logging.info("I'm not getting logged")
pl.seed_everything(1234)  # but this gets logged twice

# console output:
# Global seed set to 1234
# INFO:lightning:Global seed set to 1234
```
|
In |
Yes! Works for me. |
Alright, I will finalize the PR. I need to see that this change doesn't affect any existing logging. |
@awaelchli sorry if it's a basic question but how can I test it? Do I just add this line in my local PL package? |
a few ways :) a) you can modify the pytorch lightning source code directly as I did in the linked PR, or b) in your own code run

```python
pl_logger = logging.getLogger("lightning")
pl_logger.propagate = False
```

before anything else runs. |
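To make the workaround concrete, here it is applied to the earlier seed_everything reproduction. With propagation disabled, the record is handled only by Lightning's own handler, so the `INFO:lightning:` duplicate from the root logger should disappear; the output shown is my expectation based on the thread, not console output copied from it.

```python
import logging

import pytorch_lightning as pl

# Workaround from the comment above: stop records from the "lightning" logger
# from propagating to the root logger, so each message is emitted only once.
pl_logger = logging.getLogger("lightning")
pl_logger.propagate = False

logging.info("I'm not getting logged")  # root level defaults to WARNING, so this stays silent
pl.seed_everything(1234)

# Expected console output (single line, no "INFO:lightning:" duplicate):
# Global seed set to 1234
```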
I have duplicated logs with
Before
and I cannot just comment it out or import my internal modules later (as suggested here). What should I do in order to fix logging, @awaelchli? |
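The thread does not spell out a fix for the Hydra case, but a generic option from the standard logging module is to decide which side should own the output: either keep Lightning's handler and disable propagation (as above), or drop Lightning's own handler so messages flow only through the handlers your application (e.g. Hydra) configures on the root logger. A sketch of the second option, under the assumption that the duplicate line comes from a handler attached directly to the "lightning" logger:

```python
import logging

# Assumption: the duplicate line comes from a StreamHandler attached directly to
# the "lightning" logger. Removing it leaves only the root-logger handlers that
# the application (e.g. Hydra) configured to emit Lightning's records.
pl_logger = logging.getLogger("lightning")
for handler in list(pl_logger.handlers):
    pl_logger.removeHandler(handler)
```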
🐛 Bug
Logging to the console prints some messages twice and does not output my custom logging. Verbose EarlyStopping also does not output to the console:
To Reproduce
Here is my training code:
Expected behavior
Environment