Proper way to use `loguru.logger` with multiprocessing (on Mac M1 / Windows) #912
Small update: I got the solution to my question from #818.
I have the same reasons as @PakitoSec for thinking that this solution is not the best.
Hi. There aren't many more examples than those on the documentation page you've shared. I'm not a Mac user, so I can't help you much with the issues you're facing. See the usage in loguru/tests/test_multiprocessing.py, lines 223 to 238 (commit 0b50708).
@Delgan Thanks for your reply 👍 What about the second question?
Although I can't find that issue yet, I'm pretty sure you mentioned in an earlier issue that developers should avoid repeatedly creating logger objects in different processes as much as possible; doing so may lead to unexpected errors. Therefore I tried to meet my needs without using
Or can I just re-implement it with
I want to make my project runnable across different platforms, including Mac M1, Mac Intel, and of course Windows. Therefore I can't use
Here is my current solution for my needs, which is not good enough for me:
Is there any way to make it more readable? I wish I could log by executing
Indeed, that's the best way to share the logger. However, there is currently no way to globally update the logger. A possible workaround would be to use some kind of wrapper like this:

```python
# _loguru.py
from loguru import logger as _logger


class LoggerDelegator:
    def __init__(self, logger):
        self._logger = logger

    def update_logger(self, logger):
        self._logger = logger

    def __getattr__(self, attr):
        return getattr(self._logger, attr)


logger = LoggerDelegator(_logger)
```
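To illustrate the delegation mechanics of this wrapper, here is a self-contained sketch that uses a stand-in class in place of the loguru logger (`FakeLogger` and its `info` method are made up for the demo, so it runs without loguru installed):

```python
class FakeLogger:
    """Stand-in for loguru's logger, just to show the delegation."""
    def __init__(self, name):
        self.name = name

    def info(self, message):
        return f"[{self.name}] {message}"


class LoggerDelegator:
    def __init__(self, logger):
        self._logger = logger

    def update_logger(self, logger):
        self._logger = logger

    def __getattr__(self, attr):
        # Any attribute not found on the delegator itself is looked up on
        # the wrapped logger, so callers keep writing logger.info(...).
        return getattr(self._logger, attr)


logger = LoggerDelegator(FakeLogger("parent"))
print(logger.info("hello"))   # → [parent] hello
logger.update_logger(FakeLogger("child"))
print(logger.info("hello"))   # → [child] hello
```

Because every module imports the same delegator instance, calling `update_logger` once (for example at the start of a spawned process) redirects all subsequent logging calls.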
Update: the error below occurred because I passed

@Delgan Thanks for your reply, your solution helps me a lot.

@Delgan It seems something went wrong with
I have copied your code and added type hints to it like this:

```python
# _loguru.py
from __future__ import annotations

import loguru
from loguru import logger as _logger


class LoggerDelegator:
    def __init__(self, logger: loguru.Logger):
        self._logger: loguru.Logger = logger

    def update_logger(self, logger: loguru.Logger):
        self._logger = logger

    def __getattr__(self, attr):
        return getattr(self._logger, attr)


logger = LoggerDelegator(_logger)
```
My env
Question
1. Is there any more complete example of using `loguru.logger` with `multiprocessing` (or another multi-process library) while sharing the configuration of `loguru.logger`? I have read "Compatibility with multiprocessing using enqueue argument" in Code snippets and recipes for loguru, but many errors occurred in my project after I followed it.
2. Can I pass the `loguru.logger` instance through methods as an argument and reload/reset/overwrite the `loguru.logger` in a sub-process created by "spawning" instead of "forking"? (I got many warning messages when I tried to use "forking" on Mac M1.)

The errors mentioned in 1. and 2.:
1. The settings of `loguru.logger` were missing after my sub-process was created.
2. I have tried to set the multiprocessing start method at the beginning of my project to avoid `loguru.logger` being recreated due to the spawning/forking differences mentioned in #908 (comment). But I got the error message below, as explained in the answer to "Multiprocessing causes Python to crash and gives an error may have been in progress in another thread when fork() was called":
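For reference, the start method can be chosen explicitly with the standard library alone; a minimal sketch, independent of loguru (using `force=True` so the call succeeds even if a method was already set):

```python
import multiprocessing

# "spawn" avoids the fork()-safety crashes on macOS quoted above; note that
# on macOS with Python 3.8+ it is already the default start method.
multiprocessing.set_start_method("spawn", force=True)
print(multiprocessing.get_start_method())  # → spawn
```

With "spawn", child processes start from a fresh interpreter, which is exactly why module-level logger configuration does not carry over automatically and must be re-applied or passed in.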
And I got more error messages after I followed the tips and fixed the problem above, although my project could already barely run:
I am not sure whether these warning messages came from `loguru`, since I have not used `@logger.catch`. I'm confused now and have no idea what happened 😢