Rotating Error in multi-process (randomly happened) #916
BTW, what would happen if the error occurs on
How would
Hi @changchiyou. Are you using the custom
Also, can you please give me more information about your logger configuration?
Yes, I will show the details below.
Sure 👍 My logger configuration:
```python
# config_vars.ROTATION = "00:00"
# config_vars.UTC = 8
# LOG_FORMAT = "<green>{time:YYYY-MM-DD HH:mm:ss,SSS}</green> <level>{level: <8}</level> [<cyan>{extra[mark]}</cyan>] <cyan>{file}</cyan> - <cyan>{function}</cyan>() : <level>{extra[message_no_colors]}</level>"
# CONSOLE_FORMAT = "<green>{time:YYYY-MM-DD HH:mm:ss,SSS}</green> <level>{level: <8}</level> [<cyan>{extra[mark]}</cyan>] <cyan>{file}</cyan> - <cyan>{function}</cyan>() : <level>{message}</level>"
hour, minute = [int(_time) for _time in str(config_vars.ROTATION).split(":")]

_logger.add(
    f"{config_setting.args.LOG_PATH}/{config_vars.BASE_LOG_FILE_NAME}"  # .../record.log
    if log_path is None
    else log_path,
    level=log_level,
    # bug report from https://github.com/Delgan/loguru/issues/894
    # rotation=config_vars.ROTATION if rotation is None else rotation,
    rotation=datetime.time(
        hour=hour,
        minute=minute,
        tzinfo=datetime.timezone(datetime.timedelta(hours=config_vars.UTC)),
    )
    if rotation is None
    else rotation,
    compression=rename if compression is None else compression,
    format=config_vars.LOG_FORMAT if log_format is None else log_format,
    enqueue=enqueue,
    colorize=False,
    filter=remove_color_syntax,
)
_logger.add(
    console_sink,
    level=console_level,
    colorize=True,
    format=config_vars.CONSOLE_FORMAT if console_format is None else console_format,
    enqueue=enqueue,
)
loguru_set_mark(mark)
logging.basicConfig(handlers=[InterceptHandler()], level=0, force=True)


# RENAME_TIME_FORMAT = "%Y-%m-%d"
# BASE_LOG_FILE_NAME = "record.log"
def rename(filepath) -> None:
    today = datetime.datetime.now().strftime(
        config_setting.get_variable(__file__).RENAME_TIME_FORMAT
    )
    dirname = os.path.dirname(filepath)
    basename = config_setting.get_variable(__file__).BASE_LOG_FILE_NAME
    current_log_filepath = os.path.normpath(os.path.join(dirname, basename))
    os.rename(
        filepath,
        dynamic_insert_string(str(current_log_filepath), today),
    )


def remove_color_syntax(record: loguru.Record) -> bool:
    # https://zhuanlan.zhihu.com/p/70680488
    # `re.compile` is not strictly needed here, but it's a good habit.
    ansi_escape = re.compile(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])")
    # remove color syntax from the message
    record["extra"]["message_no_colors"] = ansi_escape.sub("", record["message"])
    return True


def loguru_set_mark(
    mark: str, _logger: LoggerDelegator | loguru.Logger = logger
) -> None:
    _logger.configure(extra={"mark": mark})
```
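As a quick sanity check (not part of the original configuration), the ANSI-stripping pattern used in `remove_color_syntax` can be exercised on its own; the sample string here is illustrative:

```python
import re

# Same pattern as in remove_color_syntax above.
ansi_escape = re.compile(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])")

colored = "\x1b[31mred\x1b[0m text"  # "red" wrapped in ANSI color codes
plain = ansi_escape.sub("", colored)
# plain == "red text"
```

This confirms the filter only rewrites `extra["message_no_colors"]` and leaves the original `message` (with color codes) untouched for the console sink.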
Thanks for the details. I assume
@Delgan Update for more detail info.
Yes. Sorry, I forgot to provide this important information.

```python
set_loguru(
    enqueue=True,
    console_level=logging.DEBUG
    if config_setting.args.DEBUG is True
    else logging.INFO,
)
```

```python
def set_loguru(  # pylint: disable=too-many-arguments
    _logger: LoggerDelegator | loguru.Logger = logger,
    log_path: str | None = None,
    rotation: str | None = None,
    log_level: int = logging.DEBUG,
    console_level: int = logging.INFO,
    log_format: str | None = None,
    console_format: str | None = None,
    mark: str = "origin",
    enqueue: bool = False,
    compression=None,
    console_sink=sys.stderr,
) -> None:
```
@Delgan Small info update: to verify my question before
I ran my project for another day (starting from
Sorry @changchiyou but the configuration is a bit too complex for me to fully reproduce it locally. From what I tested, it seems you could be facing the same issue as #894, right? The easiest way to find out if it's this bug would be to use Loguru v0.6.0 instead of v0.7.0 (the bug was introduced between these two releases).
@Delgan Thanks for your reply, I would like to try your answer,
but as I mentioned above,
I don't know if this solution is only temporarily useful, therefore I want to write a unit test for it, based on your test script https://github.com/Delgan/loguru/blob/master/tests/test_filesink_rotation.py :

```python
from loguru import logger
from facepcs.utils import set_loguru, get_relative_path
from facepcs.config import config_setting
import logging
import multiprocessing

from ..conftest import check_dir

MSG = "test"


def message(logger):
    from facepcs.utils import logger as _logger
    _logger.update_logger(logger)
    _logger.debug(f"{MSG}")


def test_multi_process_renaming(freeze_time, tmp_path):
    config_setting.init_config_dir(get_relative_path("../../python-package/facepcs/storage/config"))
    set_loguru(enqueue=True, console_level=logging.DEBUG,
               log_path=tmp_path / config_setting.get_variable("logging_enhance.py").BASE_LOG_FILE_NAME)
    with freeze_time("2020-01-01 23:59:59") as frozen:
        workers = []
        for _ in range(2):
            worker = multiprocessing.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)
            frozen.tick()
        for worker in workers:
            worker.join()
        check_dir(
            tmp_path,
            files=[
                ("file.2020-01-01.log", ""),
                ("file.log", "a\n"),
            ],
        )
```

After I execute it:

Full Error Message

Any advice for this unit test? BTW, I have tried
Update: After reinstalling, I ran my project across a day boundary again; here are the logs. Although today is
I will try to finish the unit test (#916 (comment)) to test this problem more quickly locally.
BTW, @Delgan seems not. I can still reproduce problem #916 (comment) with
Hi @changchiyou. I created two unit tests based on the one you provided. They can be put in

```python
def test_bug_916_v1(freeze_time, tmp_path):
    import multiprocessing

    context = multiprocessing.get_context("fork")

    def message(logger_):
        logger_.debug("Message")

    with freeze_time("2020-01-01 12:00:00") as frozen:
        logger.add(tmp_path / "record.log", context=context, format="{message}", enqueue=True, rotation="00:00")

        workers = []
        for _ in range(2):
            worker = context.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)
        for worker in workers:
            worker.join()

        # Rotation is expected to create a new file after this point.
        frozen.tick(delta=datetime.timedelta(hours=24))

        for _ in range(2):
            worker = context.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)
        for worker in workers:
            worker.join()

        # No rotation should occur here.
        frozen.tick(delta=datetime.timedelta(hours=1))

        for _ in range(2):
            worker = context.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)
        for worker in workers:
            worker.join()

        check_dir(
            tmp_path,
            files=[
                ("record.2020-01-01_12-00-00_000000.log", "Message\n" * 2),
                ("record.log", "Message\n" * 4),
            ],
        )


def test_bug_916_v2(freeze_time, tmp_path):
    import multiprocessing

    context = multiprocessing.get_context("fork")

    def message(logger_):
        logger_.debug("Message")

    with freeze_time("2020-01-01 12:00:00") as frozen:
        logger.add(tmp_path / "record.log", context=context, format="{message}", enqueue=True, rotation="00:00")

        workers = []
        for _ in range(2):
            worker = context.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)
        for worker in workers:
            worker.join()

        # Rotation is expected to create a new file after this point.
        frozen.tick(delta=datetime.timedelta(hours=24))

        logger.remove()
        logger.add(tmp_path / "record.log", context=context, format="{message}", enqueue=True, rotation="00:00")

        for _ in range(2):
            worker = context.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)
        for worker in workers:
            worker.join()

        # No rotation should occur here.
        frozen.tick(delta=datetime.timedelta(hours=1))

        logger.remove()
        logger.add(tmp_path / "record.log", context=context, format="{message}", enqueue=True, rotation="00:00")

        for _ in range(2):
            worker = context.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)
        for worker in workers:
            worker.join()

        check_dir(
            tmp_path,
            files=[
                ("record.2020-01-01_12-00-00_000000.log", "Message\n" * 2),
                ("record.log", "Message\n" * 4),
            ],
        )
```

However, both tests are successfully executed. I think I'm missing some part of your
@Delgan Thanks for your reply, I corrected my unit test from #916 (comment) to the current version based on your unit-test showcase 👍 . BTW, I use
Sure. I created 4 unit tests, two of which use
Note. My full `pytest` script (`loguru==0.7.0`):

```python
from loguru import logger
import datetime
import logging
import os
import re
import sys

from facepcs.utils import set_loguru, get_relative_path
from facepcs.config import config_setting

from ..conftest import check_dir

MSG = "test"


def rename(filepath) -> None:
    today = datetime.datetime.now().strftime(r"%Y-%m-%d")
    dirname = os.path.dirname(filepath)
    basename = "record.log"
    new_filepath = os.path.normpath(os.path.join(dirname, basename))
    os.rename(
        filepath,
        dynamic_insert_string(str(new_filepath), today),
    )


def dynamic_insert_string(base: str, insert: str) -> str:
    parts = base.split(".")
    if len(parts) > 1:
        base_filename = ".".join(parts[:-1])
        extension = parts[-1]
        result = f"{base_filename}.{insert}.{extension}"
    else:
        result = f"{base}.{insert}"
    return result


def remove_color_syntax(record) -> bool:
    ansi_escape = re.compile(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])")
    record["extra"]["message_no_colors"] = ansi_escape.sub("", record["message"])
    return True


class InterceptHandler(logging.Handler):
    def emit(
        self,
        record: logging.LogRecord,
        _logger=logger,
    ) -> None:
        try:
            level = _logger.level(record.levelname).name
        except ValueError:
            level = record.levelno
        frame, depth = sys._getframe(6), 6
        while frame is not None and frame.f_code.co_filename == logging.__file__:
            frame = frame.f_back
            depth += 1
        _logger.opt(depth=depth, exception=record.exc_info).log(
            level, record.getMessage()
        )


def set_loguru_pure(tmp_path):
    import logging
    from loguru import logger
    import sys
    import datetime

    logger.remove()
    logger.add(
        f"{str(tmp_path)}/record.log",
        level=logging.DEBUG,
        rotation=datetime.time(
            hour=0,
            minute=0,
            tzinfo=datetime.timezone(datetime.timedelta(hours=8)),
        ),
        compression=rename,
        format="{message}",
        enqueue=True,
        colorize=False,
        filter=remove_color_syntax,
    )
    logger.add(
        sys.stderr,
        level=logging.DEBUG,
        colorize=True,
        format="{message}",
        enqueue=True,
    )
    logger.configure(extra={"mark": "origin"})
    logging.basicConfig(handlers=[InterceptHandler()], level=0, force=True)


def test_single_process_pure_renaming(freeze_time, tmp_path):
    with freeze_time("2020-01-01 12:00:00") as frozen:
        set_loguru_pure(tmp_path)
        for _ in range(2):
            logger.debug(f"{MSG}")
        check_dir(
            tmp_path,
            files=[
                ("record.log", f"{MSG}\n" * 2),
            ],
        )
        frozen.tick(delta=datetime.timedelta(hours=24))
        for _ in range(4):
            logger.debug(f"{MSG}")
        check_dir(
            tmp_path,
            files=[
                ("record.2020-01-01.log", f"{MSG}\n" * 2),
                ("record.log", f"{MSG}\n" * 4),
            ],
        )


def test_multi_process_pure_renaming(freeze_time, tmp_path):
    import multiprocessing

    with freeze_time("2020-01-01 12:00:00") as frozen:
        set_loguru_pure(tmp_path)
        workers = []
        for _ in range(2):
            worker = multiprocessing.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)
        for worker in workers:
            worker.join()
        check_dir(
            tmp_path,
            files=[
                ("record.log", f"{MSG}\n" * 2),
            ],
        )
        frozen.tick(delta=datetime.timedelta(hours=24))
        workers = []
        for _ in range(4):
            worker = multiprocessing.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)
        for worker in workers:
            worker.join()
        check_dir(
            tmp_path,
            files=[
                ("record.2020-01-01.log", f"{MSG}\n" * 2),
                ("record.log", f"{MSG}\n" * 4),
            ],
        )


def test_single_process_renaming(freeze_time, tmp_path):
    with freeze_time("2020-01-01 12:00:00") as frozen:
        config_setting.init_config_dir(get_relative_path("../../python-package/facepcs/storage/config"))
        config_setting.init_args({"log_path": str(tmp_path)})
        set_loguru(enqueue=True, console_level=logging.DEBUG, console_format=r"{message}", log_format=r"{message}")
        for _ in range(2):
            logger.debug(f"{MSG}")
        check_dir(
            tmp_path,
            files=[
                ("record.log", f"{MSG}\n" * 2),
            ],
        )
        frozen.tick(delta=datetime.timedelta(hours=24))
        for _ in range(4):
            logger.debug(f"{MSG}")
        check_dir(
            tmp_path,
            files=[
                ("record.2020-01-01.log", f"{MSG}\n" * 2),
                ("record.log", f"{MSG}\n" * 4),
            ],
        )


def message(logger):
    from facepcs.utils import logger as _logger
    _logger.update_logger(logger)
    _logger.debug(f"{MSG}")


def test_multi_process_renaming(freeze_time, tmp_path):
    import multiprocessing

    with freeze_time("2020-01-01 12:00:00") as frozen:
        config_setting.init_config_dir(get_relative_path("../../python-package/facepcs/storage/config"))
        config_setting.init_args({"log_path": str(tmp_path)})
        set_loguru(enqueue=True, console_level=logging.DEBUG, console_format=r"{message}", log_format=r"{message}")
        workers = []
        for _ in range(2):
            worker = multiprocessing.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)
        for worker in workers:
            worker.join()
        check_dir(
            tmp_path,
            files=[
                ("record.log", f"{MSG}\n" * 2),
            ],
        )
        frozen.tick(delta=datetime.timedelta(hours=24))
        workers = []
        for _ in range(4):
            worker = multiprocessing.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)
        for worker in workers:
            worker.join()
        check_dir(
            tmp_path,
            files=[
                ("record.2020-01-01.log", f"{MSG}\n" * 2),
                ("record.log", f"{MSG}\n" * 4),
            ],
        )
```

FAILURES
@Delgan I believe I found a more precise error when I tried to use the built-in solution from #899 (comment). (The test below executes with
I commented out `compression=rename if compression is None else compression,` in my methods.) I changed loguru/tests/test_filesink_rotation.py (lines 293 to 319 in 2a35b87)
to:

```python
@pytest.mark.parametrize("enqueue", [True, False])
@pytest.mark.parametrize("rotation", ["daily", "00:00"])
def test_daily_rotation_with_different_rotation(freeze_time, tmp_path, enqueue, rotation):
    with freeze_time("2018-10-27 00:00:00") as frozen:
        config_setting.init_config_dir(get_relative_path("../../python-package/facepcs/storage/config"))
        config_setting.init_args({"log_path": str(tmp_path)})
        set_loguru(enqueue=enqueue, console_level=logging.DEBUG, console_format=r"{message}", log_format=r"{message}", rotation=rotation, log_path=tmp_path / "record.{time:YYYY-MM-DD}.log")
        logger.debug("First")
        frozen.tick(delta=datetime.timedelta(hours=23, minutes=30))
        logger.debug("Second")
        frozen.tick(delta=datetime.timedelta(hours=1))
        logger.debug("Third")
        frozen.tick(delta=datetime.timedelta(hours=24))
        logger.debug("Fourth")
        logger.remove()
        check_dir(
            tmp_path,
            files=[
                ("record.2018-10-27.log", "First\nSecond\n"),
                ("record.2018-10-28.log", "Third\n"),
                ("record.2018-10-29.log", "Fourth\n"),
            ],
        )
```

with
I haven't found any description about
I suspect that using the
After I reviewed the previous issue #899, according to my comment #899 (comment), I found that

```python
def rename(filepath):
    today = datetime.datetime.now().strftime("%Y-%m-%d")
    os.rename(filepath, f"record.{today}.log")

logger.add("record.log", rotation="00:00", compression=rename)
```

to

```python
def rename(filepath):
    time_string_regex = r"\d{4}-\d{2}-\d{2}_\d{2}-\d{2}-\d{2}_\d{6}"
    time_string_format = r"%Y-%m-%d_%H-%M-%S_%f"
    target_format = r"%Y-%m-%d"
    date_pattern = re.compile(time_string_regex)
    date_match = re.search(date_pattern, str(filepath))
    if date_match:
        date_string = date_match.group()
        date = datetime.datetime.strptime(date_string, time_string_format)
        formatted_date = date.strftime(target_format)
        new_filepath = re.sub(date_pattern, formatted_date, str(filepath))
        os.rename(filepath, new_filepath)
    else:
        raise ValueError(
            f"can't find any time string in '{filepath}' matched '{time_string_format}'"
        )
```

since the input argument
Actually, I don't have to rename the log file with a specific time string. I used the native settings window on my Mac M1 to adjust the date/time, and after I switched from
but I failed the unit test:

```python
@pytest.mark.parametrize("enqueue", [True])
@pytest.mark.parametrize("rotation", ["daily", "00:00"])
def test_multi_process_renaming(freeze_time, tmp_path, enqueue, rotation):
    import multiprocessing

    with freeze_time("2020-01-01 12:00:00") as frozen:
        config_setting.init_config_dir(get_relative_path("../../python-package/facepcs/storage/config"))
        config_setting.init_args({"log_path": str(tmp_path)})
        set_loguru(enqueue=enqueue, console_level=logging.DEBUG, console_format=r"{message}", log_format=r"{message}", rotation=rotation)
        workers = []
        for _ in range(2):
            worker = multiprocessing.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)
        for worker in workers:
            worker.join()
        check_dir(
            tmp_path,
            files=[
                ("record.log", f"{MSG}\n" * 2),
            ],
        )
        frozen.tick(delta=datetime.timedelta(hours=24))
        workers = []
        for _ in range(4):
            worker = multiprocessing.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)
        for worker in workers:
            worker.join()
        check_dir(
            tmp_path,
            files=[
                ("record.2020-01-01.log", f"{MSG}\n" * 2),
                ("record.log", f"{MSG}\n" * 4),
            ],
        )
```
Update: After leaving the comments below, I thought I had finally solved this issue, but it seems not.
I ran my project for another day and it worked perfectly, rotating at midnight, with:
But after I RESTARTED my project in the morning, it rotated again, and now I have:
I hope that no matter when the project is restarted (any time of day except midnight), new logs are only appended to the end of "record.log". I don't want it renamed except when the date changes. I'm guessing my expectations didn't match up with my
Here is how
Is there any possibility that
@Delgan BTW, aren't the first 2 of my 4 unit tests in #916 (comment) clear enough? Let me know if you want more information/details, I am willing to add more. 👍
@Delgan I'm considering deprecating the project's support for Windows (use
I replaced all

```python
@pytest.mark.parametrize("enqueue", [True])
@pytest.mark.parametrize("rotation", ["daily", "00:00"])
def test_multi_process_renaming(freeze_time, tmp_path, enqueue, rotation):
    import multiprocessing

    context = multiprocessing.get_context("fork")

    with freeze_time("2020-01-01 12:00:00") as frozen:
        config_setting.init_config_dir(get_relative_path("../../python-package/facepcs/storage/config"))
        config_setting.init_args({"log_path": str(tmp_path)})
        set_loguru(enqueue=enqueue, console_level=logging.DEBUG, console_format=r"{message}", log_format=r"{message}", rotation=rotation)
        workers = []
        for _ in range(2):
            worker = context.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)
        for worker in workers:
            worker.join()
        check_dir(
            tmp_path,
            files=[
                ("record.log", f"{MSG}\n" * 2),
            ],
        )
        frozen.tick(delta=datetime.timedelta(hours=24))
        workers = []
        for _ in range(4):
            worker = context.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)
        for worker in workers:
            worker.join()
        check_dir(
            tmp_path,
            files=[
                ("record.2020-01-01.log", f"{MSG}\n" * 2),
                ("record.log", f"{MSG}\n" * 4),
            ],
        )
```

and it just passed without any error 🤣
Another small question: I referred to your unit tests above (#916 (comment)), but I can't pass

```
def patched_open(filepath, *args, **kwargs):
    if not os.path.exists(filepath):
        tz = datetime.timezone(datetime.timedelta(seconds=fakes["offset"]), name=fakes["zone"])
        ctimes[filepath] = datetime.datetime.now().replace(tzinfo=tz).timestamp()
>       return builtins_open(filepath, *args, **kwargs)
E       TypeError: 'context' is an invalid keyword argument for open()

tests/conftest.py:80: TypeError
```

I can't find any documentation about this argument. In https://loguru.readthedocs.io/en/stable/resources/recipes.html#compatibility-with-multiprocessing-using-enqueue-argument you didn't pass the
@Delgan I'm sorry for bothering you for days 😭 but here's my final conclusion of this issue:
No worries. Thank you for your patience. I appreciate that you haven't condemned Loguru and are trying to find a solution. I welcome your work and detailed reports about what you observe during the tests. It's just that there's a lot of information out there and it's sometimes difficult for me to understand and reproduce the problem you're facing. But I hope we'll find a solution. I don't like the idea of you deprecating your own project on a platform because of Loguru... :) Regarding your comment here: #916 (comment)
Those tests are failing because

```python
logger.debug("First")

# Wait for the first message to be fully processed by the background thread.
logger.complete()

# If complete() were not called, this line could be executed while the first message is still pending.
frozen.tick(delta=datetime.timedelta(hours=23, minutes=30))

# If complete() were not called, it's possible that both the first and second messages would be
# processed after the actual time change, leading to test failure.
logger.debug("Second")
```

Regarding your comment here: #916 (comment)
Thanks for providing the tests. I could run them on my computer, but the failure reason was different than the one you reported. Looking at

```python
check_dir(
    tmp_path,
    files=[
        ("record.2020-01-01.log", f"{MSG}\n" * 2),
        ("record.log", f"{MSG}\n" * 4),
    ],
)
```

The test is started at the fake date of
Maybe you actually want to subtract
In addition, there are also missing calls to
Edit: On further reflection, I'm no longer sure of this explanation. The messages are handled in the main process, therefore
Regarding your comment here: #916 (comment)
Sorry, this is a new argument which exists on
It's not a requirement. It is mainly useful to isolate each test case instead of calling
Regarding your comment here: #916 (comment)
With the explanations I've given above, I don't think we can conclude that there's a bug in Loguru for now. I arranged the two unit tests you shared as follows:
The test cases work fine now, regardless of

```python
from loguru import logger
import datetime
import logging
import os
import re
import sys
import multiprocessing
import time

from ..conftest import check_dir

multiprocessing.set_start_method("spawn", force=True)

MSG = "test"


def message(logger_):
    logger_.debug(f"{MSG}")


def rename(filepath) -> None:
    today = datetime.datetime.now().strftime(r"%Y-%m-%d_%H-%M-%S")
    dirname = os.path.dirname(filepath)
    basename = "record.log"
    new_filepath = os.path.normpath(os.path.join(dirname, basename))
    os.rename(
        filepath,
        dynamic_insert_string(str(new_filepath), today),
    )


def dynamic_insert_string(base: str, insert: str) -> str:
    parts = base.split(".")
    if len(parts) > 1:
        base_filename = ".".join(parts[:-1])
        extension = parts[-1]
        result = f"{base_filename}.{insert}.{extension}"
    else:
        result = f"{base}.{insert}"
    return result


def remove_color_syntax(record) -> bool:
    ansi_escape = re.compile(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])")
    record["extra"]["message_no_colors"] = ansi_escape.sub("", record["message"])
    return True


class InterceptHandler(logging.Handler):
    def emit(
        self,
        record: logging.LogRecord,
        _logger=logger,
    ) -> None:
        try:
            level = _logger.level(record.levelname).name
        except ValueError:
            level = record.levelno
        frame, depth = sys._getframe(6), 6
        while frame is not None and frame.f_code.co_filename == logging.__file__:
            frame = frame.f_back
            depth += 1
        _logger.opt(depth=depth, exception=record.exc_info).log(
            level, record.getMessage()
        )


def set_loguru_pure(tmp_path, rotation=None):
    import logging
    from loguru import logger
    import sys
    import datetime

    logger.remove()
    logger.add(
        f"{str(tmp_path)}/record.log",
        level=logging.DEBUG,
        rotation=datetime.time(
            hour=0,
            minute=0,
            tzinfo=datetime.timezone(datetime.timedelta(hours=8)),
        ) if rotation is None else rotation,
        compression=rename,
        format="{message}",
        enqueue=True,
        colorize=False,
        filter=remove_color_syntax,
    )
    logger.add(
        sys.stderr,
        level=logging.DEBUG,
        colorize=True,
        format="{message}",
        enqueue=True,
    )
    logger.configure(extra={"mark": "origin"})
    logging.basicConfig(handlers=[InterceptHandler()], level=0, force=True)


def test_single_process_pure_renaming(freeze_time, tmp_path):
    with freeze_time("2020-01-01 12:00:00") as frozen:
        set_loguru_pure(tmp_path)
        for _ in range(2):
            logger.debug(f"{MSG}")
        logger.complete()
        check_dir(
            tmp_path,
            files=[
                ("record.log", f"{MSG}\n" * 2),
            ],
        )
        frozen.tick(delta=datetime.timedelta(hours=24))
        for _ in range(4):
            logger.debug(f"{MSG}")
        logger.complete()
        check_dir(
            tmp_path,
            files=[
                ("record.2020-01-02_12-00-00.log", f"{MSG}\n" * 2),
                ("record.log", f"{MSG}\n" * 4),
            ],
        )


def test_multi_process_pure_renaming(tmp_path):
    in_2_seconds = (datetime.datetime.now() + datetime.timedelta(seconds=2)).time()
    set_loguru_pure(tmp_path, rotation=in_2_seconds)
    workers = []
    for _ in range(2):
        worker = multiprocessing.Process(target=message, args=(logger,))
        worker.start()
        workers.append(worker)
    for worker in workers:
        worker.join()
    logger.complete()
    check_dir(
        tmp_path,
        files=[
            ("record.log", f"{MSG}\n" * 2),
        ],
    )
    time.sleep(5)
    workers = []
    for _ in range(4):
        worker = multiprocessing.Process(target=message, args=(logger,))
        worker.start()
        workers.append(worker)
    for worker in workers:
        worker.join()
    logger.complete()
    files = sorted(tmp_path.iterdir())
    assert len(files) == 2
    first_file, second_file = files
    assert second_file.name == "record.log"
    assert second_file.read_text() == f"{MSG}\n" * 4
    assert first_file.name.startswith("record.2023")  # The exact name is not deterministic
    assert first_file.read_text() == f"{MSG}\n" * 2
```
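The start method matters here because of how state reaches child processes: with "fork" the child inherits a copy of the parent's memory (including any handler configured there), while with "spawn" the child re-imports the module from scratch and sees none of the parent's runtime setup. A stdlib-only sketch of that difference (the names `STATE`/`check` are illustrative, not from the thread):

```python
import multiprocessing

STATE = {"configured": False}  # stands in for handler state set up at runtime in the parent


def report(queue):
    # Under "fork" the child inherits the parent's mutated STATE;
    # under "spawn" the module is re-imported and STATE has its default value.
    queue.put(STATE["configured"])


def check(method):
    ctx = multiprocessing.get_context(method)
    STATE["configured"] = True
    queue = ctx.Queue()
    worker = ctx.Process(target=report, args=(queue,))
    worker.start()
    inherited = queue.get()
    worker.join()
    STATE["configured"] = False
    return inherited
```

On Linux, `check("fork")` returns True while `check("spawn")` returns False, which is why runtime-only configuration done in the parent is invisible to spawned children unless it is redone in each child (or messages are routed back to the parent's handler via `enqueue=True`).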
@Delgan Thanks for your super helpful reply ❤️ 👍 . I spent entire days understanding your suggestions and trying to understand
Can I understand it as: if I add the
🤔 Is it possible to implement a new feature?
I have already noticed this problem in #916 (comment) and fixed it successfully (it passed my unit tests before with
I haven't finished the refactoring of my project, so I would like to leave this issue open until I completely fix it 😄 .
This is not required by default. You should only call
However, this is required for the unit tests you shared, as the listed expectations cannot be met if the messages have not been handled.
@Delgan You are right, it's a problem of timezone. I added
Lines 118 to 123 in 9fc929a
and I got:
The variable
I did solve the problem of rotating at the wrong time with your suggestion in #916 (comment) (this is a composite bug consisting of #894 and #899, which is why I couldn't solve it earlier by downgrading
but I currently have no idea whether the problem is completely solved because:
I will close this issue after I have confirmed everything via unit tests. 👍
Final conclusion: I guess this is a composite bug consisting of:
There are two ways to solve the problem
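One of the workarounds discussed above is to pass the rotation boundary as a timezone-aware `datetime.time` instead of a "HH:MM" string. A stdlib-only sketch (UTC+8 mirrors the `config_vars.UTC = 8` setting earlier in the thread; the `logger.add(...)` call is shown as a comment since it requires loguru):

```python
import datetime

# Timezone-aware midnight in UTC+8, mirroring config_vars.UTC = 8 above.
rotation = datetime.time(
    hour=0,
    minute=0,
    tzinfo=datetime.timezone(datetime.timedelta(hours=8)),
)

# With loguru this would be passed as:
# logger.add("record.log", rotation=rotation, enqueue=True)
```

Because the `time` object carries its own `tzinfo`, the rotation boundary is computed in the intended zone rather than in whatever the sink assumes for a bare "00:00" string.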
Hi @changchiyou. The root cause of your problem (#894) has been fixed on
I'm closing this issue accordingly, as it should be solved. If you need any further help, I remain available.
Hi again 🤣 , got a new error from my multi-process project, which was designed with:

- print and logging messages from third-party modules to loguru? #901
- loguru.logger with multiprocessing ( on Mac M1 / Windows) #912

The problem is, when I try to restart my project on 2023/07/07 with existing log files, loguru would rename record.log to record.2023-07-07.log and put all the logs into it, then generate a new record.log:

and I am not sure how to reproduce this error, because I tried 3 times after the error occurred, but loguru worked well then.
works well then.The text was updated successfully, but these errors were encountered: