
Rotating Error in multi-process (randomly happened) #916

Closed
changchiyou opened this issue Jul 7, 2023 · 25 comments
Labels
question Further information is requested

Comments

@changchiyou

Hi again 🤣, I got a new error from my multi-process project, which was designed with:


The problem is that when I try to restart my project on 2023/07/07 with these existing log files:

record.2023-07-06.log
record.log # contains logs of 2023/07/07

loguru renames record.log to record.2023-07-07.log (putting all of its logs into it) and then generates a new record.log:

record.2023-07-06.log
record.2023-07-07.log # contains logs from `record.log`
record.log 

I am not sure how to reproduce this error: I tried 3 more times after the error occurred, but loguru worked fine each time.

@changchiyou
Author

BTW, what would happen if the error occurs on 2023/07/07 with these existing log files?

record.2023-07-06.log
record.2023-07-07.log
record.log 

How would loguru rotate? Would it overwrite record.2023-07-07.log, or append the content of record.log to it?

@Delgan
Owner

Delgan commented Jul 7, 2023

Hi @changchiyou.

Are you using the custom compression function we discussed in other tickets?

Also, can you please give me more information about your logger configuration?

@changchiyou
Author

Are you using the custom compression function we discussed in other tickets?

Yes, I show the details below.

Also, can you please give me more information about your logger configuration?

Sure 👍


My logger configuration:

  1. Execute my customized loguru setup method:
    # config_vars.ROTATION = "00:00"
    # config_vars.UTC = 8
    # LOG_FORMAT = "<green>{time:YYYY-MM-DD HH:mm:ss,SSS}</green> <level>{level: <8}</level> [<cyan>{extra[mark]}</cyan>] <cyan>{file}</cyan> - <cyan>{function}</cyan>() : <level>{extra[message_no_colors]}</level>"
    # CONSOLE_FORMAT = "<green>{time:YYYY-MM-DD HH:mm:ss,SSS}</green> <level>{level: <8}</level> [<cyan>{extra[mark]}</cyan>] <cyan>{file}</cyan> - <cyan>{function}</cyan>() : <level>{message}</level>"

    hour, minute = [int(_time) for _time in str(config_vars.ROTATION).split(":")]

    _logger.add(
        f"{config_setting.args.LOG_PATH}/{config_vars.BASE_LOG_FILE_NAME}" # .../record.log
        if log_path is None
        else log_path,
        level=log_level,
        # bug report from https://github.com/Delgan/loguru/issues/894
        # rotation=config_vars.ROTATION if rotation is None else rotation,
        rotation=datetime.time(
            hour=hour,
            minute=minute,
            tzinfo=datetime.timezone(datetime.timedelta(hours=config_vars.UTC)),
        )
        if rotation is None
        else rotation,
        compression=rename if compression is None else compression,
        format=config_vars.LOG_FORMAT if log_format is None else log_format,
        enqueue=enqueue,
        colorize=False,
        filter=remove_color_syntax,
    )
    _logger.add(
        console_sink,
        level=console_level,
        colorize=True,
        format=config_vars.CONSOLE_FORMAT if console_format is None else console_format,
        enqueue=enqueue,
    )

    loguru_set_mark(mark)

    logging.basicConfig(handlers=[InterceptHandler()], level=0, force=True)
  • The rename method used in `compression=rename if compression is None else compression`:
# RENAME_TIME_FORMAT = "%Y-%m-%d"
# BASE_LOG_FILE_NAME = "record.log"

def rename(filepath) -> None:
    today = datetime.datetime.now().strftime(
        config_setting.get_variable(__file__).RENAME_TIME_FORMAT
    )
    dirname = os.path.dirname(filepath)
    basename = config_setting.get_variable(__file__).BASE_LOG_FILE_NAME
    current_log_filepath = os.path.normpath(os.path.join(dirname, basename))

    os.rename(
        filepath,
        dynamic_insert_string(str(current_log_filepath), today),
    )
  • remove_color_syntax used in `filter=remove_color_syntax`:
def remove_color_syntax(record: loguru.Record) -> bool:
    # https://zhuanlan.zhihu.com/p/70680488
    # `re.compile` is not strictly needed in Python here, but it's a good development habit
    ansi_escape = re.compile(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])")
    # remove color-syntax from message
    record["extra"]["message_no_colors"] = ansi_escape.sub("", record["message"])

    return True
  • Set extra.mark for <cyan>{extra[mark]}</cyan> in *_FORMAT:
def loguru_set_mark(
    mark: str, _logger: LoggerDelegator | loguru.Logger = logger
) -> None:
    _logger.configure(extra={"mark": mark})
  2. Pass the loguru.logger instance to the sub-processes (roughly as sketched below).
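A minimal sketch of what I mean by passing the logger to a sub-process (simplified, hypothetical names; the real workers do more than log one message):

import multiprocessing

from loguru import logger


def worker(logger_):
    # The sub-process receives the already-configured logger instance as an argument
    # and uses it directly; it never calls logger.add() itself.
    logger_.info("running in {}", multiprocessing.current_process().name)


if __name__ == "__main__":
    proc = multiprocessing.Process(target=worker, args=(logger,))
    proc.start()
    proc.join()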

@Delgan
Owner

Delgan commented Jul 7, 2023

Thanks for the details. I assume the enqueue value is True?

@changchiyou
Author

changchiyou commented Jul 7, 2023

@Delgan Updated with more detailed info.


Thanks for the details. I assume enqueue value is True?

Yes. Sorry, I forgot to provide this important information.

        set_loguru(
            enqueue=True,
            console_level=logging.DEBUG
            if config_setting.args.DEBUG is True
            else logging.INFO,
        )
def set_loguru(  # pylint: disable=too-many-arguments
    _logger: LoggerDelegator | loguru.Logger = logger,
    log_path: str | None = None,
    rotation: str | None = None,
    log_level: int = logging.DEBUG,
    console_level: int = logging.INFO,
    log_format: str | None = None,
    console_format: str | None = None,
    mark: str = "origin",
    enqueue: bool = False,
    compression=None,
    console_sink=sys.stderr,
) -> None:

The set_loguru method executes the code below inside it:

    # config_vars.ROTATION = "00:00"
    # config_vars.UTC = 8
    # LOG_FORMAT = "<green>{time:YYYY-MM-DD HH:mm:ss,SSS}</green> <level>{level: <8}</level> [<cyan>{extra[mark]}</cyan>] <cyan>{file}</cyan> - <cyan>{function}</cyan>() : <level>{extra[message_no_colors]}</level>"
    # CONSOLE_FORMAT = "<green>{time:YYYY-MM-DD HH:mm:ss,SSS}</green> <level>{level: <8}</level> [<cyan>{extra[mark]}</cyan>] <cyan>{file}</cyan> - <cyan>{function}</cyan>() : <level>{message}</level>"

    hour, minute = [int(_time) for _time in str(config_vars.ROTATION).split(":")]

    _logger.add(
        f"{config_setting.args.LOG_PATH}/{config_vars.BASE_LOG_FILE_NAME}" # .../record.log

@changchiyou
Author

@Delgan Small info update:

To verify my earlier question:

BTW, what would happen if the error occurs on 2023/07/07 with these existing log files?

record.2023-07-06.log
record.2023-07-07.log
record.log 

How would loguru rotate? Would it overwrite record.2023-07-07.log, or append the content of record.log to it?

I ran my project for another day (it started on 2023/07/07, and it is now 2023/07/08), and loguru rotated the log files from

record.2023-07-06.log
record.2023-07-07.log # contains some logs of 2023/07/07
record.log 

to

record.2023-07-06.log
record.2023-07-07.log # contains some logs of 2023/07/07
record.2023-07-08.log # contains the REST of the logs of 2023/07/07
record.log # contains logs of 2023/07/08

@Delgan
Owner

Delgan commented Jul 8, 2023

Sorry @changchiyou but the configuration is a bit too complex for me to fully reproduce it locally.

From what I tested, it seems you could be facing the same issue as #894, right?
I suggested using datetime.time() and I can see that you tried to implement this workaround. I was not very clear about which timezone should be used, and since I haven't investigated and fixed the bug yet, I unfortunately can't advise you very well.

The easiest way to find out if it's this bug would be to use Loguru v0.6.0 instead of v0.7.0 (the bug was introduced between these two releases).

@changchiyou
Author

@Delgan Thanks for your reply. I would like to try your suggestion:

The easiest way to find out if it's this bug would be to use Loguru v0.6.0 instead of v0.7.0 (the bug was introduced between these two releases).

but as I mentioned above,

I am not sure how to reproduce this error: I tried 3 more times after the error occurred, but loguru worked fine each time.

I don't know whether this solution is only temporarily effective, so I want to write a unit test for it, based on your test script https://github.com/Delgan/loguru/blob/master/tests/test_filesink_rotation.py:

from loguru import logger
from facepcs.utils import set_loguru, get_relative_path
from facepcs.config import config_setting

import logging
import multiprocessing

from ..conftest import check_dir

MSG = "test"

def message(logger):
    from facepcs.utils import logger as _logger
    _logger.update_logger(logger)

    _logger.debug(f"{MSG}")

def test_multi_process_renaming(freeze_time, tmp_path):
    config_setting.init_config_dir(get_relative_path("../../python-package/facepcs/storage/config"))
    set_loguru(enqueue=True, console_level=logging.DEBUG,
               log_path = tmp_path / config_setting.get_variable("logging_enhance.py").BASE_LOG_FILE_NAME)

    with freeze_time("2020-01-01 23:59:59") as frozen:

        workers = []
        for _ in range(2):
            worker = multiprocessing.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)
            frozen.tick()

        for worker in workers:
            worker.join()

    check_dir(
            tmp_path,
            files=[
                ("file.2020-01-01.log", ""),
                ("file.log", "a\n"),
            ],
    )

I execute check_dir only to inspect the content of tmp_path:

    check_dir(
            tmp_path,
            files=[
                ("file.2020-01-01.log", ""),
                ("file.log", "a\n"),
            ],
    )

After executing pytest, I got the error message below:

Full Error Message
=================================================================================================== FAILURES ===================================================================================================
_________________________________________________________________________________________ test_multi_process_renaming __________________________________________________________________________________________

freeze_time = <function freeze_time.<locals>.freeze_time at 0x17ecbd2d0>
tmp_path = PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-72/test_multi_process_renaming0')

    def test_multi_process_renaming(freeze_time, tmp_path):
        config_setting.init_config_dir(get_relative_path("../../python-package/facepcs/storage/config"))
        set_loguru(enqueue=True, console_level=logging.DEBUG,
                   log_path = tmp_path / config_setting.get_variable("logging_enhance.py").BASE_LOG_FILE_NAME)
    
        with freeze_time("2020-01-01 23:59:59") as frozen:
    
            workers = []
            for _ in range(2):
                worker = multiprocessing.Process(target=message, args=(logger,))
                worker.start()
                workers.append(worker)
                frozen.tick()
    
            for worker in workers:
                worker.join()
    
>       check_dir(
                tmp_path,
                files=[
                    ("file.2020-01-01.log", ""),
                    ("file.log", "a\n"),
                ],
        )

tests/utils/test_logging_enhance.py:35: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

dir = PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-72/test_multi_process_renaming0')

    def check_dir(dir, *, files=None, size=None):
        actual_files = set(dir.iterdir())
        seen = set()
        if size is not None:
            assert len(actual_files) == size
        if files is not None:
>           assert len(actual_files) == len(files)
E           AssertionError: assert 1 == 2
E            +  where 1 = len({PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-72/test_multi_process_renaming0/record.log')})
E            +  and   2 = len([('file.2020-01-01.log', ''), ('file.log', 'a\n')])

tests/conftest.py:20: AssertionError
--------------------------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------------------------
2023-07-09 21:48:27,360 DEBUG    [origin] test_logging_enhance.py - message() : test
2023-07-09 21:48:27,360 DEBUG    [origin] test_logging_enhance.py - message() : test
--- Logging error in Loguru Handler #1 ---
Record was: {'elapsed': datetime.timedelta(seconds=3, microseconds=169863), 'exception': None, 'extra': {'mark': 'origin', 'message_no_colors': 'test'}, 'file': (name='test_logging_enhance.py', path='/Users/christopherchang/Code/FacePCS/tests/utils/test_logging_enhance.py'), 'function': 'message', 'level': (name='DEBUG', no=10, icon='🐞'), 'line': 16, 'message': 'test', 'module': 'test_logging_enhance', 'name': 'tests.utils.test_logging_enhance', 'process': (id=91783, name='Process-2'), 'thread': (id=8345960768, name='MainThread'), 'time': datetime(2023, 7, 9, 21, 48, 27, 360026, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), 'CST'))}
Traceback (most recent call last):
  File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/site-packages/loguru/_handler.py", line 306, in _queued_writer
    self._sink.write(message)
  File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/site-packages/loguru/_file_sink.py", line 206, in write
    if self._rotation_function is not None and self._rotation_function(message, self._file):
  File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/site-packages/loguru/_file_sink.py", line 119, in __call__
    creation_time = get_ctime(filepath)
KeyError: '/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-72/test_multi_process_renaming0/record.log'
--- End of logging error ---
--- Logging error in Loguru Handler #1 ---
Record was: {'elapsed': datetime.timedelta(seconds=3, microseconds=169511), 'exception': None, 'extra': {'mark': 'origin', 'message_no_colors': 'test'}, 'file': (name='test_logging_enhance.py', path='/Users/christopherchang/Code/FacePCS/tests/utils/test_logging_enhance.py'), 'function': 'message', 'level': (name='DEBUG', no=10, icon='🐞'), 'line': 16, 'message': 'test', 'module': 'test_logging_enhance', 'name': 'tests.utils.test_logging_enhance', 'process': (id=91782, name='Process-1'), 'thread': (id=8345960768, name='MainThread'), 'time': datetime(2023, 7, 9, 21, 48, 27, 360068, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), 'CST'))}
Traceback (most recent call last):
  File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/site-packages/loguru/_handler.py", line 306, in _queued_writer
    self._sink.write(message)
  File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/site-packages/loguru/_file_sink.py", line 206, in write
    if self._rotation_function is not None and self._rotation_function(message, self._file):
  File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/site-packages/loguru/_file_sink.py", line 119, in __call__
    creation_time = get_ctime(filepath)
KeyError: '/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-72/test_multi_process_renaming0/record.log'
--- End of logging error ---
=========================================================================================== short test summary info ============================================================================================
FAILED tests/utils/test_logging_enhance.py::test_multi_process_renaming - AssertionError: assert 1 == 2
======================================================================================== 1 failed, 38 passed in 14.31s =========================================================================================
  • pytest error log:

    >           assert len(actual_files) == len(files)
    E           AssertionError: assert 1 == 2
    E            +  where 1 = len({PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-72/test_multi_process_renaming0/record.log')})
    E            +  and   2 = len([('file.2020-01-01.log', ''), ('file.log', 'a\n')])
    

    I know that it's impossible to pass this test because I haven't customized the assertion content yet. But I can still get information about tmp_path via the assertions in check_dir: it should contain both "record.log" and "record.2020-01-01.log", but it doesn't. This may be caused by the Logging error in Loguru Handler #1.

  • Captured stderr call, Logging error in Loguru Handler #1:

      File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/site-packages/loguru/_file_sink.py", line 119, in __call__
        creation_time = get_ctime(filepath)
    KeyError: '/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-72/test_multi_process_renaming0/record.log'
    --- End of logging error ---
    

Any advice for this unit test? BTW, I have tried loguru 0.6.0 and 0.7.0; both produce the same error when executing pytest.

@changchiyou
Author

changchiyou commented Jul 9, 2023

Update

After reinstalling loguru 0.6.0 instead of 0.7.0 (#916 (comment)) and setting rotation="00:00" (#899 (comment)), the problem still exists:

I ran my project across midnight again; here are the logs:

image

Although today is 2023/07/10, loguru still renamed "record.log" to "record.2023-07-10.log" when rotating. Here is the content of the log files:

  • "record.log"(contains the logs of 2023/07/10):

image

  • "record.2023-07-10"(contains the logs of 2023/07/09):

image

I will try to finish the unit test in #916 (comment) so that I can test this problem more quickly locally.


From what I tested, it seems you could be facing the same issue as #894, right?

BTW, @Delgan, it seems not: I can still reproduce the problem from #916 (comment) with loguru==0.6.0. 😢

@Delgan
Owner

Delgan commented Jul 9, 2023

Hi @changchiyou.

I created two unit tests based on the one you provided. They can be put in the test_filesink_rotation.py file if you want to run them.

def test_bug_916_v1(freeze_time, tmp_path):
    import multiprocessing

    context = multiprocessing.get_context("fork")

    def message(logger_):
        logger_.debug("Message")

    with freeze_time("2020-01-01 12:00:00") as frozen:
        logger.add(tmp_path / "record.log", context=context, format="{message}", enqueue=True, rotation="00:00")

        workers = []

        for _ in range(2):
            worker = context.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)

        for worker in workers:
            worker.join()

        # Rotation is expected to create a new file after this point.
        frozen.tick(delta=datetime.timedelta(hours=24))

        for _ in range(2):
            worker = context.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)

        for worker in workers:
            worker.join()

        # No rotation should occur here.
        frozen.tick(delta=datetime.timedelta(hours=1))

        for _ in range(2):
            worker = context.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)

        for worker in workers:
            worker.join()

    check_dir(
            tmp_path,
            files=[
                ("record.2020-01-01_12-00-00_000000.log", "Message\n" * 2),
                ("record.log", "Message\n" * 4),
            ],
    )


def test_bug_916_v2(freeze_time, tmp_path):
    import multiprocessing

    context = multiprocessing.get_context("fork")

    def message(logger_):
        logger_.debug("Message")

    with freeze_time("2020-01-01 12:00:00") as frozen:
        logger.add(tmp_path / "record.log", context=context, format="{message}", enqueue=True, rotation="00:00")

        workers = []

        for _ in range(2):
            worker = context.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)

        for worker in workers:
            worker.join()

        # Rotation is expected to create a new file after this point.
        frozen.tick(delta=datetime.timedelta(hours=24))

        logger.remove()
        logger.add(tmp_path / "record.log", context=context, format="{message}", enqueue=True, rotation="00:00")

        for _ in range(2):
            worker = context.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)

        for worker in workers:
            worker.join()

        # No rotation should occur here.
        frozen.tick(delta=datetime.timedelta(hours=1))
        logger.remove()
        logger.add(tmp_path / "record.log", context=context, format="{message}", enqueue=True, rotation="00:00")


        for _ in range(2):
            worker = context.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)

        for worker in workers:
            worker.join()

    check_dir(
            tmp_path,
            files=[
                ("record.2020-01-01_12-00-00_000000.log", "Message\n" * 2),
                ("record.log", "Message\n" * 4),
            ],
    )

However, both tests execute successfully.

I think I'm missing some part of your set_loguru() configuration. Can you please provide additional details to extend the tests while keeping them as minimal as possible?

@changchiyou
Author

changchiyou commented Jul 10, 2023

@Delgan Thanks for your reply. I corrected my unit test from #916 (comment) to the current version based on your unit-test showcase 👍.

BTW, I use spawn instead of fork.
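To make that explicit, here is a small sketch of what "spawn" means in my case (spawn has been the default multiprocessing start method on macOS since Python 3.8, and is always used on Windows); `message` and `logger` refer to the names defined in the test script below:

import multiprocessing

# Explicitly request the "spawn" start method, in contrast to the "fork" context
# used in the tests above.
context = multiprocessing.get_context("spawn")
worker = context.Process(target=message, args=(logger,))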


I think I'm missing some part of you set_loguru() configuration. Can you please provide additional details to extend the tests while keeping them as minimal as possible?

Sure. I created 4 unit tests. Two of them use loguru via the local method set_loguru_pure(), which is my setup pre-processed by replacing the configuration variables with constants. The remaining two unit tests call set_loguru() directly.

Note: from facepcs.utils import logger as _logger is designed according to #912 (comment).

My full `pytest` script (`loguru==0.7.0`)
from loguru import logger
import datetime
import logging
import os
import re
import sys

from facepcs.utils import set_loguru, get_relative_path
from facepcs.config import config_setting

from ..conftest import check_dir

MSG = "test"

def rename(filepath) -> None:
  today = datetime.datetime.now().strftime(r"%Y-%m-%d")
  dirname = os.path.dirname(filepath)
  basename = "record.log"
  new_filepath = os.path.normpath(os.path.join(dirname, basename))

  os.rename(
      filepath,
      dynamic_insert_string(str(new_filepath), today),
  )

def dynamic_insert_string(base: str, insert: str) -> str:
  parts = base.split(".")

  if len(parts) > 1:
      base_filename = ".".join(parts[:-1])
      extension = parts[-1]
      result = f"{base_filename}.{insert}.{extension}"
  else:
      result = f"{base}.{insert}"

  return result

def remove_color_syntax(record) -> bool:
  ansi_escape = re.compile(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])")
  record["extra"]["message_no_colors"] = ansi_escape.sub("", record["message"])

  return True

class InterceptHandler(logging.Handler):
  def emit(
      self,
      record: logging.LogRecord,
      _logger = logger,
  ) -> None:
      try:
          level = _logger.level(record.levelname).name
      except ValueError:
          level = record.levelno

      frame, depth = sys._getframe(6), 6
      while frame is not None and frame.f_code.co_filename == logging.__file__:
          frame = frame.f_back
          depth += 1

      _logger.opt(depth=depth, exception=record.exc_info).log(
          level, record.getMessage()
      )

def set_loguru_pure(tmp_path):
  import logging
  from loguru import logger
  import sys
  import datetime

  logger.remove()

  logger.add(
      f"{str(tmp_path)}/record.log",
      level=logging.DEBUG,
      rotation=datetime.time(
          hour=0,
          minute=0,
          tzinfo=datetime.timezone(datetime.timedelta(hours=8)),
      ),
      compression=rename,
      format="{message}",
      enqueue=True,
      colorize=False,
      filter=remove_color_syntax,
  )
  logger.add(
      sys.stderr,
      level=logging.DEBUG,
      colorize=True,
      format="{message}",
      enqueue=True,
  )

  logger.configure(extra={"mark": "origin"})

  logging.basicConfig(handlers=[InterceptHandler()], level=0, force=True)

def test_single_process_pure_renaming(freeze_time, tmp_path):
  with freeze_time("2020-01-01 12:00:00") as frozen:
      set_loguru_pure(tmp_path)

      for _ in range(2):
          logger.debug(f"{MSG}")

      check_dir(
          tmp_path,
          files=[
              ("record.log", f"{MSG}\n" * 2),
          ],
      )

      frozen.tick(delta=datetime.timedelta(hours=24))

      for _ in range(4):
          logger.debug(f"{MSG}")

      check_dir(
          tmp_path,
          files=[
              ("record.2020-01-01.log", f"{MSG}\n" * 2),
              ("record.log", f"{MSG}\n" * 4),
          ],
      )

def test_multi_process_pure_renaming(freeze_time, tmp_path):
  import multiprocessing

  with freeze_time("2020-01-01 12:00:00") as frozen:
      set_loguru_pure(tmp_path)

      workers = []
      for _ in range(2):
          worker = multiprocessing.Process(target=message, args=(logger,))
          worker.start()
          workers.append(worker)

      for worker in workers:
          worker.join()

      check_dir(
          tmp_path,
          files=[
              ("record.log", f"{MSG}\n" * 2),
          ],
      )

      frozen.tick(delta=datetime.timedelta(hours=24))

      workers = []
      for _ in range(4):
          worker = multiprocessing.Process(target=message, args=(logger,))
          worker.start()
          workers.append(worker)

      for worker in workers:
          worker.join()

      check_dir(
          tmp_path,
          files=[
              ("record.2020-01-01.log", f"{MSG}\n" * 2),
              ("record.log", f"{MSG}\n" * 4),
          ],
      )

def test_single_process_renaming(freeze_time, tmp_path):
  with freeze_time("2020-01-01 12:00:00") as frozen:
      config_setting.init_config_dir(get_relative_path("../../python-package/facepcs/storage/config"))
      config_setting.init_args({"log_path": str(tmp_path)})
      set_loguru(enqueue=True, console_level=logging.DEBUG, console_format=r"{message}", log_format=r"{message}")

      for _ in range(2):
          logger.debug(f"{MSG}")

      check_dir(
          tmp_path,
          files=[
              ("record.log", f"{MSG}\n" * 2),
          ],
      )

      frozen.tick(delta=datetime.timedelta(hours=24))

      for _ in range(4):
          logger.debug(f"{MSG}")

      check_dir(
          tmp_path,
          files=[
              ("record.2020-01-01.log", f"{MSG}\n" * 2),
              ("record.log", f"{MSG}\n" * 4),
          ],
      )

def message(logger):
      from facepcs.utils import logger as _logger
      _logger.update_logger(logger)

      _logger.debug(f"{MSG}")

def test_multi_process_renaming(freeze_time, tmp_path):
  import multiprocessing

  with freeze_time("2020-01-01 12:00:00") as frozen:
      config_setting.init_config_dir(get_relative_path("../../python-package/facepcs/storage/config"))
      config_setting.init_args({"log_path": str(tmp_path)})
      set_loguru(enqueue=True, console_level=logging.DEBUG, console_format=r"{message}", log_format=r"{message}")

      workers = []
      for _ in range(2):
          worker = multiprocessing.Process(target=message, args=(logger,))
          worker.start()
          workers.append(worker)

      for worker in workers:
          worker.join()

      check_dir(
          tmp_path,
          files=[
              ("record.log", f"{MSG}\n" * 2),
          ],
      )

      frozen.tick(delta=datetime.timedelta(hours=24))

      workers = []
      for _ in range(4):
          worker = multiprocessing.Process(target=message, args=(logger,))
          worker.start()
          workers.append(worker)

      for worker in workers:
          worker.join()

      check_dir(
          tmp_path,
          files=[
              ("record.2020-01-01.log", f"{MSG}\n" * 2),
              ("record.log", f"{MSG}\n" * 4),
          ],
      )
FAILURES
============================================================================================= test session starts ==============================================================================================
platform darwin -- Python 3.10.11, pytest-7.3.1, pluggy-1.0.0 -- /Users/christopherchang/miniconda3/envs/facepcs/bin/python
cachedir: .pytest_cache
rootdir: /Users/christopherchang/Code/FacePCS
collected 42 items / 38 deselected / 4 selected                                                                                                                                                                
run-last-failure: rerun previous 4 failures (skipped 2 files)

tests/utils/test_logging_enhance.py::test_single_process_pure_renaming FAILED                                                                                                                            [ 25%]
tests/utils/test_logging_enhance.py::test_multi_process_pure_renaming FAILED                                                                                                                             [ 50%]
tests/utils/test_logging_enhance.py::test_single_process_renaming FAILED                                                                                                                                 [ 75%]
tests/utils/test_logging_enhance.py::test_multi_process_renaming FAILED                                                                                                                                  [100%]

=================================================================================================== FAILURES ===================================================================================================
______________________________________________________________________________________ test_single_process_pure_renaming _______________________________________________________________________________________

freeze_time = <function freeze_time.<locals>.freeze_time at 0x17cdb8040>
tmp_path = PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-100/test_single_process_pure_renam0')

  def test_single_process_pure_renaming(freeze_time, tmp_path):
      with freeze_time("2020-01-01 12:00:00") as frozen:
          set_loguru_pure(tmp_path)
  
          for _ in range(2):
              logger.debug(f"{MSG}")
  
          check_dir(
              tmp_path,
              files=[
                  ("record.log", f"{MSG}\n" * 2),
              ],
          )
  
          frozen.tick(delta=datetime.timedelta(hours=24))
  
          for _ in range(4):
              logger.debug(f"{MSG}")
  
>           check_dir(
              tmp_path,
              files=[
                  ("record.2020-01-01.log", f"{MSG}\n" * 2),
                  ("record.log", f"{MSG}\n" * 4),
              ],
          )

tests/utils/test_logging_enhance.py:116: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

dir = PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-100/test_single_process_pure_renam0')

  def check_dir(dir, *, files=None, size=None):
      actual_files = set(dir.iterdir())
      seen = set()
      if size is not None:
          assert len(actual_files) == size
      if files is not None:
>           assert len(actual_files) == len(files)
E           AssertionError: assert 1 == 2
E            +  where 1 = len({PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-100/test_single_process_pure_renam0/record.log')})
E            +  and   2 = len([('record.2020-01-01.log', 'test\ntest\n'), ('record.log', 'test\ntest\ntest\ntest\n')])

tests/conftest.py:20: AssertionError
--------------------------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------------------------
test
test
test
test
test
test
_______________________________________________________________________________________ test_multi_process_pure_renaming _______________________________________________________________________________________

freeze_time = <function freeze_time.<locals>.freeze_time at 0x16f07f010>
tmp_path = PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-100/test_multi_process_pure_renami0')

  def test_multi_process_pure_renaming(freeze_time, tmp_path):
      import multiprocessing
  
      with freeze_time("2020-01-01 12:00:00") as frozen:
          set_loguru_pure(tmp_path)
  
          workers = []
          for _ in range(2):
              worker = multiprocessing.Process(target=message, args=(logger,))
              worker.start()
              workers.append(worker)
  
          for worker in workers:
              worker.join()
  
>           check_dir(
              tmp_path,
              files=[
                  ("record.log", f"{MSG}\n" * 2),
              ],
          )

tests/utils/test_logging_enhance.py:139: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

dir = PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-100/test_multi_process_pure_renami0')

  def check_dir(dir, *, files=None, size=None):
      actual_files = set(dir.iterdir())
      seen = set()
      if size is not None:
          assert len(actual_files) == size
      if files is not None:
>           assert len(actual_files) == len(files)
E           AssertionError: assert 2 == 1
E            +  where 2 = len({PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-100/test_multi_process_pure_renami0/record.2020-01-01.log'), PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-100/test_multi_process_pure_renami0/record.log')})
E            +  and   1 = len([('record.log', 'test\ntest\n')])

tests/conftest.py:20: AssertionError
--------------------------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------------------------
test
test
_________________________________________________________________________________________ test_single_process_renaming _________________________________________________________________________________________

freeze_time = <function freeze_time.<locals>.freeze_time at 0x16f07fe20>
tmp_path = PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-100/test_single_process_renaming0')

  def test_single_process_renaming(freeze_time, tmp_path):
      with freeze_time("2020-01-01 12:00:00") as frozen:
          config_setting.init_config_dir(get_relative_path("../../python-package/facepcs/storage/config"))
          config_setting.init_args({"log_path": str(tmp_path)})
          set_loguru(enqueue=True, console_level=logging.DEBUG, console_format=r"{message}", log_format=r"{message}")
  
          for _ in range(2):
              logger.debug(f"{MSG}")
  
          check_dir(
              tmp_path,
              files=[
                  ("record.log", f"{MSG}\n" * 2),
              ],
          )
  
          frozen.tick(delta=datetime.timedelta(hours=24))
  
          for _ in range(4):
              logger.debug(f"{MSG}")
  
>           check_dir(
              tmp_path,
              files=[
                  ("record.2020-01-01.log", f"{MSG}\n" * 2),
                  ("record.log", f"{MSG}\n" * 4),
              ],
          )

tests/utils/test_logging_enhance.py:186: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

dir = PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-100/test_single_process_renaming0')

  def check_dir(dir, *, files=None, size=None):
      actual_files = set(dir.iterdir())
      seen = set()
      if size is not None:
          assert len(actual_files) == size
      if files is not None:
>           assert len(actual_files) == len(files)
E           AssertionError: assert 1 == 2
E            +  where 1 = len({PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-100/test_single_process_renaming0/record.log')})
E            +  and   2 = len([('record.2020-01-01.log', 'test\ntest\n'), ('record.log', 'test\ntest\ntest\ntest\n')])

tests/conftest.py:20: AssertionError
--------------------------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------------------------
test
test
test
test
test
test
_________________________________________________________________________________________ test_multi_process_renaming __________________________________________________________________________________________

freeze_time = <function freeze_time.<locals>.freeze_time at 0x17cde0310>
tmp_path = PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-100/test_multi_process_renaming0')

  def test_multi_process_renaming(freeze_time, tmp_path):
      import multiprocessing
  
      with freeze_time("2020-01-01 12:00:00") as frozen:
          config_setting.init_config_dir(get_relative_path("../../python-package/facepcs/storage/config"))
          config_setting.init_args({"log_path": str(tmp_path)})
          set_loguru(enqueue=True, console_level=logging.DEBUG, console_format=r"{message}", log_format=r"{message}")
  
          workers = []
          for _ in range(2):
              worker = multiprocessing.Process(target=message, args=(logger,))
              worker.start()
              workers.append(worker)
  
          for worker in workers:
              worker.join()
  
>           check_dir(
              tmp_path,
              files=[
                  ("record.log", f"{MSG}\n" * 2),
              ],
          )

tests/utils/test_logging_enhance.py:217: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

dir = PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-100/test_multi_process_renaming0')

  def check_dir(dir, *, files=None, size=None):
      actual_files = set(dir.iterdir())
      seen = set()
      if size is not None:
          assert len(actual_files) == size
      if files is not None:
>           assert len(actual_files) == len(files)
E           AssertionError: assert 2 == 1
E            +  where 2 = len({PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-100/test_multi_process_renaming0/record.2020-01-01.log'), PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-100/test_multi_process_renaming0/record.log')})
E            +  and   1 = len([('record.log', 'test\ntest\n')])

tests/conftest.py:20: AssertionError
--------------------------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------------------------
test
test
=========================================================================================== short test summary info ============================================================================================
FAILED tests/utils/test_logging_enhance.py::test_single_process_pure_renaming - AssertionError: assert 1 == 2
FAILED tests/utils/test_logging_enhance.py::test_multi_process_pure_renaming - AssertionError: assert 2 == 1
FAILED tests/utils/test_logging_enhance.py::test_single_process_renaming - AssertionError: assert 1 == 2
FAILED tests/utils/test_logging_enhance.py::test_multi_process_renaming - AssertionError: assert 2 == 1
======================================================================================= 4 failed, 38 deselected in 8.54s =======================================================================================

@changchiyou
Author

changchiyou commented Jul 10, 2023

@Delgan I believe I found a more precise error when trying to use the built-in solution from #899 (comment).

(The tests below were executed with loguru==0.7.0.)

I commented out

# compression=rename if compression is None else compression,

in my set_loguru method and rewrote your unit test from

@pytest.mark.parametrize("offset", [-3600, 0, 3600])
def test_daily_rotation_with_different_timezone(freeze_time, tmp_path, offset):
with freeze_time("2018-10-27 00:00:00", ("A", offset)) as frozen:
i = logger.add(
tmp_path / "test_{time:YYYY-MM-DD}.log",
format="{message}",
rotation="daily",
)
logger.debug("First")
frozen.tick(delta=datetime.timedelta(hours=23, minutes=30))
logger.debug("Second")
frozen.tick(delta=datetime.timedelta(hours=1))
logger.debug("Third")
frozen.tick(delta=datetime.timedelta(hours=24))
logger.debug("Fourth")
logger.remove(i)
check_dir(
tmp_path,
files=[
("test_2018-10-27.log", "First\nSecond\n"),
("test_2018-10-28.log", "Third\n"),
("test_2018-10-29.log", "Fourth\n"),
],
)

to:

@pytest.mark.parametrize("enqueue", [True, False])
@pytest.mark.parametrize("rotation", ["daily", "00:00"])
def test_daily_rotation_with_different_rotation(freeze_time, tmp_path, enqueue, rotation):
    with freeze_time("2018-10-27 00:00:00") as frozen:
        config_setting.init_config_dir(get_relative_path("../../python-package/facepcs/storage/config"))
        config_setting.init_args({"log_path": str(tmp_path)})
        set_loguru(enqueue=enqueue, console_level=logging.DEBUG, console_format=r"{message}", log_format=r"{message}", rotation=rotation, log_path= tmp_path/"record.{time:YYYY-MM-DD}.log")

        logger.debug("First")
        frozen.tick(delta=datetime.timedelta(hours=23, minutes=30))
        logger.debug("Second")
        frozen.tick(delta=datetime.timedelta(hours=1))
        logger.debug("Third")
        frozen.tick(delta=datetime.timedelta(hours=24))
        logger.debug("Fourth")

    logger.remove()

    check_dir(
        tmp_path,
        files=[
            ("record.2018-10-27.log", "First\nSecond\n"),
            ("record.2018-10-28.log", "Third\n"),
            ("record.2018-10-29.log", "Fourth\n"),
        ],
    )

With enqueue=True, the unit test failed:

tests/utils/test_logging_enhance.py::test_daily_rotation_with_different_rotation[daily-True] FAILED
tests/utils/test_logging_enhance.py::test_daily_rotation_with_different_rotation[daily-False] PASSED
tests/utils/test_logging_enhance.py::test_daily_rotation_with_different_rotation[00:00-True] FAILED
tests/utils/test_logging_enhance.py::test_daily_rotation_with_different_rotation[00:00-False] PASSED
>               assert filepath in actual_files
E               AssertionError: assert PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-153/test_daily_rotation_with_diffe0/record.2018-10-28.log') in {PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-153/test_daily_rotation_with_diffe0/record.2018-10-27.log'), PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-153/test_daily_rotation_with_diffe0/record.2023-07-10.2023-07-10_16-07-37_851761.log'), PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-153/test_daily_rotation_with_diffe0/record.2023-07-10.log')}

FAILED tests/utils/test_logging_enhance.py::test_daily_rotation_with_different_rotation[daily-True] - AssertionError: assert PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-153/test_daily_rotation_with_diffe0/record.2018-10-28.log') in {PosixPath('/pr...

---

>           assert len(actual_files) == len(files)
E           AssertionError: assert 1 == 3
E            +  where 1 = len({PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-153/test_daily_rotation_with_diffe2/record.2018-10-27.log')})
E            +  and   3 = len([('record.2018-10-27.log', 'First\nSecond\n'), ('record.2018-10-28.log', 'Third\n'), ('record.2018-10-29.log', 'Fourth\n')])

FAILED tests/utils/test_logging_enhance.py::test_daily_rotation_with_different_rotation[00:00-True] - AssertionError: assert 1 == 3

I haven't found any documentation stating that enqueue=True is invalid for a single process. Or is there actually some, and I just haven't noticed it?

@changchiyou
Author

I suspect that using the multiprocessing module with the spawn context behaves strangely when combined with freezegun, because after I made some changes to my project it works locally on my Mac but fails under pytest.


After reviewing the previous issue #899, and according to my comment #899 (comment), I found that the rename method can be enhanced from

def rename(filepath):
    today = datetime.datetime.now().strftime("%Y-%m-%d")
    os.rename(filepath, f"record.{today}.log")

logger.add("record.log", rotation="00:00", compression=rename)

to

def rename(filepath):
    time_string_regex = r"\d{4}-\d{2}-\d{2}_\d{2}-\d{2}-\d{2}_\d{6}"
    time_string_format = r"%Y-%m-%d_%H-%M-%S_%f"
    target_format = r"%Y-%m-%d"

    date_pattern = re.compile(time_string_regex)
    date_match = re.search(date_pattern, str(filepath))

    if date_match:
        date_string = date_match.group()
        date = datetime.datetime.strptime(date_string, time_string_format)
        formatted_date = date.strftime(target_format)

        new_filepath = re.sub(date_pattern, formatted_date, str(filepath))
        os.rename(filepath, new_filepath)
    else:
        raise ValueError(
            f"can't find any time string in '{filepath}' matched '{time_string_format}'"
        )

since the input argument filepath has already been renamed by loguru:

If I configure loguru.logger via logger.add("record.log", ...), the filepath passed to the compression function looks like

/[abs_path]/record.2023-07-06_08-56-33_328410.log

instead of (what I thought before)

/[abs_path]/record..log

so I don't actually need to rename the log file with a specific time string myself.


I used the native settings window on my Mac M1 to adjust the date and time

image

and after I switched from 2023/07/10 to 2023/07/12 day by day and executed my project manually, it worked (of course, I checked the content of the log files),

image

but the unit test failed:

@pytest.mark.parametrize("enqueue", [True])
@pytest.mark.parametrize("rotation", ["daily", "00:00"])
def test_multi_process_renaming(freeze_time, tmp_path, enqueue, rotation):
    import multiprocessing

    with freeze_time("2020-01-01 12:00:00") as frozen:
        config_setting.init_config_dir(get_relative_path("../../python-package/facepcs/storage/config"))
        config_setting.init_args({"log_path": str(tmp_path)})
        set_loguru(enqueue=enqueue, console_level=logging.DEBUG, console_format=r"{message}", log_format=r"{message}", rotation=rotation)


        workers = []
        for _ in range(2):
            worker = multiprocessing.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)

        for worker in workers:
            worker.join()

        check_dir(
            tmp_path,
            files=[
                ("record.log", f"{MSG}\n" * 2),
            ],
        )

        frozen.tick(delta=datetime.timedelta(hours=24))

        workers = []
        for _ in range(4):
            worker = multiprocessing.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)

        for worker in workers:
            worker.join()

        check_dir(
            tmp_path,
            files=[
                ("record.2020-01-01.log", f"{MSG}\n" * 2),
                ("record.log", f"{MSG}\n" * 4),
            ],
        )
>           assert len(actual_files) == len(files)
E           AssertionError: assert 2 == 1
E            +  where 2 = len({PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-162/test_multi_process_renaming_da0/record.2020-01-01.log'), PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-162/test_multi_process_renaming_da0/record.log')})
E            +  and   1 = len([('record.log', 'test\ntest\n')])

FAILED tests/utils/test_logging_enhance.py::test_multi_process_renaming[daily-True] - AssertionError: assert 2 == 1

---
>           assert len(actual_files) == len(files)
E           AssertionError: assert 2 == 1
E            +  where 2 = len({PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-162/test_multi_process_renaming_000/record.2020-01-01.log'), PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-162/test_multi_process_renaming_000/record.log')})
E            +  and   1 = len([('record.log', 'test\ntest\n')])

FAILED tests/utils/test_logging_enhance.py::test_multi_process_renaming[00:00-True] - AssertionError: assert 2 == 1

@changchiyou
Author

Update

After leaving the comments below, I thought I had finally solved this issue, but it seems not.

I ran my project for another day, and it worked perfectly with rotation at midnight:

  1. record.2023-07-10.log:

    image
  2. record.log:

    image

But after I RESTARTED my project in the morning, it rotated again and now I have:

  1. record.2023-07-10.log:

    image
  2. record.2023-07-11.log:

    image
  3. record.log:

    image

My hope is that no matter when the project is restarted during the day (except at midnight), new logs are simply appended to the end of "record.log"; I don't want it to be renamed except when the date changes.

I'm guessing my expectations didn't match up with my loguru setup? 😢
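If it helps to express that expectation as code: a custom rotation callable could rotate only when the calendar date of the last write differs from the record being written. This is just a sketch of the idea, not what my set_loguru currently does:

import datetime
import os


def rotate_on_date_change(message, file):
    # loguru calls a rotation callable with the formatted message and the open file
    # object, and rotates only when the callable returns True.
    record_date = message.record["time"].date()
    try:
        last_write_date = datetime.datetime.fromtimestamp(os.path.getmtime(file.name)).date()
    except OSError:
        return False
    return record_date != last_write_date


# logger.add("record.log", rotation=rotate_on_date_change, enqueue=True)

With this, a restart in the middle of the day keeps appending to "record.log", while the first message written after a date change still triggers the rotation.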

@Delgan
Owner

Delgan commented Jul 10, 2023

My hope is that no matter when the project is restarted during the day (except at midnight), new logs are simply appended to the end of "record.log"; I don't want it to be renamed except when the date changes.

Here is how compression works:

  • if rotation is not None, the compression argument (i.e. your rename() function) is executed only when rotation occurs (i.e. at midnight in your case)
  • if rotation is None, the compression argument is executed only when the application is stopped (or when the handler is removed)

Is there any possibility that logger.add() is called multiple times in your application because of multiprocessing? The enqueue=True argument should work fine regardless of the number of processes. However, if logger.add() is not guarded by if __name__ == "__main__", then unexpected behavior can occur because of duplicated handlers.
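For illustration, a minimal guarded setup could look like the sketch below (a hypothetical module, not your actual code); the point is that only the main process ever calls logger.add(), and the children just use the logger object they receive:

import multiprocessing
import sys

from loguru import logger


def worker(logger_):
    # The child process only uses the logger it received; it never calls logger.add()
    # itself, so there is exactly one file handler writing to "record.log".
    logger_.info("Hello from {}", multiprocessing.current_process().name)


def main():
    logger.remove()
    logger.add("record.log", rotation="00:00", enqueue=True)
    logger.add(sys.stderr, enqueue=True)

    workers = [multiprocessing.Process(target=worker, args=(logger,)) for _ in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()


if __name__ == "__main__":  # guard: a re-import in a spawned child won't add duplicate handlers
    main()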

@changchiyou
Author

changchiyou commented Jul 10, 2023

Is there any possibility that logger.add() is called multiple times in your application because of multiprocessing? The enqueue=True argument should work fine regardless of the number of processes.

I think not. I only execute set_loguru() once for each CLI action:

  • realtime.py
    def __init__(self, args: Namespace):
        args_dict = vars(args)

        self.config_path: str = args_dict["config_path"]
        self.args_dict: dict = args_dict

        config_setting.init_config_dir(args_dict["config_path"])
        config_setting.init_args(args=args_dict)

        set_loguru(  # <- here
            enqueue=True,
            console_level=logging.DEBUG
            if config_setting.args.DEBUG is True
            else logging.INFO,
        )

    def run(self):
        workers = []
        for section_name, mark in config_setting.get_stream_sections():
            worker = multiprocessing.Process(
                target=realtime,
                args=(mark, section_name, self.config_path, self.args_dict, logger),
                daemon=True,
            )
            worker.start()
            workers.append(worker)

        for worker in workers:
            worker.join()

(export.py has been deprecated for days, and even before that it was never executed in the same process as realtime.py.)

image image

However, if the logger.add() is not guarded by if __name__ == "__main__" then unexpected behavior can occur because of duplicated handlers.

My project doesn't execute the script directly. I set

entry_points={"console_scripts": ["facepcs-cli=facepcs.commands.facepcs_cli:main"]},

in setup.py, and execute facepcs-cli args ... to run my project. Would this cause unexpected behavior?

@changchiyou
Author

Is there any possibility that logger.add() is called multiple times in your application because of multiprocessing? The enqueue=True argument should work fine regardless of the number of processes. However, if logger.add() is not guarded by if __name__ == "__main__" then unexpected behavior can occur because of duplicated handlers.

@Delgan BTW, aren't the first 2 of my 4 unit tests in #916 (comment) clear enough? Let me know if you want more information/details; I am willing to add more. 👍

Can you please provide additional details to extend the tests while keeping them as minimal as possible?

from loguru import logger
import datetime
import logging
import os
import re
import sys

from facepcs.utils import set_loguru, get_relative_path
from facepcs.config import config_setting

from ..conftest import check_dir

MSG = "test"

def rename(filepath) -> None:
  today = datetime.datetime.now().strftime(r"%Y-%m-%d")
  dirname = os.path.dirname(filepath)
  basename = "record.log"
  new_filepath = os.path.normpath(os.path.join(dirname, basename))

  os.rename(
      filepath,
      dynamic_insert_string(str(new_filepath), today),
  )

def dynamic_insert_string(base: str, insert: str) -> str:
  parts = base.split(".")

  if len(parts) > 1:
      base_filename = ".".join(parts[:-1])
      extension = parts[-1]
      result = f"{base_filename}.{insert}.{extension}"
  else:
      result = f"{base}.{insert}"

  return result

def remove_color_syntax(record) -> bool:
  ansi_escape = re.compile(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])")
  record["extra"]["message_no_colors"] = ansi_escape.sub("", record["message"])

  return True

class InterceptHandler(logging.Handler):
  def emit(
      self,
      record: logging.LogRecord,
      _logger = logger,
  ) -> None:
      try:
          level = _logger.level(record.levelname).name
      except ValueError:
          level = record.levelno

      frame, depth = sys._getframe(6), 6
      while frame is not None and frame.f_code.co_filename == logging.__file__:
          frame = frame.f_back
          depth += 1

      _logger.opt(depth=depth, exception=record.exc_info).log(
          level, record.getMessage()
      )

def set_loguru_pure(tmp_path):
  import logging
  from loguru import logger
  import sys
  import datetime

  logger.remove()

  logger.add(
      f"{str(tmp_path)}/record.log",
      level=logging.DEBUG,
      rotation=datetime.time(
          hour=0,
          minute=0,
          tzinfo=datetime.timezone(datetime.timedelta(hours=8)),
      ),
      compression=rename,
      format="{message}",
      enqueue=True,
      colorize=False,
      filter=remove_color_syntax,
  )
  logger.add(
      sys.stderr,
      level=logging.DEBUG,
      colorize=True,
      format="{message}",
      enqueue=True,
  )

  logger.configure(extra={"mark": "origin"})

  logging.basicConfig(handlers=[InterceptHandler()], level=0, force=True)

def test_single_process_pure_renaming(freeze_time, tmp_path):
  with freeze_time("2020-01-01 12:00:00") as frozen:
      set_loguru_pure(tmp_path)

      for _ in range(2):
          logger.debug(f"{MSG}")

      check_dir(
          tmp_path,
          files=[
              ("record.log", f"{MSG}\n" * 2),
          ],
      )

      frozen.tick(delta=datetime.timedelta(hours=24))

      for _ in range(4):
          logger.debug(f"{MSG}")

      check_dir(
          tmp_path,
          files=[
              ("record.2020-01-01.log", f"{MSG}\n" * 2),
              ("record.log", f"{MSG}\n" * 4),
          ],
      )

def test_multi_process_pure_renaming(freeze_time, tmp_path):
  import multiprocessing

  with freeze_time("2020-01-01 12:00:00") as frozen:
      set_loguru_pure(tmp_path)

      workers = []
      for _ in range(2):
          worker = multiprocessing.Process(target=message, args=(logger,))
          worker.start()
          workers.append(worker)

      for worker in workers:
          worker.join()

      check_dir(
          tmp_path,
          files=[
              ("record.log", f"{MSG}\n" * 2),
          ],
      )

      frozen.tick(delta=datetime.timedelta(hours=24))

      workers = []
      for _ in range(4):
          worker = multiprocessing.Process(target=message, args=(logger,))
          worker.start()
          workers.append(worker)

      for worker in workers:
          worker.join()

      check_dir(
          tmp_path,
          files=[
              ("record.2020-01-01.log", f"{MSG}\n" * 2),
              ("record.log", f"{MSG}\n" * 4),
          ],
      )

@changchiyou
Copy link
Author

@Delgan I'm considering deprecating the project's support for Windows (so I can use fork instead of spawn).

I replaced every from _loguru import logger in my project with from loguru import logger and ran pytest with this unit test:

@pytest.mark.parametrize("enqueue", [True])
@pytest.mark.parametrize("rotation", ["daily", "00:00"])
def test_multi_process_renaming(freeze_time, tmp_path, enqueue, rotation):
    import multiprocessing

    context = multiprocessing.get_context("fork")

    with freeze_time("2020-01-01 12:00:00") as frozen:
        config_setting.init_config_dir(get_relative_path("../../python-package/facepcs/storage/config"))
        config_setting.init_args({"log_path": str(tmp_path)})
        set_loguru(enqueue=enqueue, console_level=logging.DEBUG, console_format=r"{message}", log_format=r"{message}", rotation=rotation)


        workers = []
        for _ in range(2):
            worker = context.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)

        for worker in workers:
            worker.join()

        check_dir(
            tmp_path,
            files=[
                ("record.log", f"{MSG}\n" * 2),
            ],
        )

        frozen.tick(delta=datetime.timedelta(hours=24))

        workers = []
        for _ in range(4):
            worker = context.Process(target=message, args=(logger,))
            worker.start()
            workers.append(worker)

        for worker in workers:
            worker.join()

        check_dir(
            tmp_path,
            files=[
                ("record.2020-01-01.log", f"{MSG}\n" * 2),
                ("record.log", f"{MSG}\n" * 4),
            ],
        )

and it just passed without any error 🤣

tests/utils/test_logging_enhance.py::test_multi_process_renaming[daily-True] PASSED                                                                                                                      [ 88%]
tests/utils/test_logging_enhance.py::test_multi_process_renaming[00:00-True] PASSED                                                                                                                      [ 91%]

Another small question: I referred to your unit tests above in #916 (comment), but I can't pass context = multiprocessing.get_context("fork") as an argument into set_loguru(). Does it really matter for loguru?

    def patched_open(filepath, *args, **kwargs):
        if not os.path.exists(filepath):
            tz = datetime.timezone(datetime.timedelta(seconds=fakes["offset"]), name=fakes["zone"])
            ctimes[filepath] = datetime.datetime.now().replace(tzinfo=tz).timestamp()
>       return builtins_open(filepath, *args, **kwargs)
E       TypeError: 'context' is an invalid keyword argument for open()

tests/conftest.py:80: TypeError

I can't find any documentation about this argument.

In https://loguru.readthedocs.io/en/stable/resources/recipes.html#compatibility-with-multiprocessing-using-enqueue-argument you didn't pass the context argument to logger.add() either.

@changchiyou
Copy link
Author

@Delgan I'm sorry for bothering you for days 😭 but here's my final conclusion of this issue:

  1. There seems to be an unexpected bug that only occurs when processes are created with the "spawn" method: when "rename files when the day changes" is configured, a restart may cause the file to be renamed at an unexpected time.

  2. Regarding enqueue=True, the README.md or readthedocs needs to be updated: when enqueue=True is set in a single process, the rename that should happen when the date changes is not executed.

    • Rotating Error in multi-process (randomly happened) #916 (comment)

      I suspect that using the multiprocessing module with the spawn context behaves strangely when combined with freezegun, because after I made some changes to my project, it worked locally on my Mac but failed under pytest.

      No, it didn't pass pytest because I set enqueue=True in a single process.

    Here is my unit test as a showcase:

    full unit-test script
    from loguru import logger
    import datetime
    import logging
    import pytest
    
    from facepcs.utils import set_loguru, get_relative_path
    from facepcs.config import config_setting
    
    from ...conftest import check_dir
    
    MSG = "test"
    CONFIG_PATH = get_relative_path("../../../python-package/facepcs/storage/config")
    
    @pytest.mark.parametrize("enqueue", [True, False])
    def test_single_process_daily(freeze_time, tmp_path, enqueue):
        with freeze_time("2020-01-01 12:00:00") as frozen:
            config_setting.init_config_dir(CONFIG_PATH)
            config_setting.init_args({"log_path": str(tmp_path)})
            set_loguru(enqueue=enqueue, console_level=logging.DEBUG, console_format=r"{message}", log_format=r"{message}")
    
            for _ in range(2):
                logger.debug(f"{MSG}")
    
            check_dir(
                tmp_path,
                files=[
                    ("record.log", f"{MSG}\n" * 2),
                ],
            )
    
            frozen.tick(delta=datetime.timedelta(hours=24))
    
            for _ in range(4):
                logger.debug(f"{MSG}")
    
            check_dir(
                tmp_path,
                files=[
                    ("record.2020-01-01.log", f"{MSG}\n" * 2),
                    ("record.log", f"{MSG}\n" * 4),
                ],
            )
        logger.remove()
    full pytest result Note. I removed unrelated unit tests
     ```
     ============================================================================================= test session starts ==============================================================================================
     platform darwin -- Python 3.10.11, pytest-7.3.1, pluggy-1.0.0 -- /Users/christopherchang/miniconda3/envs/facepcs/bin/python
     cachedir: .pytest_cache
     rootdir: /Users/christopherchang/Code/FacePCS
     collected 43 items                                                                                                                                                                                             
    
     tests/config/test_list_config_file_path.py::test_list_config_file_path_2 PASSED                                                                                                                          [ 88%]
     tests/utils/logging_enhance/test_renaming_while_rotation.py::test_single_process_daily[True] FAILED                                                                                                      [ 90%]
    
     =================================================================================================== FAILURES ===================================================================================================
     _______________________________________________________________________________________ test_single_process_daily[True] ________________________________________________________________________________________
    
     freeze_time = <function freeze_time.<locals>.freeze_time at 0x280ca16c0>
     tmp_path = PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-193/test_single_process_daily_True0'), enqueue = True
    
         @pytest.mark.parametrize("enqueue", [True, False])
         def test_single_process_daily(freeze_time, tmp_path, enqueue):
             with freeze_time("2020-01-01 12:00:00") as frozen:
                 config_setting.init_config_dir(CONFIG_PATH)
                 config_setting.init_args({"log_path": str(tmp_path)})
                 set_loguru(enqueue=enqueue, console_level=logging.DEBUG, console_format=r"{message}", log_format=r"{message}")
         
                 for _ in range(2):
                     logger.debug(f"{MSG}")
         
                 check_dir(
                     tmp_path,
                     files=[
                         ("record.log", f"{MSG}\n" * 2),
                     ],
                 )
         
                 frozen.tick(delta=datetime.timedelta(hours=24))
         
                 for _ in range(4):
                     logger.debug(f"{MSG}")
         
     >           check_dir(
                     tmp_path,
                     files=[
                         ("record.2020-01-01.log", f"{MSG}\n" * 2),
                         ("record.log", f"{MSG}\n" * 4),
                     ],
                 )
    
     tests/utils/logging_enhance/test_renaming_while_rotation.py:38: 
     _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
    
     dir = PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-193/test_single_process_daily_True0')
    
         def check_dir(dir, *, files=None, size=None):
             actual_files = set(dir.iterdir())
             seen = set()
             if size is not None:
                 assert len(actual_files) == size
             if files is not None:
     >           assert len(actual_files) == len(files)
     E           AssertionError: assert 1 == 2
     E            +  where 1 = len({PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-193/test_single_process_daily_True0/record.log')})
     E            +  and   2 = len([('record.2020-01-01.log', 'test\ntest\n'), ('record.log', 'test\ntest\ntest\ntest\n')])
    
     tests/conftest.py:20: AssertionError
     --------------------------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------------------------
     test
     test
     test
     test
     test
     test
     =========================================================================================== short test summary info ============================================================================================
     FAILED tests/utils/logging_enhance/test_renaming_while_rotation.py::test_single_process_daily[True] - AssertionError: assert 1 == 2
     ========================================================================================= 1 failed, 42 passed in 9.97s =========================================================================================
     ```
    

@Delgan
Copy link
Owner

Delgan commented Jul 11, 2023

I'm sorry for bothering you for days 😭

No worries. Thank you for your patience. I appreciate that you haven't condemned Loguru and are trying to find a solution. I welcome your work and detailed reports about what you observe during the tests. It's just that there's a lot of information out there and it's sometimes difficult for me to understand and reproduce the problem you're facing.

But I hope we'll find a solution. I don't like the idea of you deprecating your own project on a platform because of Loguru... :)


Regarding your comment here: #916 (comment)

with enqueue=True, unit-test failed:

tests/utils/test_logging_enhance.py::test_daily_rotation_with_different_rotation[daily-True] FAILED
tests/utils/test_logging_enhance.py::test_daily_rotation_with_different_rotation[daily-False] PASSED
tests/utils/test_logging_enhance.py::test_daily_rotation_with_different_rotation[00:00-True] FAILED
tests/utils/test_logging_enhance.py::test_daily_rotation_with_different_rotation[00:00-False] PASSED

Those tests are failing because enqueue=True implies the logged messages are not processed synchronously. They are handled in a background thread, which means the main thread will execute the next instruction likely before the message is actually logged. This is incompatible with the current implementation of test_daily_rotation_with_different_rotation(). It's crucial in such situation to call logger.complete() to make sure all messages have been processed, otherwise the test expectations can't be guaranteed.

logger.debug("First")

# Wait for the first message to be fully processed by the background thread.
logger.complete()  

 # If complete() were not called, this line could be executed while first message is still pending.
frozen.tick(delta=datetime.timedelta(hours=23, minutes=30))

# If complete() were not called, it's possible that both first and second message would be
# processed after the actual time change, leading to test failure.
logger.debug("Second")

Regarding your comment here: #916 (comment)

BTW, aren't the first 2 of my 4 unit-tests in #916 (comment) clear enough? Let me know if you want more information/details, I am willing to add more. 👍

Thanks for providing the tests. I could run them on my computer, but the failure reason was different than the one you reported.

Looking at test_single_process_pure_renaming(), the expectations you listed do not seem appropriate to me.

        check_dir(
            tmp_path,
            files=[
                ("record.2020-01-01.log", f"{MSG}\n" * 2),
                ("record.log", f"{MSG}\n" * 4),
            ],
        )

The test is started at the fake date of "2020-01-01 12:00:00". When the first message is logged, the "record.log" file should be created, which is fine. However, you are then moving the clock forward by 24 hours. New date is "2020-01-02 12:00:00". Remember that the compression i.e. rename function is executed on rotation, which is triggered by the first logged message after the rotation="00:00" threshold. That is, when frozen.tick(delta=datetime.timedelta(hours=24)), nothing happens within Loguru. However, after that, the first logger.debug() call will start the rotation process as Loguru will detect that the "00:00" limit is exceeded. This causes your rename() function to be called, and your rename function is implemented using today = datetime.datetime.now().strftime(r"%Y-%m-%d"). Therefore, the file is naturally renamed to "record.2020-01-02.log" and not "record.2020-01-01.log" as you seem to expect.

Maybe you actually want to subtract datetime.timedelta(days=1) as we previously discussed in #899?
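
For illustration, here is a minimal sketch of that adjustment, reusing your dynamic_insert_string() helper (the one-day offset is the only change, and is an assumption about what you want):

import datetime
import os

def rename(filepath) -> None:
    # The compression callback runs just after the "00:00" threshold, so stamp
    # the rotated file with the previous day rather than the current one.
    yesterday = datetime.datetime.now() - datetime.timedelta(days=1)
    dirname = os.path.dirname(filepath)
    basename = "record.log"
    new_filepath = os.path.normpath(os.path.join(dirname, basename))

    os.rename(
        filepath,
        dynamic_insert_string(str(new_filepath), yesterday.strftime(r"%Y-%m-%d")),
    )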

In addition, there are also missing calls to logger.complete() as I explained above. It must be called after the for loop containing the logger.debug() calls. Otherwise, the results will be indeterminate. This is very important.

The case of test_multi_process_pure_renaming() is even more tricky. Basically, you're right that freeze_time() does not work well with the "spawn" method. This is because Loguru's rotation mechanism relies on the file creation timestamp, which means the filesystem needs to be faked during tests. The freeze_time() helper maintains a structure of file paths associated with fake timestamps. However, when "spawn" is used, the content of this structure is lost in the child process, and the timestamp can't be faked. The "fork" method copies the structure, therefore it's fine. This explains why you face unexpected results during tests, but in production it shouldn't be a problem.

Edit: On further reflection, I'm no longer sure of this explanation. The message are handled in the main process, therefore freeze_time should work correctly. I can't explain the differences between "fork" and "spawn" for now.


Regarding your comment here: #916 (comment)

Another small question: I refer to your unit-tests above #916 (comment), but I can't pass context = multiprocessing.get_context("fork") as argument into set_loguru(), does it really matter for loguru?

Sorry, this is a new argument which exists on master branch but is not available in the most recent release (0.7.0). I'd forgotten about that.

It's not a requirement. It's mainly useful for isolating each test case instead of calling multiprocessing.set_start_method() globally.
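
For now, a minimal sketch of the global alternative (this is what the tests below rely on):

import multiprocessing

# Force the start method once, globally, before any Process is created.
# The per-handler context argument of logger.add() is only available on the
# master branch at the time of writing.
multiprocessing.set_start_method("spawn", force=True)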


Regarding your comment here: #916 (comment)

With the explanations I've given above, I don't think we can conclude that there's a bug in Loguru for now.

I arranged the two unit tests you shared as follows:

  • Make sure logger.complete() is called adequately
  • Change expectations regarding the file name in test_single_process_pure_renaming()
  • Replace usage of freeze_time() with actual time.sleep() in test_multi_process_pure_renaming()

The test cases work fine now, regardless of "spawn" or "fork" being used:

from loguru import logger
import datetime
import logging
import os
import re
import sys
import multiprocessing
import time
from ..conftest import check_dir

multiprocessing.set_start_method("spawn", force=True)


MSG = "test"

def message(logger_):
   logger_.debug(f"{MSG}")


def rename(filepath) -> None:
  today = datetime.datetime.now().strftime(r"%Y-%m-%d_%H-%M-%S")
  dirname = os.path.dirname(filepath)
  basename = "record.log"
  new_filepath = os.path.normpath(os.path.join(dirname, basename))

  os.rename(
      filepath,
      dynamic_insert_string(str(new_filepath), today),
  )

def dynamic_insert_string(base: str, insert: str) -> str:
  parts = base.split(".")

  if len(parts) > 1:
      base_filename = ".".join(parts[:-1])
      extension = parts[-1]
      result = f"{base_filename}.{insert}.{extension}"
  else:
      result = f"{base}.{insert}"

  return result

def remove_color_syntax(record) -> bool:
  ansi_escape = re.compile(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])")
  record["extra"]["message_no_colors"] = ansi_escape.sub("", record["message"])

  return True

class InterceptHandler(logging.Handler):
  def emit(
      self,
      record: logging.LogRecord,
      _logger = logger,
  ) -> None:
      try:
          level = _logger.level(record.levelname).name
      except ValueError:
          level = record.levelno

      frame, depth = sys._getframe(6), 6
      while frame is not None and frame.f_code.co_filename == logging.__file__:
          frame = frame.f_back
          depth += 1

      _logger.opt(depth=depth, exception=record.exc_info).log(
          level, record.getMessage()
      )

def set_loguru_pure(tmp_path, rotation=None):
  import logging
  from loguru import logger
  import sys
  import datetime

  logger.remove()

  logger.add(
      f"{str(tmp_path)}/record.log",
      level=logging.DEBUG,
      rotation=datetime.time(
          hour=0,
          minute=0,
          tzinfo=datetime.timezone(datetime.timedelta(hours=8)),
      ) if rotation is None else rotation,
      compression=rename,
      format="{message}",
      enqueue=True,
      colorize=False,
      filter=remove_color_syntax,
  )
  logger.add(
      sys.stderr,
      level=logging.DEBUG,
      colorize=True,
      format="{message}",
      enqueue=True,
  )

  logger.configure(extra={"mark": "origin"})

  logging.basicConfig(handlers=[InterceptHandler()], level=0, force=True)

def test_single_process_pure_renaming(freeze_time, tmp_path):
  with freeze_time("2020-01-01 12:00:00") as frozen:
      set_loguru_pure(tmp_path)

      for _ in range(2):
          logger.debug(f"{MSG}")

      logger.complete()

      check_dir(
          tmp_path,
          files=[
              ("record.log", f"{MSG}\n" * 2),
          ],
      )

      frozen.tick(delta=datetime.timedelta(hours=24))

      for _ in range(4):
          logger.debug(f"{MSG}")

      logger.complete()

      check_dir(
          tmp_path,
          files=[
              ("record.2020-01-02_12-00-00.log", f"{MSG}\n" * 2),
              ("record.log", f"{MSG}\n" * 4),
          ],
      )

def test_multi_process_pure_renaming(tmp_path):
    in_2_seconds = (datetime.datetime.now() + datetime.timedelta(seconds=2)).time()

    set_loguru_pure(tmp_path, rotation=in_2_seconds)

    workers = []
    for _ in range(2):
        worker = multiprocessing.Process(target=message, args=(logger,))
        worker.start()
        workers.append(worker)

    for worker in workers:
        worker.join()

    logger.complete()

    check_dir(
        tmp_path,
        files=[
            ("record.log", f"{MSG}\n" * 2),
        ],
    )

    time.sleep(5)

    workers = []
    for _ in range(4):
        worker = multiprocessing.Process(target=message, args=(logger,))
        worker.start()
        workers.append(worker)

    for worker in workers:
        worker.join()

    logger.complete()

    files = sorted(tmp_path.iterdir())

    assert len(files) == 2

    first_file, second_file = files

    assert second_file.name == "record.log"
    assert second_file.read_text() == f"{MSG}\n" * 4

    assert first_file.name.startswith("record.2023")  # The exact name is not deterministic
    assert first_file.read_text() == f"{MSG}\n" * 2

@changchiyou
Copy link
Author

@Delgan Thanks for your super helpful reply ❤️ 👍. I spent entire days trying to understand your suggestions and how freezegun behaves with the "spawn" multiprocessing method (but I failed, it's too complex for me :P).


Those tests are failing because enqueue=True implies the logged messages are not processed synchronously. They are handled in a background thread, which means the main thread will execute the next instruction likely before the message is actually logged. This is incompatible with the current implementation of test_daily_rotation_with_different_rotation(). It's crucial in such situation to call logger.complete() to make sure all messages have been processed, otherwise the test expectations can't be guaranteed.

Can I understand that if I add the enqueue=True parameter when executing logger.add(), then I need to call logger.complete() after each logger.log() finishes in order to avoid unexpected errors? Regardless of single process or multi process?

🤔 Is it possible to implement a new feature: logger._log would execute logger.complete() automatically if enqueue=True has been set?


Maybe you actually want to subtract datetime.timedelta(days=1) as we previously discussed in #899?

I had already noticed this problem in #916 (comment) and fixed it successfully (it previously passed my unit tests with multiprocessing using the "fork" method).


I haven't finished refactoring my project, so I would like to leave this issue open until I have completely fixed it 😄.

@Delgan
Copy link
Owner

Delgan commented Jul 12, 2023

Can I understand that if I add the enqueue=True parameter when executing logger.add(), then I need to call logger.complete() after each logger.log() finishes in order to avoid unexpected errors? Regardless of single process or multi process?

🤔 Is it possible to implement a new feature: logger._log would execute logger.complete() automatically if enqueue=True has been set?

This is not required by default. You should only call logger.complete() if you need to wait until all messages have been logged. This is usually unnecessary, and one of the aims of enqueue=True is to enable non-blocking logging.

However, this is required for the unit tests you shared, as the listed expectations cannot be met if the messages have not been handled.
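
For illustration, a minimal sketch of the situation where it does matter (a hypothetical test that inspects the file right after logging):

from loguru import logger

logger.remove()
logger.add("record.log", format="{message}", enqueue=True)

logger.debug("test")

# Without complete(), the read below could happen before the background
# worker thread has actually written the message to the file.
logger.complete()

with open("record.log") as file:
    assert file.read() == "test\n"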

@changchiyou
Copy link
Author

changchiyou commented Jul 24, 2023

@Delgan You are right, it's a timezone problem. I added print(time.localtime(creation_time), start_time, sep='\n') right after

loguru/loguru/_file_sink.py

Lines 118 to 123 in 9fc929a

filepath = os.path.realpath(file.name)
creation_time = get_ctime(filepath)
set_ctime(filepath, creation_time)
start_time = datetime.datetime.fromtimestamp(
    creation_time, tz=datetime.timezone.utc
)

and I got:

time.struct_time(tm_year=2023, tm_mon=7, tm_mday=24, tm_hour=8, tm_min=48, tm_sec=11, tm_wday=0, tm_yday=205, tm_isdst=0) 
2023-07-24 00:48:11.868131+00:00
time.struct_time(tm_year=2023, tm_mon=7, tm_mday=24, tm_hour=7, tm_min=17, tm_sec=41, tm_wday=0, tm_yday=205, tm_isdst=0) 
2023-07-23 23:17:41.074867+00:00

The variable start_time only stores UTC time; therefore, loguru would fail to rotate if the last base file (record.log in my case) was created before 08:00 (UTC+8).
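
A minimal sketch of the mismatch (assuming a UTC+8 machine, matching the output above):

import datetime
import time

creation_time = time.time()

# Wall clock in the machine's local timezone (UTC+8 here), e.g. 08:48.
print(time.strftime("%H:%M:%S", time.localtime(creation_time)))

# What _file_sink.py stores in start_time: the same instant rendered in UTC, e.g. 00:48.
print(datetime.datetime.fromtimestamp(creation_time, tz=datetime.timezone.utc))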


I did solve the problem of rotating at the wrong time with your suggestion in #916 (comment) (this is a composite bug consisting of #894 and #899, which is why I couldn't fix it earlier by downgrading loguru to 0.6.0 in #916 (comment) and #916 (comment)),

The easiest way to find out if it's this bug would be to use Loguru v0.6.0 instead of v0.7.0 (the bug was introduced between these two releases).

but I currently have no idea whether the problem is completely solved because:

  1. I can't reproduce the rotation error with loguru==0.7.0 in unit tests.

    with freeze_time(start_time, timezone=("UTC", 8*60*60)) as frozen:
    
     FAILED tests/utils/logging_enhance/test_renaming_while_rotation.py::test_multi_process_restart[True-2020-01-01 00:00:00] - AssertionError: assert 2 == 1
     FAILED tests/utils/logging_enhance/test_renaming_while_rotation.py::test_multi_process_restart[True-2020-01-01 01:00:00] - AssertionError: assert 2 == 1
     FAILED tests/utils/logging_enhance/test_renaming_while_rotation.py::test_multi_process_restart[True-2020-01-01 02:00:00] - AssertionError: assert 2 == 1
     FAILED tests/utils/logging_enhance/test_renaming_while_rotation.py::test_multi_process_restart[True-2020-01-01 03:00:00] - AssertionError: assert 2 == 1
     FAILED tests/utils/logging_enhance/test_renaming_while_rotation.py::test_multi_process_restart[True-2020-01-01 04:00:00] - AssertionError: assert 2 == 1
     FAILED tests/utils/logging_enhance/test_renaming_while_rotation.py::test_multi_process_restart[True-2020-01-01 05:00:00] - AssertionError: assert 2 == 1
     FAILED tests/utils/logging_enhance/test_renaming_while_rotation.py::test_multi_process_restart[True-2020-01-01 06:00:00] - AssertionError: assert 2 == 1
     FAILED tests/utils/logging_enhance/test_renaming_while_rotation.py::test_multi_process_restart[True-2020-01-01 07:00:00] - AssertionError: assert 2 == 1
     FAILED tests/utils/logging_enhance/test_renaming_while_rotation.py::test_multi_process_restart[False-2020-01-01 00:00:00] - AssertionError: assert 2 == 1
     FAILED tests/utils/logging_enhance/test_renaming_while_rotation.py::test_multi_process_restart[False-2020-01-01 01:00:00] - AssertionError: assert PosixPath('/private/var/folders/w4/gpf418pn2nvcpg9nd2xvkgkc0000gn/T/pytest-of-christopherchang/pytest-40/test_mu...
     FAILED tests/utils/logging_enhance/test_renaming_while_rotation.py::test_multi_process_restart[False-2020-01-01 02:00:00] - AssertionError: assert 2 == 1
     FAILED tests/utils/logging_enhance/test_renaming_while_rotation.py::test_multi_process_restart[False-2020-01-01 03:00:00] - AssertionError: assert 2 == 1
     FAILED tests/utils/logging_enhance/test_renaming_while_rotation.py::test_multi_process_restart[False-2020-01-01 04:00:00] - AssertionError: assert 2 == 1
     FAILED tests/utils/logging_enhance/test_renaming_while_rotation.py::test_multi_process_restart[False-2020-01-01 05:00:00] - AssertionError: assert 2 == 1
     FAILED tests/utils/logging_enhance/test_renaming_while_rotation.py::test_multi_process_restart[False-2020-01-01 06:00:00] - AssertionError: assert 2 == 1
     FAILED tests/utils/logging_enhance/test_renaming_while_rotation.py::test_multi_process_restart[False-2020-01-01 07:00:00] - AssertionError: assert 2 == 1
    
  2. I haven't finished the local unit tests with loguru==0.6.0 (monkeypatch_date and darwin_filesystem from conftest.py of loguru==0.6.0 don't support multiprocessing via fork).

I will close this issue after I have confirmed everything via unit tests. 👍

@changchiyou
Copy link
Author

Final conclusion, I guess: this is a composite bug consisting of:

  1. UTC error mentioned in The log file is abnormal when the log is rotating #894
  2. wrong answer in How to adjust the time string which is inserted when rotating? #899 (already corrected in Rotating Error in multi-process (randomly happened) #916 (comment))

There are 2 ways to solve problem 1, but both solutions have disadvantages:

  1. Modify loguru==0.7.0 manually:
    remove tz=datetime.timezone.utc from:

    loguru/loguru/_file_sink.py

    Lines 121 to 123 in 4f5af16

    start_time = datetime.datetime.fromtimestamp(
        creation_time, tz=datetime.timezone.utc
    )

     start_time = datetime.datetime.fromtimestamp( 
    -     creation_time, tz=datetime.timezone.utc 
    +     creation_time,
     )

    and add .replace(tzinfo=None) in:

    if limit <= start_time:

    - if limit <= start_time: 
    + if limit.replace(tzinfo=None) <= start_time: 

    ⚠️ some of loguru's own unit tests fail under pytest after this change

  2. Downgrade loguru to 0.6.0
    ⚠️ monkeypatch_date and darwin_filesystem from conftest.py of loguru==0.6.0 don't support multiprocessing via fork, therefore I can't determine whether this solution works with multiprocessing.

@Delgan
Copy link
Owner

Delgan commented Aug 31, 2023

Hi @changchiyou.

The root cause of your problem (#894) has been fixed on master branch. Starting with the next Loguru release, you should no longer face unexpected behavior when using rotation.

I'm closing this issue consequently, as it should be solved. If you need any further help, I remain available.

@Delgan Delgan closed this as completed Aug 31, 2023
@Delgan Delgan added the question Further information is requested label Aug 31, 2023