
[REF] Continue shifting to logging #68

Merged: 8 commits into bbfrederick:dev from logging-3 on Apr 23, 2021
Conversation

@tsalo (Contributor) commented on Apr 16, 2021

References #41.

Changes proposed in this pull request:

  • Change how the general logger is initialized. I realized that the console messages were being weirdly duplicated because I was changing root settings for the general logger, which was only necessary when the general logger was initialized using each file's __name__ (basically creating unique loggers, AFAIK). When you replace __name__ with "GENERAL" (i.e., a shared logger, as with "TIMING" and "MEMORY"), you can change the settings for the general logger across all files. (A minimal sketch follows this list.)
    • The console outputs look WAY better and the log file is unaffected.
  • Fix up the TimingLGR. It doesn't require try/except statements. I just had to fix up the TimingFormatter.
  • Remove debug arguments for functions in calcsimfunc.py, correlate.py, and dlfilter.py. The goal is to replace debug with LGR.debug calls (hence no need for the argument).
    • Hopefully I removed debug from all calls to these altered functions, but I'm not 100% sure.
  • Replace print statements with LGR.info calls. This also involves replacing comma-separated strings containing variables with f-strings.
    • E.g., print("testing", var, "here") --> LGR.info(f"testing {var} here")
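A minimal sketch of the shared-logger pattern described above (the logger names come from this PR; the surrounding setup is illustrative, not the exact rapidtide code):

```python
import logging

# logging.getLogger(__name__) creates one logger per module, so per-file
# tweaks to root settings led to duplicated console messages. A shared,
# named logger is returned as the same object from every module:
LGR = logging.getLogger("GENERAL")
TimingLGR = logging.getLogger("TIMING")
MemoryLGR = logging.getLogger("MEMORY")

def report(var):
    # Old style: print("testing", var, "here")
    LGR.info(f"testing {var} here")
```

Because logging.getLogger returns the same object for the same name, configuring "GENERAL" once (handlers, level) applies across all files.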

@bbfrederick (Owner) left a comment

This generally looks good. Two questions:

  1. I worry a little about the execution time impact of the debugging logger - some of the debugging output can be kind of lengthy; in the existing system, that code is never executed if debugging is off. It appears that that code is now always run; it's just a question of whether you look at the debug logs or not (see the sketch below this list). Also, I don't know how big the debugging logs could potentially get if they are run all the time. Finally, some of my debugging code does things like throw up plots - not sure how that is handled.
  2. Some routines, such as dlfilter, are called by happy, not rapidtide. On the last round of changes I had to port over some of the logger initialization stuff because of crashes in routines called by happy - since I don't see any additions to rapidtide in this round, there may be no new issues, but if you can think of any, let me know.
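For context on the cost concern in question 1, a small illustration (the helper here is hypothetical): with the stdlib logging module, the arguments to LGR.debug(...) are computed whether or not DEBUG output is enabled; only the message formatting and dispatch can be skipped.

```python
import logging

LGR = logging.getLogger("GENERAL")
logging.basicConfig(level=logging.INFO)  # DEBUG messages are discarded

def expensive_summary(data):
    # Hypothetical stand-in for a lengthy debug-only computation.
    return ", ".join(str(x) for x in data)

data = range(100_000)

# The f-string (and expensive_summary) runs BEFORE the call, even though
# the resulting message is thrown away at INFO level.
LGR.debug(f"summary: {expensive_summary(data)}")

# %-style lazy formatting defers building the string, but the argument
# itself is still evaluated eagerly.
LGR.debug("summary: %s", expensive_summary(data))
```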

@tsalo (Contributor, Author) commented on Apr 19, 2021

> 1. I worry a little about the execution time impact of the debugging logger - some of the debugging output can be kind of lengthy; in the existing system, that code is never executed if debugging is off. It appears that now that code is always run, it's just a question of whether you look at the debug logs or not. Also, I don't know how big the debugging logs could potentially get, if they are run all the time. Finally, some of my debugging code does things like throw up plots - not sure how that is handled.

I hadn't really thought about the time costs for those cases. I'm sure there's a way to inspect the LGR to determine its logging level, and then use that in an if statement for those cases where the debug messages involve running new code. I can look into it.

The debugging logs only print to the command line and the log file if the logging level is set to include debug messages. In cases where users use --debug, I imagine both could get quite long, but I think that was always the case, right?

The plot thing is also something I didn't consider, but I think the solution can be the same as the first issue (i.e., check the logger's logging level and then toggle plot creation based on that).

> 2. Some routines, such as dlfilter, are called by happy, not rapidtide. On the last round of changes I had to port over some of the logger initialization stuff because of crashes in routines called by happy - since I don't see any additions to rapidtide in this round, there may be no new issues, but if you can think of any, let me know.

Sorry about that! I didn't even think of accounting for happy. I was just planning to work through the whole package file by file. I can keep an eye out for problems, though, and run some tests with happy as well as rapidtide.

@tsalo (Contributor, Author) commented on Apr 20, 2021

LGR.getEffectiveLevel() looks like what we want for checking the level in the code. I will add something like if LGR.getEffectiveLevel() <= logging.DEBUG for those cases where the debugging messages involve extra calculations or plots.
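A minimal sketch of that guard (the two helpers are hypothetical stand-ins, not rapidtide functions):

```python
import logging

LGR = logging.getLogger("GENERAL")

def compute_expensive_diagnostics():
    # Hypothetical stand-in for a lengthy debug-only calculation.
    return {"n_voxels": 12345}

def make_debug_plot(details):
    # Hypothetical stand-in for debug-only plotting.
    print("plotting", details)

# Skip the expensive work entirely unless DEBUG output would be emitted.
if LGR.getEffectiveLevel() <= logging.DEBUG:
    details = compute_expensive_diagnostics()
    LGR.debug(f"diagnostics: {details}")
    make_debug_plot(details)
```

The stdlib's LGR.isEnabledFor(logging.DEBUG) performs the same check and reads the same way.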

@bbfrederick (Owner) commented:
Sounds like a plan.

Review threads on rapidtide/dlfilter.py (two) and rapidtide/workflows/rapidtide.py were marked outdated and resolved.
@bbfrederick (Owner) left a comment

Going to merge

@bbfrederick merged commit 3dff647 into bbfrederick:dev on Apr 23, 2021
@bbfrederick (Owner) commented:

Added a new routine, test_smoke.py, that confirms that happy and rapidtide run, which should catch any breaking changes early. This required some rejiggering of the argument parsing, but it seemed to be compatible with your PR.
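The shape of such a smoke test might look like the following sketch (the repository's actual test_smoke.py exercises real runs; invoking the installed console scripts with --help here is only an assumption, chosen because it at least exercises import and argument parsing):

```python
# Illustrative smoke test, not the repository's test_smoke.py.
import subprocess

def test_rapidtide_smoke():
    # A zero-exit run of the console script catches import and
    # argument-parsing breakage early.
    subprocess.run(["rapidtide", "--help"], check=True)

def test_happy_smoke():
    subprocess.run(["happy", "--help"], check=True)
```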

@tsalo deleted the logging-3 branch on April 23, 2021 at 20:45