feat: Allow writing logs to custom file #2473
Conversation
Logging to stderr can make it hard to use logging agents in the manner described by https://kubernetes.io/docs/concepts/cluster-administration/logging/#sidecar-container-with-logging-agent. Since we run in a distroless container, simply redirecting stderr is not an option. This PR adds the ability to more freely customize a deployment's logging architecture. Signed-off-by: Max Smythe <[email protected]>
Codecov Report: Base 53.42% // Head 53.19% // This PR decreases project coverage by 0.23%.

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##           master    #2473      +/-   ##
==========================================
- Coverage   53.42%   53.19%   -0.23%
==========================================
  Files         115      116       +1
  Lines       10196    10270      +74
==========================================
+ Hits         5447     5463      +16
- Misses       4334     4382      +48
- Partials      415      425      +10
```
```diff
@@ -92,6 +93,7 @@ var (
 )

 var (
+	logFile = flag.String("log-file", "", "Log to file, if specified. Default is to log to stderr.")
```
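For orientation, here is a minimal sketch of how such a flag could be consumed; `openLogStream` is a hypothetical helper (not the PR's actual code) that defaults to stderr and surfaces a failed open so the caller can treat it as fatal:

```go
package main

import (
	"flag"
	"fmt"
	"io"
	"os"
)

var logFile = flag.String("log-file", "", "Log to file, if specified. Default is to log to stderr.")

// openLogStream is a hypothetical helper: it returns stderr when no file is
// requested, and reports a failed open instead of silently losing logs.
func openLogStream() (io.Writer, func(), error) {
	if *logFile == "" {
		return os.Stderr, func() {}, nil
	}
	handle, err := os.OpenFile(*logFile, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		return nil, nil, fmt.Errorf("unable to open log file %q: %w", *logFile, err)
	}
	return handle, func() { handle.Close() }, nil
}

func main() {
	flag.Parse()
	w, closeStream, err := openLogStream()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer closeStream()
	fmt.Fprintln(w, "the logger's sink would be wired to this writer")
}
```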
Is this mutually exclusive, such that you can only choose either logFile or stderr? What if someone wants both?
Good question. If the logging frameworks support it, we could extend this to allow the flag to be specified multiple times.
So, as it's currently implemented, this is mutually exclusive, right?
Correct
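If "both" were ever wanted, zap can fan out a single logger to multiple destinations. A sketch of what that might look like — this is not what the PR implements, and the file path is illustrative — using `zapcore.NewMultiWriteSyncer`:

```go
package main

import (
	"os"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

func main() {
	// Illustrative path; opening/closing would normally live in setup code.
	handle, err := os.OpenFile("/tmp/gatekeeper.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		panic(err)
	}
	defer handle.Close()

	// Fan out every log entry to both stderr and the file.
	sink := zapcore.NewMultiWriteSyncer(zapcore.AddSync(os.Stderr), zapcore.AddSync(handle))
	core := zapcore.NewCore(
		zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig()),
		sink,
		zapcore.InfoLevel,
	)
	logger := zap.New(core)
	defer logger.Sync()
	logger.Info("written to both sinks")
}
```

Supporting a repeated `--log-file` flag would then be a matter of collecting the values and adding one syncer per file.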
```go
setupLog.Error(err, "problem running manager")
hadError = true
blockingLoop:
	for i := 0; i < 2; i++ {
```
Why is this looping 2x?
Just a precautionary measure to make sure it's not an infinite loop. If both functions return, we should exit (in practice we always return when Manager.Start() exits, but it guards against future whoopsies).
could we add your explanation in as a comment?
added an explanation.
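For reference, a simplified, self-contained sketch of the bounded-loop pattern under discussion; the channel names are hypothetical stand-ins for the manager and a secondary component:

```go
package main

import "fmt"

func main() {
	// Hypothetical completion signals: in the real binary these would be
	// closed when Manager.Start() and a secondary component return.
	managerDone := make(chan struct{})
	auxDone := make(chan struct{})
	go func() { close(auxDone) }()
	go func() { close(managerDone) }()

	// Bounding the loop at 2 is the precaution discussed above: if both
	// components return we must exit, and even if a future change breaks
	// one of the signals, this can never become an infinite wait.
blockingLoop:
	for i := 0; i < 2; i++ {
		select {
		case <-managerDone:
			// The manager exiting always means we should shut down.
			break blockingLoop
		case <-auxDone:
			auxDone = nil // never select this case again; keep waiting for the manager
		}
	}
	fmt.Println("exiting")
}
```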
"- HELMSUBST_DEPLOYMENT_CONTROLLER_MANAGER_LOGFILE": ` | ||
{{- if .Values.controllerManager.logFile}} | ||
- --log-file={{ .Values.controllerManager.logFile }} |
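For chart users, enabling this would look roughly like the following in `values.yaml` (the path is illustrative); the substitution above then renders the corresponding `--log-file=...` argument on the controller-manager container:

```yaml
controllerManager:
  logFile: /var/log/gatekeeper/controller.log
```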
Do we want to add this new flag to https://github.com/open-policy-agent/gatekeeper/blob/master/Makefile#L65 so we can test it as part of the end-to-end suite?
TBH I'm a little wary of testing too many permutations end-to-end, both for test-run-time reasons and to avoid becoming too e2e-test-heavy. IMO this shouldn't change the overall behavior enough to warrant a new e2e test.
If we tested every permutation of flags, we'd have probably millions of independent e2e tests.
I will say that I ran G8r with this flag manually and it works, though.
As discussed in the 1/4/2023 community call, let's leave this out of e2e, as the impact surface is low and the flag is optional.
just some questions, otherwise LGTM
```diff
@@ -123,11 +125,15 @@ func init() {
 }

 func main() {
+	os.Exit(innerMain())
```
Is this refactor just so we don't call `os.Exit()` in a bunch of places?
Correct, `os.Exit()` breaks `defer`, so `defer close()` would not reliably be called without this.
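A minimal sketch of the pattern (heavily simplified — the real `innerMain` does much more): `main` only converts the return value into an exit code, so every `defer` inside `innerMain` runs before the process exits:

```go
package main

import (
	"fmt"
	"os"
)

// main only converts innerMain's result into an exit code. os.Exit skips
// deferred calls, so all cleanup lives in innerMain, whose defers run
// before we ever reach os.Exit.
func main() {
	os.Exit(innerMain())
}

func innerMain() int {
	handle, err := os.OpenFile("/tmp/example.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return 1
	}
	defer handle.Close() // runs reliably, unlike with a direct os.Exit in main

	fmt.Fprintln(handle, "doing work")
	return 0
}
```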
```go
defer handle.Close()
logStream = handle
```
If there is an err above, the `handle` will be nil, right? Won't this cause problems when attempting to close?
It would return ErrInvalid, which would be swallowed silently:
https://cs.opensource.google/go/go/+/refs/tags/go1.19.4:src/os/file_posix.go;l=23
Because an error implies nothing was opened, there is presumably nothing to close.
However, a failure to open a logfile should probably be a fatal error, so I'm updating the code to exit immediately on error.
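To illustrate the linked behavior: `(*os.File).Close` on a nil handle returns `os.ErrInvalid` rather than panicking, and a bare `defer handle.Close()` discards that returned error:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

func main() {
	var handle *os.File // nil, as if os.OpenFile had failed
	err := handle.Close()
	fmt.Println(errors.Is(err, os.ErrInvalid)) // true: returned, not panicked
}
```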
```go
sink := zapcore.AddSync(os.Stderr)
if dest != nil {
```
Is there a minimal unit test we could set up to prove that writing to a file works?
In short, no. Given that these are all library functions, which presumably have their own unit tests, there is no need to duplicate work that is more properly owned by zapcore.
If we wanted to test this, e2e would be the appropriate venue, per my answer to Rita's comment; however, I don't think this is a permutation that warrants another e2e run.
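For illustration, here is a hedged sketch of the sink selection shown in the diff above — the variable names match the snippet, but the surrounding logger construction is assumed rather than taken from the PR:

```go
package main

import (
	"os"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// buildLogger mirrors the pattern in the diff: default to stderr, but if a
// destination file was opened (dest != nil), log there instead.
func buildLogger(dest *os.File) *zap.Logger {
	sink := zapcore.AddSync(os.Stderr)
	if dest != nil {
		sink = zapcore.AddSync(dest)
	}
	core := zapcore.NewCore(
		zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig()),
		sink,
		zapcore.InfoLevel,
	)
	return zap.New(core)
}

func main() {
	logger := buildLogger(nil) // nil dest -> stderr sink
	defer logger.Sync()
	logger.Info("routed to the selected sink")
}
```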
LGTM