User/maksiman/add binary logging support #1
Conversation
cmd/logging/logging.go
Outdated
}()

wait.Write([]byte("#"))
wait.Close()
Why is it that we close the wait pipe here? Also, is the write above this for testing something specific?
I wanted to use the wait pipe to make sure that the invoked binary was ready before returning the object. I've reworked this slightly, so now the ready func does the write into the pipe. I did it to mimic the containerd implementation, but obviously didn't initially account for the differences between Linux's pipes and Windows' named pipes 🤦‍♂️
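For reference, a minimal sketch of the readiness handshake being described, assuming the reworked ready callback simply writes into and closes the wait pipe (names here are illustrative, not the PR's exact code):

package logging

import "os"

// ready signals the parent that the invoked logging binary has finished
// its setup: write a marker byte into the wait pipe and close it. The
// Linux containerd equivalent uses an anonymous pipe; on Windows this
// handle is a named pipe, hence the differences noted above.
func ready(wait *os.File) error {
	if _, err := wait.Write([]byte("#")); err != nil {
		return err
	}
	return wait.Close()
}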
internal/cmd/io_binary.go
Outdated
b.cmd.Wait()

// b.binaryCloser.Do(func() {
// 	// Borrowed this from Ming's PR
// 	log.G(ctx).Debugf("Waiting for binaryIO to exit: %d", b.cmd.Process.Pid)
// 	done := make(chan error, 1)
// 	go func() {
// 		done <- b.cmd.Wait()
// 	}()
//
// 	select {
// 	case err := <-done:
// 		log.G(ctx).WithError(err).Errorf("Error while waiting for cmd to finish")
// 	case <-time.After(5 * time.Second):
// 		log.G(ctx).Errorf("Timeout while waiting for binaryIO process to finish")
// 		err := b.cmd.Process.Kill()
// 		if err != nil {
// 			log.G(ctx).WithError(err).Errorf("Error while killing binaryIO process")
// 		}
// 	}
// })
So running just the first line does not cause a hang, but uncommenting the commented-out code and removing the first line does?
Correct. But now that I've implemented this through a buffered copy, the issue is no longer there.
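For context, a hedged sketch of what copying through a buffer on a separate goroutine could look like here, so that cmd.Wait() only runs after the copies drain (identifiers are illustrative, not the PR's exact code):

package cmd

import (
	"io"
	"log"
	"sync"
)

// copyBuffered pumps src into dst through a fixed-size buffer on its
// own goroutine; the WaitGroup lets the caller drain all copies before
// calling cmd.Wait(), which avoids the hang described above.
func copyBuffered(wg *sync.WaitGroup, dst io.Writer, src io.ReadCloser) {
	wg.Add(1)
	go func() {
		defer wg.Done()
		defer src.Close()
		buf := make([]byte, 32*1024)
		if _, err := io.CopyBuffer(dst, src, buf); err != nil {
			// A pipe closed during shutdown surfaces as an error here; expected.
			log.Printf("binaryIO copy ended: %v", err)
		}
	}()
}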
		log.G(ctx).WithError(err).Errorf("Error while closing stdout npipe")
	}
}
if b.serr != nil {
Should we set b.serr to nil after successfully closing?
Is that what we usually do? Or is it to make sure that Close doesn't get called more than once?
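Nilling the field out after a successful close is one common way to make Close idempotent. A rough sketch of that pattern, assuming b.sout and b.serr are plain WriteClosers (not the PR's exact code):

package cmd

import (
	"context"
	"io"

	"github.com/containerd/containerd/log"
)

type binaryIO struct {
	sout, serr io.WriteCloser
}

// Close nils each pipe out after a successful close, so a second call
// becomes a harmless no-op rather than a double close.
func (b *binaryIO) Close(ctx context.Context) {
	for _, p := range []*io.WriteCloser{&b.sout, &b.serr} {
		if *p == nil {
			continue
		}
		if err := (*p).Close(); err != nil {
			log.G(ctx).WithError(err).Errorf("Error while closing npipe")
			continue
		}
		*p = nil
	}
}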
go func() {
	defer p.conWg.Done()
	c, err := l.Accept()
	if err != nil {
		p.conErr = err
		return
	}
	p.con = c
}()
I'm not sure I understand the use of a waitgroup and a separate goroutine here. Could you give some more context on why this is done?
Well, I suppose it's just so that later calls to Write, Read, and Close only happen once the connection has been set.
Yeah, that's what I figured. I borrowed this portion from containerd.
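That containerd pattern, roughly: Accept blocks until the logging binary dials the pipe, so the connection is established on its own goroutine and the WaitGroup gates anything that needs it. A hedged sketch (field names follow the snippet above; the Write method is an assumed illustration):

package cmd

import (
	"net"
	"sync"
)

// pipe wraps a listener whose single connection arrives asynchronously.
// conWg is released once Accept returns, so methods that need the
// connection first wait for it (or for the accept error).
type pipe struct {
	l      net.Listener
	con    net.Conn
	conErr error
	conWg  sync.WaitGroup
}

func newPipe(l net.Listener) *pipe {
	p := &pipe{l: l}
	p.conWg.Add(1)
	go func() {
		defer p.conWg.Done()
		c, err := p.l.Accept()
		if err != nil {
			p.conErr = err
			return
		}
		p.con = c
	}()
	return p
}

// Write blocks until the connection has been set, then forwards to it.
func (p *pipe) Write(b []byte) (int, error) {
	p.conWg.Wait()
	if p.conErr != nil {
		return 0, p.conErr
	}
	return p.con.Write(b)
}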
cmd/logging/logging.go
Outdated
go func() {
	if err := fn(ctx, config, wait.Close); err != nil {
		w.WriteString("Binary exited with error. sending error via channel\n")
		w.Flush()
		errCh <- err
		return
	}
	w.WriteString("Binary exited normally.\n")
	w.Flush()
	errCh <- nil
}()
The logger function takes in wait.Close as the ready parameter, and that is called at the end of the function. Why then does this function call the logger function and additionally call Close on wait again later?
Good point, replied above. I've changed the implementation a bit.
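A hedged sketch of the reworked flow, where only the logger's ready callback touches the wait pipe and the driver just waits on the error channel (Config, LoggerFunc, and run are stand-ins for the real types, not the PR's exact code):

package logging

import (
	"context"
	"fmt"
	"os"
)

type Config struct{ /* stdout/stderr pipes, namespace, id, ... */ }

type LoggerFunc func(ctx context.Context, cfg *Config, ready func() error) error

// run invokes the user-supplied logger; fn alone calls ready (which
// writes into and closes the wait pipe), so run never closes wait
// itself and the double Close goes away.
func run(ctx context.Context, cfg *Config, ready func() error, fn LoggerFunc) {
	errCh := make(chan error, 1)
	go func() {
		errCh <- fn(ctx, cfg, ready)
	}()
	if err := <-errCh; err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}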
Signed-off-by: Maksim An <[email protected]>
Force-pushed from f11c686 to 06fc931
This adds basic directory mount support for job containers. As any path on the host is already accessible from the container, the concept of volume mounts is a bit funny for job containers. However, it still makes sense to treat the volume mount point where the container image is mounted as where most things should be found regarding the container. The manner in which this is done is by appending the container mount path for the volume to where the rootfs volume is mounted on the host and then symlinking it.

So: Container rootfs volume path = "C:\C\123456789abcdefgh\"

Example #1
----------
{
    "host_path": "C:\mydir"
    "container_path": "\dir\in\container"
}
"C:\mydir" would be symlinked to "C:\C\123456789abcdefgh\dir\in\container"

Example #2
----------
Drive letters will be stripped:
{
    "host_path": "C:\mydir"
    "container_path": "C:\dir\in\container"
}
"C:\mydir" would be symlinked to "C:\C\123456789abcdefgh\dir\in\container"

Signed-off-by: Daniel Canter <[email protected]>
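A hedged sketch of the path computation described in that message: strip any drive letter from the container path, join the remainder onto the rootfs volume path, and create the symlink there (Windows-only behavior; function names are illustrative, not the PR's exact code):

package jobcontainers

import (
	"os"
	"path/filepath"
)

// containerMountTarget strips a drive letter from containerPath, if
// present, and appends the remainder to the rootfs volume path.
func containerMountTarget(rootfsVolume, containerPath string) string {
	if v := filepath.VolumeName(containerPath); v != "" {
		containerPath = containerPath[len(v):] // drop e.g. "C:"
	}
	return filepath.Join(rootfsVolume, containerPath)
}

// setupMount symlinks the host directory into the container's rootfs
// volume, e.g. C:\C\123456789abcdefgh\dir\in\container -> C:\mydir.
func setupMount(rootfsVolume, hostPath, containerPath string) error {
	return os.Symlink(hostPath, containerMountTarget(rootfsVolume, containerPath))
}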
No description provided.