Use kube-rs resource `watcher` instead of `Api::watch` #378
Conversation
```diff
@@ -228,7 +227,7 @@ async fn handle_config_delete(
     kube_interface: &impl KubeInterface,
     config: &Configuration,
     config_map: ConfigMap,
-) -> Result<(), Box<dyn std::error::Error + Send + Sync + 'static>> {
+) -> anyhow::Result<()> {
```
yay!
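(For context: `anyhow::Result<()>` is shorthand for `Result<(), anyhow::Error>`. A minimal sketch of what the new signature buys; the function here is hypothetical, not from this PR:)

```rust
use anyhow::Context;

// Hypothetical example, not from this PR: any std::error::Error propagates
// with `?`, and `.with_context` attaches a message, with no
// Box<dyn Error + Send + Sync + 'static> boilerplate in the signature.
fn read_config(path: &str) -> anyhow::Result<String> {
    let contents = std::fs::read_to_string(path)
        .with_context(|| format!("failed to read configuration at {}", path))?;
    Ok(contents)
}
```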
```rust
}
if *first_event {
```
nit: not sure what is more expensive, branching logic or setting logic, but it might be simpler to just always set `*first_event = false` here?
Good question. From a quick search I can't find an exploration someone has done on that -- might be fun to test. In the meantime, I think you're right that it makes more sense to just set it.
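(To make the suggestion concrete, a tiny sketch; the helper name is hypothetical, not from this PR:)

```rust
// Hypothetical helper illustrating the nit above: instead of clearing the
// flag only inside the `if *first_event` branch, record whether this was
// the first event and unconditionally reset the flag.
fn consume_first_event(first_event: &mut bool) -> bool {
    let was_first = *first_event;
    *first_event = false; // always set, no branch needed
    was_first
}
```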
agent/src/util/config_action.rs (Outdated)
info!("handle_config - watcher started"); | ||
} else { | ||
return Err(anyhow::anyhow!( | ||
"Configuration watcher restarted - throwing error" |
maybe say "Configuration watcher restarted - throwing error to restart agent"
Force-pushed from e8c59a9 to 2837828.
Signed-off-by: vincepnguyen <[email protected]>
What this PR does / why we need it:
closes #374
Special notes for your reviewer:
We updated to using the `Api::watch` object in kube-rs v0.59.0 in #361, expecting it to work similarly to the informer we were previously using to watch for actions on Pods, Nodes, Instances, and Configurations. However, we have experienced different behavior (ref #372) from using `Api::watch` instead of `kube::runtime::watcher`. I was able to reproduce the unexpected behavior locally with the current implementation: if there are pre-existing Configurations, `handle_config_add` is called twice, once from where we handle pre-existing configs and once from the `Api::watch`. `Api::watch` appears to pull previous events from before the Agent starts, which is fairly unpredictable. Using `watcher` should resolve this.

Note: we are currently not handling restart events other than logging them. This is due to:
1. We previously handled `WatchEvent::Error` just by logging.
2. A restart event occurs on start-up (i.e. it is a start event too). We'd need to know that it is the first restart event and handle it differently than the others. We could pass in a `first_event` bool and error so long as it is not the first event.

In 19a44f9, the watchers were made to throw an error if they receive a restarted event that is not the initial start event. This will cause the controller or agent containers to error, restart, and properly reconcile.
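As a rough illustration of that pattern, here is a minimal sketch (not the PR's code: the function name, namespace, and resource type are illustrative, and it assumes the `watcher` API of kube-rs versions from this era, where `watcher` takes a `ListParams` and yields `Event::Applied` / `Event::Deleted` / `Event::Restarted`; newer releases have since renamed these):

```rust
use anyhow::bail;
use futures::{pin_mut, TryStreamExt};
use k8s_openapi::api::core::v1::ConfigMap;
use kube::{api::ListParams, Api, Client};
use kube_runtime::watcher;

// Illustrative sketch, not the PR's code: treat the first Restarted event as
// the watcher starting up, and treat any later Restarted event as a real
// restart, erroring out so the agent container restarts and re-reconciles.
async fn watch_config_maps(client: Client) -> anyhow::Result<()> {
    let api: Api<ConfigMap> = Api::namespaced(client, "default");
    let stream = watcher(api, ListParams::default());
    pin_mut!(stream);

    let mut first_event = true;
    while let Some(event) = stream.try_next().await? {
        match event {
            watcher::Event::Restarted(_objects) => {
                if first_event {
                    println!("watcher started");
                } else {
                    bail!("watcher restarted - throwing error to restart agent");
                }
            }
            watcher::Event::Applied(cm) => println!("applied: {:?}", cm.metadata.name),
            watcher::Event::Deleted(cm) => println!("deleted: {:?}", cm.metadata.name),
        }
        first_event = false;
    }
    Ok(())
}
```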
If applicable:
- `cargo fmt`
- `cargo build`
- `cargo clippy`
- `cargo test`
- `cargo doc`
- `./version.sh`