NETOBSERV-1326: NETOBSERV-1231: Drops & RTT metrics #453
Conversation
@jotak: This pull request references NETOBSERV-1326 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.15.0" version, but it targets "netobserv-1.5" instead. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Drop changes LGTM
- Added metrics: node_rtt, namespace_rtt, workload_rtt, node_drop_packets_total, node_drop_bytes_total, namespace_drop_packets_total, namespace_drop_bytes_total, workload_drop_packets_total, workload_drop_bytes_total
- Add dashboards for drops (not yet for RTT, need to handle histograms in dashboards first)
Force-pushed from d270abf to 7286588
/lgtm
@jotak: This pull request references NETOBSERV-1326 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.15.0" version, but no target version was set. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@             Coverage Diff             @@
##             main     #453      +/-   ##
==========================================
+ Coverage   62.26%   62.56%   +0.29%
==========================================
  Files          55       55
  Lines        6769     6822      +53
==========================================
+ Hits         4215     4268      +53
  Misses       2238     2238
  Partials      316      316
/lgtm
/ok-to-test
New images:
They will expire after two weeks. To deploy this build:

# Direct deployment, from operator repo
IMAGE=quay.io/netobserv/network-observability-operator:6bbedc2 make deploy

# Or using operator-sdk
operator-sdk run bundle quay.io/netobserv/network-observability-operator-bundle:v0.0.0-6bbedc2

Or as a Catalog Source:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: netobserv-dev
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: quay.io/netobserv/network-observability-operator-catalog:v0.0.0-6bbedc2
  displayName: NetObserv development catalog
  publisher: Me
  updateStrategy:
    registryPoll:
      interval: 1m
While you are here, it might not be a bad idea to also add DNS latency metrics, so customers can use them to trigger alerts when latency exceeds 20 ms, for example.
@msherif1234 yeah that's planned in another JIRA (https://issues.redhat.com/browse/NETOBSERV-1334) so for another PR
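As an illustration of that alerting pattern, here is a minimal sketch of a PrometheusRule built on the drop metrics added in this PR; the netobserv_ metric prefix, the SrcK8S_Namespace label, and the 100 packets/s threshold are assumptions for the sketch, not confirmed operator output:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: netobserv-drops-alert-example
  namespace: netobserv
spec:
  groups:
    - name: netobserv-drops
      rules:
        # Fires when any namespace drops more than 100 packets/s for 5 minutes.
        # Metric and label names are assumed from the list in this PR.
        - alert: NetObservHighPacketDrops
          expr: sum(rate(netobserv_namespace_drop_packets_total[5m])) by (SrcK8S_Namespace) > 100
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "High packet drop rate in namespace {{ $labels.SrcK8S_Namespace }}"

The same shape would work for a DNS latency alert once those metrics exist, with the expr replaced by a latency quantile over the histogram buckets.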
/ok-to-test
New images:
They will expire after two weeks. To deploy this build:

# Direct deployment, from operator repo
IMAGE=quay.io/netobserv/network-observability-operator:2c27e30 make deploy

# Or using operator-sdk
operator-sdk run bundle quay.io/netobserv/network-observability-operator-bundle:v0.0.0-2c27e30

Or as a Catalog Source:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: netobserv-dev
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: quay.io/netobserv/network-observability-operator-catalog:v0.0.0-2c27e30
  displayName: NetObserv development catalog
  publisher: Me
  updateStrategy:
    registryPoll:
      interval: 1m
/lgtm
Thanks @Amoghrd!
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: jotak. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
* NETOBSERV-1326: NETOBSERV-1231: Drops & RTT metrics
  - Added metrics: node_rtt, namespace_rtt, workload_rtt, node_drop_packets_total, node_drop_bytes_total, namespace_drop_packets_total, namespace_drop_bytes_total, workload_drop_packets_total, workload_drop_bytes_total
  - Add dashboards for drops (not yet for RTT, need to handle histograms in dashboards first)
* Update CRD doc and tests with added metrics
* Set new defaults
* Update CRD doc
* Externalize metrics doc
PR based on #447, which must be merged first.
For this PR alone, check commit d270abf.
Description
- Added metrics: node_rtt, namespace_rtt, workload_rtt, node_drop_packets_total, node_drop_bytes_total, namespace_drop_packets_total, namespace_drop_bytes_total, workload_drop_packets_total, workload_drop_bytes_total
- Add dashboards for drops (not yet for RTT, need to handle histograms in dashboards first); see the sketch below for how histogram-backed panels could be queried
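Since the RTT metrics are histograms, here is a minimal sketch of recording rules a dashboard could be built on, assuming the operator's usual netobserv_ prefix, the standard Prometheus _bucket suffix, and illustrative label names (SrcK8S_HostName, SrcK8S_Namespace are assumptions, not confirmed output):

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: netobserv-dashboard-queries-example
  namespace: netobserv
spec:
  groups:
    - name: netobserv-recording
      rules:
        # p99 round-trip time per node, computed from the assumed histogram buckets.
        - record: netobserv:node_rtt:p99_5m
          expr: histogram_quantile(0.99, sum(rate(netobserv_node_rtt_bucket[5m])) by (le, SrcK8S_HostName))
        # Per-namespace packet drop rate, as a drops dashboard panel would chart it.
        - record: netobserv:namespace_drop_packets:rate5m
          expr: sum(rate(netobserv_namespace_drop_packets_total[5m])) by (SrcK8S_Namespace)

Precomputing the quantile in a recording rule keeps the histogram_quantile cost out of every dashboard refresh, which is one way to address the "handle histograms in dashboards" point above.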
Dependencies
n/a
Checklist
If you are not familiar with our processes or don't know what to answer in the list below, let us know in a comment: the maintainers will take care of that.