
feat(container)!: Update kube-prometheus-stack ( 68.4.5 → 69.2.0 ) #6311

Open · wants to merge 1 commit into main from renovate/kyak-kube-prometheus-stack-69.x

Conversation

lumiere-bot[bot] (Contributor) commented Feb 6, 2025

This PR contains the following updates:

| Package | Update | Change |
| --- | --- | --- |
| kube-prometheus-stack (source) | major | 68.4.5 -> 69.2.0 |

Release Notes

prometheus-community/helm-charts (kube-prometheus-stack)

- v69.2.0 (Compare Source)
- v69.1.2 (Compare Source)
- v69.1.1 (Compare Source)
- v69.1.0 (Compare Source)
- v69.0.0 (Compare Source)
- v68.5.0 (Compare Source)


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

@lumiere-bot lumiere-bot bot requested a review from coolguy1771 as a code owner February 6, 2025 15:09
@lumiere-bot lumiere-bot bot added the renovate/container, type/major, area/kubernetes, and cluster/kyak labels Feb 6, 2025
lumiere-bot[bot] (Contributor, Author) commented Feb 6, 2025

--- kubernetes/kyak/apps/monitoring/kube-prometheus-stack/app Kustomization: flux-system/kube-prometheus-stack HelmRelease: monitoring/kube-prometheus-stack
+++ kubernetes/kyak/apps/monitoring/kube-prometheus-stack/app Kustomization: flux-system/kube-prometheus-stack HelmRelease: monitoring/kube-prometheus-stack
@@ -13,13 +13,13 @@
     spec:
       chart: kube-prometheus-stack
       sourceRef:
         kind: HelmRepository
         name: prometheus-community
         namespace: flux-system
-      version: 68.4.5
+      version: 69.2.0
   dependsOn:
   - name: openebs
     namespace: openebs-system
   - name: thanos
     namespace: monitoring
   install:
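
For reference, a minimal sketch of what the full HelmRelease looks like after this bump, reconstructed from the hunk above. Only the chart, sourceRef, version, and dependsOn fields come from the diff; apiVersion, metadata, and interval are assumptions based on a typical Flux v2 layout, not read from this repository's actual manifest:

```yaml
# Illustrative only: chart/sourceRef/version/dependsOn are taken from the diff above;
# apiVersion, metadata, and interval are assumed, not read from the repository.
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: kube-prometheus-stack
  namespace: monitoring
spec:
  interval: 30m  # assumed
  chart:
    spec:
      chart: kube-prometheus-stack
      version: 69.2.0  # bumped from 68.4.5
      sourceRef:
        kind: HelmRepository
        name: prometheus-community
        namespace: flux-system
  dependsOn:
    - name: openebs
      namespace: openebs-system
    - name: thanos
      namespace: monitoring
```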

@lumiere-bot
Copy link
Contributor Author

lumiere-bot bot commented Feb 6, 2025

--- HelmRelease: monitoring/kube-prometheus-stack Deployment: monitoring/kube-prometheus-stack-operator
+++ HelmRelease: monitoring/kube-prometheus-stack Deployment: monitoring/kube-prometheus-stack-operator
@@ -31,20 +31,20 @@
         app: kube-prometheus-stack-operator
         app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
         app.kubernetes.io/component: prometheus-operator
     spec:
       containers:
       - name: kube-prometheus-stack
-        image: quay.io/prometheus-operator/prometheus-operator:v0.79.2
+        image: quay.io/prometheus-operator/prometheus-operator:v0.80.0
         imagePullPolicy: IfNotPresent
         args:
         - --kubelet-service=kube-system/kube-prometheus-stack-kubelet
         - --kubelet-endpoints=true
         - --kubelet-endpointslice=false
         - --localhost=127.0.0.1
-        - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.79.2
+        - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.80.0
         - --config-reloader-cpu-request=0
         - --config-reloader-cpu-limit=0
         - --config-reloader-memory-request=0
         - --config-reloader-memory-limit=0
         - --thanos-default-base-image=quay.io/thanos/thanos:v0.37.2
         - --secret-field-selector=type!=kubernetes.io/dockercfg,type!=kubernetes.io/service-account-token,type!=helm.sh/release.v1

--- HelmRelease: monitoring/kube-prometheus-stack PrometheusRule: monitoring/kube-prometheus-stack-etcd
+++ HelmRelease: monitoring/kube-prometheus-stack PrometheusRule: monitoring/kube-prometheus-stack-etcd
@@ -26,15 +26,15 @@
         or
           count without (To) (
             sum without (instance, pod) (rate(etcd_network_peer_sent_failures_total{job=~".*etcd.*"}[120s])) > 0.01
           )
         )
         > 0
-      for: 10m
-      labels:
-        severity: critical
+      for: 20m
+      labels:
+        severity: warning
     - alert: etcdInsufficientMembers
       annotations:
         description: 'etcd cluster "{{ $labels.job }}": insufficient members ({{ $value
           }}).'
         summary: etcd cluster has insufficient number of members.
       expr: sum(up{job=~".*etcd.*"} == bool 1) without (instance, pod) < ((count(up{job=~".*etcd.*"})

--- HelmRelease: monitoring/kube-prometheus-stack PrometheusRule: monitoring/kube-prometheus-stack-kubernetes-system-kubelet
+++ HelmRelease: monitoring/kube-prometheus-stack PrometheusRule: monitoring/kube-prometheus-stack-kubernetes-system-kubelet
@@ -18,14 +18,16 @@
     - alert: KubeNodeNotReady
       annotations:
         description: '{{ $labels.node }} has been unready for more than 15 minutes
           on cluster {{ $labels.cluster }}.'
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubenodenotready
         summary: Node is not ready.
-      expr: kube_node_status_condition{job="kube-state-metrics",condition="Ready",status="true"}
-        == 0
+      expr: |-
+        kube_node_status_condition{job="kube-state-metrics",condition="Ready",status="true"} == 0
+        and on (cluster, node)
+        kube_node_spec_unschedulable{job="kube-state-metrics"} == 0
       for: 15m
       labels:
         severity: warning
     - alert: KubeNodeUnreachable
       annotations:
         description: '{{ $labels.node }} is unreachable and some workloads may be

@@ -62,14 +64,16 @@
     - alert: KubeNodeReadinessFlapping
       annotations:
         description: The readiness status of node {{ $labels.node }} has changed {{
           $value }} times in the last 15 minutes on cluster {{ $labels.cluster }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubenodereadinessflapping
         summary: Node readiness status is flapping.
-      expr: sum(changes(kube_node_status_condition{job="kube-state-metrics",status="true",condition="Ready"}[15m]))
-        by (cluster, node) > 2
+      expr: |-
+        sum(changes(kube_node_status_condition{job="kube-state-metrics",status="true",condition="Ready"}[15m])) by (cluster, node) > 2
+        and on (cluster, node)
+        kube_node_spec_unschedulable{job="kube-state-metrics"} == 0
       for: 15m
       labels:
         severity: warning
     - alert: KubeletPlegDurationHigh
       annotations:
         description: The Kubelet Pod Lifecycle Event Generator has a 99th percentile
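
Worth noting for review: both KubeNodeNotReady and KubeNodeReadinessFlapping now join against kube_node_spec_unschedulable, so nodes that are deliberately cordoned stop firing these alerts. A minimal sketch of the updated KubeNodeNotReady rule as it would render in the generated PrometheusRule, assuming the usual kubernetes-system-kubelet group name; the expr, for, labels, and annotations are taken from the hunk above:

```yaml
# Illustrative rendering of the updated rule; the group name is an assumption,
# everything inside the rule comes from the diff above.
groups:
  - name: kubernetes-system-kubelet
    rules:
      - alert: KubeNodeNotReady
        annotations:
          description: '{{ $labels.node }} has been unready for more than 15 minutes on cluster {{ $labels.cluster }}.'
          runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubenodenotready
          summary: Node is not ready.
        expr: |-
          kube_node_status_condition{job="kube-state-metrics",condition="Ready",status="true"} == 0
          and on (cluster, node)
          kube_node_spec_unschedulable{job="kube-state-metrics"} == 0
        for: 15m
        labels:
          severity: warning
```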

@lumiere-bot lumiere-bot bot changed the title from feat(container)!: Update kube-prometheus-stack ( 68.4.5 → 69.0.0 ) to feat(container)!: Update kube-prometheus-stack ( 68.4.5 → 69.1.0 ) Feb 6, 2025
@lumiere-bot lumiere-bot bot force-pushed the renovate/kyak-kube-prometheus-stack-69.x branch 2 times, most recently from e782843 to 7f30c4b on February 6, 2025 23:09
@lumiere-bot lumiere-bot bot changed the title from feat(container)!: Update kube-prometheus-stack ( 68.4.5 → 69.1.0 ) to feat(container)!: Update kube-prometheus-stack ( 68.4.5 → 69.1.1 ) Feb 6, 2025
@lumiere-bot lumiere-bot bot changed the title from feat(container)!: Update kube-prometheus-stack ( 68.4.5 → 69.1.1 ) to feat(container)!: Update kube-prometheus-stack ( 68.4.5 → 69.1.2 ) Feb 7, 2025
@lumiere-bot lumiere-bot bot force-pushed the renovate/kyak-kube-prometheus-stack-69.x branch from 7f30c4b to 30acbbf on February 7, 2025 07:10
@lumiere-bot lumiere-bot bot changed the title from feat(container)!: Update kube-prometheus-stack ( 68.4.5 → 69.1.2 ) to feat(container)!: Update kube-prometheus-stack ( 68.4.5 → 69.2.0 ) Feb 7, 2025
@lumiere-bot lumiere-bot bot force-pushed the renovate/kyak-kube-prometheus-stack-69.x branch from 30acbbf to aadfab6 on February 7, 2025 08:11