
Executing "loki/templates/query-frontend/hpa.yaml" at <include "loki.hpa.apiVersion" .>: error calling includ #12716

Closed
mgialelis-pm opened this issue Apr 21, 2024 · 0 comments · Fixed by #12755
Labels
3.0 area/helm type/bug Something is not working as expected

Comments

@mgialelis-pm

Describe the bug
When rendering the new 6.3.3 Loki helm chart, the following error is produced:

Error: template: loki/templates/query-frontend/hpa.yaml:3:19: executing "loki/templates/query-frontend/hpa.yaml" at <include "loki.hpa.apiVersion" .>: error calling include: template: no template "loki.hpa.apiVersion" associated with template "gotpl"

To Reproduce
Steps to reproduce the behavior:

Helm values.yml:


deploymentMode: Distributed
######################################################################################################################
#
# Base Loki Configs including kubernetes configurations and configurations for Loki itself,
# see below for more specifics on Loki's configuration.
#
######################################################################################################################
# -- Configuration for running Loki
loki:
  hpa:
    apiVersion: autoscaling/v2
  image:
    # -- The Docker registry
    registry: docker.io
    repository: grafana/loki
    tag: null
  annotations: {}
  podAnnotations: {}
  # -- Common labels for all services
  serviceLabels: {}
  revisionHistoryLimit: 10
  # -- The SecurityContext for Loki pods
  podSecurityContext:
    fsGroup: 10001
    runAsGroup: 10001
    runAsNonRoot: true
    runAsUser: 10001
  containerSecurityContext:
    readOnlyRootFilesystem: true
    capabilities:
      drop:
        - ALL
    allowPrivilegeEscalation: false
  # -- Should enableServiceLinks be enabled. Default to enable
  enableServiceLinks: true

  # Should authentication be enabled
  auth_enabled: true

  # -- Check https://grafana.com/docs/loki/latest/configuration/#server for more info on the server configuration.
  server:
    http_listen_port: 3100
    grpc_listen_port: 9095
    http_server_read_timeout: 600s
    http_server_write_timeout: 600s

  # -- Limits config
  limits_config:
    reject_old_samples: true
    reject_old_samples_max_age: 168h
    max_cache_freshness_per_query: 10m
    split_queries_by_interval: 15m
    query_timeout: 300s
    volume_enabled: true
    retention_period: 52560h
    ingestion_burst_size_mb: 25
    ingestion_rate_mb: 15
    max_label_names_per_series: 90
    per_stream_rate_limit: 10M
    per_stream_rate_limit_burst: 20M
    deletion_mode: filter-and-delete

  # -- Provides a reloadable runtime configuration file for some specific configuration
  runtimeConfig:
    overrides:
      fake:
        deletion_mode: filter-and-delete
        retention_period: 1h


  # -- Check https://grafana.com/docs/loki/latest/configuration/#common_config for more info on how to provide a common configuration
  commonConfig:
    path_prefix: /var/loki
    replication_factor: 3
    compactor_address: '{{ include "loki.compactorAddress" . }}'

  # -- Storage config. Providing this will automatically populate all necessary storage configs in the templated config.
  storage:
    bucketNames:
      # REPLACE ME
      chunks: loki-storage
      ruler: loki-ruler-storage
    type: gcs
    gcs:
      chunkBufferSize: 0
      requestTimeout: "0s"
      enableHttp2: true

  # -- Configure memcached as an external cache for chunk and results cache. Disabled by default
  # must enable and specify a host for each cache you would like to use.

  # REPLACE ME
  memcached:
    chunk_cache:
      enabled: false
      host: "192.168.1.3:11211"
      service: "memcached-client"
      batch_size: 256
      parallelism: 10
    results_cache:
      enabled: false
      host: "192.168.1.131:11211"
      service: "memcached-client"
      timeout: "500ms"
      default_validity: "12h"

  # -- Check https://grafana.com/docs/loki/latest/configuration/#schema_config for more info on how to configure schemas
  schemaConfig:
    configs:
      - from: "2020-09-07"
        index:
          period: 24h
          prefix: loki_index_
        object_store: gcs
        schema: v12
        store: boltdb-shipper
      - from: "2024-04-13"
        index:
          period: 24h
          prefix: loki_index_
        object_store: gcs
        schema: v13
        store: tsdb

  # -- Check https://grafana.com/docs/loki/latest/configuration/#ruler for more info on configuring ruler
  rulerConfig: {}
  # -- Additional query scheduler config
  query_scheduler: {}
  # -- Additional storage config
  storage_config:
    boltdb_shipper:
      index_gateway_client:
        server_address: '{{ include "loki.indexGatewayAddress" . }}'
    tsdb_shipper:
      index_gateway_client:
        server_address: '{{ include "loki.indexGatewayAddress" . }}'
    hedging:
      at: "250ms"
      max_per_second: 20
      up_to: 3

  # --  compactor configuration
  compactor:
    delete_request_cancel_period: 15m
    retention_delete_delay: 30m
    compaction_interval: 10m

  # --  Optional pattern ingester configuration
  pattern_ingester:
    enabled: false

  # --  Optional querier configuration
  query_range:
    cache_results: true
    max_retries: 5

  querier:
    multi_tenant_queries_enabled: true
    max_concurrent: 10

  ingester:
    chunk_encoding: snappy
  # --  Optional index gateway configuration
  index_gateway:
    mode: simple
  frontend:
    scheduler_address: '{{ include "loki.querySchedulerAddress" . }}'
    tail_proxy_url: '{{ include "loki.querierAddress" . }}'
  frontend_worker:
    scheduler_address: '{{ include "loki.querySchedulerAddress" . }}'

  # -- Optional distributor configuration
  distributor: {}

  # -- Enable tracing
  tracing:
    enabled: false

######################################################################################################################
#
# Chart Testing
#
######################################################################################################################
# The Loki canary pushes logs to and queries from this loki installation to test
# that it's working correctly
lokiCanary:
  enabled: true
  # -- If true, the canary will send directly to Loki via the address configured for verification --
  # -- If false, it will write to stdout and an Agent will be needed to scrape and send the logs --
  push: true
  # -- The name of the label to look for at loki when doing the checks.
  labelname: pod
  # -- Additional annotations for the `loki-canary` Daemonset
  annotations: {}
  # -- Additional labels for each `loki-canary` pod
  podLabels: {}
  service:
    annotations: {}
    labels: {}
  nodeSelector: {}
  # -- Tolerations for canary pods
  tolerations: []
  image:
    # -- The Docker registry
    registry: docker.io
    # -- Docker image repository
    repository: grafana/loki-canary
    # -- Overrides the image tag whose default is the chart's appVersion
    tag: null
  # -- Update strategy for the `loki-canary` Daemonset pods
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1

######################################################################################################################
#
# Service Accounts and Kubernetes RBAC
#
######################################################################################################################
serviceAccount:
  annotations:
  # REPLACE ME
    iam.gke.io/gcp-service-account: [email protected]
  create: true
  name: loki

######################################################################################################################
#
# Gateway and Ingress
#
# By default this chart will deploy a Nginx container to act as a gateway which handles routing of traffic
# and can also do auth.
#
# If you would prefer you can optionally disable this and enable using k8s ingress to do the incoming routing.
#
######################################################################################################################

# Configuration for the gateway
gateway:
  enabled: true
  replicas: 5
  # -- Enable logging of 2xx and 3xx HTTP requests
  verboseLogging: true
  autoscaling:
    enabled: false
    minReplicas: 5
    maxReplicas: 12
    targetCPUUtilizationPercentage: 60
    targetMemoryUtilizationPercentage:
  annotations: {}
  resources:
    limits:
      memory: 512Mi
    requests:
      cpu: 200m
      memory: 256Mi
  # Gateway service configuration
  service:
    annotations:
      cloud.google.com/load-balancer-type: Internal
      cloud.google.com/neg: '{"ingress": true}'
      external-dns.alpha.kubernetes.io/hostname: loki-gateway.project.local
    port: 80
    type: LoadBalancer

  # Basic auth configuration
  basicAuth:
    # -- Enables basic authentication for the gateway
    enabled: true
    username: loki
    password: loki

  nginxConfig:
    # -- Which schema to be used when building URLs. Can be 'http' or 'https'.
    schema: http
    # -- Enable listener for IPv6, disable on IPv4-only systems
    enableIPv6: true
    # -- Allows appending custom configuration to the server block
    serverSnippet: |-
      proxy_read_timeout 350;
      proxy_connect_timeout 350;
      proxy_send_timeout 350;
    # -- Allows appending custom configuration to the http block, passed through the `tpl` function to allow templating
    httpSnippet: >-
      {{ if .Values.loki.tenants }}proxy_set_header X-Scope-OrgID $remote_user;{{ end }}

######################################################################################################################
#
# Migration
#
######################################################################################################################

# -- Options that may be necessary when performing a migration from another helm chart
migrate:
  # -- When migrating from a distributed chart like loki-distributed or enterprise-logs
  fromDistributed:
    # -- Set to true if migrating from a distributed helm chart
    enabled: false
    # -- If migrating from a distributed service, provide the distributed deployment's
    # memberlist service DNS so the new deployment can join its ring.
    memberlistService: ""


######################################################################################################################
#
# Microservices Mode
#
#
######################################################################################################################

# -- Configuration for the ingester
ingester:
  # -- Number of replicas for the ingester, when zoneAwareReplication.enabled is true, the total
  # number of replicas will match this value with each zone having 1/3rd of the total replicas.
  replicas: 4
  autoscaling:
    enabled: true
    minReplicas: 4
    maxReplicas: 8
    targetCPUUtilizationPercentage: 60
  resources:
    requests:
      cpu: 310m
      memory: 2G
  persistence:
    claims:
    - name: data
      size: 100Gi
      storageClass: premium-rwo
    - name: wal
      size: 100Gi
      storageClass: premium-rwo
    enableStatefulSetAutoDeletePVC: true
    enabled: true
    inMemory: false


# --  Configuration for the distributor
distributor:
  # -- Number of replicas for the distributor
  replicas: 3
  maxUnavailable: 2
  autoscaling:
    enabled: true
    maxReplicas: 8
    minReplicas: 3
    targetCPUUtilizationPercentage: 60
  resources:
    requests:
      cpu: 310m
      memory: 256Mi

# --  Configuration for the querier
querier:
  # -- Number of replicas for the querier
  replicas: 4
  maxUnavailable: 2
  # -- hostAliases to add
  hostAliases: []
  #  - ip: 1.2.3.4
  #    hostnames:
  #      - domain.tld
  autoscaling:
    enabled: true
    maxReplicas: 8
    minReplicas: 4
    targetCPUUtilizationPercentage: 60
  # -- Resource requests and limits for the querier
  resources:
    requests:
      cpu: "1"
      memory: 750Mi
  persistence:
    # -- Enable creating PVCs for the querier cache
    enabled: false
    # -- Size of persistent disk
    size: 10Gi
    # -- Storage class to be used.
    # If defined, storageClassName: <storageClass>.
    # If set to "-", storageClassName: "", which disables dynamic provisioning.
    # If empty or set to null, no storageClassName spec is
    # set, choosing the default provisioner (gp2 on AWS, standard on GKE, AWS, and OpenStack).
    storageClass: null
    # -- Annotations for querier PVCs
    annotations: {}

# -- Configuration for the query-frontend
queryFrontend:
  # -- Number of replicas for the query-frontend
  replicas: 2
  maxUnavailable: 1
  autoscaling:
    enabled: true
    maxReplicas: 3
    minReplicas: 1
    targetCPUUtilizationPercentage: 60
  # -- Resource requests and limits for the query-frontend
  resources:
    limits:
      memory: 512Mi
    requests:
      cpu: 256m
      memory: 256Mi


# -- Configuration for the query-scheduler
queryScheduler:
  # -- Number of replicas for the query-scheduler.
  # It should be lower than `-querier.max-concurrent` to avoid generating back-pressure in queriers;
  # it's also recommended that this value evenly divides the latter
  replicas: 2
  resources: {}

# -- Configuration for the index-gateway
indexGateway:
  # -- Number of replicas for the index-gateway
  replicas: 3
  maxUnavailable: 1
  # -- Whether the index gateway should join the memberlist hashring
  joinMemberlist: true
  # -- Resource requests and limits for the index-gateway
  resources:
    limits:
      memory: 512Mi
    requests:
      cpu: 200m
      memory: 256Mi
  persistence:
    enabled: true
    size: 80Gi
    storageClass: premium-rwo
    # -- Enable StatefulSetAutoDeletePVC feature
    enableStatefulSetAutoDeletePVC: false
    whenDeleted: Retain
    whenScaled: Retain

# -- Configuration for the compactor
compactor:
  # -- Number of replicas for the compactor
  replicas: 1
  persistence:
    # -- Enable creating PVCs for the compactor
    enabled: true
    size: 10Gi
    storageClass: premium-rwo

bloomCompactor:
  replicas: 0
bloomGateway:
  replicas: 0
patternIngester:
  replicas: 0

resultsCache:
  enabled: false
chunksCache:
  enabled: false


######################################################################################################################
# SAFETY
# Zero out replica counts of other deployment modes
backend:
  replicas: 0
read:
  replicas: 0
write:
  replicas: 0

singleBinary:
  replicas: 0

############################################### WARNING ###############################################################
#
# DEPRECATED VALUES
#
# The following values are deprecated and will be removed in a future version of the helm chart!
#
############################################## WARNING ##############################################################
# -- DEPRECATED Monitoring section determines which monitoring features to enable, this section is being replaced
# by https://github.com/grafana/meta-monitoring-chart
monitoring:
  serviceMonitor:
    # -- If enabled, ServiceMonitor resources for Prometheus Operator are created
    enabled: true
    namespaceSelector: {}
    annotations: {}
    labels: {}
    # -- ServiceMonitor scrape interval
    # Default is 15s because included recording rules use a 1m rate, and scrape interval needs to be at
    # least 1/4 rate interval.
    interval: 15s
    scrapeTimeout: null
    relabelings: []
    metricRelabelings: []
    scheme: http
    tlsConfig: null

  1. helm template --values loki-new.yml --version 6.3.3 loki grafana/loki
  2. You'll get the error

Expected behavior
Helm to template the helm chart with the HPA resources

Environment:

  • Infrastructure: K8s
  • Deployment tool: helm

Screenshots, Promtail config, or terminal output
If applicable, add any output to help explain your problem.

 helm template --values loki-new.yml --version 6.3.3 loki grafana/loki
 
 
 Error: template: loki/templates/query-frontend/hpa.yaml:3:19: executing "loki/templates/query-frontend/hpa.yaml" at <include "loki.hpa.apiVersion" .>: error calling include: template: no template "loki.hpa.apiVersion" associated with template "gotpl"

Use --debug flag to render out invalid YAML

The apiVersion helper for most of the resources can easily be found here: https://github.com/grafana/loki/blob/main/production/helm/loki/templates/_helpers.tpl

But even doing a code search, there is no helper for the HPA apiVersion.
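For illustration, the missing define would presumably follow the same pattern as the other apiVersion helpers in _helpers.tpl. The sketch below is a hypothetical reconstruction, not the actual fix from the linked PR: the define name comes from the error message, and the fallback between autoscaling API versions via Helm's built-in .Capabilities.APIVersions.Has is an assumption.

    {{/*
    Hypothetical sketch of the missing helper, modeled on the existing
    apiVersion helpers. Prefers an explicit loki.hpa.apiVersion value,
    otherwise falls back based on cluster capabilities (assumed logic).
    */}}
    {{- define "loki.hpa.apiVersion" -}}
    {{- if .Values.loki.hpa.apiVersion -}}
    {{- .Values.loki.hpa.apiVersion -}}
    {{- else if .Capabilities.APIVersions.Has "autoscaling/v2" -}}
    autoscaling/v2
    {{- else -}}
    autoscaling/v2beta1
    {{- end -}}
    {{- end -}}

Until a helper like this exists in the chart's _helpers.tpl, any template that calls include "loki.hpa.apiVersion" will fail with the "no template ... associated with template \"gotpl\"" error shown above.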
