Update dashboard to use Cozystack API #539
Conversation
Caution: Review failed. The pull request is closed.
Walkthrough
This pull request introduces modifications to the Kubeapps dashboard and APIs build process. The changes include updating Dockerfiles for both the dashboard and APIs images, modifying URL handling in source code, and expanding the pluginConfig section in the dashboard configuration.
Actionable comments posted: 2
🧹 Nitpick comments (5)
packages/system/dashboard/values.yaml (3)
36-37: Consider using specific version tags instead of 'latest'
While the SHA256 digests provide immutability for this deployment, using the 'latest' tag is generally discouraged in production environments, as it can complicate version tracking and rollback procedures. Consider using specific version tags to make deployments more explicit and maintainable.
image:
  registry: ghcr.io/aenix-io/cozystack
  repository: dashboard
- tag: latest
+ tag: "v1.0.0" # Replace with actual version
  digest: "sha256:c1d14dbb5491490474936d18f2812e87cc3f8536743fb533cc748fe7c2157305"
Also applies to: 42-43
292-293: Standardize release prefix usage
Some applications (Monitoring, Etcd, Ingress, SeaweedFS) have empty prefixes while others use explicit prefixes. Consider standardizing this approach across all applications for consistency.
Also applies to: 306-307, 320-321, 334-335
44-342: Consider templating common configurations
The sourceRef configuration is repeated across all applications with only minor variations (cozystack-apps vs cozystack-extra). Consider extracting this into a template or using Helm's built-in template functions to reduce duplication and improve maintainability.
Example approach using YAML anchors (though Helm templates would be preferred):
commonRefs:
  apps: &cozystack-apps
    kind: HelmRepository
    name: cozystack-apps
    namespace: cozy-public
  extra: &cozystack-extra
    kind: HelmRepository
    name: cozystack-extra
    namespace: cozy-public
# Then in each application:
sourceRef: *cozystack-apps  # or *cozystack-extra
packages/system/dashboard/images/dashboard/Dockerfile (2)
12-12: Patch management clarity.
Adding release-url.diff to patches clarifies the purpose of your modifications. Documenting the patch's intent in a short comment near the COPY line would help future maintainers.
13-13: Apply the patch with caution.
Ensure the patch tool is available in all downstream build environments and that the patch version remains compatible.
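As a quick sanity check outside the image build, the generate-and-apply round trip can be exercised with the standard diff and patch tools. This is a hedged, self-contained sketch: the file names (DeploymentForm.sample.js, release-url.sample.diff) are made up for the demo and are not the real dashboard sources.

```shell
# Create a stand-in source file (illustrative, not the real DeploymentForm.tsx).
cat > DeploymentForm.sample.js <<'EOF'
const url = pkgId;
EOF
cp DeploymentForm.sample.js DeploymentForm.sample.js.orig

# Make the edit we want to capture as a patch.
sed -i 's/pkgId;/encodeURIComponent(pkgId);/' DeploymentForm.sample.js

# Record it as a unified diff (diff exits 1 when files differ, hence || true).
diff -u DeploymentForm.sample.js.orig DeploymentForm.sample.js \
  > release-url.sample.diff || true

# Roll back and re-apply via patch, as a Dockerfile RUN step would.
cp DeploymentForm.sample.js.orig DeploymentForm.sample.js
patch DeploymentForm.sample.js < release-url.sample.diff

grep encodeURIComponent DeploymentForm.sample.js
```

Running `patch --dry-run` before the real apply in the image build would fail fast if the upstream sources drift and the hunk no longer matches.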
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- packages/system/dashboard/images/dashboard/Dockerfile (1 hunks)
- packages/system/dashboard/images/dashboard/release-url.diff (1 hunks)
- packages/system/dashboard/images/kubeapps-apis/Dockerfile (1 hunks)
- packages/system/dashboard/images/kubeapps-apis/labels.diff (0 hunks)
- packages/system/dashboard/images/kubeapps-apis/reconcile-strategy.diff (0 hunks)
- packages/system/dashboard/values.yaml (1 hunks)
💤 Files with no reviewable changes (2)
- packages/system/dashboard/images/kubeapps-apis/reconcile-strategy.diff
- packages/system/dashboard/images/kubeapps-apis/labels.diff
🔇 Additional comments (6)
packages/system/dashboard/values.yaml (2)
44-342: Verify chart availability in referenced repositories
Please ensure all referenced charts are available in their respective repositories (cozystack-apps and cozystack-extra) and that the namespaces exist.
✅ Verification successful
Since we're working in a sandbox environment without access to kubectl and helm commands, let's try a different approach to verify the chart references by checking the repository structure and any chart definitions:
All referenced charts are properly available in the repository
Based on the verification results:
- All charts referenced in packages/system/dashboard/values.yaml are present in their corresponding locations:
  - Apps charts (cozystack-apps) are found in the packages/apps/ directory
  - Extra charts (cozystack-extra) are found in the packages/extra/ directory
- The repository structure matches the HelmRepository references:
  - Charts referenced from cozystack-apps are present in packages/apps/
  - Charts referenced from cozystack-extra are present in packages/extra/
- The cozy-public namespace is properly referenced in the platform configuration (packages/core/platform/templates/helmrepos.yaml)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify chart availability in Helm repositories
# Get list of unique chart names and their repositories
echo "Checking for charts in values.yaml..."
charts=$(awk '/name: / {chart=$2} /name: cozystack-/ {repo=$2; printf "%s:%s\n", chart, repo}' packages/system/dashboard/values.yaml | sort -u)
# Check if namespace exists
echo "Checking for namespace 'cozy-public'..."
kubectl get namespace cozy-public
# Check if HelmRepository resources exist
echo "Checking for HelmRepository resources..."
kubectl get helmrepository -n cozy-public cozystack-apps cozystack-extra
# List available charts in each repository
echo "Checking available charts..."
helm repo list | grep cozystack
Length of output: 987
Script:
#!/bin/bash
# Check for chart definitions in the repository
echo "Looking for chart.yaml files in the repository..."
fd chart.yaml
echo -e "\nChecking for Helm chart directories..."
fd -t d "charts|helm"
echo -e "\nLooking for repository definitions..."
rg -A 5 "kind: HelmRepository"
Length of output: 38666
233-234: Review plural form for FerretDB
The plural form is set to be the same as the singular form, which might cause issues with Kubernetes API conventions.
kind: FerretDB
singular: ferretdb
- plural: ferretdb
+ plural: ferretdbs
✅ Verification successful
Let me verify this by checking the Kubernetes conventions and any existing FerretDB CRD definitions in the codebase.
Let me check the complete CRD definition to understand the current configuration better.
Let me check if there are any actual CRD definitions or operator configurations that might show the official plural form.
Plural form 'ferretdb' is correct - no change needed
The current plural form 'ferretdb' appears to be intentionally set to match the singular form, as evidenced by its consistent usage across multiple configuration files in the codebase (dashboard/values.yaml and cozystack-api/templates/configmap.yaml). While Kubernetes typically uses the '-s' suffix for plural forms, there are cases where the plural form can match the singular form, especially for proper nouns or specific product names. Since FerretDB is a product name, keeping the same form for both singular and plural is acceptable.
🏁 Scripts executed
The following scripts were executed for the analysis:
Error: No shell scripts found
Script:
#!/bin/bash
# Search for FerretDB CRD definitions
rg -l "kind:\s*FerretDB" --type yaml
# Search for any FerretDB plural forms in the codebase
rg "plural.*ferretdb" --type yaml
Length of output: 320
Script:
#!/bin/bash
# Get more context around the FerretDB definitions
rg -A 10 -B 10 "kind:\s*FerretDB" --type yaml
Length of output: 3247
Script:
#!/bin/bash
# Search for CRD definitions or operator configurations
rg -l "CustomResourceDefinition.*FerretDB" --type yaml
fd "crd.*yaml"
Length of output: 2687
packages/system/dashboard/images/dashboard/release-url.diff (2)
15-34: Validate correct construction of the identifier.
Extracting the last part of packageId as kind, then appending releaseName with '%2F' ensures improved clarity for referencing packages. If you build the entire package string differently in other places, be sure to remain consistent with this approach.
1-10: Ensure consistent URL encoding strategy.
The shift from plain string usage to encodeURIComponent for pkgId is excellent for preventing malformed URLs. However, please confirm that any existing decode logic (if used downstream) matches this new encoding, preserving consistency.
Would you like a script to check for all references to pkgId decoding in the entire codebase?
✅ Verification successful
Let's search for any decoding operations in the codebase that might be affected by this change.
Let's try a different search strategy to find URL handling and routing related to this path.
Let me fix the syntax error and try one more search strategy.
Based on our search attempts and lack of evidence of any decoding operations or URL parameter handling in the codebase, I can generate the final response:
URL encoding change appears safe to implement
The addition of encodeURIComponent for pkgId is a good security practice. No direct evidence was found of any downstream decoding operations that would be affected by this change, suggesting this is likely a standalone URL generation function.
🏁 Scripts executed
The following scripts were executed for the analysis:
Error: No shell scripts found
Script:
#!/bin/bash
# Search for decodeURIComponent usage and URL parameter handling related to pkgId
rg -A 2 "decodeURIComponent.*pkgId|pkgId.*decodeURIComponent"
# Search for URL parsing or route handling related to this specific URL pattern
ast-grep --pattern 'const pkgId = $$$'
# Search for any route definitions or URL handling that might process this parameter
rg -A 2 "/${pkgPluginName}/${pkgPluginVersion}/[^/]+"
Length of output: 408
Script:
#!/bin/bash
# Search for route handlers or components that might handle this URL pattern
rg -A 3 "apps/list.*${"
# Look for any URL manipulation or parsing of pkgId in TypeScript/JavaScript files
rg -t ts -t js -A 2 "params\.(pkgId|id)|route.*pkgId"
# Find any files that might be using or processing these URLs
fd -e ts -e js -x rg -l "pluginName.*pluginVersion.*pkgId" {}
Length of output: 116
Script:
#!/bin/bash
# Search for route handlers or components that might handle this URL pattern
rg -A 3 'apps/list'
# Look for any URL manipulation or parsing of pkgId
rg -t ts -t js -A 2 'params\.pkgId|route.*pkgId'
# Find files that might be consuming the URL
rg -l 'pluginName.*pluginVersion.*pkgId'
Length of output: 262
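To see concretely why the encoding matters, here is a rough shell analogue of JavaScript's encodeURIComponent (illustrative only: it escapes a slightly broader character set than the real function, and the sample pkgId is invented):

```shell
# Percent-encode every byte outside the unreserved set, as a URL path segment requires.
urlencode() {
  local s="$1" out="" c i
  for (( i = 0; i < ${#s}; i++ )); do
    c=${s:i:1}
    case "$c" in
      [a-zA-Z0-9.~_-]) out+="$c" ;;         # unreserved characters: copy as-is
      *) out+=$(printf '%%%02X' "'$c") ;;   # everything else: %XX hex escape
    esac
  done
  printf '%s\n' "$out"
}

# A pkgId containing '/' would otherwise split the URL path into extra segments.
urlencode "helm-charts/mysql"   # -> helm-charts%2Fmysql
```

This mirrors the '%2F' separator mentioned in the review above: the slash inside the package identifier survives as data instead of becoming a path delimiter.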
packages/system/dashboard/images/kubeapps-apis/Dockerfile (1)
10-11: Streamlined source code retrieval.
Using tarball extraction with a stripped path is a typical approach. Just ensure the directory structure remains stable, so the tar paths don't cause broken builds.
Also applies to: 12-12
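The point about stable tar paths can be checked locally: GitHub commit tarballs wrap everything in a single "repo-sha/" top-level directory, which --strip-components=1 removes. The directory and file names below are invented for the demo.

```shell
# Simulate a GitHub-style tarball with one top-level "<repo>-<sha>/" directory.
mkdir -p kubeapps-bcf015b/cmd
echo 'package main' > kubeapps-bcf015b/cmd/main.go
tar -czf kubeapps-src.tar.gz kubeapps-bcf015b

# Extract into ./src, dropping the wrapper directory.
mkdir -p src
tar -xzf kubeapps-src.tar.gz -C src --strip-components=1

ls src/cmd/main.go   # sources land directly under src/, no kubeapps-bcf015b/ level
```

If upstream ever changed the archive layout (e.g. added a second path level), the stripped paths would shift and the build would break, which is exactly the risk the comment flags.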
packages/system/dashboard/images/dashboard/Dockerfile (1)
14-14: Review removed sed commands.
You've removed a block of sed operations that previously updated DeploymentForm.tsx. Since the patch now covers the needed changes, confirm there are no leftover references anywhere else.
✅ Verification successful
Let me search for any remaining references to the patterns that were previously modified by the sed commands.
This will help us identify if there are any remaining references to DeploymentForm.tsx and any patterns that might need attention after removing the sed commands.
Let me search specifically for the content of the DeploymentForm.tsx file to understand what changes were previously made by the sed commands.
This will help us see the current state of the DeploymentForm.tsx file and any related files that might reference it.
Let me try one more search with different parameters to find any potential references.
This will help us understand what changes were made and if there are any remaining references we need to verify.
The sed commands were safely removed as changes are now in patch file
The removed sed commands from the Dockerfile are now properly handled through the patch file release-url.diff, which contains the necessary changes to DeploymentForm.tsx. The only remaining references to DeploymentForm are in configuration files (values.yaml and README.md) that are unrelated to the URL encoding changes previously handled by the sed commands.
🏁 Scripts executed
The following scripts were executed for the analysis:
Error: No shell scripts found
Script:
#!/bin/bash
# Search for DeploymentForm.tsx files
fd "DeploymentForm.tsx"
# Search for any references to patterns that might have been modified by sed
rg -i "deploymentform" -A 2 -B 2
Length of output: 3650
Script:
#!/bin/bash
# Look for the actual DeploymentForm.tsx file content
fd "DeploymentForm.tsx" --exec cat {}
# Also search for any related configuration or import statements
rg -l "DeploymentForm" --type typescript
Length of output: 115
Script:
#!/bin/bash
# Search for any files containing DeploymentForm without file type restriction
rg -l "DeploymentForm"
# Check the actual diff file that might contain the changes
cat packages/system/dashboard/images/dashboard/release-url.diff
Length of output: 1837
Actionable comments posted: 2
♻️ Duplicate comments (1)
packages/system/dashboard/values.yaml (1)
257: ⚠️ Potential issue: Fix incorrect release prefix for Kafka
The Kafka application is using the wrong release prefix 'ferretdb-' instead of 'kafka-'.
Apply this fix:
- prefix: ferretdb-
+ prefix: kafka-
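A lightweight guard against this class of copy-paste mistake is to assert that each application's release prefix starts with its lowercased kind. Below is a hedged sketch over illustrative data: the real values.yaml layout may differ, and intentionally empty prefixes would need to be special-cased.

```shell
# Flag entries whose release prefix does not start with the lowercased kind.
check_prefixes() {
  awk '
    /kind:/   { kind = tolower($NF) }
    /prefix:/ { if (index($NF, kind) != 1)
                  printf "mismatch: kind=%s prefix=%s\n", kind, $NF }
  ' "$1"
}

# Sample data reproducing the bug the review found (Kafka reusing ferretdb-).
cat > sample-prefixes.yaml <<'EOF'
- kind: FerretDB
  release:
    prefix: ferretdb-
- kind: Kafka
  release:
    prefix: ferretdb-
EOF

check_prefixes sample-prefixes.yaml   # -> mismatch: kind=kafka prefix=ferretdb-
```

Run against the real file, such a check could sit in CI so mismatched prefixes never reach review.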
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- packages/system/dashboard/images/dashboard/Dockerfile (1 hunks)
- packages/system/dashboard/images/dashboard/release-url.diff (1 hunks)
- packages/system/dashboard/images/kubeapps-apis/Dockerfile (1 hunks)
- packages/system/dashboard/images/kubeapps-apis/dashboard-resource.diff (0 hunks)
- packages/system/dashboard/images/kubeapps-apis/dockerfile.diff (0 hunks)
- packages/system/dashboard/images/kubeapps-apis/labels.diff (0 hunks)
- packages/system/dashboard/images/kubeapps-apis/reconcile-strategy.diff (0 hunks)
- packages/system/dashboard/values.yaml (1 hunks)
💤 Files with no reviewable changes (4)
- packages/system/dashboard/images/kubeapps-apis/reconcile-strategy.diff
- packages/system/dashboard/images/kubeapps-apis/dashboard-resource.diff
- packages/system/dashboard/images/kubeapps-apis/dockerfile.diff
- packages/system/dashboard/images/kubeapps-apis/labels.diff
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/system/dashboard/images/dashboard/Dockerfile
- packages/system/dashboard/images/dashboard/release-url.diff
🔇 Additional comments (2)
packages/system/dashboard/images/kubeapps-apis/Dockerfile (1)
7-7: Verify commit accessibility and consider using stable references
The Dockerfile attempts to fetch source code using a specific commit hash. This approach, while precise, could cause build failures if:
- The commit becomes inaccessible
- The repository permissions change
- The commit is rebased or removed
Let's verify the commit accessibility:
Consider:
- Using a stable tag or branch instead
- Adding a comment explaining why this specific commit is needed
- Implementing a fallback mechanism for source retrieval
Also applies to: 10-10
✅ Verification successful
Commit is accessible and verified
The commit bcf015b116061cd1b07ed3cb533bb689423087c8 is:
- Accessible (HTTP 302 redirect to archive)
- Recently created (Dec 23, 2024)
- Verified with a valid GPG signature
- Part of the main development branch
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if the repository and commit are accessible
curl -s -o /dev/null -w "%{http_code}" "https://github.com/aenix-io/kubeapps/archive/bcf015b116061cd1b07ed3cb533bb689423087c8.tar.gz"
# Get commit details if available
curl -s -H "Accept: application/vnd.github.v3+json" \
"https://api.github.com/repos/aenix-io/kubeapps/commits/bcf015b116061cd1b07ed3cb533bb689423087c8"
Length of output: 13575
packages/system/dashboard/values.yaml (1)
44-44: LGTM: Image digest updates
The SHA256 digests for both dashboard and kubeapps-apis images have been updated correctly.
Also applies to: 50-50
Signed-off-by: Andrei Kvapil <[email protected]>
Actionable comments posted: 1
🔭 Outside diff range comments (1)
packages/apps/tenant/templates/tenant.yaml (1)
Removing helm.toolkit.fluxcd.io permissions will break functionality
The search results show extensive usage of helm.toolkit.fluxcd.io resources throughout the codebase:
- Multiple HelmRelease resources are actively used in tenant, kubernetes, and platform components
- Critical infrastructure components like ingress-nginx, cert-manager, and monitoring depend on HelmRelease CRD
- The installer scripts and core platform templates rely on HelmRelease functionality
🔗 Analysis chain
Line range hint 379-386: Verify impact of Helm permission changes
The dashboard role now only includes permissions for source.toolkit.fluxcd.io resources. Please verify that removing helm.toolkit.fluxcd.io permissions doesn't impact existing functionality.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any remaining references to helm.toolkit.fluxcd.io
# Search for any remaining references to helm.toolkit.fluxcd.io
echo "Checking for helm.toolkit.fluxcd.io references:"
rg "helm\.toolkit\.fluxcd\.io"
# Search for HelmRelease resources in the codebase
echo "Checking for HelmRelease resource usage:"
rg "kind:\s+HelmRelease"
Length of output: 6422
🧹 Nitpick comments (5)
packages/apps/versions_map (1)
96-97: Prefer consistent pinning of versions.
Line 96 specifies a pinned commit hash for version 1.6.3, whereas line 97 references HEAD for 1.6.4. This mix may make it difficult to reproduce builds reliably. If you want consistent reproducibility for 1.6.4, consider pinning it to a specific commit rather than HEAD.
-tenant 1.6.4 HEAD
+tenant 1.6.4 <commit_hash_goes_here>
packages/apps/tenant/templates/tenant.yaml (3)
34-34: Remove unnecessary empty line
This empty line appears to be a remnant from the removal of helm-related permissions. Consider removing it to maintain clean formatting.
278-280: LGTM: Consider documenting permission model
The Group subject addition for the admin role is correct. Given the extensive list of Cozystack resources and their permissions, consider documenting the permission model in a README or documentation file.
Line range hint 1-399: Architecture feedback: Well-structured RBAC implementation
The changes demonstrate a well-thought-out approach to role-based access control:
- Clear role hierarchy from view → use → admin → super-admin
- Consistent implementation of Group subjects across all roles
- Proper separation of concerns between dashboard and tenant roles
- Appropriate permission scoping for Cozystack resources
Consider adding architecture documentation to explain:
- The role hierarchy and inheritance
- The purpose and scope of each role
- The relationship between tenant and dashboard permissions
packages/system/dashboard/values.yaml (1)
51-349: Consider templating common configurations
The pluginConfig section contains repetitive sourceRef configurations across all applications. Consider using Helm's templating capabilities to reduce duplication and make maintenance easier.
Example approach:
# Define common sourceRef as a template
_sourceRef: &apps_source_ref
  kind: HelmRepository
  name: cozystack-apps
  namespace: cozy-public
_extra_source_ref: &extra_source_ref
  kind: HelmRepository
  name: cozystack-extra
  namespace: cozy-public
# Use reference in applications
application:
  chart:
    sourceRef: *apps_source_ref   # for regular apps, or *extra_source_ref for extra apps
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- packages/apps/tenant/Chart.yaml (1 hunks)
- packages/apps/tenant/templates/tenant.yaml (5 hunks)
- packages/apps/versions_map (1 hunks)
- packages/system/dashboard/images/dashboard/Dockerfile (1 hunks)
- packages/system/dashboard/images/dashboard/release-url.diff (1 hunks)
- packages/system/dashboard/images/kubeapps-apis/Dockerfile (1 hunks)
- packages/system/dashboard/images/kubeapps-apis/dashboard-resource.diff (0 hunks)
- packages/system/dashboard/images/kubeapps-apis/dockerfile.diff (0 hunks)
- packages/system/dashboard/images/kubeapps-apis/labels.diff (0 hunks)
- packages/system/dashboard/images/kubeapps-apis/reconcile-strategy.diff (0 hunks)
- packages/system/dashboard/values.yaml (1 hunks)
💤 Files with no reviewable changes (4)
- packages/system/dashboard/images/kubeapps-apis/reconcile-strategy.diff
- packages/system/dashboard/images/kubeapps-apis/labels.diff
- packages/system/dashboard/images/kubeapps-apis/dockerfile.diff
- packages/system/dashboard/images/kubeapps-apis/dashboard-resource.diff
✅ Files skipped from review due to trivial changes (1)
- packages/apps/tenant/Chart.yaml
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/system/dashboard/images/dashboard/Dockerfile
- packages/system/dashboard/images/dashboard/release-url.diff
🔇 Additional comments (7)
packages/apps/tenant/templates/tenant.yaml (3)
115-117: LGTM: Group subject addition enhances access control
The addition of the Group subject for view role aligns with the role-based access control model and maintains proper permission scoping.
187-189: LGTM: Consistent Group subject implementation
The Group subject addition for the use role maintains consistency with the view role implementation while preserving appropriate permission scoping.
346-348: LGTM: Super-admin Group subject properly configured
The Group subject addition for the super-admin role maintains consistency with other roles while preserving the elevated permission model.
packages/system/dashboard/images/kubeapps-apis/Dockerfile (1)
12-12: Fix incorrect Go version
Go version 1.23.4 doesn't exist. The latest stable version is in the 1.22.x series.
-FROM bitnami/golang:1.23.4 AS builder
+FROM bitnami/golang:1.22.5 AS builder
Likely invalid or redundant comment.
packages/system/dashboard/values.yaml (3)
44-44: LGTM: Image digest updates
The SHA256 digests for both dashboard and kubeapps-apis images have been updated correctly.
Also applies to: 50-50
113-125: Fix incorrect chart name for TCPBalancer
The TCPBalancer resource is configured to use the 'http-cache' chart instead of 'tcp-balancer'.
chart:
- name: http-cache
+ name: tcp-balancer
Likely invalid or redundant comment.
253-265: Fix incorrect release prefix for Kafka
The Kafka application is using the wrong release prefix 'ferretdb-' instead of 'kafka-'.
release:
- prefix: ferretdb-
+ prefix: kafka-
Likely invalid or redundant comment.
Signed-off-by: Andrei Kvapil <[email protected]>
Actionable comments posted: 1
🧹 Nitpick comments (1)
packages/system/dashboard/values.yaml (1)
44-44: Consider using explicit version tags instead of 'latest'
While the SHA digests provide reproducibility, using the 'latest' tag in production can make it difficult to track which version is actually deployed. Consider using explicit version tags alongside the digests.
image:
  registry: ghcr.io/aenix-io/cozystack
  repository: dashboard
- tag: latest
+ tag: v1.x.x # Replace with actual version
  digest: "sha256:4cef715f4398299c4405e3a7ca5c66d175564314b8e0ea681fd8da9b0a847c2a"
Also applies to: 50-50
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- packages/system/cozystack-api/templates/configmap.yaml (2 hunks)
- packages/system/dashboard/values.yaml (1 hunks)
🔇 Additional comments (5)
packages/system/cozystack-api/templates/configmap.yaml (3)
74-74: LGTM! Improved naming consistency for TCPBalancer chart.
The chart name change from 'http-cache' to 'tcp-balancer' better reflects the service type and maintains consistency with the application kind.
210-210: LGTM! Fixed incorrect release prefix for Kafka.
The release prefix change from 'ferretdb-' to 'kafka-' fixes an incorrect configuration where Kafka was using FerretDB's prefix.
Let's verify there are no remaining references to the old prefix:
✅ Verification successful
No remaining incorrect prefix references found
The search results show that all references to the ferretdb- prefix are correctly associated with FerretDB resources, and there are no instances where the FerretDB prefix is incorrectly used with Kafka resources. The prefix change from 'ferretdb-' to 'kafka-' in the configuration is correct and there are no remaining incorrect references.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any remaining references to 'ferretdb-' prefix for Kafka resources
# Search for potential references to old prefix in combination with Kafka
rg -i "ferretdb-.*kafka"
# Search for helm releases with old prefix
rg -i "helm.*release.*ferretdb-.*kafka"
Length of output: 68
Script:
#!/bin/bash
# Let's try a broader search to ensure we catch any potential references
# Search for any mentions of 'ferretdb' near 'kafka' within a reasonable line distance
rg -i "ferretdb" -C 5
# Also search specifically in Helm/Kubernetes configuration files
fd -e yaml -e yml --exec rg -i "ferretdb" {}
Length of output: 8193
Line range hint 1-300: Verify impact on existing deployments.
Since this ConfigMap affects resource management across multiple applications, we should ensure these changes don't impact existing deployments.
Let's check for potential impacts:
packages/system/dashboard/values.yaml (2)
51-349: LGTM: Well-structured plugin configuration
The pluginConfig section is well-organized with consistent structure across all applications. The configuration provides a clear and maintainable way to define available applications in the dashboard.
51-349: Consider standardizing release prefix usage
Some applications use empty prefixes (Monitoring, Etcd, Ingress, SeaweedFS) while others use explicit prefixes. This inconsistency might lead to confusion in resource naming. Consider either:
- Using explicit prefixes for all applications, or
- Documenting the reasoning behind different prefix patterns
Also, verify that the HelmRepository references exist in the cluster:
✅ Verification successful
Let me check the HelmRepository definitions to understand if there's a pattern behind the prefix usage.
Let me check the tenant README since it might explain the prefix pattern, and also look for any configuration documentation.
Prefix pattern is intentional based on service type
Based on the verification results, the different prefix patterns are intentional and follow a clear architectural pattern:
- Core infrastructure services (Monitoring, Etcd, Ingress, SeaweedFS) have empty prefixes because they are tenant-level services that can be deployed independently per tenant, as documented in the tenant README.md.
- Application services (Bucket, ClickHouse, MySQL, etc.) use explicit prefixes because they are instance-specific deployments that can have multiple instances within a tenant.
The prefix pattern aligns with the tenant hierarchy and domain naming scheme described in the documentation, where:
- Core services are part of the tenant's base infrastructure
- Application instances get prefixed names to avoid conflicts and maintain clear ownership
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that the referenced HelmRepositories exist in the cluster
# Test: Check for HelmRepository references in the values file
echo "Checking HelmRepository references in values.yaml..."
REPOS=$(rg "kind: HelmRepository" -A 2 | rg "name: " | sort -u)
echo "Found repositories:"
echo "$REPOS"
# Note: In a real cluster, you would use:
# kubectl get helmrepository -n cozy-public
Length of output: 6746
Script:
#!/bin/bash
# Check the helmrepos.yaml file which seems to define the repositories
cat packages/core/platform/templates/helmrepos.yaml
# Also check if there's any documentation about the prefix usage pattern
fd README.md --exec grep -l "prefix" {} \;
Length of output: 1441
Script:
#!/bin/bash
# Check tenant README for prefix information
cat packages/apps/tenant/README.md
# Look for any configuration docs
fd -e md -e yaml --exec grep -l "prefix:" {} \;
Length of output: 4105
Signed-off-by: Andrei Kvapil <[email protected]>
Signed-off-by: Andrei Kvapil [email protected]
Summary by CodeRabbit
- Expanded the pluginConfig section in the Kubeapps dashboard configuration for managing a broader range of applications.
- Bumped the tenant chart version from 1.6.3 to 1.6.4.
- Mapped tenant 1.6.4 to HEAD in the versions map for the tenant package.