diff --git a/documentation/assemblies/deploying/assembly-deploy-options.adoc b/documentation/assemblies/deploying/assembly-deploy-options.adoc deleted file mode 100644 index 0353f37e426..00000000000 --- a/documentation/assemblies/deploying/assembly-deploy-options.adoc +++ /dev/null @@ -1,11 +0,0 @@ -// This assembly is included in: -// -// deploying/deploying.adoc - -[id="deploy-options_{context}"] -= What is deployed with Strimzi - -//Standard kafka deployment introduction -include::../../shared/snip-intro-kafka-deployment.adoc[leveloffset=+1] -//General deploy order -include::../../modules/deploying/con-deploy-options-order.adoc[leveloffset=+1] diff --git a/documentation/assemblies/deploying/assembly-deploy-tasks-prep.adoc b/documentation/assemblies/deploying/assembly-deploy-tasks-prep.adoc index f505026e47c..1636e1148b8 100644 --- a/documentation/assemblies/deploying/assembly-deploy-tasks-prep.adoc +++ b/documentation/assemblies/deploying/assembly-deploy-tasks-prep.adoc @@ -3,14 +3,14 @@ // deploying/deploying.adoc [id="deploy-tasks-prereqs_{context}"] -= Preparing for your Strimzi deployment += Preparing for your deployment [role="_abstract"] Prepare for a deployment of Strimzi by completing any necessary pre-deployment tasks. Take the necessary preparatory steps according to your specific requirements, such as the following: * xref:deploy-prereqs-{context}[Ensuring you have the necessary prerequisites before deploying Strimzi] -* xref:downloads-{context}[Downloading the Strimzi release artifacts to facilitate your deployment] +* xref:con-deploy-operator-best-practices-{context}[Considering operator deployment best practices] * xref:container-images-{context}[Pushing the Strimzi container images into your own registry (if required)] * xref:adding-users-the-strimzi-admin-role-{context}[Setting up admin roles to enable configuration of custom resources used in the deployment] @@ -20,8 +20,6 @@ NOTE: To run the commands in this guide, your cluster user must have the rights include::../../modules/deploying/con-deploy-prereqs.adoc[leveloffset=+1] //operator deployment tips include::../../modules/deploying/con-deploy-operator-best-practices.adoc[leveloffset=+1] -//How to access release artifacts -include::../../modules/deploying/con-deploy-product-downloads.adoc[leveloffset=+1] //Container images include::../../modules/deploying/con-deploy-container-images.adoc[leveloffset=+1] //Designating administrators to manage the install process diff --git a/documentation/assemblies/deploying/assembly-deploy-tasks.adoc b/documentation/assemblies/deploying/assembly-deploy-tasks.adoc index abb7ebd491a..0a02d8d0fa6 100644 --- a/documentation/assemblies/deploying/assembly-deploy-tasks.adoc +++ b/documentation/assemblies/deploying/assembly-deploy-tasks.adoc @@ -3,11 +3,10 @@ // deploying/deploying.adoc [id="deploy-tasks_{context}"] -= Deploying Strimzi using installation artifacts += Deploying Strimzi using installation files [role="_abstract"] -Having xref:deploy-tasks-prereqs_{context}[prepared your environment for a deployment of Strimzi], you can deploy Strimzi to a Kubernetes cluster. -Use the installation files provided with the release artifacts. +Download and use the Strimzi xref:downloads-{context}[deployment files] to deploy Strimzi components to a Kubernetes cluster. ifdef::Section[] You can deploy Strimzi {ProductVersion} on Kubernetes {KubernetesVersion}. 
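As an aside on the admin-role preparation step mentioned above: a minimal sketch of how the `strimzi-admin` cluster role provided with the installation files might be bound to a cluster user so that user can configure Strimzi custom resources. The user name `user1` and the binding name are example values, not taken from this change.

[source,yaml]
----
# Sketch only: grant the strimzi-admin ClusterRole (from install/strimzi-admin)
# to an example user so that user can manage Strimzi custom resources.
# "user1" is a placeholder; substitute a real cluster user.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: strimzi-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: strimzi-admin
subjects:
  - kind: User
    name: user1
    apiGroup: rbac.authorization.k8s.io
----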
@@ -32,8 +31,6 @@ The steps to deploy Strimzi using the installation files are as follows: NOTE: To run the commands in this guide, a Kubernetes user must have the rights to manage role-based access control (RBAC) and CRDs. -//Deployment paths -include::../../modules/deploying/con-deploy-paths.adoc[leveloffset=+1] //Options and instructions for deploying Cluster Operator include::assembly-deploy-cluster-operator.adoc[leveloffset=+1] //Options and instructions for deploying Kafka resource diff --git a/documentation/assemblies/deploying/assembly-drain-cleaner.adoc b/documentation/assemblies/deploying/assembly-drain-cleaner.adoc index b1c9e2d154a..5950335310d 100644 --- a/documentation/assemblies/deploying/assembly-drain-cleaner.adoc +++ b/documentation/assemblies/deploying/assembly-drain-cleaner.adoc @@ -57,13 +57,6 @@ webhooks: # ... ---- -[id='drain-cleaner-prereqs-{context}'] -== Downloading the Strimzi Drain Cleaner deployment files - -To deploy and use the Strimzi Drain Cleaner, you need to download the deployment files. - -The Strimzi Drain Cleaner deployment files are available from the link:{ReleaseDownload}. - //steps for deploying drain cleaner include::../../modules/drain-cleaner/proc-drain-cleaner-deploying.adoc[leveloffset=+1] ifdef::Section[] diff --git a/documentation/assemblies/overview/assembly-kafka-components.adoc b/documentation/assemblies/overview/assembly-kafka-components.adoc index ebc79868b8f..c4cb4b2e023 100644 --- a/documentation/assemblies/overview/assembly-kafka-components.adoc +++ b/documentation/assemblies/overview/assembly-kafka-components.adoc @@ -5,7 +5,20 @@ [id="kafka-components_{context}"] = Strimzi deployment of Kafka -//standard kafka deployment intro -include::../../shared/snip-intro-kafka-deployment.adoc[leveloffset=+1] +Strimzi enables the deployment of Apache Kafka components to a Kubernetes cluster, typically running as clusters for high availability. + +A standard Kafka deployment using Strimzi might include the following components: + +* *Kafka* cluster of broker nodes as the core component +* *Kafka Connect* cluster for external data connections +* *Kafka MirrorMaker* cluster to mirror data to another Kafka cluster +* *Kafka Exporter* to extract additional Kafka metrics data for monitoring +* *Kafka Bridge* to enable HTTP-based communication with Kafka +* *Cruise Control* to rebalance topic partitions across brokers + +Not all of these components are required, though you need Kafka as a minimum for a Strimzi-managed Kafka cluster. +Depending on your use case, you can deploy the additional components as needed. +These components can also be used with Kafka clusters that are not managed by Strimzi. + //Overview of Kafka component interaction include::../../modules/overview/con-kafka-concepts-components.adoc[leveloffset=+1] diff --git a/documentation/assemblies/upgrading/assembly-upgrade-cluster-operator.adoc b/documentation/assemblies/upgrading/assembly-upgrade-cluster-operator.adoc index 2448a79f257..7b9f94533b4 100644 --- a/documentation/assemblies/upgrading/assembly-upgrade-cluster-operator.adoc +++ b/documentation/assemblies/upgrading/assembly-upgrade-cluster-operator.adoc @@ -31,7 +31,7 @@ If you deployed the Cluster Operator using a Helm chart, use `helm upgrade`. The `helm upgrade` command does not upgrade the {HelmCustomResourceDefinitions}. Install the new CRDs manually after upgrading the Cluster Operator. -You can access the CRDs from the {ReleaseDownload} or find them in the `crd` subdirectory inside the Helm Chart. 
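The new component overview above notes that Kafka is the minimum requirement for a Strimzi-managed cluster. For illustration, a minimal `Kafka` custom resource of the kind the Cluster Operator reconciles; the cluster name and ephemeral storage are example choices, a ZooKeeper-based layout is shown, and KRaft-mode clusters use `KafkaNodePool` resources instead.

[source,yaml]
----
# Minimal example cluster (illustrative values only).
# Shown with ZooKeeper; KRaft-mode deployments define broker and controller
# roles in KafkaNodePool resources rather than the zookeeper section.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
----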
+You can download the CRDs from the {ReleaseDownload} or find them in the `crd` subdirectory inside the Helm Chart. [id='con-upgrade-cluster-operator-unsupported-kafka-{context}'] == Upgrading the Cluster Operator returns Kafka version error diff --git a/documentation/assemblies/upgrading/assembly-upgrade.adoc b/documentation/assemblies/upgrading/assembly-upgrade.adoc index a26ad115a0b..d2018b4749b 100644 --- a/documentation/assemblies/upgrading/assembly-upgrade.adoc +++ b/documentation/assemblies/upgrading/assembly-upgrade.adoc @@ -6,7 +6,7 @@ = Upgrading Strimzi [role="_abstract"] -Upgrade your Strimzi installation to version {ProductVersion} and benefit from new features, performance improvements, and enhanced security options. +Download the latest Strimzi xref:downloads-{context}[deployment files] and upgrade your Strimzi installation to version {ProductVersion} to benefit from new features, performance improvements, and enhanced security options. During the upgrade, Kafka is also updated to the latest supported version, introducing additional features and bug fixes to your Strimzi deployment. Use the same method to upgrade the Cluster Operator as the initial method of deployment. @@ -16,8 +16,6 @@ Kafka upgrades are performed by the Cluster Operator through rolling updates of If you encounter any issues with the new version, Strimzi can be xref:assembly-downgrade-{context}[downgraded] to the previous version. -Released Strimzi versions can be found at {ReleaseDownload}. - .Upgrade without downtime For topics configured with high availability (replication factor of at least 3 and evenly distributed partitions), the upgrade process should not cause any downtime for consumers and producers. diff --git a/documentation/deploying/deploying.adoc b/documentation/deploying/deploying.adoc index c06fc58b0fd..ec58d6ec5ff 100644 --- a/documentation/deploying/deploying.adoc +++ b/documentation/deploying/deploying.adoc @@ -8,14 +8,16 @@ include::shared/attributes.adoc[] //Introduction to the install process include::assemblies/deploying/assembly-deploy-intro.adoc[leveloffset=+1] +//Using Kafka in Kraft mode +include::assemblies/deploying/assembly-kraft-mode.adoc[leveloffset=+1] //Install options include::modules/deploying/con-strimzi-installation-methods.adoc[leveloffset=+1] -//Checklist to show deployment order and the options available -include::assemblies/deploying/assembly-deploy-options.adoc[leveloffset=+1] +//Deployment path +include::modules/deploying/con-deploy-paths.adoc[leveloffset=+1] +//How to access release artifacts +include::modules/deploying/con-deploy-product-downloads.adoc[leveloffset=+1] //Prep for the deployment include::assemblies/deploying/assembly-deploy-tasks-prep.adoc[leveloffset=+1] -//Using Kafka in Kraft mode -include::assemblies/deploying/assembly-kraft-mode.adoc[leveloffset=+1] //Deployment steps using installation artifacts include::assemblies/deploying/assembly-deploy-tasks.adoc[leveloffset=+1] //Deployment using operatorhub.io diff --git a/documentation/modules/configuring/con-config-examples.adoc b/documentation/modules/configuring/con-config-examples.adoc index 4a0318595ed..b9f2a464922 100644 --- a/documentation/modules/configuring/con-config-examples.adoc +++ b/documentation/modules/configuring/con-config-examples.adoc @@ -7,7 +7,7 @@ [role="_abstract"] Further enhance your deployment by incorporating additional supported configuration. -Example configuration files are provided with the downloadable release artifacts from the {ReleaseDownload}.
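The upgrade text above ties zero-downtime rolling updates to topics with a replication factor of at least 3 and evenly distributed partitions. A sketch of a `KafkaTopic` configured along those lines; the topic name, partition count, and owning cluster label are example values.

[source,yaml]
----
# Illustrative highly available topic (example names and sizes).
# replicas: 3 with min.insync.replicas: 2 allows one broker at a time
# to restart during an upgrade without interrupting clients.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 6
  replicas: 3
  config:
    min.insync.replicas: 2
----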
+Example configuration files are included in the Strimzi xref:downloads-{context}[deployment files]. ifdef::Section[] You can also access the example files directly from the link:https://github.com/strimzi/strimzi-kafka-operator/tree/{GithubVersion}/examples/[`examples` directory^]. diff --git a/documentation/modules/configuring/con-config-kafka-bridge.adoc b/documentation/modules/configuring/con-config-kafka-bridge.adoc index fece80dd6f4..85fdb8ac41b 100644 --- a/documentation/modules/configuring/con-config-kafka-bridge.adoc +++ b/documentation/modules/configuring/con-config-kafka-bridge.adoc @@ -12,7 +12,7 @@ In order to prevent issues arising when client consumer requests are processed b Additionally, each independent Kafka Bridge instance must have a replica. A Kafka Bridge instance has its own state which is not shared with other instances. -For a deeper understanding of the Kafka Bridge and its cluster configuration options, refer to the link:{BookURLBridge}[Using the Kafka Bridge^] and the {BookURLConfiguring}[Strimzi Custom Resource API Reference^]. +For a deeper understanding of the Kafka Bridge and its cluster configuration options, refer to the link:{BookURLBridge}[Using the Kafka Bridge^] guide and the link:{BookURLConfiguring}[Strimzi Custom Resource API Reference^]. .Example `KafkaBridge` custom resource configuration [source,yaml,subs="+quotes,attributes"] diff --git a/documentation/modules/cruise-control/proc-configuring-deploying-cruise-control.adoc b/documentation/modules/cruise-control/proc-configuring-deploying-cruise-control.adoc index eb30097b11f..4bb41eb504b 100644 --- a/documentation/modules/cruise-control/proc-configuring-deploying-cruise-control.adoc +++ b/documentation/modules/cruise-control/proc-configuring-deploying-cruise-control.adoc @@ -16,12 +16,12 @@ If brokers are running on nodes with heterogeneous network resources, you can us If an empty object (`{}`) is used for the `cruiseControl` configuration, all properties use their default values. +Strimzi provides xref:config-examples-{context}[example configuration files], which include `Kafka` custom resources with Cruise Control configuration. For more information on the configuration options for Cruise Control, see the link:{BookURLConfiguring}[Strimzi Custom Resource API Reference^]. .Prerequisites -* A Kubernetes cluster -* A running Cluster Operator +* xref:deploying-cluster-operator-str[The Cluster Operator must be deployed.] .Procedure diff --git a/documentation/modules/deploying/con-deploy-options-order.adoc b/documentation/modules/deploying/con-deploy-options-order.adoc deleted file mode 100644 index 311eb2d7330..00000000000 --- a/documentation/modules/deploying/con-deploy-options-order.adoc +++ /dev/null @@ -1,23 +0,0 @@ -// Module included in the following assemblies: -// -// deploying/assembly_deploy-options.adoc - -[id='deploy-options-order-{context}'] -= Order of deployment - -[role="_abstract"] -The required order of deployment to a Kubernetes cluster is as follows: - -. Deploy the Cluster Operator to manage your Kafka cluster -. Deploy the Kafka cluster with the ZooKeeper cluster, and include the Topic Operator and User Operator in the deployment -.
Optionally deploy: -** The Topic Operator and User Operator standalone if you did not deploy them with the Kafka cluster -** Kafka Connect -** Kafka MirrorMaker -** Kafka Bridge -** Components for the monitoring of metrics - -The Cluster Operator creates Kubernetes resources for the components, -such as `Deployment`, `Service`, and `Pod` resources. -The names of the Kubernetes resources are appended with the name specified for a component when it's deployed. -For example, a Kafka cluster named `my-kafka-cluster` has a service named `my-kafka-cluster-kafka`. diff --git a/documentation/modules/deploying/con-deploy-paths.adoc b/documentation/modules/deploying/con-deploy-paths.adoc index edb9ce48b9a..2c537296af7 100644 --- a/documentation/modules/deploying/con-deploy-paths.adoc +++ b/documentation/modules/deploying/con-deploy-paths.adoc @@ -3,22 +3,26 @@ // deploying/assembly_deploy-tasks.adoc [id='con-deploy-paths-{context}'] -= Basic deployment path += Deployment path [role="_abstract"] -You can set up a deployment where Strimzi manages a single Kafka cluster in the same namespace. -You might use this configuration for development or testing. -Or you can use Strimzi in a production environment to manage a number of Kafka clusters in different namespaces. +You can configure a deployment where Strimzi manages a single Kafka cluster in the same namespace, suitable for development or testing. +Alternatively, Strimzi can manage multiple Kafka clusters across different namespaces in a production environment. -The basic deployment path is as follows: +The basic deployment path includes the following steps: -. xref:downloads-{context}[Download the release artifacts] -. Create a Kubernetes namespace in which to deploy the Cluster Operator -. xref:cluster-operator-{context}[Deploy the Cluster Operator] -.. Update the `install/cluster-operator` files to use the namespace created for the Cluster Operator -.. Install the Cluster Operator to watch one, multiple, or all namespaces -. xref:kafka-cluster-{context}[Create a Kafka cluster] +. Create a Kubernetes namespace for the Cluster Operator. +. Deploy the Cluster Operator based on your chosen deployment method. +. Deploy the Kafka cluster, including the Topic Operator and User Operator if desired. +. Optionally, deploy additional components: +** The Topic Operator and User Operator as standalone components, if not deployed with the Kafka cluster +** Kafka Connect +** Kafka MirrorMaker +** Kafka Bridge +** Metrics monitoring components -After which, you can deploy other Kafka components and set up monitoring of your deployment. +The Cluster Operator creates Kubernetes resources such as `Deployment`, `Service`, and `Pod` for each component. +The resource names are appended with the name of the deployed component. +For example, a Kafka cluster named `my-kafka-cluster` will have a service named `my-kafka-cluster-kafka`. diff --git a/documentation/modules/deploying/con-deploy-product-downloads.adoc b/documentation/modules/deploying/con-deploy-product-downloads.adoc index b2af01f0e4e..c6aac0da7c3 100644 --- a/documentation/modules/deploying/con-deploy-product-downloads.adoc +++ b/documentation/modules/deploying/con-deploy-product-downloads.adoc @@ -3,18 +3,22 @@ // deploying/assembly_deploy-tasks-prep.adoc [id='downloads-{context}'] -= Downloading Strimzi release artifacts += Downloading deployment files [role="_abstract"] -To use deployment files to install Strimzi, download and extract the files from the {ReleaseDownload}. 
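The revised deployment path above lists optional components, such as Kafka Connect, that can be added once the Kafka cluster is running, and notes how generated resource names are derived from the component name. A sketch of one such optional component; the Connect cluster name and the bootstrap address (which assumes a Kafka cluster named `my-cluster`) are example values.

[source,yaml]
----
# Illustrative Kafka Connect cluster (example values only).
# The bootstrap address follows the <cluster-name>-kafka-bootstrap pattern
# used for services generated by the Cluster Operator.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
----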
+To deploy Strimzi components using YAML files, download and extract the latest release archive (`{ReleaseFile}`) from the {ReleaseDownload}. -Strimzi release artifacts include sample YAML files to help you deploy the components of Strimzi to Kubernetes, perform common operations, -and configure your Kafka cluster. +The release archive contains sample YAML files for deploying Strimzi components to Kubernetes using `kubectl`. -Use `kubectl` to deploy the Cluster Operator from the `install/cluster-operator` folder of the downloaded ZIP file. -For more information about deploying and configuring the Cluster Operator, see xref:cluster-operator-{context}[]. +Begin by deploying the Cluster Operator from the `install/cluster-operator` directory to watch a single namespace, multiple namespaces, or all namespaces. -In addition, if you want to use standalone installations of the Topic and User Operators with a Kafka cluster that is not managed by the Strimzi Cluster Operator, you can deploy them from the `install/topic-operator` and `install/user-operator` folders. +In the `install` folder, you can also deploy other Strimzi components, including: -NOTE: Strimzi container images are also available through the {DockerRepository}. -However, we recommend that you use the YAML files provided to deploy Strimzi. +* Strimzi administrator roles (`strimzi-admin`) +* Standalone Topic Operator (`topic-operator`) +* Standalone User Operator (`user-operator`) +* Strimzi Drain Cleaner (`drain-cleaner`) + +The `examples` folder xref:config-examples-str[provides examples of Strimzi custom resources] to help you develop your own Kafka configurations. + +NOTE: Strimzi container images are available through the {DockerRepository}, but we recommend using the provided YAML files for deployment. diff --git a/documentation/modules/deploying/con-strimzi-installation-methods.adoc b/documentation/modules/deploying/con-strimzi-installation-methods.adoc index 8fbc12345f5..d27500d043e 100644 --- a/documentation/modules/deploying/con-strimzi-installation-methods.adoc +++ b/documentation/modules/deploying/con-strimzi-installation-methods.adoc @@ -3,10 +3,10 @@ // deploying.adoc (downstream) [id="con-strimzi-installation-methods_{context}"] -= Strimzi installation methods += Deployment methods [role="_abstract"] -You can install Strimzi on Kubernetes {KubernetesVersion} in three ways. +You can deploy Strimzi on Kubernetes {KubernetesVersion} using one of the following methods: [cols="2*",options="header"] |=== @@ -14,28 +14,16 @@ You can install Strimzi on Kubernetes {KubernetesVersion} in three ways. |Installation method |Description -|xref:deploy-tasks_str[Installation artifacts (YAML files)] -a|Download the release artifacts from the {ReleaseDownload}. - -Download the `strimzi-__.zip` or `strimzi-__.tar.gz` archive file. -The archive file contains installation artifacts and example configuration files. - -Deploy the YAML installation artifacts to your Kubernetes cluster using `kubectl`. -You start by deploying the Cluster Operator from `install/cluster-operator` to a single namespace, multiple namespaces, or all namespaces. 
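The installation-method overview above mentions deploying the Cluster Operator to watch a single namespace, multiple namespaces, or all namespaces. As a hedged sketch of how that is typically controlled, the `STRIMZI_NAMESPACE` environment variable in the Cluster Operator `Deployment` under `install/cluster-operator` selects the watched namespaces; the namespace names below are examples, and most required Deployment fields are omitted.

[source,yaml]
----
# Excerpt-style sketch of the Cluster Operator Deployment (not complete).
# STRIMZI_NAMESPACE accepts a single namespace, a comma-separated list,
# or "*" to watch all namespaces (which also needs cluster-wide RBAC).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strimzi-cluster-operator
spec:
  template:
    spec:
      containers:
        - name: strimzi-cluster-operator
          env:
            - name: STRIMZI_NAMESPACE
              value: my-namespace-1,my-namespace-2
----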
- -You can also use the `install/` artifacts to deploy the following: - -* Strimi administrator roles (`strimzi-admin`) -* A standalone Topic Operator (`topic-operator`) -* A standalone User Operator (`user-operator`) -* Strimzi Drain Cleaner (`drain-cleaner`) - +|xref:deploy-tasks_str[Deployment files (YAML files)] +a|xref:downloads-{context}[Download the deployment files] to manually deploy Strimzi components. |xref:deploying-strimzi-from-operator-hub-str[OperatorHub.io] -|Use the *Strimzi Kafka* operator in the OperatorHub.io to deploy the Cluster Operator. You then deploy Strimzi components using custom resources. +|Deploy the Strimzi Cluster Operator from OperatorHub.io, then deploy Strimzi components using custom resources. +ifdef::Section[] |xref:deploying-cluster-operator-helm-chart-str[Helm chart] -|Use a Helm chart to deploy the Cluster Operator. You then deploy Strimzi components using custom resources. +|Use a Helm chart to deploy the Cluster Operator, then deploy Strimzi components using custom resources. +endif::Section[] |=== diff --git a/documentation/modules/deploying/proc-deploy-designating-strimzi-administrators.adoc b/documentation/modules/deploying/proc-deploy-designating-strimzi-administrators.adoc index aa83b29633a..945fcb66c53 100644 --- a/documentation/modules/deploying/proc-deploy-designating-strimzi-administrators.adoc +++ b/documentation/modules/deploying/proc-deploy-designating-strimzi-administrators.adoc @@ -22,6 +22,7 @@ A system administrator can designate Strimzi administrators after the Cluster Op .Prerequisites +* The Strimzi admin deployment files, which are included in the Strimzi xref:downloads-{context}[deployment files]. * The Strimzi Custom Resource Definitions (CRDs) and role-based access control (RBAC) resources to manage the CRDs have been xref:cluster-operator-{context}[deployed with the Cluster Operator]. .Procedure diff --git a/documentation/modules/deploying/proc-deploy-topic-operator-standalone.adoc b/documentation/modules/deploying/proc-deploy-topic-operator-standalone.adoc index 7bc6f342cce..24375a67ca7 100644 --- a/documentation/modules/deploying/proc-deploy-topic-operator-standalone.adoc +++ b/documentation/modules/deploying/proc-deploy-topic-operator-standalone.adoc @@ -22,6 +22,7 @@ In this way, you can use Topic Operators with multiple Kafka clusters. .Prerequisites +* The standalone Topic Operator deployment files, which are included in the Strimzi xref:downloads-{context}[deployment files]. * You are running a Kafka cluster for the Topic Operator to connect to. + As long as the standalone Topic Operator is correctly configured for connection, diff --git a/documentation/modules/deploying/proc-deploy-user-operator-standalone.adoc b/documentation/modules/deploying/proc-deploy-user-operator-standalone.adoc index b32e2a22cbd..07f146ef95d 100644 --- a/documentation/modules/deploying/proc-deploy-user-operator-standalone.adoc +++ b/documentation/modules/deploying/proc-deploy-user-operator-standalone.adoc @@ -24,6 +24,7 @@ In this way, you can use the User Operator with multiple Kafka clusters. .Prerequisites +* The standalone User Operator deployment files, which are included in the Strimzi xref:downloads-{context}[deployment files]. * You are running a Kafka cluster for the User Operator to connect to.
+ As long as the standalone User Operator is correctly configured for connection, diff --git a/documentation/modules/drain-cleaner/proc-drain-cleaner-deploying.adoc b/documentation/modules/drain-cleaner/proc-drain-cleaner-deploying.adoc index 9e1ee50d9b5..218cb6cd729 100644 --- a/documentation/modules/drain-cleaner/proc-drain-cleaner-deploying.adoc +++ b/documentation/modules/drain-cleaner/proc-drain-cleaner-deploying.adoc @@ -16,7 +16,7 @@ For the legacy mode to work, you have to configure the `PodDisruptionBudget` to .Prerequisites -* You have xref:drain-cleaner-prereqs-str[downloaded the Strimzi Drain Cleaner deployment files]. +* The Drain Cleaner deployment files, which are included in the Strimzi xref:downloads-{context}[deployment files]. * You have a highly available Kafka cluster deployment running with Kubernetes worker nodes that you would like to update. * Topics are replicated for high availability. + diff --git a/documentation/shared/attributes.adoc b/documentation/shared/attributes.adoc index 15fa0797de3..9b2fe0c7daf 100644 --- a/documentation/shared/attributes.adoc +++ b/documentation/shared/attributes.adoc @@ -44,6 +44,7 @@ // Source and download links :ReleaseDownload: https://github.com/strimzi/strimzi-kafka-operator/releases[GitHub releases page^] +:ReleaseFile: strimzi-{ProductVersion}.* :supported-configurations: https://strimzi.io/downloads/ //Monitoring links diff --git a/documentation/shared/snip-intro-kafka-deployment.adoc b/documentation/shared/snip-intro-kafka-deployment.adoc deleted file mode 100644 index 1ea4639a99b..00000000000 --- a/documentation/shared/snip-intro-kafka-deployment.adoc +++ /dev/null @@ -1,15 +0,0 @@ -//standard kafka deployment text -Strimzi enables the deployment of Apache Kafka components to a Kubernetes cluster, typically running as clusters for high availability. - -A standard Kafka deployment using Strimzi might include the following components: - -* *Kafka* cluster of broker nodes as the core component -* *Kafka Connect* cluster for external data connections -* *Kafka MirrorMaker* cluster to mirror data to another Kafka cluster -* *Kafka Exporter* to extract additional Kafka metrics data for monitoring -* *Kafka Bridge* to enable HTTP-based communication with Kafka -* *Cruise Control* to rebalance topic partitions across brokers - -Not all of these components are required, though you need Kafka as a minimum for a Strimzi-managed Kafka cluster. -Depending on your use case, you can deploy the additional components as needed. -These components can also be used with Kafka clusters that are not managed by Strimzi.
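Relating to the Drain Cleaner prerequisites above: the `PodDisruptionBudget` setting for legacy mode is normally applied through the template section of the `Kafka` resource. A sketch with `maxUnavailable` set to `0` so that pods are not evicted directly and the Cluster Operator performs the restarts instead; the cluster name is an example value and other required fields are omitted.

[source,yaml]
----
# Illustrative Kafka excerpt (example cluster name, other fields omitted).
# maxUnavailable: 0 blocks voluntary eviction of Kafka pods, so pods on a
# drained node are rolled by the Cluster Operator via the Drain Cleaner.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    template:
      podDisruptionBudget:
        maxUnavailable: 0
----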