docs(review): minor doc updates and improvements from review #10579

Merged 1 commit on Sep 13, 2024
@@ -53,9 +53,9 @@ This is because an OAuth listener appends its own principal builder to the Kafka

Custom principal builders must support peer certificates for authentication, as Strimzi uses these to manage the Kafka cluster.

-ifdef::Downloading[]
+ifdef::Section[]
A custom OAuth principal builder might be identical or very similar to the Strimzi https://github.com/strimzi/strimzi-kafka-oauth/blob/main/oauth-server/src/main/java/io/strimzi/kafka/oauth/server/OAuthKafkaPrincipalBuilder.java[OAuth principal builder].
-endif::Downloading[]
+endif::Section[]

NOTE: link:https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/authenticator/DefaultKafkaPrincipalBuilder.java#L73-L79[Kafka's default principal builder class] supports the building of principals based on the names of peer certificates.
The custom principal builder should provide a principal of type `user` using the name of the SSL peer certificate.
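For context beyond this diff, the following is a minimal sketch of a custom principal builder that meets the peer-certificate requirement described above. It is not the Strimzi or Kafka implementation: the class name is illustrative, and only Kafka's public `KafkaPrincipalBuilder` API is assumed.

[source,java]
----
// Hypothetical sketch only: not the Strimzi OAuth principal builder.
import org.apache.kafka.common.security.auth.AuthenticationContext;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.common.security.auth.KafkaPrincipalBuilder;
import org.apache.kafka.common.security.auth.SslAuthenticationContext;

import javax.net.ssl.SSLPeerUnverifiedException;

public class ExamplePrincipalBuilder implements KafkaPrincipalBuilder {

    @Override
    public KafkaPrincipal build(AuthenticationContext context) {
        // Support peer certificates: build a principal of type "User" from
        // the name of the SSL peer certificate, as Strimzi requires.
        if (context instanceof SslAuthenticationContext) {
            try {
                SslAuthenticationContext ssl = (SslAuthenticationContext) context;
                return new KafkaPrincipal(KafkaPrincipal.USER_TYPE,
                        ssl.session().getPeerPrincipal().getName());
            } catch (SSLPeerUnverifiedException e) {
                throw new IllegalStateException("Peer certificate is not available", e);
            }
        }
        // Handle other contexts (for example, OAuth over SASL) here.
        return KafkaPrincipal.ANONYMOUS;
    }
}
----

A class like this would typically be registered through Kafka's `principal.builder.class` configuration property; any OAuth-specific handling is omitted for brevity.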
8 changes: 4 additions & 4 deletions documentation/assemblies/deploying/assembly-deploy-tasks.adoc
@@ -9,14 +9,14 @@
Having xref:deploy-tasks-prereqs_{context}[prepared your environment for a deployment of Strimzi], you can deploy Strimzi to a Kubernetes cluster.
Use the installation files provided with the release artifacts.

-ifdef::Downloading[]
+ifdef::Section[]
You can deploy Strimzi {ProductVersion} on Kubernetes {KubernetesVersion}.
-endif::Downloading[]
+endif::Section[]

-ifndef::Downloading[]
+ifndef::Section[]
Strimzi is based on {StrimziVersion}.
You can deploy Strimzi {ProductVersion} on OpenShift {OpenShiftVersion}.
-endif::Downloading[]
+endif::Section[]

The steps to deploy Strimzi using the installation files are as follows:

@@ -20,7 +20,9 @@ metrics
│   ├── strimzi-kafka-connect.json
│   ├── strimzi-kafka-exporter.json
│   ├── strimzi-kafka-mirror-maker-2.json
+│   ├── strimzi-kafka-oauth.json
│   ├── strimzi-kafka.json
+│   ├── strimzi-kraft.json
│   ├── strimzi-operators.json
│   └── strimzi-zookeeper.json
├── grafana-install
@@ -38,7 +40,9 @@ metrics
├── kafka-connect-metrics.yaml <10>
├── kafka-cruise-control-metrics.yaml <11>
├── kafka-metrics.yaml <12>
-└── kafka-mirror-maker-2-metrics.yaml <13>
+├── kafka-mirror-maker-2-metrics.yaml <13>
+└── oauth-metrics.yaml <14>

--
<1> Example Grafana dashboards for the different Strimzi components.
<2> Installation file for the Grafana image.
@@ -52,7 +56,7 @@ metrics
<10> Metrics configuration that defines Prometheus JMX Exporter relabeling rules for Kafka Connect.
<11> Metrics configuration that defines Prometheus JMX Exporter relabeling rules for Cruise Control.
<12> Metrics configuration that defines Prometheus JMX Exporter relabeling rules for Kafka and ZooKeeper.
<13> Metrics configuration that defines Prometheus JMX Exporter relabeling rules for Kafka MirrorMaker 2.
+<14> Metrics configuration that defines Prometheus JMX Exporter relabeling rules for OAuth 2.0.

//Example Prometheus metrics files
include::../../modules/metrics/ref-prometheus-metrics-config.adoc[leveloffset=+1]
@@ -725,3 +725,6 @@ spec:
- name: example-pvc-volume
  mountPath: /mnt/data
----

+You can use volumes to store files containing configuration values for a Kafka component and then load those values using a configuration provider.
+For more information, see link:{BookURLDeploying}#assembly-loading-config-with-providers-str[Loading configuration values from external sources^].
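As an illustration of the configuration-provider pattern mentioned in the added text (not part of this PR), the sketch below uses Kafka's built-in `DirectoryConfigProvider` to resolve values from files in a mounted volume. The `/mnt/data` path matches the mount in the example above, while the `password` key is a hypothetical file name.

[source,java]
----
// Illustrative sketch: each file under the directory becomes a key, and
// its contents become the configuration value.
import java.util.Map;
import java.util.Set;

import org.apache.kafka.common.config.ConfigData;
import org.apache.kafka.common.config.provider.DirectoryConfigProvider;

public class VolumeConfigExample {
    public static void main(String[] args) throws Exception {
        DirectoryConfigProvider provider = new DirectoryConfigProvider();
        provider.configure(Map.of());

        // Resolve the contents of the hypothetical file /mnt/data/password.
        ConfigData data = provider.get("/mnt/data", Set.of("password"));
        System.out.println(data.data().get("password"));

        provider.close();
    }
}
----

Once the provider is registered under `config.providers` in a component's configuration, the same lookup is normally written as a placeholder such as `${directory:/mnt/data:password}`.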
4 changes: 2 additions & 2 deletions documentation/modules/configuring/con-config-examples.adoc
@@ -8,10 +8,10 @@
[role="_abstract"]
Further enhance your deployment by incorporating additional supported configuration.
Example configuration files are provided with the downloadable release artifacts from the {ReleaseDownload}.
-ifdef::Downloading[]
+ifdef::Section[]
You can also access the example files directly from the
link:https://github.com/strimzi/strimzi-kafka-operator/tree/{GithubVersion}/examples/[`examples` directory^].
-endif::Downloading[]
+endif::Section[]

The example files include only the essential properties and values for custom resources by default.
You can download and apply the examples using the `kubectl` command-line tool.
12 changes: 6 additions & 6 deletions documentation/modules/deploying/con-deploy-prereqs.adoc
@@ -7,7 +7,7 @@

To deploy Strimzi, you will need the following:

-ifdef::Downloading[]
+ifdef::Section[]
* A Kubernetes {KubernetesVersion} cluster.
+
* The `kubectl` command-line tool is installed and configured to connect to the running cluster.
@@ -26,22 +26,22 @@ In almost all cases the example `kubectl` commands used in this guide can be don
In other words, instead of using:

[source,shell,subs=+quotes]
-kubectl apply -f _your-file_
+kubectl apply -f <your_file>

when using OpenShift you can use:

[source,shell,subs=+quotes]
-oc apply -f _your-file_
+oc apply -f <your_file>

NOTE: As an exception to this general rule, `oc` uses `oc adm` subcommands for _cluster management_ functionality,
whereas `kubectl` does not make this distinction.
For example, the `oc` equivalent of `kubectl taint` is `oc adm taint`.

-endif::Downloading[]
-ifndef::Downloading[]
+endif::Section[]
+ifndef::Section[]
* An OpenShift {OpenShiftVersion} cluster.
+
Strimzi is based on {StrimziVersion}.

* The `oc` command-line tool is installed and configured to connect to the running cluster.
-endif::Downloading[]
+endif::Section[]
@@ -95,9 +95,9 @@ sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginMo
oauth.token.endpoint.uri="<token_endpoint_url>" \ # <4>
oauth.client.id="<client_id>" \ # <5>
oauth.client.secret="<client_secret>" \ # <6>
-oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \ <7>
-oauth.ssl.truststore.password="$STOREPASS" \ <8>
-oauth.ssl.truststore.type="PKCS12" \ <9>
+oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \ # <7>
+oauth.ssl.truststore.password="$STOREPASS" \ # <8>
+oauth.ssl.truststore.type="PKCS12" \ # <9>
oauth.scope="<scope>" \ # <10>
oauth.audience="<audience>" ; # <11>
sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
@@ -138,7 +138,7 @@ sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginMo
sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
----
<1> Path to the client assertion file used for authenticating the client. This file is a private key file as an alternative to the client secret.
-Alternatively, use `oauth.client.assertion` option to specify the client assertion value in clear text.
+Alternatively, use the `oauth.client.assertion` option to specify the client assertion value in clear text.
<2> (Optional) Sometimes you may need to specify the client assertion type. If not specified, the default value is `urn:ietf:params:oauth:client-assertion-type:jwt-bearer`.
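Beyond the scope of this diff, the sketch below shows one way a Java client might consume a properties file like the listings above, with the `# <n>` callout markers removed. The file name, bootstrap address, serializers, and topic are assumptions for illustration, not part of the documented example.

[source,java]
----
// Illustrative sketch: loading OAuth client properties into a producer.
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OauthClientExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(Path.of("client.properties"))) {
            props.load(in); // sasl.jaas.config, oauth.*, and the callback handler
        }
        props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:9093");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value")).get();
        }
    }
}
----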

[id='con-oauth-authentication-password-grants-{context}']
@@ -24,7 +24,7 @@ Alpha and beta stage features are removed if they do not prove to be useful.
It is now permanently enabled and cannot be disabled.
* The `KafkaNodePools` feature gate moved to GA stage in Strimzi 0.41.
It is now permanently enabled and cannot be disabled.
-To use the Kafka Node Pool resources, you still need to use the `strimzi.io/node-pools: enabled` annotation on the `Kafka` custom resources.
+To use `KafkaNodePool` resources, you still need to use the `strimzi.io/node-pools: enabled` annotation on the `Kafka` custom resources.
* The `UnidirectionalTopicOperator` feature gate moved to GA stage in Strimzi 0.41.
It is now permanently enabled and cannot be disabled.
* The `UseKRaft` feature gate moved to GA stage in Strimzi 0.42.
2 changes: 1 addition & 1 deletion documentation/modules/upgrading/con-upgrade-paths.adoc
@@ -13,7 +13,7 @@ An incremental upgrade involves upgrading Strimzi from the previous minor versio

Multi-version upgrade::
A multi-version upgrade involves upgrading an older version of Strimzi to version {ProductVersion} within a single upgrade, skipping one or more intermediate versions.
-For example, upgrading directly from Strimzi 0.30.0 to Strimzi {ProductVersion} is possible.
+Upgrading from any earlier Strimzi version to the latest version is possible.

[id='con-upgrade-paths-kafka-versions-{context}']
== Support for Kafka versions when upgrading
6 changes: 0 additions & 6 deletions documentation/shared/attributes.adoc
@@ -169,14 +169,8 @@
:KafkaBridgeApiVersion: kafka.strimzi.io/v1beta2

// Section enablers
-:InstallationAppendix:
-:Metrics:
-:Downloading:
:Section:

-//EXCLUSIVE TO STRIMZI
-:sectlinks:
-
// Helm Chart - deploy cluster operator
:ChartReleaseCoordinate: strimzi/strimzi-kafka-operator
:ChartRepositoryUrl: https://strimzi.io/charts/
6 changes: 2 additions & 4 deletions documentation/shared/ref-document-conventions.adoc
@@ -3,15 +3,13 @@
// assembly-overview.adoc

[id='document-conventions-{context}']
-= Document Conventions
-
-.User-replaced values
+= Document conventions

User-replaced values, also known as _replaceables_, are shown with angle brackets (< >).
Underscores ( _ ) are used for multi-word values.
If the value refers to code or commands, `monospace` is also used.

-For example, the following code shows that `_<my_namespace>_` must be replaced by the correct namespace name:
+For example, the following code shows that `<my_namespace>` must be replaced by the correct namespace name:

[source, subs="+quotes"]
----