diff --git a/README.md b/README.md
index c27f6766..74533a6a 100644
--- a/README.md
+++ b/README.md
@@ -72,6 +72,7 @@ Copy the following jars into your Kafka libs directory:
     oauth-common/target/kafka-oauth-common-*.jar
     oauth-server/target/kafka-oauth-server-*.jar
+    oauth-keycloak-authorizer/target/kafka-oauth-keycloak-authorizer-*.jar
    oauth-client/target/kafka-oauth-client-*.jar
    oauth-client/target/lib/keycloak-common-*.jar
    oauth-client/target/lib/keycloak-core-*.jar
diff --git a/examples/README-authz.md b/examples/README-authz.md
new file mode 100644
index 00000000..b0191bdb
--- /dev/null
+++ b/examples/README-authz.md
@@ -0,0 +1,419 @@
## Token Based Authorization with Keycloak Authorization Services

Once the Kafka Broker has obtained an access token by using Strimzi Kafka OAuth for authentication, it is possible to use centrally managed authorization rules to enforce access restrictions on Kafka clients.
For this, Strimzi Kafka OAuth supports the use of `Keycloak Authorization Services`.

A custom authorizer has to be configured on the Kafka Broker to take advantage of the Authorization Services REST endpoints available on Keycloak, which provide a list of granted permissions on resources for authenticated users.
The list is fetched once per user session, and then enforced locally on the Kafka Broker, providing fast authorization decisions.


## Building the Example Project

Before using the example, we first need to build the project and prepare resources.

First, change the current directory to `examples/docker`:

    cd examples/docker

Now build the project and prepare resources:

    mvn clean install -f ../..
    mvn clean install

We are now ready to start up the containers and see `Keycloak Authorization Services` in action.


## Starting Up the Containers

First, make sure any existing containers with the same names are removed, otherwise stale configuration from a previous run might be reused:

    docker rm keycloak kafka zookeeper

Let's start up all the containers with authorization configured; we'll then perform a few manual steps, and explain how everything works.

    docker-compose -f compose.yml -f keycloak/compose.yml -f keycloak-import/compose.yml \
                   -f kafka-oauth-strimzi/compose-authz.yml up --build

When everything starts up without errors, we should have one instance of `keycloak` listening on localhost:8080.


## Using Keycloak Admin Console to Configure Authorization

You can log in to the Admin Console by opening `http://localhost:8080/auth/admin` and using `admin` for both the username and the password.

In the upper left corner, under the Keycloak icon, you should see `Master` selected as the current realm.
Moving the mouse pointer over it should reveal two additional realms - `Demo` and `Kafka-authz`.

For this example we are interested in the `kafka-authz` realm.
Selecting it will open the `Realm Settings` for the `kafka-authz` realm.
Next to `Realm Settings` there are other sections we are interested in - `Groups`, `Roles`, `Clients` and `Users`.

Under `Groups` we can see several groups that can be used to mark users as having certain permissions.
Groups are named sets of users, typically used to compartmentalize users geographically or organisationally - into organisations, organisational units, departments, and so on.

In Keycloak the groups can be stored in an LDAP identity provider.
That makes it possible to make a user a member of a group - through a custom LDAP server admin UI, for example - which in turn grants them permissions on Kafka resources.

Under `Users`, click on the `View all users` button and you will see two users defined - `alice` and `bob`. `alice` is a member of the `ClusterManager Group`, and `bob` is a member of the `ClusterManager-cluster2 Group`.
In Keycloak the users can be stored in an LDAP identity provider.

Under `Roles` we can see several realm roles which can be used to mark users or clients as having certain permissions.
Roles are a concept analogous to groups. They are usually used to 'tag' users as playing organisational roles and as having the permissions that pertain to those roles.
In Keycloak the roles cannot be stored in an LDAP identity provider - if that is your requirement, you should use groups instead.

Under `Clients` we can see some additional clients configured - `kafka`, `kafka-cli`, `team-a-client`, `team-b-client`.
The client with client id `kafka` is used by Kafka Brokers to perform the necessary OAuth2 communication for access token validation, and to authenticate to other Kafka Broker instances using OAuth2 client authentication.
This client also contains the Authorization Services resource definitions, policies and authorization scopes used to perform authorization on the Kafka Brokers.

The client with client id `kafka-cli` is a public client that can be used by the Kafka command line tools when authenticating with a username and password to obtain an access token or a refresh token.

Clients `team-a-client` and `team-b-client` are confidential clients representing services with partial access to certain Kafka topics.

The authorization configuration is defined in the `kafka` client under the `Authorization` tab.
This tab becomes visible when `Authorization Enabled` is turned on under the `Settings` tab.


## Authorization Services - Resources, Authorization Scopes, Policies and Permissions

`Keycloak Authorization Services` uses several concepts that together define and apply access control to resources.

`Resources` define _what_ we are protecting from unauthorized access.
Each resource can contain a list of `authorization scopes` - the actions that are available on the resource - so that permission on a resource can be granted for one or more actions only.
`Policies` define the groups of users we want to target with permissions. Users can be targeted based on group membership, assigned roles, or individually.
Finally, `permissions` tie together specific `resources`, `authorization scopes` and `policies` to express that 'specific users U can perform certain actions A on the resource R'.

You can read more about `Keycloak Authorization Services` on the [project's web site](https://www.keycloak.org/docs/latest/authorization_services/index.html).

If we take a look under the `Resources` sub-tab of the `Authorization` tab, we'll see the list of resource definitions.
These are resource specifiers - patterns in a specific format that are used to target policies to specific resources.
The format is quite simple. For example:

- `kafka-cluster:cluster-1,Topic:a_*` ... targets only topics in the kafka cluster 'cluster-1' with names starting with 'a_'

If the `kafka-cluster:XXX` segment is not present, the specifier targets any cluster.

- `Group:x_*` ... targets all consumer groups on any cluster with names starting with 'x_'
The possible resource types mirror the [Kafka authorization model](https://kafka.apache.org/documentation/#security_authz_primitives) (Topic, Group, Cluster, ...).

Under `Authorization Scopes` we can see a list of all the possible actions (Kafka permissions) that can be granted on resources of different types.
It requires some understanding of [Kafka's permissions model](https://kafka.apache.org/documentation/#resources_in_kafka) to know which of these make sense with which resource type (Topic, Group, Cluster, ...).
This list mirrors the Kafka permissions and should be the same for any deployment.

There is an [authorization-scopes.json](../oauth-keycloak-authorizer/etc/authorization-scopes.json) file containing the authorization scopes, which can be imported so that they don't have to be entered manually for every new `Authorization Services` enabled client.
To import `authorization-scopes.json` into a new client, first make sure the new client is `Authorization Enabled` and saved. Then, click on the `Authorization` tab and use the `Import` button to import the file. Afterwards, if you select the `Authorization Scopes` sub-tab you will see the loaded scopes.
For this example the authorization scopes have already been imported as part of the realm import.

Under the `Policies` sub-tab there are filters that match sets of users.
Users can be listed explicitly, or they can be matched based on the roles or groups they are assigned.
Policies can even be defined programmatically using JavaScript, where the logic can take into account the context of the client session - e.g. the client IP address (in this case, the IP of the Kafka client).

Then, finally, there is the `Permissions` sub-tab, which defines 'role bindings' where `resources`, `authorization scopes` and `policies` are tied together to apply a set of permissions on specific resources for certain users.

Each `permission` definition can have a descriptive name which makes it clear what kind of access is granted to which users.
For example:

    Dev Team A can write to topics that start with x_ on any cluster

    Dev Team B can read from topics that start with x_ on any cluster
    Dev Team B can update consumer group offsets that start with x_ on any cluster

    ClusterManager of cluster2 Group has full access to cluster config on cluster2
    ClusterManager of cluster2 Group has full access to consumer groups on cluster2
    ClusterManager of cluster2 Group has full access to topics on cluster2

If we take a closer look at the `Dev Team A can write ...` permission definition, we see that it combines the resource called `Topic:x_*`, the scopes `Describe` and `Write`, and the `Dev Team A` policy.
If we click on the `Dev Team A` policy, we see that it matches all users that have a realm role called `Dev Team A`.

Similarly, the `Dev Team B ...` permissions perform matching using the `Dev Team B` policy, which also uses a realm role to match allowed users - in this case those with the realm role `Dev Team B`.
The `Dev Team B ...` permissions grant users `Describe` and `Read` on the `Topic:x_*` and `Group:x_*` resources, effectively giving matching users and clients the ability to read from topics, and to update the consumed offsets, for topics and consumer groups whose names start with 'x_'.

## Targeting Permissions - Clients and Roles vs. Users and Groups
In Keycloak, confidential clients with 'service accounts' enabled can authenticate to the server in their own name using a clientId and a secret.
This is convenient for microservices, which typically act in their own name rather than as agents of a particular user (as a web site would, for example).
Service accounts can have roles assigned, just like regular users.
They cannot, however, have groups assigned.
As a consequence, if you want to target permissions at microservices using service accounts, you can't use Group policies - you have to use Role policies.
Or, thinking about it the other way around, if you want to limit certain permissions to regular user accounts, where authentication with a username and password is required, you should use Group policies rather than Role policies.
That's what the `permissions` that start with 'ClusterManager' use.
Performing cluster management is usually done interactively - in person - using CLI tools.
It makes sense to require the user to log in before using the resulting access token to authenticate to the Kafka Broker.
In this case the access token represents the specific user, rather than the client application.


## Authorization in Action Using CLI Clients

Before continuing, there is one setting we need to check.
Due to [a little bug in Keycloak](https://issues.redhat.com/browse/KEYCLOAK-12640) the realm is at this point misconfigured, and we have to fix the configuration manually.
Under `Clients` / `kafka` / `Authorization` / `Settings` make sure the `Decision Strategy` is set to `Affirmative`, and NOT to `Unanimous`. Click `Save` after fixing it.

With the configuration now in place, let's create some topics, use a producer and a consumer, and try to perform some management operations using different user and service accounts.

First, we'll spin up a new docker container based on the Kafka image previously built by `docker-compose`, which we'll use to connect to the already running Kafka Broker.

    docker run -ti --rm --name kafka-cli --network docker_default strimzi/example-kafka /bin/sh

Let's try to produce some messages as the client `team-a-client`.
First, we prepare a Kafka client configuration file with the authentication parameters.

```
cat > ~/team-a-client.properties << EOF
security.protocol=SASL_PLAINTEXT
sasl.mechanism=OAUTHBEARER
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.client.id="team-a-client" \
  oauth.client.secret="team-a-client-secret" \
  oauth.token.endpoint.uri="http://keycloak:8080/auth/realms/kafka-authz/protocol/openid-connect/token" ;
sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
EOF
```

In the Keycloak Console you can find which roles are assigned to the `team-a-client` service account by selecting `team-a-client` in the `Clients` section, and then opening the `Service Account Roles` tab for the client.
You should see the `Dev Team A` realm role assigned.

We can now use this configuration with Kafka's CLI tools.
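Before moving on, we can optionally sanity-check these credentials directly against the token endpoint. This quick test is not part of the example proper - it is a minimal sketch assuming `curl` is available inside the container:

```
curl -s -X POST http://keycloak:8080/auth/realms/kafka-authz/protocol/openid-connect/token \
    -d "grant_type=client_credentials" \
    -d "client_id=team-a-client" \
    -d "client_secret=team-a-client-secret"
```

If the credentials are valid, the response is a JSON document containing an `access_token` attribute.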
Make sure the necessary classes are on the classpath:

    export CLASSPATH=/opt/kafka/libs/strimzi/*:$CLASSPATH


### Producing Messages

Let's try to produce some messages to the topic 'my-topic':

```
bin/kafka-console-producer.sh --broker-list kafka:9092 --topic my-topic \
  --producer.config=$HOME/team-a-client.properties
First message
```

When we press `Enter` to push the first message, we receive a `Not authorized to access topics: [my-topic]` error.

`team-a-client` has the `Dev Team A` role, which gives it permission to do anything on topics that start with 'a_', but only to write to topics that start with 'x_'.
The topic named `my-topic` matches neither of those.

Use CTRL-C to exit the CLI application, and let's try to write to the topic `a_messages`.

```
bin/kafka-console-producer.sh --broker-list kafka:9092 --topic a_messages \
  --producer.config ~/team-a-client.properties
First message
Second message
```

We may see some unrelated warnings, but looking at the Kafka container log there is DEBUG level output saying 'Authorization GRANTED'.

Use CTRL-C to exit the CLI application.


### Consuming Messages

Let's now try to consume the messages we have produced.

    bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic a_messages \
      --from-beginning --consumer.config ~/team-a-client.properties

This gives us an error like: 'Not authorized to access group: console-consumer-55841'.

The reason is that we have to override the default consumer group name - `Dev Team A` only has access to consumer groups whose names start with 'a_'.
Let's set a custom consumer group name that starts with 'a_':

    bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic a_messages \
      --from-beginning --consumer.config ~/team-a-client.properties --group a_consumer_group_1

We should now receive all the messages for the 'a_messages' topic, after which the client blocks, waiting for more messages.

Use CTRL-C to exit.


### Using Kafka's CLI Administration Tools

Let's now list the topics:

    bin/kafka-topics.sh --bootstrap-server kafka:9092 --command-config ~/team-a-client.properties --list

We get one topic listed: 'a_messages'.

Let's try to list the consumer groups:

    bin/kafka-consumer-groups.sh --bootstrap-server kafka:9092 \
      --command-config ~/team-a-client.properties --list

Similarly to listing topics, we get one consumer group listed: `a_consumer_group_1`.

There are more CLI administrative tools. For example, we can try to get the default cluster configuration:

    bin/kafka-configs.sh --bootstrap-server kafka:9092 --command-config ~/team-a-client.properties \
      --entity-type brokers --describe --entity-default

But that will fail with a `Cluster authorization failed.` error, because this operation requires cluster level permissions, which `team-a-client` does not have.
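To see exactly which identity and roles the broker receives for this client, we can fetch its access token and decode it with the `jwt.sh` helper script that this example installs into `/opt/kafka`. This is a diagnostic sketch, not a required step - it assumes the script is in the current directory and that `curl` is available:

```
export TOKEN_ENDPOINT=http://keycloak:8080/auth/realms/kafka-authz/protocol/openid-connect/token

# Request a token using the client credentials grant and extract the access_token attribute
ACCESS_TOKEN=$(curl -s -X POST $TOKEN_ENDPOINT \
    -d "grant_type=client_credentials&client_id=team-a-client&client_secret=team-a-client-secret" \
    | awk -F "\"access_token\":\"" '{printf $2}' | awk -F "\"" '{printf $1}')

# Print the decoded JWT header and payload
./jwt.sh $ACCESS_TOKEN
```

The decoded payload should show a `realm_access` claim listing the `Dev Team A` role - and nothing that would grant cluster level permissions.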
### Client with Different Permissions

Let's prepare a configuration for `team-b-client`:

```
cat > ~/team-b-client.properties << EOF
security.protocol=SASL_PLAINTEXT
sasl.mechanism=OAUTHBEARER
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.client.id="team-b-client" \
  oauth.client.secret="team-b-client-secret" \
  oauth.token.endpoint.uri="http://keycloak:8080/auth/realms/kafka-authz/protocol/openid-connect/token" ;
sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
EOF
```

If we look at the `team-b-client` client configuration in Keycloak, under 'Service Account Roles' we can see that it has the `Dev Team B` realm role assigned.
Looking in the Keycloak Console at the `kafka` client's `Authorization` tab, where the `Permissions` are listed, we can see the permissions that start with 'Dev Team B ...'.
These match the users and service accounts that have the 'Dev Team B' realm role assigned to them.
The `Dev Team B` users have full access to topics beginning with 'b_' on the Kafka cluster `cluster2` (which is the designated cluster name of the demo cluster we brought up), and read access to topics that start with 'x_'.

Let's try to produce some messages to the topic 'a_messages' as `team-b-client`:

```
bin/kafka-console-producer.sh --broker-list kafka:9092 --topic a_messages \
  --producer.config ~/team-b-client.properties
Message 1
```

We get a `Not authorized to access topics: [a_messages]` error, as expected. Let's try to produce to the topic `b_messages`:

```
bin/kafka-console-producer.sh --broker-list kafka:9092 --topic b_messages \
  --producer.config ~/team-b-client.properties
Message 1
Message 2
Message 3
```

This should work fine.

What about producing to the topic `x_messages`? `team-b-client` is only supposed to be able to read from such a topic.

```
bin/kafka-console-producer.sh --broker-list kafka:9092 --topic x_messages \
  --producer.config ~/team-b-client.properties
Message 1
```

We get a `Not authorized to access topics: [x_messages]` error, as expected.
Client `team-a-client`, on the other hand, should be able to write to such a topic:

```
bin/kafka-console-producer.sh --broker-list kafka:9092 --topic x_messages \
  --producer.config ~/team-a-client.properties
Message 1
```

However, we again receive `Not authorized to access topics: [x_messages]`. What's going on?
The reason for the failure is that while `team-a-client` can write to the `x_messages` topic, it does not have permission to create the topic if it does not yet exist.

We now need a power user that can create a topic with all the proper settings - like the right number of partitions and replicas.


### Power User Can Do Anything

Let's create a configuration for the user `bob`, who has full permissions to manage everything on the Kafka cluster `cluster2`.

First, `bob` will authenticate to the Keycloak server with his username and password and get a refresh token.

```
export TOKEN_ENDPOINT=http://keycloak:8080/auth/realms/kafka-authz/protocol/openid-connect/token
REFRESH_TOKEN=$(./oauth.sh -q bob)
```

This will prompt you for a password. Type 'bob-password'.

We can inspect the refresh token:

    ./jwt.sh $REFRESH_TOKEN

By default this is a long-lived refresh token that does not expire - `oauth.sh` requests the `offline_access` scope by default, so Keycloak issues an offline token rather than an ordinary refresh token.
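In the decoded payload, the offline nature of the token is visible in its claims - something like the following, abridged, where `typ` marks the token type and `azp` the client it was issued to (the exact set of claims depends on the Keycloak version):

    {"typ":"Offline", "azp":"kafka-cli", "scope":"offline_access", ...}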
Now we will create the configuration file for `bob`:

```
cat > ~/bob.properties << EOF
security.protocol=SASL_PLAINTEXT
sasl.mechanism=OAUTHBEARER
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.refresh.token="$REFRESH_TOKEN" \
  oauth.client.id="kafka-cli" \
  oauth.token.endpoint.uri="http://keycloak:8080/auth/realms/kafka-authz/protocol/openid-connect/token" ;
sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
EOF
```

Note that we use the `kafka-cli` public client for the `oauth.client.id` in the `sasl.jaas.config`.
Since that is a public client, it does not require a secret.
We can use it because we authenticate with a token directly - the refresh token is used behind the scenes to request an access token, which is then sent to the Kafka broker for authentication - and the actual user authentication already took place when the refresh token was obtained.


Let's now try to create the `x_messages` topic:

    bin/kafka-topics.sh --bootstrap-server kafka:9092 --command-config ~/bob.properties \
      --topic x_messages --create --replication-factor 1 --partitions 1

The operation should succeed. We can list the topics:

    bin/kafka-topics.sh --bootstrap-server kafka:9092 --command-config ~/bob.properties --list

If we try the same as `team-a-client` or `team-b-client`, we will get different responses.

    bin/kafka-topics.sh --bootstrap-server kafka:9092 --command-config ~/team-a-client.properties --list
    bin/kafka-topics.sh --bootstrap-server kafka:9092 --command-config ~/team-b-client.properties --list

The `Dev Team A` and `Dev Team B` roles both have `Describe` permission on topics that start with 'x_', but they can't see the other team's topics, as they don't have `Describe` permission on them.

We can now again try to produce to the topic as `team-a-client`.

```
bin/kafka-console-producer.sh --broker-list kafka:9092 --topic x_messages \
  --producer.config ~/team-a-client.properties
Message 1
Message 2
Message 3
```

This works.

If we try the same as `team-b-client`, it should fail.

```
bin/kafka-console-producer.sh --broker-list kafka:9092 --topic x_messages \
  --producer.config ~/team-b-client.properties
Message 4
Message 5
```

But `team-b-client` should be able to consume messages from the `x_messages` topic:

    bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic x_messages \
      --from-beginning --consumer.config ~/team-b-client.properties --group x_consumer_group_b

Whereas `team-a-client` does not have permission to read, even though it can write:

    bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic x_messages \
      --from-beginning --consumer.config ~/team-a-client.properties --group x_consumer_group_a

We get a `Not authorized to access group: x_consumer_group_a` error.
What if we try to use a consumer group name that starts with 'a_'?

    bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic x_messages \
      --from-beginning --consumer.config ~/team-a-client.properties --group a_consumer_group_a

We now get a different error: `Not authorized to access topics: [x_messages]`.

It just won't work - `Dev Team A` has no `Read` access on topics that start with 'x_'.
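This is a good point to peek behind the curtain. The `KeycloakRBACAuthorizer` on the broker makes its decisions from the list of grants it fetches from Keycloak's `Authorization Services` endpoints, and we can ask Keycloak for the same list manually. The following is a sketch based on Keycloak's documented UMA token endpoint parameters - it reuses the `TOKEN_ENDPOINT` variable from above and first fetches a fresh access token for `bob`:

```
ACCESS_TOKEN=$(./oauth.sh -q --access bob)

# Ask Keycloak for all permissions granted to this token for the 'kafka' client
curl -s -X POST $TOKEN_ENDPOINT \
    -H "Authorization: Bearer $ACCESS_TOKEN" \
    -d "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" \
    -d "audience=kafka" \
    -d "response_mode=permissions"
```

The response should be a JSON array of resource names and scopes - the same grants the authorizer caches for the duration of the session.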
User `bob` should have no problem reading from or writing to any topic:

    bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic x_messages \
      --from-beginning --consumer.config ~/bob.properties

diff --git a/examples/consumer/src/main/java/io/strimzi/examples/consumer/ExampleConsumer.java b/examples/consumer/src/main/java/io/strimzi/examples/consumer/ExampleConsumer.java
index 92810d00..80f6aada 100644
--- a/examples/consumer/src/main/java/io/strimzi/examples/consumer/ExampleConsumer.java
+++ b/examples/consumer/src/main/java/io/strimzi/examples/consumer/ExampleConsumer.java
@@ -22,7 +22,7 @@ public class ExampleConsumer {

     public static void main(String[] args) {

-        String topic = "Topic1";
+        String topic = "a_Topic1";

         Properties defaults = new Properties();
         Config external = new Config();
@@ -50,8 +50,8 @@ public static void main(String[] args) {
         final String accessToken = external.getValue(ClientConfig.OAUTH_ACCESS_TOKEN, null);

         if (accessToken == null) {
-            defaults.setProperty(Config.OAUTH_CLIENT_ID, "kafka-producer-client");
-            defaults.setProperty(Config.OAUTH_CLIENT_SECRET, "kafka-producer-client-secret");
+            defaults.setProperty(Config.OAUTH_CLIENT_ID, "kafka-consumer-client");
+            defaults.setProperty(Config.OAUTH_CLIENT_SECRET, "kafka-consumer-client-secret");
         }

         // Use 'preferred_username' rather than 'sub' for principal name
@@ -94,7 +94,7 @@ private static Properties buildConsumerConfig() {
         p.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
         p.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

-        p.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "consumer-group");
+        p.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "a_consumer-group");
         p.setProperty(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "10");
         p.setProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
diff --git a/examples/docker/README.md b/examples/docker/README.md
index 57cfbdd7..b90ef80c 100644
--- a/examples/docker/README.md
+++ b/examples/docker/README.md
@@ -124,7 +124,7 @@ To regenerate Root CA run the following:

 You also have to regenerate keycloak and hydra server certificates otherwise clients won't be able to connect any more.

-    cd /opt/jboss/keycloak/certificates
+    cd keycloak/certificates
     rm *.srl *.p12 cert-*
     ./gen-keycloak-certs.sh

@@ -132,6 +132,12 @@ You also have to regenerate keycloak and hydra server certificates otherwise cli
     rm *.srl *.crt *.key *.csr
     ./gen-hydra-certs.sh

+And if the CA has changed, then the Kafka broker certificates have to be regenerated as well:
+
+    cd kafka-oauth-strimzi/kafka/certificates
+    rm *.p12
+    ./gen-kafka-certs.sh
+
 And finally make sure to rebuild the docker module again and re-run `docker-compose` to ensure new keys and certificates are used everywhere.
mvn clean install diff --git a/examples/docker/kafka-oauth-strimzi/compose-authz.yml b/examples/docker/kafka-oauth-strimzi/compose-authz.yml new file mode 100644 index 00000000..212f8355 --- /dev/null +++ b/examples/docker/kafka-oauth-strimzi/compose-authz.yml @@ -0,0 +1,86 @@ +version: '3.5' + +services: + + #################################### KAFKA BROKER #################################### + kafka: + image: strimzi/example-kafka + build: kafka-oauth-strimzi/kafka/target + container_name: kafka + ports: + - 9092:9092 + + # javaagent debug port + - 5006:5006 + + environment: + + # Java Debug + KAFKA_DEBUG: y + DEBUG_SUSPEND_FLAG: y + JAVA_DEBUG_PORT: 5006 + + # + # KAFKA Configuration + # + LOG_DIR: /home/kafka/logs + + KAFKA_BROKER_ID: 1 + KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 + KAFKA_LISTENERS: REPLICATION://kafka:9091,CLIENT://kafka:9092 + KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: REPLICATION:SSL,CLIENT:SASL_PLAINTEXT + KAFKA_SASL_ENABLED_MECHANISMS: OAUTHBEARER + KAFKA_INTER_BROKER_LISTENER_NAME: REPLICATION + KAFKA_SSL_SECURE_RANDOM_IMPLEMENTATION: SHA1PRNG + KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: "" + + KAFKA_LISTENER_NAME_REPLICATION_SSL_KEYSTORE_LOCATION: /tmp/kafka/cluster.keystore.p12 + KAFKA_LISTENER_NAME_REPLICATION_SSL_KEYSTORE_PASSWORD: Z_pkTh9xgZovK4t34cGB2o6afT4zZg0L + KAFKA_LISTENER_NAME_REPLICATION_SSL_KEYSTORE_TYPE: PKCS12 + KAFKA_LISTENER_NAME_REPLICATION_SSL_TRUSTSTORE_LOCATION: /tmp/kafka/cluster.truststore.p12 + KAFKA_LISTENER_NAME_REPLICATION_SSL_TRUSTSTORE_PASSWORD: Z_pkTh9xgZovK4t34cGB2o6afT4zZg0L + KAFKA_LISTENER_NAME_REPLICATION_SSL_TRUSTSTORE_TYPE: PKCS12 + KAFKA_LISTENER_NAME_REPLICATION_SSL_CLIENT_AUTH: required + + KAFKA_LISTENER_NAME_CLIENT_OAUTHBEARER_SASL_JAAS_CONFIG: "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;" + KAFKA_LISTENER_NAME_CLIENT_OAUTHBEARER_SASL_LOGIN_CALLBACK_HANDLER_CLASS: io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler + KAFKA_LISTENER_NAME_CLIENT_OAUTHBEARER_SASL_SERVER_CALLBACK_HANDLER_CLASS: io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler + + KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1 + + KAFKA_AUTHORIZER_CLASS_NAME: io.strimzi.kafka.oauth.server.authorizer.KeycloakRBACAuthorizer + KAFKA_PRINCIPAL_BUILDER_CLASS: io.strimzi.kafka.oauth.server.authorizer.JwtKafkaPrincipalBuilder + + KAFKA_STRIMZI_AUTHORIZATION_KAFKA_CLUSTER_NAME: cluster2 + KAFKA_STRIMZI_AUTHORIZATION_DELEGATE_TO_KAFKA_ACL: "true" + KAFKA_SUPER_USERS: User:CN=my-cluster-kafka,O=io.strimzi;User:CN=my-cluster-entity-operator,O=io.strimzi;User:CN=my-cluster-kafka-exporter,O=io.strimzi;User:service-account-kafka + + # + # Strimzi OAuth Configuration + # + + # Authentication config + OAUTH_CLIENT_ID: "kafka" + OAUTH_CLIENT_SECRET: "kafka-secret" + OAUTH_TOKEN_ENDPOINT_URI: "http://${KEYCLOAK_HOST:-keycloak}:8080/auth/realms/${REALM:-kafka-authz}/protocol/openid-connect/token" + + # Validation config + OAUTH_VALID_ISSUER_URI: "http://${KEYCLOAK_HOST:-keycloak}:8080/auth/realms/${REALM:-kafka-authz}" + OAUTH_JWKS_ENDPOINT_URI: "http://${KEYCLOAK_HOST:-keycloak}:8080/auth/realms/${REALM:-kafka-authz}/protocol/openid-connect/certs" + #OAUTH_INTROSPECTION_ENDPOINT_URI: "http://${KEYCLOAK_HOST}:8080/auth/realms/${REALM:-demo}/protocol/openid-connect/token/introspect" + + # username extraction from JWT token claim + OAUTH_USERNAME_CLAIM: preferred_username + + # For start.sh script to know where the keycloak is listening + KEYCLOAK_HOST: ${KEYCLOAK_HOST:-keycloak} + REALM: ${REALM:-kafka-authz} + + 
zookeeper: + image: strimzi/example-zookeeper + build: kafka-oauth-strimzi/zookeeper/target + container_name: zookeeper + ports: + - 2181:2181 + environment: + LOG_DIR: /home/kafka/logs \ No newline at end of file diff --git a/examples/docker/kafka-oauth-strimzi/kafka/Dockerfile b/examples/docker/kafka-oauth-strimzi/kafka/Dockerfile index 8be4975b..5b0774d7 100644 --- a/examples/docker/kafka-oauth-strimzi/kafka/Dockerfile +++ b/examples/docker/kafka-oauth-strimzi/kafka/Dockerfile @@ -1,8 +1,9 @@ -FROM strimzi/kafka:latest-kafka-2.3.0 +FROM strimzi/kafka:latest-kafka-2.4.0 COPY libs/* /opt/kafka/libs/strimzi/ COPY config/* /opt/kafka/config/ COPY *.sh /opt/kafka/ +COPY certificates/*.p12 /tmp/kafka/ USER root RUN chmod +x /opt/kafka/*.sh diff --git a/examples/docker/kafka-oauth-strimzi/kafka/certificates/cluster.keystore.p12 b/examples/docker/kafka-oauth-strimzi/kafka/certificates/cluster.keystore.p12 new file mode 100644 index 00000000..abfdbcb2 Binary files /dev/null and b/examples/docker/kafka-oauth-strimzi/kafka/certificates/cluster.keystore.p12 differ diff --git a/examples/docker/kafka-oauth-strimzi/kafka/certificates/cluster.truststore.p12 b/examples/docker/kafka-oauth-strimzi/kafka/certificates/cluster.truststore.p12 new file mode 100644 index 00000000..58ff7c59 Binary files /dev/null and b/examples/docker/kafka-oauth-strimzi/kafka/certificates/cluster.truststore.p12 differ diff --git a/examples/docker/kafka-oauth-strimzi/kafka/certificates/gen-kafka-certs.sh b/examples/docker/kafka-oauth-strimzi/kafka/certificates/gen-kafka-certs.sh new file mode 100755 index 00000000..d3266f9c --- /dev/null +++ b/examples/docker/kafka-oauth-strimzi/kafka/certificates/gen-kafka-certs.sh @@ -0,0 +1,21 @@ +#!/bin/sh + +set -e + +STOREPASS=Z_pkTh9xgZovK4t34cGB2o6afT4zZg0L + +echo "#### Generate broker keystore" +keytool -keystore cluster.keystore.p12 -alias localhost -validity 380 -genkey -keyalg RSA -ext SAN=DNS:kafka -dname "CN=my-cluster-kafka,O=io.strimzi" -deststoretype pkcs12 -storepass $STOREPASS -keypass $STOREPASS + +echo "#### Add the CA to the brokers’ truststore" +keytool -keystore cluster.truststore.p12 -deststoretype pkcs12 -storepass $STOREPASS -alias CARoot -importcert -file ../../../certificates/ca.crt -noprompt + +echo "#### Export the certificate from the keystore" +keytool -keystore cluster.keystore.p12 -storetype pkcs12 -alias localhost -certreq -file cert-file -storepass $STOREPASS + +echo "#### Sign the certificate with the CA" +openssl x509 -req -CA ../../../certificates/ca.crt -CAkey ../../../certificates/ca.key -in cert-file -out cert-signed -days 400 -CAcreateserial -passin pass:$STOREPASS + +echo "#### Import the CA and the signed certificate into the broker keystore" +keytool -keystore cluster.keystore.p12 -deststoretype pkcs12 -alias CARoot -import -file ../../../certificates/ca.crt -storepass $STOREPASS -noprompt +keytool -keystore cluster.keystore.p12 -deststoretype pkcs12 -alias localhost -import -file cert-signed -storepass $STOREPASS -noprompt diff --git a/examples/docker/kafka-oauth-strimzi/kafka/jwt.sh b/examples/docker/kafka-oauth-strimzi/kafka/jwt.sh new file mode 100644 index 00000000..4e3a62d6 --- /dev/null +++ b/examples/docker/kafka-oauth-strimzi/kafka/jwt.sh @@ -0,0 +1,14 @@ +#!/bin/bash + +if [ "$1" == "" ] || [ "$1" == "--help" ]; then + echo "Usage: $0 [JSON_WEB_TOKEN]" + exit 1 +fi + +IFS='.' 
read -r -a PARTS <<< "$1"

echo "Head: "
echo $(echo -n "${PARTS[0]}" | base64 -d 2>/dev/null)
echo
echo "Payload: "
echo $(echo -n "${PARTS[1]}" | base64 -d 2>/dev/null)
\ No newline at end of file
diff --git a/examples/docker/kafka-oauth-strimzi/kafka/oauth.sh b/examples/docker/kafka-oauth-strimzi/kafka/oauth.sh
new file mode 100644
index 00000000..75206edf
--- /dev/null
+++ b/examples/docker/kafka-oauth-strimzi/kafka/oauth.sh
@@ -0,0 +1,121 @@
#!/bin/bash

usage() {
  echo "Usage: $0 [USERNAME] [PASSWORD] [ARGUMENTS] ..."
  echo
  echo "$0 is a tool for obtaining an access token or a refresh token for the user or the client."
  echo
  echo "  USERNAME                          The username for user authentication"
  echo "  PASSWORD                          The password for user authentication (prompted for if not specified)"
  echo
  echo "  If USERNAME and PASSWORD are not specified, client credentials as specified by --client-id and --secret will be used for authentication."
  echo
  echo "  ARGUMENTS:"
  echo "    --quiet, -q                     No informational outputs"
  echo "    --insecure                      Allow http:// in token endpoint url"
  echo "    --access                        Return access_token rather than refresh_token"
  echo "    --endpoint TOKEN_ENDPOINT_URL   Authorization server token endpoint"
  echo "    --client-id CLIENT_ID           Client id for client authentication - must be configured on authorization server"
  echo "    --secret CLIENT_SECRET          Secret to authenticate the client"
  echo "    --scopes SCOPES                 Space separated list of scopes to request - default value: offline_access"
}


CLAIM=refresh_token
GRANT_TYPE=password
DEFAULT_SCOPES=offline_access

while [ $# -gt 0 ]
do
  case "$1" in
    "-q" | "--quiet")
      QUIET=1
      ;;
    --endpoint)
      shift
      TOKEN_ENDPOINT="$1"
      ;;
    --insecure)
      INSECURE=1
      ;;
    --access)
      CLAIM=access_token
      DEFAULT_SCOPES=""
      ;;
    --client-id)
      shift
      CLIENT_ID="$1"
      ;;
    --secret)
      shift
      CLIENT_SECRET="$1"
      ;;
    --scopes)
      shift
      SCOPES="$1"
      ;;
    --help)
      usage
      exit 1
      ;;
    *)
      if [ "$UNAME" == "" ]; then
        UNAME="$1"
      elif [ "$PASS" == "" ]; then
        PASS="$1"
      else
        >&2 echo "Unexpected argument!"
        exit 1
      fi
      ;;
  esac
  shift
done

if [ "$TOKEN_ENDPOINT" == "" ]; then
  >&2 echo "ENV variable TOKEN_ENDPOINT not set."
  exit 1
fi

if [ "$UNAME" != "" ] && [ "$PASS" == "" ]; then
  >&2 read -s -p "Password: " PASS
  >&2 echo
fi

if [ "$UNAME" == "" ] && [ "$CLIENT_ID" == "" ]; then
  echo "USERNAME not specified. Use --client-id and --secret to authenticate with client credentials."
  exit 1
fi

if [ "$CLIENT_ID" == "" ]; then
  [ "$QUIET" == "" ] && >&2 echo "ENV var CLIENT_ID not set. Using default value: kafka-cli"
  CLIENT_ID=kafka-cli
fi

if [ "$UNAME" == "" ]; then
  GRANT_TYPE=client_credentials
else
  USER_PASS_CLIENT="&username=${UNAME}&password=${PASS}&client_id=${CLIENT_ID}"
fi

if [ "$SCOPES" == "" ] && [ "$DEFAULT_SCOPES" != "" ]; then
  [ "$QUIET" == "" ] && >&2 echo "ENV var SCOPES not set.
Using default value: ${DEFAULT_SCOPES}" + SCOPES="${DEFAULT_SCOPES}" +fi + +if [ "$CLIENT_SECRET" != "" ]; then + AUTH_VALUE=$(echo -n "$CLIENT_ID:$CLIENT_SECRET" | base64) + AUTHORIZATION="-H 'Authorization: Basic ""$AUTH_VALUE'" +fi + +[ "$QUIET" == "" ] && >&2 echo curl -s -X POST $TOKEN_ENDPOINT \ + $AUTHORIZATION \ + -H 'Content-Type: application/x-www-form-urlencoded' \ + -d "grant_type=${GRANT_TYPE}${USER_PASS_CLIENT}&scope=${SCOPES}" + +result=$(curl -s -X POST $TOKEN_ENDPOINT \ + $AUTHORIZATION \ + -H 'Content-Type: application/x-www-form-urlencoded' \ + -d "grant_type=${GRANT_TYPE}${USER_PASS_CLIENT}&scope=${SCOPES}") + +echo $result | awk -F "$CLAIM\":\"" '{printf $2}' | awk -F "\"" '{printf $1}' diff --git a/examples/docker/kafka-oauth-strimzi/kafka/pom.xml b/examples/docker/kafka-oauth-strimzi/kafka/pom.xml index 6e89f524..ade45446 100644 --- a/examples/docker/kafka-oauth-strimzi/kafka/pom.xml +++ b/examples/docker/kafka-oauth-strimzi/kafka/pom.xml @@ -33,9 +33,12 @@ functions.sh start.sh start_with_hydra.sh + jwt.sh + oauth.sh simple_kafka_config.sh Dockerfile config/ + certificates/ false @@ -70,6 +73,10 @@ io.strimzi kafka-oauth-common + + io.strimzi + kafka-oauth-keycloak-authorizer + org.keycloak keycloak-core @@ -90,4 +97,4 @@ - \ No newline at end of file + diff --git a/examples/docker/kafka-oauth-strimzi/kafka/start.sh b/examples/docker/kafka-oauth-strimzi/kafka/start.sh index d1c84a76..12d1a4b2 100755 --- a/examples/docker/kafka-oauth-strimzi/kafka/start.sh +++ b/examples/docker/kafka-oauth-strimzi/kafka/start.sh @@ -12,7 +12,13 @@ wait_for_url $URI "Waiting for Keycloak to start" wait_for_url "$URI/realms/${REALM:-demo}" "Waiting for realm '${REALM}' to be available" -./simple_kafka_config.sh | tee /tmp/strimzi.properties +if [ "$SERVER_PROPERTIES_FILE" == "" ]; then + echo "Generating a new strimzi.properties file using ENV vars" + ./simple_kafka_config.sh | tee /tmp/strimzi.properties +else + echo "Using provided server.properties file: $SERVER_PROPERTIES_FILE" + cp $SERVER_PROPERTIES_FILE /tmp/strimzi.properties +fi # add Strimzi kafka-oauth-* jars and their dependencies to classpath export CLASSPATH="/opt/kafka/libs/strimzi/*:$CLASSPATH" diff --git a/examples/docker/kafka-oauth-strimzi/zookeeper/Dockerfile b/examples/docker/kafka-oauth-strimzi/zookeeper/Dockerfile index 2fad64b3..0b246443 100644 --- a/examples/docker/kafka-oauth-strimzi/zookeeper/Dockerfile +++ b/examples/docker/kafka-oauth-strimzi/zookeeper/Dockerfile @@ -1,4 +1,4 @@ -FROM strimzi/zookeeper:0.11.4-kafka-2.1.0 +FROM strimzi/kafka:latest-kafka-2.4.0 COPY start.sh /opt/kafka/ COPY simple_zk_config.sh /opt/kafka/ diff --git a/examples/docker/keycloak-import/realms/demo.json b/examples/docker/keycloak-import/realms/demo-realm.json similarity index 100% rename from examples/docker/keycloak-import/realms/demo.json rename to examples/docker/keycloak-import/realms/demo-realm.json diff --git a/examples/docker/keycloak-import/realms/kafka-authz-realm.json b/examples/docker/keycloak-import/realms/kafka-authz-realm.json new file mode 100644 index 00000000..2b62aa2e --- /dev/null +++ b/examples/docker/keycloak-import/realms/kafka-authz-realm.json @@ -0,0 +1,662 @@ +{ + "realm": "kafka-authz", + "accessTokenLifespan": 300, + "ssoSessionIdleTimeout": 864000, + "ssoSessionMaxLifespan": 864000, + "enabled": true, + "sslRequired": "external", + "roles": { + "realm": [ + { + "name": "Dev Team A", + "description": "Developer on Dev Team A" + }, + { + "name": "Dev Team B", + "description": "Developer on Dev Team B" + 
}, + { + "name": "Ops Team", + "description": "Operations team member" + } + ], + "client": { + "team-a-client": [], + "team-b-client": [], + "kafka-cli": [], + "kafka": [ + { + "name": "uma_protection", + "clientRole": true + } + ] + } + }, + "groups" : [ + { + "name" : "ClusterManager Group", + "path" : "/ClusterManager Group" + }, { + "name" : "ClusterManager-cluster2 Group", + "path" : "/ClusterManager-cluster2 Group" + }, { + "name" : "Ops Team Group", + "path" : "/Ops Team Group" + } + ], + "users": [ + { + "username" : "alice", + "enabled" : true, + "totp" : false, + "emailVerified" : true, + "firstName" : "Alice", + "email" : "alice@strimzi.io", + "credentials" : [ { + "type" : "password", + "secretData" : "{\"value\":\"KqABIiReBuRWbP4pBct3W067pNvYzeN7ILBV+8vT8nuF5cgYs2fdl2QikJT/7bGTW/PBXg6CYLwJQFYrBK9MWg==\",\"salt\":\"EPgscX9CQz7UnuZDNZxtMw==\"}", + "credentialData" : "{\"hashIterations\":27500,\"algorithm\":\"pbkdf2-sha256\"}" + } ], + "disableableCredentialTypes" : [ ], + "requiredActions" : [ ], + "realmRoles" : [ "offline_access", "uma_authorization" ], + "clientRoles" : { + "account" : [ "view-profile", "manage-account" ] + }, + "groups" : [ "/ClusterManager Group" ] + }, { + "username" : "bob", + "enabled" : true, + "totp" : false, + "emailVerified" : true, + "firstName" : "Bob", + "email" : "bob@strimzi.io", + "credentials" : [ { + "type" : "password", + "secretData" : "{\"value\":\"QhK0uLsKuBDrMm9Z9XHvq4EungecFRnktPgutfjKtgVv2OTPd8D390RXFvJ8KGvqIF8pdoNxHYQyvDNNwMORpg==\",\"salt\":\"yxkgwEyTnCGLn42Yr9GxBQ==\"}", + "credentialData" : "{\"hashIterations\":27500,\"algorithm\":\"pbkdf2-sha256\"}" + } ], + "disableableCredentialTypes" : [ ], + "requiredActions" : [ ], + "realmRoles" : [ "offline_access", "uma_authorization" ], + "clientRoles" : { + "account" : [ "view-profile", "manage-account" ] + }, + "groups" : [ "/ClusterManager-cluster2 Group" ] + }, + { + "username" : "service-account-team-a-client", + "enabled" : true, + "serviceAccountClientId" : "team-a-client", + "realmRoles" : [ "offline_access", "Dev Team A" ], + "clientRoles" : { + "account" : [ "manage-account", "view-profile" ] + }, + "groups" : [ ] + }, + { + "username" : "service-account-team-b-client", + "enabled" : true, + "serviceAccountClientId" : "team-b-client", + "realmRoles" : [ "offline_access", "Dev Team B" ], + "clientRoles" : { + "account" : [ "manage-account", "view-profile" ] + }, + "groups" : [ ] + } + ], + "clients": [ + { + "clientId": "team-a-client", + "enabled": true, + "clientAuthenticatorType": "client-secret", + "secret": "team-a-client-secret", + "bearerOnly": false, + "consentRequired": false, + "standardFlowEnabled": false, + "implicitFlowEnabled": false, + "directAccessGrantsEnabled": true, + "serviceAccountsEnabled": true, + "publicClient": false, + "fullScopeAllowed": true + }, + { + "clientId": "team-b-client", + "enabled": true, + "clientAuthenticatorType": "client-secret", + "secret": "team-b-client-secret", + "bearerOnly": false, + "consentRequired": false, + "standardFlowEnabled": false, + "implicitFlowEnabled": false, + "directAccessGrantsEnabled": true, + "serviceAccountsEnabled": true, + "publicClient": false, + "fullScopeAllowed": true + }, + { + "clientId": "kafka", + "enabled": true, + "clientAuthenticatorType": "client-secret", + "secret": "kafka-secret", + "bearerOnly": false, + "consentRequired": false, + "standardFlowEnabled": false, + "implicitFlowEnabled": false, + "directAccessGrantsEnabled": true, + "serviceAccountsEnabled": true, + "authorizationServicesEnabled": 
true, + "publicClient": false, + "fullScopeAllowed": true, + "authorizationSettings": { + "allowRemoteResourceManagement": true, + "policyEnforcementMode": "ENFORCING", + "resources": [ + { + "name": "Topic:a_*", + "type": "Topic", + "ownerManagedAccess": false, + "displayName": "Topics that start with a_", + "attributes": {}, + "uris": [], + "scopes": [ + { + "name": "Create" + }, + { + "name": "Delete" + }, + { + "name": "Describe" + }, + { + "name": "Write" + }, + { + "name": "Read" + }, + { + "name": "Alter" + }, + { + "name": "DescribeConfigs" + }, + { + "name": "AlterConfigs" + } + ] + }, + { + "name": "Group:x_*", + "type": "Group", + "ownerManagedAccess": false, + "displayName": "Consumer groups that start with x_", + "attributes": {}, + "uris": [], + "scopes": [ + { + "name": "Describe" + }, + { + "name": "Delete" + }, + { + "name": "Read" + } + ] + }, + { + "name": "Topic:x_*", + "type": "Topic", + "ownerManagedAccess": false, + "displayName": "Topics that start with x_", + "attributes": {}, + "uris": [], + "scopes": [ + { + "name": "Create" + }, + { + "name": "Describe" + }, + { + "name": "Delete" + }, + { + "name": "Write" + }, + { + "name": "Read" + }, + { + "name": "Alter" + }, + { + "name": "DescribeConfigs" + }, + { + "name": "AlterConfigs" + } + ] + }, + { + "name": "Group:a_*", + "type": "Group", + "ownerManagedAccess": false, + "displayName": "Groups that start with a_", + "attributes": {}, + "uris": [], + "scopes": [ + { + "name": "Describe" + }, + { + "name": "Read" + } + ] + }, + { + "name": "Group:*", + "type": "Group", + "ownerManagedAccess": false, + "displayName": "Any group", + "attributes": {}, + "uris": [], + "scopes": [ + { + "name": "Describe" + }, + { + "name": "Read" + }, + { + "name": "DescribeConfigs" + }, + { + "name": "AlterConfigs" + } + ] + }, + { + "name": "Topic:*", + "type": "Topic", + "ownerManagedAccess": false, + "displayName": "Any topic", + "attributes": {}, + "uris": [], + "scopes": [ + { + "name": "Create" + }, + { + "name": "Delete" + }, + { + "name": "Describe" + }, + { + "name": "Write" + }, + { + "name": "Read" + }, + { + "name": "Alter" + }, + { + "name": "DescribeConfigs" + }, + { + "name": "AlterConfigs" + } + ] + }, + { + "name": "kafka-cluster:cluster2,Topic:b_*", + "type": "Topic", + "ownerManagedAccess": false, + "attributes": {}, + "uris": [], + "scopes": [ + { + "name": "Create" + }, + { + "name": "Delete" + }, + { + "name": "Describe" + }, + { + "name": "Write" + }, + { + "name": "Read" + }, + { + "name": "Alter" + }, + { + "name": "DescribeConfigs" + }, + { + "name": "AlterConfigs" + } + ] + }, + { + "name": "kafka-cluster:cluster2,Cluster:*", + "type": "Cluster", + "ownerManagedAccess": false, + "displayName": "Cluster scope on cluster2", + "attributes": {}, + "uris": [], + "scopes": [ + { + "name": "DescribeConfigs" + }, + { + "name": "AlterConfigs" + }, + { + "name": "ClusterAction" + } + ] + }, + { + "name": "kafka-cluster:cluster2,Group:*", + "type": "Group", + "ownerManagedAccess": false, + "displayName": "Any group on cluster2", + "attributes": {}, + "uris": [], + "scopes": [ + { + "name": "Delete" + }, + { + "name": "Describe" + }, + { + "name": "Read" + }, + { + "name": "DescribeConfigs" + }, + { + "name": "AlterConfigs" + } + ] + }, + { + "name": "kafka-cluster:cluster2,Topic:*", + "type": "Topic", + "ownerManagedAccess": false, + "displayName": "Any topic on cluster2", + "attributes": {}, + "uris": [], + "scopes": [ + { + "name": "Create" + }, + { + "name": "Delete" + }, + { + "name": "Describe" + }, + { + "name": 
"Write" + }, + { + "name": "IdempotentWrite" + }, + { + "name": "Read" + }, + { + "name": "Alter" + }, + { + "name": "DescribeConfigs" + }, + { + "name": "AlterConfigs" + } + ] + }, + { + "name" : "Cluster:*", + "type" : "Cluster", + "ownerManagedAccess" : false, + "attributes" : { }, + "uris" : [ ] + } + ], + "policies": [ + { + "name": "Dev Team A", + "type": "role", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "roles": "[{\"id\":\"Dev Team A\",\"required\":true}]" + } + }, + { + "name": "Default Policy", + "description": "A policy that grants access only for users within this realm", + "type": "js", + "logic": "POSITIVE", + "decisionStrategy": "AFFIRMATIVE", + "config": { + "code": "// by default, grants any permission associated with this policy\n$evaluation.grant();\n" + } + }, + { + "name": "Dev Team B", + "type": "role", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "roles": "[{\"id\":\"Dev Team B\",\"required\":true}]" + } + }, + { + "name": "Ops Team", + "type": "role", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "roles": "[{\"id\":\"Ops Team\",\"required\":true}]" + } + }, + { + "name" : "ClusterManager Group", + "type" : "group", + "logic" : "POSITIVE", + "decisionStrategy" : "UNANIMOUS", + "config" : { + "groups" : "[{\"path\":\"/ClusterManager Group\",\"extendChildren\":false}]" + } + }, { + "name" : "ClusterManager of cluster2 Group", + "type" : "group", + "logic" : "POSITIVE", + "decisionStrategy" : "UNANIMOUS", + "config" : { + "groups" : "[{\"path\":\"/ClusterManager-cluster2 Group\",\"extendChildren\":false}]" + } + }, + { + "name": "Dev Team A owns topics that start with a_ on any cluster", + "type": "resource", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "resources": "[\"Topic:a_*\"]", + "applyPolicies": "[\"Dev Team A\"]" + } + }, + { + "name": "Dev Team A can write to topics that start with x_ on any cluster", + "type": "scope", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "resources": "[\"Topic:x_*\"]", + "scopes": "[\"Describe\",\"Write\"]", + "applyPolicies": "[\"Dev Team A\"]" + } + }, + { + "name": "Dev Team B owns topics that start with b_ on cluster cluster2", + "type": "resource", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "resources": "[\"kafka-cluster:cluster2,Topic:b_*\"]", + "applyPolicies": "[\"Dev Team B\"]" + } + }, + { + "name": "Dev Team B can read from topics that start with x_ on any cluster", + "type": "scope", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "resources": "[\"Topic:x_*\"]", + "scopes": "[\"Describe\",\"Read\"]", + "applyPolicies": "[\"Dev Team B\"]" + } + }, + { + "name": "Dev Team B can update consumer group offsets that start with x_ on any cluster", + "type": "scope", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "resources": "[\"Group:x_*\"]", + "scopes": "[\"Describe\",\"Read\"]", + "applyPolicies": "[\"Dev Team B\"]" + } + }, + { + "name": "Dev Team A can use consumer groups that start with a_ on any cluster", + "type": "resource", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "resources": "[\"Group:a_*\"]", + "applyPolicies": "[\"Dev Team A\"]" + } + }, + { + "name": "ClusterManager of cluster2 Group has full access to topics on cluster2", + "type": "resource", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "resources": 
"[\"kafka-cluster:cluster2,Topic:*\"]", + "applyPolicies": "[\"ClusterManager of cluster2 Group\"]" + } + }, + { + "name": "ClusterManager of cluster2 Group has full access to consumer groups on cluster2", + "type": "resource", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "resources": "[\"kafka-cluster:cluster2,Group:*\"]", + "applyPolicies": "[\"ClusterManager of cluster2 Group\"]" + } + }, + { + "name": "ClusterManager of cluster2 Group has full access to cluster config on cluster2", + "type": "resource", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "resources": "[\"kafka-cluster:cluster2,Cluster:*\"]", + "applyPolicies": "[\"ClusterManager of cluster2 Group\"]" + } + }, { + "name" : "ClusterManager Group has full access to manage and affect groups", + "type" : "resource", + "logic" : "POSITIVE", + "decisionStrategy" : "UNANIMOUS", + "config" : { + "resources" : "[\"Group:*\"]", + "applyPolicies" : "[\"ClusterManager Group\"]" + } + }, { + "name" : "ClusterManager Group has full access to manage and affect topics", + "type" : "resource", + "logic" : "POSITIVE", + "decisionStrategy" : "UNANIMOUS", + "config" : { + "resources" : "[\"Topic:*\"]", + "applyPolicies" : "[\"ClusterManager Group\"]" + } + }, { + "name" : "ClusterManager Group has full access to cluster config", + "type" : "resource", + "logic" : "POSITIVE", + "decisionStrategy" : "UNANIMOUS", + "config" : { + "resources" : "[\"Cluster:*\"]", + "applyPolicies" : "[\"ClusterManager Group\"]" + } + } + ], + "scopes": [ + { + "name": "Create" + }, + { + "name": "Read" + }, + { + "name": "Write" + }, + { + "name": "Delete" + }, + { + "name": "Alter" + }, + { + "name": "Describe" + }, + { + "name": "ClusterAction" + }, + { + "name": "DescribeConfigs" + }, + { + "name": "AlterConfigs" + }, + { + "name": "IdempotentWrite" + } + ], + "decisionStrategy": "AFFIRMATIVE" + } + }, + { + "clientId": "kafka-cli", + "enabled": true, + "clientAuthenticatorType": "client-secret", + "secret": "kafka-cli-secret", + "bearerOnly": false, + "consentRequired": false, + "standardFlowEnabled": false, + "implicitFlowEnabled": false, + "directAccessGrantsEnabled": true, + "serviceAccountsEnabled": false, + "publicClient": true, + "fullScopeAllowed": true + } + ] +} \ No newline at end of file diff --git a/examples/docker/keycloak/compose.yml b/examples/docker/keycloak/compose.yml index 92923331..1d08e171 100644 --- a/examples/docker/keycloak/compose.yml +++ b/examples/docker/keycloak/compose.yml @@ -11,3 +11,4 @@ services: KEYCLOAK_USER: "admin" KEYCLOAK_PASSWORD: "admin" PROXY_ADDRESS_FORWARDING: "true" + command: "-Dkeycloak.profile.feature.upload_scripts=enabled" diff --git a/examples/docker/pom.xml b/examples/docker/pom.xml index 92e389ad..402b77cc 100644 --- a/examples/docker/pom.xml +++ b/examples/docker/pom.xml @@ -43,6 +43,11 @@ kafka-oauth-server ${strimzi-oauth.version} + + io.strimzi + kafka-oauth-keycloak-authorizer + ${strimzi-oauth.version} + org.keycloak keycloak-core diff --git a/examples/producer/src/main/java/io/strimzi/examples/producer/ExampleProducer.java b/examples/producer/src/main/java/io/strimzi/examples/producer/ExampleProducer.java index bfce5480..cd98d2ad 100644 --- a/examples/producer/src/main/java/io/strimzi/examples/producer/ExampleProducer.java +++ b/examples/producer/src/main/java/io/strimzi/examples/producer/ExampleProducer.java @@ -20,7 +20,7 @@ public class ExampleProducer { public static void main(String[] args) { - String topic = "Topic1"; + String topic = 
"a_Topic1"; Properties defaults = new Properties(); Config external = new Config(); diff --git a/oauth-common/src/main/java/io/strimzi/kafka/oauth/common/BearerTokenWithPayload.java b/oauth-common/src/main/java/io/strimzi/kafka/oauth/common/BearerTokenWithPayload.java new file mode 100644 index 00000000..334d86d8 --- /dev/null +++ b/oauth-common/src/main/java/io/strimzi/kafka/oauth/common/BearerTokenWithPayload.java @@ -0,0 +1,22 @@ +/* + * Copyright 2017-2019, Strimzi authors. + * License: Apache License 2.0 (see the file LICENSE or http://apache.org/licenses/LICENSE-2.0.html). + */ +package io.strimzi.kafka.oauth.common; + +import org.apache.kafka.common.security.oauthbearer.OAuthBearerToken; + +/** + * This extension of OAuthBearerToken provides a way to associate any additional information with the token + * at run time, that is cached for the duration of the client session. + * + * Token is instanciated during authentication, but the 'payload' methods can be accessed later by custom extensions. + * For example, it can be used by a custom authorizer to cache a parsed JWT token payload or to cache authorization grants for current session. + */ +public interface BearerTokenWithPayload extends OAuthBearerToken { + + Object getPayload(); + + void setPayload(Object payload); + +} diff --git a/oauth-common/src/main/java/io/strimzi/kafka/oauth/common/ConfigUtil.java b/oauth-common/src/main/java/io/strimzi/kafka/oauth/common/ConfigUtil.java index 3300ef05..2e8ed2e1 100644 --- a/oauth-common/src/main/java/io/strimzi/kafka/oauth/common/ConfigUtil.java +++ b/oauth-common/src/main/java/io/strimzi/kafka/oauth/common/ConfigUtil.java @@ -6,6 +6,7 @@ import javax.net.ssl.HostnameVerifier; import javax.net.ssl.SSLSocketFactory; +import java.util.Properties; public class ConfigUtil { @@ -24,4 +25,18 @@ public static HostnameVerifier createHostnameVerifier(Config config) { // Following Kafka convention for skipping hostname validation (when set to ) return "".equals(hostCheck) ? SSLUtil.createAnyHostHostnameVerifier() : null; } + + public static void putIfNotNull(Properties p, String key, Object value) { + if (value != null) { + p.put(key, value); + } + } + + public static String getConfigWithFallbackLookup(Config c, String key, String fallbackKey) { + String result = c.getValue(key); + if (result == null) { + result = c.getValue(fallbackKey); + } + return result; + } } diff --git a/oauth-common/src/main/java/io/strimzi/kafka/oauth/common/HttpException.java b/oauth-common/src/main/java/io/strimzi/kafka/oauth/common/HttpException.java new file mode 100644 index 00000000..44eca1fb --- /dev/null +++ b/oauth-common/src/main/java/io/strimzi/kafka/oauth/common/HttpException.java @@ -0,0 +1,40 @@ +/* + * Copyright 2017-2019, Strimzi authors. + * License: Apache License 2.0 (see the file LICENSE or http://apache.org/licenses/LICENSE-2.0.html). 
+ */ +package io.strimzi.kafka.oauth.common; + +import java.net.URI; + +public class HttpException extends RuntimeException { + + private final String method; + private final URI uri; + private final int status; + private final String response; + + public HttpException(String method, URI uri, int status, String response) { + super(method + " request to " + uri + " failed with status " + status + ": " + response); + + this.method = method; + this.uri = uri; + this.status = status; + this.response = response; + } + + public String getMethod() { + return method; + } + + public URI getUri() { + return uri; + } + + public int getStatus() { + return status; + } + + public String getResponse() { + return response; + } +} diff --git a/oauth-common/src/main/java/io/strimzi/kafka/oauth/common/HttpUtil.java b/oauth-common/src/main/java/io/strimzi/kafka/oauth/common/HttpUtil.java index a2048d28..1c54c6cb 100644 --- a/oauth-common/src/main/java/io/strimzi/kafka/oauth/common/HttpUtil.java +++ b/oauth-common/src/main/java/io/strimzi/kafka/oauth/common/HttpUtil.java @@ -38,33 +38,44 @@ public class HttpUtil { private static final Logger log = LoggerFactory.getLogger(HttpUtil.class); public static T get(URI uri, String authorization, Class responseType) throws IOException { - return postOrGet(uri, null, null, authorization, null, null, responseType); + return request(uri, null, null, authorization, null, null, responseType); } public static T get(URI uri, SSLSocketFactory socketFactory, String authorization, Class responseType) throws IOException { - return postOrGet(uri, socketFactory, null, authorization, null, null, responseType); + return request(uri, socketFactory, null, authorization, null, null, responseType); } public static T get(URI uri, SSLSocketFactory socketFactory, HostnameVerifier hostnameVerifier, String authorization, Class responseType) throws IOException { - return postOrGet(uri, socketFactory, hostnameVerifier, authorization, null, null, responseType); + return request(uri, socketFactory, hostnameVerifier, authorization, null, null, responseType); } public static T post(URI uri, String authorization, String contentType, String body, Class responseType) throws IOException { - return postOrGet(uri, null, null, authorization, contentType, body, responseType); + return request(uri, null, null, authorization, contentType, body, responseType); } public static T post(URI uri, SSLSocketFactory socketFactory, String authorization, String contentType, String body, Class responseType) throws IOException { - return postOrGet(uri, socketFactory, null, authorization, contentType, body, responseType); + return request(uri, socketFactory, null, authorization, contentType, body, responseType); } public static T post(URI uri, SSLSocketFactory socketFactory, HostnameVerifier verifier, String authorization, String contentType, String body, Class responseType) throws IOException { - return postOrGet(uri, socketFactory, verifier, authorization, contentType, body, responseType); + return request(uri, socketFactory, verifier, authorization, contentType, body, responseType); + } + + public static void put(URI uri, String authorization, String contentType, String body) throws IOException { + request(uri, null, null, authorization, contentType, body, null); + } + + public static void put(URI uri, SSLSocketFactory socketFactory, String authorization, String contentType, String body) throws IOException { + request(uri, socketFactory, null, authorization, contentType, body, null); + } + + public static void put(URI 
uri, SSLSocketFactory socketFactory, HostnameVerifier verifier, String authorization, String contentType, String body) throws IOException { + request(uri, socketFactory, verifier, authorization, contentType, body, null); } - @SuppressWarnings("checkstyle:NPathComplexity") // Surpressed because of Spotbugs Java 11 bug - https://github.com/spotbugs/spotbugs/issues/756 @SuppressFBWarnings("RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE") - public static T postOrGet(URI uri, SSLSocketFactory socketFactory, HostnameVerifier hostnameVerifier, String authorization, String contentType, String body, Class responseType) throws IOException { + public static T request(URI uri, SSLSocketFactory socketFactory, HostnameVerifier hostnameVerifier, String authorization, String contentType, String body, Class responseType) throws IOException { HttpURLConnection con; try { con = (HttpURLConnection) uri.toURL().openConnection(); @@ -89,7 +100,8 @@ public static T postOrGet(URI uri, SSLSocketFactory socketFactory, HostnameV con.setDoOutput(true); } - con.setRequestMethod(body != null ? "POST" : "GET"); + String method = body == null ? "GET" : responseType != null ? "POST" : "PUT"; + con.setRequestMethod(method); if (authorization != null) { con.setRequestProperty("Authorization", authorization); } @@ -113,8 +125,14 @@ public static T postOrGet(URI uri, SSLSocketFactory socketFactory, HostnameV } } + return handleResponse(con, method, uri, responseType); + } + + // Surpressed because of Spotbugs Java 11 bug - https://github.com/spotbugs/spotbugs/issues/756 + @SuppressFBWarnings("RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE") + private static T handleResponse(HttpURLConnection con, String method, URI uri, Class responseType) throws IOException { int code = con.getResponseCode(); - if (code != 200) { + if (code != 200 && code != 201 && code != 204) { InputStream err = con.getErrorStream(); if (err != null) { ByteArrayOutputStream errbuf = new ByteArrayOutputStream(4096); @@ -124,13 +142,17 @@ public static T postOrGet(URI uri, SSLSocketFactory socketFactory, HostnameV log.warn("[IGNORED] Failed to read response body", e); } - throw new RuntimeException("Request to " + uri + " failed with status " + code + ": " + errbuf.toString(StandardCharsets.UTF_8.name())); + throw new HttpException(method, uri, code, errbuf.toString(StandardCharsets.UTF_8.name())); } else { - throw new RuntimeException("Request to " + uri + " failed with status " + code + " " + con.getResponseMessage()); + throw new HttpException(method, uri, code, con.getResponseMessage()); } } try (InputStream response = con.getInputStream()) { + if (responseType == null) { + response.close(); + return null; + } return JSONUtil.readJSON(response, responseType); } diff --git a/oauth-common/src/main/java/io/strimzi/kafka/oauth/common/JSONUtil.java b/oauth-common/src/main/java/io/strimzi/kafka/oauth/common/JSONUtil.java index 08e1a5d2..711fe720 100644 --- a/oauth-common/src/main/java/io/strimzi/kafka/oauth/common/JSONUtil.java +++ b/oauth-common/src/main/java/io/strimzi/kafka/oauth/common/JSONUtil.java @@ -10,6 +10,9 @@ import java.io.IOException; import java.io.InputStream; +import java.util.ArrayList; +import java.util.Iterator; +import java.util.List; public class JSONUtil { @@ -19,23 +22,69 @@ public static T readJSON(InputStream is, Class clazz) throws IOException return MAPPER.readValue(is, clazz); } - public static String getClaimFromJWT(String claim, Object token) { + /** + * Convert object to JsonNode + * + * @param value Json-serializable object + * @return 
Object as JsonNode + */ + public static JsonNode asJson(Object value) { + if (value instanceof JsonNode) + return (JsonNode) value; + + // We re-serialise and deserialize into generic json object try { - // No nice way to get arbitrary claim from already parsed token - // therefore we re-serialise and deserialize into generic json object - String jsonString = JsonSerialization.writeValueAsString(token); - JsonNode node = JsonSerialization.readValue(jsonString, JsonNode.class); - JsonNode claimNode = node.get(claim); - - if (claimNode == null) { - throw new RuntimeException("Access token contains no '" + claim + "' claim: " + jsonString); - } + String jsonString = JsonSerialization.writeValueAsString(value); + return JsonSerialization.readValue(jsonString, JsonNode.class); + } catch (IOException e) { + throw new RuntimeException("Failed to convert value to JSON (" + value + ")", e); + } + } - return claimNode.asText(); + /** + * Get specific claim from token. + * + * @param claim jq style query where nested names are specified using '.' as separator + * @param token parsed object + * @return Value of the specific claim as String or null if claim not present + */ + public static String getClaimFromJWT(String claim, Object token) { + // No nice way to get arbitrary claim from already parsed token + JsonNode node = asJson(token); + return getClaimFromJWT(node, claim.split("\\.")); + } - } catch (IOException e) { - throw new RuntimeException("Failed to read '" + claim + "' claim from token", e); + /** + * Get specific claim from token. + * + * @param node parsed JWT token payload + * @param path name segments where all but last should each point to the next nested object + * @return Value of the specific claim as String or null if claim not present + */ + public static String getClaimFromJWT(JsonNode node, String... path) { + for (String p: path) { + node = node.get(p); + if (node == null) { + return null; + } } + return node.asText(); } + public static List asListOfString(JsonNode arrayNode) { + if (!arrayNode.isArray()) { + throw new IllegalArgumentException("JsonNode not an array node: " + arrayNode); + } + ArrayList result = new ArrayList<>(); + Iterator it = arrayNode.iterator(); + while (it.hasNext()) { + JsonNode n = it.next(); + if (n.isTextual()) { + result.add(n.asText()); + } else { + result.add(n.toString()); + } + } + return result; + } } diff --git a/oauth-common/src/main/java/io/strimzi/kafka/oauth/validator/OAuthIntrospectionValidator.java b/oauth-common/src/main/java/io/strimzi/kafka/oauth/validator/OAuthIntrospectionValidator.java index 27d98b75..387ce97e 100644 --- a/oauth-common/src/main/java/io/strimzi/kafka/oauth/validator/OAuthIntrospectionValidator.java +++ b/oauth-common/src/main/java/io/strimzi/kafka/oauth/validator/OAuthIntrospectionValidator.java @@ -89,11 +89,6 @@ public OAuthIntrospectionValidator(String introspectionEndpointUri, @SuppressWarnings("checkstyle:NPathComplexity") public TokenInfo validate(String token) { - // TODO: remove this debug code - if ("ignore".equals(token)) { - return new TokenInfo(token, null, "ignore", System.currentTimeMillis(), System.currentTimeMillis() + 1000 * 60 * 60 * 365); - } - String authorization = clientSecret != null ? 
"Basic " + base64encode(clientId + ':' + clientSecret) : null; diff --git a/oauth-keycloak-authorizer/etc/authorization-scopes.json b/oauth-keycloak-authorizer/etc/authorization-scopes.json new file mode 100644 index 00000000..15dd31c3 --- /dev/null +++ b/oauth-keycloak-authorizer/etc/authorization-scopes.json @@ -0,0 +1,37 @@ +{ + "allowRemoteResourceManagement": true, + "policyEnforcementMode": "ENFORCING", + "scopes": [ + { + "name": "Create" + }, + { + "name": "Read" + }, + { + "name": "Write" + }, + { + "name": "Delete" + }, + { + "name": "Alter" + }, + { + "name": "Describe" + }, + { + "name": "ClusterAction" + }, + { + "name": "DescribeConfigs" + }, + { + "name": "AlterConfigs" + }, + { + "name": "IdempotentWrite" + } + ], + "decisionStrategy": "AFFIRMATIVE" +} \ No newline at end of file diff --git a/oauth-keycloak-authorizer/pom.xml b/oauth-keycloak-authorizer/pom.xml new file mode 100644 index 00000000..22b582fa --- /dev/null +++ b/oauth-keycloak-authorizer/pom.xml @@ -0,0 +1,62 @@ + + + 4.0.0 + + + io.strimzi + oauth + 1.0.0-SNAPSHOT + + + kafka-oauth-keycloak-authorizer + + + + io.strimzi + kafka-oauth-common + + + io.strimzi + kafka-oauth-client + + + com.fasterxml.jackson.core + jackson-databind + + + org.slf4j + slf4j-api + provided + + + org.apache.kafka + kafka-clients + provided + + + org.apache.kafka + kafka_2.12 + provided + + + org.scala-lang + scala-library + provided + + + + + + + org.apache.maven.plugins + maven-compiler-plugin + + ${maven.compiler.source} + ${maven.compiler.target} + + + + + + diff --git a/oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/AuthzConfig.java b/oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/AuthzConfig.java new file mode 100644 index 00000000..73672a87 --- /dev/null +++ b/oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/AuthzConfig.java @@ -0,0 +1,30 @@ +/* + * Copyright 2017-2019, Strimzi authors. + * License: Apache License 2.0 (see the file LICENSE or http://apache.org/licenses/LICENSE-2.0.html). 
+ */ +package io.strimzi.kafka.oauth.server.authorizer; + +import io.strimzi.kafka.oauth.common.Config; + +import java.util.Properties; + +public class AuthzConfig extends Config { + + public static final String STRIMZI_AUTHORIZATION_CLIENT_ID = "strimzi.authorization.client.id"; + public static final String STRIMZI_AUTHORIZATION_TOKEN_ENDPOINT_URI = "strimzi.authorization.token.endpoint.uri"; + + public static final String STRIMZI_AUTHORIZATION_KAFKA_CLUSTER_NAME = "strimzi.authorization.kafka.cluster.name"; + public static final String STRIMZI_AUTHORIZATION_DELEGATE_TO_KAFKA_ACL = "strimzi.authorization.delegate.to.kafka.acl"; + + public static final String STRIMZI_AUTHORIZATION_SSL_TRUSTSTORE_LOCATION = "strimzi.authorization.ssl.truststore.location"; + public static final String STRIMZI_AUTHORIZATION_SSL_TRUSTSTORE_PASSWORD = "strimzi.authorization.ssl.truststore.password"; + public static final String STRIMZI_AUTHORIZATION_SSL_TRUSTSTORE_TYPE = "strimzi.authorization.ssl.truststore.type"; + public static final String STRIMZI_AUTHORIZATION_SSL_SECURE_RANDOM_IMPLEMENTATION = "strimzi.authorization.ssl.secure.random.implementation"; + public static final String STRIMZI_AUTHORIZATION_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM = "strimzi.authorization.ssl.endpoint.identification.algorithm"; + + AuthzConfig() {} + + AuthzConfig(Properties p) { + super(p); + } +} diff --git a/oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/JwtKafkaPrincipal.java b/oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/JwtKafkaPrincipal.java new file mode 100644 index 00000000..0001d0f8 --- /dev/null +++ b/oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/JwtKafkaPrincipal.java @@ -0,0 +1,44 @@ +/* + * Copyright 2017-2019, Strimzi authors. + * License: Apache License 2.0 (see the file LICENSE or http://apache.org/licenses/LICENSE-2.0.html). + */ +package io.strimzi.kafka.oauth.server.authorizer; + +import io.strimzi.kafka.oauth.common.BearerTokenWithPayload; +import org.apache.kafka.common.security.auth.KafkaPrincipal; + +import java.util.Objects; + +public class JwtKafkaPrincipal extends KafkaPrincipal { + + private final BearerTokenWithPayload jwt; + + public JwtKafkaPrincipal(String principalType, String name) { + this(principalType, name, null); + } + + public JwtKafkaPrincipal(String principalType, String name, BearerTokenWithPayload jwt) { + super(principalType, name); + this.jwt = jwt; + } + + public BearerTokenWithPayload getJwt() { + return jwt; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + + if (!super.equals(o)) return false; + + JwtKafkaPrincipal that = (JwtKafkaPrincipal) o; + return Objects.equals(jwt, that.jwt); + } + + @Override + public int hashCode() { + return Objects.hash(super.hashCode(), jwt); + } +} diff --git a/oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/JwtKafkaPrincipalBuilder.java b/oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/JwtKafkaPrincipalBuilder.java new file mode 100644 index 00000000..f0645177 --- /dev/null +++ b/oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/JwtKafkaPrincipalBuilder.java @@ -0,0 +1,127 @@ +/* + * Copyright 2017-2019, Strimzi authors. + * License: Apache License 2.0 (see the file LICENSE or http://apache.org/licenses/LICENSE-2.0.html). 
+ */ +package io.strimzi.kafka.oauth.server.authorizer; + +import io.strimzi.kafka.oauth.common.BearerTokenWithPayload; +import org.apache.kafka.common.Configurable; +import org.apache.kafka.common.config.internals.BrokerSecurityConfigs; +import org.apache.kafka.common.security.auth.AuthenticationContext; +import org.apache.kafka.common.security.auth.KafkaPrincipal; +import org.apache.kafka.common.security.auth.SaslAuthenticationContext; +import org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder; +import org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule; +import org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslServer; + +import java.lang.reflect.Field; +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Method; +import java.security.AccessController; +import java.security.PrivilegedAction; +import java.util.Collections; +import java.util.List; +import java.util.Map; + +/** + * This class needs to be enabled as the PrincipalBuilder on Kafka Broker. + *
+ * <p>
+ * It ensures that the generated Principal is an instance of {@link io.strimzi.kafka.oauth.server.authorizer.JwtKafkaPrincipal},
+ * containing the OAuthBearerToken produced by io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler.
+ * </p>
+ * <p>
+ * You can use the 'principal.builder.class=io.strimzi.kafka.oauth.server.authorizer.JwtKafkaPrincipalBuilder'
+ * property definition in server.properties to install it.
+ * </p>
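+ * <p>
+ * For example, in server.properties:
+ * </p>
+ * <pre>
+ *     principal.builder.class=io.strimzi.kafka.oauth.server.authorizer.JwtKafkaPrincipalBuilder
+ * </pre>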
+ */
+public class JwtKafkaPrincipalBuilder extends DefaultKafkaPrincipalBuilder implements Configurable {
+
+    private static final SetAccessibleAction SET_PRINCIPAL_MAPPER = SetAccessibleAction.newInstance();
+
+    private static class SetAccessibleAction implements PrivilegedAction<Void> {
+
+        private Field field;
+
+        SetAccessibleAction(Field field) {
+            this.field = field;
+        }
+
+        @Override
+        public Void run() {
+            field.setAccessible(true);
+            return null;
+        }
+
+        void invoke(DefaultKafkaPrincipalBuilder target, Object value) throws IllegalAccessException {
+            AccessController.doPrivileged(this);
+            field.set(target, value);
+        }
+
+        static SetAccessibleAction newInstance() {
+            try {
+                return new SetAccessibleAction(DefaultKafkaPrincipalBuilder.class.getDeclaredField("sslPrincipalMapper"));
+            } catch (NoSuchFieldException e) {
+                throw new IllegalStateException("Failed to install JwtKafkaPrincipalBuilder. This Kafka version does not seem to be supported", e);
+            }
+        }
+    }
+
+
+    public JwtKafkaPrincipalBuilder() {
+        super(null, null);
+    }
+
+    @Override
+    public void configure(Map<String, ?> configs) {
+
+        Object sslPrincipalMappingRules = configs.get(BrokerSecurityConfigs.SSL_PRINCIPAL_MAPPING_RULES_CONFIG);
+        Object sslPrincipalMapper;
+
+        try {
+            Class<?> clazz = Class.forName("org.apache.kafka.common.security.ssl.SslPrincipalMapper");
+            try {
+                Method m = clazz.getMethod("fromRules", List.class);
+                if (sslPrincipalMappingRules == null) {
+                    sslPrincipalMappingRules = Collections.singletonList("DEFAULT");
+                }
+                sslPrincipalMapper = m.invoke(null, sslPrincipalMappingRules);
+
+            } catch (NoSuchMethodException ex) {
+                Method m = clazz.getMethod("fromRules", String.class);
+                if (sslPrincipalMappingRules == null) {
+                    sslPrincipalMappingRules = "DEFAULT";
+                }
+                sslPrincipalMapper = m.invoke(null, sslPrincipalMappingRules);
+            }
+
+            // Hack setting sslPrincipalMapper to DefaultKafkaPrincipalBuilder
+            // An alternative would be to copy paste the complete DefaultKafkaPrincipalBuilder implementation
+            // into this class and extend it
+
+            SET_PRINCIPAL_MAPPER.invoke(this, sslPrincipalMapper);
+
+        } catch (RuntimeException e) {
+            throw new RuntimeException("Failed to initialize JwtKafkaPrincipalBuilder", e);
+
+        } catch (ClassNotFoundException
+                | NoSuchMethodException
+                | IllegalAccessException
+                | InvocationTargetException e) {
+            throw new RuntimeException("Failed to initialize JwtKafkaPrincipalBuilder", e);
+        }
+    }
+
+    @Override
+    public KafkaPrincipal build(AuthenticationContext context) {
+        if (context instanceof SaslAuthenticationContext) {
+            OAuthBearerSaslServer server = (OAuthBearerSaslServer) ((SaslAuthenticationContext) context).server();
+            if (OAuthBearerLoginModule.OAUTHBEARER_MECHANISM.equals(server.getMechanismName())) {
+                return new JwtKafkaPrincipal(KafkaPrincipal.USER_TYPE,
+                        server.getAuthorizationID(),
+                        (BearerTokenWithPayload) server.getNegotiatedProperty("OAUTHBEARER.token"));
+            }
+        }
+
+        return super.build(context);
+    }
+}
diff --git a/oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/KeycloakRBACAuthorizer.java b/oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/KeycloakRBACAuthorizer.java
new file mode 100644
index 00000000..1a7c307b
--- /dev/null
+++ b/oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/KeycloakRBACAuthorizer.java
@@ -0,0 +1,470 @@
+/*
+ * Copyright 2017-2019, Strimzi authors.
+ * License: Apache License 2.0 (see the file LICENSE or http://apache.org/licenses/LICENSE-2.0.html).
+ */ +package io.strimzi.kafka.oauth.server.authorizer; + +import com.fasterxml.jackson.databind.JsonNode; +import com.fasterxml.jackson.databind.node.ObjectNode; +import io.strimzi.kafka.oauth.client.ClientConfig; +import io.strimzi.kafka.oauth.common.Config; +import io.strimzi.kafka.oauth.common.ConfigUtil; +import io.strimzi.kafka.oauth.common.HttpException; +import io.strimzi.kafka.oauth.common.JSONUtil; +import io.strimzi.kafka.oauth.common.BearerTokenWithPayload; +import io.strimzi.kafka.oauth.common.SSLUtil; +import kafka.network.RequestChannel; +import kafka.security.auth.Acl; +import kafka.security.auth.Operation; +import kafka.security.auth.Resource; +import org.apache.kafka.common.security.auth.KafkaPrincipal; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import scala.collection.immutable.Set; + +import javax.net.ssl.HostnameVerifier; +import javax.net.ssl.SSLSocketFactory; +import java.net.URI; +import java.net.URISyntaxException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.Properties; +import java.util.stream.Collectors; + +import static io.strimzi.kafka.oauth.common.HttpUtil.post; +import static io.strimzi.kafka.oauth.common.OAuthAuthenticator.urlencode; + +/** + * An authorizer that grants access based on security policies managed in Keycloak Authorization Services. + * It works in conjunction with JaasServerOauthValidatorCallbackHandler, and requires + * {@link io.strimzi.kafka.oauth.server.authorizer.JwtKafkaPrincipalBuilder} to be configured as + * 'principal.builder.class' in 'server.properties' file. + *
+ * <p>
+ * To install this authorizer in Kafka, specify the following in your 'server.properties':
+ * </p>
+ * <pre>
+ *     authorizer.class.name=io.strimzi.kafka.oauth.server.authorizer.KeycloakRBACAuthorizer
+ *     principal.builder.class=io.strimzi.kafka.oauth.server.authorizer.JwtKafkaPrincipalBuilder
+ * </pre>
+ * <p>
+ * There is additional configuration that needs to be specified in order for this authorizer to work.
+ * </p>
+ * <p>
+ * Note: The following configuration keys can be specified as properties in the Kafka `server.properties` file, or as
+ * ENV vars in which case an all-uppercase key name is also attempted with '.' replaced by '_' (e.g. STRIMZI_AUTHORIZATION_TOKEN_ENDPOINT_URI).
+ * They can also be specified as system properties. The priority is in reverse - a system property overrides the ENV var,
+ * which overrides `server.properties`.
+ * </p>
+ * <p>
+ * Required configuration:
+ * </p>
+ * <ul>
+ * <li><em>strimzi.authorization.token.endpoint.uri</em> A URL of the Keycloak token endpoint (e.g. https://keycloak:8443/auth/realms/master/protocol/openid-connect/token).
+ * If not present, <em>oauth.token.endpoint.uri</em> is used as a fallback configuration key to avoid unnecessary duplication when already present for the purpose of client authentication.
+ * </li>
+ * <li><em>strimzi.authorization.client.id</em> A client id of the OAuth client definition in Keycloak that has Authorization Services enabled.
+ * Typically it is called 'kafka'.
+ * If not present, <em>oauth.client.id</em> is used as a fallback configuration key to avoid unnecessary duplication when already present for the purpose of client authentication.
+ * </li>
+ * </ul>
+ * <p>
+ * Optional configuration:
+ * </p>
+ * <ul>
+ * <li><em>strimzi.authorization.kafka.cluster.name</em> The name of this cluster, used to target permissions to a specific Kafka cluster, making it possible to manage multiple clusters within the same Keycloak realm.
+ * The default value is <em>kafka-cluster</em>.
+ * </li>
+ * <li><em>strimzi.authorization.delegate.to.kafka.acl</em> Whether the authorization decision should be delegated to SimpleAclAuthorizer if DENIED by Keycloak Authorization Services policies.
+ * The default value is <em>false</em>.
+ * </li>
+ * </ul>
+ * <p>
+ * TLS configuration:
+ * </p>
+ * <ul>
+ * <li><em>strimzi.authorization.ssl.truststore.location</em> The location of the truststore file on the filesystem.
+ * If not present, <em>oauth.ssl.truststore.location</em> is used as a fallback configuration key to avoid unnecessary duplication when already present for the purpose of client authentication.
+ * </li>
+ * <li><em>strimzi.authorization.ssl.truststore.password</em> The password for the truststore.
+ * If not present, <em>oauth.ssl.truststore.password</em> is used as a fallback configuration key to avoid unnecessary duplication when already present for the purpose of client authentication.
+ * </li>
+ * <li><em>strimzi.authorization.ssl.truststore.type</em> The truststore type.
+ * If not present, <em>oauth.ssl.truststore.type</em> is used as a fallback configuration key to avoid unnecessary duplication when already present for the purpose of client authentication.
+ * If not set, the Java KeyStore default type is used.
+ * </li>
+ * <li><em>strimzi.authorization.ssl.secure.random.implementation</em> The random number generator implementation. See the Java SDK documentation.
+ * If not present, <em>oauth.ssl.secure.random.implementation</em> is used as a fallback configuration key to avoid unnecessary duplication when already present for the purpose of client authentication.
+ * If not set, the Java platform SDK default is used.
+ * </li>
+ * <li><em>strimzi.authorization.ssl.endpoint.identification.algorithm</em> Specifies how to perform hostname verification. If set to an empty string, hostname verification is turned off.
+ * If not present, <em>oauth.ssl.endpoint.identification.algorithm</em> is used as a fallback configuration key to avoid unnecessary duplication when already present for the purpose of client authentication.
+ * If not set, the default value is HTTPS, which enforces hostname verification for server certificates.
+ * </li>
+ * </ul>
+ * <p>
+ * This authorizer honors the 'super.users' configuration. Super users are automatically granted any authorization request.
+ * </p>
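+ * <p>
+ * For example, a minimal configuration in 'server.properties' might look like this (the endpoint URL and
+ * truststore values are illustrative):
+ * </p>
+ * <pre>
+ *     authorizer.class.name=io.strimzi.kafka.oauth.server.authorizer.KeycloakRBACAuthorizer
+ *     principal.builder.class=io.strimzi.kafka.oauth.server.authorizer.JwtKafkaPrincipalBuilder
+ *     strimzi.authorization.token.endpoint.uri=https://keycloak:8443/auth/realms/master/protocol/openid-connect/token
+ *     strimzi.authorization.client.id=kafka
+ *     strimzi.authorization.ssl.truststore.location=/opt/kafka/config/keycloak.truststore.p12
+ *     strimzi.authorization.ssl.truststore.password=changeit
+ *     strimzi.authorization.ssl.truststore.type=PKCS12
+ * </pre>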
+ */ +@SuppressWarnings("deprecation") +public class KeycloakRBACAuthorizer extends kafka.security.auth.SimpleAclAuthorizer { + + private static final String PRINCIPAL_BUILDER_CLASS = "io.strimzi.kafka.oauth.server.authorizer.JwtKafkaPrincipalBuilder"; + + static final Logger log = LoggerFactory.getLogger(KeycloakRBACAuthorizer.class); + static final Logger GRANT_LOG = LoggerFactory.getLogger(KeycloakRBACAuthorizer.class.getName() + ".grant"); + static final Logger DENY_LOG = LoggerFactory.getLogger(KeycloakRBACAuthorizer.class.getName() + ".deny"); + + private URI tokenEndpointUrl; + private String clientId; + private String clusterName; + private SSLSocketFactory socketFactory; + private HostnameVerifier hostnameVerifier; + private List superUsers = Collections.emptyList(); + private boolean delegateToKafkaACL = false; + + + public KeycloakRBACAuthorizer() { + super(); + } + + @Override + public void configure(Map configs) { + super.configure(configs); + + AuthzConfig config = convertToCommonConfig(configs); + + String pbclass = (String) configs.get("principal.builder.class"); + if (!PRINCIPAL_BUILDER_CLASS.equals(pbclass)) { + throw new RuntimeException("KeycloakRBACAuthorizer requires " + PRINCIPAL_BUILDER_CLASS + " as 'principal.builder.class'"); + } + + String endpoint = ConfigUtil.getConfigWithFallbackLookup(config, AuthzConfig.STRIMZI_AUTHORIZATION_TOKEN_ENDPOINT_URI, + ClientConfig.OAUTH_TOKEN_ENDPOINT_URI); + if (endpoint == null) { + throw new RuntimeException("OAuth2 Token Endpoint ('strimzi.authorization.token.endpoint.uri') not set."); + } + + try { + tokenEndpointUrl = new URI(endpoint); + } catch (URISyntaxException e) { + throw new RuntimeException("Specified token endpoint uri is invalid: " + endpoint); + } + + clientId = ConfigUtil.getConfigWithFallbackLookup(config, AuthzConfig.STRIMZI_AUTHORIZATION_CLIENT_ID, ClientConfig.OAUTH_CLIENT_ID); + if (clientId == null) { + throw new RuntimeException("OAuth2 Client Id ('strimzi.authorization.client.id') not set."); + } + + socketFactory = createSSLFactory(config); + hostnameVerifier = createHostnameVerifier(config); + + clusterName = config.getValue(AuthzConfig.STRIMZI_AUTHORIZATION_KAFKA_CLUSTER_NAME); + if (clusterName == null) { + clusterName = "kafka-cluster"; + } + + delegateToKafkaACL = config.getValueAsBoolean(AuthzConfig.STRIMZI_AUTHORIZATION_DELEGATE_TO_KAFKA_ACL, false); + + String users = (String) configs.get("super.users"); + if (users != null) { + superUsers = Arrays.asList(users.split(";")) + .stream() + .map(s -> UserSpec.of(s)) + .collect(Collectors.toList()); + } + + if (log.isDebugEnabled()) { + log.debug("Configured KeycloakRBACAuthorizer:\n tokenEndpointUri: " + tokenEndpointUrl + + "\n sslSocketFactory: " + socketFactory + + "\n hostnameVerifier: " + hostnameVerifier + + "\n clientId: " + clientId + + "\n clusterName: " + clusterName + + "\n delegateToKafkaACL: " + delegateToKafkaACL + + "\n superUsers: " + superUsers.stream().map(u -> u.getType() + ":" + u.getName()).collect(Collectors.toList())); + } + } + + /** + * This method transforms strimzi.authorization.* entries into oauth.* entries in order to be able to use existing ConfigUtil + * methods for setting up certificate truststore and hostname verification. + * + * It also makes sure to copy over 'as-is' all the config keys expected in server.properties for configuring + * this authorizer. 
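+ *
+ * For example (an illustration of the fallback pairs listed in the class javadoc): if
+ * 'strimzi.authorization.ssl.truststore.location' is not set, the truststore setup code falls back to
+ * the value of 'oauth.ssl.truststore.location'.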
+ * + * @param configs Kafka configs map + * @return Config object + */ + static AuthzConfig convertToCommonConfig(Map configs) { + Properties p = new Properties(); + + String[] keys = { + AuthzConfig.STRIMZI_AUTHORIZATION_DELEGATE_TO_KAFKA_ACL, + AuthzConfig.STRIMZI_AUTHORIZATION_KAFKA_CLUSTER_NAME, + AuthzConfig.STRIMZI_AUTHORIZATION_CLIENT_ID, + AuthzConfig.OAUTH_CLIENT_ID, + AuthzConfig.STRIMZI_AUTHORIZATION_TOKEN_ENDPOINT_URI, + ClientConfig.OAUTH_TOKEN_ENDPOINT_URI, + AuthzConfig.STRIMZI_AUTHORIZATION_SSL_TRUSTSTORE_LOCATION, + Config.OAUTH_SSL_TRUSTSTORE_LOCATION, + AuthzConfig.STRIMZI_AUTHORIZATION_SSL_TRUSTSTORE_PASSWORD, + Config.OAUTH_SSL_TRUSTSTORE_PASSWORD, + AuthzConfig.STRIMZI_AUTHORIZATION_SSL_TRUSTSTORE_TYPE, + Config.OAUTH_SSL_TRUSTSTORE_TYPE, + AuthzConfig.STRIMZI_AUTHORIZATION_SSL_SECURE_RANDOM_IMPLEMENTATION, + Config.OAUTH_SSL_SECURE_RANDOM_IMPLEMENTATION, + AuthzConfig.STRIMZI_AUTHORIZATION_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM, + Config.OAUTH_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM + }; + + // copy over the keys + for (String key: keys) { + ConfigUtil.putIfNotNull(p, key, configs.get(key)); + } + + return new AuthzConfig(p); + } + + static SSLSocketFactory createSSLFactory(Config config) { + String truststore = ConfigUtil.getConfigWithFallbackLookup(config, + AuthzConfig.STRIMZI_AUTHORIZATION_SSL_TRUSTSTORE_LOCATION, Config.OAUTH_SSL_TRUSTSTORE_LOCATION); + String password = ConfigUtil.getConfigWithFallbackLookup(config, + AuthzConfig.STRIMZI_AUTHORIZATION_SSL_TRUSTSTORE_PASSWORD, Config.OAUTH_SSL_TRUSTSTORE_PASSWORD); + String type = ConfigUtil.getConfigWithFallbackLookup(config, + AuthzConfig.STRIMZI_AUTHORIZATION_SSL_TRUSTSTORE_TYPE, Config.OAUTH_SSL_TRUSTSTORE_TYPE); + String rnd = ConfigUtil.getConfigWithFallbackLookup(config, + AuthzConfig.STRIMZI_AUTHORIZATION_SSL_SECURE_RANDOM_IMPLEMENTATION, Config.OAUTH_SSL_SECURE_RANDOM_IMPLEMENTATION); + + return SSLUtil.createSSLFactory(truststore, password, type, rnd); + } + + static HostnameVerifier createHostnameVerifier(Config config) { + String hostCheck = ConfigUtil.getConfigWithFallbackLookup(config, + AuthzConfig.STRIMZI_AUTHORIZATION_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM, Config.OAUTH_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM); + + if (hostCheck == null) { + hostCheck = "HTTPS"; + } + // Following Kafka convention for skipping hostname validation (when set to ) + return "".equals(hostCheck) ? SSLUtil.createAnyHostHostnameVerifier() : null; + } + + /** + * The method that makes the authorization decision. + * + * We assume authorize() is thread-safe in a sense that there will not be two concurrent threads + * calling it at the same time for the same session. + * + * Should that not be the case, the side effect could be to make more calls to token endpoint than necessary. + * Other than that it should not affect proper functioning of this authorizer. + * + * @param session Current session + * @param operation Operation to authorize + * @param resource Resource to authorize + * @return true if permission is granted + */ + @Override + public boolean authorize(RequestChannel.Session session, Operation operation, Resource resource) { + + KafkaPrincipal principal = session.principal(); + + for (UserSpec u: superUsers) { + if (principal.getPrincipalType().equals(u.getType()) + && principal.getName().equals(u.getName())) { + + // it's a super user. 
super users are granted everything + if (GRANT_LOG.isDebugEnabled()) { + GRANT_LOG.debug("Authorization GRANTED - user is a superuser: " + session.principal() + ", operation: " + operation + ", resource: " + resource); + } + return true; + } + } + + if (!(principal instanceof JwtKafkaPrincipal)) { + // if user wasn't authenticated over OAuth, and simple ACL delegation is enabled + // we delegate to simple ACL + return delegateIfRequested(session, operation, resource, null); + } + + // + // Check if authorization grants are available + // If not, fetch authorization grants and store them in the token + // + + JwtKafkaPrincipal jwtPrincipal = (JwtKafkaPrincipal) principal; + + BearerTokenWithPayload token = jwtPrincipal.getJwt(); + JsonNode authz = (JsonNode) token.getPayload(); + + if (authz == null) { + // fetch authorization grants + try { + authz = fetchAuthorizationGrants(token.value()); + if (authz == null) { + authz = new ObjectNode(JSONUtil.MAPPER.getNodeFactory()); + } + } catch (HttpException e) { + if (e.getStatus() == 403) { + authz = new ObjectNode(JSONUtil.MAPPER.getNodeFactory()); + } else { + log.warn("Unexpected status while fetching authorization data - will retry next time: " + e.getMessage()); + } + } + if (authz != null) { + // store authz grants in the token so they are available for subsequent requests + token.setPayload(authz); + } + } + + if (log.isDebugEnabled()) { + log.debug("authorize(): " + authz); + } + + // + // Iterate authorization rules and try to find a match + // + + if (authz != null) { + Iterator it = authz.iterator(); + while (it.hasNext()) { + JsonNode permission = it.next(); + String name = permission.get("rsname").asText(); + ResourceSpec resourceSpec = ResourceSpec.of(name); + if (resourceSpec.match(clusterName, resource.resourceType().name(), resource.name())) { + + ScopesSpec grantedScopes = ScopesSpec.of( + validateScopes( + JSONUtil.asListOfString(permission.get("scopes")))); + + if (grantedScopes.isGranted(operation.name())) { + if (GRANT_LOG.isDebugEnabled()) { + GRANT_LOG.debug("Authorization GRANTED - cluster: " + clusterName + ",user: " + session.principal() + ", operation: " + operation + + ", resource: " + resource + "\nGranted scopes for resource (" + resourceSpec + "): " + grantedScopes); + } + return true; + } + } + } + } + return delegateIfRequested(session, operation, resource, authz); + } + + static List validateScopes(List scopes) { + List enumScopes = new ArrayList<>(scopes.size()); + for (String name: scopes) { + try { + enumScopes.add(ScopesSpec.AuthzScope.valueOf(name)); + } catch (Exception e) { + log.warn("[IGNORED] Invalid scope detected in authorization scopes list: " + name); + } + } + return enumScopes; + } + + boolean delegateIfRequested(RequestChannel.Session session, Operation operation, Resource resource, JsonNode authz) { + String nonAuthMessageFragment = session.principal() instanceof JwtKafkaPrincipal ? "" : " non-oauth"; + if (delegateToKafkaACL) { + boolean granted = super.authorize(session, operation, resource); + + boolean grantLogOn = granted && GRANT_LOG.isDebugEnabled(); + boolean denyLogOn = !granted && DENY_LOG.isDebugEnabled(); + + if (grantLogOn || denyLogOn) { + String status = granted ? 
"GRANTED" : "DENIED"; + String message = "Authorization " + status + " by ACL -" + nonAuthMessageFragment + " user: " + session.principal() + ", operation: " + operation + ", resource: " + resource; + + if (grantLogOn) { + GRANT_LOG.debug(message); + } else if (denyLogOn) { + DENY_LOG.debug(message); + } + } + return granted; + } + + if (DENY_LOG.isDebugEnabled()) { + DENY_LOG.debug("Authorization DENIED -" + nonAuthMessageFragment + " user: " + session.principal() + + " cluster: " + clusterName + ", operation: " + operation + ", resource: " + resource + "\n permissions: " + authz); + } + return false; + } + + JsonNode fetchAuthorizationGrants(String token) { + + String authorization = "Bearer " + token; + + StringBuilder body = new StringBuilder("audience=").append(urlencode(clientId)) + .append("&grant_type=").append(urlencode("urn:ietf:params:oauth:grant-type:uma-ticket")) + .append("&response_mode=permissions"); + + JsonNode response; + + try { + response = post(tokenEndpointUrl, socketFactory, hostnameVerifier, authorization, + "application/x-www-form-urlencoded", body.toString(), JsonNode.class); + + } catch (HttpException e) { + throw e; + } catch (Exception e) { + throw new RuntimeException("Failed to fetch authorization data from authorization server: ", e); + } + + return response; + } + + @Override + public void addAcls(Set acls, Resource resource) { + if (!delegateToKafkaACL) { + throw new RuntimeException("Simple ACL delegation not enabled"); + } + super.addAcls(acls, resource); + } + + @Override + public boolean removeAcls(Set aclsTobeRemoved, Resource resource) { + if (!delegateToKafkaACL) { + throw new RuntimeException("Simple ACL delegation not enabled"); + } + return super.removeAcls(aclsTobeRemoved, resource); + } + + @Override + public boolean removeAcls(Resource resource) { + if (!delegateToKafkaACL) { + throw new RuntimeException("Simple ACL delegation not enabled"); + } + return super.removeAcls(resource); + } + + @Override + public Set getAcls(Resource resource) { + if (!delegateToKafkaACL) { + throw new RuntimeException("Simple ACL delegation not enabled"); + } + return super.getAcls(resource); + } + + @Override + public scala.collection.immutable.Map> getAcls(KafkaPrincipal principal) { + if (!delegateToKafkaACL) { + throw new RuntimeException("Simple ACL delegation not enabled"); + } + return super.getAcls(principal); + } + + @Override + public scala.collection.immutable.Map> getAcls() { + if (!delegateToKafkaACL) { + throw new RuntimeException("Simple ACL delegation not enabled"); + } + return super.getAcls(); + } + + @Override + public void close() { + super.close(); + } +} diff --git a/oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/ResourceSpec.java b/oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/ResourceSpec.java new file mode 100644 index 00000000..04b5f299 --- /dev/null +++ b/oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/ResourceSpec.java @@ -0,0 +1,159 @@ +/* + * Copyright 2017-2019, Strimzi authors. + * License: Apache License 2.0 (see the file LICENSE or http://apache.org/licenses/LICENSE-2.0.html). + */ +package io.strimzi.kafka.oauth.server.authorizer; + +import java.util.Locale; + +/** + * ResourceSpec is used to parse resource matching pattern and to perform matching to specific resource. 
+ */
+public class ResourceSpec {
+
+    public enum ResourceType {
+        Topic,
+        Group,
+        Cluster,
+        TransactionalId,
+        DelegationToken
+    }
+
+    private String clusterName;
+    private boolean clusterStartsWith;
+
+    private ResourceType resourceType;
+    private String resourceName;
+    private boolean resourceStartsWith;
+
+
+    public String getClusterName() {
+        return clusterName;
+    }
+
+    public boolean isClusterStartsWith() {
+        return clusterStartsWith;
+    }
+
+    public ResourceType getResourceType() {
+        return resourceType;
+    }
+
+    public String getResourceName() {
+        return resourceName;
+    }
+
+    public boolean isResourceStartsWith() {
+        return resourceStartsWith;
+    }
+
+    /**
+     * Match specific resource's cluster, type and name to this ResourceSpec
+     *
+     * If clusterName is set then cluster must match, otherwise cluster match is ignored.
+     * Type and name are always matched.
+     *
+     * @param cluster Kafka cluster name such as: my-kafka
+     * @param type Resource type such as: Topic, Group
+     * @param name Resource name such as: my-topic
+     * @return true if cluster, type and name match this resource spec
+     */
+    public boolean match(String cluster, String type, String name) {
+        if (clusterName != null) {
+            if (cluster == null) {
+                throw new IllegalArgumentException("cluster == null");
+            }
+            if (clusterStartsWith) {
+                if (!cluster.startsWith(clusterName)) {
+                    return false;
+                }
+            } else if (!cluster.equals(clusterName)) {
+                return false;
+            }
+        }
+
+        if (type == null) {
+            throw new IllegalArgumentException("type == null");
+        }
+        if (resourceType == null || !type.equals(resourceType.name())) {
+            return false;
+        }
+
+        if (name == null) {
+            throw new IllegalArgumentException("name == null");
+        }
+        if (resourceStartsWith) {
+            if (!name.startsWith(resourceName)) {
+                return false;
+            }
+        } else if (!name.equals(resourceName)) {
+            return false;
+        }
+
+        return true;
+    }
+
+    public static ResourceSpec of(String name) {
+        ResourceSpec spec = new ResourceSpec();
+
+        String[] parts = name.split(",");
+        for (String part: parts) {
+            String[] subSpec = part.split(":");
+            if (subSpec.length != 2) {
+                throw new RuntimeException("Failed to parse Resource: " + name + " - part doesn't follow TYPE:NAME pattern: " + part);
+            }
+
+            String type = subSpec[0].toLowerCase(Locale.US);
+            String pat = subSpec[1];
+            if (type.equals("kafka-cluster")) {
+                if (spec.clusterName != null) {
+                    throw new RuntimeException("Failed to parse Resource: " + name + " - cluster part specified multiple times");
+                }
+                if (pat.endsWith("*")) {
+                    spec.clusterName = pat.substring(0, pat.length() - 1);
+                    spec.clusterStartsWith = true;
+                } else {
+                    spec.clusterName = pat;
+                }
+                continue;
+            }
+
+            if (spec.resourceName != null) {
+                throw new RuntimeException("Failed to parse Resource: " + name + " - resource part specified multiple times");
+            }
+
+            if (type.equals("topic")) {
+                spec.resourceType = ResourceType.Topic;
+            } else if (type.equals("group")) {
+                spec.resourceType = ResourceType.Group;
+            } else if (type.equals("cluster")) {
+                spec.resourceType = ResourceType.Cluster;
+            } else if (type.equals("transactionalid")) {
+                spec.resourceType = ResourceType.TransactionalId;
+            } else if (type.equals("delegationtoken")) {
+                spec.resourceType = ResourceType.DelegationToken;
+            } else {
+                throw new RuntimeException("Failed to parse Resource: " + name + " - unsupported segment type: " + subSpec[0]);
+            }
+
+            if (pat.endsWith("*")) {
+                spec.resourceName = pat.substring(0, pat.length() - 1);
+                spec.resourceStartsWith = true;
+            } else {
+                spec.resourceName = pat;
+            }
+        }
+
+        return spec;
+    }
+
+    @Override
+    public String toString() {
+        return (clusterName != null ?
+                ("kafka-cluster:" + clusterName + (clusterStartsWith ? "*" : "") + ",")
+                : "")
+                + (resourceName != null ?
+                (resourceType + ":" + resourceName + (resourceStartsWith ? "*" : ""))
+                : "");
+    }
+}
diff --git a/oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/ScopesSpec.java b/oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/ScopesSpec.java
new file mode 100644
index 00000000..81e30a2e
--- /dev/null
+++ b/oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/ScopesSpec.java
@@ -0,0 +1,45 @@
+/*
+ * Copyright 2017-2019, Strimzi authors.
+ * License: Apache License 2.0 (see the file LICENSE or http://apache.org/licenses/LICENSE-2.0.html).
+ */
+package io.strimzi.kafka.oauth.server.authorizer;
+
+import java.util.EnumSet;
+import java.util.List;
+
+public class ScopesSpec {
+
+    public enum AuthzScope {
+        Create,
+        Read,
+        Write,
+        Delete,
+        Alter,
+        Describe,
+        AlterConfigs,
+        DescribeConfigs,
+        ClusterAction,
+        IdempotentWrite
+    }
+
+    private EnumSet<AuthzScope> granted;
+
+    private ScopesSpec(EnumSet<AuthzScope> grants) {
+        this.granted = grants;
+    }
+
+
+    static ScopesSpec of(List<AuthzScope> scopes) {
+        return new ScopesSpec(EnumSet.copyOf(scopes));
+    }
+
+    public boolean isGranted(String operation) {
+        AuthzScope scope = AuthzScope.valueOf(operation);
+        return granted.contains(scope);
+    }
+
+    @Override
+    public String toString() {
+        return String.valueOf(granted);
+    }
+}
diff --git a/oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/UserSpec.java b/oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/UserSpec.java
new file mode 100644
index 00000000..644a985b
--- /dev/null
+++ b/oauth-keycloak-authorizer/src/main/java/io/strimzi/kafka/oauth/server/authorizer/UserSpec.java
@@ -0,0 +1,37 @@
+/*
+ * Copyright 2017-2019, Strimzi authors.
+ * License: Apache License 2.0 (see the file LICENSE or http://apache.org/licenses/LICENSE-2.0.html).
+ */ +package io.strimzi.kafka.oauth.server.authorizer; + +public class UserSpec { + + private final String type; + private final String name; + + private UserSpec(String type, String name) { + this.type = type; + this.name = name; + } + + public String getType() { + return type; + } + + public String getName() { + return name; + } + + + public static UserSpec of(String principal) { + int pos = principal.indexOf(':'); + if (pos <= 0) { + throw new IllegalArgumentException("Invalid user specification: " + principal); + } + return new UserSpec(principal.substring(0, pos), principal.substring(pos + 1)); + } + + public String toString() { + return super.toString() + " " + type + ":" + name; + } +} diff --git a/oauth-server/src/main/java/io/strimzi/kafka/oauth/server/JaasServerOauthValidatorCallbackHandler.java b/oauth-server/src/main/java/io/strimzi/kafka/oauth/server/JaasServerOauthValidatorCallbackHandler.java index 236b9bc6..e0b17a5b 100644 --- a/oauth-server/src/main/java/io/strimzi/kafka/oauth/server/JaasServerOauthValidatorCallbackHandler.java +++ b/oauth-server/src/main/java/io/strimzi/kafka/oauth/server/JaasServerOauthValidatorCallbackHandler.java @@ -6,6 +6,7 @@ import io.strimzi.kafka.oauth.common.Config; import io.strimzi.kafka.oauth.common.ConfigUtil; +import io.strimzi.kafka.oauth.common.BearerTokenWithPayload; import io.strimzi.kafka.oauth.validator.JWTSignatureValidator; import io.strimzi.kafka.oauth.validator.OAuthIntrospectionValidator; import io.strimzi.kafka.oauth.common.TokenInfo; @@ -14,7 +15,6 @@ import org.apache.kafka.common.errors.AuthenticationException; import org.apache.kafka.common.security.auth.AuthenticateCallbackHandler; import org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule; -import org.apache.kafka.common.security.oauthbearer.OAuthBearerToken; import org.apache.kafka.common.security.oauthbearer.OAuthBearerValidatorCallback; import org.keycloak.jose.jws.JWSInput; import org.keycloak.jose.jws.JWSInputException; @@ -156,7 +156,19 @@ private void handleCallback(OAuthBearerValidatorCallback callback) { try { TokenInfo ti = validateToken(token); - callback.token(new OAuthBearerToken() { + callback.token(new BearerTokenWithPayload() { + + private Object payload; + + @Override + public Object getPayload() { + return payload; + } + + @Override + public void setPayload(Object value) { + payload = value; + } @Override public String value() { @@ -189,6 +201,7 @@ public String principalName() { public Long startTimeMs() { return ti.issuedAtMs(); } + }); } catch (TokenValidationException e) { diff --git a/pom.xml b/pom.xml index 1fffcb7a..b1ee1d70 100644 --- a/pom.xml +++ b/pom.xml @@ -68,6 +68,7 @@ 1.6.3 2.3.0 + 2.12.8 2.9.10.2 4.12 1.7.26 @@ -90,6 +91,7 @@ oauth-common oauth-client oauth-server + oauth-keycloak-authorizer examples/consumer examples/producer @@ -111,6 +113,16 @@ kafka-clients ${kafka.version}
+ + org.apache.kafka + kafka_2.12 + ${kafka.version} + + + org.scala-lang + scala-library + ${scala.version} + io.strimzi kafka-oauth-common @@ -126,6 +138,11 @@ kafka-oauth-server ${project.version} + + io.strimzi + kafka-oauth-keycloak-authorizer + ${project.version} + com.fasterxml.jackson.core jackson-databind @@ -339,6 +356,18 @@ + + kafka-2_4 + + + kafka_2_4 + + + + 2.4.0 + 2.12.10 + + diff --git a/testsuite/access-token-introspection-keycloak-test/docker-compose.yml b/testsuite/access-token-introspection-keycloak-test/docker-compose.yml index cc2e3165..b454764c 100644 --- a/testsuite/access-token-introspection-keycloak-test/docker-compose.yml +++ b/testsuite/access-token-introspection-keycloak-test/docker-compose.yml @@ -24,7 +24,7 @@ services: - KEYCLOAK_PASSWORD=admin - KEYCLOAK_HTTPS_PORT=8443 - PROXY_ADDRESS_FORWARDING=true - - KEYCLOAK_IMPORT=/opt/jboss/keycloak/realms/demo.json + - KEYCLOAK_IMPORT=/opt/jboss/keycloak/realms/demo-realm.json kafka: image: strimzi/kafka:latest-kafka-2.3.0 diff --git a/testsuite/client-secret-jwt-keycloak-authz-test/arquillian.xml b/testsuite/client-secret-jwt-keycloak-authz-test/arquillian.xml new file mode 100644 index 00000000..9e1b2fc5 --- /dev/null +++ b/testsuite/client-secret-jwt-keycloak-authz-test/arquillian.xml @@ -0,0 +1,26 @@ + + + + + docker-compose.yml + + zookeeper: + await: + strategy: sleeping + sleepTime: 5 s + kafka: + await: + strategy: log + match: '[KafkaServer id=1] started' + timeout: 120 + keycloak: + await: + strategy: log + match: 'regexp:.* Keycloak .* started in .*' + timeout: 120 + + + \ No newline at end of file diff --git a/testsuite/client-secret-jwt-keycloak-authz-test/docker-compose.yml b/testsuite/client-secret-jwt-keycloak-authz-test/docker-compose.yml new file mode 100644 index 00000000..1910322d --- /dev/null +++ b/testsuite/client-secret-jwt-keycloak-authz-test/docker-compose.yml @@ -0,0 +1,94 @@ +version: '3' + +services: + keycloak: + image: jboss/keycloak + container_name: keycloak + ports: + - 8080:8080 + - 8443:8443 + volumes: + - ${PWD}/../docker/keycloak/scripts:/opt/jboss/keycloak/ssl + - ${PWD}/../target/keycloak/certs:/opt/jboss/keycloak/standalone/configuration/certs + - ${PWD}/../docker/keycloak/realms:/opt/jboss/keycloak/realms + + entrypoint: "" + + command: + - /bin/bash + - -c + - cd /opt/jboss/keycloak && bin/jboss-cli.sh --file=ssl/keycloak-ssl.cli && rm -rf standalone/configuration/standalone_xml_history/current && cd .. 
&& /opt/jboss/tools/docker-entrypoint.sh -Dkeycloak.profile.feature.upload_scripts=enabled -b 0.0.0.0 + + environment: + - KEYCLOAK_USER=admin + - KEYCLOAK_PASSWORD=admin + - KEYCLOAK_HTTPS_PORT=8443 + - PROXY_ADDRESS_FORWARDING=true + - KEYCLOAK_IMPORT=/opt/jboss/keycloak/realms/kafka-authz-realm.json + + # Wait for up to 90 seconds for service to be up before ARQ Cube's wait_for_it gives up + - TIMEOUT=90 + + kafka: + image: strimzi/kafka:latest-kafka-2.3.0 + container_name: kafka + ports: + - 9092:9092 + volumes: + - ${PWD}/target/kafka/libs:/opt/kafka/libs/strimzi + - ${PWD}/../docker/kafka/config:/opt/kafka/config/strimzi + - ${PWD}/../docker/kafka/scripts:/opt/kafka/strimzi + command: + - /bin/bash + - -c + - cd /opt/kafka/strimzi && ./start.sh + environment: + - KAFKA_BROKER_ID=1 + - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 + - KAFKA_LISTENERS=CLIENT://kafka:9092 + - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:SASL_PLAINTEXT + - KAFKA_SASL_ENABLED_MECHANISMS=OAUTHBEARER + - KAFKA_INTER_BROKER_LISTENER_NAME=CLIENT + - KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL=OAUTHBEARER + - KAFKA_LISTENER_NAME_CLIENT_OAUTHBEARER_SASL_JAAS_CONFIG=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required; + - KAFKA_LISTENER_NAME_CLIENT_OAUTHBEARER_SASL_LOGIN_CALLBACK_HANDLER_CLASS=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler + - KAFKA_LISTENER_NAME_CLIENT_OAUTHBEARER_SASL_SERVER_CALLBACK_HANDLER_CLASS=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler + - KAFKA_SUPER_USERS=User:service-account-kafka + - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 + + - KAFKA_AUTHORIZER_CLASS_NAME=io.strimzi.kafka.oauth.server.authorizer.KeycloakRBACAuthorizer + - KAFKA_PRINCIPAL_BUILDER_CLASS=io.strimzi.kafka.oauth.server.authorizer.JwtKafkaPrincipalBuilder + - KAFKA_STRIMZI_AUTHORIZATION_KAFKA_CLUSTER_NAME=cluster2 + + # Authentication config + - OAUTH_CLIENT_ID=kafka + - OAUTH_CLIENT_SECRET=kafka-secret + - OAUTH_TOKEN_ENDPOINT_URI=http://${KEYCLOAK_HOST:-keycloak}:8080/auth/realms/${REALM:-kafka-authz}/protocol/openid-connect/token + + # Validation config + - OAUTH_VALID_ISSUER_URI=http://${KEYCLOAK_HOST:-keycloak}:8080/auth/realms/${REALM:-kafka-authz} + - OAUTH_JWKS_ENDPOINT_URI=http://${KEYCLOAK_HOST:-keycloak}:8080/auth/realms/${REALM:-kafka-authz}/protocol/openid-connect/certs + + # username extraction from JWT token claim + - OAUTH_USERNAME_CLAIM=preferred_username + + # For start.sh script to know where the keycloak is listening + - KEYCLOAK_HOST=${KEYCLOAK_HOST:-keycloak} + - REALM=${REALM:-kafka-authz} + + # Wait for up to 90 seconds for service to be up before ARQ Cube's wait_for_it gives up + - TIMEOUT=90 + + zookeeper: + image: strimzi/zookeeper:0.11.4-kafka-2.1.0 + container_name: zookeeper + ports: + - 2181:2181 + volumes: + - ${PWD}/../docker/zookeeper/scripts:/opt/kafka/strimzi + command: + - /bin/bash + - -c + - cd /opt/kafka/strimzi && ./start.sh + environment: + - LOG_DIR=/tmp/logs diff --git a/testsuite/client-secret-jwt-keycloak-authz-test/pom.xml b/testsuite/client-secret-jwt-keycloak-authz-test/pom.xml new file mode 100644 index 00000000..89db03c0 --- /dev/null +++ b/testsuite/client-secret-jwt-keycloak-authz-test/pom.xml @@ -0,0 +1,114 @@ + + + + 4.0.0 + + + io.strimzi.oauth.testsuite + kafka-oauth-testsuite + 1.0.0-SNAPSHOT + + + client-secret-jwt-keycloak-authz-test + + + + Apache License, Version 2.0 + https://www.apache.org/licenses/LICENSE-2.0.txt + + + + + ../.. 
+ + + + + org.arquillian.universe + arquillian-junit-standalone + pom + + + junit + junit + ${version.junit} + + + org.arquillian.universe + arquillian-cube-docker + pom + + + + io.strimzi + kafka-oauth-common + + + io.strimzi + kafka-oauth-client + + + org.apache.kafka + kafka-clients + + + org.slf4j + slf4j-simple + + + + + + + org.apache.maven.plugins + maven-dependency-plugin + ${maven.dependency.version} + + + copy + validate + + copy + + + + + + + io.strimzi + kafka-oauth-keycloak-authorizer + + + io.strimzi + kafka-oauth-client + + + io.strimzi + kafka-oauth-server + + + io.strimzi + kafka-oauth-common + + + org.keycloak + keycloak-core + + + org.keycloak + keycloak-common + + + org.bouncycastle + bcprov-jdk15on + + + target/kafka/libs + false + true + + + + + \ No newline at end of file diff --git a/testsuite/client-secret-jwt-keycloak-authz-test/src/test/java/io/strimzi/testsuite/oauth/KeycloakClientCredentialsWithJwtValidationAuthzTest.java b/testsuite/client-secret-jwt-keycloak-authz-test/src/test/java/io/strimzi/testsuite/oauth/KeycloakClientCredentialsWithJwtValidationAuthzTest.java new file mode 100644 index 00000000..8ac7f6da --- /dev/null +++ b/testsuite/client-secret-jwt-keycloak-authz-test/src/test/java/io/strimzi/testsuite/oauth/KeycloakClientCredentialsWithJwtValidationAuthzTest.java @@ -0,0 +1,531 @@ +/* + * Copyright 2017-2020, Strimzi authors. + * License: Apache License 2.0 (see the file LICENSE or http://apache.org/licenses/LICENSE-2.0.html). + */ +package io.strimzi.testsuite.oauth; + +import com.fasterxml.jackson.databind.JsonNode; +import com.fasterxml.jackson.databind.node.ObjectNode; +import io.strimzi.kafka.oauth.client.ClientConfig; +import io.strimzi.kafka.oauth.common.ConfigProperties; +import io.strimzi.kafka.oauth.common.HttpUtil; +import org.apache.kafka.clients.admin.AdminClient; +import org.apache.kafka.clients.admin.NewTopic; +import org.apache.kafka.clients.consumer.Consumer; +import org.apache.kafka.clients.consumer.ConsumerConfig; +import org.apache.kafka.clients.consumer.ConsumerRecords; +import org.apache.kafka.clients.consumer.KafkaConsumer; +import org.apache.kafka.clients.producer.KafkaProducer; +import org.apache.kafka.clients.producer.Producer; +import org.apache.kafka.clients.producer.ProducerConfig; +import org.apache.kafka.clients.producer.ProducerRecord; +import org.apache.kafka.common.TopicPartition; +import org.apache.kafka.common.errors.TopicAuthorizationException; +import org.apache.kafka.common.serialization.StringDeserializer; +import org.apache.kafka.common.serialization.StringSerializer; +import org.jboss.arquillian.junit.Arquillian; +import org.junit.Assert; +import org.junit.Test; +import org.junit.runner.RunWith; + +import java.io.IOException; +import java.net.URI; +import java.time.Duration; +import java.util.Arrays; +import java.util.HashMap; +import java.util.Iterator; +import java.util.Properties; +import java.util.concurrent.ExecutionException; + +import static io.strimzi.kafka.oauth.common.OAuthAuthenticator.loginWithClientSecret; +import static io.strimzi.kafka.oauth.common.OAuthAuthenticator.urlencode; + +@RunWith(Arquillian.class) +public class KeycloakClientCredentialsWithJwtValidationAuthzTest { + + private static final String HOST = "keycloak"; + private static final String REALM = "kafka-authz"; + private static final String TOKEN_ENDPOINT_URI = "http://" + HOST + ":8080/auth/realms/" + REALM + "/protocol/openid-connect/token"; + + private static final String TEAM_A_CLIENT = "team-a-client"; + private static 
final String TEAM_B_CLIENT = "team-b-client"; + private static final String BOB = "bob"; + + private static final String TOPIC_A = "a_messages"; + private static final String TOPIC_B = "b_messages"; + private static final String TOPIC_X = "x_messages"; + + + private HashMap tokens; + + private Producer teamAProducer; + private Consumer teamAConsumer; + + private Producer teamBProducer; + private Consumer teamBConsumer; + + + @Test + public void doTest() throws Exception { + + System.out.println("==== KeycloakClientCredentialsWithJwtValidationAuthzTest - Tests Authorization ===="); + + Properties defaults = new Properties(); + defaults.setProperty(ClientConfig.OAUTH_TOKEN_ENDPOINT_URI, TOKEN_ENDPOINT_URI); + defaults.setProperty(ClientConfig.OAUTH_USERNAME_CLAIM, "preferred_username"); + + ConfigProperties.resolveAndExportToSystemProperties(defaults); + + Properties p = System.getProperties(); + for (Object key: p.keySet()) { + System.out.println("" + key + "=" + p.get(key)); + } + + fixBadlyImportedAuthzSettings(); + + tokens = authenticateAllActors(); + + testTeamAClientPart1(); + + testTeamBClientPart1(); + + createTopicAsClusterManager(); + + testTeamAClientPart2(); + + testTeamBClientPart2(); + + testClusterManager(); + } + + + Producer getProducer(final String name) { + return recycleProducer(name, true); + } + + Producer newProducer(final String name) { + return recycleProducer(name, false); + } + + Producer recycleProducer(final String name, boolean recycle) { + switch (name) { + case TEAM_A_CLIENT: + if (teamAProducer != null) { + if (recycle) { + return teamAProducer; + } else { + teamAProducer.close(); + } + } + break; + case TEAM_B_CLIENT: + if (teamBProducer != null) { + if (recycle) { + return teamBProducer; + } else { + teamBProducer.close(); + } + } + break; + default: + throw new IllegalArgumentException("Unsupported producer: " + name); + } + + Properties producerProps = buildProducerConfig(tokens.get(name)); + Producer producer = new KafkaProducer<>(producerProps); + + if (TEAM_A_CLIENT.equals(name)) { + teamAProducer = producer; + } else { + teamBProducer = producer; + } + return producer; + } + + Consumer newConsumer(final String name, String topic) { + switch (name) { + case TEAM_A_CLIENT: + if (teamAConsumer != null) { + teamAConsumer.close(); + } + break; + case TEAM_B_CLIENT: + if (teamBConsumer != null) { + teamBConsumer.close(); + } + break; + default: + throw new IllegalArgumentException("Unsupported consumer: " + name); + } + + Properties consumerProps = buildConsumerConfig(tokens.get(name)); + consumerProps.setProperty(ConsumerConfig.GROUP_ID_CONFIG, groupFor(topic)); + Consumer consumer = new KafkaConsumer<>(consumerProps); + + if (TEAM_A_CLIENT.equals(name)) { + teamAConsumer = consumer; + } else { + teamBConsumer = consumer; + } + return consumer; + } + + void createTopicAsClusterManager() throws Exception { + + Properties bobAdminProps = buildAdminClientConfig(tokens.get(BOB)); + AdminClient admin = AdminClient.create(bobAdminProps); + + // + // Create x_* topic + // + admin.createTopics(Arrays.asList(new NewTopic[]{ + new NewTopic(TOPIC_X, 1, (short) 1) + })).all().get(); + } + + private void testClusterManager() throws Exception { + + Properties bobAdminProps = buildProducerConfig(tokens.get(BOB)); + Producer producer = new KafkaProducer<>(bobAdminProps); + + Properties consumerProps = buildConsumerConfig(tokens.get(BOB)); + Consumer consumer = new KafkaConsumer<>(consumerProps); + + // + // bob should succeed producing to x_* topic + // + 
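+        // ('bob' is a member of the ClusterManager-cluster2 group and the broker's cluster name is set to
+        // 'cluster2', so he is expected to hold cluster-wide grants; every operation below should succeed)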
+        //
+        // bob should succeed producing to x_* topic
+        //
+        produce(producer, TOPIC_X);
+
+        //
+        // bob should succeed producing to a_* topic
+        //
+        produce(producer, TOPIC_A);
+
+        //
+        // bob should succeed producing to b_* topic
+        //
+        produce(producer, TOPIC_B);
+
+        //
+        // bob should succeed producing to non-existing topic
+        //
+        produce(producer, "non-existing-topic");
+
+        //
+        // bob should succeed consuming from x_* topic
+        //
+        consume(consumer, TOPIC_X);
+
+        //
+        // bob should succeed consuming from a_* topic
+        //
+        consume(consumer, TOPIC_A);
+
+        //
+        // bob should succeed consuming from b_* topic
+        //
+        consume(consumer, TOPIC_B);
+
+        //
+        // bob should succeed consuming from "non-existing-topic" - which now exists
+        //
+        consume(consumer, "non-existing-topic");
+    }
+
+
+    private void testTeamAClientPart1() throws Exception {
+
+        Producer<String, String> teamAProducer = getProducer(TEAM_A_CLIENT);
+
+        //
+        // team-a-client should fail to produce to b_* topic
+        //
+        produceFail(teamAProducer, TOPIC_B);
+
+        // Re-init producer because message to topicB is stuck in the queue, and any subsequent
+        // message to another queue won't be handled until first message makes it through.
+        teamAProducer = newProducer(TEAM_A_CLIENT);
+
+        //
+        // team-a-client should succeed producing to a_* topic
+        //
+        produce(teamAProducer, TOPIC_A);
+
+        //
+        // team-a-client should also fail producing to non-existing x_* topic (fails to create it)
+        //
+        produceFail(teamAProducer, TOPIC_X);
+
+        Consumer<String, String> teamAConsumer = newConsumer(TEAM_A_CLIENT, TOPIC_B);
+
+        //
+        // team-a-client should fail consuming from b_* topic
+        //
+        consumeFail(teamAConsumer, TOPIC_B);
+
+        // Close and re-init consumer
+        teamAConsumer = newConsumer(TEAM_A_CLIENT, TOPIC_A);
+
+        //
+        // team-a-client should succeed consuming from a_* topic
+        //
+        consume(teamAConsumer, TOPIC_A);
+
+        //
+        // team-a-client should fail consuming from x_* topic - it doesn't exist
+        //
+        consumeFail(teamAConsumer, TOPIC_X);
+    }
+
+
+    private void testTeamBClientPart1() throws Exception {
+
+        Producer<String, String> teamBProducer = getProducer(TEAM_B_CLIENT);
+
+        //
+        // team-b-client should fail to produce to a_* topic
+        //
+        produceFail(teamBProducer, TOPIC_A);
+
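+        // team-b-client holds the 'Dev Team B' realm role; the realm policies grant it full
+        // access to 'b_*' topics (scoped to cluster 'cluster2' in the resource definition)
+        // plus Describe/Read on 'x_*' topics and consumer groups, but nothing on 'a_*' topics.
+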
+        // Re-init producer because message to topicA is stuck in the queue, and any subsequent
+        // message to another queue won't be handled until first message makes it through.
+        teamBProducer = newProducer(TEAM_B_CLIENT);
+
+        //
+        // team-b-client should succeed producing to b_* topic
+        //
+        produce(teamBProducer, TOPIC_B);
+
+        //
+        // team-b-client should fail to produce to x_* topic
+        //
+        produceFail(teamBProducer, TOPIC_X);
+
+
+        Consumer<String, String> teamBConsumer = newConsumer(TEAM_B_CLIENT, TOPIC_A);
+
+        //
+        // team-b-client should fail consuming from a_* topic
+        //
+        consumeFail(teamBConsumer, TOPIC_A);
+
+        // Close and re-init consumer
+        teamBConsumer = newConsumer(TEAM_B_CLIENT, TOPIC_B);
+
+        //
+        // team-b-client should succeed consuming from b_* topic
+        //
+        consume(teamBConsumer, TOPIC_B);
+    }
+
+    private void testTeamAClientPart2() throws Exception {
+
+        //
+        // team-a-client should succeed producing to existing x_* topic
+        //
+        Producer<String, String> teamAProducer = newProducer(TEAM_A_CLIENT);
+
+        produce(teamAProducer, TOPIC_X);
+
+        //
+        // team-a-client should fail reading from x_* topic
+        //
+        Consumer<String, String> teamAConsumer = newConsumer(TEAM_A_CLIENT, TOPIC_A);
+        consumeFail(teamAConsumer, TOPIC_X);
+    }
+
+
+    private void testTeamBClientPart2() throws Exception {
+        //
+        // team-b-client should succeed consuming from x_* topic
+        //
+        Consumer<String, String> teamBConsumer = newConsumer(TEAM_B_CLIENT, TOPIC_B);
+        consume(teamBConsumer, TOPIC_X);
+
+
+        //
+        // team-b-client should fail producing to x_* topic
+        //
+        Producer<String, String> teamBProducer = newProducer(TEAM_B_CLIENT);
+        produceFail(teamBProducer, TOPIC_X);
+    }
+
+
+    /**
+     * Use Keycloak Admin API to update Authorization Services 'decisionStrategy' on 'kafka' client to AFFIRMATIVE
+     *
+     * @throws IOException
+     */
+    static void fixBadlyImportedAuthzSettings() throws IOException {
+
+        URI masterTokenEndpoint = URI.create("http://" + HOST + ":8080/auth/realms/master/protocol/openid-connect/token");
+
+        String token = loginWithUsernamePassword(masterTokenEndpoint,
+                "admin", "admin", "admin-cli");
+
+        String authorization = "Bearer " + token;
+
+        // This is quite a round-about way but here it goes
+
+        // We first need to identify the 'id' of the 'kafka' client by fetching the clients
+        JsonNode clients = HttpUtil.get(URI.create("http://" + HOST + ":8080/auth/admin/realms/kafka-authz/clients"),
+                authorization, JsonNode.class);
+
+        String id = null;
+
+        // iterate over clients
+        Iterator<JsonNode> it = clients.iterator();
+        while (it.hasNext()) {
+            JsonNode client = it.next();
+            String clientId = client.get("clientId").asText();
+            if ("kafka".equals(clientId)) {
+                id = client.get("id").asText();
+                break;
+            }
+        }
+
+        if (id == null) {
+            throw new IllegalStateException("It seems that 'kafka' client isn't configured");
+        }
+
+        URI authzUri = URI.create("http://" + HOST + ":8080/auth/admin/realms/kafka-authz/clients/" + id + "/authz/resource-server");
+
+        // Now we fetch from this client's resource-server the current configuration
+        ObjectNode authzConf = (ObjectNode) HttpUtil.get(authzUri, authorization, JsonNode.class);
+
+        // And we update the configuration and send it back
+        authzConf.put("decisionStrategy", "AFFIRMATIVE");
+        HttpUtil.put(authzUri, authorization, "application/json", authzConf.toString());
+    }
+
+
+    static String groupFor(String topic) {
+        return topic + "-group";
+    }
+
+    static HashMap<String, String> authenticateAllActors() throws IOException {
+
+        HashMap<String, String> tokens = new HashMap<>();
+        tokens.put(TEAM_A_CLIENT, loginWithClientSecret(URI.create(TOKEN_ENDPOINT_URI), null, null,
+                TEAM_A_CLIENT, TEAM_A_CLIENT + "-secret", true).token());
+        tokens.put(TEAM_B_CLIENT, loginWithClientSecret(URI.create(TOKEN_ENDPOINT_URI), null, null,
+                TEAM_B_CLIENT, TEAM_B_CLIENT + "-secret", true).token());
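+        // The two team clients authenticate with the client_credentials grant using their
+        // client secrets; 'bob' uses the password grant through the public 'kafka-cli'
+        // client, mirroring how the Kafka command line tools would obtain a token.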
"-secret", true).token()); + tokens.put(BOB, loginWithUsernamePassword(URI.create(TOKEN_ENDPOINT_URI), + BOB, BOB + "-password", "kafka-cli")); + return tokens; + } + + static void consume(Consumer consumer, String topic) { + TopicPartition partition = new TopicPartition(topic, 0); + consumer.assign(Arrays.asList(partition)); + + while (consumer.partitionsFor(topic, Duration.ofSeconds(1)).size() == 0) { + System.out.println("No assignment yet for consumer"); + } + + consumer.seekToBeginning(Arrays.asList(partition)); + ConsumerRecords records = consumer.poll(Duration.ofSeconds(10)); + + Assert.assertTrue("Got message", records.count() >= 1); + } + + static void consumeFail(Consumer consumer, String topic) { + TopicPartition partition = new TopicPartition(topic, 0); + consumer.assign(Arrays.asList(partition)); + + try { + while (consumer.partitionsFor(topic, Duration.ofSeconds(1)).size() == 0) { + System.out.println("No assignment yet for consumer"); + } + + consumer.seekToBeginning(Arrays.asList(partition)); + consumer.poll(Duration.ofSeconds(1)); + + Assert.fail("Should fail with TopicAuthorizationException"); + } catch (TopicAuthorizationException e) { + } + } + + static void produce(Producer producer, String topic) throws Exception { + producer.send(new ProducerRecord<>(topic, "The Message")).get(); + } + + static void produceFail(Producer producer, String topic) throws Exception { + try { + produce(producer, topic); + Assert.fail("Should not be able to send message"); + } catch (ExecutionException e) { + // should get authorization exception + Assert.assertTrue("Should fail with TopicAuthorizationException", e.getCause() instanceof TopicAuthorizationException); + } + } + + static Properties buildProducerConfig(String accessToken) { + Properties p = new Properties(); + p.setProperty("security.protocol", "SASL_PLAINTEXT"); + p.setProperty("sasl.mechanism", "OAUTHBEARER"); + p.setProperty("sasl.jaas.config", "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required " + + " oauth.access.token=\"" + accessToken + "\";"); + p.setProperty("sasl.login.callback.handler.class", "io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler"); + + p.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); + p.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); + p.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); + p.setProperty(ProducerConfig.ACKS_CONFIG, "all"); + + return p; + } + + static Properties buildAdminClientConfig(String accessToken) { + return buildProducerConfig(accessToken); + } + + static Properties buildConsumerConfig(String accessToken) { + Properties p = new Properties(); + p.setProperty("security.protocol", "SASL_PLAINTEXT"); + p.setProperty("sasl.mechanism", "OAUTHBEARER"); + p.setProperty("sasl.jaas.config", "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required " + + " oauth.access.token=\"" + accessToken + "\";"); + p.setProperty("sasl.login.callback.handler.class", "io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler"); + + p.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); + p.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName()); + p.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName()); + + p.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "consumer-group"); + p.setProperty(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "10"); + 
+        p.setProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
+
+        return p;
+    }
+
+
+    static String loginWithUsernamePassword(URI tokenEndpointUri, String username, String password, String clientId) throws IOException {
+
+        StringBuilder body = new StringBuilder("grant_type=password&username=" + urlencode(username) +
+                "&password=" + urlencode(password) + "&client_id=" + urlencode(clientId));
+
+        JsonNode result = HttpUtil.post(tokenEndpointUri,
+                null,
+                null,
+                null,
+                "application/x-www-form-urlencoded",
+                body.toString(),
+                JsonNode.class);
+
+        JsonNode token = result.get("access_token");
+        if (token == null) {
+            throw new IllegalStateException("Invalid response from authorization server: no access_token");
+        }
+        return token.asText();
+    }
+
+    void cleanup() throws Exception {
+        Properties bobAdminProps = buildAdminClientConfig(tokens.get(BOB));
+        AdminClient admin = AdminClient.create(bobAdminProps);
+
+        admin.deleteTopics(Arrays.asList(TOPIC_A, TOPIC_B, TOPIC_X, "non-existing-topic"));
+        admin.deleteConsumerGroups(Arrays.asList(groupFor(TOPIC_A), groupFor(TOPIC_B), groupFor(TOPIC_X), groupFor("non-existing-topic")));
+    }
+}
diff --git a/testsuite/docker/keycloak/realms/demo.json b/testsuite/docker/keycloak/realms/demo-realm.json
similarity index 100%
rename from testsuite/docker/keycloak/realms/demo.json
rename to testsuite/docker/keycloak/realms/demo-realm.json
diff --git a/testsuite/docker/keycloak/realms/kafka-authz-realm.json b/testsuite/docker/keycloak/realms/kafka-authz-realm.json
new file mode 100644
index 00000000..2b62aa2e
--- /dev/null
+++ b/testsuite/docker/keycloak/realms/kafka-authz-realm.json
@@ -0,0 +1,662 @@
+{
+  "realm": "kafka-authz",
+  "accessTokenLifespan": 300,
+  "ssoSessionIdleTimeout": 864000,
+  "ssoSessionMaxLifespan": 864000,
+  "enabled": true,
+  "sslRequired": "external",
+  "roles": {
+    "realm": [
+      {
+        "name": "Dev Team A",
+        "description": "Developer on Dev Team A"
+      },
+      {
+        "name": "Dev Team B",
+        "description": "Developer on Dev Team B"
+      },
+      {
+        "name": "Ops Team",
+        "description": "Operations team member"
+      }
+    ],
+    "client": {
+      "team-a-client": [],
+      "team-b-client": [],
+      "kafka-cli": [],
+      "kafka": [
+        {
+          "name": "uma_protection",
+          "clientRole": true
+        }
+      ]
+    }
+  },
+  "groups" : [
+    {
+      "name" : "ClusterManager Group",
+      "path" : "/ClusterManager Group"
+    }, {
+      "name" : "ClusterManager-cluster2 Group",
+      "path" : "/ClusterManager-cluster2 Group"
+    }, {
+      "name" : "Ops Team Group",
+      "path" : "/Ops Team Group"
+    }
+  ],
+  "users": [
+    {
+      "username" : "alice",
+      "enabled" : true,
+      "totp" : false,
+      "emailVerified" : true,
+      "firstName" : "Alice",
+      "email" : "alice@strimzi.io",
+      "credentials" : [ {
+        "type" : "password",
+        "secretData" : "{\"value\":\"KqABIiReBuRWbP4pBct3W067pNvYzeN7ILBV+8vT8nuF5cgYs2fdl2QikJT/7bGTW/PBXg6CYLwJQFYrBK9MWg==\",\"salt\":\"EPgscX9CQz7UnuZDNZxtMw==\"}",
+        "credentialData" : "{\"hashIterations\":27500,\"algorithm\":\"pbkdf2-sha256\"}"
+      } ],
+      "disableableCredentialTypes" : [ ],
+      "requiredActions" : [ ],
+      "realmRoles" : [ "offline_access", "uma_authorization" ],
+      "clientRoles" : {
+        "account" : [ "view-profile", "manage-account" ]
+      },
+      "groups" : [ "/ClusterManager Group" ]
+    }, {
+      "username" : "bob",
+      "enabled" : true,
+      "totp" : false,
+      "emailVerified" : true,
+      "firstName" : "Bob",
+      "email" : "bob@strimzi.io",
+      "credentials" : [ {
+        "type" : "password",
+        "secretData" : 
"{\"value\":\"QhK0uLsKuBDrMm9Z9XHvq4EungecFRnktPgutfjKtgVv2OTPd8D390RXFvJ8KGvqIF8pdoNxHYQyvDNNwMORpg==\",\"salt\":\"yxkgwEyTnCGLn42Yr9GxBQ==\"}", + "credentialData" : "{\"hashIterations\":27500,\"algorithm\":\"pbkdf2-sha256\"}" + } ], + "disableableCredentialTypes" : [ ], + "requiredActions" : [ ], + "realmRoles" : [ "offline_access", "uma_authorization" ], + "clientRoles" : { + "account" : [ "view-profile", "manage-account" ] + }, + "groups" : [ "/ClusterManager-cluster2 Group" ] + }, + { + "username" : "service-account-team-a-client", + "enabled" : true, + "serviceAccountClientId" : "team-a-client", + "realmRoles" : [ "offline_access", "Dev Team A" ], + "clientRoles" : { + "account" : [ "manage-account", "view-profile" ] + }, + "groups" : [ ] + }, + { + "username" : "service-account-team-b-client", + "enabled" : true, + "serviceAccountClientId" : "team-b-client", + "realmRoles" : [ "offline_access", "Dev Team B" ], + "clientRoles" : { + "account" : [ "manage-account", "view-profile" ] + }, + "groups" : [ ] + } + ], + "clients": [ + { + "clientId": "team-a-client", + "enabled": true, + "clientAuthenticatorType": "client-secret", + "secret": "team-a-client-secret", + "bearerOnly": false, + "consentRequired": false, + "standardFlowEnabled": false, + "implicitFlowEnabled": false, + "directAccessGrantsEnabled": true, + "serviceAccountsEnabled": true, + "publicClient": false, + "fullScopeAllowed": true + }, + { + "clientId": "team-b-client", + "enabled": true, + "clientAuthenticatorType": "client-secret", + "secret": "team-b-client-secret", + "bearerOnly": false, + "consentRequired": false, + "standardFlowEnabled": false, + "implicitFlowEnabled": false, + "directAccessGrantsEnabled": true, + "serviceAccountsEnabled": true, + "publicClient": false, + "fullScopeAllowed": true + }, + { + "clientId": "kafka", + "enabled": true, + "clientAuthenticatorType": "client-secret", + "secret": "kafka-secret", + "bearerOnly": false, + "consentRequired": false, + "standardFlowEnabled": false, + "implicitFlowEnabled": false, + "directAccessGrantsEnabled": true, + "serviceAccountsEnabled": true, + "authorizationServicesEnabled": true, + "publicClient": false, + "fullScopeAllowed": true, + "authorizationSettings": { + "allowRemoteResourceManagement": true, + "policyEnforcementMode": "ENFORCING", + "resources": [ + { + "name": "Topic:a_*", + "type": "Topic", + "ownerManagedAccess": false, + "displayName": "Topics that start with a_", + "attributes": {}, + "uris": [], + "scopes": [ + { + "name": "Create" + }, + { + "name": "Delete" + }, + { + "name": "Describe" + }, + { + "name": "Write" + }, + { + "name": "Read" + }, + { + "name": "Alter" + }, + { + "name": "DescribeConfigs" + }, + { + "name": "AlterConfigs" + } + ] + }, + { + "name": "Group:x_*", + "type": "Group", + "ownerManagedAccess": false, + "displayName": "Consumer groups that start with x_", + "attributes": {}, + "uris": [], + "scopes": [ + { + "name": "Describe" + }, + { + "name": "Delete" + }, + { + "name": "Read" + } + ] + }, + { + "name": "Topic:x_*", + "type": "Topic", + "ownerManagedAccess": false, + "displayName": "Topics that start with x_", + "attributes": {}, + "uris": [], + "scopes": [ + { + "name": "Create" + }, + { + "name": "Describe" + }, + { + "name": "Delete" + }, + { + "name": "Write" + }, + { + "name": "Read" + }, + { + "name": "Alter" + }, + { + "name": "DescribeConfigs" + }, + { + "name": "AlterConfigs" + } + ] + }, + { + "name": "Group:a_*", + "type": "Group", + "ownerManagedAccess": false, + "displayName": "Groups that start with 
a_", + "attributes": {}, + "uris": [], + "scopes": [ + { + "name": "Describe" + }, + { + "name": "Read" + } + ] + }, + { + "name": "Group:*", + "type": "Group", + "ownerManagedAccess": false, + "displayName": "Any group", + "attributes": {}, + "uris": [], + "scopes": [ + { + "name": "Describe" + }, + { + "name": "Read" + }, + { + "name": "DescribeConfigs" + }, + { + "name": "AlterConfigs" + } + ] + }, + { + "name": "Topic:*", + "type": "Topic", + "ownerManagedAccess": false, + "displayName": "Any topic", + "attributes": {}, + "uris": [], + "scopes": [ + { + "name": "Create" + }, + { + "name": "Delete" + }, + { + "name": "Describe" + }, + { + "name": "Write" + }, + { + "name": "Read" + }, + { + "name": "Alter" + }, + { + "name": "DescribeConfigs" + }, + { + "name": "AlterConfigs" + } + ] + }, + { + "name": "kafka-cluster:cluster2,Topic:b_*", + "type": "Topic", + "ownerManagedAccess": false, + "attributes": {}, + "uris": [], + "scopes": [ + { + "name": "Create" + }, + { + "name": "Delete" + }, + { + "name": "Describe" + }, + { + "name": "Write" + }, + { + "name": "Read" + }, + { + "name": "Alter" + }, + { + "name": "DescribeConfigs" + }, + { + "name": "AlterConfigs" + } + ] + }, + { + "name": "kafka-cluster:cluster2,Cluster:*", + "type": "Cluster", + "ownerManagedAccess": false, + "displayName": "Cluster scope on cluster2", + "attributes": {}, + "uris": [], + "scopes": [ + { + "name": "DescribeConfigs" + }, + { + "name": "AlterConfigs" + }, + { + "name": "ClusterAction" + } + ] + }, + { + "name": "kafka-cluster:cluster2,Group:*", + "type": "Group", + "ownerManagedAccess": false, + "displayName": "Any group on cluster2", + "attributes": {}, + "uris": [], + "scopes": [ + { + "name": "Delete" + }, + { + "name": "Describe" + }, + { + "name": "Read" + }, + { + "name": "DescribeConfigs" + }, + { + "name": "AlterConfigs" + } + ] + }, + { + "name": "kafka-cluster:cluster2,Topic:*", + "type": "Topic", + "ownerManagedAccess": false, + "displayName": "Any topic on cluster2", + "attributes": {}, + "uris": [], + "scopes": [ + { + "name": "Create" + }, + { + "name": "Delete" + }, + { + "name": "Describe" + }, + { + "name": "Write" + }, + { + "name": "IdempotentWrite" + }, + { + "name": "Read" + }, + { + "name": "Alter" + }, + { + "name": "DescribeConfigs" + }, + { + "name": "AlterConfigs" + } + ] + }, + { + "name" : "Cluster:*", + "type" : "Cluster", + "ownerManagedAccess" : false, + "attributes" : { }, + "uris" : [ ] + } + ], + "policies": [ + { + "name": "Dev Team A", + "type": "role", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "roles": "[{\"id\":\"Dev Team A\",\"required\":true}]" + } + }, + { + "name": "Default Policy", + "description": "A policy that grants access only for users within this realm", + "type": "js", + "logic": "POSITIVE", + "decisionStrategy": "AFFIRMATIVE", + "config": { + "code": "// by default, grants any permission associated with this policy\n$evaluation.grant();\n" + } + }, + { + "name": "Dev Team B", + "type": "role", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "roles": "[{\"id\":\"Dev Team B\",\"required\":true}]" + } + }, + { + "name": "Ops Team", + "type": "role", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "roles": "[{\"id\":\"Ops Team\",\"required\":true}]" + } + }, + { + "name" : "ClusterManager Group", + "type" : "group", + "logic" : "POSITIVE", + "decisionStrategy" : "UNANIMOUS", + "config" : { + "groups" : "[{\"path\":\"/ClusterManager Group\",\"extendChildren\":false}]" + } + }, 
{ + "name" : "ClusterManager of cluster2 Group", + "type" : "group", + "logic" : "POSITIVE", + "decisionStrategy" : "UNANIMOUS", + "config" : { + "groups" : "[{\"path\":\"/ClusterManager-cluster2 Group\",\"extendChildren\":false}]" + } + }, + { + "name": "Dev Team A owns topics that start with a_ on any cluster", + "type": "resource", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "resources": "[\"Topic:a_*\"]", + "applyPolicies": "[\"Dev Team A\"]" + } + }, + { + "name": "Dev Team A can write to topics that start with x_ on any cluster", + "type": "scope", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "resources": "[\"Topic:x_*\"]", + "scopes": "[\"Describe\",\"Write\"]", + "applyPolicies": "[\"Dev Team A\"]" + } + }, + { + "name": "Dev Team B owns topics that start with b_ on cluster cluster2", + "type": "resource", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "resources": "[\"kafka-cluster:cluster2,Topic:b_*\"]", + "applyPolicies": "[\"Dev Team B\"]" + } + }, + { + "name": "Dev Team B can read from topics that start with x_ on any cluster", + "type": "scope", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "resources": "[\"Topic:x_*\"]", + "scopes": "[\"Describe\",\"Read\"]", + "applyPolicies": "[\"Dev Team B\"]" + } + }, + { + "name": "Dev Team B can update consumer group offsets that start with x_ on any cluster", + "type": "scope", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "resources": "[\"Group:x_*\"]", + "scopes": "[\"Describe\",\"Read\"]", + "applyPolicies": "[\"Dev Team B\"]" + } + }, + { + "name": "Dev Team A can use consumer groups that start with a_ on any cluster", + "type": "resource", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "resources": "[\"Group:a_*\"]", + "applyPolicies": "[\"Dev Team A\"]" + } + }, + { + "name": "ClusterManager of cluster2 Group has full access to topics on cluster2", + "type": "resource", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "resources": "[\"kafka-cluster:cluster2,Topic:*\"]", + "applyPolicies": "[\"ClusterManager of cluster2 Group\"]" + } + }, + { + "name": "ClusterManager of cluster2 Group has full access to consumer groups on cluster2", + "type": "resource", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "resources": "[\"kafka-cluster:cluster2,Group:*\"]", + "applyPolicies": "[\"ClusterManager of cluster2 Group\"]" + } + }, + { + "name": "ClusterManager of cluster2 Group has full access to cluster config on cluster2", + "type": "resource", + "logic": "POSITIVE", + "decisionStrategy": "UNANIMOUS", + "config": { + "resources": "[\"kafka-cluster:cluster2,Cluster:*\"]", + "applyPolicies": "[\"ClusterManager of cluster2 Group\"]" + } + }, { + "name" : "ClusterManager Group has full access to manage and affect groups", + "type" : "resource", + "logic" : "POSITIVE", + "decisionStrategy" : "UNANIMOUS", + "config" : { + "resources" : "[\"Group:*\"]", + "applyPolicies" : "[\"ClusterManager Group\"]" + } + }, { + "name" : "ClusterManager Group has full access to manage and affect topics", + "type" : "resource", + "logic" : "POSITIVE", + "decisionStrategy" : "UNANIMOUS", + "config" : { + "resources" : "[\"Topic:*\"]", + "applyPolicies" : "[\"ClusterManager Group\"]" + } + }, { + "name" : "ClusterManager Group has full access to cluster config", + "type" : "resource", + "logic" : "POSITIVE", + "decisionStrategy" : "UNANIMOUS", 
+ "config" : { + "resources" : "[\"Cluster:*\"]", + "applyPolicies" : "[\"ClusterManager Group\"]" + } + } + ], + "scopes": [ + { + "name": "Create" + }, + { + "name": "Read" + }, + { + "name": "Write" + }, + { + "name": "Delete" + }, + { + "name": "Alter" + }, + { + "name": "Describe" + }, + { + "name": "ClusterAction" + }, + { + "name": "DescribeConfigs" + }, + { + "name": "AlterConfigs" + }, + { + "name": "IdempotentWrite" + } + ], + "decisionStrategy": "AFFIRMATIVE" + } + }, + { + "clientId": "kafka-cli", + "enabled": true, + "clientAuthenticatorType": "client-secret", + "secret": "kafka-cli-secret", + "bearerOnly": false, + "consentRequired": false, + "standardFlowEnabled": false, + "implicitFlowEnabled": false, + "directAccessGrantsEnabled": true, + "serviceAccountsEnabled": false, + "publicClient": true, + "fullScopeAllowed": true + } + ] +} \ No newline at end of file diff --git a/testsuite/pom.xml b/testsuite/pom.xml index a553171e..77dd49e4 100644 --- a/testsuite/pom.xml +++ b/testsuite/pom.xml @@ -17,6 +17,7 @@ refresh-token-jwt-keycloak-test access-token-introspection-hydra-test client-secret-jwt-hydra-test + client-secret-jwt-keycloak-authz-test @@ -77,6 +78,10 @@ io.strimzi kafka-oauth-common + + io.strimzi + kafka-oauth-keycloak-authorizer + org.keycloak keycloak-core diff --git a/testsuite/refresh-token-jwt-keycloak-test/docker-compose.yml b/testsuite/refresh-token-jwt-keycloak-test/docker-compose.yml index 20bde40a..4c6b626d 100644 --- a/testsuite/refresh-token-jwt-keycloak-test/docker-compose.yml +++ b/testsuite/refresh-token-jwt-keycloak-test/docker-compose.yml @@ -24,7 +24,7 @@ services: - KEYCLOAK_PASSWORD=admin - KEYCLOAK_HTTPS_PORT=8443 - PROXY_ADDRESS_FORWARDING=true - - KEYCLOAK_IMPORT=/opt/jboss/keycloak/realms/demo.json + - KEYCLOAK_IMPORT=/opt/jboss/keycloak/realms/demo-realm.json kafka: image: strimzi/kafka:latest-kafka-2.3.0