[major] Support Strimzi as default Kafka provider #451

Merged: 52 commits merged into master from strimzi on Sep 12, 2023
Changes from 49 commits

Commits
dead0a3 [minor] build ansible with strimzi (andrercm, Aug 30, 2023)
24ea8f4 [patch] rebuild ansible devops (andrercm, Aug 31, 2023)
20a541a [patch] set strimzi as default (andrercm, Aug 31, 2023)
28fe788 [patch] check if amq is installed prior strimzi (andrercm, Aug 31, 2023)
64b642c [patch] add kafka function (andrercm, Aug 31, 2023)
5d10ce6 [patch] add kafka arguments (andrercm, Aug 31, 2023)
41efd3f [patch] fix install help (andrercm, Aug 31, 2023)
b3e76ab [patch] fix enclosing if (andrercm, Aug 31, 2023)
b912dda [patch] fix for testing (andrercm, Sep 1, 2023)
607007e [patch] standard debug changes (andrercm, Sep 1, 2023)
cc9bf55 [patch] minor adjustment (andrercm, Sep 1, 2023)
075c931 [patch] minor adjustments (andrercm, Sep 1, 2023)
e82019c [patch] fix kafka selection (andrercm, Sep 1, 2023)
cdc1264 [patch] assert amq not installed (andrercm, Sep 1, 2023)
b8ccc34 [patch] remove gencfg for kafka (andrercm, Sep 1, 2023)
39f885e [patch] remove misplaced piece of code (andrercm, Sep 1, 2023)
ece49d6 [patch] fix kafka config file logic (andrercm, Sep 1, 2023)
d8014ed [patch] add kafka_action_system (andrercm, Sep 1, 2023)
6074ca2 [patch] fix kafka_action_system value (andrercm, Sep 2, 2023)
812d484 [patch] fix logic kafka_action_system (andrercm, Sep 2, 2023)
cc98879 [patch] exit if kafka_cfg_file not set (andrercm, Sep 2, 2023)
6a86508 [patch] simplify kafka_cfg_file logic (andrercm, Sep 2, 2023)
7748725 [patch] fix kafka args (andrercm, Sep 2, 2023)
0feab9e [patch] fix install for kafka args (andrercm, Sep 2, 2023)
a95e6f6 [patch] fix show_config (andrercm, Sep 2, 2023)
519e548 [patch] replace eventstreams default vars (andrercm, Sep 2, 2023)
b909869 [patch] fix EVENTSTREAMS_RETENTION (andrercm, Sep 2, 2023)
a843631 [patch] i think i got it now (andrercm, Sep 2, 2023)
5a8a6f8 [patch] ykw? remove retention as not used in iot (andrercm, Sep 2, 2023)
adef2a5 [patch] fix Instance location (andrercm, Sep 2, 2023)
6afa23f [patch] add support for msk (andrercm, Sep 4, 2023)
b0b6840 [patch] rebuild ansible-devops (andrercm, Sep 4, 2023)
f648993 [patch] fix defaults msk kafka version (andrercm, Sep 4, 2023)
2a7bb7a [patch] add msk in show config (andrercm, Sep 4, 2023)
8ba9335 [patch] remove dup kafka_cluster_name (andrercm, Sep 4, 2023)
5e5b5f8 [patch] remove dup kafka_cluster_name 2 (andrercm, Sep 4, 2023)
6347dcd [patch] fix bad copy paste (andrercm, Sep 4, 2023)
402e5de [patch] fix save config (andrercm, Sep 4, 2023)
4ebe1cf [patch] add debug (andrercm, Sep 4, 2023)
5c75882 [patch] fix params (andrercm, Sep 4, 2023)
8e4c247 [patch] fix default values (andrercm, Sep 4, 2023)
024d6a4 [patch] rebuild ansible (andrercm, Sep 4, 2023)
878cee1 [patch] define kafka arguments for msk (andrercm, Sep 5, 2023)
a2e38bd Merge branch 'master' of github.com:ibm-mas/cli into strimzi (andrercm, Sep 5, 2023)
6d1cd41 [patch] change wording (andrercm, Sep 5, 2023)
0b040bf [patch] remove tar.gz (andrercm, Sep 5, 2023)
b043371 [patch] simplify logic (andrercm, Sep 5, 2023)
7be1309 [patch] fix typo (andrercm, Sep 5, 2023)
dd05637 [patch] include msk in the supported kafka text (andrercm, Sep 5, 2023)
4155b13 [major] changes due code review (andrercm, Sep 6, 2023)
da4c4d5 Merge branch 'master' into strimzi (andrercm, Sep 11, 2023)
8412743 Merge branch 'master' into strimzi (durera, Sep 12, 2023)
27 changes: 27 additions & 0 deletions docs/commands/install.md
@@ -49,6 +49,33 @@ Usage
### IBM Cloud Pak for Data (Required when installing Predict or Assist):
- `--cp4d-version CP4D_VERSION` Product version of CP4D to use

### Kafka - Common Arguments (Optional, but required to install Maximo IoT):
- `--kafka-provider KAFKA_PROVIDER` Required. Set Kafka provider. Supported options are `redhat` (Red Hat AMQ Streams), `strimzi`, `ibm` (IBM Cloud Event Streams) and `aws` (AWS MSK)
- `--kafka-namespace KAFKA_NAMESPACE` Optional. Set Kafka namespace. Only applicable if installing `redhat` (Red Hat AMQ Streams) or `strimzi`
- `--kafka-cluster-name KAFKA_CLUSTER_NAME` Optional. Set Kafka cluster name. Only applicable if installing `redhat` (Red Hat AMQ Streams), `strimzi` or `aws` (AWS MSK)
- `--kafka-username KAFKA_USER_NAME` Required. Set Kafka instance username. Only applicable if installing `redhat` (Red Hat AMQ Streams), `strimzi` or `aws` (AWS MSK)
- `--kafka-password KAFKA_USER_PASSWORD` Required. Set Kafka instance password. Only applicable if installing `redhat` (Red Hat AMQ Streams), `strimzi` or `aws` (AWS MSK)
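A hypothetical non-interactive install that provisions Strimzi might pass the common arguments like this (all values shown are placeholders, and the other required `mas install` arguments are omitted for brevity):

```bash
# Illustrative sketch only: namespace, cluster name, and credentials are placeholders
mas install \
  --kafka-provider strimzi \
  --kafka-namespace strimzi \
  --kafka-cluster-name maskafka \
  --kafka-username masuser \
  --kafka-password "$KAFKA_PASSWORD"
```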

### Kafka - AWS MSK:
- `--aws-region AWS_REGION` Required. Set target AWS region for the MSK instance
- `--aws-access-key-id AWS_ACCESS_KEY_ID` Required. Set AWS access key ID for the target AWS account
- `--aws-secret-access-key AWS_SECRET_ACCESS_KEY` Required. Set AWS secret access key for the target AWS account
- `--aws-vpc-id VPC_ID` Required. Set target Virtual Private Cloud ID for the MSK instance
- `--msk-instance-type AWS_MSK_INSTANCE_TYPE` Optional. Set the MSK instance type
- `--msk-instance-nodes AWS_MSK_INSTANCE_NUMBER` Optional. Set total number of MSK instance nodes
- `--msk-instance-volume-size AWS_MSK_VOLUME_SIZE` Optional. Set storage/volume size for the MSK instance
- `--msk-cidr-az1 AWS_MSK_CIDR_AZ1` Required. Set the CIDR subnet for availability zone 1 for the MSK instance
- `--msk-cidr-az2 AWS_MSK_CIDR_AZ2` Required. Set the CIDR subnet for availability zone 2 for the MSK instance
- `--msk-cidr-az3 AWS_MSK_CIDR_AZ3` Required. Set the CIDR subnet for availability zone 3 for the MSK instance
- `--msk-cidr-ingress AWS_MSK_INGRESS_CIDR` Required. Set the CIDR for ingress connectivity
- `--msk-cidr-egress AWS_MSK_EGRESS_CIDR` Required. Set the CIDR for egress connectivity
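Combined with the common arguments above, a hypothetical MSK-backed install might look like this (the VPC ID, CIDR ranges, and credentials are placeholders for your own account details):

```bash
# Illustrative sketch only: substitute your own AWS account and network values
mas install \
  --kafka-provider aws \
  --kafka-username masuser \
  --kafka-password "$MSK_PASSWORD" \
  --aws-region us-east-1 \
  --aws-access-key-id "$AWS_ACCESS_KEY_ID" \
  --aws-secret-access-key "$AWS_SECRET_ACCESS_KEY" \
  --aws-vpc-id vpc-0123456789abcdef0 \
  --msk-cidr-az1 10.0.0.0/24 \
  --msk-cidr-az2 10.0.1.0/24 \
  --msk-cidr-az3 10.0.2.0/24 \
  --msk-cidr-ingress 10.0.0.0/16 \
  --msk-cidr-egress 10.0.0.0/16
```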

### Kafka - IBM Cloud Event Streams:
- `--ibmcloud-apikey IBMCLOUD_APIKEY` Required. Set IBM Cloud API Key.
- `--eventstreams-resource-group EVENTSTREAMS_RESOURCEGROUP` Optional. Set IBM Cloud resource group to target the Event Streams instance provisioning.
- `--eventstreams-instance-name EVENTSTREAMS_NAME` Optional. Set IBM Event Streams instance name.
- `--eventstreams-instance-location EVENTSTREAMS_LOCATION` Optional. Set IBM Event Streams instance location.
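And a hypothetical IBM Cloud Event Streams install (the resource group and location shown match the interactive flow's defaults; the instance name is a placeholder):

```bash
# Illustrative sketch only: requires a valid IBM Cloud API key
mas install \
  --kafka-provider ibm \
  --ibmcloud-apikey "$IBMCLOUD_APIKEY" \
  --eventstreams-resource-group Default \
  --eventstreams-instance-name event-streams-inst1 \
  --eventstreams-instance-location us-east
```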

### IBM Db2 (Optional, required to use IBM Db2 Universal Operator):
- `--db2u-channel DB2_CHANNEL` Subscription channel for Db2u (e.g. v110508.0)
- `--db2u-system` Install a shared Db2u instance for MAS (required by IoT & Monitor, supported by Manage)
Expand Down
107 changes: 107 additions & 0 deletions image/cli/mascli/functions/install
@@ -51,6 +51,18 @@ Maximo Application Suite Application Selection (Optional):
IBM Cloud Pak for Data (Required when installing Predict or Assist):
--cp4d-version ${COLOR_YELLOW}CP4D_VERSION${TEXT_RESET} Product version of CP4D to use

Kafka (Required to install Maximo IoT; if not set, a default Kafka provider will be installed):
  --kafka-provider ${COLOR_YELLOW}KAFKA_PROVIDER${TEXT_RESET} Set Kafka provider. Supported options are 'redhat' (Red Hat AMQ Streams), 'strimzi', 'ibm' (IBM Cloud Event Streams) and 'aws' (AWS MSK)

Kafka (Optional, applicable for Strimzi and Red Hat AMQ Streams only):
--kafka-namespace ${COLOR_YELLOW}KAFKA_NAMESPACE${TEXT_RESET} Set Strimzi and Red Hat AMQ Streams namespace

Kafka (Required for IBM Cloud Event Streams only):
--ibmcloud-apikey ${COLOR_YELLOW}IBMCLOUD_APIKEY${TEXT_RESET} Set IBM Cloud API Key. Required to provision IBM Cloud services
--eventstreams-resource-group ${COLOR_YELLOW}EVENTSTREAMS_RESOURCEGROUP${TEXT_RESET} Set IBM Cloud resource group to target the Event Streams instance provisioning
--eventstreams-instance-name ${COLOR_YELLOW}EVENTSTREAMS_NAME${TEXT_RESET} Set IBM Event Streams instance name
--eventstreams-instance-location ${COLOR_YELLOW}EVENTSTREAMS_LOCATION${TEXT_RESET} Set IBM Event Streams instance location

IBM Db2 (Optional, required to use IBM Db2 Universal Operator):
--db2u-channel ${COLOR_YELLOW}DB2_CHANNEL${TEXT_RESET} Subscription channel for Db2u (e.g. v110508.0)
--db2u-system Install a shared Db2u instance for MAS (required by IoT & Monitor, supported by Manage)
@@ -82,6 +94,8 @@ Advanced Db2u Universal Operator Configuration - Storage (Optional):
Advanced MongoDB Configuration (Optional):
--mongodb-namespace ${COLOR_YELLOW}MONGODB_NAMESPACE${TEXT_RESET} Change namespace where MongoCE operator and instance will be created

Cloud Provider Commands:
--ibmcloud-apikey ${COLOR_YELLOW}IBMCLOUD_APIKEY${TEXT_RESET} Set IBM Cloud API Key (Required to provision IBM Cloud services)

Other Commands:
--dev-mode Enable developer mode (e.g. for access to pre-release builds)
@@ -101,6 +115,7 @@ function install_noninteractive() {
# Defaults
DB2_ACTION_SYSTEM=none
DB2_ACTION_MANAGE=none
KAFKA_ACTION_SYSTEM=none
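# KAFKA_ACTION_SYSTEM is switched to 'install' by --kafka-provider below; the
# interactive config flow can also set it to 'byo' (bring your own configuration)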

LOCAL_MAS_CONFIG_DIR=""
LOCAL_MAS_CONFIG_DIR_ALREADY_CHOSEN=yes
@@ -255,6 +270,75 @@ function install_noninteractive() {
export DB2_TEMP_STORAGE_SIZE=$1 && shift
;;

# Dependencies - Kafka common arguments
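# Note: each option below consumes its value from $1 and then shifts it off the argument list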
--kafka-provider)
export KAFKA_ACTION_SYSTEM=install
export KAFKA_PROVIDER=$1 && shift
;;

--kafka-cluster-name)
export KAFKA_CLUSTER_NAME=$1 && shift
;;

--kafka-username)
export AWS_KAFKA_USER_NAME=$1
export KAFKA_USER_NAME=$1 && shift
;;

--kafka-password)
export AWS_KAFKA_USER_PASSWORD=$1
export KAFKA_USER_PASSWORD=$1 && shift
;;

# Dependencies - Kafka (AMQ & Strimzi)
--kafka-namespace)
export KAFKA_NAMESPACE=$1 && shift
;;

# Dependencies - Kafka (AWS MSK)
--msk-instance-type)
export AWS_MSK_INSTANCE_TYPE=$1 && shift
;;

--msk-instance-nodes)
export AWS_MSK_INSTANCE_NUMBER=$1 && shift
;;

--msk-instance-volume-size)
export AWS_MSK_VOLUME_SIZE=$1 && shift
;;

--msk-cidr-az1)
export AWS_MSK_CIDR_AZ1=$1 && shift
;;

--msk-cidr-az2)
export AWS_MSK_CIDR_AZ2=$1 && shift
;;

--msk-cidr-az3)
export AWS_MSK_CIDR_AZ3=$1 && shift
;;

--msk-cidr-ingress)
export AWS_MSK_INGRESS_CIDR=$1 && shift
;;

--msk-cidr-egress)
export AWS_MSK_EGRESS_CIDR=$1 && shift
;;

# Dependencies - Kafka (IBM Cloud Event Streams)
--eventstreams-resource-group)
export EVENTSTREAMS_RESOURCEGROUP=$1 && shift
;;
--eventstreams-instance-name)
export EVENTSTREAMS_NAME=$1 && shift
;;
--eventstreams-instance-location)
export EVENTSTREAMS_LOCATION=$1 && shift
;;

# Licensing & Entitlement
--ibm-entitlement-key)
export IBM_ENTITLEMENT_KEY=$1 && shift
@@ -333,6 +417,29 @@ function install_noninteractive() {
export MAS_APP_SETTINGS_OVERRIDE_ENCRYPTION_SECRETS_FLAG=true
;;

# Cloud Provider Commands - IBM Cloud
--ibmcloud-apikey)
export IBMCLOUD_APIKEY=$1 && shift
;;

# Cloud Provider Commands - AWS
--aws-region)
export AWS_REGION=$1 && shift
;;

--aws-access-key-id)
export AWS_ACCESS_KEY_ID=$1 && shift
;;

--aws-secret-access-key)
export AWS_SECRET_ACCESS_KEY=$1 && shift
;;

--aws-vpc-id)
export VPC_ID=$1 && shift
;;


# Other Commands
--dev-mode)
DEV_MODE=true
2 changes: 1 addition & 1 deletion image/cli/mascli/functions/mirror_to_registry
@@ -366,7 +366,7 @@ function mirror_to_registry_interactive() {

echo
echo_h2 "Configure Authentication"
prompt_for_input "IBM Entitlement Key" IBM_ENTITLEMENT_KEY
prompt_for_secret "IBM Entitlement Key" IBM_ENTITLEMENT_KEY

if [[ $MIRROR_UDS == "true" ]]; then
prompt_for_input "Red Hat Connect Username" REDHAT_CONNECT_USERNAME
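This change, together with the matching ones in pipeline_config, pipeline_config_dns, and pipeline_config_kafka below, switches sensitive prompts from prompt_for_input to prompt_for_secret. The helper itself is defined in the CLI's shared functions; judging only from the call sites, a minimal sketch of the pattern (the re-use behavior and exact handling here are assumptions) could look like:

```bash
# Sketch only: the real prompt_for_secret lives elsewhere in the CLI.
# Assumed signature, inferred from call sites: label, variable name, optional re-use prompt.
function prompt_for_secret() {
  local label=$1 varname=$2 reuse_prompt=$3
  # If a value is already set (e.g. loaded from a saved config), offer to re-use it
  if [[ -n "${!varname}" && -n "$reuse_prompt" ]]; then
    read -r -p "$reuse_prompt [Y/n] " reuse
    if [[ "$reuse" != "n" && "$reuse" != "N" ]]; then
      return 0
    fi
  fi
  # -s suppresses echo so the secret is never displayed in the terminal
  read -r -s -p "$label: " value
  echo
  export "$varname"="$value"
}
```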
4 changes: 3 additions & 1 deletion image/cli/mascli/functions/pipeline_config
@@ -3,6 +3,7 @@
. $DIR/functions/pipeline_config_additional_configs
. $DIR/functions/pipeline_config_applications
. $DIR/functions/pipeline_config_db2
. $DIR/functions/pipeline_config_kafka
. $DIR/functions/pipeline_config_dns
. $DIR/functions/pipeline_config_sno
. $DIR/functions/pipeline_config_storage_classes
@@ -137,6 +138,7 @@ function pipeline_config() {
config_pipeline_dns
config_pipeline_applications
config_pipeline_db2
config_pipeline_kafka
config_pipeline_turbonomic
config_pipeline_additional_configs

@@ -158,7 +160,7 @@

echo
echo_h2 "Configure IBM Container Registry"
prompt_for_input "IBM Entitlement Key" IBM_ENTITLEMENT_KEY $IBM_ENTITLEMENT_KEY
prompt_for_secret "IBM Entitlement Key" IBM_ENTITLEMENT_KEY $IBM_ENTITLEMENT_KEY

echo
echo_h2 "Configure IBM Container Registry (MAS)"
8 changes: 8 additions & 0 deletions image/cli/mascli/functions/pipeline_config_advanced
@@ -29,6 +29,14 @@ function config_pipeline_advanced() {
if [[ "$DB2_ACTION_SYSTEM" == "install" || "$DB2_ACTION_MANAGE" == "install" ]]; then
prompt_for_input "+ Db2 Namespace" DB2_NAMESPACE "db2u"
fi
if [[ "$KAFKA_ACTION_SYSTEM" == "install" && ("$KAFKA_PROVIDER" == "strimzi" || "$KAFKA_PROVIDER" == "redhat") ]]; then
if [[ "$KAFKA_PROVIDER" == "strimzi" ]]; then
KAFKA_NAMESPACE="strimzi"
else
KAFKA_NAMESPACE="amq-streams"
fi
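# Pre-seed a provider-specific default; the prompt below lets the user override it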
prompt_for_input "+ Kafka Namespace" KAFKA_NAMESPACE $KAFKA_NAMESPACE
fi
if [[ "$CLUSTER_MONITORING_INCLUDE_GRAFANA" == "True" ]]; then
prompt_for_input "+ Grafana Namespace" GRAFANA_NAMESPACE "grafana"
fi
4 changes: 2 additions & 2 deletions image/cli/mascli/functions/pipeline_config_dns
@@ -78,8 +78,8 @@ function config_pipeline_dns() {
echo "Provide your AWS account access key ID & secret access key."
echo "This will be used to authenticate into the AWS account where your AWS Route 53 hosted zone instance is located."
echo ""
prompt_for_input "AWS access key ID" AWS_ACCESS_KEY_ID && export AWS_ACCESS_KEY_ID
prompt_for_input "AWS secret access key" AWS_SECRET_ACCESS_KEY && export AWS_SECRET_ACCESS_KEY
prompt_for_secret "AWS Access Key ID" AWS_ACCESS_KEY_ID "Re-use saved AWS Access Key ID?"
prompt_for_secret "AWS Secret Access Key" AWS_SECRET_ACCESS_KEY "Re-use saved AWS Secret Access Key?"
echo ""
echo "Provide your AWS Route 53 hosted zone instance details."
echo "This information will be used to create webhook resources between your cluster and your AWS Route 53 instance (cluster issuer and cname records)"
141 changes: 141 additions & 0 deletions image/cli/mascli/functions/pipeline_config_kafka
Original file line number Diff line number Diff line change
@@ -0,0 +1,141 @@
#!/bin/bash

# Do we need to set up an IoT kafka?
# -------------------------------------------------------------------------
function kafka_for_iot() {
if [ "$MAS_APP_CHANNEL_IOT" != "" ]; then
# Set up system kafka - using providers or external source
echo
echo_h3 "Kafka configuration for Maximo IoT"
echo "${TEXT_DIM}Maximo IoT requires a shared system-scope Kafka instance."
echo " - Supported Kafka providers: Strimzi, Red Hat AMQ Streams, IBM Cloud Event Streams and AWS MSK."
echo ""
reset_colors

if prompt_for_confirm_default_yes "Create system Kafka instance using one of the supported providers?"; then
KAFKA_ACTION_SYSTEM=install
echo
echo -e "${COLOR_YELLOW}Select the Kafka provider to be installed:"
echo
echo " 1) Strimzi (opensource)"
echo " 2) Red Hat AMQ Streams (requires a separate license)"
echo " 3) IBM Cloud Event Streams (paid IBM Cloud service)"
echo " 4) AWS MSK (paid AWS service)"
reset_colors
echo
prompt_for_input "Kafka provider" KAFKA_SELECTION "1"
echo

case $KAFKA_SELECTION in
1)
KAFKA_PROVIDER="strimzi"
;;
2)
KAFKA_PROVIDER=redhat
;;
3)
# kafka defaults - event streams: apply only when the user has not already provided a value
if [ -z "$EVENTSTREAMS_RESOURCEGROUP" ]; then
EVENTSTREAMS_RESOURCEGROUP=Default
fi
if [ -z "$EVENTSTREAMS_NAME" ]; then
EVENTSTREAMS_NAME=event-streams-$MAS_INSTANCE_ID
fi
if [ -z "$EVENTSTREAMS_LOCATION" ]; then
EVENTSTREAMS_LOCATION=us-east
fi

KAFKA_PROVIDER="ibm"
prompt_for_secret "IBM Cloud API Key" IBMCLOUD_APIKEY "Re-use saved IBM Cloud API Key?"
prompt_for_input "IBM Event Streams resource group" EVENTSTREAMS_RESOURCEGROUP $EVENTSTREAMS_RESOURCEGROUP
prompt_for_input "IBM Event Streams instance name" EVENTSTREAMS_NAME $EVENTSTREAMS_NAME
prompt_for_input "IBM Event Streams location" EVENTSTREAMS_LOCATION $EVENTSTREAMS_LOCATION
;;
4)
# kafka defaults - aws msk: apply only when the user has not already provided a value
if [ -z "$KAFKA_CLUSTER_NAME" ]; then
KAFKA_CLUSTER_NAME=aws-msk-$MAS_INSTANCE_ID
fi
if [ -z "$AWS_KAFKA_USER_NAME" ]; then
AWS_KAFKA_USER_NAME=masuser
fi
if [ -z "$AWS_MSK_INSTANCE_TYPE" ]; then
AWS_MSK_INSTANCE_TYPE=kafka.m5.large
fi
if [ -z "$AWS_MSK_VOLUME_SIZE" ]; then
AWS_MSK_VOLUME_SIZE=100
fi
if [ -z "$AWS_MSK_INSTANCE_NUMBER" ]; then
AWS_MSK_INSTANCE_NUMBER=3
fi
if [ -z "$AWS_REGION" ]; then
AWS_REGION=us-east-1
fi

KAFKA_PROVIDER="aws"
echo "${TEXT_DIM}"
echo "While provisioning the AWS MSK instance, you will be required to provide the AWS Virtual Private Cloud ID and subnet details"
echo "where your instance will be deployed to properly configure inbound and outbound connectivity."
echo "You should be able to find these information inside your VPC and subnet configurations in the target AWS account."
echo "For more details about AWS subnet/CIDR configuration, refer: https://docs.aws.amazon.com/vpc/latest/userguide/subnet-sizing.html"
echo ""
reset_colors
prompt_for_secret "AWS Access Key ID" AWS_ACCESS_KEY_ID "Re-use saved AWS Access Key ID?"
prompt_for_secret "AWS Secret Access Key" AWS_SECRET_ACCESS_KEY "Re-use saved AWS Secret Access Key?"
prompt_for_input "AWS Region" AWS_REGION $AWS_REGION
prompt_for_input "Virtual Private Cloud (VPC) ID" VPC_ID $VPC_ID
prompt_for_input "MSK Cluster Name" KAFKA_CLUSTER_NAME $KAFKA_CLUSTER_NAME
prompt_for_input "MSK Instance Username" AWS_KAFKA_USER_NAME $AWS_KAFKA_USER_NAME
prompt_for_secret "MSK Instance Password" AWS_KAFKA_USER_PASSWORD "Re-use saved MSK Instance Password?"
prompt_for_input "MSK Instance Type" AWS_MSK_INSTANCE_TYPE $AWS_MSK_INSTANCE_TYPE
prompt_for_input "MSK Total Number of Broker Nodes" AWS_MSK_INSTANCE_NUMBER $AWS_MSK_INSTANCE_NUMBER
prompt_for_input "MSK Storage Size (in GB)" AWS_MSK_VOLUME_SIZE $AWS_MSK_VOLUME_SIZE
prompt_for_input "Availability Zone 1 CIDR" AWS_MSK_CIDR_AZ1 $AWS_MSK_CIDR_AZ1
prompt_for_input "Availability Zone 2 CIDR" AWS_MSK_CIDR_AZ2 $AWS_MSK_CIDR_AZ2
prompt_for_input "Availability Zone 3 CIDR" AWS_MSK_CIDR_AZ3 $AWS_MSK_CIDR_AZ3
prompt_for_input "Ingress CIDR" AWS_MSK_INGRESS_CIDR $AWS_MSK_INGRESS_CIDR
prompt_for_input "Egress CIDR" AWS_MSK_EGRESS_CIDR $AWS_MSK_EGRESS_CIDR
;;
*)
echo_warning "Invalid selection"
exit 1
;;
esac

else
KAFKA_ACTION_SYSTEM=byo

select_local_config_dir

# Check if a configuration already exists
kafka_cfg_file=$LOCAL_MAS_CONFIG_DIR/kafka-$MAS_INSTANCE_ID-system.yaml
echo
if [ ! -e "$kafka_cfg_file" ]; then
echo_warning "Error: Kafka configuration file does not exist: '$kafka_cfg_file'"
echo_warning "In order to continue, provide an existing Kafka configuration file ($kafka_cfg_file) or choose one of the supported Kafka providers to be installed."
exit 1
else
echo "Provided Kafka configuration file '$kafka_cfg_file' will be applied."
fi
fi
else
# We don't need a system kafka, IoT is not being installed
KAFKA_ACTION_SYSTEM=none
fi
}

function config_pipeline_kafka() {
echo
echo_h2 "Configure Kafka"
echo "${TEXT_DIM}The installer can setup one Kafka provider instance (in your OpenShift cluster or in an IBM Cloud account) for the use of applications that require a Kafka configuration (e.g IoT) or you may choose to configure MAS to use an existing Kafka instance."
echo
reset_colors

# Unless we are installing IoT we have nothing to do
if [[ "$MAS_APP_CHANNEL_IOT" != "" ]]; then
kafka_for_iot
else
echo_highlight "No applications have been selected that require a Kafka installation"
KAFKA_ACTION_SYSTEM=none
fi
}