Automated installation of HPE Ezmeral Container Platform and MLOps on various platforms (available for AWS and Azure) for demo purposes.

You need a container runtime to run the tool. It should work on any container runtime and has been tested on Docker. Podman does not work if you try to map volumes (it should work fine without the mounts).

Download the start script, or copy and paste the script below to start the container.
```bash
#!/usr/bin/env bash
VOLUMES=()
CONFIG_FILES=("aws_config.json" "azure_config.json" "vmware_config.json" "kvm_config.json" "ovirt_config.json")
# mount each provider config file that exists into the container
for file in "${CONFIG_FILES[@]}"; do
  target="${file%_*}"
  [[ -f "./${file}" ]] && VOLUMES+=("$(pwd)/${file}:/app/server/${target}/config.json:rw")
done
# create an empty user.settings if missing, then mount it
[[ ! -f "./user.settings" ]] && echo "{}" > ./user.settings
VOLUMES+=("$(pwd)/user.settings:/app/server/user.settings:rw")
printf -v joined ' -v %s' "${VOLUMES[@]}"
# run in the background: web service exposed at 4000, Data Fabric (MapR) Grafana at 3000, MCS at 8443, MCS installer at 9443
docker run --name ezdemo --pull always -d -p 3000:3000 -p 4000:4000 -p 8443:8443 -p 9443:9443 ${joined} erdincka/ezdemo:latest
```
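Two bash idioms in the script are worth noting: "${file%_*}" strips everything from the last underscore onward (so "aws_config.json" becomes "aws"), and "printf -v joined" flattens the VOLUMES array into one string of repeated -v flags for docker. A quick standalone illustration (the /tmp path is just an example value):

```bash
# "${file%_*}" removes the shortest suffix matching "_*"
file="aws_config.json"
target="${file%_*}"
echo "$target"    # -> aws

# printf -v builds the repeated -v flags passed to docker run
VOLUMES=("/tmp/aws_config.json:/app/server/aws/config.json:rw")
printf -v joined ' -v %s' "${VOLUMES[@]}"
echo "docker run${joined} ..."
```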
Create your user settings in a separate file named "user.settings", in the following format:
```json
{
  "project_id": "",
  "user": "",
  "admin_password": "ChangeMe!",
  "is_mlops": false,
  "is_mapr": false,
  "is_gpu": false,
  "is_ha": false,
  "is_runtime": true,
  "is_verbose": true,
  "install_ad": true
}
```
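You can also generate and sanity-check this file from the shell before starting the container; a minimal sketch (the values are the defaults shown above, and python3 is used here only as a JSON validator):

```bash
# write a minimal user.settings next to the start script
cat > user.settings <<'EOF'
{
  "project_id": "",
  "user": "",
  "admin_password": "ChangeMe!",
  "is_mlops": false,
  "is_mapr": false,
  "is_gpu": false,
  "is_ha": false,
  "is_runtime": true,
  "is_verbose": true,
  "install_ad": true
}
EOF
# fail early if the file is not valid JSON
python3 -m json.tool user.settings > /dev/null && echo "user.settings is valid JSON"
```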
Create "aws_config.json" or "azure_config.json" in the same folder with your settings and credentials. Templates are provided below.

AWS template:
```json
{
  "aws_access_key": "",
  "aws_secret_key": "",
  "region": ""
}
```
Azure template:
```json
{
  "az_subscription": "",
  "az_appId": "",
  "az_password": "",
  "az_tenant": "",
  "region": ""
}
```
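Since a missing or misspelled key only surfaces later in the run, it can help to verify the config file before starting; a small sketch using python3 (the required key names are taken from the Azure template above, and the file content is a throwaway example):

```bash
# example: create a config from the template, then check it has the expected keys
cat > azure_config.json <<'EOF'
{"az_subscription": "", "az_appId": "", "az_password": "", "az_tenant": "", "region": "uksouth"}
EOF
python3 - <<'EOF'
import json
required = {"az_subscription", "az_appId", "az_password", "az_tenant", "region"}
missing = required - set(json.load(open("azure_config.json")))
print("missing keys:", ", ".join(sorted(missing)) if missing else "none")
EOF
```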
Once the container starts, you can either use the web UI at http://localhost:4000/ or run the scripts manually within the container.

Exec into the container and use the scripts provided:

```bash
docker exec -it "$(docker ps -f "status=running" -f "ancestor=erdincka/ezdemo" -q)" /bin/bash
./00-run_all.sh aws|azure|vmware|kvm
```
If a script fails at any stage, or if you wish to update your environment, you can restart the process from whichever step is needed:

```bash
./01-init.sh aws|azure|vmware|kvm
./02-apply.sh aws|azure|vmware|kvm
./03-install.sh aws|azure|vmware|kvm
./04-configure.sh aws|azure|vmware|kvm
```
- Deployed resources will be listed in the "./server/ansible/inventory.ini" file
- All access to the environment is possible only through the gateway
- Use "ssh [email protected]" to access hosts within the container, using their internal IP addresses ("~/.ssh/config" is set up to use the gateway as a jump host)
- You can copy "./generated/controller.prv_key" and "~/.ssh/config" to your workstation to access the deployed nodes directly
- Copy and install "./generated/*/minica.pem" into your browser to prevent SSL certificate errors
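If you copy the private key and SSH config to your workstation, the generated "~/.ssh/config" follows the usual jump-host pattern, roughly like the sketch below (the host names, IP range, and key path are placeholders for illustration, not the exact generated content):

```
Host gateway
  HostName <gateway-public-ip>
  User centos
  IdentityFile ~/.ssh/controller.prv_key

Host 10.1.*
  User centos
  IdentityFile ~/.ssh/controller.prv_key
  ProxyJump gateway
```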
- AWS CLI - Download from AWS
- Azure-CLI - Download from Azure
- Terraform - Download from Terraform
- Ansible - Install from Ansible or simply via pip (sudo pip3 install ansible)
- python3 (apt/yum/brew install python3)
- jq (apt/yum/brew install jq)
- hpecp (pip3 install hpecp)
- kubectl from K8s
- minica (apt/yum/brew install minica)
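If you want to run the scripts outside the container, you can quickly check which of these prerequisites are already on your PATH:

```bash
# report which of the required tools are installed locally
for cmd in aws az terraform ansible python3 jq hpecp kubectl minica; do
  if command -v "$cmd" > /dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING"
  fi
done
```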
- 00-run_all.sh: Runs all scripts at once (unattended install)
- 01-init.sh: Initializes Terraform, creates SSH keys & certificates
- 02-apply.sh: Runs "terraform apply" to deploy resources
- 03-install.sh: Runs Ansible scripts to install ECP
- 04-configure.sh: Runs Ansible scripts to configure ECP for the demo
- 99-destroy.sh: Destroys all created resources (DANGER: all resources will be destroyed, except the generated keys and certificates)
Courtesy of Dirk Derichsweiler (https://github.com/dderichswei).
- prepare_centos: Updates packages and requirements for ECP installation
- install_falco: Updates the kernel and installs the Falco service
- install_ecp: Initial installation and setup for ECP
- import_hosts: Collects node information and adds the nodes as ECP worker nodes
- create_k8s: Installs a Kubernetes cluster (if MLOps is not selected)
- create_picasso: Installs a Kubernetes cluster and Picasso (Data Fabric on Kubernetes)
- configure_picasso: Enables Picasso (Data Fabric on Kubernetes) for all tenants
- configure_mlops: Configures the MLOps tenant and life-cycle tools (Kubeflow, Minio, Jupyter Notebook, etc.)
Deployment defaults to the eu-west-2 (London) region on AWS and the uksouth (UK South) region on Azure.

Please use the following format to choose your region on AWS ("aws_config.json"):
"us-east-1" // N.Virginia
"us-east-2" // Ohio
"us-west-1" // N.California
"us-west-2" // Oregon
"ap-southeast-1" // Singapore
"eu-central-1" // Frankfurt
"eu-west-1" // Ireland
"eu-west-2" // London
"eu-west-3" // Paris
"eu-north-1" // Stockholm
"ca-central-1" // Montréal, Québec
Use the following format to select a region on Azure:
"eastus"
"eastus2"
"centralus"
"westus"
"westus2"
"canadacentral"
"canadaeast"
"northeurope"
"westeurope"
"ukwest"
"uksouth"
"francecentral"
"germanynorth"
"centralindia"
"japaneast"
"australiacentral"
"uaenorth"
"southafricawest"
** Not all regions have been tested; please provide feedback if you have an issue with a region.