Clone the repository:
git clone https://github.com/MucahidAydin/k8s-cluster.git
Create the "test" and "production" namespaces:
kubectl create namespace test
kubectl create namespace production
3: Create roles for the "junior" and "senior" groups in the "test" and "production" namespaces with specific permissions, and bind those roles to the respective groups.
- "junior" group should have full permissions in the "test" namespace and read/list permissions in the "production" namespace.
- "senior" group should have full permissions in both "test" and "production" namespaces and read/list permissions for cluster-wide resources.
kubectl apply -f ./RBAC/role/junior-role.yaml
kubectl apply -f ./RBAC/role/senior-role.yaml
kubectl apply -f ./RBAC/rolebinding/junior-rolebinding.yaml
kubectl apply -f ./RBAC/rolebinding/senior-rolebinding.yaml
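The role and binding manifests live under ./RBAC. As an illustration only (the exact rules are defined in the repository's YAML files), a junior role for the "test" namespace and its binding could look like this; the group name and rule list are assumptions:

# Illustrative sketch -- the real rules live in ./RBAC/role/junior-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: junior-role
  namespace: test
rules:
  - apiGroups: ["", "apps", "batch", "networking.k8s.io"]
    resources: ["*"]
    verbs: ["*"]            # full permissions in the "test" namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: junior-rolebinding
  namespace: test
subjects:
  - kind: Group
    name: junior            # group name is an assumption
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: junior-role
  apiGroup: rbac.authorization.k8s.io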
4: Deploy the NGINX Ingress Controller and verify that its components are running.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0-beta.0/deploy/static/provider/cloud/deploy.yaml
kubectl get all -n ingress-nginx
5: Ensure that three selected worker nodes only schedule pods for the "production" environment, while preventing other pods from being scheduled on them.
Taint each of the three selected worker nodes (repeat the command for every node):
kubectl taint nodes <node-name> app=production:NoExecute
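A taint alone only keeps other pods off these nodes; the production pods also need a matching toleration and, to pin them to exactly these nodes, a node label plus nodeSelector. A minimal sketch, assuming the label key app=production (the actual pod templates are in ./wordpress/production):

kubectl label nodes <node-name> app=production

Pod template excerpt (illustrative):
    spec:
      nodeSelector:
        app: production           # schedule only onto the labeled production nodes
      tolerations:
        - key: "app"
          operator: "Equal"
          value: "production"
          effect: "NoExecute"     # tolerate the taint applied above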
6: Deploy the WordPress application in both the "test" and "production" namespaces using the "wordpress:latest" and "mysql:5.6" images.
- MySQL should be accessible within the cluster as a "ClusterIP" service.
- Persistent volumes should be used for long-term storage of data.
- No sensitive information (e.g., passwords) should be stored in application or YAML files.
- Both applications should be scheduled on the same worker node.
- CPU and memory resource limits should be set for both applications (see the sketch after the deployment commands below).
To deploy in the test namespace:
cd ./wordpress/test
kubectl apply -f .
To deploy in the production namespace:
cd ./wordpress/production
kubectl apply -f .
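The manifests under ./wordpress cover these requirements. As a hedged illustration of how the password requirement can be met without writing it into the YAML files, a Secret can be created from the command line and referenced as an environment variable; the Secret name, key, and resource limits below are assumptions, not the repository's actual values:

kubectl create secret generic mysql-pass -n production --from-literal=password='<your-password>'

Container spec excerpt (illustrative):
    containers:
      - name: mysql
        image: mysql:5.6
        env:
          - name: MYSQL_ROOT_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mysql-pass      # assumed Secret name
                key: password
        resources:
          limits:
            cpu: "500m"               # example limits; tune to your cluster
            memory: "512Mi"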
7: Expose the "test" and "production" WordPress applications to the external world using Ingress, with the following domains:
- "test" namespace application: "testblog.example.com"
- "production" namespace application: "companyblog.example.com"
To deploy the ingress in the test namespace:
cd ./wordpress/test
kubectl apply -f wordpress-ingress.yaml
To deploy the ingress in the production namespace:
cd ./wordpress/production
kubectl apply -f wordpress-ingress.yaml
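For reference, a minimal sketch of what such an Ingress can look like for the test namespace; the backend Service name and port are assumptions, the actual definition is in wordpress-ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress-ingress
  namespace: test
spec:
  ingressClassName: nginx
  rules:
    - host: testblog.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wordpress       # assumed Service name
                port:
                  number: 80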
8: Create a 5-replica deployment in the "production" namespace from the image "ozgurozturknet/k8s:v1", with a rolling update strategy that allows a maximum of 2 pods to be updated simultaneously. Also, define a "liveness probe" for the "/healthcheck" endpoint and a "readiness probe" for the "/ready" endpoint.
cd ./Deployment
kubectl apply -f d-prod.yaml
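The applied manifest is d-prod.yaml. A minimal sketch of the key fields it needs to satisfy the task; the container port and probe timings are assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: d-prod
  namespace: production
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2            # at most 2 extra pods during an update
      maxUnavailable: 2      # at most 2 pods updated/unavailable at a time
  selector:
    matchLabels:
      app: d-prod
  template:
    metadata:
      labels:
        app: d-prod
    spec:
      containers:
        - name: d-container           # matches the container name used in the kubectl set image step
          image: ozgurozturknet/k8s:v1
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /healthcheck
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5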
9: Expose the deployment from the previous step with a Service.
cd ./Deployment
kubectl apply -f d-svc.yaml
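The applied manifest is d-svc.yaml. A sketch of a matching Service, assuming the app: d-prod pod label from the deployment sketch above; the service type and ports are assumptions and the real values are set in d-svc.yaml:

apiVersion: v1
kind: Service
metadata:
  name: d-svc
  namespace: production
spec:
  type: LoadBalancer        # assumed; the actual type is defined in d-svc.yaml
  selector:
    app: d-prod             # assumed pod label (see the deployment sketch above)
  ports:
    - port: 80
      targetPort: 80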
10: Scale the previous deployment to 3 replicas, then scale it up to 10 replicas. After that, update the deployment to use the "ozgurozturknet/k8s:v2" image.
kubectl scale deployment d-prod -n production --replicas=3
kubectl scale deployment d-prod -n production --replicas=10
kubectl set image deployment/d-prod d-container=ozgurozturknet/k8s:v2 -n production
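To watch the rolling update progress, and to inspect or revert it if needed, kubectl's standard rollout subcommands can be used:

kubectl rollout status deployment/d-prod -n production
kubectl rollout history deployment/d-prod -n production
kubectl rollout undo deployment/d-prod -n production     # only if the v2 rollout needs to be reverted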
11: Deploy the DaemonSet manifests from the repository so that one pod runs on every node.
cd ./DaemonSet
kubectl apply -f .
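The actual manifest lives in ./DaemonSet. For reference only, this is the general shape of a DaemonSet; the name and image below are placeholders, not the repository's:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-daemonset       # placeholder name
spec:
  selector:
    matchLabels:
      app: example-daemonset
  template:
    metadata:
      labels:
        app: example-daemonset
    spec:
      containers:
        - name: agent
          image: busybox:1.36       # placeholder image
          command: ["sh", "-c", "while true; do sleep 3600; done"]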
12: Deploy a 2-node "mongodb" cluster as a "statefulset" and ensure it is working. Connect to the MongoDB pod and initiate a replica set.
cd ./MongoDB
kubectl apply -f .
To connect to the MongoDB pod:
kubectl exec -it mongodb-0 -n test -- mongosh
To initiate the MongoDB replica set (two members, matching the 2-node StatefulSet):
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongodb-0.mongodb-headless.test.svc.cluster.local:27017" },
    { _id: 1, host: "mongodb-1.mongodb-headless.test.svc.cluster.local:27017" }
  ]
});
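The manifests are in ./MongoDB. As a hedged sketch of the pieces such a deployment needs (a headless Service for stable pod DNS names plus the StatefulSet itself; the names match the hostnames used in rs.initiate, while the image tag and storage size are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: mongodb-headless
  namespace: test
spec:
  clusterIP: None              # headless: gives each pod a stable DNS name
  selector:
    app: mongodb
  ports:
    - port: 27017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
  namespace: test
spec:
  serviceName: mongodb-headless
  replicas: 2
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:6.0          # assumed image tag
          command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi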
13: Create a service account with "read" and "list" permissions for all resources across the cluster, then create a pod that uses this service account to list all pods using "curl".
To create the service account:
cd ./ServiceAccount
kubectl apply -f .
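The manifests are under ./ServiceAccount. A hedged sketch of what such a setup typically contains: a ServiceAccount, a ClusterRole granting get/list on all resources, a ClusterRoleBinding, and a pod that runs under the ServiceAccount with curl available. All names and the image below are assumptions:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: reader-sa                 # assumed name
  namespace: test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-reader
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["get", "list"]        # "read"/"list" map to the get and list verbs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-reader-binding
subjects:
  - kind: ServiceAccount
    name: reader-sa
    namespace: test
roleRef:
  kind: ClusterRole
  name: cluster-reader
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod                  # assumed name
  namespace: test
spec:
  serviceAccountName: reader-sa
  containers:
    - name: curl
      image: curlimages/curl:8.5.0
      command: ["sleep", "3600"]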
To test, exec into the pod that uses the service account and run:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --insecure https://kubernetes.default.svc.cluster.local/api/v1/namespaces/test/pods --header "Authorization: Bearer $TOKEN"
To drain a node:
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
To uncordon the node:
kubectl uncordon <node-name>