```text
WARNING Release Image Architecture not detected. Release Image Architecture is unknown
INFO Credentials loaded from the "default" profile in file "/home/ec2-user/.aws/credentials"
WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
INFO Consuming Install Config from target directory
INFO Adding clusters...
INFO Creating infrastructure resources...
INFO Reconciling IAM roles for control-plane and compute nodes
INFO Creating IAM role for master
INFO Creating IAM role for worker
INFO Started local control plane with envtest
INFO Stored kubeconfig for envtest in: /home/ec2-user/okd/install-config/.clusterapi_output/envtest.kubeconfig
INFO Running process: Cluster API with args [-v=2 --diagnostics-address=0 --health-addr=127.0.0.1:42687 --webhook-port=33045 --webhook-cert-dir=/tmp/envtest-serving-certs-3860374580 --kubeconfig=/home/ec2-user/okd/install-config/.clusterapi_output/envtest.kubeconfig]
INFO Running process: aws infrastructure provider with args [-v=4 --diagnostics-address=0 --health-addr=127.0.0.1:42031 --webhook-port=37465 --webhook-cert-dir=/tmp/envtest-serving-certs-1388629115 --feature-gates=BootstrapFormatIgnition=true,ExternalResourceGC=true,TagUnmanagedNetworkResources=false,EKS=false --kubeconfig=/home/ec2-user/okd/install-config/.clusterapi_output/envtest.kubeconfig]
INFO Creating infra manifests...
INFO Created manifest *v1.Namespace, namespace= name=openshift-cluster-api-guests
INFO Created manifest *v1beta2.AWSClusterControllerIdentity, namespace= name=default
INFO Created manifest *v1beta1.Cluster, namespace=openshift-cluster-api-guests name=sno-q8jt9
INFO Created manifest *v1beta2.AWSCluster, namespace=openshift-cluster-api-guests name=sno-q8jt9
INFO Done creating infra manifests
INFO Creating kubeconfig entry for capi cluster sno-q8jt9
INFO Waiting up to 15m0s (until 9:25AM UTC) for network infrastructure to become ready...
INFO Network infrastructure is ready
INFO Creating private Hosted Zone
INFO Creating Route53 records for control plane load balancer
INFO Created manifest *v1beta2.AWSMachine, namespace=openshift-cluster-api-guests name=sno-q8jt9-bootstrap
INFO Created manifest *v1beta2.AWSMachine, namespace=openshift-cluster-api-guests name=sno-q8jt9-master-0
INFO Created manifest *v1beta1.Machine, namespace=openshift-cluster-api-guests name=sno-q8jt9-bootstrap
INFO Created manifest *v1beta1.Machine, namespace=openshift-cluster-api-guests name=sno-q8jt9-master-0
INFO Created manifest *v1.Secret, namespace=openshift-cluster-api-guests name=sno-q8jt9-bootstrap
INFO Created manifest *v1.Secret, namespace=openshift-cluster-api-guests name=sno-q8jt9-master
INFO Waiting up to 15m0s (until 9:33AM UTC) for machines [sno-q8jt9-bootstrap sno-q8jt9-master-0] to provision...
INFO Control-plane machines are ready
INFO Cluster API resources have been created. Waiting for cluster to become ready...
INFO Waiting up to 20m0s (until 9:39AM UTC) for the Kubernetes API at https://api.sno.okd-lab.ioannisgk.com:6443...
INFO API v1.30.6-dirty up
INFO Waiting up to 45m0s (until 10:10AM UTC) for bootstrapping to complete...
INFO Detected Single Node deployment
INFO Waiting up to 5m0s (until 9:48AM UTC) for the bootstrap etcd member to be removed...
INFO Destroying the bootstrap resources...
INFO Waiting up to 5m0s for bootstrap machine deletion openshift-cluster-api-guests/sno-q8jt9-bootstrap...
INFO Shutting down local Cluster API controllers...
INFO Stopped controller: Cluster API
INFO Stopped controller: aws infrastructure provider
INFO Shutting down local Cluster API control plane...
INFO Local Cluster API system has completed operations
INFO Finished destroying bootstrap resources
INFO Waiting up to 40m0s (until 10:24AM UTC) for the cluster at https://api.sno.okd-lab.ioannisgk.com:6443 to initialize...
INFO Waiting up to 30m0s (until 10:30AM UTC) to ensure each cluster operator has finished progressing...
INFO All cluster operators have completed progressing
INFO Checking to see if there is a route at openshift-console/console...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/ec2-user/okd/install-config/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.sno.okd-lab.ioannisgk.com
INFO Login to the console with user: "kubeadmin", and password: "ERTv5-YTjfK-iRuK5-99FSv"
INFO Time elapsed: 53m12s
```
For context: I have a bastion host on AWS, and I did the following to prepare for the creation of a new SNO OKD v4.17 SCOS node. I used an install configuration YAML file (the "t3a.2xlarge" EC2 instance type has 8 vCPUs and 32 GB of memory), and then created the SNO OKD node. The result output is the installer log shown above.
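The install-config I referred to had roughly the following shape. This is an illustrative sketch, not my exact file: the base domain and cluster name match the API URL in the log above, but the region, pull secret, and SSH key are placeholders.

```yaml
apiVersion: v1
baseDomain: okd-lab.ioannisgk.com     # matches api.sno.okd-lab.ioannisgk.com in the log
metadata:
  name: sno
compute:
- name: worker
  replicas: 0                         # SNO: no worker nodes
controlPlane:
  name: master
  replicas: 1                         # SNO: a single control-plane node
  platform:
    aws:
      type: t3a.2xlarge               # 8 vCPUs (4 physical cores x 2 threads), 32 GB
platform:
  aws:
    region: eu-west-1                 # placeholder region
pullSecret: '<pull-secret>'           # placeholder
sshKey: '<ssh-public-key>'            # placeholder
```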
So far, everything seems good.

The Bug:
When I log in to the OKD web console, the Cluster Utilization section shows only 4 CPU cores available instead of 8. I tried the same process with the "m5.2xlarge" EC2 type, and still only 4 cores are recognized instead of 8.

This is a real problem: I also tested the same install configuration and EC2 type with an OpenShift installation, and OpenShift recognizes all 8 CPU cores.

Could you suggest how to fix this issue?
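One hypothesis worth checking (a suggestion, not a confirmed diagnosis): on "t3a.2xlarge" the 8 vCPUs are 4 physical cores with 2 hardware threads each, so if SMT is disabled on the node, exactly 4 CPUs would be visible to Kubernetes. The commands below, run from the bastion with the kubeconfig produced by the install, would confirm or rule that out:

```shell
# Use the kubeconfig written by the installer (path taken from the log above).
export KUBECONFIG=/home/ec2-user/okd/install-config/auth/kubeconfig

# Grab the single node's name (SNO has exactly one node).
NODE=$(oc get nodes -o jsonpath='{.items[0].metadata.name}')

# CPU topology as the node sees it: with SMT on, expect
# "Thread(s) per core: 2" and "CPU(s): 8"; with SMT off, only 4 CPUs.
oc debug node/"$NODE" -- chroot /host lscpu

# SMT state directly from the kernel: "on", "off", "forceoff", or "notsupported".
oc debug node/"$NODE" -- chroot /host cat /sys/devices/system/cpu/smt/control

# Check whether a "nosmt" kernel argument was rendered into the MachineConfigs.
oc get machineconfig \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.kernelArguments}{"\n"}{end}'

# The arithmetic behind the hypothesis: 4 cores x 2 threads = 8 vCPUs.
echo $((4 * 2))
```

If SMT does turn out to be off, comparing the MachineConfig kernel arguments (or the node's `/proc/cmdline`) between the OKD and OpenShift clusters should show where the difference comes from.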