Change max pods per node on cli #13420
Comments
Thanks for the feedback! We are routing this to the appropriate team for follow-up. cc @Azure/aks-pm.
This is by design; the max pods per node has to represent the actual count of pods as seen from Kubernetes. The system pod count depends on the cluster config: if you add multiple add-ons, the number of system pods will increase, and while these pods are managed by the AKS service, they still reside in user space so you can debug and transparently view them as needed. It is also by design that there must be at least 1 system pool; you can delete the whole cluster if you need all the system pools removed. Closing since this is by design, but comment if this doesn't clarify and we can revisit.
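If it helps, one way to see this for yourself (a sketch, assuming kubectl is configured against the cluster; <node-name> is a placeholder for one of your nodes):
# List every pod scheduled on a given node, including kube-system pods; all of these count against the node's max pods limit.
kubectl get pods --all-namespaces --field-selector spec.nodeName=<node-name> -o wide
# Show the node's advertised pod capacity, i.e. the max pods value fixed when the pool was created.
kubectl get node <node-name> -o jsonpath='{.status.capacity.pods}'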
I understand, displaying the system pods in the pod total makes sense, but I did not understand how you determine that the maximum number of pods per node is 30. Does a cluster with DS12-v2 nodes only run 30 pods, the same as a cluster with B2ms nodes? This issue should not be closed without action; I reported 2 problems with the CLI.
The max pods setting per node is user defined and set at create time only, for each agent pool. There is no update functionality for it, as seen from the possible commands on update:
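(A rough way to check this locally, assuming a reasonably recent azure-cli; exact output varies by version:)
# List the parameters accepted on update; --max-pods is not among them.
az aks nodepool update --help
# --max-pods is only documented on the add (create) command.
az aks nodepool add --help | grep -i max-pods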
Clusters require at least 1 system pool, as returned in the error message. This is detailed here:
But why can't you update the maximum number of pods for an existing nodepool? Could this be added to the CLI, please?
Because that would involve re-wiring the pod CIDR, which requires a node reboot across the node pool, at which point creating a new pool and deleting the previous one becomes the more controlled scenario. Happy to evaluate your scenario if it's a major pain for you to do this.
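For reference, a sketch of that create-then-delete flow (all names such as <rg>, <cluster>, oldpool and newpool, plus the counts, are placeholders; kubectl drain flags may differ between versions):
# Create the replacement pool with the max pods value you actually want.
az aks nodepool add -g <rg> --cluster-name <cluster> --name newpool --max-pods 110 --node-count 3
# Cordon and drain the old pool's nodes so workloads reschedule onto newpool.
for node in $(kubectl get nodes -l agentpool=oldpool -o name); do
  kubectl cordon "$node"
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
done
# Remove the old pool once it has been drained.
az aks nodepool delete -g <rg> --cluster-name <cluster> --name oldpool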
Okay, I fully understand. My original problem is that the AKS collection for Ansible doesn't support specifying the max pods setting, so now I'm trying to build a workaround.
But now I think I should wait for an action on the other issue.
Hi @palma21 and anyone: is this still the case, that after AKS cluster creation we cannot later increase maxPods to 250 on worker nodes? I see a default value of 30 in the portal for worker nodes, even though my Bicep config says 250.
Why is it not showing 250?
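One way to check the value that was actually applied, instead of relying on the portal (a sketch; I believe maxPods is the property name exposed on the nodepool resource):
az aks nodepool show -g <rg> --cluster-name <cluster> --name <pool> --query maxPods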
bump
250 is the Kubernetes max, 110 is the usual default, but on AKS the default is 30. I'm more interested in why AKS is defaulting to 30. This makes it very easy to hit the scenario where the node pool autoscaler doesn't work, because there is no memory or CPU pressure at 30 pods. Perhaps you should align with the Kubernetes default of 110, or fix the autoscaler to also pay attention to the max pods count and the current allocation?
I believe this is because AKS doesn't support changing this value after creation.
I'm sure this is the same for any cloud-hosted K8s cluster, but GKE etc. stick to that default of 110, and is 80 extra address reservations per node really that big of a deal, especially when they recommend an address space of /16 on your vnet? Either way, 30 is very low, so the autoscaler just doesn't scale when you hit that 30 pod limit and there's nothing you can do about it. So the way I see it: either set a sensible default that is likely to create some pressure to trigger the autoscaler, or, probably better still, make the autoscaler also watch the pod allocation and limit and trigger scaling that way.
30 is the default only when using Azure CNI. They don't explicitly state why; I'm just making an educated guess. Although apparently if you deploy from the portal it defaults to 110, even with Azure CNI.
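Whatever default you end up with, the value each node is actually running with can be read from the kubelet's reported capacity (a sketch):
# Print every node alongside its pod capacity, which reflects the max pods value set at pool creation.
kubectl get nodes -o custom-columns=NAME:.metadata.name,MAXPODS:.status.capacity.pods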
Change max pods per node on cli
Problem
I'm facing a problem in my AKS cluster: the max pods number is 30 per node, and when I checked the deployed pods I found roughly 10 of them belong to the AKS system itself. I think the max pods number should not include AKS system pods, as this reduces my effective limit to about 20, not the 30 mentioned in the MS docs.
Expectation:
Run:
az aks nodepool update --cluster-name <your aks cluster name> -g <your aks resource group> --name <pool name> --max-pods=$maxpodsize
(currently it doesn't work)
Workaround:
I created a new node pool:
az aks nodepool add --cluster-name <your aks cluster name> -g <your-aks-resource-group> --max-pods $maxpodsize --name newpool --enable-cluster-autoscaler --min-count 1 --max-count 1 -c 1 -s <machine size>
Scale down the nodes of the default agentpool one by one and scale up newpool as necessary (see the sketch below).
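A sketch of those scale steps (pool and resource group names are placeholders; the scale command assumes the autoscaler is not enabled on the default pool):
# Raise the autoscaler bounds on the new pool so it can absorb the workloads.
az aks nodepool update -g <your-aks-resource-group> --cluster-name <your-aks-cluster-name> --name newpool --update-cluster-autoscaler --min-count 1 --max-count 5
# Manually scale the default pool down as pods move off it.
az aks nodepool scale -g <your-aks-resource-group> --cluster-name <your-aks-cluster-name> --name agentpool --node-count 1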
Workaround problem:
I tried to remove the default pool, but the CLI shows an error.
The message is: Operation failed with status: 'Bad Request'. Details: There has to be at least one system agent pool.
Workaround expectation:
Being able to delete the default pool once the other pool is working (see the sketch below).
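One possible sequence for that cleanup (a sketch; it assumes the --mode flag is available on az aks nodepool update in your CLI version, and that AKS accepts newpool as the remaining System pool):
# Promote the new pool to System mode so the cluster keeps at least one system pool.
az aks nodepool update -g <your-aks-resource-group> --cluster-name <your-aks-cluster-name> --name newpool --mode System
# Now the original default pool can be deleted.
az aks nodepool delete -g <your-aks-resource-group> --cluster-name <your-aks-cluster-name> --name agentpool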