From daab4b85743337ae428724aa1487c1bf5216dd92 Mon Sep 17 00:00:00 2001
From: Alexey Perevalov
Date: Fri, 3 Jul 2020 12:21:35 +0300
Subject: [PATCH] Add numaid and cpus into PodResources interface

This change is necessary for a daemon that exports resources together
with their topology, which is used in topology aware scheduling.

Information about CPUs is kept in cpu_ids, since that is enough to
represent both the quantity and the NUMA id. The NUMA id can be
obtained from cadvisor MachineInfo, since each id in cpu_ids is a
thread_id.

This API doesn't provide the cpu fraction, since it can be obtained
from the Pod's requests/limits; in the case of a non-integer CPU
quantity and non-guaranteed QoS, the assigned cpus are not exclusive
and the NUMA id is not interesting.

Signed-off-by: Alexey Perevalov
---
 keps/sig-node/compute-device-assignment.md | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/keps/sig-node/compute-device-assignment.md b/keps/sig-node/compute-device-assignment.md
index 36228a6dff2d..f9b1ec61fb6c 100644
--- a/keps/sig-node/compute-device-assignment.md
+++ b/keps/sig-node/compute-device-assignment.md
@@ -16,7 +16,7 @@ creation-date: "2018-07-19"
 last-updated: "2019-04-30"
 status: implementable
 ---
-# Kubelet endpoint for device assignment observation details 
+# Kubelet endpoint for device assignment observation details
 
 ## Table of Contents
 
@@ -57,10 +57,15 @@ In this document we will discuss the motivation and code changes required for in
 
 ![device monitoring architecture](https://user-images.githubusercontent.com/3262098/43926483-44331496-9bdf-11e8-82a0-14b47583b103.png)
 
+### Device aware CNI plugin
+After this interface was introduced, it was used by CNI plugins like [kuryr-kubernetes](https://review.opendev.org/#/c/651580/) together with [intel-sriov-device-plugin](https://github.com/intel/sriov-network-device-plugin) to correctly determine which devices were assigned to the pod.
+
+### Topology aware scheduling
+This interface can be used to collect allocated resources together with information about the NUMA topology of the worker node. This information can then be used for NUMA aware scheduling.
 ## Changes
 
-Add a v1alpha1 Kubelet GRPC service, at `/var/lib/kubelet/pod-resources/kubelet.sock`, which returns information about the kubelet's assignment of devices to containers. It obtains this information from the internal state of the kubelet's Device Manager. The GRPC Service returns a single PodResourcesResponse, which is shown in proto below:
+Add a v1alpha1 Kubelet GRPC service, at `/var/lib/kubelet/pod-resources/kubelet.sock`, which returns information about the kubelet's assignment of devices and cpus to containers, together with their NUMA ids. It obtains this information from the internal state of the kubelet's Device Manager and CPU Manager respectively. The GRPC Service returns a single PodResourcesResponse, which is shown in proto below:
 
 ```protobuf
 // PodResources is a service provided by the kubelet that provides information about the
 // node resources consumed by pods and containers on the node
@@ -87,12 +92,14 @@ message PodResources {
 message ContainerResources {
     string name = 1;
     repeated ContainerDevices devices = 2;
+    repeated uint32 cpu_ids = 3;
 }
 
 // ContainerDevices contains information about the devices assigned to a container
 message ContainerDevices {
     string resource_name = 1;
     repeated string device_ids = 2;
+    uint32 numaid = 3;
 }
 ```
 
@@ -113,7 +120,7 @@ message ContainerDevices {
 * Notes:
   * Does not include any reference to resource names. Monitoring agentes must identify devices by the device or environment variables passed to the pod or container.
 
-### Add a field to Pod Status. 
+### Add a field to Pod Status.
 * Pros:
   * Allows for observation of container to device bindings local to the node through the `/pods` endpoint
 * Cons:
@@ -148,7 +155,7 @@ type Container struct {
 }
 ```
 * During Kubelet pod admission, if `ComputeDevices` is found non-empty, specified devices will be allocated otherwise behaviour will remain same as it is today.
-* Before starting the pod, the kubelet writes the assigned `ComputeDevices` back to the pod spec. 
+* Before starting the pod, the kubelet writes the assigned `ComputeDevices` back to the pod spec.
   * Note: Writing to the Api Server and waiting to observe the updated pod spec in the kubelet's pod watch may add significant latency to pod startup.
 * Allows devices to potentially be assigned by a custom scheduler.
 * Serves as a permanent record of device assignments for the kubelet, and eliminates the need for the kubelet to maintain this state locally.
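The commit message argues that `cpu_ids` alone is enough for a topology-exporting daemon, because each id is a thread id whose NUMA node can be looked up in cadvisor MachineInfo. A minimal sketch of that consumer-side lookup, assuming a MachineInfo-like mapping of NUMA node id to thread ids; the function name and data shapes here are illustrative, not part of the kubelet API:

```python
# Sketch: group the cpu_ids reported by the PodResources endpoint for one
# container by NUMA node. The topology dict stands in for what a daemon
# could derive from cadvisor MachineInfo (node id -> thread ids); it is a
# hypothetical shape, not an actual cadvisor or kubelet type.

def group_cpus_by_numa(cpu_ids, numa_topology):
    """Return {numa_id: sorted assigned thread ids} for the given cpu_ids."""
    by_node = {}
    for node_id, threads in numa_topology.items():
        assigned = sorted(set(cpu_ids) & set(threads))
        if assigned:
            by_node[node_id] = assigned
    return by_node

# A two-node machine: threads 0-3 on node 0, threads 4-7 on node 1.
topology = {0: [0, 1, 2, 3], 1: [4, 5, 6, 7]}

# cpu_ids as they would appear in ContainerResources for a container with
# exclusively assigned cpus (guaranteed QoS, integer CPU request).
print(group_cpus_by_numa([1, 2, 5], topology))  # {0: [1, 2], 1: [5]}
```

This illustrates why the API omits a separate NUMA field for cpus: the NUMA placement is fully recoverable from the thread ids plus machine topology, so `repeated uint32 cpu_ids` carries both quantity and locality.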