Looking at kubectl describe node on Azure AKS
The article "Kubernetesで実際のメモリを超えるコンテナアプリを動かすと、どうなるか? - あさのひとりごと" ("What happens when you run a container app that exceeds actual memory on Kubernetes?") explains scheduling resources well, but I thought it was a bit of a shame that the most important piece, the raw output of kubectl describe node, wasn't included, so I'm pasting it here for reference.
From setting up AKS to tearing it down, the whole session looks like this:

az login
az provider register -n Microsoft.ContainerService
az provider register -n Microsoft.Network
az provider register -n Microsoft.Compute
az provider register -n Microsoft.Storage
az group list
az group create -n testaks -l eastus
az aks get-versions -l eastus
az aks create -g testaks -n testaks --node-count 2 --kubernetes-version 1.9.6
az aks install-cli
az aks get-credentials -g testaks -n testaks
kubectl get node
kubectl describe node
az aks delete -g testaks -n testaks
The output of kubectl describe node is below. Allocatable memory is 3319Mi, and under Allocated resources, Memory Requests is 290Mi on the node ending in 0 and 294Mi on the node ending in 1, so in this example roughly 3000Mi can still be allocated on each node. In this state, if you try to schedule multiple pods with requests.memory of 1.5Gi == 1536Mi, the third one will most likely fail with FailedScheduling. With 1500Mi, four pods can be scheduled. The placement of the kube-system pods deployed from the start is not fixed, so it may end up slightly skewed, in which case you could also see the situation from the original article where three pods schedule and the fourth fails.
Name:               aks-nodepool1-16184948-0
Roles:              agent
Labels:             agentpool=nodepool1
                    beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=Standard_DS1_v2
                    beta.kubernetes.io/os=linux
                    failure-domain.beta.kubernetes.io/region=eastus
                    failure-domain.beta.kubernetes.io/zone=0
                    kubernetes.azure.com/cluster=MC_testaks_testaks_eastus
                    kubernetes.io/hostname=aks-nodepool1-16184948-0
                    kubernetes.io/role=agent
                    storageprofile=managed
                    storagetier=Premium_LRS
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Fri, 13 Apr 2018 05:10:06 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Fri, 13 Apr 2018 05:11:13 +0000   Fri, 13 Apr 2018 05:11:13 +0000   RouteCreated                 RouteController created a route
  OutOfDisk            False   Fri, 13 Apr 2018 05:23:02 +0000   Fri, 13 Apr 2018 05:10:06 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure       False   Fri, 13 Apr 2018 05:23:02 +0000   Fri, 13 Apr 2018 05:10:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Fri, 13 Apr 2018 05:23:02 +0000   Fri, 13 Apr 2018 05:10:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready                True    Fri, 13 Apr 2018 05:23:02 +0000   Fri, 13 Apr 2018 05:11:08 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.240.0.4
  Hostname:    aks-nodepool1-16184948-0
Capacity:
  alpha.kubernetes.io/nvidia-gpu:  0
  cpu:                             1
  memory:                          3501592Ki
  pods:                            110
Allocatable:
  alpha.kubernetes.io/nvidia-gpu:  0
  cpu:                             1
  memory:                          3399192Ki
  pods:                            110
System Info:
  Machine ID:                 2c3d39f8fac841cb9df23cf4453420a9
  System UUID:                707CF566-AC50-D649-9A23-F6A03C86DE52
  Boot ID:                    ec9a15b2-af2e-4f84-92a5-5525d78c20f8
  Kernel Version:             4.13.0-1011-azure
  OS Image:                   Debian GNU/Linux 9 (stretch)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://1.13.1
  Kubelet Version:            v1.9.6
  Kube-Proxy Version:         v1.9.6
PodCIDR:     10.244.0.0/24
ExternalID:  /subscriptions/31c0faff-6b3e-4b51-86e2-6c9595e05454/resourceGroups/MC_testaks_testaks_eastus/providers/Microsoft.Compute/virtualMachines/aks-nodepool1-16184948-0
ProviderID:  azure:///subscriptions/31c0faff-6b3e-4b51-86e2-6c9595e05454/resourceGroups/MC_testaks_testaks_eastus/providers/Microsoft.Compute/virtualMachines/aks-nodepool1-16184948-0
Non-terminated Pods:  (6 in total)
  Namespace    Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------    ----                                   ------------  ----------  ---------------  -------------
  kube-system  kube-dns-v20-7c556f89c5-9grpn          110m (11%)    0 (0%)      120Mi (3%)       220Mi (6%)
  kube-system  kube-dns-v20-7c556f89c5-s8x25          110m (11%)    0 (0%)      120Mi (3%)       220Mi (6%)
  kube-system  kube-proxy-4n2xd                       100m (10%)    0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-svc-redirect-qwrtf                0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system  kubernetes-dashboard-546f987686-khjvt  100m (10%)    100m (10%)  50Mi (1%)        50Mi (1%)
  kube-system  tunnelfront-6f9ff58869-jxcfn           0 (0%)        0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  420m (42%)    100m (10%)  290Mi (8%)       490Mi (14%)
Events:
  Type    Reason                   Age                From                                  Message
  ----    ------                   ---                ----                                  -------
  Normal  Starting                 15m                kubelet, aks-nodepool1-16184948-0     Starting kubelet.
  Normal  NodeAllocatableEnforced  15m                kubelet, aks-nodepool1-16184948-0     Updated Node Allocatable limit across pods
  Normal  NodeHasNoDiskPressure    14m (x7 over 15m)  kubelet, aks-nodepool1-16184948-0     Node aks-nodepool1-16184948-0 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientDisk    13m (x8 over 15m)  kubelet, aks-nodepool1-16184948-0     Node aks-nodepool1-16184948-0 status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  13m (x8 over 15m)  kubelet, aks-nodepool1-16184948-0     Node aks-nodepool1-16184948-0 status is now: NodeHasSufficientMemory
  Normal  Starting                 12m                kube-proxy, aks-nodepool1-16184948-0  Starting kube-proxy.


Name:               aks-nodepool1-16184948-1
Roles:              agent
Labels:             agentpool=nodepool1
                    beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=Standard_DS1_v2
                    beta.kubernetes.io/os=linux
                    failure-domain.beta.kubernetes.io/region=eastus
                    failure-domain.beta.kubernetes.io/zone=1
                    kubernetes.azure.com/cluster=MC_testaks_testaks_eastus
                    kubernetes.io/hostname=aks-nodepool1-16184948-1
                    kubernetes.io/role=agent
                    storageprofile=managed
                    storagetier=Premium_LRS
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Fri, 13 Apr 2018 05:10:11 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Fri, 13 Apr 2018 05:11:13 +0000   Fri, 13 Apr 2018 05:11:13 +0000   RouteCreated                 RouteController created a route
  OutOfDisk            False   Fri, 13 Apr 2018 05:23:03 +0000   Fri, 13 Apr 2018 05:10:11 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure       False   Fri, 13 Apr 2018 05:23:03 +0000   Fri, 13 Apr 2018 05:10:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Fri, 13 Apr 2018 05:23:03 +0000   Fri, 13 Apr 2018 05:10:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready                True    Fri, 13 Apr 2018 05:23:03 +0000   Fri, 13 Apr 2018 05:11:11 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.240.0.5
  Hostname:    aks-nodepool1-16184948-1
Capacity:
  alpha.kubernetes.io/nvidia-gpu:  0
  cpu:                             1
  memory:                          3501592Ki
  pods:                            110
Allocatable:
  alpha.kubernetes.io/nvidia-gpu:  0
  cpu:                             1
  memory:                          3399192Ki
  pods:                            110
System Info:
  Machine ID:                 5b4fb70dfc744759821a92694f9d5993
  System UUID:                45C3EA49-8E12-0146-964D-5EFB5A4E3E8A
  Boot ID:                    e7160774-de86-4a16-9bd3-9f43af1cdd57
  Kernel Version:             4.13.0-1011-azure
  OS Image:                   Debian GNU/Linux 9 (stretch)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://1.13.1
  Kubelet Version:            v1.9.6
  Kube-Proxy Version:         v1.9.6
PodCIDR:     10.244.1.0/24
ExternalID:  /subscriptions/31c0faff-6b3e-4b51-86e2-6c9595e05454/resourceGroups/MC_testaks_testaks_eastus/providers/Microsoft.Compute/virtualMachines/aks-nodepool1-16184948-1
ProviderID:  azure:///subscriptions/31c0faff-6b3e-4b51-86e2-6c9595e05454/resourceGroups/MC_testaks_testaks_eastus/providers/Microsoft.Compute/virtualMachines/aks-nodepool1-16184948-1
Non-terminated Pods:  (3 in total)
  Namespace    Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------    ----                       ------------  ----------  ---------------  -------------
  kube-system  heapster-6599f48877-mhr5s  138m (13%)    138m (13%)  294Mi (8%)       294Mi (8%)
  kube-system  kube-proxy-2lv22           100m (10%)    0 (0%)      0 (0%)           0 (0%)
  kube-system  kube-svc-redirect-q54pd    0 (0%)        0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  238m (23%)    138m (13%)  294Mi (8%)       294Mi (8%)
Events:
  Type    Reason                   Age                From                                  Message
  ----    ------                   ---                ----                                  -------
  Normal  Starting                 15m                kubelet, aks-nodepool1-16184948-1     Starting kubelet.
  Normal  NodeAllocatableEnforced  15m                kubelet, aks-nodepool1-16184948-1     Updated Node Allocatable limit across pods
  Normal  NodeHasNoDiskPressure    14m (x7 over 15m)  kubelet, aks-nodepool1-16184948-1     Node aks-nodepool1-16184948-1 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientDisk    14m (x8 over 15m)  kubelet, aks-nodepool1-16184948-1     Node aks-nodepool1-16184948-1 status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  14m (x8 over 15m)  kubelet, aks-nodepool1-16184948-1     Node aks-nodepool1-16184948-1 status is now: NodeHasSufficientMemory
  Normal  Starting                 12m                kube-proxy, aks-nodepool1-16184948-1  Starting kube-proxy.
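The capacity math behind the 1536Mi-vs-1500Mi observation is just the scheduler's fit check: a pod fits on a node if its memory request plus the node's existing requests stays within Allocatable. A minimal sketch of that arithmetic (not real scheduler code), using the Ki and Mi values from the output above:

```python
# Rough sketch of the scheduler's memory fit check, using the numbers
# from the describe node output above.

ALLOCATABLE_KI = 3399192                  # Allocatable memory per node, in Ki
allocatable_mi = ALLOCATABLE_KI // 1024   # 3319Mi

def pods_that_fit(already_requested_mi, pod_request_mi):
    """How many pods of pod_request_mi fit on one node, given existing requests."""
    free = allocatable_mi - already_requested_mi
    return free // pod_request_mi

# Node ...-0 already has 290Mi requested, node ...-1 has 294Mi.
nodes = [290, 294]

for req in (1536, 1500):   # 1.5Gi == 1536Mi vs 1500Mi
    total = sum(pods_that_fit(used, req) for used in nodes)
    print(f"{req}Mi pods that fit across both nodes: {total}")
```

With 1536Mi only one pod fits per node (2 total, so the third gets FailedScheduling), while 1500Mi squeezes in two per node (4 total), matching the observation above.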
For the basics of resource requests and limits, see "Resource request and limit" in the OpenShift documentation.
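If you want to try the scheduling experiment yourself, a deployment along these lines should work (a sketch: the name memtest and the pause image are my own choices, not from the original article; scaling replicas past what fits should leave pods Pending with FailedScheduling):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: memtest                       # hypothetical name
spec:
  replicas: 2                         # scale up with kubectl scale to trigger FailedScheduling
  selector:
    matchLabels:
      app: memtest
  template:
    metadata:
      labels:
        app: memtest
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1   # any lightweight image works
        resources:
          requests:
            memory: 1.5Gi             # == 1536Mi; try 1500Mi to fit 4 pods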