kubectl describe nodes


1 Overview

kubectl describe node and kubectl describe nodes print detailed information about one or more nodes: labels, annotations, taints, conditions, capacity, allocatable resources, system info, non-terminated pods, and recent events.

kubectl describe node
kubectl describe nodes
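
The describe output is meant for human reading; for scripting, individual fields of the same node object can be read with kubectl get -o jsonpath. A minimal sketch, assuming the node name node01 from the 1.29 example below:

$ kubectl get node node01 -o jsonpath='{.status.nodeInfo.kubeletVersion}'
v1.29.0
$ kubectl get node node01 -o jsonpath='{.status.allocatable.cpu}'
1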

2 1.29

$ kubectl get no
NAME           STATUS   ROLES           AGE   VERSION
controlplane   Ready    control-plane   14h   v1.29.0
node01         Ready    <none>          13h   v1.29.0
$ kubectl describe no node01
Name:               node01
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node01
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"72:1d:23:cb:bd:e6"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 172.30.2.2
                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 172.30.2.2/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 192.168.1.1
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 03 Mar 2024 15:32:07 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  node01
  AcquireTime:     <unset>
  RenewTime:       Mon, 04 Mar 2024 05:12:37 +0000
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Mon, 04 Mar 2024 04:24:24 +0000   Mon, 04 Mar 2024 04:24:24 +0000   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Mon, 04 Mar 2024 05:10:18 +0000   Sun, 03 Mar 2024 15:32:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 04 Mar 2024 05:10:18 +0000   Sun, 03 Mar 2024 15:32:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 04 Mar 2024 05:10:18 +0000   Sun, 03 Mar 2024 15:32:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Mon, 04 Mar 2024 05:10:18 +0000   Sun, 03 Mar 2024 15:32:16 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  172.30.2.2
  Hostname:    node01
Capacity:
  cpu:                1
  ephemeral-storage:  20134592Ki
  hugepages-2Mi:      0
  memory:             2030940Ki
  pods:               110
Allocatable:
  cpu:                1
  ephemeral-storage:  19586931083
  hugepages-2Mi:      0
  memory:             1928540Ki
  pods:               110
System Info:
  Machine ID:                 388a2d0f867a4404bc12a0093bd9ed8d
  System UUID:                41f26ab9-4d3a-4984-abc4-f614b02795f9
  Boot ID:                    43e6078c-8ce2-4bca-9c5a-3559a7c33335
  Kernel Version:             5.4.0-131-generic
  OS Image:                   Ubuntu 20.04.5 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.7.13
  Kubelet Version:            v1.29.0
  Kube-Proxy Version:         v1.29.0
PodCIDR:                      192.168.1.0/24
PodCIDRs:                     192.168.1.0/24
Non-terminated Pods:          (4 in total)
  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
  kube-system                 canal-gjxwj                 25m (2%)      0 (0%)      0 (0%)           0 (0%)         13h
  kube-system                 coredns-86b698fbb6-8q542    50m (5%)      0 (0%)      50Mi (2%)        170Mi (9%)     13h
  kube-system                 coredns-86b698fbb6-hqpmj    50m (5%)      0 (0%)      50Mi (2%)        170Mi (9%)     13h
  kube-system                 kube-proxy-lhxdd            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                125m (12%)  0 (0%)
  memory             100Mi (5%)  340Mi (18%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type     Reason                   Age                From             Message
  ----     ------                   ----               ----             -------
  Normal   Starting                 13h                kube-proxy       
  Normal   Starting                 48m                kube-proxy       
  Normal   NodeHasNoDiskPressure    13h (x2 over 13h)  kubelet          Node node01 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     13h (x2 over 13h)  kubelet          Node node01 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  13h                kubelet          Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  13h (x2 over 13h)  kubelet          Node node01 status is now: NodeHasSufficientMemory
  Normal   RegisteredNode           13h                node-controller  Node node01 event: Registered Node node01 in Controller
  Normal   NodeReady                13h                kubelet          Node node01 status is now: NodeReady
  Normal   Starting                 48m                kubelet          Starting kubelet.
  Normal   NodeHasSufficientMemory  48m (x2 over 48m)  kubelet          Node node01 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    48m (x2 over 48m)  kubelet          Node node01 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     48m (x2 over 48m)  kubelet          Node node01 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  48m                kubelet          Updated Node Allocatable limit across pods
  Warning  Rebooted                 48m                kubelet          Node node01 has been rebooted, boot id: 43e6078c-8ce2-4bca-9c5a-3559a7c33335
  Normal   RegisteredNode           47m                node-controller  Node node01 event: Registered Node node01 in Controller
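
The Non-terminated Pods table above can also be reproduced from the pod list by filtering on spec.nodeName; a minimal sketch:

$ kubectl get pods --all-namespaces --field-selector spec.nodeName=node01

This returns the same four kube-system pods (canal, the two coredns replicas, and kube-proxy) shown in the describe output.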

3 1.13

root@localhost:~# kubectl get nodes
NAME                     STATUS       ROLES     AGE     VERSION
kubernetes-node-861h     NotReady     <none>    1h      v1.13.0
kubernetes-node-bols     Ready        <none>    1h      v1.13.0
kubernetes-node-st6x     Ready        <none>    1h      v1.13.0
kubernetes-node-unaj     Ready        <none>    1h      v1.13.0
root@localhost:~# kubectl describe node kubernetes-node-861h
Name:               kubernetes-node-861h
Role
Labels:             kubernetes.io/arch=amd64
                    kubernetes.io/os=linux
                    kubernetes.io/hostname=kubernetes-node-861h
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             <none>
CreationTimestamp:  Mon, 04 Sep 2017 17:13:23 +0800
Phase:
Conditions:
  Type            Status   LastHeartbeatTime                 LastTransitionTime                Reason             Message
  ----            ------   -----------------                 ------------------                ------             -------
  OutOfDisk       Unknown  Fri, 08 Sep 2017 16:04:28 +0800   Fri, 08 Sep 2017 16:20:58 +0800   NodeStatusUnknown  Kubelet stopped posting node status.
  MemoryPressure  Unknown  Fri, 08 Sep 2017 16:04:28 +0800   Fri, 08 Sep 2017 16:20:58 +0800   NodeStatusUnknown  Kubelet stopped posting node status.
  DiskPressure    Unknown  Fri, 08 Sep 2017 16:04:28 +0800   Fri, 08 Sep 2017 16:20:58 +0800   NodeStatusUnknown  Kubelet stopped posting node status.
  Ready           Unknown  Fri, 08 Sep 2017 16:04:28 +0800   Fri, 08 Sep 2017 16:20:58 +0800   NodeStatusUnknown  Kubelet stopped posting node status.
Addresses:          10.240.115.55,104.197.0.26
Capacity:
 cpu:           2
 hugePages:     0
 memory:        4046788Ki
 pods:          110
Allocatable:
 cpu:           1500m
 hugePages:     0
 memory:        1479263Ki
 pods:          110
System Info:
 Machine ID:                    8e025a21a4254e11b028584d9d8b12c4
 System UUID:                   349075D1-D169-4F25-9F2A-E886850C47E3
 Boot ID:                       5cd18b37-c5bd-4658-94e0-e436d3f110e0
 Kernel Version:                4.4.0-31-generic
 OS Image:                      Debian GNU/Linux 8 (jessie)
 Operating System:              linux
 Architecture:                  amd64
 Container Runtime Version:     docker://1.12.5
 Kubelet Version:               v1.6.9+a3d1dfa6f4335
 Kube-Proxy Version:            v1.6.9+a3d1dfa6f4335
ExternalID:                     15233045891481496305
Non-terminated Pods:            (9 in total)
  Namespace                     Name                                            CPU Requests    CPU Limits      Memory Requests Memory Limits
  ---------                     ----                                            ------------    ----------      --------------- -------------
......
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits      Memory Requests         Memory Limits
  ------------  ----------      ---------------         -------------
  900m (60%)    2200m (146%)    1009286400 (66%)        5681286400 (375%)
Events:         <none>
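
When a node reports Unknown conditions like this (the kubelet has stopped posting status), the Ready condition of every node can be pulled in one pass with jsonpath; a minimal sketch against the same cluster:

$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
kubernetes-node-861h    Unknown
kubernetes-node-bols    True
kubernetes-node-st6x    True
kubernetes-node-unaj    True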
$ kubectl describe node
...
Capacity:
  cpu:                4
  ephemeral-storage:  32461564Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16084408Ki
  nvidia.com/gpu:     2
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  29916577333
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             15982008Ki
  nvidia.com/gpu:     2
  pods:               110
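
To check an extended resource such as nvidia.com/gpu on every node without reading the full report, grepping the describe output is enough; a minimal sketch:

$ kubectl describe nodes | grep -E 'Name:|nvidia.com/gpu'

Pods consume this resource by requesting nvidia.com/gpu under resources.limits; the scheduler counts such requests against the Allocatable figure shown above.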

4 See also

5 References
