1. Kubernetes Hands-on Practice
1.1. Creating a pod with the run command
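The prompts below use short aliases that are not defined in this section; a guess at their likely definitions (treat these as assumptions, not part of the original session):

# assumed alias definitions -- not shown in the original session
alias k='kubectl'
alias kgp='kubectl get pods -o wide'    # -o wide matches the IP/NODE columns seen in the output below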
# Confirm that no pods exist in the default namespace
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ kgp
No resources found in default namespace.
# Create a pod named webserver using the nginx image
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ k run webserver --image=nginx
pod/webserver created
# Check the newly created pod
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ kgp
NAME        READY   STATUS              RESTARTS   AGE   IP       NODE       NOMINATED NODE   READINESS GATES
webserver   0/1     ContainerCreating   0          4s    <none>   ubu22-03   <none>           <none>
# Check that the pod is running
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ kgp
NAME        READY   STATUS    RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
webserver   1/1     Running   0          15s   10.233.109.2   ubu22-03   <none>           <none>
# List pods in all namespaces
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ kgp -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE     IP               NODE       NOMINATED NODE   READINESS GATES
default       webserver                                 1/1     Running   0          2m31s   10.233.109.2     ubu22-03   <none>           <none>
kube-system   calico-kube-controllers-6dfcdfb99-s8rks   1/1     Running   0          139m    10.233.109.1     ubu22-03   <none>           <none>
kube-system   calico-node-87jt9                         1/1     Running   0          140m    192.168.100.50   ubu22-03   <none>           <none>
kube-system   calico-node-bv6t2                         1/1     Running   0          140m    192.168.100.30   ubu22-01   <none>           <none>
kube-system   calico-node-vlw8g                         1/1     Running   0          140m    192.168.100.40   ubu22-02   <none>           <none>
kube-system   coredns-645b46f4b6-c6dqn                  1/1     Running   0          139m    10.233.84.65     ubu22-01   <none>           <none>
kube-system   coredns-645b46f4b6-tl8td                  1/1     Running   0          138m    10.233.88.65     ubu22-02   <none>           <none>
kube-system   dns-autoscaler-659b8c48cb-tzllr           1/1     Running   0          139m    10.233.84.66     ubu22-01   <none>           <none>
kube-system   kube-apiserver-ubu22-01                   1/1     Running   1          141m    192.168.100.30   ubu22-01   <none>           <none>
kube-system   kube-controller-manager-ubu22-01          1/1     Running   2          141m    192.168.100.30   ubu22-01   <none>           <none>
kube-system   kube-proxy-4tcm9                          1/1     Running   0          140m    192.168.100.40   ubu22-02   <none>           <none>
kube-system   kube-proxy-bdbgn                          1/1     Running   0          140m    192.168.100.30   ubu22-01   <none>           <none>
kube-system   kube-proxy-ps42j                          1/1     Running   0          140m    192.168.100.50   ubu22-03   <none>           <none>
kube-system   kube-scheduler-ubu22-01                   1/1     Running   1          141m    192.168.100.30   ubu22-01   <none>           <none>
kube-system   nginx-proxy-ubu22-02                      1/1     Running   0          139m    192.168.100.40   ubu22-02   <none>           <none>
kube-system   nginx-proxy-ubu22-03                      1/1     Running   0          139m    192.168.100.50   ubu22-03   <none>           <none>
kube-system   nodelocaldns-fbcgk                        1/1     Running   0          138m    192.168.100.30   ubu22-01   <none>           <none>
kube-system   nodelocaldns-h2k2f                        1/1     Running   0          138m    192.168.100.40   ubu22-02   <none>           <none>
kube-system   nodelocaldns-mp7pk                        1/1     Running   0          138m    192.168.100.50   ubu22-03   <none>           <none>
# Show detailed information about the webserver pod we created
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ k describe pod webserver
Name:             webserver
Namespace:        default
Priority:         0
Service Account:  default
Node:             ubu22-03/192.168.100.50
Start Time:       Tue, 08 Aug 2023 15:28:02 +0900
Labels:           run=webserver
Annotations:      cni.projectcalico.org/containerID: b034cf976bddfa3b35dbf4e299d6c7bd468f2d6b79d81a9657a07b6d0f7b29ce
                  cni.projectcalico.org/podIP: 10.233.109.2/32
                  cni.projectcalico.org/podIPs: 10.233.109.2/32
Status:           Running
IP:               10.233.109.2
IPs:
  IP:  10.233.109.2
Containers:
  webserver:
    Container ID:   containerd://638465ab47c43204251c0ead6423cc71850b01c2a33f05c0547a47b414d68d69
    Image:          nginx
    Image ID:       docker.io/library/nginx@sha256:67f9a4f10d147a6e04629340e6493c9703300ca23a2f7f3aa56fe615d75d31ca
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 08 Aug 2023 15:28:16 +0900
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vfsvd (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-vfsvd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  3m23s  default-scheduler  Successfully assigned default/webserver to ubu22-03
  Normal  Pulling    3m23s  kubelet            Pulling image "nginx"
  Normal  Pulled     3m10s  kubelet            Successfully pulled image "nginx" in 13.095491868s (13.09552405s including waiting)
  Normal  Created    3m10s  kubelet            Created container webserver
  Normal  Started    3m10s  kubelet            Started container webserver
# Verify the nginx container running inside the webserver pod
# Open a shell inside the container
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ k exec -it webserver -- bash
# Note the changed prompt
root@webserver:/#
# Install procps so that ps is available
root@webserver:/# apt -y update && apt -y install procps
# List all processes in the container
root@webserver:/# ps -ef
UID        PID  PPID  C STIME TTY      TIME     CMD
root         1     0  0 06:28 ?        00:00:00 nginx: master process nginx -g daemon off;
nginx       28     1  0 06:28 ?        00:00:00 nginx: worker process
nginx       29     1  0 06:28 ?        00:00:00 nginx: worker process
root        30     0  0 06:32 pts/0    00:00:00 bash
root       219    30  0 06:33 pts/0    00:00:00 ps -ef
# The nginx master process runs as PID 1, i.e. nginx is the main process of this container
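Besides exec, the web server can also be checked from the control node; a minimal sketch, assuming you have left the container shell with exit and that curl is installed on ubu22-01:

# forward local port 8080 to port 80 of the webserver pod (runs in the foreground)
k port-forward pod/webserver 8080:80
# in another terminal, request the nginx welcome page
curl -s http://127.0.0.1:8080 | head -n 4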
# Delete the pod we created
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ k delete pod webserver
pod "webserver" deleted
# Confirm that it is gone
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ kgp
No resources found in default namespace.
1.2. Creating a pod from a YAML file
1) Previewing the YAML to be generated with --dry-run
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ k run webserver --image=nginx:1.20 --dry-run=client -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: webserver
  name: webserver
spec:
  containers:
  - image: nginx:1.20
    name: webserver
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
# Create the YAML file using output redirection
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ k run webserver --image=nginx:1.20 --dry-run=client -o yaml > Documents/pod_create.yml
# Check the generated YAML file
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ cat ./Documents/pod_create.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: webserver
  name: webserver
spec:
  containers:
  - image: nginx:1.20
    name: webserver
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
# Remove the unnecessary fields with vi
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ vi ./Documents/pod_create.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  containers:
  - image: nginx:1.20
    name: webserver
...
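Before applying the trimmed file, it can be validated without creating anything; a short sketch, assuming the same file path as above:

# client-side syntax check only
k apply -f ./Documents/pod_create.yml --dry-run=client
# server-side validation against the API server (requires cluster access)
k apply -f ./Documents/pod_create.yml --dry-run=server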
2) Creating the pod from the YAML file
# Check the current state
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ kgp
No resources found in default namespace.
# Apply the YAML file
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ k apply -f ./Documents/pod_create.yml
pod/webserver created
# Check the created pod
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ kgp
NAME        READY   STATUS              RESTARTS   AGE   IP       NODE       NOMINATED NODE   READINESS GATES
webserver   0/1     ContainerCreating   0          5s    <none>   ubu22-03   <none>           <none>
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ kgp
NAME        READY   STATUS    RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
webserver   1/1     Running   0          20s   10.233.109.3   ubu22-03   <none>           <none>
3) Deleting the pod defined in the YAML file
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ k delete -f ./Documents/pod_create.yml
pod "webserver" deleted
# Confirm that it is gone
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ kgp
No resources found in default namespace.
# The YAML file itself still remains
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ cat Documents/pod_create.yml
apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  containers:
  - image: nginx:1.20
    name: webserver
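Because the file fully describes the resource, it can be reused to recreate, query, and delete the same objects; a short sketch:

# recreate the pod from the file
k apply -f ./Documents/pod_create.yml
# list only the resources defined in that file
k get -f ./Documents/pod_create.yml
# remove them again
k delete -f ./Documents/pod_create.yml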
1.3. Practice problem
Create a namespace named angry.
Create a pod named korean-army in the angry namespace.
The container image is busybox and the container name is korean-army.
Define an environment variable named CERT; its value TEST-cert is printed on every iteration of a 10-second sleep loop.
# Create the namespace
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ k create ns angry
namespace/angry created
# Check the created namespace
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ k ns
default
angry
kube-node-lease
kube-public
kube-system
# Switch to the new namespace
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ k ns angry
Context "kubernetes-admin@cluster.local" modified.
Active namespace is "angry".
# Write the solution as YAML
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:angry)]$ vi Documents/busybox.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: korean-army
  namespace: angry
spec:
  containers:
  - image: busybox
    name: korean-army
    env:
    - name: CERT
      value: "TEST-cert"
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(CERT); sleep 10; done"]
...
# Apply the YAML
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:angry)]$ k apply -f Documents/busybox.yml
pod/korean-army created
# Check the created pod
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:angry)]$ kgp
NAME          READY   STATUS              RESTARTS   AGE   IP       NODE       NOMINATED NODE   READINESS GATES
korean-army   0/1     ContainerCreating   0          3s    <none>   ubu22-03   <none>           <none>
# Confirm the pod is running
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:angry)]$ kgp
NAME          READY   STATUS    RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
korean-army   1/1     Running   0          12s   10.233.109.4   ubu22-03   <none>           <none>
# Detailed information about the korean-army pod
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:angry)]$ k describe pod korean-army
Name:             korean-army
Namespace:        angry
Priority:         0
Service Account:  default
Node:             ubu22-03/192.168.100.50
Start Time:       Tue, 08 Aug 2023 15:54:15 +0900
Labels:           <none>
Annotations:      cni.projectcalico.org/containerID: 3ae0119f6e48583c84c9b8d315f54fcefb95eaef0d1ae351a1bf4ed114cb35f2
                  cni.projectcalico.org/podIP: 10.233.109.4/32
                  cni.projectcalico.org/podIPs: 10.233.109.4/32
Status:           Running
IP:               10.233.109.4
IPs:
  IP:  10.233.109.4
Containers:
  korean-army:
    Container ID:  containerd://04fde1a55cb78f92e883ea93d6f5bfd702da58d611c307e153b7a0fc0c7a7d97
    Image:         busybox
    Image ID:      docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
    Args:
      -c
      while true; do echo $(CERT); sleep 10; done
    State:          Running
      Started:      Tue, 08 Aug 2023 15:54:22 +0900
    Ready:          True
    Restart Count:  0
    Environment:
      CERT:  TEST-cert
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-24jr8 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-24jr8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  31s   default-scheduler  Successfully assigned angry/korean-army to ubu22-03
  Normal  Pulling    30s   kubelet            Pulling image "busybox"
  Normal  Pulled     24s   kubelet            Successfully pulled image "busybox" in 6.120731463s (6.120749376s including waiting)
  Normal  Created    24s   kubelet            Created container korean-army
  Normal  Started    24s   kubelet            Started container korean-army
# Check the logs
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:angry)]$ k logs korean-army
TEST-cert
TEST-cert
TEST-cert
TEST-cert
TEST-cert
TEST-cert
TEST-cert
TEST-cert
TEST-cert
TEST-cert
TEST-cert
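An equivalent manifest can also be generated imperatively, which is often faster under exam conditions; a sketch using standard kubectl run flags (the output file name is hypothetical):

# generate a similar manifest with kubectl run; the namespace is given at apply time with -n angry
k run korean-army --image=busybox --env=CERT=TEST-cert \
  --dry-run=client -o yaml \
  --command -- /bin/sh -c 'while true; do echo $CERT; sleep 10; done' \
  > Documents/busybox_gen.yml        # hypothetical file name
k apply -f Documents/busybox_gen.yml -n angry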
1.4. Creating a pod directly on a cluster node (static pod)
# Check that kubelet is running
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:angry)]$ systemctl status kubelet
● kubelet.service - Kubernetes Kubelet Server
     Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2023-08-08 13:12:32 KST; 3h 50min ago
       Docs: https://github.com/GoogleCloudPlatform/kubernetes
   Main PID: 34690 (kubelet)
      Tasks: 12 (limit: 9379)
     Memory: 45.7M
        CPU: 4min 47.091s
     CGroup: /system.slice/kubelet.service
             └─34690 /usr/local/bin/kubelet --v=2 --node-ip=192.168.100.30 --hostname-override=ubu22-01 --b>

 8월 08 16:59:33 ubu22-01 kubelet[34690]: I0808 16:59:33.858247   34690 kubelet_getters.go:182] "Pod status>
 8월 08 17:00:33 ubu22-01 kubelet[34690]: I0808 17:00:33.858499   34690 kubelet_getters.go:182] "Pod status>
 8월 08 17:00:33 ubu22-01 kubelet[34690]: I0808 17:00:33.859473   34690 kubelet_getters.go:182] "Pod status>
 8월 08 17:00:33 ubu22-01 kubelet[34690]: I0808 17:00:33.859590   34690 kubelet_getters.go:182] "Pod status>
 8월 08 17:01:33 ubu22-01 kubelet[34690]: I0808 17:01:33.860529   34690 kubelet_getters.go:182] "Pod status>
 8월 08 17:01:33 ubu22-01 kubelet[34690]: I0808 17:01:33.860606   34690 kubelet_getters.go:182] "Pod status>
 8월 08 17:01:33 ubu22-01 kubelet[34690]: I0808 17:01:33.860624   34690 kubelet_getters.go:182] "Pod status>
 8월 08 17:02:33 ubu22-01 kubelet[34690]: I0808 17:02:33.861138   34690 kubelet_getters.go:182] "Pod status>
 8월 08 17:02:33 ubu22-01 kubelet[34690]: I0808 17:02:33.861237   34690 kubelet_getters.go:182] "Pod status>
 8월 08 17:02:33 ubu22-01 kubelet[34690]: I0808 17:02:33.861259   34690 kubelet_getters.go:182] "Pod status>
# Connect to the worker node
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:angry)]$ ssh ubu22-03
# Check the kubelet configuration (note staticPodPath)
kevin@ubu22-03:~$ cat /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/ssl/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 169.254.25.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
# Copy the file to the worker node
# Check the YAML file to transfer
[kevin@ubu22-01 ~ (kubernetes-admin@cluster.local:angry)]$ cd Documents/
[kevin@ubu22-01 ~/Documents (kubernetes-admin@cluster.local:angry)]$ ll
total 16
drwxr-xr-x  2 kevin kevin 4096  8월  8 15:53 ./
drwxr-x--- 22 kevin kevin 4096  8월  8 17:03 ../
-rw-rw-r--  1 kevin kevin  273  8월  8 15:53 busybox.yml
-rw-rw-r--  1 kevin kevin  115  8월  8 15:41 pod_create.yml
[kevin@ubu22-01 ~/Documents (kubernetes-admin@cluster.local:angry)]$ cat pod_create.yml
apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  containers:
  - image: nginx:1.20
    name: webserver
# Writing under /etc requires sudo, so copy the file to /tmp first
[kevin@ubu22-01 ~/Documents (kubernetes-admin@cluster.local:angry)]$ scp pod_create.yml ubu22-03:/tmp
pod_create.yml
# Move the YAML file into the static pod directory over ssh
[kevin@ubu22-01 ~/Documents (kubernetes-admin@cluster.local:angry)]$ ssh ubu22-03 sudo mv /tmp/pod_create.yml /etc/kubernetes/manifests/
# Confirm that the pod defined in the transferred file is running
[kevin@ubu22-01 ~/Documents (kubernetes-admin@cluster.local:angry)]$ kgp -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE     IP               NODE       NOMINATED NODE   READINESS GATES
default       webserver-ubu22-03                        1/1     Running   0          12s     10.233.109.5     ubu22-03   <none>           <none>
angry         korean-army                               1/1     Running   0          73m     10.233.109.4     ubu22-03   <none>           <none>
kube-system   calico-kube-controllers-6dfcdfb99-s8rks   1/1     Running   0          3h56m   10.233.109.1     ubu22-03   <none>           <none>
kube-system   calico-node-87jt9                         1/1     Running   0          3h57m   192.168.100.50   ubu22-03   <none>           <none>
kube-system   calico-node-bv6t2                         1/1     Running   0          3h57m   192.168.100.30   ubu22-01   <none>           <none>
kube-system   calico-node-vlw8g                         1/1     Running   0          3h57m   192.168.100.40   ubu22-02   <none>           <none>
kube-system   coredns-645b46f4b6-c6dqn                  1/1     Running   0          3h56m   10.233.84.65     ubu22-01   <none>           <none>
kube-system   coredns-645b46f4b6-tl8td                  1/1     Running   0          3h56m   10.233.88.65     ubu22-02   <none>           <none>
kube-system   dns-autoscaler-659b8c48cb-tzllr           1/1     Running   0          3h56m   10.233.84.66     ubu22-01   <none>           <none>
kube-system   kube-apiserver-ubu22-01                   1/1     Running   1          3h59m   192.168.100.30   ubu22-01   <none>           <none>
kube-system   kube-controller-manager-ubu22-01          1/1     Running   2          3h59m   192.168.100.30   ubu22-01   <none>           <none>
kube-system   kube-proxy-4tcm9                          1/1     Running   0          3h57m   192.168.100.40   ubu22-02   <none>           <none>
kube-system   kube-proxy-bdbgn                          1/1     Running   0          3h57m   192.168.100.30   ubu22-01   <none>           <none>
kube-system   kube-proxy-ps42j                          1/1     Running   0          3h57m   192.168.100.50   ubu22-03   <none>           <none>
kube-system   kube-scheduler-ubu22-01                   1/1     Running   1          3h59m   192.168.100.30   ubu22-01   <none>           <none>
kube-system   nginx-proxy-ubu22-02                      1/1     Running   0          3h57m   192.168.100.40   ubu22-02   <none>           <none>
kube-system   nginx-proxy-ubu22-03                      1/1     Running   0          3h57m   192.168.100.50   ubu22-03   <none>           <none>
kube-system   nodelocaldns-fbcgk                        1/1     Running   0          3h56m   192.168.100.30   ubu22-01   <none>           <none>
kube-system   nodelocaldns-h2k2f                        1/1     Running   0          3h56m   192.168.100.40   ubu22-02   <none>           <none>
kube-system   nodelocaldns-mp7pk                        1/1     Running   0          3h56m   192.168.100.50   ubu22-03   <none>           <none>
# Remove the transferred file itself
[kevin@ubu22-01 ~/Documents (kubernetes-admin@cluster.local:angry)]$ ssh ubu22-03 sudo rm -rf /etc/kubernetes/manifests/pod_create.yml
# The pod that was running from the file disappears along with it
[kevin@ubu22-01 ~/Documents (kubernetes-admin@cluster.local:angry)]$ kgp -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE     IP               NODE       NOMINATED NODE   READINESS GATES
angry         korean-army                               1/1     Running   0          76m     10.233.109.4     ubu22-03   <none>           <none>
kube-system   calico-kube-controllers-6dfcdfb99-s8rks   1/1     Running   0          3h59m   10.233.109.1     ubu22-03   <none>           <none>
kube-system   calico-node-87jt9                         1/1     Running   0          4h      192.168.100.50   ubu22-03   <none>           <none>
kube-system   calico-node-bv6t2                         1/1     Running   0          4h      192.168.100.30   ubu22-01   <none>           <none>
kube-system   calico-node-vlw8g                         1/1     Running   0          4h      192.168.100.40   ubu22-02   <none>           <none>
kube-system   coredns-645b46f4b6-c6dqn                  1/1     Running   0          3h59m   10.233.84.65     ubu22-01   <none>           <none>
kube-system   coredns-645b46f4b6-tl8td                  1/1     Running   0          3h59m   10.233.88.65     ubu22-02   <none>           <none>
kube-system   dns-autoscaler-659b8c48cb-tzllr           1/1     Running   0          3h59m   10.233.84.66     ubu22-01   <none>           <none>
kube-system   kube-apiserver-ubu22-01                   1/1     Running   1          4h2m    192.168.100.30   ubu22-01   <none>           <none>
kube-system   kube-controller-manager-ubu22-01          1/1     Running   2          4h2m    192.168.100.30   ubu22-01   <none>           <none>
kube-system   kube-proxy-4tcm9                          1/1     Running   0          4h1m    192.168.100.40   ubu22-02   <none>           <none>
kube-system   kube-proxy-bdbgn                          1/1     Running   0          4h1m    192.168.100.30   ubu22-01   <none>           <none>
kube-system   kube-proxy-ps42j                          1/1     Running   0          4h1m    192.168.100.50   ubu22-03   <none>           <none>
kube-system   kube-scheduler-ubu22-01                   1/1     Running   1          4h2m    192.168.100.30   ubu22-01   <none>           <none>
kube-system   nginx-proxy-ubu22-02                      1/1     Running   0          4h      192.168.100.40   ubu22-02   <none>           <none>
kube-system   nginx-proxy-ubu22-03                      1/1     Running   0          4h      192.168.100.50   ubu22-03   <none>           <none>
kube-system   nodelocaldns-fbcgk                        1/1     Running   0          3h59m   192.168.100.30   ubu22-01   <none>           <none>
kube-system   nodelocaldns-h2k2f                        1/1     Running   0          3h59m   192.168.100.40   ubu22-02   <none>           <none>
kube-system   nodelocaldns-mp7pk                        1/1     Running   0          3h59m   192.168.100.50   ubu22-03   <none>           <none>
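A hedged side note on static pods: the pod created this way is named <pod>-<node> (here webserver-ubu22-03) and is managed by the kubelet, not by the API server. While the manifest file is still present on the node, deleting the pod through kubectl only removes its mirror object, which the kubelet recreates; a quick check (assumes the manifest has not yet been removed):

# delete the mirror pod through the API server
k delete pod webserver-ubu22-03
# list pods again: the kubelet recreates the mirror pod from /etc/kubernetes/manifests/pod_create.yml
kgp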
1.5. Installing the YAML export plugin kubectl-neat
# Run a small busybox pod (so we can inspect its YAML)
[asd@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ k run busybox --image=busybox
pod/busybox created
[asd@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ kgp
NAME                 READY   STATUS      RESTARTS      AGE   IP              NODE       NOMINATED NODE   READINESS GATES
busybox              0/1     Completed   0             18s   10.233.109.15   ubu22-03   <none>           <none>
webserver-ubu22-03   1/1     Running     1 (44m ago)   16h   10.233.109.8    ubu22-03   <none>           <none>
[asd@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ k get pod busybox -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/containerID: b6a79271ab777cf39728cc533e15ca084cd6513f2b333e6e0af56b14ca4c132b
    cni.projectcalico.org/podIP: 10.233.109.15/32
    cni.projectcalico.org/podIPs: 10.233.109.15/32
  creationTimestamp: "2023-08-09T00:56:14Z"
  labels:
    run: busybox
  name: busybox
  namespace: default
  resourceVersion: "44883"
  uid: d8f617e6-ef25-44df-b560-f3230275cbff
...
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2023-08-09T00:56:13Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2023-08-09T00:56:13Z"
    message: 'containers with unready status: [busybox]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2023-08-09T00:56:13Z"
    message: 'containers with unready status: [busybox]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2023-08-09T00:56:14Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - image: busybox
    imageID: ""
    lastState: {}
    name: busybox
    ready: false
    restartCount: 0
    started: false
    state:
      waiting:
        message: Back-off pulling image "busybox"
        reason: ImagePullBackOff
  hostIP: 192.168.100.130
  phase: Pending
  podIP: 10.233.109.15
  podIPs:
  - ip: 10.233.109.15
  qosClass: BestEffort
  startTime: "2023-08-09T00:56:13Z"
# Install kubectl-neat via krew
[admin1@ubu22-01 ~ (kubernetes-admin@cluster.local:default)]$ k krew install neat
Updated the local copy of plugin index.
Installing plugin: neat
Installed plugin: neat
\
 | Use this plugin:
 |      kubectl neat
 | Documentation:
 |      https://github.com/itaysk/kubectl-neat
/
[admin1@ubu22-01 Documents (kubernetes-admin@cluster.local:default)]$ k run httpd --image=httpd --dry-run=client -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: httpd
  name: httpd
spec:
  containers:
  - image: httpd
    name: httpd
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
# Piped through kubectl neat, the output is much more concise
[admin1@ubu22-01 Documents (kubernetes-admin@cluster.local:default)]$ k run httpd --image=httpd --dry-run=client -o yaml | k neat
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: httpd
  name: httpd
spec:
  containers:
  - image: httpd
    name: httpd
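The more common use of the plugin is exporting a clean manifest from a live object; a sketch using the busybox pod above (the output file name is arbitrary):

# strip cluster-managed fields (status, uid, resourceVersion, ...) from a live pod
k get pod busybox -o yaml | k neat > busybox_export.yml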
1.6. Using replicas
1) Creating the YAML file
[admin1@ubu22-01 Documents (kubernetes-admin@cluster.local:default)]$ vi dep_nginx.yml
---------------------------------------------------------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web01
spec:
  replicas: 1
  selector:
    matchLabels:
      app_env: ngi
  template:
    metadata:
      labels:
        app_env: ngi
    spec:
      containers:
      - image: nginx
        name: nginx
---------------------------------------------------------------------------
● apiVersion
Specifies the API version of the resource
● kind
Defines the kind of resource
● metadata
Data that uniquely identifies the object, including a name string, UID, and optional namespace
● spec
Defines the details of the deployment
● replicas
The number of replicas this Deployment should maintain
● selector
How the Deployment selects the pods it manages; the selector must agree with the template labels (see the sanity-check sketch after this list)
● matchLabels
Selects pods whose labels match the labels set in the template
● app_env
Matches pods whose app_env label value is ngi
● template
The template for the pods
● containers
The containers to run
● image
The container image to use
● name
The name of the container
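Because spec.selector.matchLabels must match template.metadata.labels, it can be worth double-checking them side by side; a sketch, once the Deployment has been applied in step 2 below:

# compare the selector with the pod-template labels of web01
k get deployment web01 -o jsonpath='{.spec.selector.matchLabels}{"\n"}{.spec.template.metadata.labels}{"\n"}'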
2) Applying the file
# Apply the dep_nginx.yml file created above
[admin1@ubu22-01 Documents (kubernetes-admin@cluster.local:default)]$ k apply -f dep_nginx.yml
deployment.apps/web01 created
# The web01 Deployment defined in the file creates a pod
[admin1@ubu22-01 Documents (kubernetes-admin@cluster.local:default)]$ kgp
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
web01-75fd994b65-2x6xj   1/1     Running   0          5s    10.233.88.83   ubu22-02   <none>           <none>
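A Deployment does not manage pods directly; it creates a ReplicaSet, whose hash appears in the pod name (web01-75fd994b65-...). A short sketch to see the chain:

# Deployment -> ReplicaSet -> Pod
k get deployment web01
k get replicaset -l app_env=ngi
k get pods -l app_env=ngi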
3) Increasing the number of replicas
# Change the replica count of web01 to 5
[admin1@ubu22-01 Documents (kubernetes-admin@cluster.local:default)]$ k scale deployment web01 --replicas=5
deployment.apps/web01 scaled
[admin1@ubu22-01 Documents (kubernetes-admin@cluster.local:default)]$ kgp
NAME                     READY   STATUS    RESTARTS   AGE     IP              NODE       NOMINATED NODE   READINESS GATES
web01-75fd994b65-2x6xj   1/1     Running   0          3m20s   10.233.88.83    ubu22-02   <none>           <none>
web01-75fd994b65-d9k25   1/1     Running   0          2s      10.233.109.23   ubu22-03   <none>           <none>
web01-75fd994b65-n9dks   1/1     Running   0          2s      10.233.88.85    ubu22-02   <none>           <none>
web01-75fd994b65-pzt89   1/1     Running   0          2s      10.233.88.84    ubu22-02   <none>           <none>
web01-75fd994b65-wj68p   1/1     Running   0          2s      10.233.109.22   ubu22-03   <none>           <none>
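kubectl scale changes only the live object; to keep dep_nginx.yml as the source of truth, the replica count can instead be edited in the file and re-applied. A sketch, assuming the file still contains the line "replicas: 1":

# declarative alternative: change replicas in the file, then re-apply
sed -i 's/replicas: 1/replicas: 5/' dep_nginx.yml   # or edit with vi
k apply -f dep_nginx.yml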
4) Decreasing the number of replicas
# Check the pods created in the default namespace
[admin1@ubu22-01 Documents (kubernetes-admin@cluster.local:default)]$ kgp
NAME                     READY   STATUS         RESTARTS   AGE     IP              NODE       NOMINATED NODE   READINESS GATES
web01-75fd994b65-2x6xj   0/1     ErrImagePull   0          3m20s   10.233.88.83    ubu22-02   <none>           <none>
web01-75fd994b65-d9k25   0/1     ErrImagePull   0          2s      10.233.109.23   ubu22-03   <none>           <none>
web01-75fd994b65-n9dks   0/1     ErrImagePull   0          2s      10.233.88.85    ubu22-02   <none>           <none>
web01-75fd994b65-pzt89   0/1     ErrImagePull   0          2s      10.233.88.84    ubu22-02   <none>           <none>
web01-75fd994b65-wj68p   0/1     ErrImagePull   0          2s      10.233.109.22   ubu22-03   <none>           <none>
# Change the replica count to 1
[admin1@ubu22-01 Documents (kubernetes-admin@cluster.local:default)]$ k scale deployment web01 --replicas=1
deployment.apps/web01 scaled
# Confirm the change (one pod remains)
[admin1@ubu22-01 Documents (kubernetes-admin@cluster.local:default)]$ kgp
NAME                     READY   STATUS    RESTARTS   AGE   IP              NODE       NOMINATED NODE   READINESS GATES
web01-75fd994b65-d9k25   1/1     Running   0          87s   10.233.109.23   ubu22-03   <none>           <none>
5) Changing the nginx version
[admin1@ubu22-01 Documents (kubernetes-admin@cluster.local:default)]$ kgp
NAME                     READY   STATUS    RESTARTS   AGE   IP              NODE       NOMINATED NODE   READINESS GATES
web01-75fd994b65-d9k25   1/1     Running   0          87s   10.233.109.23   ubu22-03   <none>           <none>
# Change the container image to nginx 1.12
[admin1@ubu22-01 Documents (kubernetes-admin@cluster.local:default)]$ k set image deployment web01 nginx=nginx:1.12
deployment.apps/web01 image updated
# The image change replaces the pod: a new pod with a new ReplicaSet hash starts while the old one terminates
[admin1@ubu22-01 Documents (kubernetes-admin@cluster.local:default)]$ kgp
NAME                     READY   STATUS    RESTARTS   AGE     IP              NODE       NOMINATED NODE   READINESS GATES
web01-6b49f4984f-j9bs4   1/1     Running   0          5s      10.233.88.86    ubu22-02   <none>           <none>
web01-75fd994b65-d9k25   0/1     Running   0          8m26s   10.233.109.23   ubu22-03   <none>           <none>
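The image change above is a rollout, so it can be tracked and reverted with the rollout subcommands; a short sketch:

# watch the rolling update until it completes
k rollout status deployment web01
# show the revision history, and roll back to the previous image if needed
k rollout history deployment web01
k rollout undo deployment web01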