1. Kubernetes Installation
1.1. Common Node Configuration
● Install three Ubuntu machines
● Configure the ubu22-01 node first, then clone it and adjust the per-node settings (hostname and static IP; see the network sketch after the common setup block below)
# Configure passwordless sudo
vmadmin@admin-virtual-machine:~/Desktop$ sudo visudo
---
# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) NOPASSWD: ALL
...
:wq

# Remove nano
vmadmin@admin-virtual-machine:~/Desktop$ sudo apt remove nano

# Install neovim
vmadmin@admin-virtual-machine:~/Desktop$ sudo apt install neovim

# Configure the hosts file
vmadmin@admin-virtual-machine:~/Desktop$ sudo vi /etc/hosts
---
# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

192.168.100.30 ubu22-01
192.168.100.40 ubu22-02
192.168.100.50 ubu22-03
...
:wq

# apt update
vmadmin@admin-virtual-machine:~/Desktop$ sudo apt update -y

# apt upgrade
vmadmin@admin-virtual-machine:~/Desktop$ sudo apt upgrade -y

# Set the hostname
vmadmin@admin-virtual-machine:~/Desktop$ sudo hostnamectl set-hostname ubu22-01

# Disable the firewall
# On Ubuntu the firewall is ufw
admin@ubu22-01:~$ sudo systemctl disable --now ufw
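● Per-node network settings after cloning (referenced above). Each clone needs its own static IP to match the /etc/hosts entries (192.168.100.30/40/50). The original does not show this step, so the block below is only a minimal sketch assuming netplan is used; the file name 01-static.yaml, the interface name ens33, the gateway 192.168.100.1, and the DNS server are assumptions that must be adjusted to the actual environment.

# Hypothetical netplan file; check the real interface name first with `ip link`
vmadmin@ubu22-02:~$ sudo vi /etc/netplan/01-static.yaml
---
network:
  version: 2
  ethernets:
    ens33:                            # assumed interface name
      dhcp4: false
      addresses: [192.168.100.40/24]  # .30 / .40 / .50 depending on the node
      routes:
        - to: default
          via: 192.168.100.1          # assumed gateway
      nameservers:
        addresses: [8.8.8.8]          # assumed DNS server
---
:wq

# Apply the configuration
vmadmin@ubu22-02:~$ sudo netplan apply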
1.2. Installing the Kubernetes Cluster from a Single Node (ubu22-01)
# Generate a key pair
vmadmin@ubu22-01:~$ ssh-keygen -t ed25519
Generating public/private ed25519 key pair.
Enter file in which to save the key (/home/vmadmin/.ssh/id_ed25519):
Created directory '/home/vmadmin/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/vmadmin/.ssh/id_ed25519
Your public key has been saved in /home/vmadmin/.ssh/id_ed25519.pub
The key fingerprint is:
SHA256:r4DAzdVvbN9HYc2DuZmNvKaiiBZAMSrRknITd751ys8 vmadmin@ubu22-01
The key's randomart image is:
+--[ED25519 256]--+
|.=o.. .          |
|=o=. o .      o..|
|=o . o o . o  +o |
|.o o . + = . B o |
| + o . S = * ..  |
| o .  * .  ...   |
| o .  E  .o. .   |
| ..  ..  o o .   |
| ..  . .o ..     |
+----[SHA256]-----+

# Copy the key
vmadmin@ubu22-01:~$ ssh-copy-id ubu22-01
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
# openssh-server is not installed on Ubuntu by default.
/usr/bin/ssh-copy-id: ERROR: ssh: connect to host ubu22-01 port 22: Connection refused

# Install openssh-server
vmadmin@ubu22-01:~$ sudo apt install openssh-server

# Exchange keys with localhost
vmadmin@ubu22-01:~$ ssh-copy-id ubu22-01

# openssh-server must be installed on the other servers as well.
vmadmin@ubu22-02:~$ sudo apt install openssh-server -y
vmadmin@ubu22-03:~$ sudo apt install openssh-server -y

# Exchange keys with the other nodes
vmadmin@ubu22-01:~$ ssh-copy-id ubu22-02
vmadmin@ubu22-01:~$ ssh-copy-id ubu22-03
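● Optional check (not part of the original steps): confirm that passwordless SSH now works from ubu22-01 to every node, including itself, since Ansible will rely on it. Each command below should print the remote hostname without asking for a password.

# Should print ubu22-01, ubu22-02, ubu22-03 without any password prompt
vmadmin@ubu22-01:~$ for h in ubu22-01 ubu22-02 ubu22-03; do ssh "$h" hostname; done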
● kubespray 설치
# Move to the sudo user's home directory.
vmadmin@ubu22-01:~$ cd ~
vmadmin@ubu22-01:~$ pwd
/home/admin1

# Install git
vmadmin@ubu22-01:~$ sudo apt install git -y

# Clone kubespray with git
vmadmin@ubu22-01:~$ git clone https://github.com/kubernetes-sigs/kubespray.git
Cloning into 'kubespray'...
remote: Enumerating objects: 70127, done.
remote: Counting objects: 100% (771/771), done.
remote: Compressing objects: 100% (508/508), done.
remote: Total 70127 (delta 188), reused 576 (delta 183), pack-reused 69356
Receiving objects: 100% (70127/70127), 22.14 MiB | 17.31 MiB/s, done.
Resolving deltas: 100% (39262/39262), done.

# Confirm the kubespray directory was created
vmadmin@ubu22-01:~$ ls
Desktop    Downloads  Music       Pictures  snap       Videos
Documents  kubespray  NOPASSWORD  Public    Templates

# Move into kubespray
vmadmin@ubu22-01:~$ cd kubespray/

# Install pip
vmadmin@ubu22-01:~/kubespray$ sudo apt install -y python3-pip
vmadmin@ubu22-01:~/kubespray$ pip3 --version
pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)

# Use pip to install every package listed in requirements.txt
vmadmin@ubu22-01:~/kubespray$ sudo pip3 install -r requirements.txt
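● Optional check (not part of the original steps): requirements.txt installs Ansible itself, so the playbook tooling should now be on the PATH. Exact versions depend on the kubespray commit that was cloned.

# Confirm the Ansible tooling pulled in by requirements.txt
vmadmin@ubu22-01:~/kubespray$ ansible --version
vmadmin@ubu22-01:~/kubespray$ ansible-playbook --version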
● Configure the inventory file
# Copy the sample directory and use it as the cluster inventory
vmadmin@ubu22-01:~/kubespray$ cp -rfp inventory/sample inventory/mycluster
vmadmin@ubu22-01:~/kubespray$ cd inventory/mycluster/
vmadmin@ubu22-01:~/kubespray/inventory/mycluster$ ll
total 20
drwxrwxr-x 4 admin1 admin1 4096 Aug  8 12:04 ./
drwxrwxr-x 5 admin1 admin1 4096 Aug  8 12:30 ../
drwxrwxr-x 4 admin1 admin1 4096 Aug  8 12:04 group_vars/
-rw-rw-r-- 1 admin1 admin1 1028 Aug  8 12:04 inventory.ini
drwxrwxr-x 2 admin1 admin1 4096 Aug  8 12:04 patches/

# Edit the inventory file, which describes the nodes the cluster will be installed on.
vmadmin@ubu22-01:~/kubespray/inventory/mycluster$ vi inventory.ini
--------------------------------------------------------------------------
# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
[all]
# node1 ansible_host=95.54.0.12  # ip=10.3.0.1 etcd_member_name=etcd1
# node2 ansible_host=95.54.0.13  # ip=10.3.0.2 etcd_member_name=etcd2
# node3 ansible_host=95.54.0.14  # ip=10.3.0.3 etcd_member_name=etcd3
# node4 ansible_host=95.54.0.15  # ip=10.3.0.4 etcd_member_name=etcd4
# node5 ansible_host=95.54.0.16  # ip=10.3.0.5 etcd_member_name=etcd5
# node6 ansible_host=95.54.0.17  # ip=10.3.0.6 etcd_member_name=etcd6
ubu22-01 ansible_host=192.168.100.30 ip=192.168.100.30 etcd_member_name=etcd1
ubu22-02 ansible_host=192.168.100.40 ip=192.168.100.40
ubu22-03 ansible_host=192.168.100.50 ip=192.168.100.50

# ## configure a bastion host if your nodes are not directly reachable
# [bastion]
# bastion ansible_host=x.x.x.x ansible_user=some_user

# control plane node(s)
[kube_control_plane]
# node1
# node2
# node3
ubu22-01

[etcd]
# node1
# node2
# node3
ubu22-01

# worker nodes
[kube_node]
# node2
# node3
# node4
# node5
# node6
ubu22-02
ubu22-03
:wq
---------------------------------------------------------------------------
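● Optional connectivity check (not part of the original steps): Ansible's ad-hoc ping module confirms that every host in the inventory is reachable over SSH and that privilege escalation (-b) works.

# Run against the edited inventory; all three hosts should answer with "pong"
vmadmin@ubu22-01:~/kubespray/inventory/mycluster$ ansible -i inventory.ini all -m ping -b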
● Configure the cluster settings file
# Edit the file that controls how the Kubernetes cluster is installed and configured.
vmadmin@ubu22-01:~/kubespray/inventory/mycluster$ vi group_vars/k8s_cluster/k8s-cluster.yml
---------------------------------------------------------------------------
...
# must be set to true for MetalLB to work
#kube_proxy_strict_arp: false
# enable strict ARP so MetalLB can be used as the load balancer
kube_proxy_strict_arp: true

# audit log for kubernetes
#kubernetes_audit: false
# enable cluster audit logging
kubernetes_audit: true
...
---------------------------------------------------------------------------
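● Optional check (not part of the original steps): grep confirms that both overrides were saved.

# The uncommented lines should read kube_proxy_strict_arp: true and kubernetes_audit: true
vmadmin@ubu22-01:~/kubespray/inventory/mycluster$ grep -nE 'kube_proxy_strict_arp|kubernetes_audit' group_vars/k8s_cluster/k8s-cluster.yml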
● Run the playbook
# Move back to the kubespray root
vmadmin@ubu22-01:~/kubespray/inventory/mycluster$ cd ~/kubespray

# Run the playbook
vmadmin@ubu22-01:~/kubespray$ ansible-playbook -i inventory/mycluster/inventory.ini -b -v cluster.yml

# Installation summary
===============================================================================
network_plugin/calico : Calico | Create ipamconfig resources ---------- 176.46s
kubernetes/kubeadm : Join to cluster ----------------------------------- 33.01s
kubernetes/preinstall : Install packages requirements ------------------ 32.71s
container-engine/containerd : Download_file | Download item ------------ 22.02s
container-engine/crictl : Download_file | Download item ---------------- 15.50s
download : Download_container | Download image if required ------------- 14.67s
container-engine/nerdctl : Download_file | Download item --------------- 14.28s
container-engine/runc : Download_file | Download item ------------------ 14.28s
kubernetes/preinstall : Update package management cache (APT) ---------- 13.49s
download : Download_file | Download item ------------------------------- 13.34s
download : Download_file | Download item ------------------------------- 12.54s
download : Download_file | Download item ------------------------------- 12.19s
kubernetes/control-plane : Kubeadm | Initialize first master ----------- 12.02s
download : Download_container | Download image if required ------------- 11.84s
download : Download_container | Download image if required ------------- 11.83s
download : Download_container | Download image if required ------------- 11.09s
download : Download_container | Download image if required ------------- 10.46s
container-engine/crictl : Extract_file | Unpacking archive ------------- 10.35s
download : Download_container | Download image if required -------------- 9.69s
container-engine/nerdctl : Extract_file | Unpacking archive ------------- 9.63s

# Switch to root
vmadmin@ubu22-01:~$ sudo bash

# Check the state and details of the cluster nodes
root@ubu22-01:~# kubectl get nodes -o wide
NAME       STATUS   ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
ubu22-01   Ready    control-plane   88m   v1.26.7   192.168.100.30   <none>        Ubuntu 22.04.3 LTS   6.2.0-26-generic   containerd://1.7.2
ubu22-02   Ready    <none>          87m   v1.26.7   192.168.100.40   <none>        Ubuntu 22.04.3 LTS   6.2.0-26-generic   containerd://1.7.2
ubu22-03   Ready    <none>          86m   v1.26.7   192.168.100.50   <none>        Ubuntu 22.04.3 LTS   6.2.0-26-generic   containerd://1.7.2
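● Convenience sketch (not part of the original steps): instead of switching to root with sudo bash for every kubectl call, the admin kubeconfig can be copied to the regular user, following the usual kubeadm convention; kubespray places it at /etc/kubernetes/admin.conf on the control-plane node.

# Copy the admin kubeconfig to the regular user (standard kubeadm convention)
vmadmin@ubu22-01:~$ mkdir -p $HOME/.kube
vmadmin@ubu22-01:~$ sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
vmadmin@ubu22-01:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

# kubectl now works without sudo; all kube-system pods should be Running
vmadmin@ubu22-01:~$ kubectl get nodes
vmadmin@ubu22-01:~$ kubectl get pods -n kube-system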