Goal: install a Kubernetes cluster
Prepare two Ubuntu VMs.
One will be used as the master node, the other as a worker node.
1. Change the hostnames
On the master node:
sudo hostnamectl set-hostname "master-node"
exec bash
On the worker node:
sudo hostnamectl set-hostname "worker-node1"
exec bash
2. Update the hosts file
Add the following entries to /etc/hosts on every node:
192.168.0.27 master-node
192.168.0.62 worker-node1
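The two entries above can be appended on each node in one step; a small sketch using tee (the IPs are the ones from this setup):

```shell
# Append the cluster name-resolution entries to /etc/hosts (run on every node)
cat <<EOF | sudo tee -a /etc/hosts
192.168.0.27 master-node
192.168.0.62 worker-node1
EOF
```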
3. Enable IPv4 forwarding and bridged traffic on all nodes
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.default.accept_source_route = 0
sysctl: setting key "net.ipv4.conf.all.accept_source_route": Invalid argument
net.ipv4.conf.default.promote_secondaries = 1
sysctl: setting key "net.ipv4.conf.all.promote_secondaries": Invalid argument
net.ipv4.ping_group_range = 0 2147483647
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
fs.protected_regular = 1
fs.protected_fifos = 1
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/98-nhncloud.conf ...
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.tcp_syncookies = 1
* Applying /etc/sysctl.d/99-cloudimg-ipv6.conf ...
net.ipv6.conf.all.use_tempaddr = 0
net.ipv6.conf.default.use_tempaddr = 0
* Applying /usr/lib/sysctl.d/99-protect-links.conf ...
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
* Applying /etc/sysctl.conf ...
4. Install kubelet, kubeadm, and kubectl on all nodes
sudo apt-get update
sudo mkdir -p -m 755 /etc/apt/keyrings
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt install -y kubelet=1.30.2-1.1 kubeadm=1.30.2-1.1 kubectl=1.30.2-1.1
sudo apt-mark hold kubelet kubeadm kubectl
5. Install Docker on all nodes (the docker.io package also installs containerd, which kubelet uses as its container runtime)
sudo apt install -y docker.io
sudo mkdir -p /etc/containerd
sudo sh -c "containerd config default > /etc/containerd/config.toml"
After running these commands, edit config.toml: find the entry that sets "SystemdCgroup" to false and change the value to true. This matters because kubelet and the container runtime must agree on the cgroup driver, and on Ubuntu that driver should be systemd.
sudo sed -i 's/ SystemdCgroup = false/ SystemdCgroup = true/' /etc/containerd/config.toml
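If you want to sanity-check the substitution before touching the real file, the same sed can be tried on a scratch copy of the relevant snippet first (a sketch; /tmp/containerd-sample.toml is just a throwaway file):

```shell
# Scratch copy of the relevant config.toml snippet
cat > /tmp/containerd-sample.toml <<'EOF'
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = false
EOF

# Same substitution as above, applied to the scratch copy
sed -i 's/ SystemdCgroup = false/ SystemdCgroup = true/' /tmp/containerd-sample.toml

# Should now report: SystemdCgroup = true
grep 'SystemdCgroup' /tmp/containerd-sample.toml
```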
sudo systemctl restart containerd.service
sudo systemctl restart kubelet.service
sudo systemctl enable kubelet.service
6. Initialize the Kubernetes cluster (kubeadm) on the master node
sudo kubeadm config images pull
sudo kubeadm init --pod-network-cidr=10.10.0.0/16
The --pod-network-cidr flag sets the IP address range for the pod network.
When kubeadm init finishes on the master node, it prints a join token in its output. Save it; it is used later when adding worker nodes.
kubeadm join 192.168.0.27:6443 --token dm713u.ex7fygm4e5eljx0o \
--discovery-token-ca-cert-hash sha256:00157592afb4a7c67513588c85d380310580c37e86f00905a93dda1bbcb1635f
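Two notes on this join command (general kubeadm behavior, not specific to this setup): if the token expires (the default lifetime is 24 hours), a fresh join command can be printed on the master with `sudo kubeadm token create --print-join-command`, and the --discovery-token-ca-cert-hash value is simply the sha256 of the cluster CA's public key. The sketch below runs that openssl pipeline against a throwaway self-signed certificate so it is runnable anywhere; on the master you would point it at /etc/kubernetes/pki/ca.crt instead:

```shell
# Demo CA so the pipeline is runnable anywhere; on a real master, skip this
# step and use /etc/kubernetes/pki/ca.crt as the input below.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -subj "/CN=kubernetes" -days 365 2>/dev/null

# sha256 of the CA public key: the value after "sha256:" in the join command
openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print "sha256:" $NF}'
```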
# Configure kubectl locally
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
7. Configure kubectl and Calico
Calico Operator
The Calico Operator is a Kubernetes operator that manages Calico networking, network policy, and security features in a cluster.
An operator manages Kubernetes custom resources and automates deploying, updating, and backing up applications and services according to their custom resource definitions.
Calico is one of the open-source solutions for providing effective network policy in container orchestration systems. It uses IP-based networking for communication between containers and for network security. The Calico Operator makes it easy to deploy and configure Calico on a Kubernetes cluster.
With the operator, Calico network policies are easy to set up and maintain, and policy changes or updates can be rolled out simply, which supports flexible and secure container networking in the cluster.
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml -O
Download the Calico custom resources file, which contains the definitions for the various resources Calico will use.
sed -i 's/cidr: 192\.168\.0\.0\/16/cidr: 10.10.0.0\/16/g' custom-resources.yaml
Change the CIDR in the custom resources to match the pod network: the sed command replaces Calico's default CIDR with the value passed to kubeadm init (--pod-network-cidr=10.10.0.0/16).
kubectl create -f custom-resources.yaml
Create the resources.
8. Add the worker node to the cluster
sudo kubeadm join <MASTER_NODE_IP>:<API_SERVER_PORT> --token <TOKEN> --discovery-token-ca-cert-hash <CERTIFICATE_HASH>
Join using the command recorded in the kubeadm init output from step 6.
sudo kubeadm join 192.168.0.27:6443 --token dm713u.ex7fygm4e5eljx0o \
--discovery-token-ca-cert-hash sha256:00157592afb4a7c67513588c85d380310580c37e86f00905a93dda1bbcb1635f
9. Verify and test the cluster
kubectl get nodes
root@master-node:/home/ubuntu# kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
master-node    Ready    control-plane   27m   v1.26.5
worker-node1   Ready    <none>          14m   v1.26.5
kubectl get po -A
NAMESPACE          NAME                                      READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-7b4877cddc-7cwzw         1/1     Running   0          25m
calico-apiserver   calico-apiserver-7b4877cddc-rwpcq         1/1     Running   0          25m
calico-system      calico-kube-controllers-ffb8f84b5-hthwt   1/1     Running   0          26m
calico-system      calico-node-6qnwd                         1/1     Running   0          14m
calico-system      calico-node-rr2dv                         1/1     Running   0          26m
calico-system      calico-typha-7986567b76-nwswt             1/1     Running   0          26m
calico-system      csi-node-driver-2lh4h                     2/2     Running   0          14m
calico-system      csi-node-driver-gqxnn                     2/2     Running   0          26m
kube-system        coredns-787d4945fb-dvd6h                  1/1     Running   0          27m
kube-system        coredns-787d4945fb-smzwr                  1/1     Running   0          27m
kube-system        etcd-master-node                          1/1     Running   0          27m
kube-system        kube-apiserver-master-node                1/1     Running   0          27m
kube-system        kube-controller-manager-master-node       1/1     Running   0          27m
kube-system        kube-proxy-2ggv4                          1/1     Running   0          14m
kube-system        kube-proxy-pjlsj                          1/1     Running   0          27m
kube-system        kube-scheduler-master-node                1/1     Running   0          27m
tigera-operator    tigera-operator-78d7857c44-8qsck          1/1     Running   0          27m
Running kubectl from outside the cluster
https://coffeewhale.com/kubernetes/authentication/x509/2020/05/02/auth01/
https://jmholly.tistory.com/entry/vm%EC%82%AC%EC%9A%A9%EC%8B%9C-master-node%EA%B0%80-%EB%8B%A4%EB%A5%B8-%EC%84%9C%EB%B2%84%EC%9D%98-private-ip%EC%9D%BC-%EB%95%8C-worker-node-%EC%B6%94%EA%B0%80%ED%95%98%EB%8A%94-%EB%B2%95-kubeadm-join-%EC%95%88%EB%90%A0%EB%95%8C-%EC%97%90%EB%9F%AC%ED%95%B4%EA%B2%B0
The error:
A cluster was created with kubeadm.
The master node was assigned a private IP.
The kubeconfig file was copied to an external MacBook.
The server address in the kubeconfig (the master node's private IP) was changed to the master node's floating IP.
Running a kubectl command then fails:
Unable to connect to the server: x509: certificate is valid for x.x.x.x, y.y.y.y, not z.z.z.z
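The server-address rewrite described in the steps above can be done with sed. A sketch on a sample kubeconfig snippet; on the external machine you would run the same sed against $HOME/.kube/config, and 133.186.244.143 is the floating IP used later in this post:

```shell
# Sample of the relevant kubeconfig lines (scratch file for demonstration)
cat > /tmp/kubeconfig-sample <<'EOF'
clusters:
- cluster:
    server: https://192.168.0.27:6443
  name: kubernetes
EOF

# Swap the private IP for the floating IP (keeps a .bak backup of the file)
sed -i.bak 's#https://192.168.0.27:6443#https://133.186.244.143:6443#' /tmp/kubeconfig-sample

grep 'server:' /tmp/kubeconfig-sample
```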
If this situation had been anticipated at install time, the --apiserver-cert-extra-sans option could have been passed to kubeadm init (see the install docs).
kubectl get configmap kubeadm-config -n kube-system -o yaml
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      extraArgs:
        authorization-mode: Node,RBAC
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta3
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManager: {}
    dns: {}
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: registry.k8s.io
    kind: ClusterConfiguration
    kubernetesVersion: v1.26.11
    networking:
      dnsDomain: cluster.local
      podSubnet: 10.10.0.0/16
      serviceSubnet: 10.96.0.0/12
    scheduler: {}
kind: ConfigMap
metadata:
  creationTimestamp: "2023-12-11T06:05:22Z"
  name: kubeadm-config
  namespace: kube-system
  resourceVersion: "200"
mkdir /etc/kubernetes/pki/apiserver_231211
cp -rp /etc/kubernetes/pki/apiserver.* /etc/kubernetes/pki/apiserver_231211
kubectl get configmap kubeadm-config -n kube-system -o jsonpath='{.data.ClusterConfiguration}' > kubeadm-conf.yaml
Save just the ClusterConfiguration section to a separate file.
Add certSANs under apiServer in kubeadm-conf.yaml (lines marked # added):
  certSANs:      # added
  - <public IP>  # added
apiServer:
  certSANs:
  - 133.186.244.143
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: v1.26.11
networking:
  dnsDomain: cluster.local
  podSubnet: 10.10.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
sudo rm -f /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key
Delete the old certificate and key first; kubeadm will not regenerate them if the files already exist. After regenerating, restart the kube-apiserver (for example, by briefly moving its static pod manifest out of /etc/kubernetes/manifests and back) so that it serves the new certificate.
sudo kubeadm init phase certs apiserver --config kubeadm-conf.yaml
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text
DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:master-node, IP Address:10.96.0.1, IP Address:192.168.0.27, IP Address:133.186.244.143
Certificate details
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 929766089756615767 (0xce731552025a457)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes
        Validity
            Not Before: Dec 11 06:05:13 2023 GMT
            Not After : Dec 10 06:05:13 2024 GMT
        Subject: CN = kube-apiserver
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:c4:bf:25:19:e3:b2:d9:a1:37:d2:bc:4a:35:5f:
                    e3:77:09:95:5e:24:10:10:12:6b:01:a8:e5:a1:78:
                    6e:03:d8:7d:f1:8c:07:db:4b:1c:42:28:73:aa:b6:
                    8b:da:8a:2a:ad:d7:f2:c4:ce:21:27:47:cb:b2:26:
                    40:5e:bb:49:ea:44:75:d4:44:7f:d2:ef:b9:5b:5e:
                    8e:67:de:e5:af:9e:5c:83:83:19:71:19:72:63:4c:
                    cd:f8:ab:f3:ab:e8:bd:47:97:02:21:3a:c7:1b:64:
                    59:20:42:e2:da:f7:61:55:70:ab:70:65:d4:c1:43:
                    42:d7:b6:92:3d:34:15:14:38:8f:00:59:2e:49:9a:
                    9b:bc:db:8b:a4:d5:9c:cb:0d:b5:56:5b:f8:f0:f5:
                    9d:3a:cf:8d:72:92:47:a1:29:39:c7:88:d4:74:37:
                    49:82:de:2e:1f:39:2f:34:ed:cc:82:49:bd:4d:3e:
                    df:f3:ce:4f:5a:20:5b:ae:2d:0e:df:2c:c2:3c:9e:
                    31:11:8b:86:dc:88:fd:ee:40:11:54:f8:ce:27:ae:
                    34:a6:7f:54:a3:fb:93:dc:2b:23:aa:16:75:21:ec:
                    11:64:97:47:d8:27:5e:75:28:53:77:a2:7a:88:9b:
                    7e:f3:81:31:3b:8f:d3:20:da:95:5d:a9:1c:12:28:
                    89:af
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Authority Key Identifier:
                C7:7B:02:9D:5D:1F:B6:85:9C:9B:DE:67:36:18:D0:5A:98:AB:D5:23
            X509v3 Subject Alternative Name:
                DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:master-node, IP Address:10.96.0.1, IP Address:192.168.0.27
    Signature Algorithm: sha256WithRSAEncryption
    Signature Value:
        15:05:f2:27:6a:53:a4:96:ac:96:6d:04:c8:b4:c5:7e:32:1e:
        a4:bb:8b:2e:3d:31:59:b0:79:62:9c:fe:d5:32:31:3e:c8:36:
        1c:b8:45:9a:6e:f2:76:52:d1:e2:6a:e9:1f:1d:10:d6:12:e5:
        b7:ed:69:a3:b3:6c:7d:95:d5:19:aa:73:cc:60:25:29:ca:c9:
        21:45:3f:9d:00:21:70:4f:12:bb:bf:87:65:e0:4c:bc:7c:81:
        c7:1a:fe:50:73:a3:f2:c7:d7:2e:b6:a8:e0:4a:81:95:dc:bb:
        c5:da:9c:e3:05:bf:53:53:a3:c1:c5:4c:f3:36:ed:fb:e9:cc:
        68:30:0f:bb:eb:0e:d1:da:01:ef:42:b4:5a:1e:90:dd:40:3b:
        b3:ac:59:fe:52:61:19:7c:ff:81:2f:6c:6f:8f:6a:c2:90:16:
        e5:a3:cb:d5:4b:f4:2b:c7:d8:4e:77:f6:31:ba:8e:3f:77:c3:
        1d:d8:93:19:a1:6a:29:9b:3c:9e:58:8c:b3:a2:29:7e:52:61:
        27:ed:50:94:ec:20:10:92:ca:4c:0e:d8:1c:f2:a6:ac:96:37:
        20:8a:78:2d:c9:37:99:58:1a:04:54:17:2e:22:6e:88:35:86:
        19:f8:b0:9e:6b:0e:0d:8a:02:ea:12:89:55:64:e8:ad:63:e1:
        32:a7:4f:88
-----BEGIN CERTIFICATE-----
MIIDizCCAnOgAwIBAgIIDOcxVSAlpFcwDQYJKoZIhvcNAQELBQAwFTETMBEGA1UE
AxMKa3ViZXJuZXRlczAeFw0yMzEyMTEwNjA1MTNaFw0yNDEyMTAwNjA1MTNaMBkx
FzAVBgNVBAMTDmt1YmUtYXBpc2VydmVyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAxL8lGeOy2aE30rxKNV/jdwmVXiQQEBJrAajloXhuA9h98YwH20sc
QihzqraL2ooqrdfyxM4hJ0fLsiZAXrtJ6kR11ER/0u+5W16OZ97lr55cg4MZcRly
Y0zN+Kvzq+i9R5cCITrHG2RZIELi2vdhVXCrcGXUwUNC17aSPTQVFDiPAFkuSZqb
vNuLpNWcyw21Vlv48PWdOs+NcpJHoSk5x4jUdDdJgt4uHzkvNO3Mgkm9TT7f885P
WiBbri0O3yzCPJ4xEYuG3Ij97kARVPjOJ640pn9Uo/uT3CsjqhZ1IewRZJdH2Cde
dShTd6J6iJt+84ExO4/TINqVXakcEiiJrwIDAQABo4HaMIHXMA4GA1UdDwEB/wQE
AwIFoDATBgNVHSUEDDAKBggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMB8GA1UdIwQY
MBaAFMd7Ap1dH7aFnJveZzYY0FqYq9UjMIGABgNVHREEeTB3ggprdWJlcm5ldGVz
ghJrdWJlcm5ldGVzLmRlZmF1bHSCFmt1YmVybmV0ZXMuZGVmYXVsdC5zdmOCJGt1
YmVybmV0ZXMuZGVmYXVsdC5zdmMuY2x1c3Rlci5sb2NhbIILbWFzdGVyLW5vZGWH
BApgAAGHBMCoABswDQYJKoZIhvcNAQELBQADggEBABUF8idqU6SWrJZtBMi0xX4y
HqS7iy49MVmweWKc/tUyMT7INhy4RZpu8nZS0eJq6R8dENYS5bftaaOzbH2V1Rmq
c8xgJSnKySFFP50AIXBPEru/h2XgTLx8gcca/lBzo/LH1y62qOBKgZXcu8XanOMF
v1NTo8HFTPM27fvpzGgwD7vrDtHaAe9CtFoekN1AO7OsWf5SYRl8/4EvbG+PasKQ
FuWjy9VL9CvH2E539jG6jj93wx3YkxmhaimbPJ5YjLOiKX5SYSftUJTsIBCSykwO
2BzypqyWNyCKeC3JN5lYGgRUFy4ibog1hhn4sJ5rDg2KAuoSiVVk6K1j4TKnT4g=
-----END CERTIFICATE-----
kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Dec 10, 2024 06:05 UTC   364d            ca                      no
apiserver                  Dec 10, 2024 06:05 UTC   364d            ca                      no
apiserver-etcd-client      Dec 10, 2024 06:05 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Dec 10, 2024 06:05 UTC   364d            ca                      no
controller-manager.conf    Dec 10, 2024 06:05 UTC   364d            ca                      no
etcd-healthcheck-client    Dec 10, 2024 06:05 UTC   364d            etcd-ca                 no
etcd-peer                  Dec 10, 2024 06:05 UTC   364d            etcd-ca                 no
etcd-server                Dec 10, 2024 06:05 UTC   364d            etcd-ca                 no
front-proxy-client         Dec 10, 2024 06:05 UTC   364d            front-proxy-ca          no
scheduler.conf             Dec 10, 2024 06:05 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Dec 08, 2033 06:05 UTC   9y              no
etcd-ca                 Dec 08, 2033 06:05 UTC   9y              no
front-proxy-ca          Dec 08, 2033 06:05 UTC   9y              no
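The same expiry information can also be read from any certificate file directly with openssl. A sketch on a throwaway certificate so it is runnable anywhere; on the master you would point -in at /etc/kubernetes/pki/apiserver.crt:

```shell
# Throwaway cert for demonstration; skip this step on a real master
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -subj "/CN=kube-apiserver" -days 365 2>/dev/null

# Print just the validity window (notBefore / notAfter)
openssl x509 -in /tmp/demo.crt -noout -startdate -enddate
```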