https://medium.com/@lubomir-tobek/kubernetes-multi-master-node-cluster-f2081e504983
Kubernetes Multi-Master Node Cluster
Creating and operating a highly available Kubernetes cluster requires multiple Kubernetes control plane nodes, also called “master nodes”. To achieve this, each master node must be able to communicate with every other master, and the control plane must be addressable through a single IP address.
Kubernetes Master Node Components
Kube-apiserver:
- Provides an API that serves as the front end of a Kubernetes control plane.
- It handles external and internal requests, determines whether each request is valid, and then processes it.
- The API can be accessed via the kubectl command-line interface or other tools like kubeadm, and via REST calls.
Kube-scheduler:
- This component schedules pods on specific nodes as per automated workflows and user-defined conditions.
Kube-controller-manager:
- The Kubernetes controller manager is a control loop that monitors and regulates the state of a Kubernetes cluster.
- It receives information about the current state of the cluster and objects within it and sends instructions to move the cluster towards the cluster operator’s desired state.
etcd:
- A key-value database that contains data about your cluster state and configuration.
- Etcd is fault-tolerant and distributed.
Kubernetes Worker Node Components
Kubelet:
- Each node contains a kubelet, which is a small application that can communicate with the Kubernetes control plane.
- The kubelet ensures that containers specified in the pod configuration run on a specific node, and manages their lifecycle.
- It executes the actions commanded by your control plane.
Kube-proxy:
- All compute nodes contain kube-proxy, a network proxy that facilitates Kubernetes networking services.
- It handles all network communications outside and inside the cluster, and forwards traffic or replies on the packet filtering layer of the operating system.
Pods:
- A pod serves as a single application instance and is considered the smallest unit in the object model of Kubernetes.
Bastion Host:
- A bastion host generally runs a single application or process, for example a proxy server or load balancer; all other services are removed or restricted to reduce the threat surface of the machine.
Today, we will look at how to easily create an HA Kubernetes cluster with two master nodes that hold the control plane role.
I will use the HAProxy machine to manage the Kubernetes cluster and generate all the necessary certificates. Still, you can also use a dedicated bastion host as the client machine.
I will use a minimalistic configuration for a simple scenario and limited local resources.
I used three virtual machines with Ubuntu Server 22.04.3 with OpenSSH.
Each mini VM contained 2vCPUs, 4GB RAM, 20GB HDD, vNIC vmxnet3 Bridge running on VMware Fusion.
I used these servers:
* k8s-haproxy on IP: 192.168.1.116
* k8s-master-node-01 on IP: 192.168.1.112
* k8s-master-node-02 on IP: 192.168.1.123
In this blog post, I have left out the creation of worker nodes as separate machines; at the end, I will mention how worker nodes can be joined.
Let’s see together how to do it.
I installed client tools on the HAProxy machine and generated certificates; that was my bastion host.
I have already prepared all the necessary tools such as kubeadm, kubectl, docker, containerd, and others.
I started setting up client tools on the HAProxy machine.
Installing cfssl
CFSSL is a TLS/PKI toolkit by Cloudflare that lets us create certificates and certificate authorities.
Download the binaries
k8s-user@k8s-haproxy:~$ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
k8s-user@k8s-haproxy:~$ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
Add the execution permission to the binaries
k8s-user@k8s-haproxy:~$ chmod +x cfssl*
Move the binaries to /usr/local/bin
k8s-user@k8s-haproxy:~$ sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
k8s-user@k8s-haproxy:~$ sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
Verify the installation
k8s-user@k8s-haproxy:~$ cfssl version
Installing kubectl
Get the binary
k8s-user@k8s-haproxy:~$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.19.0/bin/linux/amd64/kubectl
Add the execution permission to the binary
k8s-user@k8s-haproxy:~$ chmod +x kubectl
Move the binary to /usr/local/bin
k8s-user@k8s-haproxy:~$ sudo mv kubectl /usr/local/bin
Installing HAProxy Load Balancer
Since I will be deploying 2 Kubernetes master nodes, I need to deploy an HAProxy load balancer in front of them to distribute the traffic.
Install HAProxy
k8s-user@k8s-haproxy:~$ sudo apt-get install -y haproxy
Configure HAProxy
k8s-user@k8s-haproxy:~$ sudo nano /etc/haproxy/haproxy.cfg
# Add the following configuration to /etc/haproxy/haproxy.cfg
global
...
defaults
...
frontend kubernetes
bind 192.168.1.116:6443
option tcplog
mode tcp
default_backend kubernetes-master-nodes
backend kubernetes-master-nodes
mode tcp
balance roundrobin
option tcp-check
server k8s-master-node-01 192.168.1.112:6443 check fall 3 rise 2
server k8s-master-node-02 192.168.1.123:6443 check fall 3 rise 2
Restart HAProxy
k8s-user@k8s-haproxy:~$ sudo systemctl restart haproxy
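It is worth validating the configuration file and confirming that the frontend is actually listening before moving on. A minimal sketch using HAProxy's built-in check mode and the standard `ss` socket-statistics tool:

```shell
# Validate the configuration file without (re)starting the service
sudo haproxy -c -f /etc/haproxy/haproxy.cfg

# Confirm the frontend is listening on port 6443
ss -tln | grep 6443
```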
Generating the TLS certificates
Creating a Certificate Authority
Create the certificate authority configuration file
k8s-user@k8s-haproxy:~$ nano ca-config.json
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "8760h"
}
}
}
}
Create the certificate authority signing request configuration file
k8s-user@k8s-haproxy:~$ nano ca-csr.json
{
"CN": "Kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "SK",
"L": "Bratislava",
"O": "XYZ",
"OU": "IT",
"ST": "Slovakia"
}
]
}
Generate the certificate authority certificate and private key
k8s-user@k8s-haproxy:~$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Verify that the ca-key.pem and the ca.pem were generated
k8s-user@k8s-haproxy:~$ ls -la
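Beyond checking the files exist, cfssl's output can be cross-checked with openssl; this sketch prints the CA's subject, issuer, and validity window (the expiry should match the 8760h set in ca-config.json):

```shell
# Inspect the freshly generated CA certificate
openssl x509 -in ca.pem -noout -subject -issuer -dates
```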
Creating the certificate for the Etcd cluster
Create the certificate signing request configuration file
k8s-user@k8s-haproxy:~$ nano kubernetes-csr.json
{
"CN": "Kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "SK",
"L": "Bratislava",
"O": "XYZ",
"OU": "IT",
"ST": "Slovakia"
}
]
}
Generate the certificate and private key
k8s-user@k8s-haproxy:~$ cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=192.168.1.116,192.168.1.112,192.168.1.123,127.0.0.1,kubernetes.default \
-profile=kubernetes kubernetes-csr.json | \
cfssljson -bare kubernetes
Verify that the kubernetes-key.pem and the kubernetes.pem file were generated
k8s-user@k8s-haproxy:~$ ls -la
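Since etcd peers and clients will connect by IP, it is also worth confirming that every address passed via -hostname ended up in the certificate's Subject Alternative Names. A sketch (the -ext flag needs OpenSSL 1.1.1 or newer; on older versions, grep the full -text output instead):

```shell
# List the Subject Alternative Names embedded in the certificate
openssl x509 -in kubernetes.pem -noout -ext subjectAltName
```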
Copy the certificate to each node
scp ca.pem kubernetes.pem kubernetes-key.pem k8s-user@192.168.1.112:~
scp ca.pem kubernetes.pem kubernetes-key.pem k8s-user@192.168.1.123:~
Preparing the nodes for kubeadm
Initial setup for all master and worker node machines
Copy the commands below and paste them into a setup.sh file and then execute it with . setup.sh.
k8s-user@k8s-master-node-01:~$ sudo nano setup.sh
k8s-user@k8s-master-node-02:~$ sudo nano setup.sh
k8s-user@k8s-master-node-01:~$ . setup.sh
k8s-user@k8s-master-node-02:~$ . setup.sh
sudo apt-get remove docker docker-engine docker.io containerd runc
sudo apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
sudo usermod -aG docker k8s-user
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo swapoff -a
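Note that the apt.kubernetes.io repository used above has since been frozen; current packages come from the community-owned pkgs.k8s.io repository, which is keyed per minor version. A sketch of the replacement steps for v1.28, following the official install documentation (the keyring path is the conventional one, not mandated):

```shell
# Fetch the repository signing key into a dedicated keyring
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | \
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Register the per-minor-version package repository
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```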
Installing and configuring Etcd on both Master Nodes
Download and move etcd files and certs to their respective places
k8s-user@k8s-master-node-01:~$ sudo mkdir /etc/etcd /var/lib/etcd
k8s-user@k8s-master-node-02:~$ sudo mkdir /etc/etcd /var/lib/etcd
k8s-user@k8s-master-node-01:~$ sudo mv ~/ca.pem ~/kubernetes.pem ~/kubernetes-key.pem /etc/etcd
k8s-user@k8s-master-node-02:~$ sudo mv ~/ca.pem ~/kubernetes.pem ~/kubernetes-key.pem /etc/etcd
k8s-user@k8s-master-node-01:~$ wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
k8s-user@k8s-master-node-02:~$ wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
k8s-user@k8s-master-node-01:~$ tar xvzf etcd-v3.4.13-linux-amd64.tar.gz
k8s-user@k8s-master-node-02:~$ tar xvzf etcd-v3.4.13-linux-amd64.tar.gz
k8s-user@k8s-master-node-01:~$ sudo mv etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/
k8s-user@k8s-master-node-02:~$ sudo mv etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/
Create an etcd systemd unit file
k8s-user@k8s-master-node-01:~$ sudo nano /etc/systemd/system/etcd.service
k8s-user@k8s-master-node-02:~$ sudo nano /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/local/bin/etcd \
--name 192.168.1.112 \
--cert-file=/etc/etcd/kubernetes.pem \
--key-file=/etc/etcd/kubernetes-key.pem \
--peer-cert-file=/etc/etcd/kubernetes.pem \
--peer-key-file=/etc/etcd/kubernetes-key.pem \
--trusted-ca-file=/etc/etcd/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ca.pem \
--peer-client-cert-auth \
--client-cert-auth \
--initial-advertise-peer-urls https://192.168.1.112:2380 \
--listen-peer-urls https://192.168.1.112:2380 \
--listen-client-urls https://192.168.1.112:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://192.168.1.112:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster 192.168.1.112=https://192.168.1.112:2380,192.168.1.123=https://192.168.1.123:2380 \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Replace the IP address in all fields except the --initial-cluster field to match each machine's IP.
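Rather than hand-editing five fields on each machine, the unit file can be rendered from a single variable; a sketch where NODE_IP is set per master and --initial-cluster deliberately stays literal:

```shell
# Set to this machine's IP: 192.168.1.112 on master-01, 192.168.1.123 on master-02
NODE_IP=192.168.1.112

# Render the unit file; ${NODE_IP} is substituted, trailing \\ emits a literal \
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/local/bin/etcd \\
  --name ${NODE_IP} \\
  --cert-file=/etc/etcd/kubernetes.pem \\
  --key-file=/etc/etcd/kubernetes-key.pem \\
  --peer-cert-file=/etc/etcd/kubernetes.pem \\
  --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${NODE_IP}:2380 \\
  --listen-peer-urls https://${NODE_IP}:2380 \\
  --listen-client-urls https://${NODE_IP}:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls https://${NODE_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster 192.168.1.112=https://192.168.1.112:2380,192.168.1.123=https://192.168.1.123:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```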
Reload the daemon configuration
k8s-user@k8s-master-node-01:~$ sudo systemctl daemon-reload
k8s-user@k8s-master-node-02:~$ sudo systemctl daemon-reload
Enable etcd to start at boot time
k8s-user@k8s-master-node-01:~$ sudo systemctl enable etcd
k8s-user@k8s-master-node-02:~$ sudo systemctl enable etcd
Start etcd
k8s-user@k8s-master-node-01:~$ sudo systemctl start etcd
k8s-user@k8s-master-node-02:~$ sudo systemctl start etcd
Verify that the cluster is up and running
k8s-user@k8s-master-node-01:~$ ETCDCTL_API=3 etcdctl member list
1ec9f10bf28faf68, started, 192.168.1.112, https://192.168.1.112:2380, https://192.168.1.112:2379, false
7a9a899a0680e6b2, started, 192.168.1.123, https://192.168.1.123:2380, https://192.168.1.123:2379, false
k8s-user@k8s-master-node-02:~$ ETCDCTL_API=3 etcdctl member list
1ec9f10bf28faf68, started, 192.168.1.112, https://192.168.1.112:2380, https://192.168.1.112:2379, false
7a9a899a0680e6b2, started, 192.168.1.123, https://192.168.1.123:2380, https://192.168.1.123:2379, false
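member list only shows membership; endpoint health actually round-trips a request over TLS, so it also exercises the certificates. A sketch using etcdctl v3.4 flag names and the same cert paths as above:

```shell
# Health-check both etcd endpoints over TLS
ETCDCTL_API=3 etcdctl endpoint health \
  --endpoints=https://192.168.1.112:2379,https://192.168.1.123:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem
```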
Initialising the Master Nodes
Initialising the first Master Node
Create the configuration file for kubeadm
k8s-user@k8s-master-node-01:~$ nano config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
controlPlaneEndpoint: "192.168.1.116:6443"
etcd:
external:
endpoints:
- https://192.168.1.112:2379
- https://192.168.1.123:2379
caFile: /etc/etcd/ca.pem
certFile: /etc/etcd/kubernetes.pem
keyFile: /etc/etcd/kubernetes-key.pem
networking:
podSubnet: 10.30.0.0/24
apiServer:
certSANs:
- "192.168.1.116"
extraArgs:
apiserver-count: "3"
Add any additional domains or IP Addresses that you would want to connect to the cluster under certSANs.
Initialise the machine as a master node
k8s-user@k8s-master-node-01:~$ sudo kubeadm init --config=config.yaml
Copy the certificates to the second master node
k8s-user@k8s-master-node-01:~$ sudo scp -r /etc/kubernetes/pki k8s-user@192.168.1.123:~
Initialising the second Master Node
Remove the apiserver.crt and apiserver.key
k8s-user@k8s-master-node-02:~$ rm ~/pki/apiserver.*
Move the certificates to the /etc/kubernetes directory
k8s-user@k8s-master-node-02:~$ sudo mv ~/pki /etc/kubernetes/
Create the configuration file for kubeadm
k8s-user@k8s-master-node-02:~$ nano config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
controlPlaneEndpoint: "192.168.1.116:6443"
etcd:
external:
endpoints:
- https://192.168.1.112:2379
- https://192.168.1.123:2379
caFile: /etc/etcd/ca.pem
certFile: /etc/etcd/kubernetes.pem
keyFile: /etc/etcd/kubernetes-key.pem
networking:
podSubnet: 10.30.0.0/24
apiServer:
certSANs:
- "192.168.1.116"
extraArgs:
apiserver-count: "3"
Initialise the machine as a master node
k8s-user@k8s-master-node-02:~$ sudo kubeadm init --config=config.yaml
Save the join command printed in the output after the above command (Example)
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
sudo kubeadm join 192.168.1.116:6443 --token aksywy.on2927krgaf9hja1 \
--discovery-token-ca-cert-hash sha256:d4b9bbd0e92ba973ffc22401cd537298961f5a05ed1c6094336116f3b44a9730 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
sudo kubeadm join 192.168.1.116:6443 --token aksywy.on2927krgaf9hja1 \
--discovery-token-ca-cert-hash sha256:d4b9bbd0e92ba973ffc22401cd537298961f5a05ed1c6094336116f3b44a9730
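Bootstrap tokens expire after 24 hours by default, so the saved join command may stop working. A fresh worker join command, with a new token and the correct discovery hash, can be printed on an existing master:

```shell
# Generate a new token and print a ready-to-run worker join command
sudo kubeadm token create --print-join-command
```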
Configure kubectl on the client machine
SSH to one of the master nodes
k8s-user@k8s-haproxy:~$ ssh k8s-user@192.168.1.112
Add permissions to the admin.conf file
k8s-user@k8s-master-node-01:~$ sudo chmod +r /etc/kubernetes/admin.conf
From the client machine, copy the configuration file
k8s-user@k8s-haproxy:~$ scp k8s-user@192.168.1.112:/etc/kubernetes/admin.conf .
Create and configure the kubectl configuration directory
k8s-user@k8s-haproxy:~$ mkdir -p ~/.kube
k8s-user@k8s-haproxy:~$ mv admin.conf ~/.kube/config
k8s-user@k8s-haproxy:~$ chmod 600 ~/.kube/config
Go back to the SSH session and revert the permissions of the config file
k8s-user@k8s-master-node-01:~$ sudo chmod 600 /etc/kubernetes/admin.conf
Test to see if you can access the Kubernetes API from the client machine
k8s-user@k8s-haproxy:~$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.1.116:6443
CoreDNS is running at https://192.168.1.116:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
k8s-user@k8s-haproxy:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master-node-01 NotReady control-plane 92m v1.28.2
k8s-master-node-02 NotReady control-plane 84m v1.28.2
k8s-user@k8s-haproxy:~$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS
coredns-5dd5756b68-j7n7x 0/1 Pending 0
coredns-5dd5756b68-nmfmv 0/1 Pending 0
kube-apiserver-k8s-master-node-01 1/1 Running 1 (46m ago)
kube-apiserver-k8s-master-node-02 1/1 Running 2 (46m ago)
kube-controller-manager-k8s-master-node-01 1/1 Running 1 (46m ago)
kube-controller-manager-k8s-master-node-02 1/1 Running 2 (46m ago)
kube-proxy-75zgk 1/1 Running 1 (46m ago)
kube-proxy-m89j9 1/1 Running 1 (46m ago)
kube-scheduler-k8s-master-node-01 1/1 Running 1 (46m ago)
kube-scheduler-k8s-master-node-02 1/1 Running 1 (46m ago)
Deploying the overlay network
We will be using Project Calico as the overlay network.
Apply the manifest to deploy calico overlay
k8s-user@k8s-master-node-01:~$ curl https://docs.projectcalico.org/manifests/calico.yaml -O
k8s-user@k8s-master-node-01:~$ kubectl apply -f calico.yaml
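Calico rolls out as a DaemonSet plus a controller Deployment; their progress can be watched before re-checking node status. The object names below assume the stock calico.yaml manifest:

```shell
# Block until the Calico node agents and controllers are fully rolled out
kubectl -n kube-system rollout status daemonset/calico-node
kubectl -n kube-system rollout status deployment/calico-kube-controllers
```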
k8s-user@k8s-haproxy:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master-node-01 Ready control-plane 136m v1.28.2
k8s-master-node-02 Ready control-plane 128m v1.28.2
k8s-user@k8s-master-node-01:~$ kubectl get pod -n kube-system -w
NAME READY STATUS RESTARTS
calico-kube-controllers-57758d645c-mhnkj 1/1 Running 0
calico-node-fzx64 1/1 Running 0
calico-node-qj2hk 1/1 Running 0
coredns-5dd5756b68-j7n7x 1/1 Running 0
coredns-5dd5756b68-nmfmv 1/1 Running 0
kube-apiserver-k8s-master-node-01 1/1 Running 1 (87m ago)
kube-apiserver-k8s-master-node-02 1/1 Running 2 (88m ago)
kube-controller-manager-k8s-master-node-01 1/1 Running 1 (87m ago)
kube-controller-manager-k8s-master-node-02 1/1 Running 2 (88m ago)
kube-proxy-75zgk 1/1 Running 1 (87m ago)
kube-proxy-m89j9 1/1 Running 1 (88m ago)
kube-scheduler-k8s-master-node-01 1/1 Running 1 (87m ago)
kube-scheduler-k8s-master-node-02 1/1 Running 1 (88m ago)
Initialise the worker nodes
SSH into each worker node and execute the kubeadm join command that you copied previously.
sudo kubeadm join 192.168.1.116:6443 --token aksywy.on2927krgaf9hja1 \
--discovery-token-ca-cert-hash sha256:d4b9bbd0e92ba973ffc22401cd537298961f5a05ed1c6094336116f3b44a9730
Once all worker nodes have joined the cluster, test the API to check the available nodes from the client machine.
Now the HA Multi-Master Kubernetes Cluster is ready for use. The mentioned commands and their outputs are illustrative.
And if you’re interested in looking at HA for HAProxy with the Keepalived option for K8s, here’s one scenario: