Posted by 渣骑 on 2025-6-1 19:09:45

Deploying a Kubernetes HA (High-Availability) Cluster

1. Kubernetes HA Cluster Overview


[*]Without a cloud-provider load balancer, high availability (HA) for the Kubernetes master nodes is usually implemented with one of two mainstream approaches: Keepalived + Nginx or Keepalived + HAProxy. Both distribute traffic across multiple masters and provide failover through a floating VIP (virtual IP).
[*]I originally planned to use keepalived + nginx for HA, but since servers are limited and the workload is small, I only use keepalived for VIP failover and skip nginx as a proxy/load balancer. The relevant nginx configuration is still provided below for reference.
2. Deployment Environment


[*]Deploy at least 3 master nodes so that etcd has an odd number of members. With only 2 masters, stopping one leaves etcd without quorum (2 of 2 would be required), so etcd becomes unavailable and the apiserver goes down with it. [IMPORTANT!!!]
IP address     OS           Kernel                        Hardware        Role       Data base directory
172.16.1.23    VIP (keepalived); keepalived is deployed on the 3 master nodes
172.16.1.20    CentOS 7.8   5.4.278-1.el7.elrepo.x86_64   8C/16G/100G     master1    /data/
172.16.1.21    CentOS 7.8   5.4.278-1.el7.elrepo.x86_64   8C/16G/100G     master2    /data/
172.16.1.24    CentOS 7.8   5.4.278-1.el7.elrepo.x86_64   8C/16G/100G     master3    /data/
172.16.1.22    CentOS 7.8   5.4.278-1.el7.elrepo.x86_64   8C/16G/100G     node1      /data/
172.16.1.xx    CentOS 7.8   5.4.278-1.el7.elrepo.x86_64   8C/16G/100G     node2      /data/
172.16.1.xx    CentOS 7.8   5.4.278-1.el7.elrepo.x86_64   8C/16G/100G     node3      /data/

3. Initialize the System, Upgrade the Kernel, and Install the k8s Components


[*]If you are deploying a k8s HA cluster, use only the initialization part of the post below and follow this document for everything else; perform the same initialization on every k8s node. A rough outline of the typical steps is sketched after the link.
https://www.cnblogs.com/Leonardo-li/p/18648449

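As a quick orientation only (not a substitute for the post above), typical CentOS 7 node preparation looks roughly like the sketch below; the kernel upgrade and the installation of Docker and kubeadm/kubelet/kubectl v1.23.17 are covered in the referenced post.
# Rough outline of the usual kubeadm prerequisites (verify every step against the referenced post)
swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab            # disable swap
systemctl disable --now firewalld                            # stop the firewall
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Kernel module and sysctls required for bridged pod traffic
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system
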
4. Build kubeadm and Extend the Certificate Validity

4.1 Prepare the Go Environment


[*]Go must be newer than 1.17 or the build will fail; I use 1.23 here
# Download the Go tarball
wget https://golang.google.cn/dl/go1.23.4.linux-amd64.tar.gz
tar zxf go1.23.4.linux-amd64.tar.gz
mv go /usr/local/

# Set the Go environment variables
vim /etc/profile
# Append the following 2 lines to the end of the file
export PATH=$PATH:/usr/local/go/bin
export GOPATH=$HOME/go

# Apply the environment variables
source /etc/profile

# Check that Go works
go version

4.2 Install git

yum -y install git

4.3 Modify and Build kubeadm

4.3.1 Download the matching Kubernetes source (v1.23.17 here)

git clone --depth 1 --branch v1.23.17 https://github.com/kubernetes/kubernetes.git

4.3.2 Change the Certificate Validity

cd kubernetes/

[*]Extend the CA certificates to 100 years: comment out the original line (comment marker //) and change the *10 in the code to *100
vim ./staging/src/k8s.io/client-go/util/cert/cert.go

[*]Extend the other certificates to 100 years: comment out the original line (comment marker //) and change 24 * 365 to 24 * 365 * 100 (a scripted version of both edits is sketched below)
vim ./cmd/kubeadm/app/constants/constants.go
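
For reference, the two manual edits above can also be scripted. This is only a sketch, run from the kubernetes/ source root, and it assumes the v1.23.17 tree still contains the expressions "duration365d * 10" (CA validity in client-go's cert helper) and "time.Hour * 24 * 365" (CertificateValidity in kubeadm's constants); check the files first if anything has drifted.
# Sketch only: apply the same edits non-interactively (run once; verify the target lines exist first)
# CA certificates: 10 years -> 100 years
sed -i 's/duration365d \* 10/duration365d * 100/' ./staging/src/k8s.io/client-go/util/cert/cert.go

# Other certificates: 1 year -> 100 years
sed -i 's/time\.Hour \* 24 \* 365/time.Hour * 24 * 365 * 100/' ./cmd/kubeadm/app/constants/constants.go

# Confirm the changes
grep -n '\* 100' ./staging/src/k8s.io/client-go/util/cert/cert.go ./cmd/kubeadm/app/constants/constants.go
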
4.3.3 Build kubeadm

make all WHAT=cmd/kubeadm GOFLAGS=-v

4.3.4 Check That the Build Succeeded

ls _output/bin/kubeadm

4.3.5 Copy kubeadm to All k8s Machines (masters and nodes)

scp  _output/bin/kubeadm root@172.16.1.20:/data/
scp  _output/bin/kubeadm root@172.16.1.21:/data/
scp  _output/bin/kubeadm root@172.16.1.24:/data/
scp  _output/bin/kubeadm root@172.16.1.22:/data/

4.3.6 On All k8s Nodes, Back Up the Old kubeadm and Move the New Build into the Same Directory

mv /usr/bin/kubeadm /usr/bin/kubeadm-old
mv /data/kubeadm /usr/bin/
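
Optionally, confirm on each node that the binary now in PATH is the one just built:
# Optional sanity check: should report v1.23.17 (or a very similar locally-built version string)
which kubeadm
kubeadm version -o short
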
5. Deploy keepalived

5.1 Install keepalived

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum -y install keepalived

5.2 keepalived Configuration


[*]keepalived.conf on master1
# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER                  # this is the primary node
    interface ens33               # network interface; adjust to your environment
    virtual_router_id 51          # VRRP router ID; must match on primary and backups
    priority 100                  # priority; the primary must be higher than the backups
    advert_int 1                  # VRRP advertisement interval, in seconds
    authentication {
      auth_type PASS              # authentication type
      auth_pass 1111              # authentication password; must match on primary and backups
    }
    virtual_ipaddress {
      172.16.1.23/22              # virtual IP address; adjust to your environment
    }
}

[*]keepalived.conf on master2
# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP                  # this is a backup node
    interface ens33               # make sure the interface name is correct
    virtual_router_id 51          # VRRP router ID; must match on primary and backups
    priority 80                   # priority; backups must be lower than the primary
    advert_int 1                  # VRRP advertisement interval, in seconds

    authentication {
      auth_type PASS              # authentication type
      auth_pass 1111              # authentication password; must match on primary and backups
    }

    virtual_ipaddress {
      172.16.1.23/22              # virtual IP address; same as on the primary
    }
}

[*]keepalived.conf on master3
# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP                  # this is a backup node
    interface ens33               # make sure the interface name is correct
    virtual_router_id 51          # VRRP router ID; must match on primary and backups
    priority 60                   # priority; backups must be lower than the primary
    advert_int 1                  # VRRP advertisement interval, in seconds

    authentication {
      auth_type PASS              # authentication type
      auth_pass 1111              # authentication password; must match on primary and backups
    }

    virtual_ipaddress {
      172.16.1.23/22              # virtual IP address; same as on the primary
    }
}

5.3 Start keepalived (on all 3 nodes)

systemctl restart keepalived
systemctl enable keepalived
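
Before continuing, it is worth confirming that the VIP actually came up on the MASTER node (ens33 is assumed from the configs above; adjust if your interface differs):
# The VIP 172.16.1.23 should be present on exactly one node (initially master1)
ip addr show ens33 | grep 172.16.1.23
systemctl status keepalived
journalctl -u keepalived --no-pager | tail -n 20
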
5.4 Supplementary nginx.conf proxy and load-balancing configuration; I am not using nginx for now, only the keepalived VIP

# nginx.conf stream proxy
stream {
    upstream k8s_apiserver {
      # backend Kubernetes master nodes
      server 172.16.1.20:6443;   # master1
      server 172.16.1.21:6443;   # master2
      server 172.16.1.24:6443;   # master3
    }

    server {
      listen 6443;               # listen on TCP port 6443
      proxy_pass k8s_apiserver;
      proxy_timeout 10s;
    }
}

6. Initialize k8s [master1]

6.1 Write kubeadm-config.yaml


[*]172.16.4.177:8090/k8s12317/registry.aliyuncs.com/google_containers is my private Harbor registry; the images were downloaded offline earlier and pushed there. For details see: https://www.cnblogs.com/Leonardo-li/p/18648449
cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.17
imageRepository: 172.16.4.177:8090/k8s12317/registry.aliyuncs.com/google_containers
apiServer:
  certSANs:
  - "172.16.1.23"   # VIP
  - "172.16.1.20"   # master1 real IP
  - "172.16.1.21"   # master2 real IP
  - "172.16.1.24"   # master3 real IP
  - "127.0.0.1"     # local loopback
controlPlaneEndpoint: "172.16.1.23:6443"   # VIP
networking:
  serviceSubnet: 10.96.0.0/12   # key fix: the field is serviceSubnet, not serviceCIDR
  podSubnet: 10.244.0.0/16      # must match the Calico configuration
---
# The lines below enable ipvs mode for kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
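
Optionally, before running init, the config can be sanity-checked and the images pre-pulled from the private registry:
# Optional: resolve and pre-pull the images referenced by kubeadm-config.yaml
kubeadm config images list --config kubeadm-config.yaml
kubeadm config images pull --config kubeadm-config.yaml
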
6.2 Initialize master1

kubeadm init --config=kubeadm-config.yaml --upload-certs

[*]The initialization output is as follows:
master1 initialization output:
# kubeadm init --config=kubeadm-config.yaml --upload-certs
Using Kubernetes version: v1.23.17
Running pre-flight checks
Pulling images required for setting up a Kubernetes cluster
This might take a minute or two, depending on the speed of your internet connection
You can also perform this action in beforehand using 'kubeadm config images pull'
Using certificateDir folder "/etc/kubernetes/pki"
Generating "ca" certificate and key
Generating "apiserver" certificate and key
apiserver serving cert is signed for DNS names and IPs
Generating "apiserver-kubelet-client" certificate and key
Generating "front-proxy-ca" certificate and key
Generating "front-proxy-client" certificate and key
Generating "etcd/ca" certificate and key
Generating "etcd/server" certificate and key
etcd/server serving cert is signed for DNS names and IPs
Generating "etcd/peer" certificate and key
etcd/peer serving cert is signed for DNS names and IPs
Generating "etcd/healthcheck-client" certificate and key
Generating "apiserver-etcd-client" certificate and key
Generating "sa" key and public key
Using kubeconfig folder "/etc/kubernetes"
Writing "admin.conf" kubeconfig file
Writing "kubelet.conf" kubeconfig file
Writing "controller-manager.conf" kubeconfig file
Writing "scheduler.conf" kubeconfig file
Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
Starting the kubelet
Using manifest folder "/etc/kubernetes/manifests"
Creating static Pod manifest for "kube-apiserver"
Creating static Pod manifest for "kube-controller-manager"
Creating static Pod manifest for "kube-scheduler"
Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
All control plane components are healthy after 15.008274 seconds
Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
Using certificate key:
6d3f1abc998f3ffd104a989aa4b5ff3ae622ccd1a9b0098d9b68ed4221820ac5
Marking the node master1 as control-plane by adding the labels:
Marking the node master1 as control-plane by adding the taints
Using token: agitw8.fwghrey1nysrprf8
Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
configured RBAC rules to allow Node Bootstrap tokens to get nodes
configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
Creating the "cluster-info" ConfigMap in the "kube-public" namespace
Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
Applied essential addon: CoreDNS
Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f .yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

kubeadm join 172.16.1.23:6443 --token agitw8.fwghrey1nysrprf8 \
        --discovery-token-ca-cert-hash sha256:01099779f60c0ba7a8070edacaeaaa1b3b55c36b3a9136402200ce75dafe6bb8 \
        --control-plane --certificate-key 6d3f1abc998f3ffd104a989aa4b5ff3ae622ccd1a9b0098d9b68ed4221820ac5

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.1.23:6443 --token agitw8.fwghrey1nysrprf8 \
        --discovery-token-ca-cert-hash sha256:01099779f60c0ba7a8070edacaeaaa1b3b55c36b3a9136402200ce75dafe6bb8

[*] Key parts of the initialization output: 1. the commands to copy the kubeconfig to the home directory (run them), 2. the command for joining additional master nodes (save it), 3. the command for joining worker nodes (save it)


[*] Copy the admin kubeconfig to the home directory (copy the commands directly from the init output and run them)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

[*]Check the nodes; there is no network plugin yet, so the status is NotReady

7. Initialize k8s [master2]

7.1 Initialize master2 and join it to the cluster


[*]Copy and run the control-plane join command obtained in step 6.2
kubeadm join 172.16.1.23:6443 --token agitw8.fwghrey1nysrprf8 \
        --discovery-token-ca-cert-hash sha256:01099779f60c0ba7a8070edacaeaaa1b3b55c36b3a9136402200ce75dafe6bb8 \
        --control-plane --certificate-key 6d3f1abc998f3ffd104a989aa4b5ff3ae622ccd1a9b0098d9b68ed4221820ac5

[*]The master2 join output is as follows:
master2 join output:
# kubeadm join 172.16.1.23:6443 --token agitw8.fwghrey1nysrprf8 \
> --discovery-token-ca-cert-hash sha256:01099779f60c0ba7a8070edacaeaaa1b3b55c36b3a9136402200ce75dafe6bb8 \
> --control-plane --certificate-key 6d3f1abc998f3ffd104a989aa4b5ff3ae622ccd1a9b0098d9b68ed4221820ac5
Running pre-flight checks
Reading configuration from the cluster...
FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
Running pre-flight checks before initializing the new control plane instance
Pulling images required for setting up a Kubernetes cluster
This might take a minute or two, depending on the speed of your internet connection
You can also perform this action in beforehand using 'kubeadm config images pull'
Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
Using certificateDir folder "/etc/kubernetes/pki"
Generating "apiserver-kubelet-client" certificate and key
Generating "apiserver" certificate and key
apiserver serving cert is signed for DNS names and IPs
Generating "front-proxy-client" certificate and key
Generating "etcd/peer" certificate and key
etcd/peer serving cert is signed for DNS names and IPs
Generating "apiserver-etcd-client" certificate and key
Generating "etcd/server" certificate and key
etcd/server serving cert is signed for DNS names and IPs
Generating "etcd/healthcheck-client" certificate and key
Valid certificates and keys now exist in "/etc/kubernetes/pki"
Using the existing "sa" key
Generating kubeconfig files
Using kubeconfig folder "/etc/kubernetes"
Writing "admin.conf" kubeconfig file
Writing "controller-manager.conf" kubeconfig file
Writing "scheduler.conf" kubeconfig file
Using manifest folder "/etc/kubernetes/manifests"
Creating static Pod manifest for "kube-apiserver"
Creating static Pod manifest for "kube-controller-manager"
Creating static Pod manifest for "kube-scheduler"
Checking that the etcd cluster is healthy
Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
Starting the kubelet
Waiting for the kubelet to perform the TLS Bootstrap...
Announced new etcd member joining to the existing etcd cluster
Creating static Pod manifest for "etcd"
Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
Marking the node master2 as control-plane by adding the labels:
Marking the node master2 as control-plane by adding the taints

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[*]Copy the admin kubeconfig to the home directory (copy the commands from the join output and run them)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

[*]Check the nodes; there are now two master nodes, still no network plugin, so the status is NotReady

8. Initialize k8s [master3]

8.1 Initialize master3 and join it to the cluster


[*]Copy and run the control-plane join command obtained in step 6.2
kubeadm join 172.16.1.23:6443 --token agitw8.fwghrey1nysrprf8 \
        --discovery-token-ca-cert-hash sha256:01099779f60c0ba7a8070edacaeaaa1b3b55c36b3a9136402200ce75dafe6bb8 \
        --control-plane --certificate-key 6d3f1abc998f3ffd104a989aa4b5ff3ae622ccd1a9b0098d9b68ed4221820ac5

[*]The master3 join output is as follows:
master3 join output:
# kubeadm join 172.16.1.23:6443 --token agitw8.fwghrey1nysrprf8 \
> --discovery-token-ca-cert-hash sha256:01099779f60c0ba7a8070edacaeaaa1b3b55c36b3a9136402200ce75dafe6bb8 \
> --control-plane --certificate-key 6d3f1abc998f3ffd104a989aa4b5ff3ae622ccd1a9b0098d9b68ed4221820ac5
Running pre-flight checks
Reading configuration from the cluster...
FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
Running pre-flight checks before initializing the new control plane instance
Pulling images required for setting up a Kubernetes cluster
This might take a minute or two, depending on the speed of your internet connection
You can also perform this action in beforehand using 'kubeadm config images pull'
Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
Using certificateDir folder "/etc/kubernetes/pki"
Generating "apiserver-kubelet-client" certificate and key
Generating "apiserver" certificate and key
apiserver serving cert is signed for DNS names and IPs
Generating "front-proxy-client" certificate and key
Generating "etcd/server" certificate and key
etcd/server serving cert is signed for DNS names and IPs
Generating "apiserver-etcd-client" certificate and key
Generating "etcd/peer" certificate and key
etcd/peer serving cert is signed for DNS names and IPs
Generating "etcd/healthcheck-client" certificate and key
Valid certificates and keys now exist in "/etc/kubernetes/pki"
Using the existing "sa" key
Generating kubeconfig files
Using kubeconfig folder "/etc/kubernetes"
Writing "admin.conf" kubeconfig file
Writing "controller-manager.conf" kubeconfig file
Writing "scheduler.conf" kubeconfig file
Using manifest folder "/etc/kubernetes/manifests"
Creating static Pod manifest for "kube-apiserver"
Creating static Pod manifest for "kube-controller-manager"
Creating static Pod manifest for "kube-scheduler"
Checking that the etcd cluster is healthy
Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
Starting the kubelet
Waiting for the kubelet to perform the TLS Bootstrap...
Announced new etcd member joining to the existing etcd cluster
Creating static Pod manifest for "etcd"
Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
Marking the node master3 as control-plane by adding the labels:
Marking the node master3 as control-plane by adding the taints

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[*]Copy the admin kubeconfig to the home directory (copy the commands from the join output and run them)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

[*]Check the nodes; there are now three master nodes, still no network plugin, so the status is NotReady
https://img2024.cnblogs.com/blog/1276949/202503/1276949-20250328134541418-2057094921.png
9. Check Node Status


[*]Since the calico network plugin is not installed yet, the nodes are NotReady
9.1 Check node status on master3

# kubectl get node -o wide
NAME      STATUS   ROLES                  AGE   VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
master1   NotReady   control-plane,master   23m   v1.23.17   172.16.1.20   <none>      CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
master2   NotReady   control-plane,master   18m   v1.23.17   172.16.1.21   <none>      CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
master3   NotReady   control-plane,master   2m52s   v1.23.17   172.16.1.24   <none>      CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9

9.2 Check node status on master2

# kubectl get node -o wide
NAME      STATUS   ROLES                  AGE   VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
master1   NotReady   control-plane,master   50m   v1.23.17   172.16.1.20   <none>      CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
master2   NotReady   control-plane,master   44m   v1.23.17   172.16.1.21   <none>      CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9

9.3 Check node status on master1

# kubectl get node -o wide
NAME      STATUS   ROLES                  AGE   VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
master1   NotReady   control-plane,master   20m   v1.23.17   172.16.1.20   <none>      CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
master2   NotReady   control-plane,master   15m   v1.23.17   172.16.1.21   <none>      CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9

10. Join the Worker Nodes to the Cluster (run on all worker nodes)


[*]The initialization from step 3 has already been completed on all worker nodes
10.1 Copy and run the worker join command obtained in step 6.2

kubeadm join 172.16.1.23:6443 --token agitw8.fwghrey1nysrprf8 \
        --discovery-token-ca-cert-hash sha256:01099779f60c0ba7a8070edacaeaaa1b3b55c36b3a9136402200ce75dafe6bb8

[*]node1 join output
node1 join output:
# kubeadm join 172.16.1.23:6443 --token agitw8.fwghrey1nysrprf8 \
> --discovery-token-ca-cert-hash sha256:01099779f60c0ba7a8070edacaeaaa1b3b55c36b3a9136402200ce75dafe6bb8
Running pre-flight checks
Reading configuration from the cluster...
FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
Starting the kubelet
Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

10.2 Check node registration on any master (calico is not installed, so the nodes are not Ready)

# kubectl get node -o wide
NAME      STATUS   ROLES                  AGE    VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
master1   NotReady   control-plane,master   26m    v1.23.17   172.16.1.20   <none>      CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
master2   NotReady   control-plane,master   21m    v1.23.17   172.16.1.21   <none>      CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
master3   NotReady   control-plane,master   6m3s   v1.23.17   172.16.1.24   <none>      CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
node1   NotReady   <none>               28s    v1.23.17   172.16.1.22   <none>      CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9

11. Install the calico Network Plugin


[*]Follow step 6 of the post below (a generic sketch follows the link)
https://www.cnblogs.com/Leonardo-li/p/18648449
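For reference only, installing calico from the public manifest looks roughly like this; the referenced post installs it from a private Harbor registry instead, and the manifest URL/version below is an assumption, so adjust it to whatever you actually use. The pool CIDR must match podSubnet (10.244.0.0/16).
# Sketch only (assumed manifest version); the referenced post uses a private registry instead
curl -LO https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/calico.yaml
# Edit calico.yaml so CALICO_IPV4POOL_CIDR is 10.244.0.0/16, then apply it
kubectl apply -f calico.yaml
# Watch the calico and coredns pods become Running
kubectl get pods -n kube-system -w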

[*]Node status after calico is deployed
# kubectl get node -o wide
NAME      STATUS   ROLES                  AGE    VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
master1   Ready    control-plane,master   31m    v1.23.17   172.16.1.20   <none>      CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
master2   Ready    control-plane,master   26m    v1.23.17   172.16.1.21   <none>      CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
master3   Ready    control-plane,master   10m    v1.23.17   172.16.1.24   <none>      CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9
node1   Ready    <none>               5m3s   v1.23.17   172.16.1.22   <none>      CentOS Linux 7 (Core)   5.4.278-1.el7.elrepo.x86_64   docker://20.10.9

12. Verify HA for the k8s Cluster

12.1 Check the VIP and Master Node Status


[*]Check the VIP: at this point the VIP (172.16.1.23) is on master1 (172.16.1.20).


[*]Check the status of the control-plane components (commands for both checks are sketched below)
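
The two checks can be done with commands like these (ens33 is assumed from the keepalived configs):
# Which node holds the VIP? It should be master1 (172.16.1.20) at this point
ip addr show ens33 | grep 172.16.1.23

# Control-plane component status
kubectl get pods -n kube-system -o wide
kubectl get cs        # deprecated in v1.23 but still usable for a quick look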

12.2 Failover Test


[*]With the VIP on master1, shut down the master1 server and check whether the VIP fails over correctly; the VIP ends up on master3


[*]Check whether the control plane is still usable from master2 and master3; it works normally (see the verification sketch below)
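
A sketch of how this can be verified from master2 or master3 while master1 is down; the etcd pod name and certificate paths below are the standard kubeadm ones and are assumptions here:
# The VIP must have moved to a surviving master, and the API must answer through it
ip addr show ens33 | grep 172.16.1.23
kubectl get nodes     # admin.conf already points at the VIP (controlPlaneEndpoint 172.16.1.23:6443)

# etcd should still have quorum (2 of 3 members alive)
kubectl -n kube-system exec etcd-master2 -- etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    endpoint health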

13. Confirm the Long-Lived k8s Certificates

13.1 Both the CA certificates and the certificates used by everything else are now valid for 100 years

# kubeadm certs check-expiration
Reading configuration from the cluster...
FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf               Mar 07, 2125 02:50 UTC   99y             ca                      no      
apiserver                  Mar 07, 2125 02:50 UTC   99y             ca                      no      
apiserver-etcd-client      Mar 07, 2125 02:50 UTC   99y             etcd-ca               no      
apiserver-kubelet-client   Mar 07, 2125 02:50 UTC   99y             ca                      no      
controller-manager.conf    Mar 07, 2125 02:50 UTC   99y             ca                      no      
etcd-healthcheck-client    Mar 07, 2125 02:50 UTC   99y             etcd-ca               no      
etcd-peer                  Mar 07, 2125 02:50 UTC   99y             etcd-ca               no      
etcd-server                Mar 07, 2125 02:50 UTC   99y             etcd-ca               no      
front-proxy-client         Mar 07, 2125 02:50 UTC   99y             front-proxy-ca          no      
scheduler.conf             Mar 07, 2125 02:50 UTC   99y             ca                      no      

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Mar 07, 2125 02:50 UTC   99y             no      
etcd-ca               Mar 07, 2125 02:50 UTC   99y             no      
front-proxy-ca          Mar 07, 2125 02:50 UTC   99y             no

13.2 The kubelet Certificate Is Also Valid for 100 Years

# openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -dates
notBefore=Mar 31 02:50:04 2025 GMT
notAfter=Mar  7 02:50:08 2125 GMT

14. References

# HA cluster deployment
https://mp.weixin.qq.com/s/l4qS_GnmEZ2BmQpO6VI3sQ
# creating long-lived certificates
https://mp.weixin.qq.com/s/TRukdEGu0Nm_7wjqledrRg
# certificate overview
https://mp.weixin.qq.com/s/E1gc6pJGLzbgHCvbOd1nPQ
