Prepare the CentOS nodes first: /etc/hosts entries, chronyd time sync, etc.
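The preparation can be sketched as follows; the hostnames and IPs are taken from the haproxy backend further down and are assumptions for your environment. Run on every node:

```shell
# Hostnames/IPs match the haproxy backend below; adjust to your environment
cat >> /etc/hosts <<EOF
192.168.0.80 k8s-master01
192.168.0.81 k8s-master02
192.168.0.82 k8s-master03
EOF

# Time sync, and relax selinux/firewalld (kubeadm preflight expects this)
systemctl enable --now chronyd
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
systemctl disable --now firewalld
```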

Official high-availability documentation:

https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/high-availability/

Base and epel yum repos (Aliyun mirrors), plus the docker-ce repo:

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo && \
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo && \
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo && \
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Kubernetes yum repo:

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install docker. The HTTPS_PROXY/NO_PROXY lines below are optional, for pulling k8s.gcr.io images through a local proxy:

yum install docker-ce -y

sed -i '/ExecStart=/a\ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT' /usr/lib/systemd/system/docker.service
sed -i '/ExecStart=/i\Environment=HTTPS_PROXY=http://192.168.0.26:8118' /usr/lib/systemd/system/docker.service
sed -i '/ExecStart=/i\Environment=NO_PROXY=127.0.0.0/8,192.168.0.0/24' /usr/lib/systemd/system/docker.service
systemctl daemon-reload

Docker daemon configuration:

cat > /etc/docker/daemon.json <<EOF
{
    "registry-mirrors": ["https://registry.docker-cn.com"],
    "insecure-registries": ["192.168.0.12"],
    "log-driver": "json-file",
    "log-opts": {"max-size": "100m", "max-file": "2"},
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

Note: "exec-opts": ["native.cgroupdriver=systemd"] is mandatory. If kubeadm init fails with the error below, the usual cause is the kubelet not running because of a docker/kubelet cgroup-driver mismatch; add that key to /etc/docker/daemon.json and restart docker:

[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
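Since dockerd refuses to start on malformed JSON (comments and trailing commas are not valid JSON), it helps to validate the file before restarting; a quick check, assuming the stock python on CentOS 7:

```shell
# Exits non-zero and prints the parse error if the JSON is invalid
python -m json.tool /etc/docker/daemon.json && systemctl restart docker
```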


Install kubeadm, kubelet and kubectl (pick one version; the commands for 1.18.8 and 1.18.20 are both shown):

yum install kubectl-1.18.8-0 kubelet-1.18.8-0 kubeadm-1.18.8-0 -y
# or
yum install kubectl-1.18.20-0 kubelet-1.18.20-0 kubeadm-1.18.20-0 -y

Configure kubelet to tolerate swap (a fallback in addition to the swapoff step below):

cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF

Start docker and enable it at boot:

systemctl start docker.service && systemctl status docker.service && systemctl enable docker.service

Let bridged traffic traverse iptables (the br_netfilter module must be loaded for these keys to exist):

modprobe br_netfilter
cat >> /etc/sysctl.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p

Enable kubelet at boot (no need to start it now; kubeadm will start it):

systemctl enable kubelet.service

Turn off swap:

swapoff -a
sed -i "s/^\/dev\/mapper\/centos-swap/#&/" /etc/fstab

Pre-pull the control-plane images (alternatively, just pass --image-repository registry.aliyuncs.com/google_containers to kubeadm init, as done below, and skip this script):

#!/bin/bash

images=(
    kube-apiserver:v1.18.20
    kube-controller-manager:v1.18.20
    kube-scheduler:v1.18.20
    kube-proxy:v1.18.20
    pause:3.2
    etcd:3.4.3-0
    coredns:1.6.7
)

for imageName in "${images[@]}" ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done


Install the load balancer; keepalived + haproxy is used here, running on every master:

Install keepalived:

yum install keepalived -y

Configuration file /etc/keepalived/keepalived.conf on the MASTER node:

! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id {{ ansible_hostname }}    # ansible template variable; set to each node's hostname
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 100
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.0.85/24 dev eth0 label eth0:0
    }
}
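On the other two masters the instance block is identical except for state and priority, so they hold the VIP only if the MASTER fails; a sketch for a backup node:

```
vrrp_instance VI_1 {
    state BACKUP             # only one node is MASTER
    interface eth0
    virtual_router_id 100    # must match on every node
    priority 90              # lower than the MASTER's 100
    advert_int 1
    authentication {
        auth_type PASS       # type and password must match on every node
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.0.85/24 dev eth0 label eth0:0
    }
}
```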

Install haproxy:

yum install haproxy -y

haproxy configuration, appended to /etc/haproxy/haproxy.cfg below the stock global and defaults sections:

frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes

backend kubernetes
    mode        tcp
    balance     roundrobin
    server  k8s-master01 192.168.0.80:6443 check
    server  k8s-master02 192.168.0.81:6443 check
    server  k8s-master03 192.168.0.82:6443 check
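Once keepalived and haproxy are running, the VIP should accept connections on 16443 (the backends stay DOWN until the first control plane is initialized); a quick probe, assuming the VIP used in the init command below:

```shell
# -k because the apiserver certificate is not being validated here
curl -k https://192.168.0.85:16443/healthz
```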


Initialize the first master. --control-plane-endpoint is the load balancer VIP and port; --kubernetes-version should match the installed kubeadm/kubelet packages.

kubeadm init --control-plane-endpoint "192.168.0.85:16443" --upload-certs \
 --kubernetes-version=v1.18.8 --pod-network-cidr=10.244.0.0/16 \
 --ignore-preflight-errors=SystemVerification \
 --image-repository registry.aliyuncs.com/google_containers
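After a successful init, kubeadm prints the usual kubectl setup, which is needed before applying the network plugin:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```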

Add more masters; the join command below is printed by kubeadm init (thanks to --upload-certs):

  kubeadm join 192.168.0.11:6443 --token 5w3vj8.nzx1h240b03xddo2 \
    --discovery-token-ca-cert-hash sha256:55d9aebbfa13e5659e2cb34394611700ce6a3c4831f98191da98bc009d5aa2e3 \
    --control-plane --certificate-key 877026e89a1835733735d4d2e2a38ff1130a7d3c05770af879ff9a532865bc2e

Deploy a pod network, e.g. flannel:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

or calico (download, then apply):

curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml

List the bootstrap tokens:

~]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
5w3vj8.nzx1h240b03xddo2   9h          2021-07-18T00:11:09+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token


Add worker nodes:

If the token has expired, generate a new token string (this only prints a value; nothing is registered until kubeadm token create):

kubeadm token generate

Create the token and print the full join command (--ttl=0 means it never expires):

kubeadm token create 8rl94q.4g4y5yj39zlfb41v --print-join-command --ttl=0

Join the cluster:

kubeadm join 192.168.0.12:16443 --token 8rl94q.4g4y5yj39zlfb41v \
--discovery-token-ca-cert-hash sha256:6adf08dd88964e98431efe4de74fb52278989c7606af0d20f514fdf13046a98a
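The sha256 discovery hash can also be recomputed at any time from the cluster CA on a control-plane node (this is the command from the kubeadm reference documentation):

```shell
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```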

Add additional master nodes:

Generate a certificate-key by hand (to pass to --certificate-key):

kubeadm alpha certs certificate-key

Or re-upload the certs and let kubeadm print a freshly generated certificate-key:

kubeadm init phase upload-certs --upload-certs

The resulting join command for a master:

kubeadm join 192.168.0.11:16443 --token 5w3vj8.nzx1h240b03xddo2 \
 --discovery-token-ca-cert-hash sha256:55d9aebbfa13e5659e2cb34394611700ce6a3c4831f98191da98bc009d5aa2e3 \
 --control-plane --certificate-key 877026e89a1835733735d4d2e2a38ff1130a7d3c05770af879ff9a532865bc2e

Verify CoreDNS resolution from a test pod:

~]# kubectl run busybox --image=busybox:1.28 --generator="run-pod/v1" -it --rm -- sh
/ # nslookup kube-dns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
 
Name:      kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
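If cluster names fail to resolve, checking the kube-dns service, its endpoints and the coredns pods usually narrows the problem down:

```shell
kubectl get svc,endpoints -n kube-system kube-dns
kubectl get pods -n kube-system -l k8s-app=kube-dns
```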