Installing etcd, docker and flannel for k8s
Source: Original
Date: 2019-06-04
Author: 脚本小站
Category: Cloud Native
VXLAN mode:
Flannel consults its routing table and encapsulates packets from the docker network inside physical-network packets, which then travel over the physical network to the destination node. In other words, one IP packet is wrapped inside another IP header: a tunneling mechanism.
Flannel generates routing rules that treat every node as a gateway. Each node owns its own subnet, and the number of Pods a node can run is determined by that subnet.
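For example, with the 10.244.0.0/16 pod network configured later in this article and a /24 subnet per node (flannel's default SubnetLen, not something set explicitly here), the per-node split works out as follows (a sketch):

```shell
# Pod network: 10.244.0.0/16; flannel hands each node a /24 (its default SubnetLen).
NETWORK_PREFIX=16   # prefix length of the whole pod network
SUBNET_PREFIX=24    # prefix length of each node's subnet
NODES=$(( 2 ** (SUBNET_PREFIX - NETWORK_PREFIX) ))   # how many node subnets fit
PODS_PER_NODE=$(( 2 ** (32 - SUBNET_PREFIX) - 2 ))   # usable addresses per node
echo "max nodes: $NODES, pods per node: $PODS_PER_NODE"
# prints: max nodes: 256, pods per node: 254
```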
etcd:
Install:
yum install etcd -y
Configure:
vim /etc/etcd/etcd.conf

[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="https://etcd:2379,https://localhost:2379"
ETCD_NAME="etcd"
ETCD_ADVERTISE_CLIENT_URLS="https://etcd:2379,https://localhost:2379"

[Security]
ETCD_CERT_FILE="/etc/etcd/cert/etcd_server.crt"
ETCD_KEY_FILE="/etc/etcd/cert/etcd_server.key"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/cert/ca.crt"
Start:
systemctl enable etcd.service && systemctl start etcd.service && systemctl status etcd.service
flannel:
Download: get the release archive from the official site, then extract it into a directory; see the flanneld.service file below.
https://github.com/coreos/flannel
Install:
wget https://github.com/flannel-io/flannel/releases/download/v0.14.0/flannel-v0.14.0-linux-amd64.tar.gz
mkdir /usr/local/flannel
tar -xf flannel-v0.14.0-linux-amd64.tar.gz -C /usr/local/flannel/
Write the flannel network configuration into etcd:
export ETCDCTL_API=2
etcdctl --ca-file=etcd_ca.crt \
  --cert-file=etcd_client.crt \
  --key-file=etcd_client.key \
  --endpoints=https://etcd:2379 \
  mk /coreos.com/network/config '{ "Network": "10.244.0.0/16", "Backend": { "Type": "vxlan" } }'
Check the flannel configuration stored in etcd:
etcdctl --ca-file=etcd_ca.crt \
  --cert-file=etcd_client.crt \
  --key-file=etcd_client.key \
  --endpoints=https://etcd:2379 \
  get /coreos.com/network/config
Add a systemd unit file:
cat > /usr/lib/systemd/system/flanneld.service <<EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
ExecStart=/usr/local/flannel/flanneld --ip-masq \\
  --etcd-cafile=/etc/kubernetes/pki/etcd/etcd_ca.crt \\
  --etcd-certfile=/etc/kubernetes/pki/etcd/etcd_client.crt \\
  --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd_client.key \\
  --etcd-endpoints=https://etcd:2379 \\
  --etcd-prefix=/coreos.com/network
ExecStartPost=/usr/local/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
Start flannel:
systemctl enable flanneld.service && systemctl start flanneld.service && systemctl status flanneld.service
Check the generated file:
The environment file below is generated by the mk-docker-opts.sh script. It writes the Pod subnet information into /run/flannel/subnet.env; when docker starts, it loads this file to configure the docker0 bridge.
cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=10.244.8.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.244.8.1/24 --ip-masq=false --mtu=1450"
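To see how docker picks these options up, you can reproduce what systemd's EnvironmentFile= does by sourcing the file yourself (a sketch using the sample values above, written to /tmp so the real file is untouched):

```shell
# Recreate the sample subnet.env (same values as shown above) and source it,
# which is effectively what EnvironmentFile= does for docker.service.
cat > /tmp/subnet.env <<'EOF'
DOCKER_OPT_BIP="--bip=10.244.8.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.244.8.1/24 --ip-masq=false --mtu=1450"
EOF
. /tmp/subnet.env
echo "dockerd would start with:$DOCKER_NETWORK_OPTIONS"
```

The $DOCKER_NETWORK_OPTIONS variable is then expanded on dockerd's command line, which is why docker0 comes up with the flannel-assigned --bip and --mtu.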
Install and configure docker:
Install:
yum install docker-ce -y
Configure:
Add the environment file EnvironmentFile=/run/flannel/subnet.env and the $DOCKER_NETWORK_OPTIONS variable to the startup options in docker.service.
vim /usr/lib/systemd/system/docker.service

[Unit]
...
After= ... flanneld.service    # start after flannel
...
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock $DOCKER_NETWORK_OPTIONS
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT    # enable forwarding
...
Start docker:
systemctl enable docker.service && systemctl start docker.service && systemctl status docker.service
Enable forwarding (make bridged traffic pass through iptables):
cat >> /etc/sysctl.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl -p
Check the network configuration:
# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.244.8.1  netmask 255.255.255.0  broadcast 10.244.8.255
        ether 02:42:ee:33:88:6f  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.72  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::5054:ff:fe3c:945b  prefixlen 64  scopeid 0x20<link>
        ether 52:54:00:3c:94:5b  txqueuelen 1000  (Ethernet)
        RX packets 851977  bytes 203742410 (194.3 MiB)
        RX errors 0  dropped 131903  overruns 0  frame 0
        TX packets 477348  bytes 1134109842 (1.0 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.8.0  netmask 255.255.255.255  broadcast 10.244.8.0
        inet6 fe80::388b:71ff:fef2:2ede  prefixlen 64  scopeid 0x20<link>
        ether 3a:8b:71:f2:2e:de  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 5 overruns 0  carrier 0  collisions 0
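Note the MTU values above: flannel.1 runs at 1450 rather than 1500 because VXLAN encapsulation adds 50 bytes of headers on top of each inner frame, which flannel subtracts from the physical MTU (a sketch of the arithmetic, using the usual VXLAN header accounting):

```shell
# VXLAN overhead per packet: outer IP (20) + UDP (8) + VXLAN header (8) + inner Ethernet (14) = 50 bytes
PHYS_MTU=1500
VXLAN_OVERHEAD=$(( 20 + 8 + 8 + 14 ))
OVERLAY_MTU=$(( PHYS_MTU - VXLAN_OVERHEAD ))
echo "overlay MTU: $OVERLAY_MTU"
# prints: overlay MTU: 1450
```

This is also why the generated subnet.env passes --mtu=1450 to docker, so docker0 and the containers never emit frames too large to encapsulate.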
Routing table:
# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         gateway         0.0.0.0         UG    0      0        0 eth0
10.32.221.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
10.244.36.0     0.0.0.0         255.255.255.0   U     0      0        0 docker0
169.254.169.254 10.32.221.2     255.255.255.255 UGH   0      0        0 eth0
iptables:
iptables -L -nv
...
Check the information in etcd:
In the end docker0 and flannel.1 are part of the same network, with docker0 sitting on a subnet of flannel.1's network.
etcdctl --endpoints=http://192.168.1.131:2379 ls /coreos.com/network/subnets
/coreos.com/network/subnets/10.1.16.0-21
/coreos.com/network/subnets/10.1.32.0-21
The routing table in etcd:
etcdctl --endpoints=http://192.168.1.131:2379 get /coreos.com/network/subnets/10.1.16.0-21
{"PublicIP":"192.168.1.132","BackendType":"vxlan","BackendData":{"VtepMAC":"12:b9:d6:fb:f8:e3"}}
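The last path component of each subnet key encodes the node's CIDR, with a dash standing in for the slash; decoding it is plain string substitution (a sketch, assuming bash):

```shell
# "/coreos.com/network/subnets/10.1.16.0-21" encodes the node subnet 10.1.16.0/21
KEY="/coreos.com/network/subnets/10.1.16.0-21"
SUBNET="${KEY##*/}"      # strip the key prefix -> 10.1.16.0-21
CIDR="${SUBNET/-//}"     # turn the dash back into a slash -> 10.1.16.0/21
echo "$CIDR"
# prints: 10.1.16.0/21
```

Each flanneld watches this prefix and, for every entry, installs a route sending that subnet's traffic to the VTEP (VtepMAC) at the node's PublicIP.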
Enable forwarding:
Run the following command on each node, or add it to docker.service:
iptables -P FORWARD ACCEPT
With this in place, containers can communicate across nodes.
Communication test:
On node1:
docker run -it busybox
/ # ifconfig
On node2:
docker run -it busybox
/ # ping CONTAINER_IP
Reference:
yisu.com/zixun/7746.html