These study notes are based on the video course published by instructor Du Kuan on 51CTO. Many thanks to him for sharing and teaching. If anything here infringes a copyright, please contact me promptly. Copyright belongs to the original author; do not reproduce or use without permission.
I. Basic Environment Configuration
| Hostname | IP Address | Description |
|---|---|---|
| k8s-master01 ~ 03 | 192.168.1.104 ~ 106 | master nodes * 3 |
| k8s-master-lb | 192.168.1.236 | keepalived virtual IP |
| k8s-node01 ~ 02 | 192.168.1.107 ~ 108 | worker nodes * 2 |
The Pod subnet, the Service subnet, and the host subnet must not overlap. On a public cloud, the VIP is the cloud provider's load balancer IP, e.g., the address of an internal SLB on Alibaba Cloud or an internal ELB on Tencent Cloud; in that case there is no need to set up keepalived and haproxy.
| Configuration | Notes |
|---|---|
| OS version | CentOS 7.9 |
| Docker version | 20.10.x |
| Pod subnet | 172.16.0.0/16 |
| Service subnet | 10.96.0.0/16 |
All nodes: change the hostname (adjust for each node as needed)
# hostnamectl set-hostname XXX
All nodes: configure hosts; edit /etc/hosts as follows
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.104 k8s-master01
192.168.1.105 k8s-master02
192.168.1.106 k8s-master03
192.168.1.236 k8s-master-lb # if this is not a highly available cluster, use Master01's IP here
192.168.1.107 k8s-node01
192.168.1.108 k8s-node02
All nodes: configure the yum repositories
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
All nodes: install the essential tools
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
All nodes: disable the firewall, selinux, dnsmasq, and swap. Server configuration:
systemctl disable --now firewalld
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
All nodes: turn off the swap partition
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
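A quick sanity check that swap is fully off: swapon should print no entries, and free should show 0 for Swap:
swapon -s
free -h | grep -i swap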
All nodes: install ntpdate and chrony
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate chrony -y
All nodes: synchronize the time. Time sync configuration:
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
# add the time server to the chronyd service
sed -i -e '/^pool.*/d' -e '/^server.*/d' -e '/^# Please consider .*/a\server time2.aliyun.com iburst' /etc/chrony.conf
systemctl enable --now chronyd
systemctl is-active chronyd
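Optionally confirm that chrony is actually syncing against the Aliyun server; the currently selected source is marked with ^*:
chronyc sources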
All nodes: configure limits:
ulimit -SHn 65535
vim /etc/security/limits.conf
# append the following at the end
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
All nodes: upgrade the system and reboot
yum update -y --exclude=kernel* && reboot # CentOS 7 needs this upgrade; on CentOS 8 upgrade as needed
Master01 node: set up passwordless SSH login to the other nodes. The configuration files and certificates generated during installation are all created on Master01, and the cluster is also managed from Master01. On Alibaba Cloud or AWS, a separate kubectl server is needed. Key configuration:
ssh-keygen -t rsa
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
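A quick loop verifies that passwordless login works from Master01; each line should print the remote hostname without a password prompt:
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh $i hostname;done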
Master01 node: download all the installation source files
cd /root/ ; git clone https://github.com/dotbalo/k8s-ha-install.git
If that cannot be downloaded, use https://gitee.com/dukuan/k8s-ha-install.git instead
II. Kernel Configuration
CentOS 7 needs its kernel upgraded to 4.18+; this guide upgrades to 4.19
Download the kernel packages on the master01 node:
cd /root
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
Copy them from master01 to the other nodes:
for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done
All nodes: install the kernel
cd /root && yum localinstall -y kernel-ml*
All nodes: change the default kernel boot entry
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
Check that the default kernel is 4.19
# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
All nodes: install ipvsadm:
yum install ipvsadm ipset sysstat conntrack libseccomp -y
All nodes: configure the ipvs modules. On kernel 4.19+, nf_conntrack_ipv4 was renamed to nf_conntrack; on 4.18 and below, keep using nf_conntrack_ipv4:
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
vim /etc/modules-load.d/ipvs.conf
# add the following
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
# systemctl enable --now systemd-modules-load.service
All nodes: enable the kernel parameters required by a k8s cluster:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.conf.all.route_localnet = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
All nodes: after configuring the kernel parameters, reboot the servers and confirm that the modules are still loaded after the reboot
reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
Then check that the kernel is 4.19
# uname -a
Linux k8s-master02 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
III. Installing the High Availability Components
(Note: if this is not a highly available cluster, there is no need to install haproxy and keepalived)
On public clouds, use the provider's own load balancer instead of haproxy and keepalived, e.g., Alibaba Cloud SLB/NLB or Tencent Cloud ELB, because most public clouds do not support keepalived. Also, on Alibaba Cloud the kubectl client must not run on a master node: Alibaba Cloud's SLB has a loopback problem, i.e., servers behind the SLB cannot access the SLB address themselves. Tencent Cloud has fixed this, so it is the recommended choice.
All Master nodes: install HAProxy and KeepAlived via yum
yum install keepalived haproxy -y
All Master nodes: configure HAProxy (see the HAProxy documentation for details; the configuration is identical on all Master nodes)
# mkdir /etc/haproxy
# vim /etc/haproxy/haproxy.cfg
global
  maxconn 2000
  ulimit-n 16384
  log 127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode http
  option httplog
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01 192.168.1.104:6443 check
  server k8s-master02 192.168.1.105:6443 check
  server k8s-master03 192.168.1.106:6443 check
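Before starting the service, the configuration can be syntax-checked; haproxy -c validates the file and reports the offending line on error:
haproxy -c -f /etc/haproxy/haproxy.cfg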
All Master nodes: configure KeepAlived. The configuration is NOT identical across nodes; mind each node's IP and network interface (the interface parameter).
Master01 node configuration (change ens32 in the config to the local NIC name):
# mkdir /etc/keepalived
# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    mcast_src_ip 192.168.1.104
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.236
    }
    track_script {
        chk_apiserver
    }
}
Master02 node configuration
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    mcast_src_ip 192.168.1.105
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.236
    }
    track_script {
        chk_apiserver
    }
}
Master03 node configuration
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    mcast_src_ip 192.168.1.106
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.236
    }
    track_script {
        chk_apiserver
    }
}
All master nodes: configure the KeepAlived health check script
# cat /etc/keepalived/check_apiserver.sh
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
Grant it execute permission
chmod +x /etc/keepalived/check_apiserver.sh
Start haproxy and keepalived
# systemctl daemon-reload
# systemctl enable --now haproxy
# systemctl enable --now keepalived
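With haproxy running, the health-check script can be exercised manually as a sanity check; it should exit 0. Note that by design it stops keepalived when haproxy is down, so only run it after haproxy has started:
/etc/keepalived/check_apiserver.sh; echo $?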
Important: if keepalived and haproxy are installed, test that keepalived actually works
All nodes: test the VIP
# ping 192.168.1.236 -c 4
PING 192.168.1.236 (192.168.1.236) 56(84) bytes of data.
64 bytes from 192.168.1.236: icmp_seq=1 ttl=64 time=0.464 ms
64 bytes from 192.168.1.236: icmp_seq=2 ttl=64 time=0.063 ms
64 bytes from 192.168.1.236: icmp_seq=3 ttl=64 time=0.062 ms
64 bytes from 192.168.1.236: icmp_seq=4 ttl=64 time=0.063 ms
--- 192.168.1.236 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3106ms
rtt min/avg/max/mdev = 0.062/0.163/0.464/0.173 ms
# telnet 192.168.1.236 16443
Trying 192.168.1.236...
Connected to 192.168.1.236.
Escape character is '^]'.
Connection closed by foreign host.
If the VIP does not respond to ping and telnet does not show the ] prompt, consider the VIP unusable. Do not continue; troubleshoot keepalived first, e.g., the firewall and selinux, the haproxy and keepalived service status, the listening ports, etc.
All nodes: the firewall status must be disabled and inactive: systemctl status firewalld
All nodes: the selinux status must be disabled: getenforce
Master nodes: check the haproxy and keepalived status: systemctl status keepalived haproxy
Master nodes: check the listening ports: netstat -lntp
If none of the above shows a problem, confirm:
- whether this is a public cloud machine
- whether this is a private cloud machine (OpenStack or similar)
Public clouds generally do not support keepalived, and private clouds may have restrictions too; check with your private cloud administrator
IV. Installing the K8s Components and Runtime
For versions below 1.24, either Docker or Containerd can be used; for 1.24 and later, choose Containerd as the Runtime.
1. Installing Containerd
All nodes: install docker-ce-20.10 (if it was installed previously, reinstall it)
# yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y
Docker itself does not need to be started; only Containerd needs to be configured and started.
First configure the kernel modules Containerd needs (all nodes):
# cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
All nodes: load the modules:
# modprobe -- overlay
# modprobe -- br_netfilter
All nodes: configure the kernel parameters Containerd needs:
# cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
All nodes: apply the kernel parameters:
# sysctl --system
All nodes: generate Containerd's default configuration file:
# mkdir -p /etc/containerd
# containerd config default | tee /etc/containerd/config.toml
All nodes: switch Containerd's Cgroup driver to Systemd:
# vim /etc/containerd/config.toml
Find containerd.runtimes.runc.options and set SystemdCgroup = true (the key already exists in the default config, so modify it in place; adding a second entry causes an error), as shown below:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  BinaryName = ""
  CriuImagePath = ""
  CriuPath = ""
  CriuWorkPath = ""
  IoGid = 0
  IoUid = 0
  NoNewKeyring = false
  NoPivotRoot = false
  Root = ""
  ShimCgroup = ""
  SystemdCgroup = true
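The same edit can be made non-interactively; a one-liner sketch, assuming the generated default config contains SystemdCgroup = false:
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
grep SystemdCgroup /etc/containerd/config.toml # should now show true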
All nodes: change sandbox_image (the pause image) to an address matching your version, e.g. registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
  restrict_oom_score_adj = false
  sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
  selinux_category_range = 1024
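Here too a sed one-liner works; it rewrites whatever sandbox_image value is currently present:
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml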
All nodes: start Containerd and enable it at boot:
# systemctl daemon-reload
# systemctl enable --now containerd
All nodes: point the crictl client at the Containerd runtime socket:
# cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
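A quick check that crictl can reach Containerd through this endpoint; it should print the runtime status as JSON rather than a connection error:
# crictl info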
2. Installing Docker (for versions before 1.24)
All nodes: install Docker-ce 20.10
yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y
Since newer kubelet versions recommend systemd, change Docker's CgroupDriver to systemd
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{ "registry-mirrors": [
"https://registry.docker-cn.com",
"http://hub-mirror.c.163.com",
"https://docker.mirrors.ustc.edu.cn"
],
"exec-opts": ["native.cgroupdriver=systemd"],
"max-concurrent-downloads": 10, "max-concurrent-uploads": 5, "log-opts": { "max-size": "300m", "max-file": "2" }, "live-restore": true }
EOF
All nodes: start Docker and enable it at boot
systemctl daemon-reload && systemctl enable --now docker
3. Installing the Kubernetes Components
First check on the Master01 node what the latest Kubernetes version is:
# yum list kubeadm.x86_64 --showduplicates | sort -r
All nodes: install the latest 1.27 versions of kubeadm, kubelet, and kubectl
# yum install kubeadm-1.27* kubelet-1.27* kubectl-1.27* -y
All nodes: enable kubelet at boot (the cluster has not been initialized yet, so kubelet has no configuration file and cannot start; this can be ignored for now)
systemctl daemon-reload
systemctl enable --now kubelet
kubelet will not start at this point and its log will show errors; this does not affect anything!
V. Cluster Initialization
Official initialization documentation:
https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/
If this is not a highly available cluster, change 192.168.1.236:16443 to master01's address and 16443 to the apiserver port (default 6443). Also make sure the kubernetesVersion value matches the kubeadm version on your servers (check with kubeadm version).
Run the following on master01
# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.104
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock # containerd
  # criSocket: /var/run/dockershim.sock # docker
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
---
apiServer:
  certSANs:
  - 192.168.1.236
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.1.236:16443
controllerManager: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.27.1 # change this version to match kubeadm version
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.0.0/16
  serviceSubnet: 10.96.0.0/16
scheduler: {}
Update the kubeadm config file to the current schema
kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml
Copy the new.yaml file to the other master nodes
for i in k8s-master02 k8s-master03; do scp new.yaml $i:/root/; done
Then pull the images in advance on all Master nodes to save initialization time (no configuration changes are needed on the other nodes, not even the IP address):
kubeadm config images pull --config /root/new.yaml
Correct output looks like this (versions may differ)
# kubeadm config images pull --config /root/new.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.27.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.27.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.27.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.27.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.6-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3
Initialize the Master01 node. Initialization generates the certificates and configuration files under /etc/kubernetes; afterwards, the other Master nodes simply join Master01
kubeadm init --config /root/new.yaml --upload-certs
A successful initialization produces a Token used when other nodes join, so record the token value from the output
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 192.168.1.236:16443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94 \
--control-plane --certificate-key c595f7f4a7a3beb0d5bdb75d9e4eff0a60b977447e76c1d6885e82c3aa43c94c
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.236:16443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94
Master01 node: configure the environment variable used to access the Kubernetes cluster
cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source /root/.bashrc
Check the node status (NotReady at this point is expected):
# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady control-plane 24s v1.27.0
With a kubeadm installation, all system components run as containers in the kube-system namespace; check the Pod status:
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-777d78ff6f-kstsz 0/1 Pending 0 14m
coredns-777d78ff6f-rlfr5 0/1 Pending 0 14m
etcd-k8s-master01 1/1 Running 0 14m
kube-apiserver-k8s-master01 1/1 Running 0 13m
kube-controller-manager-k8s-master01 1/1 Running 0 13m
kube-proxy-8d4qc 1/1 Running 0 14m
kube-scheduler-k8s-master01 1/1 Running 0 13m
Unlike a binary installation, kubelet's configuration files are /etc/sysconfig/kubelet and /var/lib/kubelet/config.yaml; restart the kubelet process after changing them
The other components' configuration files live in /etc/kubernetes/manifests, e.g. kube-apiserver.yaml. When such a yaml file changes, kubelet automatically reloads the configuration, i.e. restarts the pod; these files must not be created a second time
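For reference, on a freshly initialized kubeadm master the static pod manifests in that directory are:
# ls /etc/kubernetes/manifests/
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml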
kube-proxy's configuration is a configmap in the kube-system namespace. It can be switched to ipvs mode with the command below; the ipvs setting was left commented out at cluster initialization, so it has to be changed by hand
kubectl edit cm kube-proxy -n kube-system
mode: ipvs
After making the change, roll the kube-proxy Pods with a patch
kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system
Verify the kube-proxy mode
# curl 127.0.0.1:10249/proxyMode
ipvs
1. Troubleshooting a Failed Initialization
If initialization fails, reset and initialize again with the commands below (do not run them unless it failed)
kubeadm reset -f ; ipvsadm --clear ; rm -rf ~/.kube
If initialization keeps failing, check the system logs. CentOS log path: /var/log/messages; Ubuntu log path: /var/log/syslog
tail -f /var/log/messages | grep -v "not found"
Common causes of failure:
- The Containerd config file was modified incorrectly; re-check it against the "Installing Containerd" subsection
- new.yaml misconfiguration, e.g. forgetting to change port 16443 to 6443 for a non-HA cluster
- new.yaml misconfiguration: the three subnets overlap, causing IP address conflicts
- The VIP is unreachable, so initialization cannot succeed; in that case the messages log contains VIP timeout errors
VI. Highly Available Masters
Join the other masters to the cluster by running the following on master02 and master03 respectively (do not run it again on master01, and do not copy the command from this document verbatim; use the one from your own init output)
kubeadm join 192.168.1.236:16443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94 \
--control-plane --certificate-key c595f7f4a7a3beb0d5bdb75d9e4eff0a60b977447e76c1d6885e82c3aa43c94c
Check the current status (NotReady does not matter):
# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady control-plane 4m23s v1.27.0
k8s-master02 NotReady control-plane 66s v1.27.0
k8s-master03 NotReady control-plane 14s v1.27.0
1. Handling Token Expiry
Note: the steps below are only needed when the Token produced by the init command above has expired; if it has not expired, just join directly
Generate a new token after expiry
kubeadm token create --print-join-command
Masters also need a new --certificate-key
kubeadm init phase upload-certs --upload-certs
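The two outputs combine into a single control-plane join command; a sketch with placeholders, substitute the token/hash printed by token create and the key printed by upload-certs:
kubeadm join 192.168.1.236:16443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --control-plane --certificate-key <certificate-key>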
VII. Configuring the Node (Worker) Nodes
Worker nodes mainly run the company's business applications. In production, Master nodes should not run any Pods besides the system components; in test environments, Masters may run Pods to save resources.
kubeadm join 192.168.1.236:16443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94
After all nodes have joined, check the cluster status (NotReady does not matter)
# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady control-plane 4m23s v1.27.0
k8s-master02 NotReady control-plane 66s v1.27.0
k8s-master03 NotReady control-plane 14s v1.27.0
k8s-node01 NotReady <none> 13s v1.27.0
k8s-node02 NotReady <none> 10s v1.27.0
VIII. Installing the Calico Component
Run the following steps on master01 only
cd /root/k8s-ha-install && git checkout manual-installation-v1.27.x && cd calico/
Set the Pod subnet:
POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`
sed -i "s#POD_CIDR#${POD_SUBNET}#g" calico.yaml
kubectl apply -f calico.yaml
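Optionally watch the calico-node DaemonSet finish rolling out before inspecting individual pods:
kubectl rollout status daemonset/calico-node -n kube-system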
Check the container and node status
# kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-5f6d4b864b-pwvnb 1/1 Running 0 3m29s
calico-node-5lz9m 1/1 Running 0 3m29s
calico-node-8z4bg 1/1 Running 0 3m29s
calico-node-lmzvf 1/1 Running 0 3m29s
calico-node-mpngv 1/1 Running 0 3m29s
calico-node-vmqsl 1/1 Running 0 3m29s
coredns-54d67798b7-8525g 1/1 Running 0 39m
coredns-54d67798b7-fxs72 1/1 Running 0 39m
etcd-k8s-master01 1/1 Running 0 39m
etcd-k8s-master02 1/1 Running 0 33m
etcd-k8s-master03 1/1 Running 0 31m
kube-apiserver-k8s-master01 1/1 Running 0 39m
kube-apiserver-k8s-master02 1/1 Running 0 33m
kube-apiserver-k8s-master03 1/1 Running 0 30m
kube-controller-manager-k8s-master01 1/1 Running 1 39m
kube-controller-manager-k8s-master02 1/1 Running 0 33m
kube-controller-manager-k8s-master03 1/1 Running 0 31m
kube-proxy-hnkmj 1/1 Running 0 39m
kube-proxy-jk4dm 1/1 Running 0 32m
kube-proxy-nbcg2 1/1 Running 0 32m
kube-proxy-qv9k7 1/1 Running 0 32m
kube-proxy-x6xdc 1/1 Running 0 33m
kube-scheduler-k8s-master01 1/1 Running 1 39m
kube-scheduler-k8s-master02 1/1 Running 0 33m
kube-scheduler-k8s-master03 1/1 Running 0 30m
All nodes now become Ready
# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready control-plane 7m52s v1.27.0
k8s-master02 Ready control-plane 4m35s v1.27.0
k8s-master03 Ready control-plane 3m43s v1.27.0
k8s-node01 Ready <none> 3m42s v1.27.0
k8s-node02 Ready <none> 3m39s v1.27.0
IX. Deploying Metrics Server
In newer Kubernetes versions, system resource metrics are collected by Metrics-server, which gathers memory, disk, CPU, and network usage for nodes and Pods.
Copy front-proxy-ca.crt from the Master01 node to all worker nodes
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node01:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node02:/etc/kubernetes/pki/front-proxy-ca.crt # repeat for every remaining node
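Equivalently, a small loop copies the certificate to every worker node in one go (node names as defined in Section I):
for i in k8s-node01 k8s-node02;do scp /etc/kubernetes/pki/front-proxy-ca.crt $i:/etc/kubernetes/pki/front-proxy-ca.crt;done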
Run all of the following on the master01 node
Install metrics server
cd /root/k8s-ha-install/kubeadm-metrics-server
# kubectl create -f comp.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
Check the status
kubectl get po -n kube-system -l k8s-app=metrics-server
Once the pod shows 1/1 Running:
# kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-master01 153m 3% 1701Mi 44%
k8s-master02 125m 3% 1693Mi 44%
k8s-master03 129m 3% 1590Mi 41%
k8s-node01 73m 1% 989Mi 25%
k8s-node02 64m 1% 950Mi 24%
# kubectl top po -A # -A means all namespaces
NAMESPACE NAME CPU(cores) MEMORY(bytes)
kube-system calico-kube-controllers-66686fdb54-74xkg 2m 17Mi
kube-system calico-node-6gqpb 21m 85Mi
kube-system calico-node-bmvjt 29m 76Mi
kube-system calico-node-hdp9c 15m 82Mi
kube-system calico-node-wwrfv 23m 86Mi
kube-system calico-node-zzv88 22m 84Mi
kube-system calico-typha-67c6dc57d6-hj6l4 2m 23Mi
kube-system calico-typha-67c6dc57d6-jm855 2m 22Mi
kube-system coredns-7d89d9b6b8-sr6mf 1m 16Mi
kube-system coredns-7d89d9b6b8-xqwjk 1m 16Mi
kube-system etcd-k8s-master01 24m 96Mi
kube-system etcd-k8s-master02 20m 91Mi
kube-system etcd-k8s-master03 21m 92Mi
kube-system kube-apiserver-k8s-master01 41m 502Mi
kube-system kube-apiserver-k8s-master02 35m 476Mi
kube-system kube-apiserver-k8s-master03 71m 480Mi
kube-system kube-controller-manager-k8s-master01 15m 65Mi
kube-system kube-controller-manager-k8s-master02 1m 26Mi
kube-system kube-controller-manager-k8s-master03 2m 27Mi
kube-system kube-proxy-8lt45 1m 18Mi
kube-system kube-proxy-d6jfh 1m 18Mi
kube-system kube-proxy-hfnvz 1m 19Mi
kube-system kube-proxy-nsms8 1m 18Mi
kube-system kube-proxy-xmlhq 3m 21Mi
kube-system kube-scheduler-k8s-master01 2m 26Mi
kube-system kube-scheduler-k8s-master02 2m 24Mi
kube-system kube-scheduler-k8s-master03 2m 24Mi
kube-system metrics-server-d54b585c4-4dqpf 46m 16Mi
X. Deploying the Dashboard
The Dashboard displays the various resources in the cluster; it can also be used to view Pod logs in real time and to execute commands inside containers
1. Installing a Specific Dashboard Version
cd /root/k8s-ha-install/dashboard/
# kubectl create -f .
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
2. Installing the Latest Version
Official GitHub repository: https://github.com/kubernetes/dashboard
The latest dashboard release can be found on the official page
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Substitute the actual version number
# vim admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
# kubectl apply -f admin.yaml -n kube-system
3. Logging in to the Dashboard
Add the following startup parameters to the Google Chrome launcher to work around the Dashboard being inaccessible over its self-signed certificate:
--test-type --ignore-certificate-errors
Change the dashboard Service to NodePort
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
Change type: ClusterIP to NodePort (skip this step if it is already NodePort)
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP
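Instead of editing interactively, the same change can be applied with a one-line strategic merge patch:
kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'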
Check the port number
# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   192.168.1.104   <none>        443:18282/TCP   201d
Using your own instance's port number, the Dashboard can be reached via the IP of any host running kube-proxy, plus that port:
Access the Dashboard at https://192.168.1.104:18282 (change 18282 to your own port) and choose Token as the login method
Before 1.24 the token could be read directly with
# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-r4vcp
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: 2112796c-1c9e-11e9-91ab-000c298bf023
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXI0dmNwIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMTEyNzk2Yy0xYzllLTExZTktOTFhYi0wMDBjMjk4YmYwMjMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.bWYmwgRb-90ydQmyjkbjJjFt8CdO8u6zxVZh-19rdlL_T-n35nKyQIN7hCtNAt46u6gfJ5XXefC9HsGNBHtvo_Ve6oF7EXhU772aLAbXWkU1xOwQTQynixaypbRIas_kiO2MHHxXfeeL_yYZRrgtatsDBxcBRg-nUQv4TahzaGSyK42E_4YGpLa3X3Jc4t1z0SQXge7lrwlj8ysmqgO4ndlFjwPfvg0eoYqu9Qsc5Q7tazzFf9mVKMmcS1ppPutdyqNYWL62P1prw_wclP0TezW1CsypjWSVT4AuJU8YmH8nTNR1EXn8mJURLSjINv6YbZpnhBIPgUGk1JYVLcn47w
From 1.24 on, a login Token must be created first and then used with the command above
kubectl create token admin-user -n kube-system
Paste the token value into the Token field and click Sign in to access the Dashboard
XI. Notes
Note: in a kubeadm-installed cluster, the certificates are valid for one year by default. kube-apiserver, kube-scheduler, kube-controller-manager, and etcd on the master nodes all run as containers; they can be seen with kubectl get po -n kube-system.
After a kubeadm install, master nodes do not allow Pod scheduling by default; remove the Taint as follows to allow it
# kubectl taint node -l node-role.kubernetes.io/control-plane node-role.kubernetes.io/control-plane:NoSchedule-
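Confirm the taint is gone; every control-plane node should now report Taints: &lt;none&gt;:
kubectl describe node -l node-role.kubernetes.io/control-plane | grep Taints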
XII. Resetting a Failed K8s Install
Error: "It seems like the kubelet isn't running or healthy"; the logs show that kubelet cannot connect to containerd
Check the containerd status
systemctl status containerd
If this is a new cluster, kubelet and containerd can simply be reset and reinstalled
systemctl stop containerd kubelet
rm -rf /etc/kubernetes/
rm -rf /var/lib/containerd/ /var/lib/kubelet/
If it reports that kubelet directories are still in use by pods, unmount them:
umount /var/lib/kubelet/pods/af97baf3... (path truncated)
Copy the config file from another node to the faulty node
scp /etc/containerd/config.toml k8s-node02:/etc/containerd/config.toml
Restart kubelet and containerd
# systemctl daemon-reload
# systemctl restart containerd kubelet
Watch the logs
tail -f /var/log/messages
Rejoin the faulty node to the cluster
kubeadm join 192.168.1.236:16443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:df72788de04bbc2e8fca70becb8a9e8503a962b5d7cd9b1842a0c39930d08c94
XIII. 99-Year Certificates with kubeadm
1. Renewing Certificates for One Year
kubeadm certs check-expiration # check certificate validity
cp -rp /etc/kubernetes/pki/ /opt/pki.bak/ # back up the certificates
Run on all masters
kubeadm certs renew all # renew the certificates
systemctl restart kubelet # restart so the cluster picks up the new certificates
2. Building 99-Year Certificates
Check the k8s version
kubeadm version
Download the k8s source code
git clone https://gitee.com/mirrors/kubernetes.git
Check out the tag matching your k8s version
git checkout v1.25.0
Start a golang build environment
docker run -ti --rm -v `pwd`:/go/src/ registry.cn-beijing.aliyuncs.com/dotbalo/golang:kubeadm bash
# cd /go/src/
# go env -w GOPROXY=https://goproxy.cn,direct
# go env -w GOSUMDB=off
# grep "365" cmd/kubeadm/app/constants/constants.go
# sed -i 's#365#365 * 100#g' cmd/kubeadm/app/constants/constants.go
# grep "365" cmd/kubeadm/app/constants/constants.go
# mkdir -p _output/
# chmod 777 -R _output/
# make WHAT=cmd/kubeadm
# cp _output/bin/kubeadm /opt/
Check the version of the newly built kubeadm, then use it to renew the certificates
/opt/kubeadm version
/opt/kubeadm certs renew all
kubeadm certs check-expiration
systemctl restart kubelet