Setting Up a Kubernetes Cluster with kubeadm

minikube also bootstraps its cluster with kubeadm under the hood.

  • Install the packages kubelet, kubeadm, and kubectl
    • The control-plane components will run as containers in Docker (kubelet itself runs directly on the host as a systemd service)
  • Initialize the cluster
  • Add worker nodes to the cluster
  • Certificates are generated automatically
  • The cluster management components run as containers on the master node
  • The container images are hosted by Google (k8s.gcr.io), so pulling them from mainland China requires a proxy
    • Alternatively, download the images from the Aliyun mirror and re-tag them

Host Requirements

Three machines: one master and two workers.
Each runs CentOS 7 with 2 CPUs and 2 GB of RAM.

Host Preparation

Install the operating system: a minimal CentOS install, with disk partitions for /boot and /.

Set the hostname (run the matching command on each machine):

hostnamectl set-hostname master
hostnamectl set-hostname worker1
hostnamectl set-hostname worker2

Map the IPs in /etc/hosts (on all three nodes):

192.168.0.130 master
192.168.0.122 worker1
192.168.0.123 worker2
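
A sketch that appends all three entries in one shot on each node (assumes the addresses above; adjust them to your network):

cat >> /etc/hosts <<EOF
192.168.0.130 master
192.168.0.122 worker1
192.168.0.123 worker2
EOF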

Disable firewalld

systemctl stop firewalld
systemctl disable firewalld

# Confirm it is no longer running
firewall-cmd --state

Disable SELinux

cat /etc/selinux/config


sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
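
The sed edit only applies after a reboot; to also switch SELinux to permissive mode immediately:

setenforce 0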

Time Synchronization

yum install ntpdate
ntpdate time1.aliyun.com

crontab -e 
0 */1 * * * ntpdate time1.aliyun.com

Disable the Swap Partition

vim /etc/fstab

Prefix the swap line with # to comment it out.

free -m still shows the swap partition at this point.
reboot
After the reboot, free -m shows no swap.
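
A non-interactive alternative (a sketch; inspect /etc/fstab afterwards to be safe): comment out every swap entry and turn swap off right away, so this step alone needs no reboot.

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a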

Enable Bridge Filtering

touch /etc/sysctl.d/k8s.conf
Add the following content:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0

Load the br_netfilter filter module:

modprobe br_netfilter

Check that it is loaded:
lsmod | grep br_netfilter

Apply the bridge filter configuration:

sysctl -p /etc/sysctl.d/k8s.conf

Enable IPVS

kube-proxy implements Services with either iptables or IPVS;
IPVS forwards traffic more efficiently than iptables.

Install ipset and ipvsadm

yum -y install ipset ipvsadm

Run the following on all nodes:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Check that the modules are loaded (note the module names are ip_vs, not ipvs):

lsmod | grep -e ip_vs -e nf_conntrack_ipv4
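
Loading these modules does not by itself switch kube-proxy to IPVS; kube-proxy defaults to iptables mode. A sketch of how to switch after the cluster is up (edit the mode field, then recreate the kube-proxy pods):

kubectl edit configmap kube-proxy -n kube-system   # set mode: "ipvs"
kubectl delete pods -n kube-system -l k8s-app=kube-proxy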

Install Docker

A convenience script for installing Docker on CentOS:

curl -sSL https://get.daocloud.io/docker | sh

Enable Docker to start at boot:

systemctl enable docker

Change Docker's cgroup driver

sudo vim /etc/docker/daemon.json

{
"exec-opts": ["native.cgroupdriver=systemd"]
}
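
Restart Docker so the new cgroup driver takes effect (the consolidated script at the end of this post runs the same commands):

systemctl daemon-reload
systemctl restart docker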

Install kubeadm, kubectl, and kubelet

The following packages must be installed on every machine:

kubeadm: the command that bootstraps the cluster.

kubelet: runs on every node in the cluster and starts Pods and containers.

kubectl: the command-line tool for talking to the cluster.

Create /etc/yum.repos.d/kubernetes.repo with the following content:

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0

Install the latest version:

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Or install a specific version:

yum install -y kubelet-1.23.5 kubeadm-1.23.5 kubectl-1.23.5
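
To confirm what was installed (both commands come with the packages above):

kubeadm version -o short
kubelet --version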

Configure kubelet

To keep kubelet's cgroup driver consistent with the one Docker now uses, edit the following file:

vim /etc/sysconfig/kubelet
and add this line:
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
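
Also enable kubelet at boot; it will crash-loop until kubeadm init or kubeadm join runs, which is expected at this stage:

systemctl enable kubelet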

Download the Required Images

k8s.gcr.io/kube-apiserver:v1.23.5
k8s.gcr.io/kube-controller-manager:v1.23.5
k8s.gcr.io/kube-scheduler:v1.23.5
k8s.gcr.io/kube-proxy:v1.23.5
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
  1. Pull with docker pull
    Pulling all of the images directly requires access past the GFW.
    On a machine with such access, pull the images, then copy them onto every VM with docker save -o and docker load -i.

  2. Pull through a script (from the Aliyun mirror, re-tagging back to the k8s.gcr.io names):

#!/bin/bash
url=registry.cn-hangzhou.aliyuncs.com/google_containers
version=v1.23.5
for image in $(kubeadm config images list --kubernetes-version=$version); do
  # Keep only the last path component, e.g. k8s.gcr.io/coredns/coredns:v1.8.6 -> coredns:v1.8.6
  # (awk -F '/' '{print $2}' would drop the tag on the nested coredns image)
  name=${image##*/}
  docker pull $url/$name
  docker tag $url/$name $image   # restore the original k8s.gcr.io name
  docker rmi -f $url/$name
done
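
Assuming you save this as image.sh (the name reused in the consolidated scripts at the end of this post), run it on every node and confirm the images now carry the k8s.gcr.io names kubeadm expects:

bash image.sh
docker images | grep k8s.gcr.io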

Initialize the Cluster with kubeadm

kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.130 --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers

Here --pod-network-cidr must match the Network value in the flannel manifest below, --apiserver-advertise-address is the master's IP, and --image-repository points kubeadm at the Aliyun mirror instead of k8s.gcr.io.

On success, the output looks like this:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.130:6443 --token tprabz.gefub6pvv1x2qalv \
        --discovery-token-ca-cert-hash sha256:b617b1efda2e97a3f98c624a20090c0dc711bfe2e432a31631cbe87f215c103d 

Run these in order (as a regular user):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

At this point kubectl get nodes lists only the master, and it stays NotReady because no Pod network plugin is installed yet. flannel is a simple choice.

Flannel is an overlay network provider that can be used with Kubernetes.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Or apply it from a local yml file:

kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
       #image: flannelcni/flannel-cni-plugin:v1.0.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
       #image: flannelcni/flannel:v0.17.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel:v0.17.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
       #image: flannelcni/flannel:v0.17.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel:v0.17.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

kubectl apply -f kube-flannel.yml
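
To confirm flannel is running everywhere (the DaemonSet pods carry the app: flannel label from the manifest above):

kubectl get pods -n kube-system -l app=flannel -o wide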

On each worker node, run the following to join it to the cluster:

kubeadm join 192.168.0.130:6443 --token tprabz.gefub6pvv1x2qalv \
        --discovery-token-ca-cert-hash sha256:b617b1efda2e97a3f98c624a20090c0dc711bfe2e432a31631cbe87f215c103d 

If you have lost the token needed to join, generate a new one on the master:

kubeadm token create --print-join-command

Then run the printed command on each worker node.

Barring surprises, the cluster is now up.
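A quick sanity check from the master (node names follow the hostnames set earlier):

kubectl get nodes
kubectl get pods -n kube-system

All three nodes should report Ready once the flannel pods are up.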

Finally, the steps above consolidated into scripts

k8s_install.sh
#!/bin/bash
# First argument: the node's IP
# Second argument: the hostname (master, worker1, or worker2)

# yum -y update

if [ $# -lt 2 ]; then 
	echo "Missing arguments, e.g.: $0 <ip> <hostname>"
	exit 1
fi


echo "$1 $2" >> /etc/hosts

hostnamectl set-hostname $2

# Disable firewalld
echo "Disabling firewalld"
systemctl stop firewalld
systemctl disable firewalld

sleep 3
firewall-cmd --state

echo "SELINUX"
sed -ri 's/SELINUX=enforcing/SELINUX=disable/' /etc/selinux/config

echo "时间同步"
yum -y install ntpdate
ntpdate time1.aliyun.com
echo "0 */1 * * * ntpdate time1.aliyun.com" > /var/spool/cron/root


# Load br_netfilter now and persist it across the reboot at the end of this script
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf


echo "关闭 swap 分区,Swap 前加 # 注释掉"
read -p "按任意键继续..."

vim /etc/fstab

echo "配置网桥过滤功能"
cat >  /etc/sysctl.d/k8s.conf <<EOF 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF

sysctl -p /etc/sysctl.d/k8s.conf


echo "安装 ipset、ipvsadm"
yum -y install ipset ipvsadm

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4


echo "安装 docker"
curl -sSL https://get.daocloud.io/docker | sh

cat >  /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

systemctl daemon-reload
systemctl restart docker
systemctl enable docker

echo "安装 Kubernetes v1.23.5"

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

yum install -y kubelet-1.23.5 kubeadm-1.23.5 kubectl-1.23.5

# The heredoc truncates the file, so no prior rm/touch is needed
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF


systemctl enable kubelet.service

reboot

# echo "下载镜像"
# sh image.sh

# echo "初始化 Kubernetes"
# if [ $2 == 'master' ] ; then
# 	echo "初始化开始"
# 	kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=$1 --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers
# fi

# sh after_install.sh
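
Example usage (a sketch; run as root on each node, then continue with the image download and init steps after the reboot):

chmod +x k8s_install.sh
./k8s_install.sh 192.168.0.130 master
./k8s_install.sh 192.168.0.122 worker1
./k8s_install.sh 192.168.0.123 worker2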
image.sh
#!/bin/bash
url=registry.cn-hangzhou.aliyuncs.com/google_containers
version=v1.23.5
for image in $(kubeadm config images list --kubernetes-version=$version); do
  # Keep only the last path component, e.g. k8s.gcr.io/coredns/coredns:v1.8.6 -> coredns:v1.8.6
  # (awk -F '/' '{print $2}' would drop the tag on the nested coredns image)
  name=${image##*/}
  docker pull $url/$name
  docker tag $url/$name $image   # restore the original k8s.gcr.io name
  docker rmi -f $url/$name
done

Initialize the cluster (run on the master after the reboot and image download):

kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.130 --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers
after_install.sh
#!/bin/bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl apply -f kube-flannel.yml