Getting to Know k8s and Setting It Up

Kubernetes (k8s)

1. Getting to Know k8s

1.1. What is k8s?

Kubernetes, abbreviated K8s, gets its name from replacing the eight characters "ubernete" with the digit 8. It is an open-source system, written in Go, for managing containerized applications across multiple hosts on a cloud platform. Kubernetes aims to make deploying containerized applications simple and powerful, and it provides mechanisms for application deployment, planning, updating, and maintenance.

In short, k8s is a technology for managing, scheduling, and scaling containers.

1.2. Environment Setup

There are several main ways to deploy a Kubernetes environment (cluster):

minikube (not widely used)

minikube is a tool for running Kubernetes locally. It can run a single-node Kubernetes cluster on a personal computer (Windows, macOS, or Linux PC) so you can try out Kubernetes
or use it for day-to-day development work. It is not suitable for production.

In-browser tutorial (generally used for testing code): https://kubernetes.io/zh/docs/tutorials/hello-minikube/

kind (not widely used)

kind is a tool similar to minikube for running Kubernetes on your local computer; it requires Docker to be installed and configured.
Details: https://kind.sigs.k8s.io/ (no remote operation)

kubeadm (commonly used)

kubeadm is a K8s deployment tool that provides two commands, kubeadm init and kubeadm join, for quickly standing up a Kubernetes cluster; it can be used in production.

https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

Binary packages (for experienced users)

Download the release binaries from GitHub and manually deploy and install each component to assemble a Kubernetes cluster. The steps are fairly tedious, but they give you a much clearer picture of each component.

yum install (version too old)

Install each Kubernetes component via yum to assemble a cluster; however, the k8s version in the yum repositories is quite old by now, so this approach is rarely used anymore.

Third-party tools

Some experts have wrapped the process into tools that can be used to install a k8s environment.

Paid offerings

Simply buy managed k8s from a public cloud platform such as Alibaba Cloud and it is done with one click.

1.3. Deploying k8s with kubeadm

kubeadm is a tool released by the official community for quickly deploying Kubernetes clusters. With it, a cluster can be stood up with just two commands:
1. Create a Master node:

kubeadm init

2. Join the Node machines to the Master's cluster:

$ kubeadm join <Master node IP:port>

Requirements:

(1) One or more machines running CentOS 7.x (x86_64);

(2) Hardware: at least 2 GB of RAM and at least 2 CPU cores;
(3) All machines in the cluster can communicate with one another;

(4) All machines in the cluster can reach the internet, which is needed to pull images;
(5) Swap is disabled;

For convenience, enable sending input to all sessions in Xshell: View > Compose > Compose Pane > bottom-left icon > All Sessions.

That way, commands typed into the pane at the bottom are executed in every session above.


Apply the following settings on both the master and the nodes.
#1. Disable the firewall

systemctl stop firewalld

systemctl disable firewalld
#Or install iptables with empty rules instead
systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
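
If disabling the firewall outright is not an option (e.g. on production machines), a hedged alternative is to keep firewalld and open only the ports kubeadm needs; the port list below follows the official kubeadm documentation (master ports first, NodePort range for workers):

#Alternative sketch: keep firewalld and open the required ports instead of disabling it
firewall-cmd --permanent --add-port=6443/tcp          #Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp     #etcd server client API
firewall-cmd --permanent --add-port=10250/tcp         #kubelet API (master and workers)
firewall-cmd --permanent --add-port=10251-10252/tcp   #kube-scheduler, kube-controller-manager
firewall-cmd --permanent --add-port=30000-32767/tcp   #NodePort services (workers)
firewall-cmd --reload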


#Disable SELinux
 
setenforce 0 #temporary
sed -i 's/enforcing/disabled/' /etc/selinux/config #permanent
Notes:
1. SELinux is a security-hardening component; it is fairly complex, so it is usually disabled outright.
2. SELinux is disabled so that containers can access the host's filesystem.
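
To confirm SELinux is really off, the standard status commands can be used; right after setenforce 0 expect Permissive, and Disabled after a reboot:

#Verify the SELinux state
getenforce
sestatus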

#Disable swap (k8s refuses to run with swap on, for performance reasons)
sed -ri 's/.*swap.*/#&/' /etc/fstab #permanent
swapoff -a #temporary
Notes:
1. Swap is "virtual memory": when physical RAM runs low, part of the disk is used as a swap partition standing in for RAM, relieving memory pressure.
2. Since version 1.8, kubelet requires that swap be turned off.
3. free -m shows the size of the swap space. After commenting out the fstab entry you will notice swap is still active, because that change only takes effect after a reboot (hence the swapoff -a for the current session).
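
A quick verification, per note 3: after swapoff -a (or a reboot), free -m should show 0 for swap and the swap summary should list nothing:

#Verify that swap is off
free -m      #the Swap line should read 0 0 0
swapon -s    #prints nothing when no swap is active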

# Set the hostname of each machine (the names below match the /etc/hosts entries that follow)
hostnamectl set-hostname k8smaster1   #on the master
hostnamectl set-hostname k8snode1     #on node 1
hostnamectl set-hostname k8snode2     #on node 2

#Add hosts entries
cat >> /etc/hosts << EOF
<master-ip> k8smaster1
<node1-ip>  k8snode1
<node2-ip>  k8snode2
EOF
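
A quick sanity check that the entries work (host names are the ones defined above; run after filling in the real IPs):

#Verify name resolution via /etc/hosts
ping -c 1 k8smaster1
ping -c 1 k8snode1
ping -c 1 k8snode2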

#Set bridge parameters (so bridged traffic passes through iptables)
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
#Apply the settings
sysctl --system  
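
On some systems the two bridge keys above do not exist until the br_netfilter kernel module is loaded; if sysctl --system complains about missing keys, this sketch loads the module and re-applies:

#Load the bridge netfilter module if the keys are missing, then re-apply and verify
modprobe br_netfilter
sysctl --system
sysctl net.bridge.bridge-nf-call-iptables   #should print: ... = 1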

#Sync the system time
yum install ntpdate -y
ntpdate time.windows.com
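
The ntpdate call above is a one-shot sync, which is enough for the installation itself; to keep the clocks in sync afterwards, one hedged option is a cron entry:

#Optional: re-sync the clock every hour via cron
(crontab -l 2>/dev/null; echo "0 * * * * /usr/sbin/ntpdate time.windows.com") | crontab -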

Installation

Docker, kubeadm, and kubelet must be installed on every server node.

#Install Docker
#Optional: switch the repo to the Aliyun mirror (faster downloads in mainland China)
yum install wget -y

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
#Install a pinned version
yum install docker-ce-19.03.13 -y

#Enable Docker on boot
systemctl enable docker.service

#Configure a registry mirror (image pull accelerator)
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://hj6c4pxo.mirror.aliyuncs.com"] 
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker	
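
One pitfall worth knowing: kubeadm's preflight checks warn when Docker uses the cgroupfs cgroup driver, because kubelet recommends systemd. A hedged sketch that sets the driver while keeping the mirror setting above (this rewrites daemon.json, so both keys are included):

#Optional sketch: align Docker's cgroup driver with kubelet's recommendation
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://hj6c4pxo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker
docker info | grep -i cgroup   #should now report: Cgroup Driver: systemd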

 
#Install kubeadm and kubelet

#Add the Aliyun yum repo for k8s (faster downloads)
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

#Install kubeadm, kubelet, and kubectl
yum install kubelet-1.19.4 kubeadm-1.19.4 kubectl-1.19.4 -y
#Enable kubelet on boot
systemctl enable kubelet.service

#Verify the installation
yum list installed | grep kubelet
yum list installed | grep kubeadm
yum list installed | grep kubectl

Check the installed version: kubelet --version
kubelet: runs on every node in the cluster and is responsible for starting Pods and containers;
kubeadm: used to initialize the cluster;
kubectl (similar in spirit to redis-cli): the Kubernetes command-line tool, used to deploy and manage applications.
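
For orientation, a few everyday kubectl commands (all standard; some.yaml is just a placeholder file name):

kubectl get nodes                  #list the cluster's nodes
kubectl get pods -A                #list pods in all namespaces
kubectl describe node k8smaster1   #detailed information about one node
kubectl apply -f some.yaml         #create/update resources from a manifest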

#Initialize the master node (run on the master host)
kubeadm init --apiserver-advertise-address=192.168.16.135 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.19.4 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16


Explanation: change 192.168.16.135 to the IP of your own master node.
Note: the service-cidr must not overlap or conflict with the Pod CIDR or the host network. In general, pick a private address range that neither the host network nor the Pod CIDR uses; for example, with a Pod CIDR of 10.244.0.0/16, a service CIDR of 10.96.0.0/12 works fine, as long as no ranges overlap.
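
As a concrete check of that rule, the three address ranges used by the init command above are:

#host network      192.168.16.0/24   (the machines themselves)
#pod network       10.244.0.0/16     (--pod-network-cidr)
#service network   10.96.0.0/12      (--service-cidr, i.e. 10.96.0.0 - 10.111.255.255)
#none of the three ranges overlap, so the choice is valid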

This takes a while.
#When the following line appears, your master node has been configured successfully:
Your Kubernetes control-plane has initialized successfully!
-------------------------------------------------------------------------------------------------------
To start using your cluster, you need to run the following as a regular user: (run these right away)

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
-------------------------------------------------------------------------------------------------------
You should now deploy a pod network to the cluster. (see the docs below if unsure)
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
-------------------------------------------------------------------------------------------------------
Then you can join any number of worker nodes by running the following on each as root: (this is how the nodes are joined)

kubeadm join <master-node-ip>:6443 --token 7ye4m8.ftwfagsf8l6sccdf \
    --discovery-token-ca-cert-hash sha256:db03daf9b5b13abfd9173e50f1f13aaf6ac93b0e3e44115d185886826fa85395 
--------------------------------------------------------------------------------------------------------
#Now run the following (copied from the output above)
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

#Check node status
kubectl get nodes
[root@k8smaster1 ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE    VERSION
k8smaster1   NotReady   master   9m6s   v1.19.4


#Join the nodes to the master's cluster (run on each node; copy the command from the init output above)
kubeadm join <master-node-ip>:6443 --token 7ye4m8.ftwfagsf8l6sccdf \
    --discovery-token-ca-cert-hash sha256:db03daf9b5b13abfd9173e50f1f13aaf6ac93b0e3e44115d185886826fa85395 
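
Note that the token printed by kubeadm init expires after 24 hours; if a node is joined later than that, a fresh join command can be generated on the master:

#Run on the master if the original token has expired
kubeadm token create --print-join-command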
    
#Deploy the network plugin (CNI) so that the master and the nodes become Ready
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml


wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
--2021-10-15 11:55:10--  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.109.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5163 (5.0K) [text/plain]
Saving to: 'kube-flannel.yml'

100%[====================================================================================================================================================>] 5,163       1.77KB/s  in 2.8s

2021-10-15 11:55:14 (1.77 KB/s) - 'kube-flannel.yml' saved [5163/5163]

kubectl apply -f kube-flannel.yml

[root@k8smaster1 ~]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

#Check whether the components are running
[root@k8smaster1 ~]# kubectl get pod -n kube-system


Problems encountered during deployment (pods not Ready):
#Check the components
[root@k8smaster1 ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS     RESTARTS   AGE
coredns-6d56c8448f-6l2c8             0/1     Pending    0          175m
coredns-6d56c8448f-ndsbt             0/1     Pending    0          175m
etcd-k8smaster1                      1/1     Running    3          176m
kube-apiserver-k8smaster1            1/1     Running    3          176m
kube-controller-manager-k8smaster1   1/1     Running    6          176m
kube-flannel-ds-7448j                0/1     Init:0/2   0          139m
kube-flannel-ds-chhcb                0/1     Init:0/2   0          139m
kube-flannel-ds-tl29j                0/1     Init:0/2   0          139m
kube-proxy-fg9l5                     1/1     Running    3          159m
kube-proxy-frjht                     1/1     Running    3          159m
kube-proxy-lzckf                     1/1     Running    3          175m
kube-scheduler-k8smaster1            1/1     Running    4          176m

#Apply the network plugin again
kubectl apply -f kube-flannel.yml

#The status then becomes:

[root@k8smaster1 ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS                  RESTARTS   AGE
coredns-6d56c8448f-6l2c8             0/1     Pending                 0          179m
coredns-6d56c8448f-ndsbt             0/1     Pending                 0          179m
etcd-k8smaster1                      1/1     Running                 3          3h
kube-apiserver-k8smaster1            1/1     Running                 3          3h
kube-controller-manager-k8smaster1   0/1     Running                 7          3h
kube-flannel-ds-7448j                0/1     Init:ImagePullBackOff   0          143m
kube-flannel-ds-chhcb                0/1     Init:ImagePullBackOff   0          143m
kube-flannel-ds-tl29j                0/1     Init:ImagePullBackOff   0          143m
kube-proxy-fg9l5                     1/1     Running                 3          164m
kube-proxy-frjht                     1/1     Running                 3          164m
kube-proxy-lzckf                     1/1     Running                 3          179m
kube-scheduler-k8smaster1            1/1     Running                 4          3h
-----------------------------------------------------------------------------------------
#The image pulls are failing
kube-flannel-ds-7448j                0/1     Init:ImagePullBackOff   0          143m
kube-flannel-ds-chhcb                0/1     Init:ImagePullBackOff   0          143m
kube-flannel-ds-tl29j                0/1     Init:ImagePullBackOff   0          143m

#Try applying again; if it still fails, use the methods below
kubectl apply -f kube-flannel.yml

Workarounds found online

1. Download the image manually (this is what solved it for me)

Following the command the official site gives for setting up Kubernetes:

 kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Check after deployment finishes:

[root@k8s-master01 flannel]# kubectl get pod -n kube-system
NAME                                   READY   STATUS                  RESTARTS   AGE
coredns-5c98db65d4-cr9lq               0/1     Pending                 0          74m
coredns-5c98db65d4-h4h8f               0/1     Pending                 0          74m
etcd-k8s-master01                      1/1     Running                 0          73m
kube-apiserver-k8s-master01            1/1     Running                 0          73m
kube-controller-manager-k8s-master01   1/1     Running                 0          73m
kube-flannel-ds-amd64-cpzh6            0/1     Init:ImagePullBackOff   0          51m
kube-proxy-sb68t                       1/1     Running                 0          74m

The flannel pods are stuck in Init:ImagePullBackOff.

Cause:
Inspecting the kube-flannel.yml file reveals the image quay.io/coreos/flannel:v0.12.0-amd64.
The quay.io site is currently unreachable from mainland China.

Download flannel:v0.12.0-amd64 and import it into Docker.
The image can be downloaded from the official repository at https://github.com/coreos/flannel/releases



[root@k8s-master01 tmp]# docker load < flanneld-v0.12.0-amd64.docker
256a7af3acb1: Loading layer [==================================================>]  5.844MB/5.844MB
d572e5d9d39b: Loading layer [==================================================>]  10.37MB/10.37MB
57c10be5852f: Loading layer [==================================================>]  2.249MB/2.249MB
7412f8eefb77: Loading layer [==================================================>]  35.26MB/35.26MB
05116c9ff7bf: Loading layer [==================================================>]   5.12kB/5.12kB
Loaded image: quay.io/coreos/flannel:v0.12.0-amd64
[root@k8s-master01 tmp]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
quay.io/coreos/flannel               v0.12.0-amd64       4e9f801d2217        4 months ago        52.8MB
k8s.gcr.io/kube-proxy                v1.15.1             89a062da739d        12 months ago       82.4MB
k8s.gcr.io/kube-scheduler            v1.15.1             b0b3c4c404da        12 months ago       81.1MB
k8s.gcr.io/kube-apiserver            v1.15.1             68c3eb07bfc3        12 months ago       207MB
k8s.gcr.io/kube-controller-manager   v1.15.1             d75082f1d121        12 months ago       159MB
k8s.gcr.io/coredns                   1.3.1               eb516548c180        18 months ago       40.3MB
k8s.gcr.io/etcd                      3.3.10              2c4adeb21b4f        20 months ago       258MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB



[root@k8s-master01 tmp]# kubectl get pod -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-cr9lq               1/1     Running   0          104m
coredns-5c98db65d4-h4h8f               1/1     Running   0          104m
etcd-k8s-master01                      1/1     Running   0          103m
kube-apiserver-k8s-master01            1/1     Running   0          103m
kube-controller-manager-k8s-master01   1/1     Running   0          102m
kube-flannel-ds-amd64-cpzh6            1/1     Running   0          80m
kube-proxy-sb68t                       1/1     Running   0          104m
kube-scheduler-k8s-master01            1/1     Running   0          103m
[root@k8s-master01 tmp]#

2. Pull the image individually

Problem:
When checking the joined nodes with kubectl get nodes, some show a Status of NotReady.

root@master1:~# kubectl get nodes
NAME      STATUS      ROLES    AGE    VERSION
master1   NotReady    master   152m   v1.18.1
worker1   NotReady    <none>   94m    v1.18.1

This happens because some critical pods have not come up. First, check the pod status in kube-system with the following command:

kubectl get pod -n kube-system
NAME                              READY   STATUS             RESTARTS   AGE
coredns-bccdc95cf-792px           1/1     Pending            0          3h11m
coredns-bccdc95cf-bc76j           1/1     Pending            0          3h11m
etcd-master1                      1/1     Running            2          3h10m
kube-apiserver-master1            1/1     Running            2          3h11m
kube-controller-manager-master1   1/1     Running            2          3h10m
kube-flannel-ds-amd64-9trbq       0/1     ImagePullBackoff   0          133m
kube-flannel-ds-amd64-btt74       0/1     ImagePullBackoff   0          174m
kube-proxy-27zfk                  1/1     Pending            2          3h11m
kube-proxy-lx4gk                  1/1     Pending            0          133m
kube-scheduler-master1            1/1     Running            2          3h11m

As shown above, the kube-flannel pods are in ImagePullBackOff, meaning the image pull failed, so we need to pull the image manually. Some pods have two replicas here because the cluster has two nodes.

You can also run kubectl describe pod -n kube-system <pod-name> to inspect a pod in detail. If the pod has a problem, the bottom of the output contains an Events section, like this:

root@master1:~# kubectl describe pod kube-flannel-ds-amd64-9trbq -n kube-system

...

Events:
  Type     Reason                  Age                 From              Message
  ----     ------                  ----                ----              -------
  Normal   Killing                 29m                 kubelet, worker1  Stopping container kube-flannel
  Warning  FailedCreatePodSandBox  27m (x12 over 29m)  kubelet, worker1  Failed create pod sandbox: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-flannel-ds-amd64-9trbq": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as "xxx.slice"
  Normal   SandboxChanged          19m (x48 over 29m)  kubelet, worker1  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulling                 42s                 kubelet, worker1  Pulling image "quay.io/coreos/flannel:v0.11.0-amd64"

#Manually pull the image

The flannel image can be pulled with the command below. If a different image failed to pull for you, a quick web search will turn up a mirror address inside China. Remember to change the version tag at the end to your own version; the exact version can be seen in the kubectl describe output mentioned above:

#Pull the image:

docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64

Once the image is pulled, retag it to the exact name k8s failed to pull. The image name and version shown here may not match yours, so adjust accordingly:

docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
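
Keep in mind the flannel DaemonSet runs on every node, so the image must exist on each machine, not just the master. A hedged sketch of copying it over (host names are the ones used in this tutorial):

#Sketch: export the image and load it on the other nodes
docker save quay.io/coreos/flannel:v0.11.0-amd64 -o flannel.tar
scp flannel.tar root@k8snode1:/tmp/ && ssh root@k8snode1 "docker load -i /tmp/flannel.tar"
scp flannel.tar root@k8snode2:/tmp/ && ssh root@k8snode2 "docker load -i /tmp/flannel.tar"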

A few minutes after the retag, k8s retries automatically; soon not only flannel but all the other pods turn Running as well, and checking the node status again shows the problem is solved:

[kubeadm@server1 ~]$ kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
server1   Ready    master   150m   v1.18.1
server2   Ready    <none>   150m   v1.18.1
server3   Ready    <none>   150m   v1.18.1

Wrap-up: deployment complete

When everything shows Running and every master and node is Ready, the deployment is finished!

[root@k8smaster1 ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-6d56c8448f-6l2c8             1/1     Running   0          3h23m
coredns-6d56c8448f-ndsbt             1/1     Running   0          3h23m
etcd-k8smaster1                      1/1     Running   3          3h23m
kube-apiserver-k8smaster1            1/1     Running   3          3h23m
kube-controller-manager-k8smaster1   1/1     Running   8          3h23m
kube-flannel-ds-lj52w                1/1     Running   0          12m
kube-flannel-ds-qp9k5                1/1     Running   0          12m
kube-flannel-ds-zrqzm                1/1     Running   0          12m
kube-proxy-fg9l5                     1/1     Running   3          3h7m
kube-proxy-frjht                     1/1     Running   3          3h7m
kube-proxy-lzckf                     1/1     Running   3          3h23m
kube-scheduler-k8smaster1            1/1     Running   4          3h23m
[root@k8smaster1 ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8smaster1   Ready    master   3h29m   v1.19.4
k8snode1     Ready    <none>   3h13m   v1.19.4
k8snode2     Ready    <none>   3h13m   v1.19.4
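
As a final smoke test, a throwaway nginx Deployment can be created and exposed with standard kubectl commands; the NodePort is assigned automatically:

#Deploy and expose a test nginx, then check it
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc    #note the mapped port, e.g. 80:3xxxx/TCP
#then open http://<any-node-ip>:<NodePort> in a browser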

Extra: the kube-flannel.yml configuration
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
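
One detail worth calling out in this file: the "Network" value in net-conf.json (10.244.0.0/16) must match the --pod-network-cidr passed to kubeadm init. If you initialized with a different pod CIDR, edit the manifest before applying it; a sketch (10.200.0.0/16 is just an example value):

#Sketch: change flannel's pod network to match a non-default --pod-network-cidr
sed -i 's#10.244.0.0/16#10.200.0.0/16#' kube-flannel.yml
kubectl apply -f kube-flannel.yml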