k8s series (9) - Tolerations, Taints, and Affinity

Complete series

  1. k8s series (1) - Manual K8S deployment on Tencent Cloud CVM, Dashboard installation part 1
  2. k8s series (1) - Manual K8S deployment on Tencent Cloud CVM, Dashboard installation part 2
  3. k8s series (2) - Service
  4. k8s series (3) - StatefulSet in practice with MongoDB
  5. k8s series (4) - MongoDB data persistence
  6. k8s series (5) - ConfigMap and Secret
  7. k8s series (6) - Helm
  8. k8s series (7) - Namespaces
  9. k8s series (8) - Ingress
  10. k8s series (9) - Tolerations, Taints, and Affinity

I. Introduction

1. Kubernetes has three taint effects

  1. NoSchedule: Kubernetes will not schedule Pods onto a Node that carries this taint.
  2. PreferNoSchedule: Kubernetes will try to avoid scheduling Pods onto a Node with this taint, but does not guarantee it.
  3. NoExecute: Kubernetes will not schedule new Pods onto the Node, and will additionally evict Pods already running on it (a bit like being married and then forced to divorce). See the sketch after this list.
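
As a quick illustration, here is how each effect could be applied to a node (the node name node1 and the key/value pair dedicated=gpu are just placeholders):

# NoSchedule: new Pods without a matching toleration stay off this node
kubectl taint nodes node1 dedicated=gpu:NoSchedule
# PreferNoSchedule: the scheduler tries to avoid this node but may still use it
kubectl taint nodes node1 dedicated=gpu:PreferNoSchedule
# NoExecute: new Pods stay off, and running Pods without a toleration are evicted
kubectl taint nodes node1 dedicated=gpu:NoExecute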

2. Taints and Tolerations

Taints: keep Pods from being scheduled onto particular Nodes.

Tolerations: allow Pods to be scheduled onto Nodes that carry matching Taints.

II. Hands-On

1. Usage

Here is a simple example: add a Taint to node1 whose key is key, value is value, and effect is NoSchedule. Unless a pod explicitly declares that it can tolerate this Taint, it will not be scheduled onto node1:

kubectl taint nodes node1 key=value:NoSchedule
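
To remove this taint later, append a minus sign to the effect (standard kubectl syntax, also used in the experiment below):

kubectl taint nodes node1 key=value:NoSchedule-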

Next, declare a Toleration on the pod. The Toleration below tolerates Nodes carrying that Taint, so the pod can still be scheduled onto node1:

apiVersion: v1
kind: Pod
metadata:
  name: pod-taints
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
  containers:
  - name: pod-taints
    image: busybox:latest

It can also be written as follows:

tolerations:
- key: "key"
  operator: "Exists"
  effect: "NoSchedule"

A toleration matches a taint when the keys and effects agree and:
  • operator is Exists, in which case no value should be specified, or
  • operator is Equal and the values are also equal.
  • If operator is not specified, it defaults to Equal.
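
Two standard edge cases are also worth knowing: an empty key with operator Exists tolerates every taint, and an empty effect matches all effects for the given key. A minimal sketch:

tolerations:
# no key + Exists: tolerates every taint on every node (use with care)
- operator: "Exists"
# no effect: matches NoSchedule, PreferNoSchedule and NoExecute for this key
- key: "key"
  operator: "Exists"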

2. Experiment

Following the earlier articles in this series, the setup is one master and two workers on Tencent Cloud: host master plus workers node1 and node2.

  1. Add a taint to a node
    Taint node2 with effect NoSchedule, using the key/value pair type=calculate:
    kubectl taint node node2 type=calculate:NoSchedule
  2. Check the node's taints
    kubectl describe nodes node2 | grep Taints
    Taints:             type=calculate:NoSchedule
  3. Create the pod config file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: taint-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: taint-pod
  template:
    metadata:
      labels:
        app: taint-pod
    spec:
      containers:
      - image: busybox:latest
        name: taint-pod
        # tail -f just keeps the busybox container alive so the pod stays Running
        command: [ "/bin/sh", "-c", "tail -f /etc/passwd" ]
  4. Create the pods
[root@master demo]# kubectl apply -f taint-pod.yaml
deployment.apps/taint-deploy created
  5. Check the pods
    Three pods were created, and all of them are on node1:
[root@master demo]# kubectl get pods -o wide | grep taint
taint-deploy-69f9b6874c-2pbkd   1/1     Running   0          8m22s   10.40.0.4   node1   <none>           <none>
taint-deploy-69f9b6874c-h6kpj   1/1     Running   0          8m22s   10.40.0.3   node1   <none>           <none>
taint-deploy-69f9b6874c-qblnx   1/1     Running   0          8m22s   10.40.0.5   node1   <none>           <none>
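
None of the pods land on master either; that is itself a taint at work. kubeadm-based clusters of this era (the nodes above run v1.13) taint the control-plane node by default, which you can confirm with:

kubectl describe node master | grep Taints

On such a cluster this typically shows node-role.kubernetes.io/master:NoSchedule (newer releases use node-role.kubernetes.io/control-plane instead).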
  6. Scale up the Pods. We scale the Deployment to 9 replicas, which shows the taint's effect directly:
    as you can see below, all of them are on node1.
[root@master demo]# kubectl scale --replicas=9 deploy/taint-deploy -n default
deployment.apps/taint-deploy scaled
[root@master demo]# kubectl get pods -o wide | grep taint
taint-deploy-69f9b6874c-2pbkd   1/1     Running             0          11m   10.40.0.4   node1   <none>           <none>
taint-deploy-69f9b6874c-5j8rx   0/1     ContainerCreating   0          14s   <none>      node1   <none>           <none>
taint-deploy-69f9b6874c-5ws25   0/1     ContainerCreating   0          14s   <none>      node1   <none>           <none>
taint-deploy-69f9b6874c-f7lck   0/1     ContainerCreating   0          14s   <none>      node1   <none>           <none>
taint-deploy-69f9b6874c-h6kpj   1/1     Running             0          11m   10.40.0.3   node1   <none>           <none>
taint-deploy-69f9b6874c-l686n   0/1     ContainerCreating   0          14s   <none>      node1   <none>           <none>
taint-deploy-69f9b6874c-qblnx   1/1     Running             0          11m   10.40.0.5   node1   <none>           <none>
taint-deploy-69f9b6874c-r2nln   0/1     ContainerCreating   0          14s   <none>      node1   <none>           <none>
taint-deploy-69f9b6874c-vjbqq   0/1     ContainerCreating   0          14s   <none>      node1   <none>           <none>
  7. Remove the taint from node2
[root@master demo]# kubectl taint node node2 type:NoSchedule-
node/node2 untainted
  8. Scale down, then scale up again
    With the taint gone, the new pods now spread across both node1 and node2:
[root@master demo]# kubectl scale --replicas=1 deploy/taint-deploy -n default
[root@master demo]# kubectl get pods -o wide | grep taint
taint-deploy-69f9b6874c-2pbkd   1/1     Running   0          24m   10.40.0.4   node1   <none>           <none>
[root@master demo]# kubectl scale --replicas=9 deploy/taint-deploy -n default
deployment.apps/taint-deploy scaled
[root@master demo]# kubectl get pods -o wide | grep taint
taint-deploy-69f9b6874c-2pbkd   1/1     Running   0          38m     10.40.0.4   node1   <none>           <none>
taint-deploy-69f9b6874c-8cq67   1/1     Running   0          8m29s   10.32.0.6   node2   <none>           <none>
taint-deploy-69f9b6874c-bskp5   1/1     Running   0          8m29s   10.32.0.7   node2   <none>           <none>
taint-deploy-69f9b6874c-gbv66   1/1     Running   0          8m29s   10.40.0.6   node1   <none>           <none>
taint-deploy-69f9b6874c-jrz79   1/1     Running   0          8m29s   10.40.0.5   node1   <none>           <none>
taint-deploy-69f9b6874c-k7cvp   1/1     Running   0          8m29s   10.40.0.7   node1   <none>           <none>
taint-deploy-69f9b6874c-pj6d9   1/1     Running   0          8m29s   10.32.0.4   node2   <none>           <none>
taint-deploy-69f9b6874c-rqw8k   1/1     Running   0          8m29s   10.32.0.5   node2   <none>           <none>
taint-deploy-69f9b6874c-zdtxz   1/1     Running   0          8m29s   10.40.0.3   node1   <none>           <none>
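
The experiment used NoSchedule, which only affects new scheduling decisions. Had node2 been tainted with NoExecute instead, the running pods without a matching toleration would have been evicted as well. A toleration can additionally set tolerationSeconds to stay on the node only for a limited time after such a taint appears. A minimal sketch (the values are illustrative):

kubectl taint node node2 type=calculate:NoExecute

tolerations:
- key: "type"
  operator: "Equal"
  value: "calculate"
  effect: "NoExecute"
  # stay bound for 3600s after the taint is added, then get evicted;
  # omit tolerationSeconds to tolerate the taint indefinitely
  tolerationSeconds: 3600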

III. Affinity

1. Assign Pods to nodes using node affinity

  1. Pick a node and add a label to it:
kubectl label nodes <your-node-name> disktype=ssd

where <your-node-name> is the name of the node you chose.
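
To undo the label later, kubectl uses the same minus-sign convention as for taints:

kubectl label nodes <your-node-name> disktype-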

  2. Verify that your chosen node has the disktype=ssd label:
kubectl get nodes --show-labels

The output is similar to:

NAME      STATUS    ROLES    AGE     VERSION        LABELS
worker0   Ready     <none>   1d      v1.13.0        ...,disktype=ssd,kubernetes.io/hostname=worker0
worker1   Ready     <none>   1d      v1.13.0        ...,kubernetes.io/hostname=worker1
worker2   Ready     <none>   1d      v1.13.0        ...,kubernetes.io/hostname=worker2
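
A more targeted check is a label selector, which lists only the nodes that carry the label:

kubectl get nodes -l disktype=ssd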

2. Schedule a Pod using required node affinity

The manifest below describes a Pod with a requiredDuringSchedulingIgnoredDuringExecution node affinity rule for disktype=ssd. This means the pod will only be scheduled onto nodes that have a disktype=ssd label.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd            
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  1. Apply this manifest to create a Pod that will be scheduled onto your chosen node:
kubectl apply -f https://k8s.io/examples/pods/pod-nginx-required-affinity.yaml
  2. Verify that the pod is running on your chosen node:
kubectl get pods --output=wide
The output is similar to:
NAME     READY     STATUS    RESTARTS   AGE    IP           NODE
nginx    1/1       Running   0          13s    10.200.0.4   worker0
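
requiredDuringSchedulingIgnoredDuringExecution is a hard constraint: if no node carries the disktype=ssd label, the pod stays Pending rather than falling back to another node (kubectl describe pod nginx shows the reason under Events). Also note the semantics when multiple entries are listed, which is standard Kubernetes behavior: nodeSelectorTerms are ORed with each other, while matchExpressions within a single term are ANDed. A sketch of both levels (the amd64 and nvme values are purely illustrative):

      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        # one term ANDs its expressions: disktype=ssd AND arch=amd64
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
          - key: kubernetes.io/arch
            operator: In
            values:
            - amd64
        # a second term is an OR alternative: any disktype=nvme node also qualifies
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - nvme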

3. Schedule a Pod using preferred node affinity

This manifest describes a Pod with a preferredDuringSchedulingIgnoredDuringExecution node affinity rule for disktype: ssd. This means the scheduler will prefer nodes that have a disktype=ssd label, but may still place the pod elsewhere if none is available.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd          
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  1. Apply this manifest to create a Pod that should be scheduled onto your chosen node:
kubectl apply -f https://k8s.io/examples/pods/pod-nginx-preferred-affinity.yaml
  2. Verify whether the pod is running on your chosen node:
kubectl get pods --output=wide

The output is similar to:

NAME     READY     STATUS    RESTARTS   AGE    IP           NODE
nginx    1/1       Running   0          13s    10.200.0.4   worker0
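
The weight field (an integer from 1 to 100) controls how strongly each preference counts when the scheduler scores candidate nodes. A sketch with two hypothetical preferences; a node matching only the heavier rule outscores one matching only the lighter rule (the zone key and value here are illustrative, so use whatever labels your nodes actually carry):

      preferredDuringSchedulingIgnoredDuringExecution:
      # strong preference for SSD nodes
      - weight: 80
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
      # mild preference for a particular zone
      - weight: 20
        preference:
          matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - zone-a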


References

  1. Kubernetes scheduling: Node taints and tolerations
  2. Kubernetes taints and tolerations in detail
  3. Scheduling - Li Kuan
  4. Assign Pods to Nodes Using Node Affinity (Kubernetes documentation)