
ECS Node Initialization Configuration After Connecting a Self-Managed Kubernetes Cluster to an ACK Registered Cluster

This article describes the points to note when connecting a self-managed Kubernetes cluster to an ACK registered cluster and manually adding Alibaba Cloud ECS nodes.

Note: You can also use the node pool feature of the ACK registered cluster to add Alibaba Cloud ECS nodes. For details, see Alibaba Cloud Registered Cluster - Hybrid Cluster - Add Nodes with a Custom Node Addition Script.

Adding New Alibaba Cloud ECS Nodes to a Self-Managed Kubernetes Cluster

In the node initialization script, you need to set --provider-id=${ALIBABA_CLOUD_PROVIDE_ID} and append --node-labels=${ALIBABA_CLOUD_LABELS} to the kubelet startup parameters.

The values of the ALIBABA_CLOUD_PROVIDE_ID and ALIBABA_CLOUD_LABELS variables are obtained as follows:

$ clusterID=xxxxx    # ID of your ACK registered cluster
$ aliyunRegionID=$(curl 100.100.100.200/latest/meta-data/region-id)        # region ID from the ECS metadata service
$ aliyunInstanceID=$(curl 100.100.100.200/latest/meta-data/instance-id)    # instance ID from the ECS metadata service

$ ALIBABA_CLOUD_PROVIDE_ID=${aliyunRegionID}.${aliyunInstanceID}
$ ALIBABA_CLOUD_LABELS="ack.aliyun.com=${clusterID},alibabacloud.com/instance-id=${aliyunInstanceID},alibabacloud.com/external=true"
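
How these two values reach the kubelet depends on your node provisioning tooling. The snippet below is only a sketch, assuming a kubeadm-provisioned node whose kubelet systemd drop-in reads KUBELET_EXTRA_ARGS from /etc/default/kubelet (on RHEL-based systems the file is /etc/sysconfig/kubelet); <api-server-endpoint>, <token> and <hash> are placeholders for your own cluster's join parameters:

$ cat <<EOF > /etc/default/kubelet
KUBELET_EXTRA_ARGS=--provider-id=${ALIBABA_CLOUD_PROVIDE_ID} --node-labels=${ALIBABA_CLOUD_LABELS}
EOF

$ kubeadm join <api-server-endpoint> --token <token> --discovery-token-ca-cert-hash sha256:<hash>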

Batch Labeling Existing Nodes in the Self-Managed Kubernetes Cluster

After the self-managed Kubernetes cluster is connected to the ACK registered cluster, the following node labels need to be added to the existing nodes (a single-node labeling sketch follows this list):

  • ack.aliyun.com=${clusterID}. Allows the ACK control plane to identify the Alibaba Cloud ECS nodes of the self-managed Kubernetes cluster at the cluster level.
  • alibabacloud.com/instance-id=${aliyunInstanceID}. Allows the ACK control plane to identify the Alibaba Cloud ECS nodes of the self-managed Kubernetes cluster at the node level.
  • alibabacloud.com/external=true. Allows components such as Terway and CSI in the self-managed Kubernetes cluster to identify Alibaba Cloud ECS nodes.
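
To verify the labels on a single node first, you could apply them manually. This is just a sketch: <node-name> is a placeholder, and the variables come from the previous section. Note that alibabacloud.com/instance-id must be the instance ID of that specific node, which is why the GlobalJob workflow below is used to label many nodes in batch:

$ kubectl label node <node-name> \
    ack.aliyun.com=${clusterID} \
    alibabacloud.com/instance-id=${aliyunInstanceID} \
    alibabacloud.com/external=true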

Deploy global-job-controller

$ cat <<EOF > global-job-controller.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: global-job-controller
  namespace: kube-system
  labels:
    app: global-job-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: global-job-controller
  template:
    metadata:
      labels:
        app: global-job-controller
    spec:
      restartPolicy: Always
      serviceAccountName: jobs
      containers:
        - name: global-job-controller
          image: registry.cn-hangzhou.aliyuncs.com/acs/global-job:v1.0.0.36-g0d1ac97-aliyun
          env:
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jobs
rules:
  - apiGroups:
      - jobs.aliyun.com
    resources:
      - globaljobs
    verbs:
      - "*"
  - apiGroups:
      - "*"
    resources:
      - pods
      - events
      - configmaps
    verbs:
      - "*"
  - apiGroups:
      - "*"
    resources:
      - nodes
    verbs:
      - "*"
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    verbs:
      - get
      - list
      - create

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jobs-role-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jobs
subjects:
  - kind: ServiceAccount
    name: jobs
    namespace: kube-system

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jobs
  namespace: kube-system
EOF
$ kubectl apply -f global-job-controller.yaml

Wait until global-job-controller is up and running.
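
For example, you can check the Deployment and its Pod in the kube-system namespace (the label selector matches the manifest above):

$ kubectl -n kube-system get deploy global-job-controller
$ kubectl -n kube-system get pods -l app=global-job-controller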

Deploy globaljob

$ export CLUSTER_ID=xxxxxx    # ID of your ACK registered cluster
$ cat << EOF > globaljob.yaml
apiVersion: jobs.aliyun.com/v1alpha1
kind: GlobalJob
metadata:
  name: globaljob
  namespace: kube-system
spec:
  maxParallel: 100
  terminalStrategy:
    type: Never
  template:
    spec:
      serviceAccountName: ack
      restartPolicy: Never
      containers:
        - name: globaljob
          image: registry.cn-hangzhou.aliyuncs.com/acs/marking-agent:v1.13.1.39-g4186808-aliyun
          imagePullPolicy: Always
          env:
            - name: REGISTRY_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CLUSTER_ID
              value: "$CLUSTER_ID"
EOF
$ kubectl apply -f globaljob.yaml

After the job has finished running, check whether the ECS nodes have been labeled correctly, and then release the resources created above.
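
For example, you can list the nodes together with the three labels (the -L columns correspond to the labels described earlier):

$ kubectl get nodes -L ack.aliyun.com,alibabacloud.com/instance-id,alibabacloud.com/external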

$ kubectl delete -f globaljob.yaml -f global-job-controller.yaml
