Changing the etcd client port 2379 in a Kubernetes cluster

In a Kubernetes cluster, etcd usually runs as static pods on the master 0/1/2 nodes, with the manifest at /etc/kubernetes/manifests/etcd.yaml. The task here is to switch the client port of the running 3-member etcd cluster from 2379 to 2378; the peer port 2380 stays unchanged.

Example etcd.yaml:

apiVersion: v1
kind: Pod
metadata:
  namespace: kube-system
  name: etcd
  labels:
    component: etcd
    tier: control-plane
spec:
  containers:
  - name: etcd
    image: xxx/etcd:v3.5.4-amd64
    command:
    - etcd
    - --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    env:
    - name: ETCD_NAME
      value: kube-etcd2
    - name: ETCD_INITIAL_ADVERTISE_PEER_URLS
      value: https://10.0.1.6:2380
    - name: ETCD_LISTEN_PEER_URLS
      value: https://10.0.1.6:2380
    - name: ETCD_LISTEN_CLIENT_URLS
      value: https://10.0.1.6:2379,https://127.0.0.1:2379
    - name: ETCD_ADVERTISE_CLIENT_URLS
      value: https://10.0.1.6:2379
    - name: ETCD_INITIAL_CLUSTER_TOKEN
      value: xxx-k8s
    - name: ETCD_INITIAL_CLUSTER
      value: kube-etcd1=https://10.0.1.8:2380,kube-etcd2=https://10.0.1.6:2380,kube-etcd3=https://10.0.1.7:2380
    - name: ETCD_INITIAL_CLUSTER_STATE
      value: new
    - name: ETCD_EXPERIMENTAL_BACKEND_BBOLT_FREELIST_TYPE
      value: "map"
    - name: ETCD_AUTO_COMPACTION_RETENTION
      value: "5m"
    - name: ETCD_SNAPSHOT_COUNT
      value: "10000"
    - name: ETCD_MAX_SNAPSHOTS
      value: "5"
    - name: ETCD_MAX_WALS
      value: "5"
    - name: ETCD_HEARTBEAT_INTERVAL
      value: "1000"
    - name: ETCD_ELECTION_TIMEOUT
      value: "10000"
    - name: ETCD_QUOTA_BACKEND_BYTES
      value: "100000000000"
    - name: ETCD_BACKEND_BATCH_LIMIT
      value: "1000"
    - name: ETCD_BACKEND_BATCH_INTERVAL
      value: "10ms"
    - name: ETCD_CLIENT_CERT_AUTH
      value: "true"
    - name: ETCD_TRUSTED_CA_FILE
      value: /etc/kubernetes/pki/etcd/ca.crt
    - name: ETCD_CERT_FILE
      value: /etc/kubernetes/pki/etcd/server.crt
    - name: ETCD_KEY_FILE
      value: /etc/kubernetes/pki/etcd/server.key
    - name: ETCD_PEER_CLIENT_CERT_AUTH
      value: "true"
    - name: ETCD_PEER_TRUSTED_CA_FILE
      value: /etc/kubernetes/pki/etcd/ca.crt
    - name: ETCD_PEER_CERT_FILE
      value: /etc/kubernetes/pki/etcd/peer.crt
    - name: ETCD_PEER_KEY_FILE
      value: /etc/kubernetes/pki/etcd/peer.key
    - name: ETCD_DATA_DIR
      value: /var/lib/etcd/data
    - name: ETCD_LOG_LEVEL
      value: "info"
    - name: ETCD_LOG_OUTPUTS
      value: /var/lib/etcd/logs/etcd.log
    - name: ETCD_ENABLE_LOG_ROTATION
      value: "true"
    - name: ETCD_LOG_ROTATION_CONFIG_JSON
      value: "{\"maxsize\": 1024, \"maxage\": 30, \"maxbackups\": 5, \"localtime\": false, \"compress\": false}"
    livenessProbe:
      exec:
        command:
        - /bin/sh
        - -ec
        - ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
          --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key
          get test
      failureThreshold: 8
      initialDelaySeconds: 15
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 2000m
        memory: 2Gi
      limits:
        cpu: 4000m
        memory: 8Gi
    volumeMounts:
    - name: etcd-data
      mountPath: /var/lib/etcd/data
    - name: etcd-certs
      mountPath: /etc/kubernetes/pki/etcd
    - name: etcd-log
      mountPath: /var/lib/etcd/logs
  hostNetwork: true
  volumes:
  - name: etcd-certs
    hostPath:
      path: /etc/kubernetes/pki/etcd
      type: Directory
  - name: etcd-data
    hostPath:
      path: /home/t4/etcd/data
      type: DirectoryOrCreate
  - name: etcd-log
    hostPath:
      path: /home/t4/sigma-master/logs/etcd
      type: DirectoryOrCreate

etcd parameter reference:

Parameter (example value): description

- ETCD_INITIAL_ADVERTISE_PEER_URLS (https://10.0.1.6:2380): List of this member's peer URLs to advertise to the rest of the cluster. These addresses are used for communicating etcd data around the cluster. At least one must be routable to all cluster members. These URLs can contain domain names.
- ETCD_LISTEN_PEER_URLS (https://10.0.1.6:2380): List of URLs to listen on for peer traffic. etcd accepts incoming peer requests on the given scheme://IP:port combinations; the scheme can be http or https (or unix:// / unixs:// for unix sockets). With 0.0.0.0 as the IP, etcd listens on the given port on all interfaces; with a specific IP it listens only on that interface. Multiple URLs may be given to listen on several addresses and ports.
- ETCD_LISTEN_CLIENT_URLS (https://10.0.1.6:2379,https://127.0.0.1:2379): List of URLs to listen on for client traffic, in the same format as the peer listen URLs. etcd answers client requests on any of the listed addresses and ports.
- ETCD_ADVERTISE_CLIENT_URLS (https://10.0.1.6:2379): List of this member's client URLs to advertise to the rest of the cluster. These URLs can contain domain names.
- ETCD_INITIAL_CLUSTER_TOKEN (xxx-k8s): Initial cluster token for the etcd cluster during bootstrap.
- ETCD_INITIAL_CLUSTER (kube-etcd1=https://10.0.1.8:2380,kube-etcd2=https://10.0.1.6:2380,kube-etcd3=https://10.0.1.7:2380): Initial cluster configuration for bootstrapping.
- ETCD_INITIAL_CLUSTER_STATE (new): Initial cluster state ("new" or "existing"). Set to new for all members present during initial static or DNS bootstrapping. If set to existing, etcd attempts to join the existing cluster; if the wrong value is set, etcd attempts to start but fails safely.
- ETCD_EXPERIMENTAL_BACKEND_BBOLT_FREELIST_TYPE ("map"): Freelist type used by the etcd backend (bbolt); array and map are the supported types.
- ETCD_AUTO_COMPACTION_RETENTION ("5m"): Auto compaction retention for the mvcc key-value store; 0 disables auto compaction.
- ETCD_SNAPSHOT_COUNT ("10000"): Number of committed transactions that trigger a snapshot to disk.
- ETCD_MAX_SNAPSHOTS ("5"): Maximum number of snapshot files to retain (0 is unlimited).
- ETCD_MAX_WALS ("5"): Maximum number of WAL files to retain (0 is unlimited).
- ETCD_HEARTBEAT_INTERVAL ("1000"): Time (in milliseconds) of a heartbeat interval.
- ETCD_ELECTION_TIMEOUT ("10000"): Time (in milliseconds) for an election to time out.
- ETCD_QUOTA_BACKEND_BYTES ("100000000000"): Raise alarms when the backend size exceeds the given quota (0 defaults to the low space quota).
- ETCD_BACKEND_BATCH_LIMIT ("1000"): Maximum number of operations before the backend transaction is committed.
- ETCD_BACKEND_BATCH_INTERVAL ("10ms"): Maximum time before the backend transaction is committed.
- ETCD_CLIENT_CERT_AUTH ("true"): Enable client certificate authentication.
- ETCD_TRUSTED_CA_FILE (/etc/kubernetes/pki/etcd/ca.crt): Path to the client-server TLS trusted CA certificate file.
- ETCD_CERT_FILE (/etc/kubernetes/pki/etcd/server.crt): Path to the client-server TLS certificate file.
- ETCD_KEY_FILE (/etc/kubernetes/pki/etcd/server.key): Path to the client-server TLS key file.
- ETCD_PEER_CLIENT_CERT_AUTH ("true"): Enable peer client certificate authentication.
- ETCD_PEER_TRUSTED_CA_FILE (/etc/kubernetes/pki/etcd/ca.crt): Path to the peer-server TLS trusted CA file.
- ETCD_PEER_CERT_FILE (/etc/kubernetes/pki/etcd/peer.crt): Path to the peer-server TLS certificate file, used for peer-to-peer traffic both as server and client.
- ETCD_PEER_KEY_FILE (/etc/kubernetes/pki/etcd/peer.key): Path to the peer-server TLS key file, used for peer-to-peer traffic both as server and client.
- ETCD_DATA_DIR (/var/lib/etcd/data): Path to the data directory.
- ETCD_LOG_LEVEL ("info"): Log level; only debug, info, warn, error, panic, and fatal are supported.
- ETCD_LOG_OUTPUTS (/var/lib/etcd/logs/etcd.log): 'stdout' or 'stderr' to skip journald logging even when running under systemd, or a comma-separated list of output targets.
- ETCD_ENABLE_LOG_ROTATION ("true"): Enable log rotation of a single log output file target.
- ETCD_LOG_ROTATION_CONFIG_JSON ({"maxsize": 1024, "maxage": 30, "maxbackups": 5, "localtime": false, "compress": false}): JSON configuration for log rotation: maxsize in MB, maxage in days, maxbackups count (0 means no limit), plus localtime and compress flags.
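
etcd reads each ETCD_* environment variable as the flag of the same name (ETCD_ prefix dropped, lower-cased, underscores replaced by dashes), so the manifest above is equivalent to passing the corresponding flags on the command line, for example:

# ETCD_LISTEN_CLIENT_URLS=https://10.0.1.6:2379,https://127.0.0.1:2379
# is equivalent to starting etcd with:
etcd --listen-client-urls=https://10.0.1.6:2379,https://127.0.0.1:2379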

etcd port communication architecture:

Client / API server access port (2379):
+-------------------+       +-------------------+       +-------------------+
|      ETCD 0       |       |      ETCD 1       |       |      ETCD 2       |
|     (master 0)    |<----->|     (master 1)    |<----->|     (master 2)    |
|    IP: 10.0.1.6   |       |    IP: 10.0.1.7   |       |    IP: 10.0.1.8   |
+-------------------+       +-------------------+       +-------------------+
  ^ client requests (2379)    ^ client requests (2379)    ^ client requests (2379)
  |                           |                           |
  v                           v                           v
+----------------------------------------------------------------------------+
|        Clients / API server access port (key-value reads and writes)        |
+----------------------------------------------------------------------------+


Cluster-internal peer communication port (2380):
+-------------------+     +-------------------+     +-------------------+
|      ETCD 0       |<--->|      ETCD 1       |<--->|      ETCD 2       |
|     (master 0)    |     |     (master 1)    |     |     (master 2)    |
|    IP: 10.0.1.6   |     |    IP: 10.0.1.7   |     |    IP: 10.0.1.8   |
|    2380 (peer)    |     |    2380 (peer)    |     |    2380 (peer)    |
+-------------------+     +-------------------+     +-------------------+

- Clients (including the API server) talk to any member over port 2379 for read/write requests.
- Members talk to each other over port 2380 for leader election, heartbeats, and log replication (Raft). A quick way to confirm the listening ports on a master node is shown below.
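
A minimal check on a master node, assuming ss from iproute2 is installed (netstat -ltnp works the same way):

# list TCP listening sockets owned by the etcd process
ss -ltnp | grep etcd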

Check the etcd cluster state:

ETCDCTL_API=3 etcdctl --endpoints=10.0.1.6:2379,10.0.1.7:2379,10.0.1.8:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key endpoint status -w table
ETCDCTL_API=3 etcdctl --endpoints=10.0.1.6:2379,10.0.1.7:2379,10.0.1.8:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key endpoint health -w table

Back up the etcd data (on the leader node):

ETCDCTL_API=3 etcdctl --endpoints=10.0.1.8:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key snapshot save /root/etcd_backup/etcd-backup-20250307.db
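
The leader is the member whose IS LEADER column is true in the endpoint status output above. Make sure the backup directory exists before saving the snapshot:

mkdir -p /root/etcd_backup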

Verify the snapshot (output fields: hash, revision, total keys, total size):
etcdctl snapshot status /root/etcd_backup/etcd-backup-20250307.db -w table
14c436f9, 93609, 2535, 29 MB

Change the port configuration of ETCD 0 on master 0

Edit /etc/kubernetes/manifests/etcd.yaml (the changed fragment is sketched after the list):

  • ETCD_LISTEN_CLIENT_URLS 2379 → 2378

  • ETCD_ADVERTISE_CLIENT_URLS 2379 → 2378

  • livenessProbe --endpoints 2379 → 2378
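
For reference, a sketch of the affected lines in the example etcd.yaml above after the edit on master 0 (10.0.1.6); only the client port changes, while ETCD_LISTEN_PEER_URLS stays on 2380:

    - name: ETCD_LISTEN_CLIENT_URLS
      value: https://10.0.1.6:2378,https://127.0.0.1:2378
    - name: ETCD_ADVERTISE_CLIENT_URLS
      value: https://10.0.1.6:2378
    livenessProbe:
      exec:
        command:
        - /bin/sh
        - -ec
        - ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2378 --cacert=/etc/kubernetes/pki/etcd/ca.crt
          --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key
          get test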

After the change, restart ETCD 0 and check that the pod is running. If the etcd pod never re-appears after the change, restart kubelet and check again.

# Check the etcd pod status after the change:
kubectl get pods -n kube-system -l component=etcd

# Check the etcd logs:
kubectl logs -n kube-system -l component=etcd
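
If the etcd pod never re-appears after the manifest edit, restarting kubelet on that master usually forces the static pod to be re-created (a sketch, assuming a systemd-managed kubelet):

systemctl restart kubelet
# follow the kubelet logs while the static pod is re-created
journalctl -u kubelet -f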

Update the kube-apiserver configuration on master 0/1/2

Example kube-apiserver.yaml:

apiVersion: v1
kind: Pod
metadata:
  namespace: kube-system
  name: kube-apiserver
  labels:
    component: kube-apiserver
    tier: control-plane
spec:
  containers:
  - name: kube-apiserver
    image: xxx/kube-apiserver-amd64:v1.22.x
    command:
    - kube-apiserver
    - --advertise-address=10.0.1.6
    - --secure-port=6443
    - --insecure-port=0
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --enable-bootstrap-token-auth=true
    - --authorization-mode=Node,RBAC
    - --feature-gates=RotateKubeletServerCertificate=true
    - --etcd-servers=https://10.0.1.8:2379,https://10.0.1.6:2379,https://10.0.1.7:2379
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --allow-privileged=true
    - --apiserver-count=3
    - --requestheader-username-headers=X-Remote-User
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-allowed-names=front-proxy-client,aggregator
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --service-cluster-ip-range=192.16.0.0/16
    - --service-node-port-range=30000-32000
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --log-dir=/logs
    - --logtostderr=false
    ......

In /etc/kubernetes/manifests/kube-apiserver.yaml, change the master 0 etcd endpoint in --etcd-servers from 2379 to 2378 (see the sketch below), then restart API server 0 and check that its pod is running:
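
A sketch of the changed flag, assuming master 0 is 10.0.1.6 as in the diagrams above; the entries for the other two members stay on 2379 for now:

    - --etcd-servers=https://10.0.1.8:2379,https://10.0.1.6:2378,https://10.0.1.7:2379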

kubectl get po -n kube-system -l component=kube-apiserver
kubectl delete pod -n kube-system apiserver-name
kubectl logs -n kube-system -l component=kube-apiserver
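
A quick way to confirm that the apiserver can still reach etcd after the change is the verbose readiness endpoint (a sketch; the individual etcd check is exposed on recent versions, including v1.22):

kubectl get --raw='/readyz?verbose' | grep etcd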

If everything looks good, repeat the same procedure node by node: switch the etcd client port on master 1 and master 2, and update the corresponding --etcd-servers entries of the apiservers on master 0/1/2. After each step, verify that both the etcd cluster and the apiservers are healthy before moving on; a mixed-port health check for use during the rollout is sketched below.
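
During the rollout the members temporarily listen on different client ports, so point etcdctl at each member's current port. For example, after only master 0 (10.0.1.6) has been switched:

ETCDCTL_API=3 etcdctl --endpoints=10.0.1.6:2378,10.0.1.7:2379,10.0.1.8:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key endpoint health -w table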

Check the etcd status after the full rollout

ETCDCTL_API=3 etcdctl --endpoints=10.0.1.6:2378,10.0.1.7:2378,10.0.1.8:2378 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key endpoint status -w table

The endpoint status output now shows the client port 2378 instead of 2379 for all members.



