Adding Master and Node (Worker) Nodes to a k8s Cluster

Yuchuan, published 2022/08/05 16:41:31
[Abstract] Adding master nodes to a k8s cluster

1. Adding a node (worker)
This part is simple: generate the join command on an existing master, then run it on the new node (a quick verification sketch follows the example below).

kubeadm token create --print-join-command
kubeadm join 192.168.1.2:6443 --token y9oz6v.vc9n8qlthfjorc4l --discovery-token-ca-cert-hash sha256:8bf00e03624031ed8354872dc7a2d6462d3b925807c16544b7243922fc9c209c
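Once the printed command finishes on the worker, a quick check from any master confirms the registration (a minimal sketch; the node name will be the worker's hostname):

kubectl get nodes
# the new node typically shows NotReady until the CNI pods (calico in this cluster) start on it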

2. Adding a master node
First run the same command that prints the node join command, on an existing master:

[root@k8s-master0 ~]# kubeadm token create --print-join-command
kubeadm join master0:6443 --token 82w0ib.lgltem7jldq30q8l     --discovery-token-ca-cert-hash sha256:7002b790a43be0421dde1c051f56620d7fee87f1f98316f54eb718288f88d4f8 
[root@k8s-master0 ~]#

Then generate the certificate key:

[root@k8s-master0 ~]# kubeadm init phase upload-certs --upload-certs
I0805 07:54:17.314637  486095 version.go:254] remote version is much newer: v1.24.3; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
a2fa55f2f331c6d351236b9de7140d06b83107161f5ac657a4bb4b67e0c695fb
[root@k8s-master0 ~]#

Then append --control-plane --certificate-key <key> to the join command printed earlier to get the master join command. Note that the uploaded certificates (and the key) expire after two hours by default, so rerun the upload-certs step if needed. The combined command looks like this (values will differ in your cluster):

kubeadm join 192.168.1.2:6443 --token y9oz6v.vc9n8qlthfjorc4l --discovery-token-ca-cert-hash sha256:8bf00e03624031ed8354872dc7a2d6462d3b925807c16544b7243922fc9c209c --control-plane --certificate-key  06802737a89f229bcae1ae15e46bb46d85dcb78c2f7a71d9963127f8dbb53f9c
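If you prefer to do this in one step, a rough sketch run on an existing master (it assumes the certificate key is the last stdout line of the upload-certs output, as in the run above):

JOIN_CMD=$(kubeadm token create --print-join-command)
CERT_KEY=$(kubeadm init phase upload-certs --upload-certs 2>/dev/null | tail -n 1)
# print the full control-plane join command to run on the new master
echo "${JOIN_CMD} --control-plane --certificate-key ${CERT_KEY}"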

This node had already been joined before, so the first attempt fails:

[root@k8s-master1 ~]# kubeadm join master0:6443 --token 82w0ib.lgltem7jldq30q8l --discovery-token-ca-cert-hash sha256:7002b790a43be0421dde1c051f56620d7fee87f1f98316f54eb718288f88d4f8 --control-plane --certificate-key a2fa55f2f331c6d351236b9de7140d06b83107161f5ac657a4bb4b67e0c695fb
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-master1 ~]#

Troubleshooting: the leftover kubelet.conf and the occupied port 10250 come from the earlier join, so clean the node with kubeadm reset and retry:

[root@k8s-master1 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0805 07:58:52.130573  380128 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@k8s-master1 ~]#
[root@k8s-master1 ~]# kubeadm join master0:6443 --token 82w0ib.lgltem7jldq30q8l --discovery-token-ca-cert-hash sha256:7002b790a43be0421dde1c051f56620d7fee87f1f98316f54eb718288f88d4f8 --control-plane --certificate-key a2fa55f2f331c6d351236b9de7140d06b83107161f5ac657a4bb4b67e0c695fb
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master0] and IPs [10.96.0.1 172.31.10.2]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master1 localhost] and IPs [172.31.10.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master1 localhost] and IPs [172.31.10.2 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
error execution phase kubelet-start: a Node with name "k8s-master1" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-master1 ~]#
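The error message spells out the fix: delete the existing Node object or rename the joining node. Deleting the stale object from a healthy master would look like this (that step is not shown in this transcript; the node was reset and re-joined instead):

kubectl delete node k8s-master1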
[root@k8s-master1 ~]# kubeadm join master0:6443 --token 82w0ib.lgltem7jldq30q8l --discovery-token-ca-cert-hash sha256:7002b790a43be0421dde1c051f56620d7fee87f1f98316f54eb718288f88d4f8 --control-plane --certificate-key a2fa55f2f331c6d351236b9de7140d06b83107161f5ac657a4bb4b67e0c695fb
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-master1 ~]#
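Instead of a full reset, you could also clear just the directory this preflight check complains about, since the previous attempt had already written static Pod manifests there (a hedged alternative; the transcript below simply runs kubeadm reset again):

rm -f /etc/kubernetes/manifests/*.yaml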
[root@k8s-master1 ~]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0805 08:14:13.826449  381196 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get node registration: failed to get node name from kubelet config: open /etc/kubernetes/kubelet.conf: no such file or directory
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0805 08:14:15.967704  381196 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@k8s-master1 ~]#
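As the reset output warns, kubeadm reset does not touch the CNI configuration, iptables/IPVS rules, or kubeconfig files. If a retry still misbehaves, a fuller manual cleanup based on those messages might look like this (run only what applies to your setup):

rm -rf /etc/cni/net.d
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear          # only if kube-proxy runs in IPVS mode
rm -f $HOME/.kube/config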

Finally, the join succeeds:

[root@k8s-master1 ~]# kubeadm join master0:6443 --token 82w0ib.lgltem7jldq30q8l --discovery-token-ca-cert-hash sha256:7002b790a43be0421dde1c051f56620d7fee87f1f98316f54eb718288f88d4f8 --control-plane --certificate-key a2fa55f2f331c6d351236b9de7140d06b83107161f5ac657a4bb4b67e0c695fb
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master1 localhost] and IPs [172.31.10.2 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master1 localhost] and IPs [172.31.10.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master0] and IPs [10.96.0.1 172.31.10.2]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@k8s-master1 ~]# 
[root@k8s-master0 ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE     VERSION
k8s-master0   Ready    control-plane,master   5h34m   v1.20.9
k8s-master1   Ready    control-plane,master   3h53m   v1.20.9
k8s-master2   Ready    control-plane,master   3h53m   v1.20.9
k8s-node0     Ready    <none>                 4h42m   v1.20.9
k8s-node1     Ready    <none>                 4h42m   v1.20.9
k8s-node2     Ready    <none>                 4h41m   v1.20.9
[root@k8s-master0 ~]#
[root@k8s-master0 ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-577f77cb5c-tb2fj   1/1     Running   0          5h17m
kube-system   calico-node-466nk                          1/1     Running   0          5h18m
kube-system   calico-node-684kn                          1/1     Running   0          5h6m
kube-system   calico-node-fttph                          1/1     Running   0          5h7m
kube-system   calico-node-pmjhq                          1/1     Running   0          4h18m
kube-system   calico-node-slxwx                          1/1     Running   0          5h6m
kube-system   calico-node-xm4nq                          1/1     Running   0          4h17m
kube-system   coredns-5897cd56c4-94qvc                   1/1     Running   0          5h58m
kube-system   coredns-5897cd56c4-cdr66                   1/1     Running   0          5h58m
kube-system   etcd-k8s-master0                           1/1     Running   0          5h58m
kube-system   etcd-k8s-master1                           1/1     Running   0          24m
kube-system   etcd-k8s-master2                           1/1     Running   0          29m
kube-system   kube-apiserver-k8s-master0                 1/1     Running   0          5h58m
kube-system   kube-apiserver-k8s-master1                 1/1     Running   0          24m
kube-system   kube-apiserver-k8s-master2                 1/1     Running   0          29m
kube-system   kube-controller-manager-k8s-master0        1/1     Running   1          5h58m
kube-system   kube-controller-manager-k8s-master1        1/1     Running   0          24m
kube-system   kube-controller-manager-k8s-master2        1/1     Running   0          29m
kube-system   kube-proxy-5xmxs                           1/1     Running   0          5h58m
kube-system   kube-proxy-745kw                           1/1     Running   0          5h6m
kube-system   kube-proxy-bld6b                           1/1     Running   0          5h7m
kube-system   kube-proxy-n4whr                           1/1     Running   0          4h17m
kube-system   kube-proxy-t6bxk                           1/1     Running   0          4h18m
kube-system   kube-proxy-x4w82                           1/1     Running   0          5h6m
kube-system   kube-scheduler-k8s-master0                 1/1     Running   1          5h58m
kube-system   kube-scheduler-k8s-master1                 1/1     Running   0          24m
kube-system   kube-scheduler-k8s-master2                 1/1     Running   0          29m
[root@k8s-master0 ~]# 
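Optionally, you can confirm that all three masters joined the stacked etcd cluster. A sketch that runs etcdctl inside one of the etcd pods listed above (standard etcd v3 TLS flags; the certificate paths are the kubeadm defaults):

kubectl -n kube-system exec etcd-k8s-master0 -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list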