Introduction to kubeadm
To simplify Kubernetes deployment and make it more approachable, the community created a tool dedicated to installing Kubernetes on a cluster. It is called kubeadm, short for "Kubernetes admin".
kubeadm packages the various Kubernetes components as containers and images. Its goal is not single-machine deployment: it is designed to deploy Kubernetes easily in a cluster environment and to bring that cluster close to, or up to, production quality.
Lab environment architecture
| Role        | Hostname  | IP           | Recommended spec |
| Master node | master-01 | 172.19.16.3  | 2C4G             |
| Worker node | worker-01 | 172.19.16.11 | 2C2G             |
A multi-node cluster requires two or more servers. To keep things simple we use the minimum, so this Kubernetes cluster has just two hosts: one Master node and one Worker node. Of course, once you have fully mastered kubeadm, you can add more nodes to the cluster.
The Master node runs apiserver, etcd, scheduler, controller-manager, and the other components that manage the whole cluster, so it needs a relatively high spec: at least 2 CPU cores and 4 GB of memory.
The Worker node does no management work and only runs business applications, so a lower spec is fine; to save resources we give it 2 CPU cores and 2 GB of memory.
To better simulate a production environment, there is one more auxiliary server outside the Kubernetes cluster. It is called Console, and it is where we install the command-line tool kubectl; all management commands for the Kubernetes cluster are issued from this host. This matches real-world practice: for security reasons, you should log in to the cluster hosts directly as little as possible once they are deployed. Note that Console is only a logical concept; it does not have to be a separate machine, and you can perfectly well reuse a Master/Worker node as the console.
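As a minimal sketch of setting up such a Console host, assuming kubectl is already installed there, root SSH access to the master is available, and the cluster has been initialized (see "Configure kubeconfig" below):
# Run on the Console host, not on the cluster nodes
mkdir -p $HOME/.kube
scp root@172.19.16.3:/etc/kubernetes/admin.conf $HOME/.kube/config
kubectl get nodes    # from here on, manage the cluster from this host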
Both Master and Worker
Set the hostname
hostnamectl set-hostname master-01   # on the worker: hostnamectl set-hostname worker-01
cat >> /etc/hosts << EOF
172.19.16.3 master-01
172.19.16.11 worker-01
EOF
Disable the firewall
systemctl disable firewalld; systemctl stop firewalld
Disable SELinux
setenforce 0
sed -i --follow-symlinks 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
Disable swap
swapoff -a; sed -i '/swap/d' /etc/fstab
Adjust network settings
Pass bridged IPv4 and IPv6 traffic to the iptables chains:
cat >> /etc/sysctl.d/kubernetes.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
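Note: these two bridge-nf-call keys only exist while the br_netfilter kernel module is loaded. If sysctl --system reports them as unknown keys, load the module first and make it persistent across reboots (a small addition not in the original steps):
modprobe br_netfilter
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF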
Install Docker
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-20.10.24-3.el7.x86_64 docker-ce-cli-20.10.24-3.el7.x86_64 containerd.io docker-buildx-plugin-0.10.5-1.el7.x86_64 docker-compose-plugin
systemctl enable --now docker
docker -v
systemctl status docker
Modify the configuration file /etc/docker/daemon.json
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker
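The reason for this change: kubeadm v1.22+ defaults the kubelet to the systemd cgroup driver, and Docker must use the same driver. To verify the setting took effect:
docker info | grep -i 'cgroup driver'
# expected: Cgroup Driver: systemd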
Install Kubernetes
Add the yum repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install the Kubernetes components
yum install -y kubeadm-1.23.15-0 kubelet-1.23.15-0 kubectl-1.23.15-0
Enable the kubelet service
systemctl enable --now kubelet
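At this point the kubelet starts but keeps restarting every few seconds, because no cluster configuration exists yet; this is expected until kubeadm init (or kubeadm join) runs. A quick sanity check:
systemctl status kubelet   # "activating (auto-restart)" is normal before kubeadm init
kubeadm version            # confirm v1.23.15 was installed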
On Master
Initialize the Kubernetes cluster
Generate the default kubeadm configuration
kubeadm config print init-defaults > kubeadm-config.yaml
Modify the following settings
localAPIEndpoint:
  advertiseAddress: 172.19.16.3   # change to the master node's IP address
  bindPort: 6443
imageRepository: registry.aliyuncs.com/google_containers   # use the Aliyun image mirror
kubernetesVersion: 1.23.15        # the version to install
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16        # add the pod network CIDR
  serviceSubnet: 10.96.0.0/12
The complete kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.19.16.3
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: master-01
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.23.15
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
It is recommended to pull the required images before installing:
[root@master-01 ~]# kubeadm config --config=kubeadm-config.yaml images list
registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.15
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.15
registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.15
registry.aliyuncs.com/google_containers/kube-proxy:v1.23.15
registry.aliyuncs.com/google_containers/pause:3.6
registry.aliyuncs.com/google_containers/etcd:3.5.6-0
registry.aliyuncs.com/google_containers/coredns:v1.8.6

[root@master-01 ~]# kubeadm config --config=kubeadm-config.yaml images pull
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.15
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.15
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.15
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.15
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.6-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6
Initialize the cluster
kubeadm init --config=kubeadm-config.yaml
Output like the following means the cluster initialized successfully:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.19.16.3:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:69cb0f91095a702bfec5c18209520e6e770682d16d33441b7d26c28bb7584f23
Configure kubeconfig
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
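To confirm that kubectl can now reach the API server, a quick check:
kubectl cluster-info
# should report the control plane running at https://172.19.16.3:6443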
Install the network plugin
https://docs.tigera.io/calico/3.24/about
curl https://projectcalico.docs.tigera.io/archive/v3.24/manifests/calico.yaml -O
kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f calico.yaml
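One thing worth checking before applying the manifest: in calico.yaml the CALICO_IPV4POOL_CIDR variable is commented out, and Calico then falls back to its default pool of 192.168.0.0/16. Since this cluster's podSubnet is 10.244.0.0/16, consider uncommenting and aligning it (a suggested tweak, not part of the original steps):
# In calico.yaml, uncomment and edit the env var on the calico-node container:
#   - name: CALICO_IPV4POOL_CIDR
#     value: "10.244.0.0/16"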
Check the cluster status
# Watch the cluster status
kubectl get pods -A -w

[root@master-01 ~]# kubectl get pods -A -w
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7b8458594b-jcdqf   1/1     Running   0          56s
kube-system   calico-node-qvcf4                          1/1     Running   0          56s
kube-system   coredns-6d8c4cb4d-55zjz                    1/1     Running   0          4m46s
kube-system   coredns-6d8c4cb4d-d4r8z                    1/1     Running   0          4m46s
kube-system   etcd-node                                  1/1     Running   0          5m
kube-system   kube-apiserver-node                        1/1     Running   0          5m1s
kube-system   kube-controller-manager-node               1/1     Running   0          5m2s
kube-system   kube-proxy-bwbds                           1/1     Running   0          4m46s
kube-system   kube-scheduler-node                        1/1     Running   0          5m

[root@master-01 ~]# kubectl get nodes
NAME   STATUS   ROLES                  AGE   VERSION
node   Ready    control-plane,master   29m   v1.23.15
On Worker
Join the cluster
kubeadm join 172.19.16.3:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:c0967195798b902eec0c8cffd3f2f2c8cb2bd2c416afc1e2cd4653b1d34dcd30
Verify the cluster
Check node status
[root@master-01 ~]# kubectl get nodes
NAME        STATUS   ROLES                  AGE   VERSION
node        Ready    control-plane,master   33m   v1.23.15
worker-01   Ready    <none>                 32m   v1.23.15
Check component status
[root@master-01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
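Because ComponentStatus is deprecated, the API server's health endpoints are a more future-proof check (one alternative among several):
kubectl get --raw='/readyz?verbose'   # per-check readiness of the API server
kubectl get --raw='/livez'            # prints "ok" when the API server is live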
Troubleshooting
Check system logs
tail -f /var/log/messages
journalctl -f -u kubelet
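A few more places worth looking when a node or pod misbehaves (a sketch; adjust names and labels to your cluster):
kubectl describe node worker-01                        # conditions, taints, resource pressure
kubectl -n kube-system logs -l k8s-app=calico-node     # CNI pod logs
kubectl get events -A --sort-by=.metadata.creationTimestamp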
Reset the cluster
You can tear the cluster down and reinstall it; a sketch follows below.
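A sketch of a full teardown on each node, assuming you also want to clear the CNI and iptables state that kubeadm reset leaves behind:
kubeadm reset -f                           # revert changes made by kubeadm init / kubeadm join
rm -rf /etc/cni/net.d $HOME/.kube/config   # remove CNI config and the local kubeconfig
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X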
Appendix:
Print the join command for worker nodes
[root@master-01 ~]# kubeadm token create --print-join-command
kubeadm join 172.19.16.3:6443 --token ymad3e.7pmtagcwxm5uts1d --discovery-token-ca-cert-hash sha256:c0967195798b902eec0c8cffd3f2f2c8cb2bd2c416afc1e2cd4653b1d34dcd30
Images required on the Master node
[root@master-01 ~]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.23.15   8c0fe0bea6f9   10 months ago   135MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.23.15   9dbdbaf158f6   10 months ago   112MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.23.15   7babf2612aef   10 months ago   53.5MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.23.15   891ffdfb093d   10 months ago   125MB
registry.aliyuncs.com/google_containers/etcd                      3.5.6-0    fce326961ae2   11 months ago   299MB
calico/cni                                                        v3.24.5    628dd7088041   11 months ago   198MB
calico/node                                                       v3.24.5    54637cb36d4a   11 months ago   226MB
registry.aliyuncs.com/google_containers/coredns                   v1.8.6     a4ca41631cc7   2 years ago     46.8MB
registry.aliyuncs.com/google_containers/pause                     3.6        6270bb605e12   2 years ago     683kB
Images required on the Worker node
[root@worker-01 ~]# docker images
REPOSITORY                                           TAG        IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-proxy   v1.23.15   9dbdbaf158f6   10 months ago   112MB
calico/kube-controllers                              v3.24.5    38b76de417d5   11 months ago   71.4MB
calico/cni                                           v3.24.5    628dd7088041   11 months ago   198MB
calico/node                                          v3.24.5    54637cb36d4a   11 months ago   226MB
registry.aliyuncs.com/google_containers/coredns      v1.8.6     a4ca41631cc7   2 years ago     46.8MB
registry.aliyuncs.com/google_containers/pause        3.6        6270bb605e12   2 years ago     683kB
kubeadm installation log
[init] Using Kubernetes version: v1.23.15
[preflight] Running pre-flight checks
    [WARNING Hostname]: hostname "node" could not be reached
    [WARNING Hostname]: hostname "node": lookup node on 183.60.82.98:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node] and IPs [10.96.0.1 172.19.16.3]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node] and IPs [172.19.16.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node] and IPs [172.19.16.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.004151 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.19.16.3:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:c0967195798b902eec0c8cffd3f2f2c8cb2bd2c416afc1e2cd4653b1d34dcd30
Commands supported by kubeadm
kubeadm init        bootstrap a control-plane node
kubeadm join        bootstrap a worker node and join it to the cluster
kubeadm upgrade     upgrade a Kubernetes cluster to a newer version
kubeadm config      if you initialized your cluster with kubeadm v1.7.x or lower, configure it for kubeadm upgrade
kubeadm token       manage tokens used by kubeadm join
kubeadm reset       revert any changes made to the host by kubeadm init or kubeadm join
kubeadm certs       manage Kubernetes certificates
kubeadm kubeconfig  manage kubeconfig files
kubeadm version     print the kubeadm version
kubeadm alpha       preview a set of features made available for gathering feedback from the community
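As an illustration of kubeadm upgrade, a patch-level control-plane upgrade typically looks like this (the target version here is illustrative, not from this article):
yum install -y kubeadm-1.23.16-0
kubeadm upgrade plan             # shows available versions and what would change
kubeadm upgrade apply v1.23.16
yum install -y kubelet-1.23.16-0 kubectl-1.23.16-0
systemctl daemon-reload && systemctl restart kubelet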
Enable ipvs mode for kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs   # kube-proxy mode
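To switch an existing cluster to ipvs, this snippet goes into the kube-proxy ConfigMap. The sketch below assumes CentOS 7 and the k8s-app=kube-proxy label that kubeadm applies by default; when installing from scratch, you can instead append the KubeProxyConfiguration document (separated by ---) to kubeadm-config.yaml before kubeadm init.
# Prerequisites: ipvs kernel modules and userland tools
yum install -y ipset ipvsadm
modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4   # nf_conntrack on kernels >= 4.19

# Set mode: "ipvs" in the kube-proxy ConfigMap, then recreate the pods
kubectl -n kube-system edit configmap kube-proxy
kubectl -n kube-system delete pods -l k8s-app=kube-proxy

# Verify: the kube-proxy logs should mention "Using ipvs Proxier"
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs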