Learning k8s, Part 1: Installing k8s with kubeadm

仙士可 · Posted on 2023/06/30 12:19:42

Preface

Before studying k8s as a whole, it helps to first get a working k8s cluster up and running, and then dig deeper from that working result; having something concrete keeps the learning motivated. This post records my own k8s installation steps.

Prerequisites:

One Ubuntu server (a virtual machine is fine)

k8s environment configuration

hosts configuration

First, define a hosts entry for the server so that it can be referenced by name instead of by IP:

192.168.192.9 master

Note: if you add more nodes to the cluster later, you will need to add hosts entries for those machines as well.
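If you do add worker nodes later, the extra entries would look something like this (the node names and IPs below are purely hypothetical; use your own):

192.168.192.10 node1
192.168.192.11 node2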

Hostname change (optional)

Edit /etc/hostname and change it to master.
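Alternatively, on systemd-based systems you can change the hostname without editing the file by hand (a standard command, not something specific to this setup):

hostnamectl set-hostname master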

Disable the firewall

k8s manages its own iptables/firewall rules, which conflict with the system firewall, so turn the system firewall off:

sudo ufw disable
sudo systemctl stop ufw

Disable SELinux

Disable SELinux so that containers can access the host filesystem. (A fresh Ubuntu install does not appear to ship SELinux at all, so this step can usually be skipped; see the note below for distributions that do have it.)
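For reference, on distributions that do ship SELinux (CentOS/RHEL, for example), the usual way to relax it is shown below; this is only a sketch for that case and is not needed on stock Ubuntu:

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config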

Disable swap

Swap uses the disk as memory when RAM runs low, but it is so slow that it can effectively hang the server; by default the kubelet also refuses to run with swap enabled.

sudo swapoff -a
vi /etc/fstab  # comment out the swap line by prefixing it with #
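If you prefer not to edit the file by hand, an equivalent one-liner (assuming a standard fstab layout where the swap entry is an uncommented line containing the word swap) is:

sudo sed -Ei '/\sswap\s/ s/^/#/' /etc/fstab
free -h   # after swapoff, the Swap line should show 0B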

Network parameters

vi /etc/sysctl.d/k8s.conf  # put the following 3 lines into this file

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
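modprobe only loads br_netfilter for the current boot; to have it loaded automatically after a reboot, a common companion step is to register it in modules-load.d:

echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf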

Install Docker

apt-get install docker.io -y
# start docker and enable it at boot
sudo systemctl start docker
sudo systemctl enable docker

# switch docker's cgroup driver to systemd, add a registry mirror and set the storage driver
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "registry-mirrors": [
    "https://reg-mirror.qiniu.com/"
  ],
  "storage-driver": "overlay2"
}
EOF
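Docker only picks up the new daemon.json after a restart, so restart it and check that the cgroup driver is now systemd:

sudo systemctl daemon-reload
sudo systemctl restart docker
docker info | grep -i cgroup   # should report: Cgroup Driver: systemd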

Configure the Ubuntu k8s apt source

# enable https transport for apt
apt-get update && apt-get install -y apt-transport-https
# download the gpg key
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
# add the k8s apt source
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
# refresh the package index
apt-get update

Install kubeadm, kubelet and kubectl

Note: it is best to install a version earlier than 1.24 here (1.24 removed the built-in dockershim, so the plain Docker setup above no longer works out of the box); newer versions can hit many installation problems that sap the motivation to learn. I chose 1.23.10.

apt list --all-versions package_name  # check which versions of a package are available

apt-get install -y kubeadm=1.23.10-00 kubelet=1.23.10-00 kubectl=1.23.10-00

# hold these packages so apt upgrade ignores them; unhold before upgrading k8s, then hold again afterwards
apt-mark hold kubelet kubeadm kubectl
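You can quickly confirm that the pinned versions were installed:

kubeadm version
kubelet --version
kubectl version --client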

Initialize the k8s cluster with kubeadm

kubeadm init \
--apiserver-advertise-address 192.168.192.9 \
--apiserver-bind-port 6443 \
--pod-network-cidr 10.244.0.0/16 \
--image-repository registry.aliyuncs.com/google_containers

If initialization fails, you will have to search around and troubleshoot the problem yourself.
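One generally useful fallback: if kubeadm init fails partway through, you can usually wipe the partially created state with kubeadm reset, fix the underlying problem, and then re-run the init command above (what else needs cleaning up depends on how far init got):

kubeadm reset -f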

After successful initialization, you will see output like this:

o/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: mdutmg.pbecp9mqcowc4b0u
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.192.9:6443 --token mdutmg.pbecp9mqcowc4b0u \
        --discovery-token-ca-cert-hash sha256:61f8d9b13b94a3c7eff88e25faf1c873cfd559d1ee2f2988009ac85de11ec730

Save the kubeadm join command at the end of the output; you will need it later to join worker nodes to the cluster.
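If you lose this output, the join command can be regenerated at any time on the control-plane node:

kubeadm token create --print-join-command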

Check whether the cluster was deployed successfully:

 kubectl get nodes

If it reports: The connection to the server localhost:8080 was refused - did you specify the right host or port?

that is because kubectl has not been pointed at the cluster's admin kubeconfig yet:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile

After that, the command works normally.

Here you can see that the node's STATUS is NotReady, because the network plugin has not been installed yet.

Install a Pod network (the flannel CNI plugin)

wget
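The manifest normally fetched here is kube-flannel.yml from the flannel project; assuming that is what was downloaded, the command would look roughly like this (check the URL against the flannel release you are targeting):

wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml   # assumed manifest URL; verify against the flannel docs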

You can then inspect the kube-flannel configuration in the downloaded manifest.

Pull the image manually in advance to avoid failures during deployment:

root@test02:/home/tioncico# docker pull docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2

Deploy the flannel plugin
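Deployment is just applying that manifest with kubectl (assuming the kube-flannel.yml downloaded above is in the current directory):

kubectl apply -f kube-flannel.yml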

root@test02:/home/tioncico# kubectl get pod --all-namespaces
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kube-flannel           kube-flannel-ds-rfx4q                        1/1     Running   0          4s
kube-system            coredns-6d8c4cb4d-575sw                      1/1     Running   0          65m
kube-system            coredns-6d8c4cb4d-p4nv8                      1/1     Running   0          65m
kube-system            etcd-master                                  1/1     Running   0          66m
kube-system            kube-apiserver-master                        1/1     Running   0          66m
kube-system            kube-controller-manager-master               1/1     Running   0          66m
kube-system            kube-proxy-n2n4n                             1/1     Running   0          65m
kube-system            kube-scheduler-master                        1/1     Running   0          66m
kubernetes-dashboard   dashboard-metrics-scraper-6f669b9c9b-86hm6   1/1     Running   0          28m
kubernetes-dashboard   kubernetes-dashboard-54c5fb4776-qb2lf        1/1     Running   0          28m
root@test02:/home/tioncico#

As you can see, the nodes and pods are all healthy:

root@test02:/home/tioncico# kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kube-flannel           kube-flannel-ds-rfx4q                        1/1     Running   0          58s
kube-system            coredns-6d8c4cb4d-575sw                      1/1     Running   0          66m
kube-system            coredns-6d8c4cb4d-p4nv8                      1/1     Running   0          66m
kube-system            etcd-master                                  1/1     Running   0          67m
kube-system            kube-apiserver-master                        1/1     Running   0          67m
kube-system            kube-controller-manager-master               1/1     Running   0          67m
kube-system            kube-proxy-n2n4n                             1/1     Running   0          66m
kube-system            kube-scheduler-master                        1/1     Running   0          67m
kubernetes-dashboard   dashboard-metrics-scraper-6f669b9c9b-86hm6   1/1     Running   0          29m
kubernetes-dashboard   kubernetes-dashboard-54c5fb4776-qb2lf        1/1     Running   0          29m
root@test02:/home/tioncico# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   67m   v1.23.10
root@test02:/home/tioncico#