[Cloud Native] Setting Up a Kubernetes (k8s) Environment

Posted by Yuchuan on 2022/08/04 23:41:37
[Abstract] Installing a k8s environment with Docker and kubeadm.

Install Docker (the container runtime)

0. Create the VPC network and the ECS hosts

HostName MAC IPv4
k8smaster0 52:54:00:00:00:80 192.168.122.154/24
k8smaster1 52:54:00:00:00:81 192.168.122.155/24
k8smaster2 52:54:00:00:00:82 192.168.122.156/24
k8snode1 52:54:00:00:00:83 192.168.122.157/24
k8snode2 52:54:00:00:00:84 192.168.122.158/24
k8snode3 52:54:00:00:00:85 192.168.122.159/24

1. Remove any previously installed Docker packages

sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

2. Configure the yum repository

sudo yum install -y yum-utils
sudo yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3. Install a specific Docker version

[root@k8s-master0 ~]# yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7  containerd.io-1.4.6
Docker CE Stable - x86_64                                               2.1 kB/s | 7.1 kB     00:03    
No match for argument: docker-ce-20.10.7
No match for argument: docker-ce-cli-20.10.7
No match for argument: containerd.io-1.4.6
Error: Unable to find a match: docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6
[root@k8s-master0 ~]#

The requested versions are not in this repository, so list the Docker versions that are available:

[root@k8s-master0 ~]# yum list docker-ce --showduplicates | sort -r
Last metadata expiration check: 0:08:09 ago on Thu 04 Aug 2022 01:50:03 PM UTC.
docker-ce.x86_64                3:20.10.17-3.el9                docker-ce-stable
docker-ce.x86_64                3:20.10.16-3.el9                docker-ce-stable
docker-ce.x86_64                3:20.10.15-3.el9                docker-ce-stable
Available Packages
[root@k8s-master0 ~]# yum list docker-ce-cli --showduplicates | sort -r
Last metadata expiration check: 0:09:24 ago on Thu 04 Aug 2022 01:50:03 PM UTC.
docker-ce-cli.x86_64              1:20.10.17-3.el9              docker-ce-stable
docker-ce-cli.x86_64              1:20.10.16-3.el9              docker-ce-stable
docker-ce-cli.x86_64              1:20.10.15-3.el9              docker-ce-stable
Available Packages
[root@k8s-master0 ~]# yum list containerd --showduplicates | sort -r
Error: No matching Packages to list
Last metadata expiration check: 0:09:52 ago on Thu 04 Aug 2022 01:50:03 PM UTC.
[root@k8s-master0 ~]# yum list containerd.io --showduplicates | sort -r
Last metadata expiration check: 0:10:05 ago on Thu 04 Aug 2022 01:50:03 PM UTC.
containerd.io.x86_64               1.6.6-3.1.el9                docker-ce-stable
containerd.io.x86_64               1.6.4-3.1.el9                docker-ce-stable
Available Packages

Install the versions that the listing shows are available.

[root@k8s-master0 ~]# yum install -y docker-ce-20.10.17 docker-ce-cli-20.10.17 containerd.io-1.6.6
Last metadata expiration check: 0:11:08 ago on Thu 04 Aug 2022 01:50:03 PM UTC.
Dependencies resolved.
========================================================================================================
 Package                            Architecture Version                   Repository              Size
========================================================================================================
Installing:
 containerd.io                      x86_64       1.6.6-3.1.el9             docker-ce-stable        32 M
 docker-ce                          x86_64       3:20.10.17-3.el9          docker-ce-stable        21 M
 docker-ce-cli                      x86_64       1:20.10.17-3.el9          docker-ce-stable        29 M
Installing dependencies:

4. Start Docker

#Check the Docker service status
[root@k8s-master0 ~]# systemctl status docker
○ docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
     Active: inactive (dead)
TriggeredBy: ○ docker.socket
       Docs: https://docs.docker.com
[root@k8s-master0 ~]#

#Start Docker now and also enable it at boot
[root@k8s-master0 ~]# systemctl enable docker --now
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

#Check the Docker status again
[root@k8s-master0 ~]# systemctl status docker
● docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
     Active: active (running) since Thu 2022-08-04 14:22:09 UTC; 3s ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 191325 (dockerd)
      Tasks: 9
     Memory: 33.9M
        CPU: 341ms
     CGroup: /system.slice/docker.service
             └─191325 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
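
The article goes straight from starting Docker to installing kubeadm. One optional step it skips, which the kubeadm documentation recommends when Docker is the runtime, is switching Docker to the systemd cgroup driver. The snippet below is only a sketch of that assumed extra step, not something the original author ran; if you use it, apply it before kubeadm init so kubeadm can detect the driver.

#optional: use the systemd cgroup driver (assumed extra step, not in the original article)
sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker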

3. Create the cluster with kubeadm

Refer to the Docker installation above; install Docker on every machine first.

1. Install kubeadm

  • A compatible Linux host. The Kubernetes project provides generic instructions for Debian-based and Red Hat-based Linux distributions, as well as for distributions without a package manager.
  • 2 GB or more of RAM per machine (less leaves little memory for your applications).
  • 2 or more CPU cores.
  • Full network connectivity between all machines in the cluster (a public or a private network is fine).
    • Configure firewall rules to allow the required traffic.
  • No duplicate hostnames, MAC addresses, or product_uuid values among the nodes (a quick check is shown after this list; see the official kubeadm documentation for details).
    • Set a different hostname on each machine.
  • Certain ports must be open on the machines (see the official kubeadm documentation for details).
    • Allow the nodes to reach each other on the internal network.
  • Swap disabled. You MUST disable swap for the kubelet to work properly.
    • Disable it permanently.
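
A quick way to verify the hostname / MAC address / product_uuid requirement; these commands follow the standard kubeadm preflight advice and are not part of the original article:

#run on every node and compare the results
hostname
ip link show                               # MAC addresses of the network interfaces
sudo cat /sys/class/dmi/id/product_uuid    # product_uuid of the machine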

1. Base environment

Run the following on every machine.

#Set the hostname on each machine
hostnamectl set-hostname xxxx


# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

#Disable swap
swapoff -a  
sed -ri 's/.*swap.*/#&/' /etc/fstab

#Allow iptables to see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
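
Later in this article, kubeadm init fails the preflight check with [ERROR FileContent--proc-sys-net-ipv4-ip_forward] because IP forwarding is off; the quick fix used there (echo 1 > /proc/sys/net/ipv4/ip_forward) does not survive a reboot. A persistent variant, added here as an assumption rather than an original step, is to put the setting in the same sysctl file:

#enable IP forwarding permanently (assumed extra step, not in the original article)
cat <<EOF | sudo tee -a /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system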

2. Install kubelet, kubeadm, and kubectl

1) List the available kubelet, kubeadm, and kubectl versions and pick the one you need. The first column is the package name, the second is the version.

[root@k8s-master0 ~]# yum list kubelet kubeadm kubectl --showduplicates | sort -r
Last metadata expiration check: 0:58:28 ago on Thu 04 Aug 2022 01:50:03 PM UTC.
kubectl.x86_64                     1.24.3-0                     google-cloud-sdk
Available Packages
[root@k8s-master0 ~]#
Only one version shows up (from the google-cloud-sdk repo), so configure the Kubernetes yum repository (Aliyun mirror) and list again:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
   http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
[root@k8s-master0 ~]# yum list kubelet kubeadm kubectl --showduplicates | sort -r
Kubernetes                                       63 kB/s | 149 kB     00:02    
kubectl.x86_64                     1.24.3-0                     google-cloud-sdk
Available Packages
[root@k8s-master0 ~]#
[root@k8s-master0 ~]# yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes
Last metadata expiration check: 0:04:05 ago on Thu 04 Aug 2022 02:50:20 PM UTC.
Dependencies resolved.
========================================================================================================
 Package                          Architecture     Version                   Repository            Size
========================================================================================================
Installing:
 kubeadm                          x86_64           1.20.9-0                  kubernetes           8.3 M
 kubectl                          x86_64           1.20.9-0                  kubernetes           8.5 M
 kubelet                          x86_64           1.20.9-0                  kubernetes            20 M
Installing dependencies:
 conntrack-tools        
[root@k8s-master1 ~]# systemctl enable --now kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
[root@k8s-master1 ~]#

kubelet now restarts every few seconds: it sits in a crash loop waiting for instructions from kubeadm.

[root@k8s-master0 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
    Drop-In: /usr/lib/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Thu 2022-08-04 14:56:47 UTC; 6ms ago
       Docs: https://kubernetes.io/docs/
   Main PID: 191933 (kubelet)
      Tasks: 1 (limit: 22945)
     Memory: 156.0K
        CPU: 1ms
     CGroup: /system.slice/kubelet.service
             └─191933 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf -->

Aug 04 14:56:47 k8s-master0 systemd[1]: Started kubelet: The Kubernetes Node Agent.
[root@k8s-master0 ~]# 
[root@k8s-master0 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
    Drop-In: /usr/lib/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: activating (auto-restart) (Result: exit-code) since Thu 2022-08-04 14:57:38 UTC; 6s ago
       Docs: https://kubernetes.io/docs/
    Process: 191974 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_K>
   Main PID: 191974 (code=exited, status=255/EXCEPTION)
        CPU: 115ms
[root@k8s-master0 ~]#


2. Bootstrap the cluster with kubeadm

1. Pull the images every machine needs

[root@k8s-master0 ~]# sudo tee ./images.sh <<-'EOF'
> #!/bin/bash
> images=(
> kube-apiserver:v1.20.9
> kube-proxy:v1.20.9
> kube-controller-manager:v1.20.9
> kube-scheduler:v1.20.9
> coredns:1.7.0
> etcd:3.4.13-0
> pause:3.2
> )
> for imageName in ${images[@]} ; do
> docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
> done
> EOF
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
[root@k8s-master0 ~]# ll
total 4
-rw-r--r--. 1 root root 270 Aug  4 15:01 images.sh
[root@k8s-master0 ~]#
[root@k8s-master0 ~]# chmod +x ./images.sh && ./images.sh
v1.20.9: Pulling from lfy_k8s_images/kube-apiserver
b49b96595fd4: Pull complete 
95d8d2e6184e: Pull complete 
aa4423369611: Pull complete
[root@k8s-master0 ~]# docker images
REPOSITORY                                                                 TAG        IMAGE ID       CREATED         SIZE
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-proxy                v1.20.9    8dbf9a6aa186   12 months ago   99.7MB
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-scheduler            v1.20.9    295014c114b3   12 months ago   47.3MB
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-apiserver            v1.20.9    0d0d57e4f64c   12 months ago   122MB
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-controller-manager   v1.20.9    eb07fd4ad3b4   12 months ago   116MB
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/etcd                      3.4.13-0   0369cf4303ff   23 months ago   253MB
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/coredns                   1.7.0      bfe3a36ebd25   2 years ago     45.2MB
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/pause                     3.2        80d28bedfe5d   2 years ago     683kB
[root@k8s-master0 ~]#

2. Initialize the control-plane (master) node

#Add the master/cluster-endpoint host mappings on every machine; change the IPs and names below to your own
echo "172.31.0.2 cluster-endpoint" >> /etc/hosts
echo "172.31.10.2 master1" >> /etc/hosts
echo "172.31.20.2 master2" >> /etc/hosts
echo "172.31.30.2 node0" >> /etc/hosts
echo "172.31.40.2 node1" >> /etc/hosts
echo "172.31.50.2 node2" >> /etc/hosts

[root@k8s-node2 ~]# echo "172.31.0.2 cluster-endpoint" >> /etc/hosts
[root@k8s-node2 ~]# echo "172.31.10.2 master1" >> /etc/hosts
[root@k8s-node2 ~]# echo "172.31.20.2 master2" >> /etc/hosts
[root@k8s-node2 ~]# echo "172.31.30.2 node0" >> /etc/hosts
[root@k8s-node2 ~]# echo "172.31.40.2 node1" >> /etc/hosts
[root@k8s-node2 ~]# echo "172.31.50.2 node2" >> /etc/hosts
[root@k8s-node2 ~]# ping cluster-endpoint
PING cluster-endpoint (172.31.0.2) 56(84) bytes of data.
64 bytes from cluster-endpoint (172.31.0.2): icmp_seq=1 ttl=64 time=51.1 ms
64 bytes from cluster-endpoint (172.31.0.2): icmp_seq=2 ttl=64 time=49.9 ms
64 bytes from cluster-endpoint (172.31.0.2): icmp_seq=3 ttl=64 time=49.9 ms
^C
--- cluster-endpoint ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 49.865/50.289/51.094/0.569 ms
[root@k8s-node2 ~]# 


#Initialize the master node
kubeadm init \
--apiserver-advertise-address=172.31.0.2 \
--control-plane-endpoint=master0 \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16

#Make sure none of the network ranges (node network, service CIDR, pod CIDR) overlap

Preflight failures (firewalld and ip_forward):

[root@k8s-master0 ~]# kubeadm init \
> --apiserver-advertise-address=172.31.0.2 \
> --control-plane-endpoint=master0 \
> --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
> --kubernetes-version v1.20.9 \
> --service-cidr=10.96.0.0/16 \
> --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.20.9
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-master0 ~]# 
[root@k8s-master0 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
     Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2022-08-05 02:19:35 UTC; 9min ago
       Docs: man:firewalld(1)
   Main PID: 223592 (firewalld)
      Tasks: 2 (limit: 22945)
     Memory: 25.7M
        CPU: 819ms
     CGroup: /system.slice/firewalld.service
             └─223592 /usr/bin/python3 -s /usr/sbin/firewalld --nofork --nopid

Aug 05 02:19:35 k8s-master0 systemd[1]: Starting firewalld - dynamic firewall daemon...
Aug 05 02:19:35 k8s-master0 systemd[1]: Started firewalld - dynamic firewall daemon.
Aug 05 02:19:36 k8s-master0 firewalld[223592]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables:>
[root@k8s-master0 ~]#
[root@k8s-master0 ~]# systemctl stop firewalld
[root@k8s-master0 ~]# systemctl status firewalld
○ firewalld.service - firewalld - dynamic firewall daemon
     Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Fri 2022-08-05 02:29:35 UTC; 9s ago
   Duration: 9min 59.467s
       Docs: man:firewalld(1)
    Process: 223592 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=0/SUCCESS)
   Main PID: 223592 (code=exited, status=0/SUCCESS)
        CPU: 943ms

Aug 05 02:19:35 k8s-master0 systemd[1]: Starting firewalld - dynamic firewall daemon...
Aug 05 02:19:35 k8s-master0 systemd[1]: Started firewalld - dynamic firewall daemon.
Aug 05 02:19:36 k8s-master0 firewalld[223592]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables:>
Aug 05 02:29:35 k8s-master0 systemd[1]: Stopping firewalld - dynamic firewall daemon...
Aug 05 02:29:35 k8s-master0 systemd[1]: firewalld.service: Deactivated successfully.
Aug 05 02:29:35 k8s-master0 systemd[1]: Stopped firewalld - dynamic firewall daemon.
[root@k8s-master0 ~]#
[root@k8s-master0 ~]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8s-master0 ~]# 
[root@k8s-master0 ~]# 
[root@k8s-master0 ~]# systemctl status firewalld
○ firewalld.service - firewalld - dynamic firewall daemon
     Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
     Active: inactive (dead) since Fri 2022-08-05 02:29:35 UTC; 42s ago
   Duration: 9min 59.467s
       Docs: man:firewalld(1)
   Main PID: 223592 (code=exited, status=0/SUCCESS)
        CPU: 943ms

Aug 05 02:19:35 k8s-master0 systemd[1]: Starting firewalld - dynamic firewall daemon...
Aug 05 02:19:35 k8s-master0 systemd[1]: Started firewalld - dynamic firewall daemon.
Aug 05 02:19:36 k8s-master0 firewalld[223592]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables:>
Aug 05 02:29:35 k8s-master0 systemd[1]: Stopping firewalld - dynamic firewall daemon...
Aug 05 02:29:35 k8s-master0 systemd[1]: firewalld.service: Deactivated successfully.
Aug 05 02:29:35 k8s-master0 systemd[1]: Stopped firewalld - dynamic firewall daemon.
[root@k8s-master0 ~]#
[root@k8s-master0 ~]# kubeadm init \
> --apiserver-advertise-address=172.31.0.2 \
> --control-plane-endpoint=master0 \
> --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
> --kubernetes-version v1.20.9 \
> --service-cidr=10.96.0.0/16 \
> --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.20.9
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-master0 ~]#
[root@k8s-master0 ~]# vim /proc/sys/net/ipv4/ip_forward
[root@k8s-master0 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@k8s-master0 ~]#
[root@k8s-master0 ~]# kubeadm init \
> --apiserver-advertise-address=172.31.0.2 \
> --control-plane-endpoint=master0 \
> --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
> --kubernetes-version v1.20.9 \
> --service-cidr=10.96.0.0/16 \
> --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.20.9
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master0] and IPs [10.96.0.1 172.31.0.2]


Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join master0:6443 --token pnf6kl.c61wofh5hxw9320u \
    --discovery-token-ca-cert-hash sha256:7002b790a43be0421dde1c051f56620d7fee87f1f98316f54eb718288f88d4f8 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join master0:6443 --token pnf6kl.c61wofh5hxw9320u \
    --discovery-token-ca-cert-hash sha256:7002b790a43be0421dde1c051f56620d7fee87f1f98316f54eb718288f88d4f8 
[root@k8s-master0 ~]#


1. Set up .kube/config

Copy the commands from the init output above:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

2. Install the network add-on (Calico)

curl https://docs.projectcalico.org/manifests/calico.yaml -O

kubectl apply -f calico.yaml
[root@k8s-master0 ~]# kubectl apply -f calico.yaml 
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
error: unable to recognize "calico.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1"
[root@k8s-master0 ~]#
[root@k8s-master0 ~]# kubectl api-versions | grep polic
policy/v1beta1
[root@k8s-master0 ~]# 
[root@k8s-master0 ~]# 
[root@k8s-master0 ~]# 
[root@k8s-master0 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.9", GitCommit:"7a576bc3935a6b555e33346fd73ad77c925e9e4a", GitTreeState:"clean", BuildDate:"2021-07-15T21:01:38Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.9", GitCommit:"7a576bc3935a6b555e33346fd73ad77c925e9e4a", GitTreeState:"clean", BuildDate:"2021-07-15T20:56:38Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master0 ~]#

Handling the error: the latest Calico manifest uses the policy/v1 API for its PodDisruptionBudget, which Kubernetes 1.20 does not serve (kubectl api-versions above only shows policy/v1beta1), so download the Calico v3.20 manifest instead:

[root@k8s-master0 ~]# curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  198k  100  198k    0     0   121k      0  0:00:01  0:00:01 --:--:--  121k
[root@k8s-master0 ~]#
[root@k8s-master0 ~]# kubectl apply -f calico.yaml 
configmap/calico-config unchanged
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers configured
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node configured
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node configured
serviceaccount/calico-node unchanged
deployment.apps/calico-kube-controllers configured
serviceaccount/calico-kube-controllers unchanged
poddisruptionbudget.policy/calico-kube-controllers created
[root@k8s-master0 ~]#
#View all nodes in the cluster
kubectl get nodes

#Create resources in the cluster from a config file
kubectl apply -f xxxx.yaml

#See which applications the cluster is running
# docker ps (in Docker) roughly corresponds to kubectl get pods -A (in k8s)
# a running application is called a container in Docker and a Pod in k8s
kubectl get pods -A

4. Join the worker nodes

Then you can join any number of worker nodes by running the following on each as root:

[root@k8s-node2 ~]# kubeadm join master0:6443 --token pnf6kl.c61wofh5hxw9320u \
>     --discovery-token-ca-cert-hash sha256:7002b790a43be0421dde1c051f56620d7fee87f1f98316f54eb718288f88d4f8
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node2 ~]#

Generate a new join token (the one printed by kubeadm init expires after 24 hours):

kubeadm token create --print-join-command

For a highly available deployment, this is also the step where additional master nodes join: run the control-plane version of the join command (the one with --control-plane) on them, as sketched below.
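
A sketch of what that looks like; the <token>, <hash>, and <certificate-key> values are placeholders whose real values come from the commands below, not from this article:

# On the existing master: re-upload the control-plane certificates and print the certificate key
kubeadm init phase upload-certs --upload-certs

# On the existing master: print a fresh worker join command (token and hash)
kubeadm token create --print-join-command

# On the new master: take that join command and append the control-plane flags
kubeadm join master0:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>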








