Installing CentOS in a VM and Setting Up K8s with Xshell

Posted by 隔壁老汪 on 2022/06/23 23:47:28
[Abstract] Install CentOS 7 in a VM, prepare the network environment (one master at 192.168.3.216 and two nodes at 192.168.3.217 and 192.168.3.219), and set up a Kubernetes cluster.

Install CentOS 7 in a VM

Preparation before installing the k8s cluster:
Network environment:

Node     Hostname      IP
Master   k8s_master    192.168.3.216
Node1    k8s_client1   192.168.3.217
Node2    k8s_client2   192.168.3.219

CentOS 7 version:
  [root@k8s_master ~]# cat /etc/redhat-release
  CentOS Linux release 7.4.1708 (Core)

Disable firewalld:
  systemctl stop firewalld
  systemctl disable firewalld

Install the base packages on all three hosts (the ntp package provides the ntpd service):
  [root@k8s_master ~]# yum -y update
  [root@k8s_master ~]# yum -y install net-tools wget vim ntp
  [root@k8s_master ~]# systemctl enable ntpd
  [root@k8s_master ~]# systemctl start ntpd
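
Optionally, once ntpd has been running for a minute, confirm that time synchronization is working; ntpq -p lists the configured NTP peers, and at least one entry should show a reachable server:
  [root@k8s_master ~]# ntpq -p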

Set the hostname on each of the three hosts:
  Master
  hostnamectl --static set-hostname k8s_master
  Node1
  hostnamectl --static set-hostname k8s_client1
  Node2
  hostnamectl --static set-hostname k8s_client2

Add the host entries to /etc/hosts; run the following on each of the three hosts (appended with >> so the existing localhost entries are preserved):

  cat <<EOF >> /etc/hosts
  192.168.3.217 k8s_client1
  192.168.3.219 k8s_client2
  192.168.3.216 k8s_master
  EOF
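
A quick way to confirm that name resolution works on every host (optional check; each hostname should answer from the IP listed in the table above):
  for h in k8s_master k8s_client1 k8s_client2; do ping -c 1 $h; done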

Deploy the Master:
  Install the etcd service:
  [root@k8s_master ~]# yum -y install etcd

Edit the configuration file /etc/etcd/etcd.conf:
  [root@k8s_master ~]# cat /etc/etcd/etcd.conf | grep -v "^#"
  ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
  ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
  ETCD_NAME="master"
  ETCD_ADVERTISE_CLIENT_URLS="http://k8s_master:2379,http://k8s_master:4001"

  Enable etcd at boot, start it, and verify its status:
  [root@k8s_master ~]#systemctl enable etcd
  [root@k8s_master ~]#systemctl start etcd

  Check etcd health:
  [root@k8s_master ~]# etcdctl -C http://k8s_master:4001 cluster-health
  member 8e9e05c52164694d is healthy: got healthy result from http://k8s_master:2379
  cluster is healthy
  [root@k8s_master ~]# etcdctl -C http://k8s_master:2379 cluster-health
  member 8e9e05c52164694d is healthy: got healthy result from http://k8s_master:2379
  cluster is healthy
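
As an extra smoke test (optional, not part of the original steps), write and read back a throwaway key; the /test path here is arbitrary, and the get should print the value "world" that was just stored:
  [root@k8s_master ~]# etcdctl set /test/hello world
  [root@k8s_master ~]# etcdctl get /test/hello
  [root@k8s_master ~]# etcdctl rm /test/hello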

Install the Docker service:
  [root@k8s_master ~]# yum -y install docker
  Enable it at boot and start the service:
  [root@k8s_master ~]# systemctl enable docker
  [root@k8s_master ~]# systemctl start docker
Check the Docker version:
[root@k8s_master ~]# docker version
Client:
Version: 1.12.6
API version: 1.24
Package version: docker-1.12.6-71.git3e8e77d.el7.centos.1.x86_64
Go version: go1.8.3
Git commit: 3e8e77d/1.12.6
Built: Tue Jan 30 09:17:00 2018
OS/Arch: linux/amd64

Server:
Version: 1.12.6
API version: 1.24
Package version: docker-1.12.6-71.git3e8e77d.el7.centos.1.x86_64
Go version: go1.8.3
Git commit: 3e8e77d/1.12.6
Built: Tue Jan 30 09:17:00 2018
OS/Arch: linux/amd64
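
To confirm the Docker daemon can actually pull and run containers (optional; assumes the host can reach Docker Hub):
  [root@k8s_master ~]# docker run --rm hello-world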

Install Kubernetes:
  [root@k8s_master ~]# yum install kubernetes

The following components need to run on the Kubernetes master:
    Kubernetes API Server
    Kubernetes Controller Manager
    Kubernetes Scheduler

Edit the apiserver configuration file:
[root@k8s_master ~]# cat /etc/kubernetes/apiserver | grep -v "^#"
  KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
  KUBE_API_PORT="--port=8080"
  KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.3.216:2379"
  KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
  KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
  KUBE_API_ARGS=""

Edit the common config file:
[root@k8s_master ~]# cat /etc/kubernetes/config | grep -v "^#"
  KUBE_LOGTOSTDERR="--logtostderr=true"
  KUBE_LOG_LEVEL="--v=0"
  KUBE_ALLOW_PRIV="--allow-privileged=false"
  KUBE_MASTER="--master=http://192.168.3.216:8080"

Enable the master components at boot and start them:
  [root@k8s_master ~]# systemctl enable kube-apiserver kube-controller-manager kube-scheduler
  [root@k8s_master ~]# systemctl start kube-apiserver kube-controller-manager kube-scheduler
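
The master components can now be sanity-checked through the insecure API port (optional check; scheduler, controller-manager, and etcd-0 should all report Healthy):
  [root@k8s_master ~]# kubectl -s http://192.168.3.216:8080 get componentstatuses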

Check the listening ports:
[root@k8s_master ~]# netstat -tnlp

  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address      Foreign Address  State   PID/Program name
  tcp        0      0 127.0.0.1:2380     0.0.0.0:*        LISTEN  973/etcd
  tcp        0      0 0.0.0.0:22         0.0.0.0:*        LISTEN  970/sshd
  tcp        0      0 127.0.0.1:25       0.0.0.0:*        LISTEN  1184/master
  tcp6       0      0 :::6443            :::*             LISTEN  1253/kube-apiserver
  tcp6       0      0 :::2379            :::*             LISTEN  973/etcd
  tcp6       0      0 :::10251           :::*             LISTEN  675/kube-scheduler
  tcp6       0      0 :::10252           :::*             LISTEN  674/kube-controller
  tcp6       0      0 :::8080            :::*             LISTEN  1253/kube-apiserver
  tcp6       0      0 :::22              :::*             LISTEN  970/sshd
  tcp6       0      0 ::1:25             :::*             LISTEN  1184/master
  tcp6       0      0 :::4001            :::*             LISTEN  973/etcd
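
Since port 8080 is listening, the API server's health endpoint can also be probed directly (optional check); a healthy apiserver simply replies with "ok":
  [root@k8s_master ~]# curl http://192.168.3.216:8080/healthz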

Deploy the Nodes:
Install Docker
   (same procedure as on the Master)
Install Kubernetes
  (same procedure as on the Master)
Configure and start Kubernetes
  The following components need to run on each node:
   kubelet and kube-proxy

Make the following configuration on the Node hosts:
config:
[root@k8s_client1 ~]# cat /etc/kubernetes/config | grep -v "^#"
  KUBE_LOGTOSTDERR="--logtostderr=true"
  KUBE_LOG_LEVEL="--v=0"
  KUBE_ALLOW_PRIV="--allow-privileged=false"
  KUBE_MASTER="--master=http://192.168.3.216:8080"

kubelet:
[root@k8s_client1 ~]# cat /etc/kubernetes/kubelet | grep -v "^#"
  KUBELET_ADDRESS="--address=0.0.0.0"
  KUBELET_HOSTNAME="--hostname-override=192.168.3.217"
  KUBELET_API_SERVER="--api-servers=http://192.168.3.216:8080"
  KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
  KUBELET_ARGS=""

(On k8s_client2, set --hostname-override=192.168.3.219 instead.)

Enable the node services at boot and start them:
  [root@k8s_client1 ~]# systemctl enable kubelet kube-proxy
  [root@k8s_client1 ~]# systemctl start kubelet kube-proxy

Check the listening ports:
[root@k8s_client1 ~]# netstat -ntlp

  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address      Foreign Address  State   PID/Program name
  tcp        0      0 0.0.0.0:22         0.0.0.0:*        LISTEN  942/sshd
  tcp        0      0 127.0.0.1:25       0.0.0.0:*        LISTEN  2258/master
  tcp        0      0 127.0.0.1:10248    0.0.0.0:*        LISTEN  17932/kubelet
  tcp        0      0 127.0.0.1:10249    0.0.0.0:*        LISTEN  17728/kube-proxy
  tcp6       0      0 :::10250           :::*             LISTEN  17932/kubelet
  tcp6       0      0 :::10255           :::*             LISTEN  17932/kubelet
  tcp6       0      0 :::22              :::*             LISTEN  942/sshd
  tcp6       0      0 ::1:25             :::*             LISTEN  2258/master
  tcp6       0      0 :::4194            :::*             LISTEN  17932/kubelet
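
The kubelet's local health endpoint (port 10248 above) can be probed as an extra check; it should reply with "ok":
  [root@k8s_client1 ~]# curl http://127.0.0.1:10248/healthz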

On the Master, list the cluster nodes and their status:
  [root@k8s_master ~]# kubectl get node

  NAME            STATUS     AGE
  127.0.0.1       NotReady   1d
  192.168.3.217   Ready      1d
  192.168.3.219   Ready      1d

  [root@k8s_master ~]# kubectl -s http://k8s_master:8080 get node

  NAME            STATUS     AGE
  127.0.0.1       NotReady   1d
  192.168.3.217   Ready      1d
  192.168.3.219   Ready      1d
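
If the stale 127.0.0.1 entry is unwanted, a node record can be deleted by name (optional; this only removes the API object):
  [root@k8s_master ~]# kubectl delete node 127.0.0.1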

The Kubernetes cluster itself is now up; flannel still needs to be installed.
flannel is an overlay-network tool from CoreOS that solves cross-host communication for Docker clusters. The main idea: reserve a network segment in advance, give each host a slice of it, and assign every container a different IP from that slice, so that all containers believe they are on one directly connected network; underneath, packets are encapsulated and forwarded over UDP, VXLAN, and similar backends.

Install flannel on the Master and on every Node:
  [root@k8s_master ~]# yum install flannel

flannel configuration:
  On the Master and on every Node, edit /etc/sysconfig/flanneld.

Master:
  [root@k8s_master ~]# cat /etc/sysconfig/flanneld | grep -v "^#"
  FLANNEL_ETCD_ENDPOINTS="http://192.168.3.216:2379"
  FLANNEL_ETCD_PREFIX="/atomic.io/network"

Node:
  [root@k8s_client1 ~]# cat /etc/sysconfig/flanneld | grep -v "^#"
  FLANNEL_ETCD_ENDPOINTS="http://192.168.3.216:2379"
  FLANNEL_ETCD_PREFIX="/atomic.io/network"

Add the network configuration to etcd (the key must match FLANNEL_ETCD_PREFIX above; the range matches the 10.8.x.x flannel addresses shown below):
[root@k8s_master ~]# etcdctl mk /atomic.io/network/config '{"Network":"10.8.0.0/16"}'
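
The stored value can be read back to confirm it (optional check):
  [root@k8s_master ~]# etcdctl get /atomic.io/network/config
  {"Network":"10.8.0.0/16"}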

Enable flanneld at boot and start it on the Master and on every Node:
[root@k8s_master ~]# systemctl enable flanneld
[root@k8s_master ~]# systemctl start flanneld

Restart the dependent services on the Master and on every Node:
Master:

for SERVICES in docker kube-apiserver kube-controller-manager kube-scheduler; do systemctl restart $SERVICES ; done

Node:
  [root@k8s_client1 ~]# systemctl restart kube-proxy kubelet docker
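
Once flanneld is running, each host is leased its own subnet out of the configured pool; this can be checked on any host and in etcd (optional):
  [root@k8s_client1 ~]# cat /run/flannel/subnet.env
  [root@k8s_master ~]# etcdctl ls /atomic.io/network/subnets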

Check the flannel network:
  Master node:
[root@k8s_master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host 
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:50:56:98:3b:d4 brd ff:ff:ff:ff:ff:ff
inet 192.168.3.216/24 brd 192.168.3.255 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe98:3bd4/64 scope link 
valid_lft forever preferred_lft forever
3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
link/none 
inet 10.8.57.0/16 scope global flannel0
valid_lft forever preferred_lft forever
inet6 fe80::3578:6e81:8dc9:ed82/64 scope link flags 800 
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
link/ether 02:42:8b:7c:fd:8d brd ff:ff:ff:ff:ff:ff
inet 10.8.57.1/24 scope global docker0
valid_lft forever preferred_lft forever

  Node:
[root@k8s_client1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host 
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:50:56:98:65:e0 brd ff:ff:ff:ff:ff:ff
inet 192.168.3.217/24 brd 192.168.3.255 scope global ens160
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe98:65e0/64 scope link 
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
link/ether 02:42:23:4b:85:6f brd ff:ff:ff:ff:ff:ff
inet 10.8.6.1/24 scope global docker0
valid_lft forever preferred_lft forever
9: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
link/none 
inet 10.8.6.0/16 scope global flannel0
valid_lft forever preferred_lft forever
inet6 fe80::827:f63e:34ee:1f8e/64 scope link flags 800 
valid_lft forever preferred_lft forever
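
A minimal end-to-end test of the overlay (optional sketch; the container name test-a is arbitrary and assumes the busybox image can be pulled): start a container on one node, note its flannel-assigned address, and ping it from a container on the other node.

  # on k8s_client1: start a container and print its IP (expected to fall in 10.8.6.0/24 here)
  [root@k8s_client1 ~]# docker run -d --name test-a busybox sleep 3600
  [root@k8s_client1 ~]# docker inspect -f '{{.NetworkSettings.IPAddress}}' test-a

  # on the other node: ping that address from inside a container (replace 10.8.6.2 with the IP printed above)
  [root@k8s_client2 ~]# docker run --rm busybox ping -c 3 10.8.6.2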


Source: blog.csdn.net, author 隔壁老瓦; copyright belongs to the original author. For reposting, please contact the author.

Original article: blog.csdn.net/wxb880114/article/details/85229396
