Kubernetes 1.13.0 Highly Available Cluster Installation
I. Environment
CentOS: 7.5.1804
Kernel: 3.10.0-862.el7.x86_64
keepalived:1.3.5-6.el7.x86_64
haproxy:1.5.18-8.el7.x86_64
docker:18.06
kubernetes:1.13.0
master:192.168.20.230\192.168.20.231\192.168.20.232
VIP:192.168.20.236
node:192.168.20.233\192.168.20.234\192.168.20.235
etcd
etcd-3.2.22-1.el7.x86_64
IP:192.168.20.230\192.168.20.231\192.168.20.232
II. Preparation
1. Stop and disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
2. Turn off swap:
swapoff -a
3. Disable SELinux:
setenforce 0
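To make the swap and SELinux changes survive a reboot, a minimal companion sketch (paths are the CentOS 7 defaults):
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
sed -i '/ swap / s/^/#/' /etc/fstab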
4. Install keepalived + haproxy
yum install keepalived (then edit /etc/keepalived/keepalived.conf):
global_defs {
router_id lb-master    # change to lb-backup on the backup servers
}
vrrp_instance VI-kube-master {
state MASTER    # change to BACKUP on the backup servers
priority 110
dont_track_primary
interface ens160    # the NIC name on your hosts
virtual_router_id 51
advert_int 3
virtual_ipaddress {
192.168.20.236/24
}
}
yum install haproxy (then edit /etc/haproxy/haproxy.cfg):
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
nbproc 1
defaults
log global
timeout connect 5000
timeout client 50000
timeout server 50000
listen kube-master
bind 0.0.0.0:8443
mode tcp
option tcplog
balance roundrobin
server k8smaster01 192.168.20.230:6443 check inter 10000 fall 2 rise 2 weight 1
server k8smaster02 192.168.20.231:6443 check inter 10000 fall 2 rise 2 weight 1
server k8smaster03 192.168.20.232:6443 check inter 10000 fall 2 rise 2 weight 1
systemctl daemon-reload
systemctl start keepalived
systemctl start haproxy
Repeat the installation above on each of 192.168.20.230, 192.168.20.231, and 192.168.20.232.
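To confirm keepalived is working, a quick check (ens160 is the interface configured above): the VIP should be bound on the active master and answer ping from any machine, and stopping keepalived on that master should move the VIP to a backup within a few advert intervals.
ip addr show ens160 | grep 192.168.20.236
ping -c 3 192.168.20.236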
5. Install cfssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
export PATH=/usr/local/bin:$PATH
III. Master installation
1. Create the CA configuration files
mkdir /root/ssl
cd /root/ssl
cfssl print-defaults config > config.json
cfssl print-defaults csr > csr.json
Use these defaults as templates for ca-config.json and ca-csr.json, changing the certificate expiry to 87600h.
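A minimal sketch of the two files under that assumption (the names fields are illustrative; adjust them to your organization):
****ca-config.json****
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "87600h"
      }
    }
  }
}
****ca-csr.json****
{
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "k8s", "OU": "System" }]
}
The profile name "kubernetes" must match the -profile flag used in the gencert commands below.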
2. Create the CA certificate and private key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
3. Create the kubernetes certificate and private key
Edit the signing request file kubernetes-csr.json.
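A sketch of kubernetes-csr.json for this cluster; the hosts list must cover every address the certificate is served from (the three master/etcd IPs, the VIP, the first service-network IP, and the in-cluster DNS names), while the names fields are illustrative:
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.20.230",
    "192.168.20.231",
    "192.168.20.232",
    "192.168.20.236",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "k8s", "OU": "System" }]
}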
Generate the certificate and private key:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
4. Create the admin certificate and private key
Edit the signing request file admin-csr.json.
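A sketch of admin-csr.json; the O value system:masters is what grants this certificate cluster-admin rights under RBAC (the other names fields are illustrative):
{
  "CN": "admin",
  "hosts": [],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "system:masters", "OU": "System" }]
}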
Generate the certificate and private key:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
5. Create the kube-proxy certificate and private key
Edit the signing request file kube-proxy-csr.json.
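A sketch of kube-proxy-csr.json; the CN system:kube-proxy maps to the built-in system:node-proxier role (the other names fields are illustrative):
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "k8s", "OU": "System" }]
}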
Generate the certificate and private key:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
6. Copy all the certificates and private keys generated above to /etc/kubernetes/ssl
mkdir -p /etc/kubernetes/ssl
cp *.pem /etc/kubernetes/ssl
7. Create the TLS bootstrap token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
8. Create the kubelet bootstrapping kubeconfig file
cd /etc/kubernetes
export KUBE_APISERVER="https://192.168.20.236:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
9. Create the kube-proxy kubeconfig file
export KUBE_APISERVER="https://192.168.20.236:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kube-proxy \
--client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
--client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
10. Distribute the two kubeconfig files above to the /etc/kubernetes directory
cp bootstrap.kubeconfig kube-proxy.kubeconfig /etc/kubernetes/
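The nodes need these two files as well (the kubelet and kube-proxy configurations below reference them); a sketch, assuming /etc/kubernetes exists on each node and root SSH access:
for n in 192.168.20.233 192.168.20.234 192.168.20.235; do
  scp /etc/kubernetes/bootstrap.kubeconfig /etc/kubernetes/kube-proxy.kubeconfig root@$n:/etc/kubernetes/
done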
11. Install the etcd HA cluster
yum install etcd
Create the etcd systemd unit file; replace the XXXX placeholders with the IPs of the three machines you are deploying to:
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd \
--name ${ETCD_NAME} \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls ${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster infra1=https://XXXX:2380,infra2=https://XXXX:2380,infra3=https://XXXX:2380 \
--initial-cluster-state new \
--data-dir=${ETCD_DATA_DIR}
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Environment file /etc/etcd/etcd.conf; set the XXXX IPs to each etcd host's own address:
# [member]
ETCD_NAME=infra1
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://XXXX:2380"
ETCD_LISTEN_CLIENT_URLS="https://XXXX:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://XXXX:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://XXXX:2379"
Start the etcd service:
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
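Once all three members are up, cluster health can be checked with etcdctl, using the same certificate flags as elsewhere in this guide:
etcdctl --endpoints=https://192.168.20.230:2379,https://192.168.20.231:2379,https://192.168.20.232:2379 \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
cluster-health
All three members should report "healthy".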
12. Install kubectl, kube-apiserver, kube-controller-manager, and kube-scheduler
Download kubernetes-server-linux-amd64.tar.gz from github.com/kubernetes/kubernetes, extract it, and copy kubectl, kube-apiserver, kube-controller-manager, and kube-scheduler from kubernetes/server/bin/ to /usr/bin.
Create the kubectl kubeconfig file:
export KUBE_APISERVER="https://192.168.20.236:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER}
# Set client authentication parameters
kubectl config set-credentials admin \
--client-certificate=/etc/kubernetes/ssl/admin.pem \
--embed-certs=true \
--client-key=/etc/kubernetes/ssl/admin-key.pem
# Set context parameters
kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=admin
# Set the default context
kubectl config use-context kubernetes
Create the /etc/kubernetes/config file:
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"
# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://test-001.jimmysong.io:8080"
KUBE_MASTER="--master=http://192.168.20.236:8080"
Create /usr/lib/systemd/system/kube-apiserver.service and /etc/kubernetes/apiserver (the unit's EnvironmentFile), then start the service.
****kube-apiserver.service****
[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_ETCD_SERVERS \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBELET_PORT \
$KUBE_ALLOW_PRIV \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
****kube-apiserver****
###
## kubernetes system config
##
## The following values are used to configure the kube-apiserver
##
#
## The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=test-001.jimmysong.io"
KUBE_API_ADDRESS="--advertise-address=192.168.20.236 --bind-address=192.168.20.236 --insecure-bind-address=192.168.20.236"
#
## The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"
#
## Port minions listen on
#KUBELET_PORT="--kubelet-port=10250"
#
## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.20.230:2379,https://192.168.20.231:2379,https://192.168.20.232:2379"
#
## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
#
## default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
#
## Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem --enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/lib/audit.log --event-ttl=1h"
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
Create /usr/lib/systemd/system/kube-controller-manager.service and /etc/kubernetes/controller-manager, then start the service.
****kube-controller-manager.service****
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
****kube-controller-manager****
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --leader-elect=true"
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
Create /usr/lib/systemd/system/kube-scheduler.service and /etc/kubernetes/scheduler, then start the service.
****kube-scheduler.service****
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
****kube-scheduler****
###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1"
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler
13. Install flannel
yum install -y flannel
Configure /usr/lib/systemd/system/flanneld.service and /etc/sysconfig/flanneld:
****flanneld.service****
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start \
-etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
-etcd-prefix=${FLANNEL_ETCD_PREFIX} \
$FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
****flanneld****
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://192.168.20.230:2379,https://192.168.20.231:2379,https://192.168.20.232:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"
# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"
Configure the flannel network key in etcd:
etcdctl --endpoints=https://192.168.20.230:2379,https://192.168.20.231:2379,https://192.168.20.232:2379 \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
mkdir /kube-centos/network
etcdctl --endpoints=https://192.168.20.230:2379,https://192.168.20.231:2379,https://192.168.20.232:2379 \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
mk /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld
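To verify flannel, check that the vxlan interface is up and that a subnet lease was written to etcd (flannel.1 is the default interface name for the vxlan backend):
ip -4 addr show flannel.1
etcdctl --endpoints=https://192.168.20.230:2379 \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
ls /kube-centos/network/subnets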
Create the kubelet bootstrap role binding and grant nodes their permissions:
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
kubectl create clusterrolebinding kubelet-nodes \
--clusterrole=system:node \
--group=system:nodes
IV. Node installation
1. Install Docker 18.06.
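A sketch of one common way to install it on CentOS 7 from the upstream docker-ce repository (the exact package version is an assumption; pick whichever 18.06 build yum lists):
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker
For pods to use flannel's subnet, docker.service also needs EnvironmentFile=-/run/flannel/docker and $DOCKER_NETWORK_OPTIONS on its ExecStart line, matching the mk-docker-opts.sh output configured in the flanneld unit above.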
2. Copy the contents of the master's /etc/kubernetes/ssl directory to the same directory on each node.
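A sketch, run on each node, assuming root SSH access to master 192.168.20.230:
mkdir -p /etc/kubernetes/ssl
scp "root@192.168.20.230:/etc/kubernetes/ssl/*.pem" /etc/kubernetes/ssl/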
3. Download kubernetes-node-linux-amd64.tar.gz (kubelet and kube-proxy ship in the node tarball, not the client one), extract it, and copy kubelet and kube-proxy to /usr/bin.
4. Configure /usr/lib/systemd/system/kubelet.service and /etc/kubernetes/kubelet, then start the service. The example below uses node 192.168.20.233; adjust the addresses for each node.
****kubelet.service****
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_POD_INFRA_CONTAINER \
$KUBELET_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
****kubelet****
###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.20.233"
#
## The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.20.233"
#
## location of the api-server
## COMMENT THIS ON KUBERNETES 1.8+ (the --api-servers flag no longer exists)
#KUBELET_API_SERVER="--api-servers=http://192.168.20.233:8080"
#
## pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=jimmysong/pause-amd64:3.0"
#
## Add your own!
KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
When a node joins the cluster for the first time, its certificate signing request must be approved on a master:
# kubectl get csr
NAME        AGE       REQUESTOR           CONDITION
csr-2b308   4m        kubelet-bootstrap   Pending
# kubectl certificate approve csr-2b308
certificatesigningrequest "csr-2b308" approved
5. Configure and start kube-proxy.
yum install -y conntrack-tools
Configure /usr/lib/systemd/system/kube-proxy.service and /etc/kubernetes/proxy, then start the service.
****kube-proxy.service****
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
****kube-proxy****
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.20.233 --hostname-override=192.168.20.233 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
V. Verify the cluster
1. Shut down one master and check that the VIP 192.168.20.236 still answers ping.
2. Run kubectl get componentstatuses and confirm every component is healthy:
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
3. Run kubectl cluster-info to view the cluster endpoints.
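4. On a master, confirm that every node has registered and is Ready (nodes appear once their CSRs are approved):
kubectl get nodes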