Setting up Docker 18.09.8 + kubelet 1.14.2 on aarch64 (kernel 4.18.0-80.7.2.el7.aarch64)
Preparation:
Firewall settings
Disable SELinux and the firewalld firewall, or open the required ports with firewall rules.
setenforce 0 only disables SELinux temporarily.
To disable it permanently, edit /etc/selinux/config, change SELINUX=enforcing to disabled or permissive, and reboot for the change to take effect.
systemctl stop firewalld; systemctl disable firewalld
Reboot the host: reboot
Routing settings
modprobe br_netfilter loads the module into the kernel.
sysctl -w net.bridge.bridge-nf-call-iptables=1 enables the kernel option for the current boot.
echo "net.bridge.bridge-nf-call-iptables=1" > /etc/sysctl.d/k8s.conf persists the iptables setting.
Disable the swap partition:
swapoff -a disables swap temporarily.
cp -p /etc/fstab /etc/fstab.bak$(date '+%Y%m%d%H%M%S')
sed -i "s/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g" /etc/fstab disables it permanently.
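Before moving on, it is worth confirming that no swap is still active; two quick checks:
# The swap line should show 0 total/used
free -m
# Should list no active swap devices
swapon -s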
Install Docker:
Download the package: wget https://sandbox-experiment-resource-north-4.obs.cn-north-4.myhuaweicloud.com/computing_technician/docker-18.09.8.tgz
Unpack it: tar -zxf docker-18.09.8.tgz
Copy the binaries into place: cp -p docker/* /usr/bin
Create docker.service:
cat >/usr/lib/systemd/system/docker.service <<EOF
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target docker.socket
[Service]
Type=notify
EnvironmentFile=-/run/flannel/docker
WorkingDirectory=/usr/local/bin
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock --selinux-enabled=false --log-opt max-size=1g
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
Start Docker:
systemctl daemon-reload; systemctl restart docker; systemctl enable docker
Check the version: docker version, docker -v, or docker --version
Verify the installation: docker run hello-world
List all containers on this host: docker ps -a
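kubelet and Docker should use the same cgroup driver; this Docker build typically defaults to cgroupfs, which is also kubelet 1.14's default, so normally nothing needs to change, but it can be confirmed before installing Kubernetes:
# Shows e.g. "Cgroup Driver: cgroupfs"
docker info | grep -i "cgroup driver"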
Install Kubernetes
- Configure the Kubernetes yum repository
vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1
- Install the components:
Running yum install -y kubelet kubeadm kubectl kubernetes-cni directly would pull the latest versions, but the Docker and Kubernetes versions must match, so install with explicit version numbers:
yum install -y kubelet-1.14.2 kubeadm-1.14.2 kubectl-1.14.2 kubernetes-cni-1.14.2 --disableexcludes=kubernetes
The packages can also be installed one at a time. yum handles only one transaction at a time, so if a second yum run is blocked by the process lock, delete the lock file and run yum again:
rm -rf /var/run/yum.pid
Verify the installation:
rpm -qa | grep kubelet; rpm -qa | grep kubeadm; rpm -qa | grep kubectl; rpm -qa | grep kubernetes-cni
Enable kubelet at boot: systemctl enable kubelet
- Prepare the Kubernetes images
List the images required for node initialization: kubeadm config images list
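For version 1.14.2 the list should correspond to the images downloaded and re-tagged below, roughly:
k8s.gcr.io/kube-apiserver:v1.14.2
k8s.gcr.io/kube-controller-manager:v1.14.2
k8s.gcr.io/kube-scheduler:v1.14.2
k8s.gcr.io/kube-proxy:v1.14.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1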
Download the images to the current server with the following commands:
wget https://sandbox-experiment-resource-north-4.obs.cn-north-4.myhuaweicloud.com/kunpeng-kubernetes/kube-apiserver-arm64.tar
wget https://sandbox-experiment-resource-north-4.obs.cn-north-4.myhuaweicloud.com/kunpeng-kubernetes/kube-controller-manager-arm64.tar
wget https://sandbox-experiment-resource-north-4.obs.cn-north-4.myhuaweicloud.com/kunpeng-kubernetes/kube-scheduler-arm64.tar
wget https://sandbox-experiment-resource-north-4.obs.cn-north-4.myhuaweicloud.com/kunpeng-kubernetes/kube-proxy-arm64.tar
wget https://sandbox-experiment-resource-north-4.obs.cn-north-4.myhuaweicloud.com/kunpeng-kubernetes/pause-arm64.tar
wget https://sandbox-experiment-resource-north-4.obs.cn-north-4.myhuaweicloud.com/kunpeng-kubernetes/etcd-arm64.tar
wget https://sandbox-experiment-resource-north-4.obs.cn-north-4.myhuaweicloud.com/kunpeng-kubernetes/coredns.tar
Import the images:
docker load < kube-apiserver-arm64.tar
docker load < kube-controller-manager-arm64.tar
docker load < kube-scheduler-arm64.tar
docker load < kube-proxy-arm64.tar
docker load < pause-arm64.tar
docker load < etcd-arm64.tar
docker load < coredns.tar
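After the load commands finish, the imported images should be visible locally; a quick check:
# Lists the freshly loaded images (repository names per the tar files above)
docker images | grep -E "mirrorgooglecontainers|coredns"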
Re-tag the imported images:
docker tag docker.io/mirrorgooglecontainers/kube-apiserver-arm64:v1.14.2 k8s.gcr.io/kube-apiserver:v1.14.2
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager-arm64:v1.14.2 k8s.gcr.io/kube-controller-manager:v1.14.2
docker tag docker.io/mirrorgooglecontainers/kube-scheduler-arm64:v1.14.2 k8s.gcr.io/kube-scheduler:v1.14.2
docker tag docker.io/mirrorgooglecontainers/kube-proxy-arm64:v1.14.2 k8s.gcr.io/kube-proxy:v1.14.2
docker tag docker.io/mirrorgooglecontainers/pause-arm64:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/mirrorgooglecontainers/etcd-arm64:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag docker.io/coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
Remove the original downloaded image tags:
docker rmi docker.io/mirrorgooglecontainers/kube-apiserver-arm64:v1.14.2
docker rmi docker.io/mirrorgooglecontainers/kube-controller-manager-arm64:v1.14.2
docker rmi docker.io/mirrorgooglecontainers/kube-scheduler-arm64:v1.14.2
docker rmi docker.io/mirrorgooglecontainers/kube-proxy-arm64:v1.14.2
docker rmi docker.io/mirrorgooglecontainers/pause-arm64:3.1
docker rmi docker.io/mirrorgooglecontainers/etcd-arm64:3.3.10
docker rmi docker.io/coredns/coredns:1.3.1
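The re-tag and cleanup steps above can also be scripted as one loop; a sketch that assumes exactly the images and versions listed above:
for img in kube-apiserver:v1.14.2 kube-controller-manager:v1.14.2 kube-scheduler:v1.14.2 kube-proxy:v1.14.2 pause:3.1 etcd:3.3.10; do
  name=${img%%:*}; tag=${img##*:}
  # Tag under k8s.gcr.io, then drop the original mirrorgooglecontainers tag
  docker tag docker.io/mirrorgooglecontainers/${name}-arm64:${tag} k8s.gcr.io/${name}:${tag}
  docker rmi docker.io/mirrorgooglecontainers/${name}-arm64:${tag}
done
# coredns lives in a different repository and has no -arm64 suffix
docker tag docker.io/coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi docker.io/coredns/coredns:1.3.1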
Log in to the other node servers and install Docker and Kubernetes there in the same way.
Deploy the cluster:
① Worker nodes do not run the cluster initialization. ② If initialization on the master node reports
"/proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1", set it to 1 with
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables (the file cannot be edited with vim).
③ In a production environment, check whether a proxy is configured before this step, otherwise kubeadm init may time out; remove the proxy settings as follows:
export -n http_proxy
export -n https_proxy
export -n no_proxy
Configure the hosts file
Both the master node and the worker nodes need the hosts file configured.
Run vi /etc/hosts and add the IP address and hostname of every node in the cluster.
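For a small two-node cluster the entries might look like the sketch below; the hostnames and the worker IP are placeholders, while the master IP matches the join command later in this guide:
# example only: replace with your nodes' real IPs and hostnames
192.168.0.175  k8s-master
192.168.0.176  k8s-node1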
If you are rebuilding the cluster, run kubeadm reset first to clear the old Kubernetes settings, then run the initialization command on the master node.
Initialize the master node:
(--pod-network-cidr specifies the IP range available to Kubernetes pods; the Flannel network plugin used later expects 10.244.0.0/16)
kubeadm init --kubernetes-version=1.14.2 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
When initialization succeeds, the output contains two pieces of information used in the following steps: ① the commands to copy the kubeconfig on the master node, and ② the kubeadm join command with the token for adding worker nodes; be sure to save the join command.
Run the following commands (the ones from ① above) to configure kubectl access to the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
Run the following command to list the cluster nodes:
kubectl get node
At this point only the master node is listed:
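Until the network plugin is installed the node usually reports NotReady; the output looks roughly like this (the node name depends on your hostname):
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   1m    v1.14.2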
Join the worker nodes to the cluster:
kubeadm join 192.168.0.175:6443 --token 5oi71p.9qn7kjdhed3c1a5o \
--discovery-token-ca-cert-hash sha256:487cc222efe14a83c70a49f770f34d738b1f02d9fd21735651aa6163c1cf5626
Wait about a minute, then run kubectl get nodes on the master node again.
Check the kubelet service status with systemctl status kubelet.
Add the Flannel network plugin
kubectl apply -f https://gitee.com/mirrors/flannel/raw/master/Documentation/kube-flannel.yml
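To confirm the network plugin came up, check the kube-system pods and the node status; the nodes should switch to Ready once the flannel and coredns pods are Running:
kubectl get pods -n kube-system
kubectl get nodes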
Uninstall the Kubernetes components
Run kubeadm reset first to clear the cluster settings.
yum erase -y kubelet kubectl kubeadm kubernetes-cni
The control-plane components are deployed as Docker images, so forcibly removing the corresponding images completes the uninstall.
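A sketch of that image cleanup, assuming the images were re-tagged under k8s.gcr.io as above:
# Force-remove every local image from the k8s.gcr.io registry
docker images | grep '^k8s.gcr.io/' | awk '{print $3}' | xargs docker rmi -f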
Run and verify
vi nginx_deploy.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
      - image: nginx
        name: nginx-deployment
kubectl create -f nginx_deploy.yaml
kubectl get pod --all-namespaces -o wide
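If everything is healthy, the two nginx replicas appear as Running and are scheduled onto the cluster nodes; trimmed to the relevant rows, the output looks roughly like this (pod name suffixes, IPs and node names will differ):
NAMESPACE   NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE
default     nginx-deployment-xxxxxxxxxx-xxxxx   1/1     Running   0          1m    10.244.1.2   k8s-node1
default     nginx-deployment-xxxxxxxxxx-xxxxx   1/1     Running   0          1m    10.244.1.3   k8s-node1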
A Kubernetes-based containerized cluster deployment is now up and running. Done!