Notes on configuring workgroup isolation and multi-cluster switching in a K8s environment

Posted by 山河已无恙 on 2022/12/16 17:42:46

Preface


  • Some notes on cluster management in K8s
  • The post covers configuration for cluster environment isolation:
    • A demo of single-cluster isolation with multiple namespaces, plus user authentication and authorization
    • A demo of unified multi-cluster management and cluster switching
  • My understanding is limited; corrections are welcome
  • Prerequisites: familiarity with K8s clusters, RBAC authorization, and CA-based authentication

The Buddha told Subhuti: "All appearances are illusory; if you see that all appearances are not appearances, then you see the Tathagata." --- The Diamond Sutra


Within a team sharing a single cluster, the different workgroups need to be isolated from each other inside the cluster. Following the principle of least privilege, and to prevent groups from interfering with or accidentally affecting one another, this can be achieved by creating separate users, assigning each its own namespace, and granting different permissions. A namespace isolates most API objects, and restricting which namespaces a user may access implements the isolation. With separate namespaces and separate users in the same cluster, contexts (runtime environments) are needed to tell them apart.

  • Testers, for example, may only need get and create permissions in their own namespace: enough to confirm services are running, run tests, or launch a Job with test scripts.
  • Developers need permissions such as create, delete, deletecollection, get, list, patch, update, and watch in their own namespace for continuous integration, deployment, and testing.
  • Operations needs all permissions in all namespaces, being responsible for keeping the whole cluster healthy and for allocating cluster-scoped resource objects such as StorageClasses (SC).
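These verb lists map directly onto RBAC Role objects. A minimal sketch for the test group (the Role name is illustrative; the namespace matches the one created later in this post):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: test-minimal        # illustrative name
  namespace: liruilong-test
rules:
# read and create pods, enough to confirm services run
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "create"]
# launch Jobs that run test scripts
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["get", "list", "create"]
```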

Normally, a K8s dashboard tool would offer fairly complete features for this. Here I'd like to share how to do it with kubectl alone, considering two scenarios:

  • First, sharing a single cluster, using namespaces and users to isolate workgroups within the shared environment.
  • Second, multiple clusters, where the goal is unified management: a shared console, with one client switching between and managing all clusters.

Single-cluster isolation with multiple namespaces

Assume the team has three workgroups: dev, prod, and test. Each is assigned its own namespace, user, and context (runtime environment) in the cluster. The end result looks like this:

┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
          ctx-dev                       kubernetes   dev                liruilong-dev
          ctx-prod                      kubernetes   prod               liruilong-prod
          ctx-test                      kubernetes   test               liruilong-test
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin   awx
┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$

The current context is kubernetes-admin@kubernetes, the default configured by kubeadm when installing K8s: a context and superuser for the system administrator that can bypass RBAC authorization.

Before starting, gather some basic information about the current cluster:

┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl config view  -o json | jq .clusters
[
  {
    "name": "kubernetes",
    "cluster": {
      "server": "https://192.168.26.81:6443",
      "certificate-authority-data": "DATA+OMITTED"
    }
  }
]

Create a namespace for each workgroup:

┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl create ns liruilong-dev
namespace/liruilong-dev created
┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl create ns liruilong-prod
namespace/liruilong-prod created
┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl create ns liruilong-test
namespace/liruilong-test created
┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$

Check the namespaces just created:

┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl get ns -o wide --show-labels  |  grep liruilong
liruilong-dev                Active   4m21s   kubernetes.io/metadata.name=liruilong-dev
liruilong-prod               Active   2m15s   kubernetes.io/metadata.name=liruilong-prod
liruilong-test               Active   119s    kubernetes.io/metadata.name=liruilong-test

Next, define a Context (runtime environment) for each workgroup. Each context belongs to a namespace within the cluster and specifies the user that owns it.

Create each context against the given cluster, namespace, and user:

┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl config set-context ctx-dev --namespace=liruilong-dev --cluster=kubernetes  --user=dev
Context "ctx-dev" created.
┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl config set-context ctx-prod --namespace=liruilong-prod --cluster=kubernetes  --user=prod
Context "ctx-prod" created.
┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl config set-context ctx-test --namespace=liruilong-test --cluster=kubernetes  --user=test
Context "ctx-test" created.
┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$

Confirm the configuration just created:

┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl config view  -o json | jq .contexts
[
  {
    "name": "ctx-dev",
    "context": {
      "cluster": "kubernetes",
      "user": "dev",
      "namespace": "liruilong-dev"
    }
  },
  {
    "name": "ctx-prod",
    "context": {
      "cluster": "kubernetes",
      "user": "prod",
      "namespace": "liruilong-prod"
    }
  },
  {
    "name": "ctx-test",
    "context": {
      "cluster": "kubernetes",
      "user": "test",
      "namespace": "liruilong-test"
    }
  },
  {
    "name": "kubernetes-admin@kubernetes",
    "context": {
      "cluster": "kubernetes",
      "user": "kubernetes-admin",
      "namespace": "awx"
    }
  }
]

With the environments separated, authentication and authorization still need to be set up. New users were defined above for operating on the cluster. For a normal user to authenticate and call the API, the user must first hold a certificate issued by the Kubernetes cluster and then present that certificate to the Kubernetes API. So authentication information must be added for these specific users, along with permission configuration.

K8s supports several authentication and authorization mechanisms. Authentication can be HTTP token based, or kubeconfig certificate based (mutual TLS built on certificates signed by the cluster CA root). Authorization typically uses RBAC; other modes such as ABAC and webhooks also exist.

With token-based authentication, if it has never been configured you must edit the kube-apiserver startup argument configuration and restart the kubelet service. If it is already configured (the API server was started with the --token-auth-file=SOMEFILE option), bearer tokens are read from that file. Currently these tokens last indefinitely, and the token list cannot be changed without restarting the API server, so kubeconfig file authentication is used here instead.
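For reference, the static token file passed via --token-auth-file is a CSV file with at least three columns (token, user name, user UID), optionally followed by group names; the values below are made up:

```csv
token-abc123,dev,1001,"group1,group2"
```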

Authentication with a kubeconfig certificate

About kubeconfig certificate authentication: K8s uses kubeconfig files to organize information about clusters, users, namespaces, and authentication mechanisms. The kubectl command-line tool uses a kubeconfig file to find the information it needs to choose a cluster and communicate with that cluster's API server.

To achieve namespace isolation, create a new kubeconfig file rather than adding the user information to the existing one.

The existing kubeconfig file contains the current cluster administrator's user information. During cluster creation, when kubeadm signs the certificate in admin.conf, it configures it with Subject: O=system:masters, CN=kubernetes-admin. system:masters is an exceptional superuser group that bypasses the authorization layer (e.g. RBAC). It is strongly recommended not to share the admin.conf file with anyone.

Of course, if the cluster was installed with kubeadm, a kubeconfig certificate file for a normal user can also be exported directly with a kubeadm command. Let's look at each approach in turn.

By default, kubectl looks for a file named config in the $HOME/.kube directory. A kubeconfig file consists of the following parts:

  • Cluster information:
    • the cluster CA certificate
    • the cluster address
  • Context information:
    • all contexts
    • the current context
  • User information:
    • the user's client certificate
    • the user's private key
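Put together, a minimal kubeconfig skeleton looks like this (a sketch; certificate data elided, names follow this post's conventions):

```yaml
apiVersion: v1
kind: Config
clusters:                       # cluster information
- name: kubernetes
  cluster:
    server: https://192.168.26.81:6443
    certificate-authority-data: <base64 CA cert>
contexts:                       # context information
- name: ctx-dev
  context:
    cluster: kubernetes
    namespace: liruilong-dev
    user: dev
current-context: ctx-dev
users:                          # user information
- name: dev
  user:
    client-certificate-data: <base64 client cert>
    client-key-data: <base64 client key>
```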

Generating the kubeconfig file

Using the dev workgroup as the demo, create a certificate and generate a kubeconfig file from it. The K8s certificates API automates X.509 provisioning: it provides a programmatic interface for clients of the Kubernetes API to request and obtain X.509 certificates from a certificate authority (CA).

Generate a 2048-bit private key with openssl:

┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$openssl genrsa -out dev-ca.key 2048
Generating RSA private key, 2048 bit long modulus
....................................................................................................+++...........................................................+++
e is 65537 (0x10001)

Generate a certificate signing request (CSR) for the user name dev, with the user belonging to the group 2022. Setting the CN and O attributes of the CSR matters: CN is the user name, and O is the group the user belongs to.

┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$openssl req -new -key dev-ca.key -out dev-ca.csr -subj "/CN=dev/O=2022"
┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$ls
dev-ca.csr  dev-ca.key
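You can double-check the CN and O fields before submitting the CSR. A quick sketch, using a throwaway key and CSR here so as not to touch the files above:

```shell
# Generate a scratch key and CSR, then print the subject to verify
# that CN (user name) and O (group) are what the cluster will see.
openssl genrsa -out demo.key 2048
openssl req -new -key demo.key -out demo.csr -subj "/CN=dev/O=2022"
openssl req -in demo.csr -noout -subject
```

The printed subject line should contain CN=dev and O=2022 (exact formatting varies between openssl versions).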

So far we have the dev user's private key and a certificate signing request (CSR) file. Next, create a CertificateSigningRequest object and submit it to the Kubernetes cluster with kubectl.

┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$cat dev-csr.yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: dev-ca
spec:
  signerName: kubernetes.io/kube-apiserver-client
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1lqQ0NBVW9DQVFBd0hURU1NQW9HQTFVRUF3d0RaR1YyTVEwd0N3WURWUVFLREFReU1ESXlNSUlCSWpBTgpCZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUEzVHBUTEM1cjFCTEtYNHBkQXA1UHlReUN0VlNYCmxQZ1BqWTJ5TFZDckFkVGh0ZEVtMHcvclA4VlJrc3BucmhSQnhsQUNIa2FPcWNnSGFieUo0bHlWOGQ5RUIxcFMKanMxaThSc2VOUm5rd1NxN3FzRnpmR01FTXU5SjVjZUpaWnQxWWU1ZEhaZG5EbjFpK0VEdDJ0Z0VTcjRMTGlNdgpoVTNBTWtBbC82dTBLZCtZZ2tWOXlES0JMaEhoTUlZUHlUb0pTV215K0VJZkdDNzcyZXNmeER1Y1Q5SCtTTmN1CkdMQ3hvUUcvT2VVbEFVaFdqRkJ0L29MUWI2NU4yTTd3Ky9SVkN1YVRzZXZIZkQ5MXZNczE3dU5vN25mVXQ1a2MKd2E2czNKUm81UnYxQm9vRlVTRGtxa0NwMW5MejNOamszby9zdUdmY1QxTEpDRnJQVStLL3d5Z1JJUUlEQVFBQgpvQUF3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQU1sTllNb21qbnBhZzVBMkQzTnRTUlA4UmRWajY0akEzZ2VICkFjZFFDR0x4VnRvZXNkUUMxV1RGaXZFTGZIdnFZVzdpekFNamNMUVh2U1g2MXFNSzg3ejVNcllEL0g5d1lrNk4Ka2VYNUJRamZxN2xCV0ZSU0VkTzd5WDFESHhTSGVRSEsxTU9QNTU1dWErT2haYldldFlwVmZEVmxReVpEME9LMwptTFhac1dnWnk0N2syek5jdmlWYkl0Rm9nT2Y2ZGhQenU0UHFhWXVuTzNNUmVJT2JCZGVINzMxVDhuQUFKZldRCmswMGJLYlRPeEphSkVSUktMcGVUS2k1dit5a09oNjBYTC9vK1k5TVd4T2EySUErS3JxcmgxYmhjWmxha1JiTnoKODZ4VkJLTDBDc3d1RW9abWZSSUJZejlBR0RxWlJEVUdTUkhOaUlzNTNCbnl1MlBuU3VzPQotLS0tLUVORCBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0K
  usages:
  - client auth

A few things to note:

  • the usages field must be 'client auth'
  • expirationSeconds can be set longer (e.g. 864000 for ten days) or shorter (e.g. 3600 for one hour); it is left at the default here
  • the request field is the base64-encoded content of the CSR file, obtained with cat myuser.csr | base64 | tr -d "\n"

┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$cat dev-ca.csr | base64  | tr -d '\n'
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1lqQ0NBVW9DQVFBd0hURU1NQW9HQTFVRUF3d0RaR1YyTVEwd0N3WURWUVFLREFReU1ESXlNSUlCSWpBTgpCZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUEzVHBUTEM1cjFCTEtYNHBkQXA1UHlReUN0VlNYCmxQZ1BqWTJ5TFZDckFkVGh0ZEVtMHcvclA4VlJrc3BucmhSQnhsQUNIa2FPcWNnSGFieUo0bHlWOGQ5RUIxcFMKanMxaThSc2VOUm5rd1NxN3FzRnpmR01FTXU5SjVjZUpaWnQxWWU1ZEhaZG5EbjFpK0VEdDJ0Z0VTcjRMTGlNdgpoVTNBTWtBbC82dTBLZCtZZ2tWOXlES0JMaEhoTUlZUHlUb0pTV215K0VJZkdDNzcyZXNmeER1Y1Q5SCtTTmN1CkdMQ3hvUUcvT2VVbEFVaFdqRkJ0L29MUWI2NU4yTTd3Ky9SVkN1YVRzZXZIZkQ5MXZNczE3dU5vN25mVXQ1a2MKd2E2czNKUm81UnYxQm9vRlVTRGtxa0NwMW5MejNOamszby9zdUdmY1QxTEpDRnJQVStLL3d5Z1JJUUlEQVFBQgpvQUF3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQU1sTllNb21qbnBhZzVBMkQzTnRTUlA4UmRWajY0akEzZ2VICkFjZFFDR0x4VnRvZXNkUUMxV1RGaXZFTGZIdnFZVzdpekFNamNMUVh2U1g2MXFNSzg3ejVNcllEL0g5d1lrNk4Ka2VYNUJRamZxN2xCV0ZSU0VkTzd5WDFESHhTSGVRSEsxTU9QNTU1dWErT2haYldldFlwVmZEVmxReVpEME9LMwptTFhac1dnWnk0N2syek5jdmlWYkl0Rm9nT2Y2ZGhQenU0UHFhWXVuTzNNUmVJT2JCZGVINzMxVDhuQUFKZldRCmswMGJLYlRPeEphSkVSUktMcGVUS2k1dit5a09oNjBYTC9vK1k5TVd4T2EySUErS3JxcmgxYmhjWmxha1JiTnoKODZ4VkJLTDBDc3d1RW9abWZSSUJZejlBR0RxWlJEVUdTUkhOaUlzNTNCbnl1MlBuU3VzPQotLS0tLUVORCBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0K

Create the CSR object with kubectl, then approve the certificate signing request.

┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$kubectl apply  -f dev-csr.yaml
certificatesigningrequest.certificates.k8s.io/dev-ca created
┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$kubectl  get csr
NAME     AGE   SIGNERNAME                            REQUESTOR          REQUESTEDDURATION   CONDITION
dev-ca   4s    kubernetes.io/kube-apiserver-client   kubernetes-admin   <none>              Pending

Approve the CSR:

┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$kubectl certificate approve  dev-ca
certificatesigningrequest.certificates.k8s.io/dev-ca approved
┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$kubectl  get csr/dev-ca
NAME     AGE     SIGNERNAME                            REQUESTOR          REQUESTEDDURATION   CONDITION
dev-ca   3m32s   kubernetes.io/kube-apiserver-client   kubernetes-admin   <none>              Approved,Issued

Retrieve the certificate from the CSR. The issued certificate is stored base64-encoded in the status.certificate field; export it from the CertificateSigningRequest:

┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$kubectl  get csr dev-ca  -o  jsonpath='{.status.certificate}'| base64 -d > dev-ca.crt

At this point we have the user's .crt certificate and the user's .key private key.

┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$ll
总用量 16
-rw-r--r-- 1 root root 1103 12月 11 16:32 dev-ca.crt
-rw-r--r-- 1 root root  903 12月 11 16:14 dev-ca.csr
-rw-r--r-- 1 root root 1675 12月 11 13:30 dev-ca.key
-rw-r--r-- 1 root root 1390 12月 11 16:25 dev-csr.yaml

As mentioned earlier, a kubeconfig file has three parts: cluster information, context information, and user information. Build the three parts one by one.

Copy the current cluster's CA certificate into the working directory; it is needed to build the new kubeconfig file:

┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$cp  /etc/kubernetes/pki/ca.crt .

--kubeconfig=dev-config names the new kubeconfig file, and --certificate-authority=ca.crt specifies the cluster CA certificate:

┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$kubectl config --kubeconfig=dev-config set-cluster kubernetes   --server=https://192.168.26.81:6443 --certificate-authority=ca.crt --embed-certs=true
Cluster "kubernetes" set.

--embed-certs=true embeds the certificate data directly in the kubeconfig file. Next, add the context configured earlier for the workgroup's namespace and user:

┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$kubectl config --kubeconfig=dev-config set-context ctx-dev --namespace=liruilong-dev --cluster=kubernetes  --user=dev
Context "ctx-dev" created.

With the context added, only the user information remains. set-credentials dev specifies the user, --client-key=dev-ca.key the user's private key, and --client-certificate=dev-ca.crt the user's client certificate:

┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$kubectl config --kubeconfig=dev-config set-credentials dev --client-certificate=dev-ca.crt --client-key=dev-ca.key  --embed-certs=true
User "dev" set.

That completes the user's authentication setup: a kubeconfig file for the dev user has been created.

┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$cat dev-config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1USXhNakUyTURBME1sb1hEVE14TVRJeE1ERTJNREEwTWxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTkdkCisrWnhFRDJRQlR2Rm5ycDRLNFBrd2lsYXUrNjdXNTVobVdwc09KSHF6ckVoWUREY3l4ZTU2Z1VJVDFCUTFwbU0KcGFrM0V4L0JZRStPeHY4ZmxtellGbzRObDZXQjl4VXovTW5HQi96dHZsTGpaVEVHZy9SVlNIZTJweCs2MUlSMQo2Mkh2OEpJbkNDUFhXN0pmR3VXNDdKTXFUNTUrZUNuR00vMCtGdnI2QUJnT2YwNjBSSFFuaVlzeGtpSVJmcjExClVmcnlPK0RFTGJmWjFWeDhnbi9tcGZEZ044cFgrVk9FNFdHSDVLejMyNDJtWGJnL3A0emd3N2NSalpSWUtnVlUKK2VNeVIyK3pwaTBhWW95L2hLYmg4RGRUZ3FZeERDMzR6NHFoQ3RGQnVia1hmb3Ftc3FGNXpQUm1ZS051RUgzVAo2c1FNSFl4emZXRkZvSGQ2Y0JNQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZHRGNLU3V1VjVNNXlaTkJHUDEvNmg3TFk3K2VNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBRVE0SUJhM0hBTFB4OUVGWnoyZQpoSXZkcmw1U0xlanppMzkraTdheC8xb01SUGZacElwTzZ2dWlVdHExVTQ2V0RscTd4TlFhbVVQSFJSY1RrZHZhCkxkUzM5Y1UrVzk5K3lDdXdqL1ZrdzdZUkpIY0p1WCtxT1NTcGVzb3lrOU16NmZxNytJUU9lcVRTbGpWWDJDS2sKUFZxd3FVUFNNbHFNOURMa0JmNzZXYVlyWUxCc01EdzNRZ3N1VTdMWmg5bE5TYVduSzFoR0JKTnRndjAxdS9MWAo0TnhKY3pFbzBOZGF1OEJSdUlMZHR1dTFDdEFhT21CQ2ZjeTBoZHkzVTdnQXh5blR6YU1zSFFTamIza0JDMkY5CkpWSnJNN1FULytoMStsOFhJQ3ZLVzlNM1FlR0diYm13Z1lLYnMvekswWmc1TE5sLzFJVThaTUpPREhTVVBlckQKU09ZPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.26.81:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: liruilong-dev
    user: dev
  name: ctx-dev
current-context: ""
kind: Config
preferences: {}
users:
- name: dev
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURBakNDQWVxZ0F3SUJBZ0lRWVlWTHZPWkthTXEyUHV5akg4d0VvakFOQmdrcWhraUc5dzBCQVFzRkFEQVYKTVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1CNFhEVEl5TVRJeE1UQTRNakV4TjFvWERUSXpNVEl4TVRBNApNakV4TjFvd0hURU5NQXNHQTFVRUNoTUVNakF5TWpFTU1Bb0dBMVVFQXhNRFpHVjJNSUlCSWpBTkJna3Foa2lHCjl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUEzVHBUTEM1cjFCTEtYNHBkQXA1UHlReUN0VlNYbFBnUGpZMnkKTFZDckFkVGh0ZEVtMHcvclA4VlJrc3BucmhSQnhsQUNIa2FPcWNnSGFieUo0bHlWOGQ5RUIxcFNqczFpOFJzZQpOUm5rd1NxN3FzRnpmR01FTXU5SjVjZUpaWnQxWWU1ZEhaZG5EbjFpK0VEdDJ0Z0VTcjRMTGlNdmhVM0FNa0FsCi82dTBLZCtZZ2tWOXlES0JMaEhoTUlZUHlUb0pTV215K0VJZkdDNzcyZXNmeER1Y1Q5SCtTTmN1R0xDeG9RRy8KT2VVbEFVaFdqRkJ0L29MUWI2NU4yTTd3Ky9SVkN1YVRzZXZIZkQ5MXZNczE3dU5vN25mVXQ1a2N3YTZzM0pSbwo1UnYxQm9vRlVTRGtxa0NwMW5MejNOamszby9zdUdmY1QxTEpDRnJQVStLL3d5Z1JJUUlEQVFBQm8wWXdSREFUCkJnTlZIU1VFRERBS0JnZ3JCZ0VGQlFjREFqQU1CZ05WSFJNQkFmOEVBakFBTUI4R0ExVWRJd1FZTUJhQUZHRGMKS1N1dVY1TTV5Wk5CR1AxLzZoN0xZNytlTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFEQkhHeE81VTcydGpQdQplN2NIYlhYaVVLNVhoSlF2TjlGQWdRaG00ZVdaNVRkNjJyRE9kQ3BwQ2JENDRLYkNabXFjRGR2d0krSWQyTmtxCnNtQ3R0Nmw4UW15WUhuQTl5ajc2TFQySzFJMm9CcVlpYzdIRlROTzhtN2lsVTBRYTZreWJJSUVxeGdxb3d2V0EKcVpOL3BHcUVWbTdxaEhkNW0yMFQrNjNYZ3FoR1JVaGcyayt3SnJBd3VHUy9wWGlObG1yVldDT2E3enhFWSs2dgpVVU81YS9EbjFZaXZjUExKRzRqdUY1VTdkWmJMS1FMMnkxTndUbEpDZTdRQkhxQzBzSnhuNTNDWC82blhIbW9OCm9mWThBMEVkZFd0WVNXT2FxSGVScVRpRG5XT2pIUk8vbmJVbTlwR3VLQmhhMmxqQUR4dmh4K1VlWnFselBZN3YKSHlWeXltU2gKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBM1RwVExDNXIxQkxLWDRwZEFwNVB5UXlDdFZTWGxQZ1BqWTJ5TFZDckFkVGh0ZEVtCjB3L3JQOFZSa3NwbnJoUkJ4bEFDSGthT3FjZ0hhYnlKNGx5VjhkOUVCMXBTanMxaThSc2VOUm5rd1NxN3FzRnoKZkdNRU11OUo1Y2VKWlp0MVllNWRIWmRuRG4xaStFRHQydGdFU3I0TExpTXZoVTNBTWtBbC82dTBLZCtZZ2tWOQp5REtCTGhIaE1JWVB5VG9KU1dteStFSWZHQzc3MmVzZnhEdWNUOUgrU05jdUdMQ3hvUUcvT2VVbEFVaFdqRkJ0Ci9vTFFiNjVOMk03dysvUlZDdWFUc2V2SGZEOTF2TXMxN3VObzduZlV0NWtjd2E2czNKUm81UnYxQm9vRlVTRGsKcWtDcDFuTHozTmprM28vc3VHZmNUMUxKQ0ZyUFUrSy93eWdSSVFJREFRQUJBb0lCQVFDbURNdzNBbFR2Sm5kKwpCTlhSVEZDb29GcFBqc0lFRDdsa3ozRm9yLzdiYmhWSXFrZFE3c2J0NDhaWnZ0RFppZHpnNUZiaXNLVU9iTlNiCm1lZUkzMk93MjVzdFJhOW4vbU9BZzVGRjNEeW1mTlBGMUZSQmpmU3Q0b3YrQzZwbWVLdy9xSEY5NzVGci85TlUKY1MvWExvTHlNdmtqVlVlcTcvUU9BN1pCMUhoellEcHpHUnJlcmdOTVU1K1lDRUo0Y0ZvN3pzSFh1ejhpWU5Nawp0K0k0by9BWEFtUktlOFBaWFZRWVdMR1lpdlBWcGU1bVFKUTZXKzdFTHlzMnAzYlRCVVJqN3JLSTBaT0dvcmpZCjNBTlRqQlRKY282eHAySTk5VXVQNHlCOWh6RG9RWGNublhURzdZcVZxZjZyL29vbXhZS3VVSUpZT1VaL21OOVUKZXF5SnpRdU5Bb0dCQVBrQzNmVzVWNDBGdjRYT29mS2VmcnNXVERESUEyMWhsVE9tSVJlNmpndjg2MnAwdElZVAo0L29WdjB0K3ZQeGE4Rkl5d0ZUUDJicFJQbUE2MlBnOVl6TUpuYytrRXY3cGFlT0p0MjNoUUxMM2lKTXFqNVBOCnp0SzZhSkI2K2w0RTVrVzZjRjZpVE5ROEpwZzRvUTgrNWlBZUJXbjhoM01ERjlKZDZMMzRYVmt6QW9HQkFPTnYKMWZSV2V6cE1TZEtPSktkakRzUlprYlczeXdxM3NFaEZEU2ZSTU5OYXNGcHhTcWpJZUx2OWRON21aL25adHpwSApzbXc4aWcyV3JrSENjRk5BM29zUkFzbWxXekdmNFVIWlNqVEFKSk1qK0ZEYSt1NjVhYTF4ckJ5R0Z3Q1RtWnB2CmRsZitJQktab21NTlpOcnpmK011Q0RQR2hhczFkZ2xhL1h1NnVUUmJBb0dBZEFQMzhmSm1iaGZOZ2NRaUErNEEKVVo0ejVVNXErbDFLcklPc1MyZnBvb0EyRnFWRkxtcTUvdHgvQWVlTW1XNnRKVDdzQ1JmRjgxN0Mxd2JUNitSKwpBVnRyb1VCcWNVWEN4ZloxOWNYSzVSY2JGS1h4dXdWYVpTZmdhK0JBSWVuYWQ0WkRzSE9ocEFoYVd2V1haSWtECm90Y1o0cVY3WGdTRTVzaEdGYXhQb2EwQ2dZQkFKa290dWJyV0xiQmcwRERzZVpjdnNLZlZubnFKa2xnSmVsaUUKazQ5Mi9jeGlKalJOdVFXODJIZC9hM09HV0c5QzQvZ2lhVXp6R2o0YVZES0VlUGFNT1FjVlF5dWVxcDdKaVBWUwpQYVBUVU1ENFpWdUR2QTVmbW9GV0prZ1VwSTBkcnpTdEN3T1cyM2lmQWFjaHpxNlNzR2dsMm1mWGE2UFliYTZ6Cm1HNG1vd0tCZ0VjS3U0WDQ3RGhLTmk
4L1poZi9pank5UWsyYVZGR01jMVczVzhUWjdxY2R3aVMwK2VvSmtmTVoKRUFFVW1xME9IUWdmVGxHYnJobEpJZzlpRXIxbXFYMUUvaWFGU3N0Q0RYc3RhTG9FY3FwckhtcDFiZnplbUlWNgpLSGxiRTRqdG4vM1dCckNyUFdpeFVOUEZHYjMvUVRwZkJQNCtFa0F1WTVzemk3d1kxeUFuCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

Copy the generated file to any client machine with kubectl installed for testing. Before testing, the kubeconfig file needs one edit to add the context setting: change current-context: "" to current-context: "ctx-dev".
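Instead of editing the file by hand, the same kubectl config subcommands used above can set the field (this only rewrites the local file; no cluster connection is needed):

```shell
# Point current-context of the new kubeconfig at ctx-dev.
kubectl config --kubeconfig=dev-config use-context ctx-dev
```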

┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$ll
总用量 28
-rw-r--r-- 1 root root 1099 12月 11 17:12 ca.crt
-rw-r--r-- 1 root root 1103 12月 11 16:32 dev-ca.crt
-rw-r--r-- 1 root root  903 12月 11 16:14 dev-ca.csr
-rw-r--r-- 1 root root 1675 12月 11 13:30 dev-ca.key
-rw------- 1 root root 5535 12月 11 17:12 dev-config
-rw-r--r-- 1 root root 1390 12月 11 16:25 dev-csr.yaml
┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$scp dev-config  root@192.168.26.55:~
root@192.168.26.55's password:
dev-config

dev's authentication is done, but authorization is still missing: with only authentication information, you can access the cluster but cannot view any cluster information.

Generating the certificate file with kubeadm

kubeadm kubeconfig user can generate a new kubeconfig file for a normal user. It requires a ClusterConfiguration resource object; note that this object belongs to kubeadm, not to the core K8s API.

Define the resource object:

┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$cat example.yaml
# example.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
# "kubernetes" will be used as the cluster name in the kubeconfig
clusterName: "kubernetes"
# the control-plane endpoint (an IP or DNS name) used as the server address in the kubeconfig
controlPlaneEndpoint: "192.168.26.81:6443"
# the cluster CA key and CA certificate are loaded from this local directory
certificatesDir: "/etc/kubernetes/pki"

The cluster information needed for this resource object can be looked up as follows:

┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$kubectl get cm kubeadm-config -n kube-system -o=jsonpath="{.data.ClusterConfiguration}"
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.22.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Generate the certificate with kubeadm kubeconfig user. Here a kubeconfig file valid for 10000 hours is generated for the prod user defined above:

┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$kubeadm kubeconfig user --config example.yaml --org 2022 --client-name prod --validity-period 10000h > prod-config
I1215 20:56:40.505079  127804 version.go:255] remote version is much newer: v1.26.0; falling back to: stable-1.22
W1215 20:56:43.639674  127804 kubeconfig.go:88] WARNING: the specified certificate validity period 10000h0m0s is longer than the default duration 8760h0m0s, this may increase security risks.
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$ls
admin.conf  controller-manager.conf  example.yaml  kubelet.conf  manifests  pki  prod-config  scheduler.conf
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$cat prod-config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1USXhNakUyTURBME1sb1hEVE14TVRJeE1ERTJNREEwTWxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTkdkCisrWnhFRDJRQlR2Rm5ycDRLNFBrd2lsYXUrNjdXNTVobVdwc09KSHF6ckVoWUREY3l4ZTU2Z1VJVDFCUTFwbU0KcGFrM0V4L0JZRStPeHY4ZmxtellGbzRObDZXQjl4VXovTW5HQi96dHZsTGpaVEVHZy9SVlNIZTJweCs2MUlSMQo2Mkh2OEpJbkNDUFhXN0pmR3VXNDdKTXFUNTUrZUNuR00vMCtGdnI2QUJnT2YwNjBSSFFuaVlzeGtpSVJmcjExClVmcnlPK0RFTGJmWjFWeDhnbi9tcGZEZ044cFgrVk9FNFdHSDVLejMyNDJtWGJnL3A0emd3N2NSalpSWUtnVlUKK2VNeVIyK3pwaTBhWW95L2hLYmg4RGRUZ3FZeERDMzR6NHFoQ3RGQnVia1hmb3Ftc3FGNXpQUm1ZS051RUgzVAo2c1FNSFl4emZXRkZvSGQ2Y0JNQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZHRGNLU3V1VjVNNXlaTkJHUDEvNmg3TFk3K2VNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBRVE0SUJhM0hBTFB4OUVGWnoyZQpoSXZkcmw1U0xlanppMzkraTdheC8xb01SUGZacElwTzZ2dWlVdHExVTQ2V0RscTd4TlFhbVVQSFJSY1RrZHZhCkxkUzM5Y1UrVzk5K3lDdXdqL1ZrdzdZUkpIY0p1WCtxT1NTcGVzb3lrOU16NmZxNytJUU9lcVRTbGpWWDJDS2sKUFZxd3FVUFNNbHFNOURMa0JmNzZXYVlyWUxCc01EdzNRZ3N1VTdMWmg5bE5TYVduSzFoR0JKTnRndjAxdS9MWAo0TnhKY3pFbzBOZGF1OEJSdUlMZHR1dTFDdEFhT21CQ2ZjeTBoZHkzVTdnQXh5blR6YU1zSFFTamIza0JDMkY5CkpWSnJNN1FULytoMStsOFhJQ3ZLVzlNM1FlR0diYm13Z1lLYnMvekswWmc1TE5sLzFJVThaTUpPREhTVVBlckQKU09ZPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.26.81:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: prod
  name: prod@kubernetes
current-context: prod@kubernetes
kind: Config
preferences: {}
users:
- name: prod
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURDekNDQWZPZ0F3SUJBZ0lJUmxBQ2U4SG1kSXd3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRFeU1USXhOakF3TkRKYUZ3MHlOREF5TURVd05EVTJORE5hTUI0eApEVEFMQmdOVkJBb1RCREl3TWpJeERUQUxCZ05WQkFNVEJIQnliMlF3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUURhaWlIOXZqNnVpdG16VTFOd3VRdGVkN3I5aUhEMzZLNEFtaVJDUTBNaGRCWlYKZlkxU1JUTFhqV1VibXBNWjF3c1hDT3IrUnhFVHhkdlhIcmNXSEh1cHBMWlE1ZVRFWVQrUitkZFdhUlFoRnpPdQpEM1l1YWdkdjNUdTdXT1pnS1R2ckIzeWxWRWptQ1M0RTRsQXZidmJ4MkJ1OHk5bDVHQ3ZQSW9wUCtjbklNRGZmCjMyMkhtOXZkVE1uRjFZUUZKSEJqbnMrOUI0TnUxdjc2WUdwbElUbFpIQ01tTjZ2aDYzbHVxL01lajdYaEFwM24KRTM0dUNQRFRQSlFlU3p5QUltc2t0ZnRGV3NJdEpTSUhyS1kyY0RBVjltVXI0VFBhY2FGZDFrcjNSTTkvQjBnWApGTEhUdWYrL2tyd3BaM1FDd0luaHRDSGc5UkZ2MjNOSkRENHNNMVZIQWdNQkFBR2pWakJVTUE0R0ExVWREd0VCCi93UUVBd0lGb0RBVEJnTlZIU1VFRERBS0JnZ3JCZ0VGQlFjREFqQU1CZ05WSFJNQkFmOEVBakFBTUI4R0ExVWQKSXdRWU1CYUFGR0RjS1N1dVY1TTV5Wk5CR1AxLzZoN0xZNytlTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFBWAp0bmpWVTZhbEJzeTNoQ08xMGVnNkQzRElqODJMRGJSSmZQeU8waldoUGYveXU2VFNwS0VJUFVTQndvcWFFREhEClEraUlEZEdIRmZoa3FLRDlzOENNQ2tMZHF5S2ZDajQ4TjhvdXg5aUJXWXZVR0YzdDFyWWVCaDdCTmhrOTZPMnoKYW9MZm9GNnFqUkYyYzZIMitORjJmWFJMdlA2V0EwK1lXS3FXY2lickYvcEhYT2FidTBMN0NGR0VQSmJyUVZNcQpyeGUwTytWWkg5ZXIrQmZrelZOYmg5djJoNU5qaU1ZSHhSMkdkc01tUDNGT28vWTBUbzFlMzhVbGFndzh3Ty82Ci8zR1RtT01aUzJhZUNxazdyUmlic0ljKys5NVNwRTRqTlFpSmNPc0tFbDcrUXhsdHRaMlpib0ZheDZDaHJ4bEUKWnc4MmpvWlhxaVViaXhpbEgrUEIKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMm9vaC9iNCtyb3JaczFOVGNMa0xYbmU2L1lodzkraXVBSm9rUWtORElYUVdWWDJOClVrVXkxNDFsRzVxVEdkY0xGd2pxL2tjUkU4WGIxeDYzRmh4N3FhUzJVT1hreEdFL2tmblhWbWtVSVJjenJnOTIKTG1vSGI5MDd1MWptWUNrNzZ3ZDhwVlJJNWdrdUJPSlFMMjcyOGRnYnZNdlplUmdyenlLS1Qvbkp5REEzMzk5dApoNXZiM1V6SnhkV0VCU1J3WTU3UHZRZURidGIrK21CcVpTRTVXUndqSmplcjRldDVicXZ6SG8rMTRRS2Q1eE4rCkxnancwenlVSGtzOGdDSnJKTFg3UlZyQ0xTVWlCNnltTm5Bd0ZmWmxLK0V6Mm5HaFhkWks5MFRQZndkSUZ4U3gKMDduL3Y1SzhLV2QwQXNDSjRiUWg0UFVSYjl0elNRdytMRE5WUndJREFRQUJBb0lCQVFDZnYwMXRrRDE5bFIzaAp5YzA2bnVsQ21yN2pTWE5hcElsZEEwL3g1LzBRWFMxZVBMS3JLczRwWnNBNzExZ2tFVitYN1BycCtNVHc4VGJzCkh4V3lZZ3U3VEIzQk1PdHk2YXR3WjNNVFJTaGpyL1FsRGtSVFZVb3VhVWVhZ1RlVm4wNmZWUSsyUXRBdTV4THUKbXdnR1JGVGJJQi9XZUNSMk1rY0QyTG5HRUUrQnR2TVRRQzY2emQyYTI4Wm9PYlA0OWd0cmptNEcxRnhpcUJMWAowTWliMSt1VWhiNk1DMVZnVjZvak9YeXlRNkk2aXJ4Mmh5eDNSNS9qK0xMZ3lEb0lXUHFNUDFkNXpjN3BCSU81CmJGK1BLd0VVdXF5TGNYZDJ4OHJaclJPdGd1cjBOMWpnTWMvcXZaOFVpYWZFTkdzdURIQ1JkTE5BdHZPb2MxOTIKUC9XMEtNZHhBb0dCQVBzNmxsYmEwVWdqNy9NTXVzbVpmQkQ3UURoWVU4L3NqVmpTTUVVak5aVE9aVFFtVFRoNQpMSEZqRjE2VHdHb3FpcGU0VU5ndGhMcER2LzlUUUxEQzVHMVgrMkY1ODJZYTFKZmdZVFYwNkhsdGZJVWpMT0YvCk5NZ1hmejlBNVBsbGFpb3VyN010My9TSDJad3R2b0xGZjFiaFVFdTNOb3d2L0NMM3FlbDJnUmk5QW9HQkFONncKbmtwS2NOejVGZDNKbmJ4WE84NVkyaGE2dHJLcmZHVjhwM1dvTFRRYklRWU56M0pFYjFUYUxIT1B4eGlCN3lTSwpFNzJaQ2VYa2hsTGlyOXQwZHdRQXJwRm95d24vUUVJc1FOS0tERDBHSkxzZVVJeFRHUVBqbVFOWFF5TThzRmFGCnZvdWxlQzJ2TDJjbkNqYWtYQ1pwZnVlTFVpVnk5dFp0aWMrbUJwQlRBb0dBWDVmZVpyUWlXQW5jbnFYa1dSdCsKMnROUGoyRUVteVJPY0ZLaUxWeUZZZGJiS1dtOWpsU0ZOYXZYMDVQeTdqSzd3NWxOb2NSSU1idmZ6WjUzQ2d0TwpjZEM5aFV5cThkb1p0S1NiT0lVQWhGdkZ1cjgwcjZVQWgzWnhZN2NrcVVVT2pYaHdRSVNmSitPZFNORWJJWlZXCnE4OVdCMGx5aHdzbkxJTUNjeVExWVIwQ2dZQUpIcnFjMkVlZkJTUjhITkcwOE8ybUdjVjB3TmpTb0d0THpMc2UKK25BL2ZnendMb2ljYVdrVjFJbVZnZ0hwWXdqa09qTnN4R08vWW9pTnhITG5UZkhCM0RWS0J6eXBnQ2FsanlKbwpmUGJiV1BFUUtNR3J2WXQ4dVVsKzlZZnVYWUhyU1Rid2lTcE8xS25nVTV6N2QrZStPdnZUaDhVcGUzZlllRXY0CmtSZ2J1UUtCZ0FwcitUNU0wcGt5K1N
QbVE0NHdTb3hTQkd4Mmp3aDArU3gyWE9iTUJjVVN0VFhKUE1oOUFjUDYKMXcwMDNBUVJuSjFRcy8vVnBid21iSWFENS9nQW02V1pJdVhjTjB3WWh0ejJWV2sxRW9kdjhpQmkzK0hmRXlVawo4L2tmdFRLSXNsZzRNbDgzR3pCcGhycFF1ODVsN2ZoUUhCQmZFT3k3NTNnZHZXcFVxNzJ2Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

Neither the context nor the namespace is what we want, so this kubeconfig file needs to be modified. Afterwards, point kubectl at the new file and view the cluster authentication information to confirm the changes:
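A sketch of the edits, using the same kubectl config subcommands as for dev-config (these operate only on the local file; the context name and namespace follow this post's conventions):

```shell
# Add a ctx-prod context scoped to the liruilong-prod namespace,
# switch to it, and drop the context kubeadm generated.
kubectl config --kubeconfig=prod-config set-context ctx-prod \
  --cluster=kubernetes --namespace=liruilong-prod --user=prod
kubectl config --kubeconfig=prod-config use-context ctx-prod
kubectl config --kubeconfig=prod-config delete-context prod@kubernetes
```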

┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$kubectl --kubeconfig=prod-config  config  view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.26.81:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: liruilong-prod
    user: prod
  name: ctx-prod
current-context: ctx-prod
kind: Config
preferences: {}
users:
- name: prod
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

Since no authorization has been granted yet, the user has no permissions at all and can see nothing:

┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$kubectl --kubeconfig=prod-config  get nodes
Error from server (Forbidden): nodes is forbidden: User "prod" cannot list resource "nodes" in API group "" at the cluster scope

Authorization with RBAC

RBAC authorization itself is not covered in depth here; interested readers can check the official docs or my earlier posts. For now, simply find an existing cluster role, admin, to bind. View the role's permissions:

┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$kubectl describe clusterrole admin
Name:         admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources                                       Non-Resource URLs  Resource Names  Verbs
  ---------                                       -----------------  --------------  -----
  rolebindings.rbac.authorization.k8s.io          []                 []              [create delete deletecollection get list patch update watch]
  roles.rbac.authorization.k8s.io                 []                 []              [create delete deletecollection get list patch update watch]
  configmaps                                      []                 []              [create delete deletecollection patch update get list watch]
  events                                          []                 []              [create delete deletecollection patch update get list watch]
  persistentvolumeclaims                          []                 []              [create delete deletecollection patch update get list watch]
  pods                                            []                 []              [create delete deletecollection patch update get list watch]
  ..................
  ..............

Bind the cluster role admin to the user dev, restricting its effect to the liruilong-dev namespace:

┌──[root@vms81.liruilongs.github.io]-[~/.kube/ca]
└─$kubectl create rolebinding dev-admin  --clusterrole=admin --user=dev -n liruilong-dev
rolebinding.rbac.authorization.k8s.io/dev-admin created
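From the admin context, the binding can be checked with kubectl auth can-i and user impersonation (a sketch; requires a running cluster):

```shell
# Ask the API server whether dev may create pods in its own namespace...
kubectl auth can-i create pods -n liruilong-dev --as dev
# ...and whether it may list pods cluster-wide (it should not be able to).
kubectl auth can-i list pods --all-namespaces --as dev
```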

For the prod user, whose kubeconfig file was generated with kubeadm, create a dedicated role here and bind it:

┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$kubectl create  role prod-role --verb=get,list --resource=pod -n liruilong-prod
role.rbac.authorization.k8s.io/prod-role created
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$kubectl get -n liruilong-prod role prod-role
NAME        CREATED AT
prod-role   2022-12-15T14:17:57Z
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$kubectl -n liruilong-prod describe role prod-role
Name:         prod-role
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  pods       []                 []              [get list]
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$kubectl create rolebinding  prod-bind --role=prod-role  --user=prod -n liruilong-prod
rolebinding.rbac.authorization.k8s.io/prod-bind created
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$kubectl get rolebindings.rbac.authorization.k8s.io  -n liruilong-prod
NAME        ROLE             AGE
prod-bind   Role/prod-role   68s

Client-side testing

Testing as the dev user. For convenience, define an environment variable pointing at the kubeconfig file. In general, except for super administrators, it is best not to connect to the K8s cluster directly from cluster node machines; instead, copy the kubeconfig file to a client machine with kubectl installed and operate the cluster from there.

┌──[root@liruilongs.github.io]-[~]
└─$export KUBECONFIG=/root/dev-config

Note that an exported environment variable only applies to the current shell session. Test the get and create permissions in the user's own namespace:

┌──[root@liruilongs.github.io]-[~]
└─$kubectl get pods
No resources found in liruilong-dev namespace.
┌──[root@liruilongs.github.io]-[~]
└─$kubectl run web  --image nginx
pod/web created
┌──[root@liruilongs.github.io]-[~]
└─$kubectl get pods -owide
NAME   READY   STATUS    RESTARTS   AGE   IP              NODE                          NOMINATED NODE   READINESS GATES
web    1/1     Running   0          19s   10.244.217.15   vms155.liruilongs.github.io   <none>           <none>
┌──[root@liruilongs.github.io]-[~]
└─$

Test that access to other namespaces is denied:

┌──[root@liruilongs.github.io]-[~]
└─$ kubectl get pods -A
Error from server (Forbidden): pods is forbidden: User "dev" cannot list resource "pods" in API group "" at the cluster scope
┌──[root@liruilongs.github.io]-[~]
└─$
┌──[root@liruilongs.github.io]-[~]
└─$ kubectl get deployments.apps  -n awx
Error from server (Forbidden): deployments.apps is forbidden: User "dev" cannot list resource "deployments" in API group "apps" in the namespace "awx"

Looking at the context information, only the user's own context is visible:

┌──[root@liruilongs.github.io]-[~]
└─$ kubectl config get-contexts
CURRENT   NAME      CLUSTER      AUTHINFO   NAMESPACE
*         ctx-dev   kubernetes   dev        liruilong-dev

Testing as the prod user. This time, copy the file straight to the default location ~/.kube/config. Note that both the --kubeconfig flag and the KUBECONFIG environment variable take precedence over the default ~/.kube/config file.

┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$scp  prod-config   root@192.168.26.55:~/.kube/config
root@192.168.26.55's password:
prod-config                                                                 100% 5562     4.3MB/s   00:00
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes]
└─$ssh root@192.168.26.55
root@192.168.26.55's password:
Last login: Sun Dec 11 17:03:02 2022 from 192.168.26.1
┌──[root@liruilongs.github.io]-[~/.kube]
└─$ kubectl config get-contexts
CURRENT   NAME       CLUSTER      AUTHINFO   NAMESPACE
*         ctx-prod   kubernetes   prod       liruilong-prod

Permission tests:

┌──[root@liruilongs.github.io]-[~]
└─$ kubectl get pods
No resources found in liruilong-prod namespace.
┌──[root@liruilongs.github.io]-[~]
└─$ kubectl get pods -A
Error from server (Forbidden): pods is forbidden: User "prod" cannot list resource "pods" in API group "" at the cluster scope
┌──[root@liruilongs.github.io]-[~]
└─$ kubectl get deployments.apps
Error from server (Forbidden): deployments.apps is forbidden: User "prod" cannot list resource "deployments" in API group "apps" in the namespace "liruilong-prod"
┌──[root@liruilongs.github.io]-[~]
└─$

Unified management of multiple clusters

Multiple clusters are naturally isolated from one another, so each workgroup can be assigned its own cluster. What remains is managing several clusters from a single console.

Assume we have one cluster for development, referred to here as cluster A:

┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl get nodes
NAME                         STATUS   ROLES                  AGE   VERSION
vms81.liruilongs.github.io   Ready    control-plane,master   23h   v1.21.1
vms82.liruilongs.github.io   Ready    <none>                 23h   v1.21.1
vms83.liruilongs.github.io   Ready    <none>                 23h   v1.21.1

Now create a second cluster for testing. For this demo it has one master node and one worker node; call it cluster B:

[root@vms91 ~]# kubectl get nodes
NAME                         STATUS   ROLES                  AGE    VERSION
vms91.liruilongs.github.io   Ready    control-plane,master   139m   v1.21.1
vms92.liruilongs.github.io   Ready    <none>                 131m   v1.21.1

View the newly created cluster B's configuration:

[root@vms91 ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.26.91:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
[root@vms91 ~]#

To manage both clusters from one console and switch between them, build a new kubeconfig file that merges the A and B cluster config files into one. Each section of the kubeconfig (clusters, contexts, users) must be merged separately.

Use cluster A's master as the console. Back up cluster A's kubeconfig file, then modify it:

┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$pwd;ls
/root/.kube
cache  config

Merge the cluster entries first, then the context entries, and finally the user entries. After merging, no two entries within a section may share a name, so rename them as needed:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0.........0tCg==
    server: https://192.168.26.81:6443
  name: cluster1
- cluster:
    certificate-authority-data: LS0.........0tCg==
    server: https://192.168.26.91:6443
  name: cluster2
contexts:
- context:
    cluster: cluster1
    namespace: kube-public
    user: kubernetes-admin1
  name: context1
- context:
    cluster: cluster2
    namespace: kube-system
    user: kubernetes-admin2
  name: context2
current-context: context2
kind: Config
preferences: {}
users:
- name: kubernetes-admin1
  user:
    client-certificate-data: LS0.......0tCg==
    client-key-data: LS0......LQo=
- name: kubernetes-admin2
  user:
    client-certificate-data: LS0.......0tCg==
    client-key-data: LS0......0tCg==
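The hand-merge above can also be scripted. Below is a minimal Python sketch (assuming each input is a kubeadm-style kubeconfig already loaded into a dict, e.g. via PyYAML, with one cluster, one user, and one context each) that renames every entry with a numeric suffix so no section ends up with duplicate names:

```python
def merge_kubeconfigs(cfg_a, cfg_b):
    """Merge two kubeconfig dicts into one, renaming entries so that no two
    clusters, contexts, or users share a name (cluster1/cluster2 style,
    matching the merged file shown above)."""
    merged = {"apiVersion": "v1", "kind": "Config", "preferences": {},
              "clusters": [], "contexts": [], "users": []}
    for i, cfg in enumerate((cfg_a, cfg_b), start=1):
        suffix = str(i)
        for c in cfg["clusters"]:
            # Both kubeadm clusters are named "kubernetes"; rename on merge.
            merged["clusters"].append({"name": "cluster" + suffix,
                                       "cluster": c["cluster"]})
        for u in cfg["users"]:
            # "kubernetes-admin" becomes "kubernetes-admin1"/"kubernetes-admin2".
            merged["users"].append({"name": u["name"] + suffix,
                                    "user": u["user"]})
        for ctx in cfg["contexts"]:
            # Rewrite the context to point at the renamed cluster and user.
            merged["contexts"].append({
                "name": "context" + suffix,
                "context": {"cluster": "cluster" + suffix,
                            "user": ctx["context"]["user"] + suffix,
                            "namespace": ctx["context"].get("namespace",
                                                           "default")}})
    # Pick the first context as current; switch later with use-context.
    merged["current-context"] = merged["contexts"][0]["name"]
    return merged
```

kubectl can also perform the merge itself: KUBECONFIG=config-a:config-b kubectl config view --flatten > merged-config. Note that when two files define entries with the same name (as two kubeadm clusters both using kubernetes / kubernetes-admin do), the entry from the first file wins, so the renaming step is still required beforehand.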

Switching between clusters

With the merged kubeconfig in place, you can switch clusters via kubectl config use-context context2; under the hood this is simply a context switch.

View the current contexts:

┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO            NAMESPACE
*         context1   cluster1   kubernetes-admin1   kube-public
          context2   cluster2   kubernetes-admin2   kube-system

View the current cluster's nodes; this is cluster A:

┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl get nodes
NAME                         STATUS   ROLES                  AGE   VERSION
vms81.liruilongs.github.io   Ready    control-plane,master   23h   v1.21.1
vms82.liruilongs.github.io   Ready    <none>                 23h   v1.21.1
vms83.liruilongs.github.io   Ready    <none>                 23h   v1.21.1

Switch the context to switch clusters, then view the contexts again:

┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl config use-context  context2
Switched to context "context2".
┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO            NAMESPACE
          context1   cluster1   kubernetes-admin1   kube-public
*         context2   cluster2   kubernetes-admin2   kube-system

View cluster B's nodes:

┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$kubectl get nodes
NAME                         STATUS   ROLES                  AGE   VERSION
vms91.liruilongs.github.io   Ready    control-plane,master   8h    v1.21.1
vms92.liruilongs.github.io   Ready    <none>                 8h    v1.21.1
┌──[root@vms81.liruilongs.github.io]-[~/.kube]
└─$

References


https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests/

https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/

https://kubernetes.io/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-kubeconfig/

[Copyright Notice] This article is original content by a Huawei Cloud community user. When reproducing it, you must credit the source (Huawei Cloud community) along with the article link and author. If you find suspected plagiarism in this community, please report it with supporting evidence to cloudbbs@huaweicloud.com; once verified, the infringing content will be removed immediately.