K8s 1.33 原地扩缩容特性
[Abstract] Kubernetes 1.33 introduces in-place pod resize, which lets you adjust a running Pod's CPU and memory settings without restarting its containers. The feature is beta and enabled by default. Resources can be modified dynamically with `kubectl patch`; the example below creates a resource-watcher Pod, resizes its CPU and memory limits without interruption, and verifies through the container's logs that the changes took effect. This significantly improves the flexibility and efficiency of resource management.
Background
Once a Pod's containers have resource limits configured, earlier Kubernetes versions required a Pod restart for any change to those limits to take effect. Starting with Kubernetes 1.33, the CPU and memory configuration of a running Pod can be adjusted directly, without restarting its containers.
Note that this feature is currently beta and enabled by default in the cluster, so it can be used right away.
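In-place resize is controlled by the `InPlacePodVerticalScaling` feature gate, which is beta and on by default in v1.33. On a cluster where it has been disabled, it would need to be switched back on explicitly; the flag below is illustrative of how that gate is named (the same gate applies to both the kube-apiserver and the kubelet):

```
# Illustrative only: in v1.33 this gate is already enabled by default.
--feature-gates=InPlacePodVerticalScaling=true
```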
Demo
Create a resource-watcher Pod
[root@k8s-master01 ~]# vim resize.yaml
[root@k8s-master01 ~]# cat resize.yaml
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo
spec:
  containers:
  - name: resource-watcher
    image: ubuntu:22.04
    command:
    - "/bin/bash"
    - "-c"
    - |
      apt-get update && apt-get install -y procps bc
      echo "=== Pod Started: $(date) ==="
      # Functions to read container resource limits
      get_cpu_limit() {
        if [ -f /sys/fs/cgroup/cpu.max ]; then
          # cgroup v2
          local cpu_data=$(cat /sys/fs/cgroup/cpu.max)
          local quota=$(echo $cpu_data | awk '{print $1}')
          local period=$(echo $cpu_data | awk '{print $2}')
          if [ "$quota" = "max" ]; then
            echo "unlimited"
          else
            echo "$(echo "scale=3; $quota / $period" | bc) cores"
          fi
        else
          # cgroup v1
          local quota=$(cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us)
          local period=$(cat /sys/fs/cgroup/cpu/cpu.cfs_period_us)
          if [ "$quota" = "-1" ]; then
            echo "unlimited"
          else
            echo "$(echo "scale=3; $quota / $period" | bc) cores"
          fi
        fi
      }
      get_memory_limit() {
        if [ -f /sys/fs/cgroup/memory.max ]; then
          # cgroup v2
          local mem=$(cat /sys/fs/cgroup/memory.max)
          if [ "$mem" = "max" ]; then
            echo "unlimited"
          else
            echo "$((mem / 1048576)) MiB"
          fi
        else
          # cgroup v1
          local mem=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
          echo "$((mem / 1048576)) MiB"
        fi
      }
      # Print resource info every 5 seconds
      while true; do
        echo "---------- Resource Check: $(date) ----------"
        echo "CPU limit: $(get_cpu_limit)"
        echo "Memory limit: $(get_memory_limit)"
        echo "Available memory: $(free -h | grep Mem | awk '{print $7}')"
        sleep 5
      done
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired
    - resourceName: memory
      restartPolicy: NotRequired
    resources:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "128Mi"
        cpu: "100m"
[root@k8s-master01 ~]#
[root@k8s-master01 ~]# kubectl apply -f resize.yaml
pod/resize-demo created
[root@k8s-master01 ~]#
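The `resizePolicy` in the manifest above tells the kubelet that both CPU and memory may be changed in place (`NotRequired`). For workloads that cannot pick up a new limit while running (common for memory, since many runtimes size their heaps at startup), `RestartContainer` can be used instead; a hypothetical variant:

```yaml
# Variant: resize CPU in place, but restart the container
# whenever its memory resources are changed.
resizePolicy:
- resourceName: cpu
  restartPolicy: NotRequired
- resourceName: memory
  restartPolicy: RestartContainer
```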
Check the Pod's initial state
[root@k8s-master01 ~]# kubectl describe pod resize-demo | grep -A5 Limits:
Limits:
cpu: 100m
memory: 128Mi
Requests:
cpu: 100m
memory: 128Mi
Resize CPU in place
# Apply the resize
kubectl patch pod resize-demo --subresource resize --patch \
'{"spec":{"containers":[{"name":"resource-watcher", "resources":{"requests":{"cpu":"200m"}, "limits":{"cpu":"200m"}}}]}}'
# Check the resize status
[root@k8s-master01 ~]# kubectl get pod resize-demo -o yaml | grep resources -A8
spec:
containers:
--
resources:
limits:
cpu: 200m
memory: 128Mi
requests:
cpu: 200m
memory: 128Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
--
resources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 100m
memory: 128Mi
restartCount: 0
started: true
[root@k8s-master01 ~]#
Check the current resource settings
[root@k8s-master01 ~]# kubectl describe pod resize-demo | grep -A8 Limits:
Limits:
cpu: 200m
memory: 128Mi
Requests:
cpu: 200m
memory: 128Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h7cpt (ro)
Check the container logs
Tail the logs (the tail was started before the patch) to confirm the resource change took effect:
[root@k8s-master01 ~]# kubectl logs -f resize-demo
---------- Resource Check: Fri Jun 6 11:59:51 UTC 2025 ----------
CPU limit: .100 cores
Memory limit: 128 MiB
Available memory: 1.9Gi
---------- Resource Check: Fri Jun 6 11:59:56 UTC 2025 ----------
CPU limit: .200 cores
Memory limit: 128 MiB
Available memory: 1.9Gi
---------- Resource Check: Fri Jun 6 12:00:01 UTC 2025 ----------
CPU limit: .200 cores
Memory limit: 128 MiB
Available memory: 1.9Gi
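The `.200 cores` figure in the log comes from the quota/period arithmetic in the watcher script: for a 200m CPU limit, cgroup v2 `cpu.max` holds a quota of 20000µs per 100000µs period (values here illustrate that mapping), and dividing the two recovers the core count. The same computation with awk instead of bc:

```shell
# Reproduce the watcher's cores computation locally.
# A 200m CPU limit maps to quota=20000 (us) per period=100000 (us).
quota=20000
period=100000
awk -v q="$quota" -v p="$period" 'BEGIN { printf "%.3f cores\n", q / p }'
```

(bc with `scale=3` prints the same value as `.200`, without the leading zero.)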
Resize memory
# Apply the resize
kubectl patch pod resize-demo --subresource resize --patch \
'{"spec":{"containers":[{"name":"resource-watcher", "resources":{"requests":{"memory":"256Mi"}, "limits":{"memory":"256Mi"}}}]}}'
# Check the resize status
[root@k8s-master01 ~]# kubectl describe pod resize-demo | grep -A8 Limits:
Limits:
cpu: 200m
memory: 256Mi
Requests:
cpu: 200m
memory: 256Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h7cpt (ro)
[root@k8s-master01 ~]#
Check the container logs
---------- Resource Check: Fri Jun 6 12:07:20 UTC 2025 ----------
CPU limit: .200 cores
Memory limit: 128 MiB
Available memory: 1.9Gi
---------- Resource Check: Fri Jun 6 12:07:25 UTC 2025 ----------
CPU limit: .200 cores
Memory limit: 256 MiB
Available memory: 1.9Gi
---------- Resource Check: Fri Jun 6 12:07:30 UTC 2025 ----------
CPU limit: .200 cores
Memory limit: 256 MiB
Available memory: 1.9Gi
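The `256 MiB` reading is the script's byte-to-MiB conversion: after the patch, cgroup v2 `memory.max` contains the new limit in bytes (268435456 for 256Mi), which the script reduces with shell integer arithmetic:

```shell
# Reproduce the watcher's MiB computation: 256Mi = 268435456 bytes.
mem=268435456
echo "$((mem / 1048576)) MiB"
```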
About
Searchable across the web as 《小陈运维》 on CSDN, GitHub, Zhihu, OSCHINA, SegmentFault, Juejin, Jianshu, Huawei Cloud, Alibaba Cloud, Tencent Cloud, Bilibili, Toutiao, Sina Weibo, and a personal blog.
Articles are published mainly on the WeChat official account 《Linux运维交流社区》.
[Disclaimer] This content comes from a Huawei Cloud developer community blogger and does not represent the views or positions of Huawei Cloud or its developer community. Reposts must credit the source (Huawei Cloud community) along with the article link and author; otherwise the author and the community reserve the right to pursue liability. If you find suspected plagiarism in this community, please report it with supporting evidence to cloudbbs@huaweicloud.com; confirmed infringing content will be removed immediately.