Monitoring a Kubernetes Cluster with BPF (getting to know the k8s BPF tool kubectl-trace)
Preface
- Ran into this while studying; organizing and sharing it here. The post covers:
- Installing kubectl-trace and how to use it on nodes and in containers
- Issues to watch out for: the Job completing in a flash, and fixing a trace stuck in Pending
- Corrections are welcome where my understanding falls short
"Don't dwell too much on the present, and don't worry too much about the future; once you have been through certain things, the scenery in front of you is no longer what it used to be." — Haruki Murakami
Installing kubectl-trace
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$curl -L -o kubectl-trace.tar.gz https://github.com/iovisor/kubectl-trace/releases/download/v0.1.0-rc.1/kubectl-trace_0.1.0-rc.1_linux_amd64.tar.gz
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$pwd
/root/ansible/trace
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$ls
kubectl-trace.tar.gz
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$tar -xvf kubectl-trace.tar.gz
LICENSE
README.md
kubectl-trace
trace-runner
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$mv kubectl-trace /usr/local/bin/kubectl-trace
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$
Because the binary sits on the PATH and is named kubectl-trace, kubectl picks it up as a plugin. Check the version to confirm the installation succeeded:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$kubectl trace version
git commit: d34d1d586b110af718aeadc9f3213c78e543a961
build date: 2019-09-19 23:00:13 +0800 CST
How to use it
Let's first look at the help text:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$kubectl trace run --help
Execute a bpftrace program on resources
Usage:
trace run (POD | TYPE/NAME) [-c CONTAINER] [--attach] [flags]
Examples:
# Count system calls using tracepoints on a specific node
kubectl trace run node/kubernetes-node-emt8.c.myproject.internal -e 'kprobe:do_sys_open { printf("%s: %s\n", comm, str(arg1)) }'
# Execute a bpftrace program from file on a specific node
kubectl trace run node/kubernetes-node-emt8.c.myproject.internal -f read.bt
# Run an bpftrace inline program on a pod container
kubectl trace run pod/nginx -c nginx -e "tracepoint:syscalls:sys_enter_* { @[probe] = count(); }"
kubectl trace run pod/nginx nginx -e "tracepoint:syscalls:sys_enter_* { @[probe] = count(); }"
# Run a bpftrace inline program on a pod container with a custom image for the init container responsible to fetch linux headers
kubectl trace run pod/nginx nginx -e "tracepoint:syscalls:sys_enter_* { @[probe] = count(); } --init-imagename=quay.io/custom-init-image-name --fetch-headers"
# Run a bpftrace inline program on a pod container with a custom image for the bpftrace container that will run your program in the cluster
kubectl trace run pod/nginx nginx -e "tracepoint:syscalls:sys_enter_* { @[probe] = count(); } --imagename=quay.io/custom-bpftrace-image-name"
...............
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$
Using it on a node
Before running anything, some preparation may be needed: pull the required images on each node in advance.
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible k8s_node -m shell -a "docker pull quay.io/iovisor/kubectl-trace-bpftrace:latest" -i host.yaml
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible k8s_node -m shell -a "docker pull quay.io/iovisor/kubectl-trace-init:latest" -i host.yaml
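To confirm the images actually landed on every node, a quick check against the same inventory (a sketch, reusing the host.yaml above):

# list the kubectl-trace images on every worker node
ansible k8s_node -m shell -a "docker images | grep kubectl-trace" -i host.yaml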
"Count system calls using tracepoints on a specific node" (the demo from the help text; despite its wording, this one is actually dynamic tracing):
kubectl trace run node/kubernetes-node-emt8.c.myproject.internal -e 'kprobe:do_sys_open { printf("%s: %s\n", comm, str(arg1)) }'
kprobe:do_sys_open is a bpftrace probe that traces do_sys_open, a kernel function called when a file is opened. A kprobe is a probe type that fires at the entry of a kernel function and belongs to dynamic tracing; a tracepoint, by contrast, is a static instrumentation point in the kernel and belongs to static tracing. The sketch below shows the same idea expressed both ways.
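A minimal sketch of the distinction, written as plain bpftrace one-liners you could run on a node that has bpftrace installed (kprobe targets are internal kernel functions and may change between kernel versions; tracepoints are stable instrumentation points):

# dynamic tracing: hook the entry of the kernel function do_sys_open
bpftrace -e 'kprobe:do_sys_open { printf("%s: %s\n", comm, str(arg1)) }'
# static tracing: hook the sys_enter_openat tracepoint and read its typed args
bpftrace -e 'tracepoint:syscalls:sys_enter_openat { printf("%s: %s\n", comm, str(args->filename)) }'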
An actual dynamic-tracing demo
Run it inline, passing the bpftrace program as a string:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$kubectl trace run node/vms103.liruilongs.github.io -e 'kprobe:do_sys_open { printf("%s: %s\n", comm, str(arg1)) }'
trace a659c18c-e50c-11ee-ba0d-000c290e5d5f created
This starts a Job whose Pod runs on the corresponding node:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$kubectl get jobs
NAME COMPLETIONS DURATION AGE
kubectl-trace-a659c18c-e50c-11ee-ba0d-000c290e5d5f 0/1 20s 20s
webhook-cert-setup 0/1 124d 124d
Get the Pod information:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$kubectl get jobs | grep a659c18c-e50c-11ee-ba0d-000c290e5d5f
kubectl-trace-a659c18c-e50c-11ee-ba0d-000c290e5d5f 0/1 29s 29s
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$kubectl get pods | grep a659c18c-e50c-11ee-ba0d-000c290e5d5f
kubectl-trace-a659c18c-e50c-11ee-ba0d-000c290e5d5f-7292n 1/1 Running 0 65s
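To double-check that the Pod landed on the target node, -o wide adds the node column (a quick sanity check):

kubectl get pods -o wide | grep a659c18c-e50c-11ee-ba0d-000c290e5d5f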
View the Job's logs, i.e., the trace output from the program above:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$kubectl logs kubectl-trace-a659c18c-e50c-11ee-ba0d-000c290e5d5f-7292n --tail=3
dockerd: /docker/data/overlay2/b212e7a41dd9417f549f234ce768a3fd0fc0e0baf
dockerd: /docker/data/image/overlay2/imagedb/content/sha256/1f0ce7a730e7
dockerd: /docker/data/image/overlay2/imagedb/metadata/sha256/1f0ce7a730e
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$
Static tracing demo
Static tracing here refers to the kernel tracepoints mentioned in the help text. Note that some machines may need the --fetch-headers flag, which makes the init container fetch the kernel headers first.
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl trace run vms105.liruilongs.github.io -e "tracepoint:syscalls:sys_enter_execve { @[comm] = count() }" --fetch-headers
trace 305a7d60-e5a0-11ee-ba88-000c290e5d5f created
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl get pods -w
NAME READY STATUS RESTARTS AGE
hello-webhook-deployment-7f599b95c4-hjx86 1/1 Running 1 (2d16h ago) 18d
kubectl-trace-305a7d60-e5a0-11ee-ba88-000c290e5d5f-ctfl6 0/1 Init:0/1 0 4s
kubectl-trace-305a7d60-e5a0-11ee-ba88-000c290e5d5f-ctfl6 0/1 PodInitializing 0 15s
kubectl-trace-305a7d60-e5a0-11ee-ba88-000c290e5d5f-ctfl6 1/1 Running 0 21s
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$
View the log output:
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl logs kubectl-trace-3d9981a0-e59b-11ee-a570-000c290e5d5f-wwprt
Defaulted container "kubectl-trace-3d9981a0-e59b-11ee-a570-000c290e5d5f" out of: kubectl-trace-3d9981a0-e59b-11ee-a570-000c290e5d5f, kubectl-trace-init (init)
if your program has maps to print, send a SIGINT using Ctrl-C, if you want to interrupt the execution send SIGINT two times
Attaching 1 probe...
Send the termination signal on the traced node (run this from another terminal):
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.105 -m shell -a "pgrep bpftrace" -i host.yaml
192.168.26.105 | CHANGED | rc=0 >>
37312
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.105 -m shell -a "kill -2 37312" -i host.yaml
192.168.26.105 | CHANGED | rc=0 >>
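If pkill is available on the node, the lookup and the signal can be collapsed into one step (a sketch; -2 sends SIGINT, same as above):

ansible 192.168.26.105 -m shell -a "pkill -2 bpftrace" -i host.yaml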
The corresponding trace output (execve counts keyed by process name):
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl logs kubectl-trace-305a7d60-e5a0-11ee-ba88-000c290e5d5f-ctfl6 -f
Defaulted container "kubectl-trace-305a7d60-e5a0-11ee-ba88-000c290e5d5f" out of: kubectl-trace-305a7d60-e5a0-11ee-ba88-000c290e5d5f, kubectl-trace-init (init)
if your program has maps to print, send a SIGINT using Ctrl-C, if you want to interrupt the execution send SIGINT two times
Attaching 1 probe...
@[bash]: 1
@[systemd-udevd]: 1
@[calico]: 2
@[containerd]: 6
@[nsenter]: 15
@[kubelet]: 18
@[kube-proxy]: 34
@[runc:[2:INIT]]: 207
@[exe]: 207
@[runc]: 208
@[containerd-shim]: 220
@[calico-node]: 303
@[cri-dockerd]: 313
@[dockerd]: 3907
The Pod's status is now Completed:
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-webhook-deployment-7f599b95c4-hjx86 1/1 Running 1 (2d16h ago) 18d
kubectl-trace-3d9981a0-e59b-11ee-a570-000c290e5d5f-wwprt 0/1 Completed 0 31m
Run from a file this time, using one of the scripts that ship with bpftrace:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$kubectl trace run vms105.liruilongs.github.io -f opensnoop.bt --fetch-headers
trace e0b7df1e-e5a1-11ee-898f-000c290e5d5f created
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$kubectl get pods | grep e0b7df1e-e5a1-11ee-898f-000c290e5d5f
kubectl-trace-e0b7df1e-e5a1-11ee-898f-000c290e5d5f-wv8w2 0/1 PodInitializing 0 12s
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$kubectl trace logs e0b7df1e-e5a1-11ee-898f-000c290e5d5f
if your program has maps to print, send a SIGINT using Ctrl-C, if you want to interrupt the execution send SIGINT two times
Attaching 6 probes...
Tracing open syscalls... Hit Ctrl-C to end.
PID COMM FD ERR PATH
1038 dockerd 133 0 /docker/data/image/overlay2/imagedb/content/sha256/4873874c08ef
1038 dockerd 133 0 /docker/data/image/overlay2/imagedb/content/sha256/4873874c08ef
1038 dockerd -1 2 /docker/data/image/overlay2/imagedb/metadata/sha256/4873874c08e
1316 kubelet 17 0 /sys/fs/cgroup/memory/system.slice/kubelet.service/memory.stat
1316 kubelet 19 0 /sys/fs/cgroup/memory/system.slice/kubelet.service/memory.usage
1316 kubelet 19 0 /sys/fs/cgroup/memory/system.slice/kubelet.service/memory.max_u
1316 kubelet 19 0 /sys/fs/cgroup/memory/system.slice/kubelet.service/memory.failc
1316 kubelet 19 0 /sys/fs/cgroup/memory/system.slice/kubelet.service/memory.limit
1316 kubelet 19 0 /sys/fs/cgroup/memory/system.slice/kubelet.service/memory.memsw
1316 kubelet 19 0 /sys/fs/cgroup/memory/system.slice/kubelet.service/memory.memsw
1316 kubelet 19 0 /sys/fs/cgroup/memory/system.slice/kubelet.service/memory.memsw
1316 kubelet 19 0 /sys/fs/cgroup/memory/system.slice/kubelet.service/memory.memsw
............................
bpftrace ships with opensnoop.bt. The tool traces both the entry and the exit of each open() call and prints the results as columns:
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$cat opensnoop.bt
#!/usr/bin/bpftrace
/*
* opensnoop Trace open() syscalls.
* For Linux, uses bpftrace and eBPF.
*
* Also a basic example of bpftrace.
*
* USAGE: opensnoop.bt
*
* This is a bpftrace version of the bcc tool of the same name.
*
* Copyright 2018 Netflix, Inc.
* Licensed under the Apache License, Version 2.0 (the "License")
*
* 08-Sep-2018 Brendan Gregg Created this.
*/
BEGIN
{
    printf("Tracing open syscalls... Hit Ctrl-C to end.\n");
    printf("%-6s %-16s %4s %3s %s\n", "PID", "COMM", "FD", "ERR", "PATH");
}

// entry probes: remember the filename argument, keyed by thread id
tracepoint:syscalls:sys_enter_open,
tracepoint:syscalls:sys_enter_openat
{
    @filename[tid] = args->filename;
}

// exit probes: only fire if the matching entry was seen on this thread
tracepoint:syscalls:sys_exit_open,
tracepoint:syscalls:sys_exit_openat
/@filename[tid]/
{
    $ret = args->ret;
    $fd = $ret >= 0 ? $ret : -1;
    $errno = $ret >= 0 ? 0 : - $ret;
    printf("%-6d %-16s %4d %3d %s\n", pid, comm, $fd, $errno,
        str(@filename[tid]));
    delete(@filename[tid]);
}

END
{
    clear(@filename);
}
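On a machine that has bpftrace installed, the same script can also be run directly, which is a handy way to sanity-check a program before handing it to kubectl-trace (a sketch):

bpftrace opensnoop.bt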
Using it in a container
Container tracing is not demonstrated here; the examples below are from the help text.
Run an inline bpftrace program on a Pod's container:
kubectl trace run pod/nginx -c nginx -e "tracepoint:syscalls:sys_enter_* { @[probe] = count(); }"
Run an inline bpftrace program on a Pod's container with a custom image for the bpftrace container:
kubectl trace run pod/nginx nginx -e "tracepoint:syscalls:sys_enter_* { @[probe] = count(); }" --imagename=quay.io/custom-bpftrace-image-name
Issues to watch out for
The Job completes in a flash
This happens when the kernel headers needed to compile/build the program are missing: the Pod started by the Job exits almost immediately.
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$kubectl trace run node/vms103.liruilongs.github.io -f opensnoop.bt
trace fc86c785-e513-11ee-8e04-000c290e5d5f created
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$kubectl get pods | grep fc86c785-e513-11ee-8e04-000c290e5d5f
kubectl-trace-fc86c785-e513-11ee-8e04-000c290e5d5f-g67gm 0/1 ContainerCreating 0 8s
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$kubectl get pods kubectl-trace-fc86c785-e513-11ee-8e04-000c290e5d5f-g67gm -w
NAME READY STATUS RESTARTS AGE
kubectl-trace-fc86c785-e513-11ee-8e04-000c290e5d5f-g67gm 0/1 Completed 0 19s
kubectl-trace-fc86c785-e513-11ee-8e04-000c290e5d5f-g67gm 0/1 Completed 0 20s
kubectl-trace-fc86c785-e513-11ee-8e04-000c290e5d5f-g67gm 0/1 Completed 0 21s
kubectl-trace-fc86c785-e513-11ee-8e04-000c290e5d5f-g67gm 0/1 Terminating 0 26s
kubectl-trace-fc86c785-e513-11ee-8e04-000c290e5d5f-g67gm 0/1 Terminating 0 26s
^C┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$kubectl logs kubectl-trace-fc86c785-e513-11ee-8e04-000c290e5d5f-g67gm
Error from server (NotFound): pods "kubectl-trace-fc86c785-e513-11ee-8e04-000c290e5d5f-g67gm" not found
Log output
if your program has maps to print, send a SIGINT using Ctrl-C, if you want to interrupt the execution send SIGINT two times
/bpftrace/include/clang_workarounds.h:14:10: fatal error: 'linux/types.h' file not found
exit status 1
Execution environment
┌──[root@vms100.liruilongs.github.io]-[~/ansible/trace]
└─$hostnamectl
Static hostname: vms100.liruilongs.github.io
Icon name: computer-vm
Chassis: vm
Machine ID: e93ae3f6cb354f3ba509eeb73568087e
Boot ID: 51ad5f1933914654affd2dcf9ebca862
Virtualization: vmware
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 5.4.266-1.el7.elrepo.x86_64
Architecture: x86-64
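Before reaching for the fix, it is worth checking whether the traced nodes actually have headers for their running kernel (a sketch; the path below is the usual CentOS/RHEL location and may differ on other distros):

# run against the nodes via the same inventory; a missing directory means missing headers
ansible k8s_node -m shell -a 'ls -d /lib/modules/$(uname -r)/build' -i host.yaml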
Fix: add --fetch-headers
As suggested in https://github.com/iovisor/kubectl-trace/issues/177:
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl trace run vms105.liruilongs.github.io -e "tracepoint:syscalls:sys_enter_execve { @[comm] = count() }" --fetch-headers
trace 3d9981a0-e59b-11ee-a570-000c290e5d5f created
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-webhook-deployment-7f599b95c4-hjx86 1/1 Running 1 (2d16h ago) 18d
kubectl-trace-3d9981a0-e59b-11ee-a570-000c290e5d5f-wwprt 0/1 Init:0/1 0 12s
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl get pods -w
NAME READY STATUS RESTARTS AGE
hello-webhook-deployment-7f599b95c4-hjx86 1/1 Running 1 (2d16h ago) 18d
kubectl-trace-3d9981a0-e59b-11ee-a570-000c290e5d5f-wwprt 0/1 Init:0/1 0 33s
kubectl-trace-3d9981a0-e59b-11ee-a570-000c290e5d5f-wwprt 0/1 Init:0/1 0 41s
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl get pods -w
NAME READY STATUS RESTARTS AGE
hello-webhook-deployment-7f599b95c4-hjx86 1/1 Running 1 (2d16h ago) 18d
kubectl-trace-3d9981a0-e59b-11ee-a570-000c290e5d5f-wwprt 0/1 Init:0/1 0 64s
┌──[root@vms100.liruilongs.github.io]-[~/ansible]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-webhook-deployment-7f599b95c4-hjx86 1/1 Running 1 (2d16h ago) 18d
kubectl-trace-3d9981a0-e59b-11ee-a570-000c290e5d5f-wwprt 1/1 Running 0 16m
Stuck in Pending
Note that when the analysis targets a master node, you need to remove that master's taint first; otherwise the trace Pod cannot be scheduled (see the sketch after the example below).
┌──[root@vms100.liruilongs.github.io]-[~]
└─$kubectl trace run vms102.liruilongs.github.io -e "tracepoint:syscalls:sys_enter_execve { @[comm] = count() }"
trace 4f30d986-e595-11ee-a5ef-000c290e5d5f created
┌──[root@vms100.liruilongs.github.io]-[~]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-webhook-deployment-7f599b95c4-hjx86 1/1 Running 1 (2d15h ago) 18d
kubectl-trace-4f30d986-e595-11ee-a5ef-000c290e5d5f-xw6qm 0/1 Pending 0 7s
┌──[root@vms100.liruilongs.github.io]-[~]
└─$kubectl describe pods kubectl-trace-4f30d986-e595-11ee-a5ef-000c290e5d5f-xw6qm
..................
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 23s default-scheduler 0/6 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 5 node(s) didn't match Pod's node affinity/selector. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
┌──[root@vms100.liruilongs.github.io]-[~]
└─$kubectl trace delete 4f30d986-e595-11ee-a5ef-000c290e5d5f
trace job kubectl-trace-4f30d986-e595-11ee-a5ef-000c290e5d5f deleted
trace configuration kubectl-trace-4f30d986-e595-11ee-a5ef-000c290e5d5f deleted
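A sketch of clearing the taint so the trace Pod can be scheduled onto the master, using the node name and taint key from the event above (the NoSchedule effect when re-adding is an assumption matching the usual kubeadm default; restore the taint once the trace is finished):

# remove the control-plane taint (the trailing '-' deletes it)
kubectl taint nodes vms102.liruilongs.github.io node-role.kubernetes.io/control-plane-
# put it back after tracing
kubectl taint nodes vms102.liruilongs.github.io node-role.kubernetes.io/control-plane:NoSchedule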
In real production setups you could consider running this as a sidecar to watch all containers in a Pod. That is not covered here; interested readers can dig into it ^_^
References for parts of this post
© The linked/referenced content remains the copyright of its original authors; please let me know if anything infringes :)
https://github.com/iovisor/kubectl-trace
https://medium.com/@calavera/spy-on-your-kubernetes-cluster-with-bpf-b09032bd1cdc
https://github.com/iovisor/kubectl-trace/issues/177
© 2018-2024 liruilonger@gmail.com, All rights reserved. Attribution-NonCommercial-ShareAlike (CC BY-NC-SA 4.0)