Deploying a TiDB Cluster on Huawei Cloud with TiUP
Introduction
I recently took a course on distributed and cloud computing and successfully deployed a TiDB cluster on local VMs. Here I try to build a similar TiDB cluster on the Huawei Cloud platform, set up cluster monitoring, and run a database test.
Environment Preparation
Purchasing Cloud Servers
Log in to the Huawei Cloud console and choose the Elastic Cloud Server (ECS) product.
Basic configuration:
- Region: CN North-Beijing4
- Billing mode: pay-per-use
- AZ: randomly assigned
- CPU architecture: Kunpeng
- Flavor: kc1.large.2
- Image: public image, CentOS 7.6
- Security protection: none
- System disk: High I/O, 40 GB
- Quantity: 4
Purchase four servers in total: one PD server, one TiDB server, one TiFlash node, and one TiKV node.
Network configuration:
- VPC: vpc-default
- Security group: default
- Elastic public IP (EIP): buy now
- Line: static BGP
- Public bandwidth: billed by traffic
- Bandwidth: 100 Mbit/s
- Release behavior: release with the instance
Advanced configuration:
Pick any instance name, set a password, and leave everything else at its default.
Then confirm the configuration and complete the purchase.
Deployment Steps
Installing TiUP and Its Components
TiUP makes it easy to deploy a custom TiDB cluster. First, download TiUP and add it to your environment:
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
This command installs TiUP under the $HOME/.tiup directory; components installed later, and the data they generate at runtime, are also stored there.
Reload the global environment variables:
source .bash_profile
Check that the installation succeeded:
which tiup
Install and update the cluster component:
tiup cluster
tiup update --self && tiup update cluster
If it succeeds, it prints Update successfully!, as shown in the figure.
Editing the Configuration File
Create a file named topology.yaml and fill it with the following content:
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
# # The user who runs the tidb cluster.
user: "tidb"
# # group is used to specify the group name the user belong to if it's not the same as user.
# group: "tidb"
# # SSH port of servers in the managed cluster.
ssh_port: 22
# # Storage directory for cluster deployment files, startup scripts, and configuration files.
deploy_dir: "/tidb-deploy"
# # TiDB Cluster data storage directory
data_dir: "/tidb-data"
# # Supported values: "amd64", "arm64" (default: "amd64")
arch: "arm64"
# # Resource Control is used to limit the resource of an instance.
# # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html
# # Supports using instance-level `resource_control` to override global `resource_control`.
# resource_control:
# # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#MemoryLimit=bytes
# memory_limit: "2G"
# # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#CPUQuota=
# # The percentage specifies how much CPU time the unit shall get at maximum, relative to the total CPU time available on one CPU. Use values > 100% for allotting CPU time on more than one CPU.
# # Example: CPUQuota=200% ensures that the executed processes will never get more than two CPU time.
# cpu_quota: "200%"
# # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#IOReadBandwidthMax=device%20bytes
# io_read_bandwidth_max: "/dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0 100M"
# io_write_bandwidth_max: "/dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0 100M"
# # Monitored variables are applied to all the machines.
monitored:
# # The communication port for reporting system information of each node in the TiDB cluster.
node_exporter_port: 9100
# # Blackbox_exporter communication port, used for TiDB cluster port monitoring.
blackbox_exporter_port: 9115
# # Storage directory for deployment files, startup scripts, and configuration files of monitoring components.
# deploy_dir: "/tidb-deploy/monitored-9100"
# # Data storage directory of monitoring components.
# data_dir: "/tidb-data/monitored-9100"
# # Log storage directory of the monitoring component.
# log_dir: "/tidb-deploy/monitored-9100/log"
# # Server configs are used to specify the runtime configuration of TiDB components.
# # All configuration items can be found in TiDB docs:
# # - TiDB: https://pingcap.com/docs/stable/reference/configuration/tidb-server/configuration-file/
# # - TiKV: https://pingcap.com/docs/stable/reference/configuration/tikv-server/configuration-file/
# # - PD: https://pingcap.com/docs/stable/reference/configuration/pd-server/configuration-file/
# # - TiFlash: https://docs.pingcap.com/tidb/stable/tiflash-configuration
# #
# # All configuration items use points to represent the hierarchy, e.g:
# # readpool.storage.use-unified-pool
# # ^ ^
# # - example: https://github.com/pingcap/tiup/blob/master/examples/topology.example.yaml.
# # You can overwrite this configuration via the instance-level `config` field.
# server_configs:
# tidb:
# tikv:
# pd:
# tiflash:
# tiflash-learner:
# # Server configs are used to specify the configuration of PD Servers.
pd_servers:
# # The ip address of the PD Server.
- host: 192.168.0.21
# # SSH port of the server.
# ssh_port: 22
# # PD Server name
# name: "pd-1"
# # communication port for TiDB Servers to connect.
# client_port: 2379
# # Communication port among PD Server nodes.
# peer_port: 2380
# # PD Server deployment file, startup script, configuration file storage directory.
# deploy_dir: "/tidb-deploy/pd-2379"
# # PD Server data storage directory.
# data_dir: "/tidb-data/pd-2379"
# # PD Server log file storage directory.
# log_dir: "/tidb-deploy/pd-2379/log"
# # numa node bindings.
# numa_node: "0,1"
# # The following configs are used to overwrite the `server_configs.pd` values.
# config:
# schedule.max-merge-region-size: 20
# schedule.max-merge-region-keys: 200000
# # Server configs are used to specify the configuration of TiDB Servers.
tidb_servers:
# # The ip address of the TiDB Server.
- host: 192.168.0.4
# # SSH port of the server.
# ssh_port: 22
# # The port for clients to access the TiDB cluster.
# port: 4000
# # TiDB Server status API port.
# status_port: 10080
# # TiDB Server deployment file, startup script, configuration file storage directory.
# deploy_dir: "/tidb-deploy/tidb-4000"
# # TiDB Server log file storage directory.
# log_dir: "/tidb-deploy/tidb-4000/log"
# # Server configs are used to specify the configuration of TiKV Servers.
tikv_servers:
# # The ip address of the TiKV Server.
- host: 192.168.0.228
# # SSH port of the server.
# ssh_port: 22
# # TiKV Server communication port.
# port: 20160
# # TiKV Server status API port.
# status_port: 20180
# # TiKV Server deployment file, startup script, configuration file storage directory.
# deploy_dir: "/tidb-deploy/tikv-20160"
# # TiKV Server data storage directory.
# data_dir: "/tidb-data/tikv-20160"
# # TiKV Server log file storage directory.
# log_dir: "/tidb-deploy/tikv-20160/log"
# # The following configs are used to overwrite the `server_configs.tikv` values.
# config:
# log.level: warn
# # Server configs are used to specify the configuration of TiFlash Servers.
tiflash_servers:
# # The ip address of the TiFlash Server.
- host: 192.168.0.175
# # SSH port of the server.
# ssh_port: 22
# # TiFlash TCP Service port.
# tcp_port: 9000
# # TiFlash raft service and coprocessor service listening address.
# flash_service_port: 3930
# # TiFlash Proxy service port.
# flash_proxy_port: 20170
# # TiFlash Proxy metrics port.
# flash_proxy_status_port: 20292
# # TiFlash metrics port.
# metrics_port: 8234
# # TiFlash Server deployment file, startup script, configuration file storage directory.
# deploy_dir: /tidb-deploy/tiflash-9000
## With cluster version >= v4.0.9 and you want to deploy a multi-disk TiFlash node, it is recommended to
## check config.storage.* for details. The data_dir will be ignored if you defined those configurations.
## Setting data_dir to a ','-joined string is still supported but deprecated.
## Check https://docs.pingcap.com/tidb/stable/tiflash-configuration#multi-disk-deployment for more details.
# # TiFlash Server data storage directory.
# data_dir: /tidb-data/tiflash-9000
# # TiFlash Server log file storage directory.
# log_dir: /tidb-deploy/tiflash-9000/log
# # Server configs are used to specify the configuration of Prometheus Server.
monitoring_servers:
# # The ip address of the Monitoring Server.
- host: 192.168.0.21
# # SSH port of the server.
# ssh_port: 22
# # Prometheus Service communication port.
# port: 9090
# # ng-monitoring servive communication port
# ng_port: 12020
# # Prometheus deployment file, startup script, configuration file storage directory.
# deploy_dir: "/tidb-deploy/prometheus-8249"
# # Prometheus data storage directory.
# data_dir: "/tidb-data/prometheus-8249"
# # Prometheus log file storage directory.
# log_dir: "/tidb-deploy/prometheus-8249/log"
# # Server configs are used to specify the configuration of Grafana Servers.
grafana_servers:
# # The ip address of the Grafana Server.
- host: 192.168.0.21
# # Grafana web port (browser access)
# port: 3000
# # Grafana deployment file, startup script, configuration file storage directory.
# deploy_dir: /tidb-deploy/grafana-3000
# # Server configs are used to specify the configuration of Alertmanager Servers.
alertmanager_servers:
# # The ip address of the Alertmanager Server.
- host: 192.168.0.21
# # SSH port of the server.
# ssh_port: 22
# # Alertmanager web service port.
# web_port: 9093
# # Alertmanager communication port.
# cluster_port: 9094
# # Alertmanager deployment file, startup script, configuration file storage directory.
# deploy_dir: "/tidb-deploy/alertmanager-9093"
# # Alertmanager data storage directory.
# data_dir: "/tidb-data/alertmanager-9093"
# # Alertmanager log file storage directory.
# log_dir: "/tidb-deploy/alertmanager-9093/log"
The host fields under pd_servers, tidb_servers, tikv_servers, and tiflash_servers must be changed to your own servers' private IP addresses; note that in this topology the monitoring components (Prometheus, Grafana, Alertmanager) also live on the PD server. Because the Kunpeng instances use ARM CPUs, arch is set to "arm64" rather than the default "amd64". Before deploying, you can optionally pre-check the target environment as sketched below.
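The following is a minimal sketch of TiUP's built-in environment check; it verifies system parameters, dependencies, and disk settings on the target machines, and the --apply variant tries to repair some of the reported issues automatically:
# Check whether the target machines meet TiDB's deployment requirements
tiup cluster check ./topology.yaml --user root -p
# Optionally let TiUP try to fix some of the reported problems (e.g. system parameters)
tiup cluster check ./topology.yaml --apply --user root -p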
Checking Available Versions and Deploying the Cluster
The following command lists the available TiDB versions:
tiup list tidb
Here we choose v7.1.0 and start the deployment:
tiup cluster deploy tidb-test v7.1.0 ./topology.yaml --user root -p
If there are no problems, just wait for the deployment to finish; at the end TiUP reports that the cluster was deployed successfully.
Start the cluster:
tiup cluster start tidb-test
Check the cluster status:
tiup cluster display tidb-test
Overall the deployment went smoothly, without any of the problems encountered on the local VMs.
Visit http://{pd-server-host}:2379/dashboard to open the TiDB Dashboard, which shows status monitoring similar to the figure below. The metrics can also be viewed in Grafana (port 3000), which is not demonstrated here.
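As a quick command-line alternative, the cluster state can also be inspected through PD's HTTP API; this is a sketch using two standard PD v1 endpoints (assuming port 2379 is reachable from where you run it):
# List the TiKV/TiFlash stores registered with PD
curl http://{pd-server-host}:2379/pd/api/v1/stores
# Show the PD cluster members
curl http://{pd-server-host}:2379/pd/api/v1/members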
Testing the TiDB Cluster (MySQL)
TiDB is highly compatible with the MySQL protocol, so you can connect to it with an ordinary MySQL client for testing, for example:
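A minimal sketch, assuming a MySQL client is installed on the machine you connect from (by default TiDB listens on port 4000 and the root user has an empty password):
# Connect to the TiDB server with a standard MySQL client
mysql -h {tidb-server-host} -P 4000 -u root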
The benchmark is run with sysbench against the TiDB MySQL endpoint; a sketch of the commands follows.
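This is an illustrative sysbench oltp_read_write run, not the original script; the table count, table size, thread count, and duration are placeholder assumptions:
# Create the test database first, e.g. via the MySQL client: CREATE DATABASE sbtest;
# Prepare the test tables and data
sysbench oltp_read_write --mysql-host={tidb-server-host} --mysql-port=4000 --mysql-user=root --mysql-db=sbtest --tables=4 --table-size=10000 prepare
# Run the read/write workload with 8 threads for 60 seconds
sysbench oltp_read_write --mysql-host={tidb-server-host} --mysql-port=4000 --mysql-user=root --mysql-db=sbtest --tables=4 --table-size=10000 --threads=8 --time=60 run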
The results themselves are not the focus here, so they are not shown.
Summary
This completes the deployment of a TiDB cluster on the Huawei Cloud platform. The cluster can be scaled out as needed (a sketch follows), and more complex exploration and experiments can be carried out on it later.
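As a rough sketch of what scaling out might look like later, assuming an additional ECS is purchased for a new TiKV node (the IP below is a placeholder):
# scale-out.yaml describes only the node(s) to be added (placeholder IP)
cat > scale-out.yaml <<EOF
tikv_servers:
  - host: 192.168.0.x
EOF
# Add the new node to the running cluster
tiup cluster scale-out tidb-test scale-out.yaml --user root -p
# Confirm that the new node has joined
tiup cluster display tidb-test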