金鱼哥's RHCA Memoirs: RH236 Client Configuration -- Mounting via glusterfs (native client)
This begins Chapter 5, which covers how clients mount and use GlusterFS volumes. GlusterFS supports three client types: the Gluster Native Client (native mount), NFS, and CIFS.
🎹 About me: Hi everyone, I am 金鱼哥, a CSDN Rising Star creator in the operations field, a Huawei Cloud Expert, and an Alibaba Cloud community Expert Blogger.
📚 Credentials: CCNA, HCNP, CSNA (Network Analyst), China Soft Exam junior and intermediate Network Engineer, RHCSA, RHCE, RHCA, RHCI, ITIL 😜
💬 Motto: Hard work does not guarantee success, but success requires hard work 🔥
The Gluster Native Client provides high concurrency, good performance, and transparent failover on GNU/Linux clients. Gluster volumes can also be accessed over NFS v3; the NFS implementations in GNU/Linux and other operating systems such as FreeBSD, Mac OS X, Windows 7 (Professional and up), and Windows Server 2003 have been tested extensively, and other NFS client implementations can work with the Gluster NFS server as well. When using Microsoft Windows or Samba clients, volumes can be accessed over CIFS; for this access method, the Samba packages must be installed on the client.
Summary: GlusterFS supports three client types: Gluster Native Client, NFS, and CIFS. The Gluster Native Client is a FUSE-based client running in user space; it is the officially recommended method and gives access to the full GlusterFS feature set.
Mounting via glusterfs (native client)
The recommended way to access Red Hat Gluster Storage volumes is the native client. The native client is built on FUSE (Filesystem in Userspace) technology and supports POSIX ACLs and automatic failover.
Unlike the other options for mounting Red Hat Gluster Storage volumes, the native client does not depend on any single host being available. During the mount, information about the volume to be mounted is retrieved from the specified server or from any of the specified backup servers, and the native client then communicates directly with the bricks that make up the volume. Compared with the other mount options, this allows higher throughput and better reliability.
Important: all clients should run the same version of the native client. During an upgrade, it is recommended to upgrade all servers before upgrading the clients.
If the server side runs glusterfs 3.1.2, the clients must run RHEL 7.2 or RHEL 6.7.
# yum install -y glusterfs glusterfs-fuse
Note: on RHEL 5.x systems, the FUSE kernel module must be loaded first: modprobe fuse
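A quick way to check whether the module is already loaded before (or after) running modprobe (a minimal check; the exact output depends on the kernel):
# lsmod | grep fuse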
# mount -t glusterfs node1:/vol1 /mnt/gluster
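Once the mount returns, it can be verified right away (a small sketch assuming the /mnt/gluster mount point used above; the filesystem type reported should be fuse.glusterfs):
# mount | grep gluster
# df -Th /mnt/gluster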
Question:
What happens if the host specified in the mount command goes down after the volume is mounted?
With a glusterfs mount the client automatically fails over to another host by default, although the mount point is briefly unavailable during the switch. But what if the specified host is already down at mount time? In that case backup volfile servers must be specified:
mount -t glusterfs -obackup-volfile-servers=node2:node3:node4:node5 \
node1:/vol1 /mnt/gluster
Note: the mount here goes through node1; if node1 is down, one of the servers listed after backup-volfile-servers is used for the mount instead. If the cluster has only two nodes, use -obackupvolfile-server=SERVER to name the other node. For the full list of mount options, see # man 8 mount.glusterfs
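For example, in a hypothetical two-node cluster consisting of node1 and node2, the command above would shrink to (a sketch, node names assumed):
# mount -t glusterfs -obackupvolfile-server=node2 node1:/vol1 /mnt/gluster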
Configure automatic mounting:
# echo "node1:/vol1 /mnt/vol1 glusterfs defaults,_netdev 0 0" >> /etc/fstab
# mount -a
# If automatic mounting with backup volfile servers is wanted, the trailing fields of the fstab entry are:
glusterfs _netdev,backup-volfile-servers=SERVER:SERVER 0 0
Note: for mounting at boot, the _netdev option must be added.
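Putting both pieces together, a complete fstab entry with backup volfile servers would look roughly like this (a sketch reusing the example hostnames node1 through node4 from above):
# echo "node1:/vol1 /mnt/vol1 glusterfs _netdev,backup-volfile-servers=node2:node3:node4 0 0" >> /etc/fstab
# mount -a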
Manual mount options:
When using the mount -t glusterfs command, the following options can be specified. Note that all options must be separated by commas.
backupvolfile-server=server-name # if this option is given when mounting the FUSE client and the first volfile server fails, the server named here is used as the volfile server to mount the client
backup-volfile-servers=SERVERLIST # same idea, but accepts a list of backup volfile servers
volfile-max-fetch-attempts=number of attempts # number of attempts made to fetch the volume file while mounting
log-level=loglevel # logging level
log-file=logfile # log file
transport=transport-type # transport protocol to use
direct-io-mode=[enable|disable] # enable or disable direct I/O mode
use-readdirp=[yes|no] # setting this to yes forces the use of readdirp mode in the FUSE kernel module
For example:
# mount -t glusterfs -obackupvolfile-server=volfile_server2,use-readdirp=no,volfile-max-fetch-attempts=2,log-level=WARNING,log-file=/var/log/gluster.log server1:/test-volume /mnt/glusterfs
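After mounting with options like these, whether they took effect can be checked against the mount table and the log file passed via log-file (paths taken from the example above; with log-level=WARNING only warnings and errors should show up in the log):
# mount | grep test-volume
# tail /var/log/gluster.log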
Lab exercise
[root@workstation ~]# lab native-client setup
Setting up for lab exercise work:
• Testing if all hosts are reachable.......................... SUCCESS
• Adding glusterfs to runtime firewall on servera............. SUCCESS
• Adding glusterfs to permanent firewall on servera........... SUCCESS
• Adding glusterfs to runtime firewall on serverb............. SUCCESS
• Adding glusterfs to permanent firewall on serverb........... SUCCESS
• Adding glusterfs to runtime firewall on serverc............. SUCCESS
• Adding glusterfs to permanent firewall on serverc........... SUCCESS
• Adding glusterfs to runtime firewall on serverd............. SUCCESS
• Adding glusterfs to permanent firewall on serverd........... SUCCESS
• Adding servera to trusted storage pool...................... SUCCESS
• Adding serverb to trusted storage pool...................... SUCCESS
• Adding serverc to trusted storage pool...................... SUCCESS
• Adding serverd to trusted storage pool...................... SUCCESS
• Ensuring thin LVM pool vg_bricks/thinpool exists on servera. SUCCESS
…………
1. Install the package
[root@workstation ~]# yum -y install glusterfs-fuse
Loaded plugins: langpacks, search-disabled-repos
Package glusterfs-fuse-3.7.1-16.el7.x86_64 already installed and latest version
Nothing to do
[root@workstation ~]# rpm -ql glusterfs-fuse
/etc/logrotate.d/glusterfs
/sbin/mount.glusterfs
/usr/bin/fusermount-glusterfs
/usr/lib64/glusterfs/3.7.1/xlator/mount/fuse.so
/usr/sbin/glusterfs
/usr/sbin/glusterfsd
2. Configure the mount
[root@workstation ~]# mkdir /mnt/custdata
[root@workstation ~]# echo "servera:/custdata /mnt/custdata glusterfs _netdev,backup-volfile-servers=serverb:serverc:serverd 0 0" >> /etc/fstab
[root@workstation ~]# mount -a
[root@workstation ~]# mount | grep custdata
servera:/custdata on /mnt/custdata type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@workstation ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/vda1 xfs 10G 3.0G 7.0G 31% /
devtmpfs devtmpfs 902M 0 902M 0% /dev
tmpfs tmpfs 920M 84K 920M 1% /dev/shm
tmpfs tmpfs 920M 17M 904M 2% /run
tmpfs tmpfs 920M 0 920M 0% /sys/fs/cgroup
tmpfs tmpfs 184M 16K 184M 1% /run/user/42
tmpfs tmpfs 184M 0 184M 0% /run/user/0
servera:/custdata fuse.glusterfs 4.0G 66M 4.0G 2% /mnt/custdata
3. Test writes
[root@workstation ~]# touch /mnt/custdata/file{00..39}
[root@workstation ~]# ll /mnt/custdata/file* | tail -5
-rw-r--r--. 1 root root 0 Nov 26 18:36 /mnt/custdata/file35
-rw-r--r--. 1 root root 0 Nov 26 18:36 /mnt/custdata/file36
-rw-r--r--. 1 root root 0 Nov 26 18:36 /mnt/custdata/file37
-rw-r--r--. 1 root root 0 Nov 26 18:36 /mnt/custdata/file38
-rw-r--r--. 1 root root 0 Nov 26 18:36 /mnt/custdata/file39
[root@workstation ~]# tail /var/log/glusterfs/mnt-custdata.log
[2020-11-26 10:36:14.575965] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 248: REMOVEXATTR() /file34 => -1 (No data available)
[2020-11-26 10:36:14.617213] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 255: REMOVEXATTR() /file35 => -1 (No data available)
[2020-11-26 10:36:14.660573] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 262: REMOVEXATTR() /file36 => -1 (No data available)
[2020-11-26 10:36:14.703933] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 269: REMOVEXATTR() /file37 => -1 (No data available)
[2020-11-26 10:36:14.746414] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 276: REMOVEXATTR() /file38 => -1 (No data available)
[2020-11-26 10:36:14.796054] W [fuse-bridge.c:1263:fuse_err_cbk] 0-glusterfs-fuse: 283: REMOVEXATTR() /file39 => -1 (No data available)
The message " [MSGID: 114031] [client-rpc-fops.c:1298:client3_3_removexattr_cbk] 0-custdata-client-1: remote operation failed [No data available]" repeated 22 times between [2020-11-26 10:36:13.162522] and [2020-11-26 10:36:14.743807]
The message " [MSGID: 114031] [client-rpc-fops.c:1298:client3_3_removexattr_cbk] 0-custdata-client-0: remote operation failed [No data available]" repeated 22 times between [2020-11-26 10:36:13.162603] and [2020-11-26 10:36:14.743835]
The message " [MSGID: 114031] [client-rpc-fops.c:1298:client3_3_removexattr_cbk] 0-custdata-client-2: remote operation failed [No data available]" repeated 16 times between [2020-11-26 10:36:13.072544] and [2020-11-26 10:36:14.791993]
The message " [MSGID: 114031] [client-rpc-fops.c:1298:client3_3_removexattr_cbk] 0-custdata-client-3: remote operation failed [No data available]" repeated 16 times between [2020-11-26 10:36:13.073358] and [2020-11-26 10:36:14.793395]
4. Check the connection (failover behavior)
[root@foundation0 ~]# rht-vmctl stop servera
Stopping servera..
[root@workstation ~]# tail -f /var/log/glusterfs/mnt-custdata.log
[2020-11-26 10:41:13.702202] W [socket.c:642:__socket_rwv] 0-glusterfs: readv on 172.25.250.10:24007 failed (No data available)
[2020-11-26 10:41:38.059754] W [socket.c:642:__socket_rwv] 0-custdata-client-0: readv on 172.25.250.10:49153 failed (Connection timed out)
[2020-11-26 10:41:38.060146] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-custdata-client-0: disconnected from custdata-client-0. Client process will keep trying to connect to glusterd until brick's port is available
[2020-11-26 10:42:06.885308] E [socket.c:2332:socket_connect_finish] 0-custdata-client-0: connection to 172.25.250.10:24007 failed (No route to host)
[2020-11-26 10:42:27.924908] E [socket.c:2332:socket_connect_finish] 0-glusterfs: connection to 172.25.250.10:24007 failed (No route to host)
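While servera is down, the mount should stay usable through the remaining bricks and the backup volfile servers. A quick check that is not part of the lab output (hypothetical commands against the existing mount point; some operations may block briefly until the client's ping timeout expires) would be:
[root@workstation ~]# ls /mnt/custdata | head -3
[root@workstation ~]# df -h /mnt/custdata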
[root@foundation0 ~]# rht-vmctl start servera
Starting servera.
[root@workstation ~]# tail -f /var/log/glusterfs/mnt-custdata.log
[2020-11-26 10:44:15.864677] I [glusterfsd-mgmt.c:1512:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2020-11-26 10:44:15.901944] E [MSGID: 114058] [client-handshake.c:1524:client_query_portmap_cbk] 0-custdata-client-0: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2020-11-26 10:44:15.902554] I [MSGID: 114018] [client.c:2042:client_rpc_notify] 0-custdata-client-0: disconnected from custdata-client-0. Client process will keep trying to connect to glusterd until brick's port is available
[2020-11-26 10:44:22.578448] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-custdata-client-0: changing port to 49153 (from 0)
[2020-11-26 10:44:22.591481] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-custdata-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2020-11-26 10:44:22.593683] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-custdata-client-0: Connected to custdata-client-0, attached to remote volume '/bricks/brick-a2/brick'.
[2020-11-26 10:44:22.593933] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-custdata-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2020-11-26 10:44:22.596271] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-custdata-client-0: Server lk version = 1
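Once servera is back, the log above shows client-0 changing its port and reconnecting to its brick on its own. This can also be confirmed from any server in the trusted storage pool (a hedged example; run on servera, output not shown):
[root@servera ~]# gluster volume status custdata
The bricks hosted on servera should be reported as Online again.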
5. Grading script
[root@workstation ~]# lab native-client grade
Summary
- Use mount -t glusterfs to mount the volume directly.
- Be familiar with the common volume mount options.
The RHCA certification requires studying for and passing five exams, which takes quite a bit of time to learn and prepare for, so keep at it 🤪.
That is 金鱼哥's brief walkthrough of Chapter 5, Client Configuration -- Mounting via glusterfs (native client). I hope it is helpful to anyone reading this article.
💾 Red Hat certification column series:
RHCSA column: 戏说 RHCSA 认证
RHCE column: 戏说 RHCE 认证
This article is collected in the RHCA column: RHCA 回忆录
If this article helped you, please give 金鱼哥 a like 👍. Writing is not easy, and compared with the official wording I prefer to explain every topic in plain, easy-to-understand language.
If you are interested in operations and maintenance topics, you are also welcome to follow ❤️❤️❤️ 金鱼哥 ❤️❤️❤️ for more rewarding content 💕💕!