# Ceph Nautilus vs. GlusterFS 6: Comparative Test Notes

Author: sysit · Published: 2019-09-07

* This post is only a record of tests run in this particular environment. A real deployment requires extensive parameter tuning before going live, and that tuning has a large impact on the results, so the numbers from this experiment should not be treated as reference data.

## 1. Environment

### 1.1 Ceph environment

* Ceph version: Nautilus
* Number of OSDs: 6
* Number of servers: 6
* Cluster status:

```
[root@master1 ~]# ceph -s
  cluster:
    id:     aa569515-22fc-4710-895f-5f1c779ca29e
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum master1,master2,master3 (age 9m)
    mgr: master1(active, since 8m)
    osd: 6 osds: 6 up (since 5m), 6 in (since 5m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   6.0 GiB used, 22 TiB / 22 TiB avail
    pgs:
```

### 1.2 GlusterFS environment

* GlusterFS version: 6.5
* Number of disks: 6
* Number of servers: 6
* Volume info:

```
[root@master1 ~]# gluster volume info test1

Volume Name: test1
Type: Distributed-Disperse
Volume ID: 30a8c606-2715-4884-ada4-85b186eaecf3
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: master1:/bricks/brick1/gv0
Brick2: master2:/bricks/brick1/gv0
Brick3: master3:/bricks/brick1/gv0
Brick4: node1:/bricks/brick1/gv0
Brick5: node2:/bricks/brick1/gv0
Brick6: node3:/bricks/brick1/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
```

## 2. Test tool

The benchmarks were run with a small Go wrapper around fio; see: https://www.sysit.cn/blog/post/sysit/%E7%94%A8go%E8%AF%AD%E8%A8%80%E6%89%A7%E8%A1%8Cfio%E6%B5%8B%E8%AF%95%E7%A3%81%E7%9B%98%E8%AF%BB%E5%86%99%E6%80%A7%E8%83%BD%E5%B9%B6%E8%BF%94%E5%9B%9E%E6%95%B0%E6%8D%AE
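The wrapper's exact fio job options are not shown in this post. Judging by the profile names in the result tables (pattern-blocksize-iodepth: rand-4k-1, rand-4k-64, seq-128k-64), they presumably correspond to plain fio runs roughly like the sketch below; the `--size` value and other settings here are placeholders, not the wrapper's real parameters.

```
# Assumed mapping of the three profiles to plain fio invocations
# (illustrative only; the Go wrapper's actual job definitions are not shown here).
fio --name=rand-4k-1   --filename=/data1/tmp.log --size=1G --direct=1 \
    --ioengine=libaio --rw=randwrite --bs=4k   --iodepth=1
fio --name=rand-4k-64  --filename=/data1/tmp.log --size=1G --direct=1 \
    --ioengine=libaio --rw=randwrite --bs=4k   --iodepth=64
fio --name=seq-128k-64 --filename=/data1/tmp.log --size=1G --direct=1 \
    --ioengine=libaio --rw=write     --bs=128k --iodepth=64
# The read columns would come from the matching --rw=randread / --rw=read runs.
```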
## 3. Tests

### 3.1 Physical disk I/O (baseline)

* fio test

```
[root@node3 ~]# mount /dev/sdd1 /data1
[root@node3 ~]# ./fio 100M 1G /data1/tmp.log
+-------------+-------------+------------------+-------------+-----------------+
| Test        | Write IOPS  | Write BW (KiB/s) | Read IOPS   | Read BW (KiB/s) |
+-------------+-------------+------------------+-------------+-----------------+
| rand-4k-1   | 989.716230  | 3958             | 222.918472  | 891             |
| rand-4k-64  | 850.244113  | 3400             | 1185.020599 | 4740            |
| seq-128k-64 | 1863.512284 | 238529           | 1858.439201 | 237880          |
+-------------+-------------+------------------+-------------+-----------------+
```

* dd test

```
[root@node3 ~]# dd if=/dev/zero of=/data1/tmp1.log bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.78369 s, 224 MB/s
[root@node3 ~]# dd if=/dev/zero of=/data1/tmp1.log bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.42904 s, 242 MB/s
```

### 3.2 GlusterFS test

> The servers are connected by 1 Gbps links, so the results below are heavily influenced by the network.

* fio test

```
[root@node3 ~]# mount.glusterfs 192.168.112.51:/test1 /data2
[root@node3 ~]# ./fio 100M 1G /data2/tmp.log
+-------------+-------------+------------------+-------------+-----------------+
| Test        | Write IOPS  | Write BW (KiB/s) | Read IOPS   | Read BW (KiB/s) |
+-------------+-------------+------------------+-------------+-----------------+
| rand-4k-64  | 845.414616  | 3381             | 1193.473193 | 4773            |
| seq-128k-64 | 233.363719  | 29870            | 892.471947  | 114236          |
| rand-4k-1   | 817.134285  | 3268             | 269.891004  | 1079            |
+-------------+-------------+------------------+-------------+-----------------+
```

* dd test

```
[root@node3 ~]# dd if=/dev/zero of=/data2/tmp1.log bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 9.08386 s, 118 MB/s
[root@node3 ~]# dd if=/dev/zero of=/data2/tmp1.log bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 9.08411 s, 118 MB/s
```

### 3.3 Ceph test

* Create an RBD block device

```
# pool
[root@master1 ~]# ceph osd pool create data 128
pool 'data' created
[root@master1 ~]# rbd pool init data

# user
[root@master1 ~]# ceph auth get-or-create client.user01 mon 'allow r' osd 'allow class-read object_prefix rbd_children,allow rwx pool=data,allow rwx pool=data'
[client.user01]
        key = AQBbsHNdSmEbDRAApEyQalOIMv0/Bnsq1jy25Q==
[root@master1 ~]# ceph auth get client.user01
exported keyring for client.user01
[client.user01]
        key = AQBbsHNdSmEbDRAApEyQalOIMv0/Bnsq1jy25Q==
        caps mon = "allow r"
        caps osd = "allow class-read object_prefix rbd_children,allow rwx pool=data,allow rwx pool=data"
[root@master2 ~]# ceph auth get client.user01 > /etc/ceph/ceph.client.user01.keyring
exported keyring for client.user01
[root@master1 ~]# chmod 644 /etc/ceph/ceph.client.user01.keyring
[root@master1 ~]# scp /etc/ceph/ceph.client.user01.keyring root@node3:/etc/ceph
ceph.client.rbd.keyring                          100%  105   109.3KB/s   00:00

# RBD image
[root@node3 ~]# rbd -p data --name client.user01 create disk01 --size 5G --image-feature layering
[root@node3 ~]# rbd -p data --name client.user01 map disk01
/dev/rbd0

# format and mount
[root@node3 ~]# mkfs.xfs /dev/rbd0
[root@node3 ~]# mkdir /data3
[root@node3 ~]# mount /dev/rbd0 /data3
```

* fio test

```
[root@node3 ~]# ./fio 100M 1G /data3/tmp.log
+-------------+-------------+------------------+--------------+-----------------+
| Test        | Write IOPS  | Write BW (KiB/s) | Read IOPS    | Read BW (KiB/s) |
+-------------+-------------+------------------+--------------+-----------------+
| rand-4k-1   | 32.483758   | 129              | 631.849146   | 2527            |
| rand-4k-64  | 592.702352  | 2370             | 24173.748820 | 96694           |
| seq-128k-64 | 257.399610  | 32947            | 966.037736   | 123652          |
+-------------+-------------+------------------+--------------+-----------------+
```

* dd test

```
[root@node3 ~]# dd if=/dev/zero of=/data3/tmp1.log bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 8.20452 s, 131 MB/s
[root@node3 ~]# dd if=/dev/zero of=/data3/tmp1.log bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 6.94509 s, 155 MB/s
```
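One caveat about the dd numbers: without a direct or sync flag, dd reports how fast data lands in the page cache rather than how fast it reaches the backing store, which helps explain why the Ceph figures above can exceed what a single 1 Gbps link can carry. Anyone repeating the comparison might add variants along these lines (illustrative only, not part of the original runs):

```
# Bypass the page cache entirely with direct I/O...
dd if=/dev/zero of=/data3/tmp1.log bs=1M count=1024 oflag=direct
# ...or keep buffered writes but include the final flush to disk in the timing.
dd if=/dev/zero of=/data3/tmp1.log bs=1M count=1024 conv=fdatasync
```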