Table creation is slow

IOPS looks fine; creating a table shouldn't be this slow, right?

[root@ali-e2-tikv1 ~]# fio -direct=1 -iodepth=32 -rw=randread -ioengine=libaio -bs=4k -numjobs=4 -time_based=1 -runtime=120 -group_reporting -filename=/dev/vdb1 -name=test
test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32

fio-3.7
Starting 4 processes
Jobs: 4 (f=4): [r(4)][100.0%][r=93.1MiB/s,w=0KiB/s][r=23.8k,w=0 IOPS][eta 00m:00s]
test: (groupid=0, jobs=4): err= 0: pid=20094: Tue Mar 19 14:36:12 2024
read: IOPS=24.1k, BW=94.2MiB/s (98.8MB/s)(11.0GiB/120007msec)
slat (usec): min=3, max=123454, avg=10.18, stdev=99.37
clat (usec): min=222, max=125997, avg=5276.23, stdev=3194.22
lat (usec): min=255, max=127969, avg=5286.58, stdev=3196.21
clat percentiles (usec):
| 1.00th=[ 2057], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2540],
| 30.00th=[ 2769], 40.00th=[ 3195], 50.00th=[ 4293], 60.00th=[ 6456],
| 70.00th=[ 7242], 80.00th=[ 7832], 90.00th=[ 8455], 95.00th=[ 9110],
| 99.00th=[15139], 99.50th=[19530], 99.90th=[29754], 99.95th=[35914],
| 99.99th=[51119]
bw ( KiB/s): min= 1712, max=46608, per=25.00%, avg=24106.90, stdev=3482.69, samples=960
iops : min= 428, max=11652, avg=6026.70, stdev=870.67, samples=960
lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.03%
lat (msec) : 2=0.55%, 4=47.74%, 10=48.57%, 20=2.62%, 50=0.46%
lat (msec) : 100=0.01%, 250=0.01%
cpu : usr=3.32%, sys=9.20%, ctx=2261976, majf=0, minf=275
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=2893294,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
READ: bw=94.2MiB/s (98.8MB/s), 94.2MiB/s-94.2MiB/s (98.8MB/s-98.8MB/s), io=11.0GiB (11.8GB), run=120007-120007msec

Disk stats (read/write):
vdb: ios=3016985/88934, merge=12101/811907, ticks=16129776/776840, in_queue=15217378, util=99.44%

Creating a table has to persist metadata into the etcd embedded in PD, and PD involves a Raft majority confirmation plus a disk write. On my setup (NVMe SSD, 10 GbE network) it takes roughly 500 ms when the cluster is idle.
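Since PD/etcd commit latency is dominated by synchronous writes rather than random reads, a write test that fsyncs every IO is more representative for PD than the randread run above. A possible fio invocation (the filename is a placeholder; point it at a test file on the PD data disk, not at a raw device in use):

```shell
# Measure small synchronous write latency, the pattern etcd/raft cares about.
# fdatasync=1 forces a data sync after every write.
fio --name=pd-fsync-test --rw=write --bs=4k --size=64m \
    --ioengine=sync --fdatasync=1 \
    --filename=/path/to/pd-data-dir/fio-testfile
```

The "fsync/fdatasync" latency percentiles in the output are the numbers to compare against that ~500 ms table-creation time.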

Run ADMIN SHOW DDL JOBS to check whether other DDL jobs are running.
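For reference, the statements look like this; a long queue ahead of your CREATE TABLE would explain the delay:

```sql
-- List recent and currently running DDL jobs.
ADMIN SHOW DDL JOBS;

-- Show the current DDL owner and its state.
ADMIN SHOW DDL;
```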

So could the table-creation time be going to the PD side?

It varies: sometimes fast, sometimes slow, not very stable.
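One way to quantify the spread is to time a batch of identical CREATE TABLE statements from the mysql client (host, port, and credentials below are placeholders for your cluster):

```shell
# Time 10 consecutive CREATE TABLEs; the spread across iterations shows
# how unstable DDL latency is. Each probe table is dropped afterwards.
for i in $(seq 1 10); do
  start=$(date +%s%N)
  mysql -h 127.0.0.1 -P 4000 -u root -e "CREATE TABLE test.ddl_probe_$i (id INT);"
  end=$(date +%s%N)
  echo "create #$i: $(( (end - start) / 1000000 )) ms"
  mysql -h 127.0.0.1 -P 4000 -u root -e "DROP TABLE test.ddl_probe_$i;"
done
```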

:joy: Yours is clearly too slow. An ordinary SSD should reach around 400 MB/s, and NVMe can hit 3000 MB/s or more. Upgrade the storage.

Also, if it's a cloud vendor's cloud SSD, prefer ESSD or similar tiers whose performance can be scaled up (and remember to disable burst performance, it's extremely expensive).

What I tested is random IOPS, not throughput.

For an SSD the difference is small at 4K IO; unlike a spinning disk, there is no seek time.
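As a sanity check on the fio report above, the numbers are self-consistent: the bandwidth figure is just IOPS times the 4 KiB block size, and Little's Law (outstanding IOs divided by mean completion latency) reproduces the IOPS figure. This is plain arithmetic on the reported values:

```shell
# Bandwidth = IOPS * block size: 24.1k IOPS at 4 KiB is ~94 MiB/s,
# so the 94.2 MiB/s above is an IOPS result, not a throughput result.
awk 'BEGIN { printf "%.1f MiB/s\n", 24100 * 4096 / 1048576 }'

# Little's Law: IOPS ~ (iodepth * numjobs) / mean completion latency
# = (32 * 4) / 5.276 ms, matching the reported IOPS=24.1k.
awk 'BEGIN { printf "%.0f IOPS\n", (32 * 4) / 0.005276 }'
```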

  1. What is low-space-ratio set to?
  2. Please post the monitoring for the related write operations so we can take a look.

I'm on 6.5.1; it doesn't seem to have this variable.

It does; this is a configuration file parameter, not a system variable. Check it with:
SHOW CONFIG WHERE NAME LIKE '%low-space-ratio%';

Try changing it to 0.9 first. Your disks are this full and you still haven't expanded capacity?
SET CONFIG pd `low-space-ratio` = 0.9;


Does the PD monitoring look normal?

Just test it yourself and you'll see. Alibaba's SSD is a shared cloud disk, not a dedicated one; the quoted performance figures are theoretical maxima that you can't actually reach in practice.

There are still 300+ GB free.

TiKV still has 500 GB.