Why does TiKV's compaction order still not change after dynamic-level-bytes is disabled?

【TiDB environment】Testing
【TiDB version】TiDB 5.3
【Problem encountered】After setting dynamic-level-bytes to false for all three TiKV CFs via tiup cluster edit-config, why does continuously inserted data, once L0 triggers a compaction, still flush to L6 first, only moving on to L5 after the L6 files reach a certain size? I expected the original L1~L6 order to be restored. Reference:
https://book.tidb.io/session4/chapter7/compact.html
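
For context, a minimal sketch of how such a change is usually expressed in the topology file opened by tiup cluster edit-config. The exact lines below are my assumption of what was set, with key names following TiKV's rocksdb.defaultcf / rocksdb.writecf / rocksdb.lockcf config sections, not the poster's actual file:

server_configs:
  tikv:
    # per-CF setting; all three CFs switched off, as described above
    rocksdb.defaultcf.dynamic-level-bytes: false
    rocksdb.writecf.dynamic-level-bytes: false
    rocksdb.lockcf.dynamic-level-bytes: false

A tiup cluster reload <cluster-name> -R tikv is then needed for the change to take effect.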
【Reproduction path】Operations performed that led to the problem
【Problem symptoms and impact】
(screenshot attachment)

Ingest SST adds the SST to the bottommost level, level 6 by default, if it does not overlap with any existing SSTs. We use ingest SST to replicate regions' peers, so the LSM structure may look like this:
L0 SSTs
L1 SSTs
L2 SSTs
L3~L5 empty
L6 SSTs (ingested during replication)
When a user deletes data contained in the SSTs at level 6, the delete marks are hard to push down to level 6, so the disk space occupied by these SSTs is hard to reclaim.
When dynamic_level_bytes is on, for an empty DB, RocksDB makes the last level the base level, which means L0 data is merged into the last level until that level exceeds max_bytes_for_level_base. RocksDB then makes the second-to-last level the base level, and so on.
For the case above, with dynamic_level_bytes on, the LSM structure looks like this:
L0 SSTs
L5 SSTs
L6 SSTs
The delete marks are compacted from L0 to L5 and from L5 to L6, and the space occupied by these keys is freed.
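
To make the moving base level concrete, here is a toy model of the rule described above. This is my own sketch, not RocksDB code, and the 512 MiB figure is an assumption matching TiKV's default max-bytes-for-level-base for the default and write CFs:

# Toy model of how the base level moves when dynamic_level_bytes is on.
# An illustration of the rule quoted above, not RocksDB source.

MAX_BYTES_FOR_LEVEL_BASE = 512  # MiB; assumed TiKV default for default/write CF
NUM_LEVELS = 7                  # L0..L6

def base_level(sizes_mib):
    """Walk up from the bottom and return the first level that has not
    yet exceeded max_bytes_for_level_base; L0 compacts into that level."""
    for level in range(NUM_LEVELS - 1, 0, -1):
        if sizes_mib[level] < MAX_BYTES_FOR_LEVEL_BASE:
            return level
    return 1

sizes = [0] * NUM_LEVELS
for batch in range(12):              # flush twelve 100 MiB batches from L0
    target = base_level(sizes)
    sizes[target] += 100
    print(f"batch {batch:2d}: L0 -> L{target}, L1..L6 = {sizes[1:]}")

# The output fills L6 first, then starts filling L5 once L6 passes
# 512 MiB -- the L0 / L5 / L6 shape described in the question.

With dynamic_level_bytes off, RocksDB instead keeps L1 as the fixed base level, which is the L1~L6 order the question expected.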

It is not recommended to change this value on a DB that is not empty. If you really need to change it, use tikv-ctl to compact the whole DB first:
$ ./bin/tikv-ctl --host={$tikv-ip}:{$tikv-port} compact -c write
$ ./bin/tikv-ctl --host={$tikv-ip}:{$tikv-port} compact -c default
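
If the lock CF's setting was changed as well (the question mentions all three CFs), it can presumably be compacted the same way; this extra line is my addition, not part of the original quote:

$ ./bin/tikv-ctl --host={$tikv-ip}:{$tikv-port} compact -c lock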

Hello. That document still seems to attribute the behavior to dynamic_level_bytes being enabled, but my configuration already sets it to false, and the problem of data living only in L0, L5, and L6 persists. Is that because I did not run a compaction first?

From the description, that is what it means. The best approach is still to set this option when the cluster is first initialized.

Thanks for your answer! I'll give it a try.

May I ask whether the text you quoted is from the RocksDB documentation?

It is from the description in a TiKV issue.

I also ran a test on this; feel free to take a look:
https://tidb.net/blog/7f8aeedb

Thank you! That is very detailed.
