【TiDB Usage Environment】Production / Testing / PoC
【TiDB Version】
【Reproduction Path】What operations were performed before the issue appeared
【Encountered Problem: Symptoms and Impact】
【Resource Configuration】Go to TiDB Dashboard - Cluster Info (Cluster Info) - Hosts and attach a screenshot of that page
【Attachments: Screenshots / Logs / Monitoring】
tidb_gc_enable has been turned off for about half a year...
mysql> show status like '%gc%';
+-----------------------+-----------------------------------------------------------------------------------------------------------+
| Variable_name         | Value                                                                                                     |
+-----------------------+-----------------------------------------------------------------------------------------------------------+
| tidb_gc_last_run_time | 20231026-15:06:47.521 +0800                                                                               |
| tidb_gc_leader_desc   | host:sjpt-dbdc-tidb9.dc.wxxdc, pid:37499, start at 2023-11-16 18:59:16.797709803 +0800 CST m=+1.565898972 |
| tidb_gc_leader_lease  | 20240506-17:23:16.802 +0800                                                                               |
| tidb_gc_leader_uuid   | 62f5f240ab00003                                                                                           |
| tidb_gc_safe_point    | 20231026-14:56:47.521 +0800                                                                               |
+-----------------------+-----------------------------------------------------------------------------------------------------------+
5 rows in set (0.20 sec)
mysql> show variables like '%gc%';
+-------------------------------------+--------+
| Variable_name                       | Value  |
+-------------------------------------+--------+
| tidb_enable_gc_aware_memory_track   | OFF    |
| tidb_enable_gogc_tuner              | ON     |
| tidb_gc_concurrency                 | -1     |
| tidb_gc_enable                      | OFF    |
| tidb_gc_life_time                   | 10m0s  |
| tidb_gc_max_wait_time               | 86400  |
| tidb_gc_run_interval                | 10m0s  |
| tidb_gc_scan_lock_mode              | LEGACY |
| tidb_gogc_tuner_threshold           | 0.6    |
| tidb_server_memory_limit_gc_trigger | 0.7    |
+-------------------------------------+--------+
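The post doesn't show the statement used to turn GC back on; presumably it was re-enabled through the standard system variable, along these lines:

mysql> SET GLOBAL tidb_gc_enable = TRUE;         -- assumed; the exact statement isn't shown in the post
mysql> SHOW VARIABLES LIKE 'tidb_gc_enable';     -- verify it now reads ON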
After re-enabling it:
mysql> show status like '%gc%';
+-----------------------+-----------------------------------------------------------------------------------------------------------+
| Variable_name         | Value                                                                                                     |
+-----------------------+-----------------------------------------------------------------------------------------------------------+
| tidb_gc_last_run_time | 20240506-17:31:16.779 +0800                                                                               |
| tidb_gc_leader_desc   | host:sjpt-dbdc-tidb9.dc.wxxdc, pid:37499, start at 2023-11-16 18:59:16.797709803 +0800 CST m=+1.565898972 |
| tidb_gc_leader_lease  | 20240506-18:54:16.802 +0800                                                                               |
| tidb_gc_leader_uuid   | 62f5f240ab00003                                                                                           |
| tidb_gc_safe_point    | 20240506-17:21:16.779 +0800                                                                               |
+-----------------------+-----------------------------------------------------------------------------------------------------------+
tidb@sjpt-dbdc-tidb7:~$ tiup ctl:v7.1.1 pd service-gc-safepoint
Starting component `ctl`: /home/tidb/.tiup/components/ctl/v7.1.1/ctl pd service-gc-safepoint
{
  "service_gc_safe_points": [
    {
      "service_id": "gc_worker",
      "expired_at": 9223372036854775807,
      "safe_point": 449573624683954176
    }
  ],
  "gc_safe_point": 445200048461185024
}
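For reference, those two TSOs can be decoded into wall-clock time with TiDB's built-in TIDB_PARSE_TSO() (the physical part of a TSO is a millisecond Unix timestamp shifted left by 18 bits), which makes the gap visible:

mysql> SELECT TIDB_PARSE_TSO(449573624683954176);  -- gc_worker service safe point: ~2024-05-06
mysql> SELECT TIDB_PARSE_TSO(445200048461185024);  -- PD's gc_safe_point: still stuck at ~2023-10-26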
Running `tiup cdc:v7.1.1 cli --pd=http://10.2.***:2379 unsafe reset` had no effect, and restarting PD didn't help either.
In the PD logs I see:
failed to get safe point from pd"] [err_code=KV:Storage:Unknown] [err="Error(Other("[src/server/gc_worker/gc_worker.rs:80]: failed to get safe point from PD: Other("[components/pd_client/src/util.rs:427
The TiDB logs show:
["[gc worker] delete ranges: got an error while trying to get store list from PD"] [uuid=62f5f240ab00003] [error="rpc error: code = Unavailable desc = not leader"]
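Since both messages complain about "not leader", it may also be worth confirming which PD actually holds leadership and that all PD members are healthy (my own suggestion, not something tried in the post), e.g.:

tidb@sjpt-dbdc-tidb7:~$ tiup ctl:v7.1.1 pd member
tidb@sjpt-dbdc-tidb7:~$ tiup ctl:v7.1.1 pd health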