The existing TiKV nodes report errors and keep trying to reach a TiKV store that is already tombstoned, leading to severe performance and disk-space problems

Cluster version: v4.0.0

The original cluster had three TiKV nodes: TiKV1, TiKV2 (10.204.9.86:20180), and TiKV3. TiKV2 was damaged and could not be recovered, so we scaled out a new node, TiKV4, and scaled in TiKV2. Because TiKV2 stayed stuck in the Offline state, we could not perform any recovery operation and TiKV4 could not come online. We therefore set TiKV2 to Tombstone directly through PD and then removed TiKV2 through TiUP (disable).
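For reference, the scale-out / scale-in steps described above roughly correspond to the TiUP commands below. This is only a sketch: the cluster name tidb-test and the topology file scale-out-tikv4.yaml are placeholders, not values taken from this thread.

# Scale out the new node TiKV4 using a scale-out topology file (file name is a placeholder)
tiup cluster scale-out tidb-test scale-out-tikv4.yaml

# Scale in the damaged node TiKV2; the store then enters the Offline state
# while PD migrates its regions away
tiup cluster scale-in tidb-test --node 10.204.9.86:20160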

Right now the whole cluster's performance is poor. Looking at TiKV1 and TiKV3, they keep printing the log lines below. We also dropped a database, but the cluster's overall space usage has not gone down.

[2020/05/31 23:32:56.840 +08:00] [ERROR] [transport.rs:137] ["resolve store address failed"] [err="Other(\"[src/server/resolve.rs:72]: store is tombstone \\\"id: 4 address: \\\\\\\"10.204.9.86:20160\\\\\\\" state: Tombstone labels { key: \\\\\\\"host\\\\\\\" value: \\\\\\\"tikv2\\\\\\\" } version: \\\\\\\"4.0.0-rc.2\\\\\\\" status_address: \\\\\\\"10.204.9.86:20180\\\\\\\" git_hash: \\\\\\\"2fdb2804bf8ffaab4b18c4996970e19906296497\\\\\\\" start_timestamp: 1590876087 deploy_path: \\\\\\\"/data31/tidb-deploy/tikv-20160/bin/tikv-server\\\\\\\" last_heartbeat: 1590803526235186636\\\"\")"] [store_id=4]
[2020/05/31 23:32:56.840 +08:00] [ERROR] [transport.rs:137] ["resolve store address failed"] [err="Other(\"[src/server/resolve.rs:72]: store is tombstone \\\"id: 4 address: \\\\\\\"10.204.9.86:20160\\\\\\\" state: Tombstone labels { key: \\\\\\\"host\\\\\\\" value: \\\\\\\"tikv2\\\\\\\" } version: \\\\\\\"4.0.0-rc.2\\\\\\\" status_address: \\\\\\\"10.204.9.86:20180\\\\\\\" git_hash: \\\\\\\"2fdb2804bf8ffaab4b18c4996970e19906296497\\\\\\\" start_timestamp: 1590876087 deploy_path: \\\\\\\"/data31/tidb-deploy/tikv-20160/bin/tikv-server\\\\\\\" last_heartbeat: 1590803526235186636\\\"\")"] [store_id=4]
  1. What exactly do you mean by being unable to perform the recovery operation?
  2. Why could TiKV4 not come online? Were there any error messages?
  3. Please share the current cluster status, i.e. the output of tiup cluster display (example commands below).
  4. Please also upload the results of member and store from pd-ctl. Thanks.
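The information above can be collected roughly as follows. This is a sketch: the cluster name tidb-test is a placeholder, the PD address is the one that appears in the member output later in this thread, and depending on the TiUP version the ctl component may need an explicit version (e.g. tiup ctl:v4.0.0 pd).

# Cluster topology and node status
tiup cluster display tidb-test

# PD member and store information via the pd-ctl component of TiUP
tiup ctl pd -u http://10.204.9.131:2379 member
tiup ctl pd -u http://10.204.9.131:2379 store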

1. TiKV2 was damaged: it was brought down while restoring data with BR, and we have not found a way to recover it so far.
2. TiKV4 was started by scaling out TiKV, and TiKV2 was scaled in. TiKV4 held only a very small amount of data; neither data nor leaders were being migrated to it. Its used space was only about 100 GB while the other TiKV nodes were around 1.5 TB. After forcing TiKV2 to Tombstone, TiKV4 has recovered.

3. Cluster status:

4. Output from pd-ctl:
member

{
  "header": {
    "cluster_id": 6820705896630752929
  },
  "members": [
    {
      "name": "pd-10.204.9.133-2379",
      "member_id": 1758101190664893197,
      "peer_urls": [
        "http://10.204.9.133:2380"
      ],
      "client_urls": [
        "http://10.204.9.133:2379"
      ],
      "deploy_path": "/data/tidb-deploy/pd-2379/bin",
      "binary_version": "v4.0.0",
      "git_hash": "56d4c3d2237f5bf6fb11a794731ed1d95c8020c2"
    },
    {
      "name": "pd-10.204.9.132-2379",
      "member_id": 4003923670935905986,
      "peer_urls": [
        "http://10.204.9.132:2380"
      ],
      "client_urls": [
        "http://10.204.9.132:2379"
      ],
      "deploy_path": "/data/tidb-deploy/pd-2379/bin",
      "binary_version": "v4.0.0",
      "git_hash": "56d4c3d2237f5bf6fb11a794731ed1d95c8020c2"
    },
    {
      "name": "pd-10.204.9.131-2379",
      "member_id": 7629576249764497491,
      "peer_urls": [
        "http://10.204.9.131:2380"
      ],
      "client_urls": [
        "http://10.204.9.131:2379"
      ],
      "deploy_path": "/data/tidb-deploy/pd-2379/bin",
      "binary_version": "v4.0.0",
      "git_hash": "56d4c3d2237f5bf6fb11a794731ed1d95c8020c2"
    }
  ],
  "leader": {
    "name": "pd-10.204.9.131-2379",
    "member_id": 7629576249764497491,
    "peer_urls": [
      "http://10.204.9.131:2380"
    ],
    "client_urls": [
      "http://10.204.9.131:2379"
    ]
  },
  "etcd_leader": {
    "name": "pd-10.204.9.131-2379",
    "member_id": 7629576249764497491,
    "peer_urls": [
      "http://10.204.9.131:2380"
    ],
    "client_urls": [
      "http://10.204.9.131:2379"
    ],
    "deploy_path": "/data/tidb-deploy/pd-2379/bin",
    "binary_version": "v4.0.0",
    "git_hash": "56d4c3d2237f5bf6fb11a794731ed1d95c8020c2"
  }
}

store

{
  "count": 4,
  "stores": [
    {
      "store": {
        "id": 1,
        "address": "10.204.9.87:20160",
        "labels": [
          {
            "key": "host",
            "value": "tikv3"
          }
        ],
        "version": "4.0.0",
        "status_address": "10.204.9.87:20180",
        "git_hash": "198a2cea01734ce8f46d55a29708f123f9133944",
        "start_timestamp": 1590896656,
        "deploy_path": "/data31/tidb-deploy/tikv-20160/bin/tikv-server",
        "last_heartbeat": 1590982377840924510,
        "state_name": "Up"
      },
      "status": {
        "capacity": "2.865TiB",
        "available": "971.2GiB",
        "used_size": "1.884TiB",
        "leader_count": 27460,
        "leader_weight": 1,
        "leader_score": 27460,
        "leader_size": 2174716,
        "region_count": 98447,
        "region_weight": 1,
        "region_score": 7874999,
        "region_size": 7874999,
        "start_ts": "2020-05-31T11:44:16+08:00",
        "last_heartbeat_ts": "2020-06-01T11:32:57.84092451+08:00",
        "uptime": "23h48m41.84092451s"
      }
    },
    {
      "store": {
        "id": 6,
        "address": "10.204.9.85:20160",
        "labels": [
          {
            "key": "host",
            "value": "tikv1"
          }
        ],
        "version": "4.0.0",
        "status_address": "10.204.9.85:20180",
        "git_hash": "198a2cea01734ce8f46d55a29708f123f9133944",
        "start_timestamp": 1590896364,
        "deploy_path": "/data31/tidb-deploy/tikv-20160/bin/tikv-server",
        "last_heartbeat": 1590982373980992964,
        "state_name": "Up"
      },
      "status": {
        "capacity": "2.865TiB",
        "available": "989GiB",
        "used_size": "1.878TiB",
        "leader_count": 53012,
        "leader_weight": 1,
        "leader_score": 53012,
        "leader_size": 4145438,
        "region_count": 98447,
        "region_weight": 1,
        "region_score": 7874999,
        "region_size": 7874999,
        "start_ts": "2020-05-31T11:39:24+08:00",
        "last_heartbeat_ts": "2020-06-01T11:32:53.980992964+08:00",
        "uptime": "23h53m29.980992964s"
      }
    },
    {
      "store": {
        "id": 88,
        "address": "10.204.9.90:3930",
        "labels": [
          {
            "key": "engine",
            "value": "tiflash"
          }
        ],
        "version": "v4.0.0",
        "peer_address": "10.204.9.90:20170",
        "status_address": "10.204.9.90:20292",
        "git_hash": "c51c2c5c18860aaef3b5853f24f8e9cefea167eb",
        "start_timestamp": 1590896875,
        "last_heartbeat": 1590982377140013325,
        "state_name": "Up"
      },
      "status": {
        "capacity": "2.865TiB",
        "available": "2.715TiB",
        "used_size": "277KiB",
        "leader_count": 0,
        "leader_weight": 1,
        "leader_score": 0,
        "leader_size": 0,
        "region_count": 0,
        "region_weight": 1,
        "region_score": 0,
        "region_size": 0,
        "start_ts": "2020-05-31T11:47:55+08:00",
        "last_heartbeat_ts": "2020-06-01T11:32:57.140013325+08:00",
        "uptime": "23h45m2.140013325s"
      }
    },
    {
      "store": {
        "id": 657369,
        "address": "10.204.9.90:20161",
        "version": "4.0.0",
        "status_address": "10.204.9.90:20181",
        "git_hash": "198a2cea01734ce8f46d55a29708f123f9133944",
        "start_timestamp": 1590896859,
        "deploy_path": "/data32/deploy/install/deploy/tikv-20161/bin",
        "last_heartbeat": 1590982374154880239,
        "state_name": "Up"
      },
      "status": {
        "capacity": "2.865TiB",
        "available": "1015GiB",
        "used_size": "1.866TiB",
        "leader_count": 17975,
        "leader_weight": 1,
        "leader_score": 17975,
        "leader_size": 1554845,
        "region_count": 92052,
        "region_weight": 1,
        "region_score": 7348629,
        "region_size": 7348629,
        "start_ts": "2020-05-31T11:47:39+08:00",
        "last_heartbeat_ts": "2020-06-01T11:32:54.154880239+08:00",
        "uptime": "23h45m15.154880239s"
      }
    }
  ]
}

In the end I plan to rebuild the whole cluster, because the disks here are almost full. What used to be 1.5 TB of data now occupies 6 TB in total, and disk usage is close to 70%.

  1. From the store output, all stores are Up, the 3 TiKV instances are fairly well balanced, and there is 1 TiFlash node.
  2. Are TiKV1 and TiKV3 still printing [ERROR] [transport.rs:137] ["resolve store address failed"]? And is the TiKV space usage still growing? There is no store with id 4 in the store output, which is indeed strange.
  3. Could you check whether your BR process is still running, and try killing it if it is (see the sketch below)? Thanks.
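A simple way to check for and stop a lingering br process (plain ps/kill, nothing TiDB-specific; the PID is whatever the first command prints):

# Look for a running br process (the bracket trick avoids matching grep itself)
ps -ef | grep -w '[b]r'

# If a process is found, stop it by PID
kill <PID>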

2. The error log keeps being printed and the space keeps growing. Store 4 was originally in the Offline state. What we did was first force it into the Tombstone state with curl, and then it was removed outright when we ran tiup cluster display.
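Presumably the forced state change was done against the PD store-state API, roughly like the sketch below (assuming the PD leader address from the member output above; store 4 is the id shown in the error log):

# Force store 4 (TiKV2) from Offline to Tombstone through the PD API
curl -X POST 'http://10.204.9.131:2379/pd/api/v1/store/4/state?state=Tombstone'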

3. The BR process does not exist any more; searching for br turns up nothing, and the restore last night had already finished. Also, the error log above was already being printed continuously even before the restore was run.

[2020/05/31 16:16:38.561 +08:00] [INFO] [coprocessor.go:870] ["[TIME_COP_PROCESS] resp_time:2.093925021s txnStartTS:417047993781059590 region_id:4153286 store_addr:10.204.9.85:20160 kv_process_ms:2092 scan_total_write:3822583 scan_processed_write:3822582 scan_total_data:0 scan_processed_data:0 scan_total_lock:1 scan_processed_lock:0"]
[2020/05/31 16:16:38.561 +08:00] [INFO] [client.go:134] ["Restore client closed"]
[2020/05/31 16:16:38.563 +08:00] [INFO] [ddl_worker.go:124] ["[ddl] DDL worker closed"] [worker="worker 1, tp general"] ["take time"=10.792µs]
[2020/05/31 16:16:38.563 +08:00] [INFO] [ddl_worker.go:124] ["[ddl] DDL worker closed"] [worker="worker 2, tp add index"] ["take time"=6.326µs]
[2020/05/31 16:16:38.563 +08:00] [INFO] [delete_range.go:123] ["[ddl] closing delRange"]
[2020/05/31 16:16:38.563 +08:00] [INFO] [session_pool.go:85] ["[ddl] closing sessionPool"]
[2020/05/31 16:16:38.563 +08:00] [INFO] [ddl.go:407] ["[ddl] DDL closed"] [ID=f68a902f-9951-4169-a346-11778ae697a7] ["take time"=2.440802ms]
[2020/05/31 16:16:38.563 +08:00] [INFO] [ddl.go:301] ["[ddl] stop DDL"] [ID=f68a902f-9951-4169-a346-11778ae697a7]
[2020/05/31 16:16:38.564 +08:00] [INFO] [manager.go:267] ["failed to campaign"] ["owner info"="[ddl] /tidb/ddl/fg/owner ownerManager f68a902f-9951-4169-a346-11778ae697a7"] [error="context canceled"]
[2020/05/31 16:16:38.564 +08:00] [INFO] [manager.go:248] ["break campaign loop, context is done"] ["owner info"="[ddl] /tidb/ddl/fg/owner ownerManager f68a902f-9951-4169-a346-11778ae697a7"]
[2020/05/31 16:16:38.565 +08:00] [INFO] [manager.go:292] ["revoke session"] ["owner info"="[ddl] /tidb/ddl/fg/owner ownerManager f68a902f-9951-4169-a346-11778ae697a7"] []
[2020/05/31 16:16:38.566 +08:00] [INFO] [domain.go:607] ["domain closed"] ["take time"=5.23773ms]
[2020/05/31 16:16:38.566 +08:00] [INFO] [collector.go:172] ["Database restore Failed summary : total restore files: 64614, total success: 64614, total failed: 0"] ["split region"=33m23.982001099s] ["restore checksum"=18m49.231473133s] ["restore ranges"=46980]
  1. Could you provide the BR log and the TiKV log from when TiKV was brought down, covering 10 minutes before and after the problem occurred? Thanks.
  2. TiKV is currently logging attempts to resolve store id 4. Could you send the log from when this error first appeared, again 10 minutes before and after? Thanks.
  3. As for the space growing continuously, please check whether it is caused by the TiKV logs; you can clean up part of the TiKV logs (a possible cleanup command is sketched below). Thanks.
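If the growth really is in the TiKV log directory, something like the following could reclaim space. This is only a sketch: the log path is inferred from the deploy_path shown in the store output, and the 7-day retention is an arbitrary choice.

# Delete rotated TiKV log files older than 7 days (keeps the active tikv.log)
find /data31/tidb-deploy/tikv-20160/log -name 'tikv*.log.*' -mtime +7 -delete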

1. The logs from when TiKV was brought down probably cannot be fully collected any more. I looked through the BR logs and there is nothing special; the 10 minutes beforehand are all import activity. The logs from when the problem occurred are the ones posted above.
2. The cluster keeps trying to resolve store id 4, but since that TiKV has been cleaned up by TiUP, there is no information left about it.
3. The logs have already been cleaned up.

OK. As for the BR issue, I understand the team is still analyzing it. Thanks.

I am a bit confused by the description of what happened with TiKV2 and TiKV4. Could you describe the operations performed on them in more detail? Thanks.

The original cluster had 3 TiKV nodes: TiKV1, TiKV2 (10.204.9.86:20180), and TiKV3.

The BR restore left TiKV2 unusable.

Since TiKV2 could not be recovered, we scaled out a new node, TiKV4, and scaled in TiKV2, hoping to repair the cluster. The cluster remained usable, but its state was quite abnormal: 1. performance was very poor; 2. TiKV1 and TiKV3 kept printing logs frantically; 3. scheduling problems caused TiKV storage usage to keep growing.

OK. Were any other operations performed on tikv4 after that?

No. After TiKV2 went into the Tombstone state, data and leaders slowly migrated over to tikv4.

  1. Are the remaining TiKV stores balanced now, in terms of total leader and region counts (a quick way to check is sketched below)?
  2. Are TiKV1 and TiKV3 still printing the logs about resolving store id 4?
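One quick way to compare leader and region counts across stores is to read them out of the PD stores API with jq (a sketch; assumes jq is installed and uses the PD leader address from the member output above):

# Print address, leader count and region count for every store
curl -s http://10.204.9.131:2379/pd/api/v1/stores \
  | jq -r '.stores[] | "\(.store.address)  leaders=\(.status.leader_count)  regions=\(.status.region_count)"'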

They did become balanced in the final stage, but the logs were still being printed at the end. The whole cluster has now been completely rebuilt, so the scenario at that time can no longer be reproduced.

OK, thank you for the reply, and sorry this issue has dragged on for a while. We will keep following up, and if there is any progress we will reply again. Thanks.