Using Backup fails with "File or directory not found on TiKV Node (store id: 1)"

【TiDB Environment】Production / Testing / PoC
The TidbCluster and Backup were created with tidb-operator v1.6.1:
repoURL: https://charts.pingcap.org/
chart: tidb-operator
【TiDB Version】v8.5.0, deployed with the following TidbCluster spec:

apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic
spec:
  version: v8.5.0
  timezone: Asia/Shanghai
  pvReclaimPolicy: Retain
  enableDynamicConfiguration: true
  configUpdateStrategy: RollingUpdate
  discovery: {}
  helper:
    image: m.daocloud.io/docker.io/library/alpine:3.16.0
  ...

【Reproduction Path】Operations performed before the problem appeared: applied the following PersistentVolumeClaim and Backup manifests.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tidb-cluster-basic-backup-pvc
  namespace: tidb-cluster
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  storageClassName: ""

---
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: basic-backup-tidb-cluster-20240801123456
  namespace: tidb-cluster
spec:
  cleanPolicy: Retain
  resources:
    requests:
      cpu: "500m"
      memory: "512Mi"
    limits:
      cpu: "1000m"
      memory: "1Gi"
  backupMode: snapshot
  backupType: full
  toolImage: m.daocloud.io/docker.io/pingcap/br:v8.5.0
  br:
    cluster: basic
    clusterNamespace: tidb-cluster
    logLevel: info
    concurrency: 4
    # rateLimit: 0
    # options:
    # - --lastbackupts=420134118382108673
  local:
    prefix: tidb-cluster/basic/20240801123456
    volume:
      name: tidb-cluster-basic-backup-pvc
      persistentVolumeClaim:
        claimName: tidb-cluster-basic-backup-pvc
    volumeMount:
      name: tidb-cluster-basic-backup-pvc
      mountPath: /backup
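
My understanding is that with `local` storage, the operator passes `--storage=local:///backup/...` straight to BR, and each TiKV node then writes its own SST files to that path on its local filesystem, so the mount in the backup pod alone may not be enough. If that is the case, I assume every TiKV pod would also need the same RWX PVC mounted at the same path; a sketch of what I think that would look like (assuming the tikv component supports `additionalVolumes`/`additionalVolumeMounts` as other components do in tidb-operator v1.x):

```yaml
# Hypothetical sketch, not verified: mount the same ReadWriteMany PVC
# into every TiKV pod so that local:///backup/... also resolves on each store.
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic
spec:
  tikv:
    additionalVolumes:
      - name: backup
        persistentVolumeClaim:
          claimName: tidb-cluster-basic-backup-pvc
    additionalVolumeMounts:
      - name: backup
        mountPath: /backup
```

Is this required for `local` backups, or is the backup pod's mount supposed to be sufficient?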

【Problem Encountered: Symptoms and Impact】Log of the backup Job pod:
kubectl -n tidb-cluster logs -f backup-basic-tidb-cluster-full-20240801123456-2652w

Defaulted container "backup" out of: backup, br (init)
Create rclone.conf file.
/tidb-backup-manager backup --namespace=tidb-cluster --backupName=basic-tidb-cluster-full-20240801123456 --tikvVersion=v8.5.0 --mode=snapshot
I0523 14:22:58.567089       9 backup.go:78] start to process backup tidb-cluster/basic-tidb-cluster-full-20240801123456
I0523 14:22:58.573028       9 manager.go:109] start to process backup: {"kind":"Backup","apiVersion":"pingcap.com/v1alpha1","metadata":{"name":"basic-tidb-cluster-full-20240801123456","namespace":"tidb-cluster","uid":"cadebaa6-bb16-4ba3-84a8-37ae4131d4d1","resourceVersion":"50448022","generation":2,"creationTimestamp":"2025-05-23T06:22:54Z","annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"pingcap.com/v1alpha1\",\"kind\":\"Backup\",\"metadata\":{\"annotations\":{},\"name\":\"basic-tidb-cluster-full-20240801123456\",\"namespace\":\"tidb-cluster\"},\"spec\":{\"backupMode\":\"snapshot\",\"backupType\":\"full\",\"br\":{\"cluster\":\"basic\",\"clusterNamespace\":\"tidb-cluster\",\"concurrency\":4,\"logLevel\":\"info\"},\"cleanPolicy\":\"Retain\",\"local\":{\"prefix\":\"tidb-cluster/basic/full/20240801123456\",\"volume\":{\"name\":\"tidb-cluster-basic-backup-pvc\",\"persistentVolumeClaim\":{\"claimName\":\"tidb-cluster-basic-backup-pvc\"}},\"volumeMount\":{\"mountPath\":\"/backup\",\"name\":\"tidb-cluster-basic-backup-pvc\"}},\"resources\":{\"limits\":{\"cpu\":\"1000m\",\"memory\":\"1Gi\"},\"requests\":{\"cpu\":\"500m\",\"memory\":\"512Mi\"}},\"toolImage\":\"m.daocloud.io/docker.io/pingcap/br:v8.5.0\"}}\n"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"pingcap.com/v1alpha1","time":"2025-05-23T06:22:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{".":{},"f:backupMode":{},"f:backupType":{},"f:br":{".":{},"f:cluster":{},"f:clusterNamespace":{},"f:concurrency":{},"f:logLevel":{}},"f:calcSizeLevel":{},"f:cleanPolicy":{},"f:local":{".":{},"f:prefix":{},"f:volume":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}},"f:volumeMount":{".":{},"f:mountPath":{},"f:name":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:toolImag
e":{},"f:volumeBackupInitJobMaxActiveSeconds":{}}}},{"manager":"tidb-controller-manager","operation":"Update","apiVersion":"pingcap.com/v1alpha1","time":"2025-05-23T06:22:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:backoffRetryPolicy":{".":{},"f:maxRetryTimes":{},"f:minRetryDuration":{},"f:retryTimeout":{}},"f:resources":{"f:limits":{"f:cpu":{}}}},"f:status":{".":{},"f:conditions":{},"f:phase":{},"f:timeCompleted":{},"f:timeStarted":{}}}}]},"spec":{"resources":{"limits":{"cpu":"1","memory":"1Gi"},"requests":{"cpu":"500m","memory":"512Mi"}},"backupType":"full","backupMode":"snapshot","local":{"volume":{"name":"tidb-cluster-basic-backup-pvc","persistentVolumeClaim":{"claimName":"tidb-cluster-basic-backup-pvc"}},"volumeMount":{"name":"tidb-cluster-basic-backup-pvc","mountPath":"/backup"},"prefix":"tidb-cluster/basic/full/20240801123456"},"br":{"cluster":"basic","clusterNamespace":"tidb-cluster","logLevel":"info","concurrency":4},"calcSizeLevel":"all","toolImage":"m.daocloud.io/docker.io/pingcap/br:v8.5.0","cleanPolicy":"Retain","backoffRetryPolicy":{"minRetryDuration":"300s","maxRetryTimes":2,"retryTimeout":"30m"},"volumeBackupInitJobMaxActiveSeconds":600},"status":{"timeStarted":null,"timeCompleted":null,"phase":"Scheduled","conditions":[{"type":"Scheduled","status":"True","lastTransitionTime":"2025-05-23T06:22:54Z"}]}}
I0523 14:22:58.592544       9 backup_status_updater.go:128] Backup: [tidb-cluster/basic-tidb-cluster-full-20240801123456] updated successfully
E0523 14:22:58.605827       9 backup_status_updater.go:131] Failed to update backup [tidb-cluster/basic-tidb-cluster-full-20240801123456], error: Operation cannot be fulfilled on backups.pingcap.com "basic-tidb-cluster-full-20240801123456": the object has been modified; please apply your changes to the latest version and try again
I0523 14:22:58.634887       9 backup_status_updater.go:128] Backup: [tidb-cluster/basic-tidb-cluster-full-20240801123456] updated successfully
I0523 14:22:58.634946       9 backup.go:297] Running br command with args: [backup full --pd=basic-pd.tidb-cluster:2379 --log-level=info --storage=local:///backup/tidb-cluster/basic/full/20240801123456 --concurrency=4]
I0523 14:22:58.788201       9 backup.go:334] [2025/05/23 14:22:58.787 +08:00] [INFO] [meminfo.go:179] ["use cgroup memory hook because TiDB is in the container"]
I0523 14:22:58.788250       9 backup.go:334] [2025/05/23 14:22:58.788 +08:00] [INFO] [cmd.go:210] ["calculate the rest memory"] [memtotal=1073741824] [memused=38567936] [memlimit=927799706]
I0523 14:22:58.788412       9 backup.go:334] [2025/05/23 14:22:58.788 +08:00] [INFO] [info.go:53] ["Welcome to Backup & Restore (BR)"] [release-version=v8.5.0] [git-hash=d13e52ed6e22cc5789bed7c64c861578cd2ed55b] [git-branch=HEAD] [go-version=go1.23.3] [utc-build-time="2024-12-18 02:28:02"] [race-enabled=false]
I0523 14:22:58.788532       9 backup.go:334] [2025/05/23 14:22:58.788 +08:00] [INFO] [common.go:925] [arguments] [__command="br backup full"] [concurrency=4] [log-level=info] [pd="[basic-pd.tidb-cluster:2379]"] [storage=local:///backup/tidb-cluster/basic/full/20240801123456]
I0523 14:22:58.797567       9 backup.go:334] [2025/05/23 14:22:58.797 +08:00] [INFO] [conn.go:168] ["new mgr"] [pdAddrs="[basic-pd.tidb-cluster:2379]"]
I0523 14:22:58.841640       9 backup.go:334] [2025/05/23 14:22:58.841 +08:00] [INFO] [pd_service_discovery.go:1000] ["[pd] update member urls"] [old-urls="[http://basic-pd.tidb-cluster:2379]"] [new-urls="[http://basic-pd-0.basic-pd-peer.tidb-cluster.svc:2379,http://basic-pd-1.basic-pd-peer.tidb-cluster.svc:2379,http://basic-pd-2.basic-pd-peer.tidb-cluster.svc:2379]"]
I0523 14:22:58.842242       9 backup.go:334] [2025/05/23 14:22:58.842 +08:00] [INFO] [pd_service_discovery.go:1025] ["[pd] switch leader"] [new-leader=http://basic-pd-0.basic-pd-peer.tidb-cluster.svc:2379] [old-leader=]
I0523 14:22:58.843076       9 backup.go:334] [2025/05/23 14:22:58.843 +08:00] [INFO] [pd_service_discovery.go:499] ["[pd] init cluster id"] [cluster-id=7507103841878085337]
I0523 14:22:58.848677       9 backup.go:334] [2025/05/23 14:22:58.848 +08:00] [INFO] [client.go:532] ["[pd] changing service mode"] [old-mode=UNKNOWN_SVC_MODE] [new-mode=PD_SVC_MODE]
I0523 14:22:58.848753       9 backup.go:334] [2025/05/23 14:22:58.848 +08:00] [INFO] [tso_client.go:296] ["[tso] switch dc tso global allocator serving url"] [dc-location=global] [new-url=http://basic-pd-0.basic-pd-peer.tidb-cluster.svc:2379]
I0523 14:22:58.850315       9 backup.go:334] [2025/05/23 14:22:58.850 +08:00] [INFO] [tso_dispatcher.go:140] ["[tso] start tso deadline watcher"] [dc-location=global]
I0523 14:22:58.850334       9 backup.go:334] [2025/05/23 14:22:58.850 +08:00] [INFO] [client.go:538] ["[pd] service mode changed"] [old-mode=UNKNOWN_SVC_MODE] [new-mode=PD_SVC_MODE]
I0523 14:22:58.850369       9 backup.go:334] [2025/05/23 14:22:58.850 +08:00] [INFO] [tso_client.go:132] ["[tso] start tso dispatcher check loop"]
I0523 14:22:58.853642       9 backup.go:334] [2025/05/23 14:22:58.850 +08:00] [INFO] [tso_dispatcher.go:198] ["[tso] tso dispatcher created"] [dc-location=global]
I0523 14:22:58.854128       9 backup.go:334] [2025/05/23 14:22:58.853 +08:00] [INFO] [tso_dispatcher.go:723] ["[tso] switching tso rpc concurrency"] [old=0] [new=1]
I0523 14:22:58.854153       9 backup.go:334] [2025/05/23 14:22:58.853 +08:00] [INFO] [tso_dispatcher.go:476] ["[tso] start tso connection contexts updater"] [dc-location=global]
I0523 14:22:58.933739       9 backup.go:334] [2025/05/23 14:22:58.861 +08:00] [INFO] [conn.go:142] ["checked alive KV stores"] [aliveStores=3] [totalStores=3]
I0523 14:22:58.936119       9 backup.go:334] [2025/05/23 14:22:58.935 +08:00] [INFO] [pd_service_discovery.go:1000] ["[pd] update member urls"] [old-urls="[http://basic-pd.tidb-cluster:2379]"] [new-urls="[http://basic-pd-0.basic-pd-peer.tidb-cluster.svc:2379,http://basic-pd-1.basic-pd-peer.tidb-cluster.svc:2379,http://basic-pd-2.basic-pd-peer.tidb-cluster.svc:2379]"]
I0523 14:22:58.936875       9 backup.go:334] [2025/05/23 14:22:58.936 +08:00] [INFO] [pd_service_discovery.go:1025] ["[pd] switch leader"] [new-leader=http://basic-pd-0.basic-pd-peer.tidb-cluster.svc:2379] [old-leader=]
I0523 14:22:58.937754       9 backup.go:334] [2025/05/23 14:22:58.937 +08:00] [INFO] [pd_service_discovery.go:499] ["[pd] init cluster id"] [cluster-id=7507103841878085337]
I0523 14:22:58.940755       9 backup.go:334] [2025/05/23 14:22:58.940 +08:00] [INFO] [client.go:532] ["[pd] changing service mode"] [old-mode=UNKNOWN_SVC_MODE] [new-mode=PD_SVC_MODE]
I0523 14:22:58.940839       9 backup.go:334] [2025/05/23 14:22:58.940 +08:00] [INFO] [tso_client.go:296] ["[tso] switch dc tso global allocator serving url"] [dc-location=global] [new-url=http://basic-pd-0.basic-pd-peer.tidb-cluster.svc:2379]
I0523 14:22:58.942622       9 backup.go:334] [2025/05/23 14:22:58.942 +08:00] [INFO] [client.go:538] ["[pd] service mode changed"] [old-mode=UNKNOWN_SVC_MODE] [new-mode=PD_SVC_MODE]
I0523 14:22:58.942641       9 backup.go:334] [2025/05/23 14:22:58.942 +08:00] [INFO] [tso_dispatcher.go:140] ["[tso] start tso deadline watcher"] [dc-location=global]
I0523 14:22:58.942724       9 backup.go:334] [2025/05/23 14:22:58.942 +08:00] [INFO] [tso_dispatcher.go:198] ["[tso] tso dispatcher created"] [dc-location=global]
I0523 14:22:58.942738       9 backup.go:334] [2025/05/23 14:22:58.942 +08:00] [INFO] [tso_client.go:132] ["[tso] start tso dispatcher check loop"]
I0523 14:22:58.942857       9 backup.go:334] [2025/05/23 14:22:58.942 +08:00] [INFO] [tso_dispatcher.go:723] ["[tso] switching tso rpc concurrency"] [old=0] [new=1]
I0523 14:22:58.942870       9 backup.go:334] [2025/05/23 14:22:58.942 +08:00] [INFO] [tso_dispatcher.go:476] ["[tso] start tso connection contexts updater"] [dc-location=global]
I0523 14:22:58.944479       9 backup.go:334] [2025/05/23 14:22:58.944 +08:00] [INFO] [tikv_driver.go:201] ["using API V1."]
I0523 14:22:58.947972       9 backup.go:334] [2025/05/23 14:22:58.947 +08:00] [INFO] [tidb.go:85] ["new domain"] [store=tikv-7507103841878085337] ["ddl lease"=1s] ["stats lease"=-1ns]
I0523 14:22:58.976619       9 backup.go:334] [2025/05/23 14:22:58.976 +08:00] [WARN] [info.go:333] ["init TiFlashReplicaManager"]
I0523 14:22:59.058897       9 backup.go:334] [2025/05/23 14:22:59.058 +08:00] [INFO] [domain.go:2988] [acquireServerID] [serverID=1114] ["lease id"=6c0996f5fe80678c]
I0523 14:22:59.064936       9 backup.go:334] [2025/05/23 14:22:59.064 +08:00] [INFO] [controller.go:198] ["load resource controller config"] [config="{\"degraded-mode-wait-duration\":\"0s\",\"ltb-max-wait-duration\":\"30s\",\"ltb-token-rpc-max-delay\":\"1s\",\"request-unit\":{\"read-base-cost\":0.125,\"read-per-batch-base-cost\":0.5,\"read-cost-per-byte\":0.0000152587890625,\"write-base-cost\":1,\"write-per-batch-base-cost\":1,\"write-cost-per-byte\":0.0009765625,\"read-cpu-ms-cost\":0.3333333333333333},\"enable-controller-trace-log\":\"false\",\"token-rpc-params\":{\"wait-retry-interval\":\"50ms\",\"wait-retry-times\":20}}"] [ru-config="{\"ReadBaseCost\":0.125,\"ReadPerBatchBaseCost\":0.5,\"ReadBytesCost\":0.0000152587890625,\"WriteBaseCost\":1,\"WritePerBatchBaseCost\":1,\"WriteBytesCost\":0.0009765625,\"CPUMsCost\":0.3333333333333333,\"LTBMaxWaitDuration\":30000000000,\"WaitRetryInterval\":50000000,\"WaitRetryTimes\":20,\"DegradedModeWaitDuration\":0}"]
I0523 14:22:59.068410       9 backup.go:334] [2025/05/23 14:22:59.068 +08:00] [INFO] [store_cache.go:532] ["change store resolve state"] [store=1] [addr=basic-tikv-0.basic-tikv-peer.tidb-cluster.svc:20160] [from=unresolved] [to=resolved] [liveness-state=reachable]
I0523 14:22:59.069467       9 backup.go:334] [2025/05/23 14:22:59.069 +08:00] [INFO] [store_cache.go:532] ["change store resolve state"] [store=1001] [addr=basic-tikv-1.basic-tikv-peer.tidb-cluster.svc:20160] [from=unresolved] [to=resolved] [liveness-state=reachable]
I0523 14:22:59.070574       9 backup.go:334] [2025/05/23 14:22:59.070 +08:00] [INFO] [store_cache.go:532] ["change store resolve state"] [store=1002] [addr=basic-tikv-2.basic-tikv-peer.tidb-cluster.svc:20160] [from=unresolved] [to=resolved] [liveness-state=reachable]
I0523 14:22:59.145790       9 backup.go:334] [2025/05/23 14:22:59.145 +08:00] [INFO] [domain.go:403] ["full load InfoSchema success"] [isV2=true] [currentSchemaVersion=0] [neededSchemaVersion=132] ["elapsed time"=58.853429ms]
I0523 14:22:59.150131       9 backup.go:334] [2025/05/23 14:22:59.150 +08:00] [INFO] [domain.go:806] ["full load and reset schema validator"]
I0523 14:22:59.152530       9 backup.go:334] [2025/05/23 14:22:59.152 +08:00] [INFO] [ddl.go:945] ["change job version in use"] [category=ddl] [old=v1] [new=v2]
I0523 14:22:59.152573       9 backup.go:334] [2025/05/23 14:22:59.152 +08:00] [INFO] [ddl.go:779] ["start DDL"] [category=ddl] [ID=95f7e58e-1cf0-40df-ac05-ace1da18d9e8] [runWorker=false] [jobVersion=v2]
I0523 14:22:59.152627       9 backup.go:334] [2025/05/23 14:22:59.152 +08:00] [INFO] [ddl.go:756] ["start delRangeManager OK"] [category=ddl] ["is a emulator"=false]
I0523 14:22:59.160701       9 backup.go:334] [2025/05/23 14:22:59.160 +08:00] [INFO] [env.go:109] ["the ingest sorted directory"] [category=ddl-ingest] ["data path"=/tmp/tidb/tmp_ddl-4000]
I0523 14:22:59.233405       9 backup.go:334] [2025/05/23 14:22:59.232 +08:00] [WARN] [backend_mgr.go:96] ["ingest backfill may not be available"] [category=ddl-ingest] [error="the available disk space(68761137152) in /tmp/tidb/tmp_ddl-4000 should be greater than @@tidb_ddl_disk_quota(107374182400)"] [errorVerbose="the available disk space(68761137152) in /tmp/tidb/tmp_ddl-4000 should be greater than @@tidb_ddl_disk_quota(107374182400)\ngithub.com/pingcap/tidb/pkg/ddl/ingest.(*diskRootImpl).StartupCheck\n\t/workspace/source/tidb/pkg/ddl/ingest/disk_root.go:148\ngithub.com/pingcap/tidb/pkg/ddl/ingest.NewLitBackendCtxMgr\n\t/workspace/source/tidb/pkg/ddl/ingest/backend_mgr.go:94\ngithub.com/pingcap/tidb/pkg/ddl/ingest.InitGlobalLightningEnv\n\t/workspace/source/tidb/pkg/ddl/ingest/env.go:78\ngithub.com/pingcap/tidb/pkg/ddl.(*ddl).Start\n\t/workspace/source/tidb/pkg/ddl/ddl.go:833\ngithub.com/pingcap/tidb/pkg/domain.(*Domain).Start\n\t/workspace/source/tidb/pkg/domain/domain.go:1486\ngithub.com/pingcap/tidb/br/pkg/gluetidb.Glue.startDomainAsNeeded\n\t/workspace/source/tidb/br/pkg/gluetidb/glue.go:117\ngithub.com/pingcap/tidb/br/pkg/gluetidb.Glue.createTypesSession\n\t/workspace/source/tidb/br/pkg/gluetidb/glue.go:121\ngithub.com/pingcap/tidb/br/pkg/gluetidb.Glue.UseOneShotSession\n\t/workspace/source/tidb/br/pkg/gluetidb/glue.go:154\ngithub.com/pingcap/tidb/br/pkg/task.RunBackup\n\t/workspace/source/tidb/br/pkg/task/backup.go:428\nmain.runBackupCommand\n\t/workspace/source/tidb/br/cmd/br/backup.go:57\nmain.newFullBackupCommand.func1\n\t/workspace/source/tidb/br/cmd/br/backup.go:149\ngithub.com/spf13/cobra.(*Command).execute\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:985\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117\ngithub.com/spf13/cobra.(*Command).Execute\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041\nmain.main\n\t/workspace/source/tidb/br/cmd/br/main.go:36\nruntime.m
ain\n\t/usr/local/go/src/runtime/proc.go:272\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1700"]
I0523 14:22:59.233460       9 backup.go:334] [2025/05/23 14:22:59.233 +08:00] [INFO] [env.go:81] ["init global ingest backend environment finished"] [category=ddl-ingest] ["memory limitation"=536870912] ["disk usage info"="disk usage: 247165177856/315926315008, backend usage: 0"] ["max open file number"=65535] ["lightning is initialized"=true]
I0523 14:22:59.233504       9 backup.go:334] [2025/05/23 14:22:59.233 +08:00] [INFO] [wait_group_wrapper.go:133] ["background process started"] [source=domain] [process=loadSchemaInLoop]
I0523 14:22:59.233515       9 backup.go:334] [2025/05/23 14:22:59.233 +08:00] [INFO] [wait_group_wrapper.go:133] ["background process started"] [source=domain] [process=mdlCheckLoop]
I0523 14:22:59.233532       9 backup.go:334] [2025/05/23 14:22:59.233 +08:00] [INFO] [wait_group_wrapper.go:133] ["background process started"] [source=domain] [process=topNSlowQueryLoop]
I0523 14:22:59.233583       9 backup.go:334] [2025/05/23 14:22:59.233 +08:00] [INFO] [wait_group_wrapper.go:133] ["background process started"] [source=domain] [process=infoSyncerKeeper]
I0523 14:22:59.233604       9 backup.go:334] [2025/05/23 14:22:59.233 +08:00] [INFO] [wait_group_wrapper.go:133] ["background process started"] [source=domain] [process=globalConfigSyncerKeeper]
I0523 14:22:59.233663       9 backup.go:334] [2025/05/23 14:22:59.233 +08:00] [INFO] [wait_group_wrapper.go:133] ["background process started"] [source=domain] [process=runawayStartLoop]
I0523 14:22:59.233681       9 backup.go:334] [2025/05/23 14:22:59.233 +08:00] [INFO] [wait_group_wrapper.go:133] ["background process started"] [source=domain] [process=requestUnitsWriterLoop]
I0523 14:22:59.233720       9 backup.go:334] [2025/05/23 14:22:59.233 +08:00] [INFO] [wait_group_wrapper.go:133] ["background process started"] [source=domain] [process=closestReplicaReadCheckLoop]
I0523 14:22:59.233906       9 backup.go:334] [2025/05/23 14:22:59.233 +08:00] [INFO] [runaway.go:68] ["try to start runaway manager loop"]
I0523 14:22:59.233945       9 backup.go:334] [2025/05/23 14:22:59.233 +08:00] [INFO] [owner_daemon.go:70] ["begin advancer daemon"] [daemon-id=LogBackup::Advancer]
I0523 14:22:59.233989       9 backup.go:334] [2025/05/23 14:22:59.233 +08:00] [INFO] [manager.go:295] ["start campaign owner"] [ownerInfo="[log-backup] /tidb/br-stream/owner"]
I0523 14:22:59.247307       9 backup.go:334] [2025/05/23 14:22:59.247 +08:00] [INFO] [wait_group_wrapper.go:133] ["background process started"] [source=domain] [process=logBackupAdvancer]
I0523 14:22:59.247471       9 backup.go:334] [2025/05/23 14:22:59.247 +08:00] [INFO] [owner_daemon.go:81] ["begin running daemon"] [id=e3ab80f0-ebdb-4a3b-adb1-f4ef1e8a1721] [daemon-id=LogBackup::Advancer]
I0523 14:22:59.255842       9 backup.go:334] [2025/05/23 14:22:59.255 +08:00] [INFO] [backup.go:433] ["get new_collation_enabled config from mysql.tidb table"] [new_collation_enabled=True]
I0523 14:22:59.270229       9 backup.go:334] [2025/05/23 14:22:59.270 +08:00] [INFO] [delete_range.go:162] ["closing delRange"] [category=ddl]
I0523 14:22:59.270251       9 backup.go:334] [2025/05/23 14:22:59.270 +08:00] [INFO] [session_pool.go:94] ["closing session pool"] [category=ddl]
I0523 14:22:59.270384       9 backup.go:334] [2025/05/23 14:22:59.270 +08:00] [INFO] [ddl.go:1026] ["DDL closed"] [category=ddl] [ID=95f7e58e-1cf0-40df-ac05-ace1da18d9e8] ["take time"=14.516271ms]
I0523 14:22:59.270407       9 backup.go:334] [2025/05/23 14:22:59.270 +08:00] [INFO] [ddl.go:748] ["stop DDL"] [category=ddl] [ID=95f7e58e-1cf0-40df-ac05-ace1da18d9e8]
I0523 14:22:59.282078       9 backup.go:334] [2025/05/23 14:22:59.281 +08:00] [INFO] [domain.go:3009] ["releaseServerID succeed"] [serverID=1114]
I0523 14:22:59.282165       9 backup.go:334] [2025/05/23 14:22:59.282 +08:00] [INFO] [domain.go:882] ["infoSyncerKeeper exited."]
I0523 14:22:59.282179       9 backup.go:334] [2025/05/23 14:22:59.282 +08:00] [INFO] [domain.go:1099] ["loadSchemaInLoop exited."]
I0523 14:22:59.282193       9 backup.go:334] [2025/05/23 14:22:59.282 +08:00] [INFO] [domain.go:3073] ["serverIDKeeper exited."]
I0523 14:22:59.282208       9 backup.go:334] [2025/05/23 14:22:59.282 +08:00] [INFO] [wait_group_wrapper.go:140] ["background process exited"] [source=domain] [process=infoSyncerKeeper]
I0523 14:22:59.282223       9 backup.go:334] [2025/05/23 14:22:59.282 +08:00] [INFO] [wait_group_wrapper.go:140] ["background process exited"] [source=domain] [process=loadSchemaInLoop]
I0523 14:22:59.282239       9 backup.go:334] [2025/05/23 14:22:59.282 +08:00] [INFO] [domain.go:908] ["globalConfigSyncerKeeper exited."]
I0523 14:22:59.282249       9 backup.go:334] [2025/05/23 14:22:59.282 +08:00] [INFO] [wait_group_wrapper.go:140] ["background process exited"] [source=domain] [process=runawayStartLoop]
I0523 14:22:59.282264       9 backup.go:334] [2025/05/23 14:22:59.282 +08:00] [INFO] [wait_group_wrapper.go:140] ["background process exited"] [source=domain] [process=mdlCheckLoop]
I0523 14:22:59.282279       9 backup.go:334] [2025/05/23 14:22:59.282 +08:00] [INFO] [wait_group_wrapper.go:140] ["background process exited"] [source=domain] [process=globalConfigSyncerKeeper]
I0523 14:22:59.282302       9 backup.go:334] [2025/05/23 14:22:59.282 +08:00] [INFO] [wait_group_wrapper.go:140] ["background process exited"] [source=domain] [process=requestUnitsWriterLoop]
I0523 14:22:59.285528       9 backup.go:334] [2025/05/23 14:22:59.285 +08:00] [INFO] [manager.go:414] ["failed to campaign"] ["owner info"="[log-backup] /tidb/br-stream/owner ownerManager e3ab80f0-ebdb-4a3b-adb1-f4ef1e8a1721"] [error="context canceled"]
I0523 14:22:59.285569       9 backup.go:334] [2025/05/23 14:22:59.285 +08:00] [INFO] [manager.go:398] ["break campaign loop, context is done"] ["owner info"="[log-backup] /tidb/br-stream/owner ownerManager e3ab80f0-ebdb-4a3b-adb1-f4ef1e8a1721"]
I0523 14:22:59.297380       9 backup.go:334] [2025/05/23 14:22:59.297 +08:00] [INFO] [owner_daemon.go:87] ["daemon loop exits"] [id=e3ab80f0-ebdb-4a3b-adb1-f4ef1e8a1721] [daemon-id=LogBackup::Advancer]
I0523 14:22:59.297404       9 backup.go:334] [2025/05/23 14:22:59.297 +08:00] [INFO] [wait_group_wrapper.go:140] ["background process exited"] [source=domain] [process=logBackupAdvancer]
I0523 14:22:59.297415       9 backup.go:334] [2025/05/23 14:22:59.297 +08:00] [INFO] [domain.go:1580] ["closestReplicaReadCheckLoop exited."]
I0523 14:22:59.297431       9 backup.go:334] [2025/05/23 14:22:59.297 +08:00] [INFO] [advancer_cliext.go:160] ["Start collecting remaining events in the channel."] [category="log backup advancer"] [remained=0]
I0523 14:22:59.297441       9 backup.go:334] [2025/05/23 14:22:59.297 +08:00] [INFO] [wait_group_wrapper.go:140] ["background process exited"] [source=domain] [process=closestReplicaReadCheckLoop]
I0523 14:22:59.297457       9 backup.go:334] [2025/05/23 14:22:59.297 +08:00] [INFO] [domain.go:854] ["topNSlowQueryLoop exited."]
I0523 14:22:59.297472       9 backup.go:334] [2025/05/23 14:22:59.297 +08:00] [INFO] [wait_group_wrapper.go:140] ["background process exited"] [source=domain] [process=topNSlowQueryLoop]
I0523 14:22:59.297538       9 backup.go:334] [2025/05/23 14:22:59.297 +08:00] [INFO] [advancer_cliext.go:165] ["Finish collecting remaining events in the channel."] [category="log backup advancer"]
I0523 14:22:59.298230       9 backup.go:334] [2025/05/23 14:22:59.298 +08:00] [INFO] [domain.go:1274] ["domain closed"] ["take time"=42.335366ms]
I0523 14:22:59.298248       9 backup.go:334] [2025/05/23 14:22:59.298 +08:00] [INFO] [glue.go:180] ["one shot domain closed"]
I0523 14:22:59.298262       9 backup.go:334] [2025/05/23 14:22:59.298 +08:00] [INFO] [glue.go:163] ["one shot session closed"]
I0523 14:22:59.298322       9 backup.go:334] [2025/05/23 14:22:59.298 +08:00] [INFO] [client.go:379] ["new backup client"]
I0523 14:22:59.325060       9 backup.go:334] [2025/05/23 14:22:59.324 +08:00] [INFO] [backup.go:472] ["use checkpoint's default GC TTL"] ["GC TTL"=4320]
I0523 14:22:59.328491       9 backup.go:334] [2025/05/23 14:22:59.328 +08:00] [INFO] [client.go:447] ["backup encode timestamp"] [BackupTS=458222830694957057]
I0523 14:22:59.328570       9 backup.go:334] [2025/05/23 14:22:59.328 +08:00] [INFO] [backup.go:494] ["current backup safePoint job"] [safePoint="{ID=br-f4388ab8-38ff-4df5-9006-2cc069196640,TTL=1h12m0s,BackupTime=\"2025-05-23 14:22:59.299 +0800 CST\",BackupTS=458222830694957057}"]
I0523 14:22:59.388996       9 backup.go:334] [2025/05/23 14:22:59.388 +08:00] [INFO] [client.go:802] ["backup empty database"] [db=test]
I0523 14:22:59.389029       9 backup.go:334] [2025/05/23 14:22:59.388 +08:00] [INFO] [backup.go:581] ["get placement policies"] [count=0]
I0523 14:22:59.654310       9 backup.go:334] [2025/05/23 14:22:59.654 +08:00] [INFO] [local.go:80] ["failed to write file, try to mkdir the path"] [path=/backup/tidb-cluster/basic/full/20240801123456/checkpoints/backup]
I0523 14:22:59.687463       9 backup.go:334] [2025/05/23 14:22:59.687 +08:00] [INFO] [external_storage.go:104] ["start to flush the checkpoint lock"] [lock-at=1747981379650] [expire-at=1747981679650]
I0523 14:23:00.944985       9 backup.go:334] [2025/05/23 14:23:00.944 +08:00] [INFO] [pd.go:444] ["adaptive update ts interval state transition"] [configuredInterval=2s] [prevAdaptiveUpdateInterval=2s] [newAdaptiveUpdateInterval=2s] [requiredStaleness=0s] [prevState=unknown(0)] [newState=normal]
I0523 14:23:02.725583       9 backup.go:334] [2025/05/23 14:23:02.724 +08:00] [INFO] [client.go:1087] ["Backup Ranges Started"] [ranges="{total=151,ranges=\"[\\\"[74800000000000000A5F720000000000000000, 74800000000000000A5F72FFFFFFFFFFFFFFFF00)\\\",\\\"(skip 149)\\\",\\\"[7480000000000000915F69800000000000000100, 7480000000000000915F698000000000000001FB)\\\"]\",totalFiles=0,totalKVs=0,totalBytes=0,totalSize=0}"]
I0523 14:23:02.926478       9 backup.go:334] [2025/05/23 14:23:02.926 +08:00] [INFO] [client.go:184] ["This round of backup starts..."] [round=1]
I0523 14:23:02.927976       9 backup.go:334] [2025/05/23 14:23:02.927 +08:00] [INFO] [client.go:225] ["backup ranges"] [round=1] [incomplete-ranges=151] [cost=1.490499ms]
I0523 14:23:02.930069       9 backup.go:334] [2025/05/23 14:23:02.929 +08:00] [INFO] [store_manager.go:151] ["StoreManager: dialing to store."] [address=basic-tikv-0.basic-tikv-peer.tidb-cluster.svc:20160] [store-id=1]
I0523 14:23:02.936403       9 backup.go:334] [2025/05/23 14:23:02.936 +08:00] [INFO] [store.go:210] ["starting backup to the corresponding store"] [storeID=1] [requestCount=5] [concurrency=4]
I0523 14:23:02.937372       9 backup.go:334] [2025/05/23 14:23:02.937 +08:00] [INFO] [store_manager.go:151] ["StoreManager: dialing to store."] [address=basic-tikv-1.basic-tikv-peer.tidb-cluster.svc:20160] [store-id=1001]
I0523 14:23:02.941039       9 backup.go:334] [2025/05/23 14:23:02.940 +08:00] [INFO] [store.go:210] ["starting backup to the corresponding store"] [storeID=1001] [requestCount=5] [concurrency=4]
I0523 14:23:02.941971       9 backup.go:334] [2025/05/23 14:23:02.941 +08:00] [INFO] [store_manager.go:151] ["StoreManager: dialing to store."] [address=basic-tikv-2.basic-tikv-peer.tidb-cluster.svc:20160] [store-id=1002]
I0523 14:23:02.947588       9 backup.go:334] [2025/05/23 14:23:02.947 +08:00] [INFO] [client.go:147] ["start wait store backups"] [remainingProducers=3]
I0523 14:23:02.947613       9 backup.go:334] [2025/05/23 14:23:02.947 +08:00] [INFO] [store.go:210] ["starting backup to the corresponding store"] [storeID=1002] [requestCount=5] [concurrency=4]
I0523 14:23:02.947885       9 backup.go:334] [2025/05/23 14:23:02.947 +08:00] [INFO] [client.go:138] ["collect backups goroutine exits"] [round=1]
I0523 14:23:02.947905       9 backup.go:334] [2025/05/23 14:23:02.947 +08:00] [INFO] [client.go:1091] ["Backup Ranges Completed"] [take=222.328828ms]
I0523 14:23:02.947956       9 backup.go:334] [2025/05/23 14:23:02.947 +08:00] [INFO] [backup.go:659] ["wait for flush checkpoint..."]
I0523 14:23:02.948061       9 backup.go:334] [2025/05/23 14:23:02.947 +08:00] [INFO] [checkpoint.go:506] ["stop checkpoint runner"]
I0523 14:23:02.948097       9 backup.go:334] [2025/05/23 14:23:02.947 +08:00] [WARN] [backoff.go:209] ["unexpected error, stop retrying"] [error="context canceled"] [errorVerbose="context canceled\ngithub.com/pingcap/errors.AddStack\n\t/root/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20240318064555-6bd07397691f/errors.go:178\ngithub.com/pingcap/errors.Trace\n\t/root/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20240318064555-6bd07397691f/juju_adaptor.go:15\ngithub.com/pingcap/tidb/br/pkg/backup.doSendBackup\n\t/workspace/source/tidb/br/pkg/backup/store.go:188\ngithub.com/pingcap/tidb/br/pkg/backup.startBackup.func1.1\n\t/workspace/source/tidb/br/pkg/backup/store.go:231\ngithub.com/pingcap/tidb/br/pkg/utils.WithRetry.func1\n\t/workspace/source/tidb/br/pkg/utils/retry.go:45\ngithub.com/pingcap/tidb/br/pkg/utils.WithRetryV2[...]\n\t/workspace/source/tidb/br/pkg/utils/retry.go:63\ngithub.com/pingcap/tidb/br/pkg/utils.WithRetry\n\t/workspace/source/tidb/br/pkg/utils/retry.go:44\ngithub.com/pingcap/tidb/br/pkg/backup.startBackup.func1\n\t/workspace/source/tidb/br/pkg/backup/store.go:225\ngithub.com/pingcap/tidb/pkg/util.(*WorkerPool).ApplyOnErrorGroup.func1\n\t/workspace/source/tidb/pkg/util/worker_pool.go:81\ngolang.org/x/sync/errgroup.(*Group).Go.func1\n\t/root/go/pkg/mod/golang.org/x/sync@v0.8.0/errgroup/errgroup.go:78\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1700"]
I0523 14:23:02.948128       9 backup.go:334] [2025/05/23 14:23:02.947 +08:00] [WARN] [backoff.go:209] ["unexpected error, stop retrying"] [error="context canceled"] [errorVerbose="context canceled\ngithub.com/pingcap/errors.AddStack\n\t/root/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20240318064555-6bd07397691f/errors.go:178\ngithub.com/pingcap/errors.Trace\n\t/root/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20240318064555-6bd07397691f/juju_adaptor.go:15\ngithub.com/pingcap/tidb/br/pkg/backup.doSendBackup\n\t/workspace/source/tidb/br/pkg/backup/store.go:188\ngithub.com/pingcap/tidb/br/pkg/backup.startBackup.func1.1\n\t/workspace/source/tidb/br/pkg/backup/store.go:231\ngithub.com/pingcap/tidb/br/pkg/utils.WithRetry.func1\n\t/workspace/source/tidb/br/pkg/utils/retry.go:45\ngithub.com/pingcap/tidb/br/pkg/utils.WithRetryV2[...]\n\t/workspace/source/tidb/br/pkg/utils/retry.go:63\ngithub.com/pingcap/tidb/br/pkg/utils.WithRetry\n\t/workspace/source/tidb/br/pkg/utils/retry.go:44\ngithub.com/pingcap/tidb/br/pkg/backup.startBackup.func1\n\t/workspace/source/tidb/br/pkg/backup/store.go:225\ngithub.com/pingcap/tidb/pkg/util.(*WorkerPool).ApplyOnErrorGroup.func1\n\t/workspace/source/tidb/pkg/util/worker_pool.go:81\ngolang.org/x/sync/errgroup.(*Group).Go.func1\n\t/root/go/pkg/mod/golang.org/x/sync@v0.8.0/errgroup/errgroup.go:78\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1700"]
I0523 14:23:02.948154       9 backup.go:334] [2025/05/23 14:23:02.948 +08:00] [INFO] [checkpoint.go:395] ["stop checkpoint flush worker"]
I0523 14:23:02.948182       9 backup.go:334] [2025/05/23 14:23:02.947 +08:00] [WARN] [backoff.go:209] ["unexpected error, stop retrying"] [error="context canceled"] [errorVerbose="context canceled\ngithub.com/pingcap/errors.AddStack\n\t/root/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20240318064555-6bd07397691f/errors.go:178\ngithub.com/pingcap/errors.Trace\n\t/root/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20240318064555-6bd07397691f/juju_adaptor.go:15\ngithub.com/pingcap/tidb/br/pkg/backup.doSendBackup\n\t/workspace/source/tidb/br/pkg/backup/store.go:188\ngithub.com/pingcap/tidb/br/pkg/backup.startBackup.func1.1\n\t/workspace/source/tidb/br/pkg/backup/store.go:231\ngithub.com/pingcap/tidb/br/pkg/utils.WithRetry.func1\n\t/workspace/source/tidb/br/pkg/utils/retry.go:45\ngithub.com/pingcap/tidb/br/pkg/utils.WithRetryV2[...]\n\t/workspace/source/tidb/br/pkg/utils/retry.go:63\ngithub.com/pingcap/tidb/br/pkg/utils.WithRetry\n\t/workspace/source/tidb/br/pkg/utils/retry.go:44\ngithub.com/pingcap/tidb/br/pkg/backup.startBackup.func1\n\t/workspace/source/tidb/br/pkg/backup/store.go:225\ngithub.com/pingcap/tidb/pkg/util.(*WorkerPool).ApplyOnErrorGroup.func1\n\t/workspace/source/tidb/pkg/util/worker_pool.go:81\ngolang.org/x/sync/errgroup.(*Group).Go.func1\n\t/root/go/pkg/mod/golang.org/x/sync@v0.8.0/errgroup/errgroup.go:78\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1700"]
I0523 14:23:02.948212       9 backup.go:334] [2025/05/23 14:23:02.947 +08:00] [WARN] [backoff.go:209] ["unexpected error, stop retrying"] [error="context canceled"] [errorVerbose="context canceled\ngithub.com/pingcap/errors.AddStack\n\t/root/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20240318064555-6bd07397691f/errors.go:178\ngithub.com/pingcap/errors.Trace\n\t/root/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20240318064555-6bd07397691f/juju_adaptor.go:15\ngithub.com/pingcap/tidb/br/pkg/backup.doSendBackup\n\t/workspace/source/tidb/br/pkg/backup/store.go:188\ngithub.com/pingcap/tidb/br/pkg/backup.startBackup.func1.1\n\t/workspace/source/tidb/br/pkg/backup/store.go:231\ngithub.com/pingcap/tidb/br/pkg/utils.WithRetry.func1\n\t/workspace/source/tidb/br/pkg/utils/retry.go:45\ngithub.com/pingcap/tidb/br/pkg/utils.WithRetryV2[...]\n\t/workspace/source/tidb/br/pkg/utils/retry.go:63\ngithub.com/pingcap/tidb/br/pkg/utils.WithRetry\n\t/workspace/source/tidb/br/pkg/utils/retry.go:44\ngithub.com/pingcap/tidb/br/pkg/backup.startBackup.func1\n\t/workspace/source/tidb/br/pkg/backup/store.go:225\ngithub.com/pingcap/tidb/pkg/util.(*WorkerPool).ApplyOnErrorGroup.func1\n\t/workspace/source/tidb/pkg/util/worker_pool.go:81\ngolang.org/x/sync/errgroup.(*Group).Go.func1\n\t/root/go/pkg/mod/golang.org/x/sync@v0.8.0/errgroup/errgroup.go:78\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1700"]
I0523 14:23:02.948235       9 backup.go:334] [2025/05/23 14:23:02.947 +08:00] [WARN] [backoff.go:209] ["unexpected error, stop retrying"] [error="context canceled"] [errorVerbose="context canceled\ngithub.com/pingcap/errors.AddStack\n\t/root/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20240318064555-6bd07397691f/errors.go:178\ngithub.com/pingcap/errors.Trace\n\t/root/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20240318064555-6bd07397691f/juju_adaptor.go:15\ngithub.com/pingcap/tidb/br/pkg/backup.doSendBackup\n\t/workspace/source/tidb/br/pkg/backup/store.go:188\ngithub.com/pingcap/tidb/br/pkg/backup.startBackup.func1.1\n\t/workspace/source/tidb/br/pkg/backup/store.go:231\ngithub.com/pingcap/tidb/br/pkg/utils.WithRetry.func1\n\t/workspace/source/tidb/br/pkg/utils/retry.go:45\ngithub.com/pingcap/tidb/br/pkg/utils.WithRetryV2[...]\n\t/workspace/source/tidb/br/pkg/utils/retry.go:63\ngithub.com/pingcap/tidb/br/pkg/utils.WithRetry\n\t/workspace/source/tidb/br/pkg/utils/retry.go:44\ngithub.com/pingcap/tidb/br/pkg/backup.startBackup.func1\n\t/workspace/source/tidb/br/pkg/backup/store.go:225\ngithub.com/pingcap/tidb/pkg/util.(*WorkerPool).ApplyOnErrorGroup.func1\n\t/workspace/source/tidb/pkg/util/worker_pool.go:81\ngolang.org/x/sync/errgroup.(*Group).Go.func1\n\t/root/go/pkg/mod/golang.org/x/sync@v0.8.0/errgroup/errgroup.go:78\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1700"]
I0523 14:23:02.948260       9 backup.go:334] [2025/05/23 14:23:02.948 +08:00] [ERROR] [client.go:117] ["store backup failed"] [round=1] [storeID=1] [error="rpc error: code = Canceled desc = context canceled"] [stack="github.com/pingcap/tidb/br/pkg/backup.(*MainBackupSender).SendAsync.func1\n\t/workspace/source/tidb/br/pkg/backup/client.go:117"]
I0523 14:23:02.948284       9 backup.go:334] [2025/05/23 14:23:02.948 +08:00] [INFO] [client.go:103] ["store backup goroutine exits"] [store=1]
I0523 14:23:02.948304       9 backup.go:334] [2025/05/23 14:23:02.948 +08:00] [ERROR] [client.go:117] ["store backup failed"] [round=1] [storeID=1002] [error="rpc error: code = Canceled desc = context canceled"] [stack="github.com/pingcap/tidb/br/pkg/backup.(*MainBackupSender).SendAsync.func1\n\t/workspace/source/tidb/br/pkg/backup/client.go:117"]
I0523 14:23:02.948323       9 backup.go:334] [2025/05/23 14:23:02.948 +08:00] [WARN] [backoff.go:209] ["unexpected error, stop retrying"] [error="context canceled"] [errorVerbose="context canceled\ngithub.com/pingcap/errors.AddStack\n\t/root/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20240318064555-6bd07397691f/errors.go:178\ngithub.com/pingcap/errors.Trace\n\t/root/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20240318064555-6bd07397691f/juju_adaptor.go:15\ngithub.com/pingcap/tidb/br/pkg/backup.doSendBackup\n\t/workspace/source/tidb/br/pkg/backup/store.go:188\ngithub.com/pingcap/tidb/br/pkg/backup.startBackup.func1.1\n\t/workspace/source/tidb/br/pkg/backup/store.go:231\ngithub.com/pingcap/tidb/br/pkg/utils.WithRetry.func1\n\t/workspace/source/tidb/br/pkg/utils/retry.go:45\ngithub.com/pingcap/tidb/br/pkg/utils.WithRetryV2[...]\n\t/workspace/source/tidb/br/pkg/utils/retry.go:63\ngithub.com/pingcap/tidb/br/pkg/utils.WithRetry\n\t/workspace/source/tidb/br/pkg/utils/retry.go:44\ngithub.com/pingcap/tidb/br/pkg/backup.startBackup.func1\n\t/workspace/source/tidb/br/pkg/backup/store.go:225\ngithub.com/pingcap/tidb/pkg/util.(*WorkerPool).ApplyOnErrorGroup.func1\n\t/workspace/source/tidb/pkg/util/worker_pool.go:81\ngolang.org/x/sync/errgroup.(*Group).Go.func1\n\t/root/go/pkg/mod/golang.org/x/sync@v0.8.0/errgroup/errgroup.go:78\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1700"]
I0523 14:23:02.948346       9 backup.go:334] [2025/05/23 14:23:02.948 +08:00] [INFO] [client.go:103] ["store backup goroutine exits"] [store=1002]
I0523 14:23:02.948425       9 backup.go:334] [2025/05/23 14:23:02.948 +08:00] [ERROR] [client.go:117] ["store backup failed"] [round=1] [storeID=1001] [error="rpc error: code = Canceled desc = context canceled"] [stack="github.com/pingcap/tidb/br/pkg/backup.(*MainBackupSender).SendAsync.func1\n\t/workspace/source/tidb/br/pkg/backup/client.go:117"]
I0523 14:23:02.948442       9 backup.go:334] [2025/05/23 14:23:02.948 +08:00] [INFO] [client.go:103] ["store backup goroutine exits"] [store=1001]
I0523 14:23:02.949354       9 backup.go:334] [2025/05/23 14:23:02.949 +08:00] [INFO] [backup.go:500] ["skip removing gc-safepoint keeper for next retry"] [gc-id=br-f4388ab8-38ff-4df5-9006-2cc069196640]
I0523 14:23:02.950888       9 backup.go:334] [2025/05/23 14:23:02.950 +08:00] [INFO] [tso_dispatcher.go:158] ["[tso] exit tso deadline watcher"] [dc-location=global]
I0523 14:23:02.950918       9 backup.go:334] [2025/05/23 14:23:02.950 +08:00] [INFO] [tso_dispatcher.go:264] ["[tso] stop fetching the pending tso requests due to context canceled"] [dc-location=global]
I0523 14:23:02.950939       9 backup.go:334] [2025/05/23 14:23:02.950 +08:00] [INFO] [pd_service_discovery.go:551] ["[pd] exit member loop due to context canceled"]
I0523 14:23:02.950953       9 backup.go:334] [2025/05/23 14:23:02.950 +08:00] [INFO] [tso_dispatcher.go:201] ["[tso] exit tso dispatcher"] [dc-location=global]
I0523 14:23:02.950966       9 backup.go:334] [2025/05/23 14:23:02.950 +08:00] [INFO] [tso_dispatcher.go:490] ["[tso] exit tso connection contexts updater"] [dc-location=global]
I0523 14:23:02.950978       9 backup.go:334] [2025/05/23 14:23:02.950 +08:00] [INFO] [tso_client.go:147] ["[tso] exit tso dispatcher check loop"]
I0523 14:23:02.950991       9 backup.go:334] [2025/05/23 14:23:02.950 +08:00] [INFO] [resource_manager_client.go:296] ["[resource manager] exit resource token dispatcher"]
I0523 14:23:02.951008       9 backup.go:334] [2025/05/23 14:23:02.950 +08:00] [INFO] [tso_client.go:157] ["[tso] closing tso client"]
I0523 14:23:02.951022       9 backup.go:334] [2025/05/23 14:23:02.950 +08:00] [INFO] [tso_client.go:162] ["[tso] close tso client"]
I0523 14:23:02.951073       9 backup.go:334] [2025/05/23 14:23:02.951 +08:00] [INFO] [tso_client.go:164] ["[tso] tso client is closed"]
I0523 14:23:02.951095       9 backup.go:334] [2025/05/23 14:23:02.950 +08:00] [INFO] [tso_stream.go:359] ["tsoStream.recvLoop ended"] [stream=basic-pd-0.basic-pd-peer.tidb-cluster.svc:2379-2] [error="rpc error: code = Canceled desc = context canceled"] [errorVerbose="rpc error: code = Canceled desc = context canceled\ngithub.com/tikv/pd/client.(*tsoStream).recvLoop\n\t/root/go/pkg/mod/github.com/tikv/pd/client@v0.0.0-20241111073742-238d4d79ea31/tso_stream.go:427\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1700"]
I0523 14:23:02.951110       9 backup.go:334] [2025/05/23 14:23:02.951 +08:00] [INFO] [pd_service_discovery.go:644] ["[pd] close pd service discovery client"]
I0523 14:23:02.952317       9 backup.go:334] [2025/05/23 14:23:02.952 +08:00] [INFO] [client.go:347] ["[pd] http client closed"] [source=tikv-driver]
I0523 14:23:02.953493       9 backup.go:334] [2025/05/23 14:23:02.953 +08:00] [INFO] [pd_service_discovery.go:551] ["[pd] exit member loop due to context canceled"]
I0523 14:23:02.953512       9 backup.go:334] [2025/05/23 14:23:02.953 +08:00] [INFO] [resource_manager_client.go:296] ["[resource manager] exit resource token dispatcher"]
I0523 14:23:02.953530       9 backup.go:334] [2025/05/23 14:23:02.953 +08:00] [INFO] [tso_dispatcher.go:158] ["[tso] exit tso deadline watcher"] [dc-location=global]
I0523 14:23:02.953542       9 backup.go:334] [2025/05/23 14:23:02.953 +08:00] [INFO] [tso_client.go:157] ["[tso] closing tso client"]
I0523 14:23:02.953555       9 backup.go:334] [2025/05/23 14:23:02.953 +08:00] [INFO] [tso_client.go:147] ["[tso] exit tso dispatcher check loop"]
I0523 14:23:02.953573       9 backup.go:334] [2025/05/23 14:23:02.953 +08:00] [INFO] [tso_dispatcher.go:264] ["[tso] stop fetching the pending tso requests due to context canceled"] [dc-location=global]
I0523 14:23:02.953591       9 backup.go:334] [2025/05/23 14:23:02.953 +08:00] [INFO] [tso_dispatcher.go:490] ["[tso] exit tso connection contexts updater"] [dc-location=global]
I0523 14:23:02.953612       9 backup.go:334] [2025/05/23 14:23:02.953 +08:00] [INFO] [tso_stream.go:359] ["tsoStream.recvLoop ended"] [stream=basic-pd-0.basic-pd-peer.tidb-cluster.svc:2379-1] [error="rpc error: code = Canceled desc = context canceled"] [errorVerbose="rpc error: code = Canceled desc = context canceled\ngithub.com/tikv/pd/client.(*tsoStream).recvLoop\n\t/root/go/pkg/mod/github.com/tikv/pd/client@v0.0.0-20241111073742-238d4d79ea31/tso_stream.go:427\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1700"]
I0523 14:23:02.953627       9 backup.go:334] [2025/05/23 14:23:02.953 +08:00] [INFO] [tso_dispatcher.go:201] ["[tso] exit tso dispatcher"] [dc-location=global]
I0523 14:23:02.953679       9 backup.go:334] [2025/05/23 14:23:02.953 +08:00] [INFO] [tso_client.go:162] ["[tso] close tso client"]
I0523 14:23:02.953775       9 backup.go:334] [2025/05/23 14:23:02.953 +08:00] [INFO] [tso_client.go:164] ["[tso] tso client is closed"]
I0523 14:23:02.953795       9 backup.go:334] [2025/05/23 14:23:02.953 +08:00] [INFO] [pd_service_discovery.go:644] ["[pd] close pd service discovery client"]
I0523 14:23:02.954772       9 backup.go:334] [2025/05/23 14:23:02.954 +08:00] [INFO] [client.go:347] ["[pd] http client closed"] [source="br/lightning PD controller"]
I0523 14:23:02.954845       9 backup.go:334] [2025/05/23 14:23:02.954 +08:00] [INFO] [collector.go:224] ["units canceled"] [cancel-unit=0]
I0523 14:23:02.954862       9 backup.go:334] [2025/05/23 14:23:02.954 +08:00] [INFO] [metafile.go:739] ["exit write metas by context done"]
I0523 14:23:02.954881       9 backup.go:334] [2025/05/23 14:23:02.954 +08:00] [INFO] [collector.go:225] ["Full Backup failed summary"] [total-ranges=0] [ranges-succeed=0] [ranges-failed=0] [backup-total-ranges=151] [backup-total-regions=151]
I0523 14:23:02.955128       9 backup.go:334] [2025/05/23 14:23:02.955 +08:00] [INFO] [progress.go:176] [progress] [step="Full Backup"] [progress=0.00%] [count="0 / 151"] [speed="? p/s"] [elapsed=3.3s] [remaining=3.3s]
I0523 14:23:02.955226       9 backup.go:334] [2025/05/23 14:23:02.954 +08:00] [ERROR] [backup.go:58] ["failed to backup"] [error="error happen in store 1: File or directory not found on TiKV Node (store id: 1). workaround: please ensure br and tikv nodes share a same storage and the user of br and tikv has same uid.: [BR:KV:ErrKVStorage]tikv storage occur I/O error"] [errorVerbose="[BR:KV:ErrKVStorage]tikv storage occur I/O error\nerror happen in store 1: File or directory not found on TiKV Node (store id: 1). workaround: please ensure br and tikv nodes share a same storage and the user of br and tikv has same uid.\ngithub.com/pingcap/tidb/br/pkg/backup.(*Client).OnBackupResponse\n\t/workspace/source/tidb/br/pkg/backup/client.go:1213\ngithub.com/pingcap/tidb/br/pkg/backup.(*Client).RunLoop\n\t/workspace/source/tidb/br/pkg/backup/client.go:341\ngithub.com/pingcap/tidb/br/pkg/backup.(*Client).BackupRanges\n\t/workspace/source/tidb/br/pkg/backup/client.go:1126\ngithub.com/pingcap/tidb/br/pkg/task.RunBackup\n\t/workspace/source/tidb/br/pkg/task/backup.go:689\nmain.runBackupCommand\n\t/workspace/source/tidb/br/cmd/br/backup.go:57\nmain.newFullBackupCommand.func1\n\t/workspace/source/tidb/br/cmd/br/backup.go:149\ngithub.com/spf13/cobra.(*Command).execute\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:985\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117\ngithub.com/spf13/cobra.(*Command).Execute\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041\nmain.main\n\t/workspace/source/tidb/br/cmd/br/main.go:36\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:272\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1700"] 
[stack="main.runBackupCommand\n\t/workspace/source/tidb/br/cmd/br/backup.go:58\nmain.newFullBackupCommand.func1\n\t/workspace/source/tidb/br/cmd/br/backup.go:149\ngithub.com/spf13/cobra.(*Command).execute\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:985\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117\ngithub.com/spf13/cobra.(*Command).Execute\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041\nmain.main\n\t/workspace/source/tidb/br/cmd/br/main.go:36\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:272"]
I0523 14:23:02.955467       9 backup.go:334] [2025/05/23 14:23:02.955 +08:00] [ERROR] [main.go:38] ["br failed"] [error="error happen in store 1: File or directory not found on TiKV Node (store id: 1). workaround: please ensure br and tikv nodes share a same storage and the user of br and tikv has same uid.: [BR:KV:ErrKVStorage]tikv storage occur I/O error"] [errorVerbose="[BR:KV:ErrKVStorage]tikv storage occur I/O error\nerror happen in store 1: File or directory not found on TiKV Node (store id: 1). workaround: please ensure br and tikv nodes share a same storage and the user of br and tikv has same uid.\ngithub.com/pingcap/tidb/br/pkg/backup.(*Client).OnBackupResponse\n\t/workspace/source/tidb/br/pkg/backup/client.go:1213\ngithub.com/pingcap/tidb/br/pkg/backup.(*Client).RunLoop\n\t/workspace/source/tidb/br/pkg/backup/client.go:341\ngithub.com/pingcap/tidb/br/pkg/backup.(*Client).BackupRanges\n\t/workspace/source/tidb/br/pkg/backup/client.go:1126\ngithub.com/pingcap/tidb/br/pkg/task.RunBackup\n\t/workspace/source/tidb/br/pkg/task/backup.go:689\nmain.runBackupCommand\n\t/workspace/source/tidb/br/cmd/br/backup.go:57\nmain.newFullBackupCommand.func1\n\t/workspace/source/tidb/br/cmd/br/backup.go:149\ngithub.com/spf13/cobra.(*Command).execute\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:985\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117\ngithub.com/spf13/cobra.(*Command).Execute\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041\nmain.main\n\t/workspace/source/tidb/br/cmd/br/main.go:36\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:272\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1700"] [stack="main.main\n\t/workspace/source/tidb/br/cmd/br/main.go:38\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:272"]
I0523 14:23:02.976074       9 backup.go:334]
I0523 14:23:02.976155       9 backup.go:344] Error: error happen in store 1: File or directory not found on TiKV Node (store id: 1). workaround: please ensure br and tikv nodes share a same storage and the user of br and tikv has same uid.: [BR:KV:ErrKVStorage]tikv storage occur I/O error
E0523 14:23:02.976438       9 manager.go:382] backup cluster tidb-cluster/basic-tidb-cluster-full-20240801123456 data failed, err: cluster tidb-cluster/basic-tidb-cluster-full-20240801123456, wait pipe message failed, errMsg [2025/05/23 14:23:02.948 +08:00] [ERROR] [client.go:117] ["store backup failed"] [round=1] [storeID=1] [error="rpc error: code = Canceled desc = context canceled"] [stack="github.com/pingcap/tidb/br/pkg/backup.(*MainBackupSender).SendAsync.func1\n\t/workspace/source/tidb/br/pkg/backup/client.go:117"]
[2025/05/23 14:23:02.948 +08:00] [ERROR] [client.go:117] ["store backup failed"] [round=1] [storeID=1002] [error="rpc error: code = Canceled desc = context canceled"] [stack="github.com/pingcap/tidb/br/pkg/backup.(*MainBackupSender).SendAsync.func1\n\t/workspace/source/tidb/br/pkg/backup/client.go:117"]
[2025/05/23 14:23:02.948 +08:00] [ERROR] [client.go:117] ["store backup failed"] [round=1] [storeID=1001] [error="rpc error: code = Canceled desc = context canceled"] [stack="github.com/pingcap/tidb/br/pkg/backup.(*MainBackupSender).SendAsync.func1\n\t/workspace/source/tidb/br/pkg/backup/client.go:117"]
[2025/05/23 14:23:02.954 +08:00] [ERROR] [backup.go:58] ["failed to backup"] [error="error happen in store 1: File or directory not found on TiKV Node (store id: 1). workaround: please ensure br and tikv nodes share a same storage and the user of br and tikv has same uid.: [BR:KV:ErrKVStorage]tikv storage occur I/O error"] [errorVerbose="[BR:KV:ErrKVStorage]tikv storage occur I/O error\nerror happen in store 1: File or directory not found on TiKV Node (store id: 1). workaround: please ensure br and tikv nodes share a same storage and the user of br and tikv has same uid.\ngithub.com/pingcap/tidb/br/pkg/backup.(*Client).OnBackupResponse\n\t/workspace/source/tidb/br/pkg/backup/client.go:1213\ngithub.com/pingcap/tidb/br/pkg/backup.(*Client).RunLoop\n\t/workspace/source/tidb/br/pkg/backup/client.go:341\ngithub.com/pingcap/tidb/br/pkg/backup.(*Client).BackupRanges\n\t/workspace/source/tidb/br/pkg/backup/client.go:1126\ngithub.com/pingcap/tidb/br/pkg/task.RunBackup\n\t/workspace/source/tidb/br/pkg/task/backup.go:689\nmain.runBackupCommand\n\t/workspace/source/tidb/br/cmd/br/backup.go:57\nmain.newFullBackupCommand.func1\n\t/workspace/source/tidb/br/cmd/br/backup.go:149\ngithub.com/spf13/cobra.(*Command).execute\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:985\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117\ngithub.com/spf13/cobra.(*Command).Execute\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041\nmain.main\n\t/workspace/source/tidb/br/cmd/br/main.go:36\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:272\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1700"] 
[stack="main.runBackupCommand\n\t/workspace/source/tidb/br/cmd/br/backup.go:58\nmain.newFullBackupCommand.func1\n\t/workspace/source/tidb/br/cmd/br/backup.go:149\ngithub.com/spf13/cobra.(*Command).execute\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:985\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117\ngithub.com/spf13/cobra.(*Command).Execute\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041\nmain.main\n\t/workspace/source/tidb/br/cmd/br/main.go:36\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:272"]
[2025/05/23 14:23:02.955 +08:00] [ERROR] [main.go:38] ["br failed"] [error="error happen in store 1: File or directory not found on TiKV Node (store id: 1). workaround: please ensure br and tikv nodes share a same storage and the user of br and tikv has same uid.: [BR:KV:ErrKVStorage]tikv storage occur I/O error"] [errorVerbose="[BR:KV:ErrKVStorage]tikv storage occur I/O error\nerror happen in store 1: File or directory not found on TiKV Node (store id: 1). workaround: please ensure br and tikv nodes share a same storage and the user of br and tikv has same uid.\ngithub.com/pingcap/tidb/br/pkg/backup.(*Client).OnBackupResponse\n\t/workspace/source/tidb/br/pkg/backup/client.go:1213\ngithub.com/pingcap/tidb/br/pkg/backup.(*Client).RunLoop\n\t/workspace/source/tidb/br/pkg/backup/client.go:341\ngithub.com/pingcap/tidb/br/pkg/backup.(*Client).BackupRanges\n\t/workspace/source/tidb/br/pkg/backup/client.go:1126\ngithub.com/pingcap/tidb/br/pkg/task.RunBackup\n\t/workspace/source/tidb/br/pkg/task/backup.go:689\nmain.runBackupCommand\n\t/workspace/source/tidb/br/cmd/br/backup.go:57\nmain.newFullBackupCommand.func1\n\t/workspace/source/tidb/br/cmd/br/backup.go:149\ngithub.com/spf13/cobra.(*Command).execute\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:985\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117\ngithub.com/spf13/cobra.(*Command).Execute\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041\nmain.main\n\t/workspace/source/tidb/br/cmd/br/main.go:36\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:272\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1700"] [stack="main.main\n\t/workspace/source/tidb/br/cmd/br/main.go:38\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:272"]
Error: error happen in store 1: File or directory not found on TiKV Node (store id: 1). workaround: please ensure br and tikv nodes share a same storage and the user of br and tikv has same uid.: [BR:KV:ErrKVStorage]tikv storage occur I/O error
, err: exit status 1
I0523 14:23:03.000135       9 backup_status_updater.go:128] Backup: [tidb-cluster/basic-tidb-cluster-full-20240801123456] updated successfully
error: cluster tidb-cluster/basic-tidb-cluster-full-20240801123456, wait pipe message failed, errMsg [2025/05/23 14:23:02.948 +08:00] [ERROR] [client.go:117] ["store backup failed"] [round=1] [storeID=1] [error="rpc error: code = Canceled desc = context canceled"] [stack="github.com/pingcap/tidb/br/pkg/backup.(*MainBackupSender).SendAsync.func1\n\t/workspace/source/tidb/br/pkg/backup/client.go:117"]
[2025/05/23 14:23:02.948 +08:00] [ERROR] [client.go:117] ["store backup failed"] [round=1] [storeID=1002] [error="rpc error: code = Canceled desc = context canceled"] [stack="github.com/pingcap/tidb/br/pkg/backup.(*MainBackupSender).SendAsync.func1\n\t/workspace/source/tidb/br/pkg/backup/client.go:117"]
[2025/05/23 14:23:02.948 +08:00] [ERROR] [client.go:117] ["store backup failed"] [round=1] [storeID=1001] [error="rpc error: code = Canceled desc = context canceled"] [stack="github.com/pingcap/tidb/br/pkg/backup.(*MainBackupSender).SendAsync.func1\n\t/workspace/source/tidb/br/pkg/backup/client.go:117"]
[2025/05/23 14:23:02.954 +08:00] [ERROR] [backup.go:58] ["failed to backup"] [error="error happen in store 1: File or directory not found on TiKV Node (store id: 1). workaround: please ensure br and tikv nodes share a same storage and the user of br and tikv has same uid.: [BR:KV:ErrKVStorage]tikv storage occur I/O error"] [errorVerbose="[BR:KV:ErrKVStorage]tikv storage occur I/O error\nerror happen in store 1: File or directory not found on TiKV Node (store id: 1). workaround: please ensure br and tikv nodes share a same storage and the user of br and tikv has same uid.\ngithub.com/pingcap/tidb/br/pkg/backup.(*Client).OnBackupResponse\n\t/workspace/source/tidb/br/pkg/backup/client.go:1213\ngithub.com/pingcap/tidb/br/pkg/backup.(*Client).RunLoop\n\t/workspace/source/tidb/br/pkg/backup/client.go:341\ngithub.com/pingcap/tidb/br/pkg/backup.(*Client).BackupRanges\n\t/workspace/source/tidb/br/pkg/backup/client.go:1126\ngithub.com/pingcap/tidb/br/pkg/task.RunBackup\n\t/workspace/source/tidb/br/pkg/task/backup.go:689\nmain.runBackupCommand\n\t/workspace/source/tidb/br/cmd/br/backup.go:57\nmain.newFullBackupCommand.func1\n\t/workspace/source/tidb/br/cmd/br/backup.go:149\ngithub.com/spf13/cobra.(*Command).execute\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:985\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117\ngithub.com/spf13/cobra.(*Command).Execute\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041\nmain.main\n\t/workspace/source/tidb/br/cmd/br/main.go:36\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:272\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1700"] 
[stack="main.runBackupCommand\n\t/workspace/source/tidb/br/cmd/br/backup.go:58\nmain.newFullBackupCommand.func1\n\t/workspace/source/tidb/br/cmd/br/backup.go:149\ngithub.com/spf13/cobra.(*Command).execute\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:985\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117\ngithub.com/spf13/cobra.(*Command).Execute\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041\nmain.main\n\t/workspace/source/tidb/br/cmd/br/main.go:36\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:272"]
[2025/05/23 14:23:02.955 +08:00] [ERROR] [main.go:38] ["br failed"] [error="error happen in store 1: File or directory not found on TiKV Node (store id: 1). workaround: please ensure br and tikv nodes share a same storage and the user of br and tikv has same uid.: [BR:KV:ErrKVStorage]tikv storage occur I/O error"] [errorVerbose="[BR:KV:ErrKVStorage]tikv storage occur I/O error\nerror happen in store 1: File or directory not found on TiKV Node (store id: 1). workaround: please ensure br and tikv nodes share a same storage and the user of br and tikv has same uid.\ngithub.com/pingcap/tidb/br/pkg/backup.(*Client).OnBackupResponse\n\t/workspace/source/tidb/br/pkg/backup/client.go:1213\ngithub.com/pingcap/tidb/br/pkg/backup.(*Client).RunLoop\n\t/workspace/source/tidb/br/pkg/backup/client.go:341\ngithub.com/pingcap/tidb/br/pkg/backup.(*Client).BackupRanges\n\t/workspace/source/tidb/br/pkg/backup/client.go:1126\ngithub.com/pingcap/tidb/br/pkg/task.RunBackup\n\t/workspace/source/tidb/br/pkg/task/backup.go:689\nmain.runBackupCommand\n\t/workspace/source/tidb/br/cmd/br/backup.go:57\nmain.newFullBackupCommand.func1\n\t/workspace/source/tidb/br/cmd/br/backup.go:149\ngithub.com/spf13/cobra.(*Command).execute\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:985\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1117\ngithub.com/spf13/cobra.(*Command).Execute\n\t/root/go/pkg/mod/github.com/spf13/cobra@v1.8.1/command.go:1041\nmain.main\n\t/workspace/source/tidb/br/cmd/br/main.go:36\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:272\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1700"] [stack="main.main\n\t/workspace/source/tidb/br/cmd/br/main.go:38\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:272"]
Error: error happen in store 1: File or directory not found on TiKV Node (store id: 1). workaround: please ensure br and tikv nodes share a same storage and the user of br and tikv has same uid.: [BR:KV:ErrKVStorage]tikv storage occur I/O error
, err: exit status 1
Sleeping for 10 seconds before exit...

【Resource configuration】Go to TiDB Dashboard - Cluster Info - Hosts and take a screenshot of that page
【Attachments: screenshots / logs / monitoring】

The `--storage` flag here points at the backup destination. You configured `local` storage at `/backup`, but that path is only mounted inside the backup pod, not on the TiKV nodes. With local storage, BR requires the BR pod and every TiKV node to share the same storage mounted at the same path (and with a matching UID), exactly as the error's workaround message says. Since TiKV has no `/backup` directory to write its SST files into, it reports "File or directory not found on TiKV Node (store id: 1)". The repeated "context canceled" warnings above are just the other stores aborting after the first failure; the I/O error is the root cause.

The simpler fix is to use S3-compatible object storage instead.
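A minimal sketch of an S3-based Backup CR, following the same conventions as the CRs above. The bucket name, endpoint, and Secret name (`my-backup-bucket`, `minio.minio.svc:9000`, `s3-secret`) are placeholders to adapt to your environment:

```yaml
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: basic-backup-s3
  namespace: tidb-cluster
spec:
  backupMode: snapshot
  backupType: full
  cleanPolicy: Retain
  toolImage: m.daocloud.io/docker.io/pingcap/br:v8.5.0
  br:
    cluster: basic
    clusterNamespace: tidb-cluster
    sendCredToTikv: true        # each TiKV node writes to S3 directly with these credentials
  s3:
    provider: aws                            # "aws" works for any S3-compatible API
    endpoint: http://minio.minio.svc:9000    # placeholder; omit for real AWS S3
    secretName: s3-secret                    # Secret holding access_key / secret_key
    bucket: my-backup-bucket
    prefix: tidb-cluster/basic
```

The referenced Secret is typically created with something like `kubectl create secret generic s3-secret --from-literal=access_key=... --from-literal=secret_key=... -n tidb-cluster`. Because every TiKV node uploads to the bucket itself, no shared PVC between BR and TiKV is needed, which sidesteps this error entirely.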