BR incremental restore fails with invalid memory address or nil pointer dereference

[TiDB Environment] Production
[TiDB Version] v7.1.1
[Reproduction Steps] Restore incremental data with BR
[Problem: symptoms and impact] The restore reports "invalid memory address or nil pointer dereference" and exits
[Resource Configuration]
[Attachments: screenshots/logs/monitoring]
Detail BR log in /tmp/br.log.2024-07-17T15.55.05+0800
[2024/07/17 15:55:11.891 +08:00] [INFO] [collector.go:77] ["Full Restore failed summary"] [total-ranges=0] [ranges-succeed=0] [ranges-failed=0]
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x3f566a4]

goroutine 1 [running]:
github.com/pingcap/tidb/executor.(*Compiler).Compile.func1()
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/executor/compiler.go:54 +0x445
panic({0x5313660, 0x8796a00})
/usr/local/go/src/runtime/panic.go:884 +0x213
github.com/pingcap/tidb/privilege/privileges.(*Handle).Get(...)
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/privilege/privileges/cache.go:1646
github.com/pingcap/tidb/privilege/privileges.(*UserPrivileges).RequestVerificationWithUser(0xc0031cc700, {0xc00e9afa90, 0x9}, {0xc001d7c360, 0xd}, {0x0, 0x0}, 0xc00d38c510?, 0xc00d01f6e0)
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/privilege/privileges/privileges.go:208 +0xc4
github.com/pingcap/tidb/planner/core.(*PlanBuilder).BuildDataSourceFromView(0xc00c8716c0, {0x6244578, 0xc00e9b6930}, {{0xc000d344dc?, 0x9?}, {0xc000d344dc?, 0x9?}}, 0xc00d0229c0, 0xc00e9b6e40, 0xc00e9b6e70)
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/planner/core/logical_plan_builder.go:5236 +0x160c
github.com/pingcap/tidb/planner/core.(*PlanBuilder).buildDataSource(0xc00c8716c0, {0x6244578, 0xc00e9b6930}, 0xc00e2c7110, 0xc00dd09d30)
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/planner/core/logical_plan_builder.go:4635 +0x129e
github.com/pingcap/tidb/planner/core.(*PlanBuilder).buildResultSetNode(0xc00c8716c0, {0x6244578, 0xc00e9b6930}, {0x625a658?, 0xc00dd09ce0?}, 0x50?)
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/planner/core/logical_plan_builder.go:385 +0x1aa
github.com/pingcap/tidb/planner/core.(*PlanBuilder).buildJoin(0x0?, {0x6244578?, 0xc00e9b6930?}, 0xb2?)
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/planner/core/logical_plan_builder.go:828 +0x6e5
github.com/pingcap/tidb/planner/core.(*PlanBuilder).buildResultSetNode(0x0?, {0x6244578?, 0xc00e9b6930?}, {0x6259268?, 0xc00e9ba7e0?}, 0x0?)
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/planner/core/logical_plan_builder.go:372 +0x9d
github.com/pingcap/tidb/planner/core.(*PlanBuilder).buildTableRefs(0xc00c8716c0?, {0x6244578?, 0xc00e9b6930?}, 0x1c0733a?)
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/planner/core/logical_plan_builder.go:364 +0x85
github.com/pingcap/tidb/planner/core.(*PlanBuilder).buildSelect(0xc00c8716c0, {0x6244578, 0xc00e9b6930}, 0xc00e98a900)
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/planner/core/logical_plan_builder.go:4035 +0x6e5
github.com/pingcap/tidb/planner/core.(*PlanBuilder).buildIntersect(0x0?, {0x6244578, 0xc00e9b6930}, {0xc00d1772b8, 0x1, 0x0?})
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/planner/core/logical_plan_builder.go:1858 +0x9f
github.com/pingcap/tidb/planner/core.(*PlanBuilder).buildSetOpr(0xc00c8716c0, {0x6244578, 0xc00e9b6930}, 0xc00dd09f10)
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/planner/core/logical_plan_builder.go:1770 +0x2cb
github.com/pingcap/tidb/planner/core.(*PlanBuilder).Build(0x6252330?, {0x6244578?, 0xc00e9b6930?}, {0x62545e0?, 0xc00dd09f10?})
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/planner/core/planbuilder.go:821 +0x39d
github.com/pingcap/tidb/planner/core.(*PlanBuilder).buildDDL(0xc00c8716c0, {0x6244578, 0xc00e9b6930}, {0x625dd20?, 0xc00e2c8d10?})
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/planner/core/planbuilder.go:4752 +0x1398
github.com/pingcap/tidb/planner/core.(*PlanBuilder).Build(0xc00c8716c0, {0x6244578, 0xc00e9b6930}, {0x6252330?, 0xc00e2c8d10?})
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/planner/core/planbuilder.go:842 +0x7f0
github.com/pingcap/tidb/planner.buildLogicalPlan({0x6244578, 0xc00e9b6930}, {0x62b36e8?, 0xc0008e7180}, {0x6252330, 0xc00e2c8d10}, 0xc00c8716c0)
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/planner/optimize.go:558 +0x12c
github.com/pingcap/tidb/planner.optimize({0x6244578, 0xc00e9b6930}, {0x62b36e8, 0xc0008e7180}, {0x6252330?, 0xc00e2c8d10?}, {0x627a5d0, 0xc00e9b6a50})
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/planner/optimize.go:479 +0x46f
github.com/pingcap/tidb/planner.Optimize({0x6244578, 0xc00e9b6930}, {0x62b36e8, 0xc0008e7180}, {0x6252330, 0xc00e2c8d10}, {0x627a5d0, 0xc00e9b6a50})
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/planner/optimize.go:337 +0x16fe
github.com/pingcap/tidb/executor.(*Compiler).Compile(0xc00d179c00, {0x6244578?, 0xc00e9b6930?}, {0x62588c8?, 0xc00e2c8d10})
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/executor/compiler.go:98 +0x459
github.com/pingcap/tidb/session.(*session).ExecuteStmt(0xc0008e7180, {0x6244578?, 0xc00e9b6930?}, {0x62588c8?, 0xc00e2c8d10?})
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/session/session.go:2191 +0x725
github.com/pingcap/tidb/session.(*session).ExecuteInternal(0xc0008e7180, {0x6244578, 0xc00e9b6930}, {0xc000d34480, 0x107}, {0x0, 0x0, 0x0})
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/session/session.go:1660 +0x24f
github.com/pingcap/tidb/br/pkg/gluetidb.(*tidbSession).ExecuteInternal(0xc0031ca3c0, {0x62444d0?, 0xc0006dce10?}, {0xc000d34480, 0x107}, {0x0, 0x0, 0x0})
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/br/pkg/gluetidb/glue.go:175 +0xf3
github.com/pingcap/tidb/br/pkg/gluetidb.(*tidbSession).Execute(0x5996e01?, {0x62444d0?, 0xc0006dce10?}, {0xc000d34480?, 0x1?})
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/br/pkg/gluetidb/glue.go:170 +0x31
github.com/pingcap/tidb/br/pkg/restore.(*DB).ExecDDL(0xc0016228a0, {0x62444d0, 0xc0006dce10}, 0xc00c91cc60)
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/br/pkg/restore/db.go:129 +0x4a9
github.com/pingcap/tidb/br/pkg/restore.(*Client).ExecDDLs(0xc003cc7200, {0x62444d0, 0xc0006dce10}, {0xc00ca32870?, 0x12, 0x12?})
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/br/pkg/restore/client.go:1202 +0x105
github.com/pingcap/tidb/br/pkg/task.runRestore({0x62444d0, 0xc000b12eb0}, {0x625b208, 0x90a3130}, {0x59a610a, 0xc}, 0xc000dd3200)
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/br/pkg/task/restore.go:840 +0x1985
github.com/pingcap/tidb/br/pkg/task.RunRestore({0x62444d0, 0xc000b12eb0}, {0x625b208, 0x90a3130}, {0x59a610a, 0xc}, 0xc000dd3200)
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/br/pkg/task/restore.go:595 +0x286
main.runRestoreCommand(0xc000d31200, {0x59a610a, 0xc})
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/br/cmd/br/restore.go:63 +0x6ae
main.newFullRestoreCommand.func1(0xc000d31200?, {0xc0004f69c0?, 0x4?, 0x4?})
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/br/cmd/br/restore.go:169 +0x25
github.com/spf13/cobra.(*Command).execute(0xc000d31200, {0xc0000740a0, 0x4, 0x4})
/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:916 +0x862
github.com/spf13/cobra.(*Command).ExecuteC(0xc000d2a000)
/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:1044 +0x3bd
github.com/spf13/cobra.(*Command).Execute(...)
/go/pkg/mod/github.com/spf13/cobra@v1.6.1/command.go:968
main.main()
/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/br/cmd/br/main.go:58 +0x35c
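Reading the trace bottom-up: br replays the backed-up DDLs through an embedded TiDB session (RunRestore → ExecDDLs → ExecDDL → ExecuteInternal), the planner reaches BuildDataSourceFromView for a view's underlying SELECT, and the privilege check then calls privileges.(*Handle).Get through a handle that is apparently nil in BR's internal session (note addr=0x0 in the SIGSEGV line). A minimal illustration of that failure shape (plain Python, not TiDB's actual code):

```python
# Illustrative only (not TiDB code): the restore glue builds an internal
# session without a privilege manager, and replaying a view-related DDL
# makes the planner's privilege check dereference that missing handle.
class Session:
    def __init__(self):
        self.privilege_handle = None  # never initialized by the restore path

def request_verification(session):
    # Same shape as privileges.(*Handle).Get(...) in the trace above:
    # a method call through a nil handle.
    return session.privilege_handle.get()

try:
    request_verification(Session())
except AttributeError as e:
    print("panic analogue:", e)
```

If that reading is right, the panic is tied to a view definition among the incremental DDLs rather than to the volume of data being restored.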

Checking the BR log, these errors appear at the end:
[2024/07/17 15:14:30.679 +08:00] [INFO] [restore.go:781] ["start to remove gc-safepoint keeper"]
[2024/07/17 15:14:30.681 +08:00] [INFO] [restore.go:791] ["finish removing gc-safepoint keeper"]
[2024/07/17 15:14:30.681 +08:00] [INFO] [restore.go:745] ["start to remove the pd scheduler"]
[2024/07/17 15:14:30.681 +08:00] [INFO] [client.go:1540] ["stop automatic switch to import mode"]
[2024/07/17 15:14:30.826 +08:00] [INFO] [pd.go:516] ["resume scheduler"] [schedulers="[balance-hot-region-scheduler,balance-leader-scheduler,balance-region-scheduler]"]
[2024/07/17 15:14:30.826 +08:00] [INFO] [pd.go:502] ["exit pause scheduler and configs successful"]
[2024/07/17 15:14:30.827 +08:00] [INFO] [pd.go:536] ["resume scheduler successful"] [scheduler=balance-hot-region-scheduler]
[2024/07/17 15:14:30.827 +08:00] [INFO] [pd.go:536] ["resume scheduler successful"] [scheduler=balance-leader-scheduler]
[2024/07/17 15:14:30.828 +08:00] [INFO] [pd.go:536] ["resume scheduler successful"] [scheduler=balance-region-scheduler]
[2024/07/17 15:14:30.828 +08:00] [INFO] [pd.go:636] ["restoring config"] [config="{"enable-location-replacement":"true","leader-schedule-limit":4,"max-merge-region-keys":200000,"max-merge-region-size":20,"max-pending-peer-count":64,"max-snapshot-count":64,"region-schedule-limit":2048}"]
[2024/07/17 15:14:30.839 +08:00] [INFO] [restore.go:749] ["finish removing pd scheduler"]
[2024/07/17 15:14:30.839 +08:00] [INFO] [client.go:500] ["Restore client closed"]
[2024/07/17 15:14:31.714 +08:00] [INFO] [manager.go:282] ["failed to campaign"] ["owner info"="[ddl] /tidb/ddl/fg/owner ownerManager 48098e9f-0139-4bf4-9016-d3ab428976e8"] [error="context canceled"]
[2024/07/17 15:14:31.714 +08:00] [INFO] [manager.go:263] ["break campaign loop, context is done"] ["owner info"="[ddl] /tidb/ddl/fg/owner ownerManager 48098e9f-0139-4bf4-9016-d3ab428976e8"]
[2024/07/17 15:14:31.714 +08:00] [INFO] [manager.go:307] ["revoke session"] ["owner info"="[ddl] /tidb/ddl/fg/owner ownerManager 48098e9f-0139-4bf4-9016-d3ab428976e8"]
[2024/07/17 15:14:31.716 +08:00] [INFO] [ddl_workerpool.go:82] ["[ddl] closing workerPool"]
[2024/07/17 15:14:31.716 +08:00] [INFO] [ddl_workerpool.go:82] ["[ddl] closing workerPool"]
[2024/07/17 15:14:31.716 +08:00] [INFO] [delete_range.go:150] ["[ddl] closing delRange"]
[2024/07/17 15:14:31.716 +08:00] [INFO] [session_pool.go:98] ["[ddl] closing session pool"]
[2024/07/17 15:14:31.716 +08:00] [INFO] [ddl.go:867] ["[ddl] DDL closed"] [ID=48098e9f-0139-4bf4-9016-d3ab428976e8] ["take time"=876.393245ms]
[2024/07/17 15:14:31.716 +08:00] [INFO] [ddl.go:707] ["[ddl] stop DDL"] [ID=48098e9f-0139-4bf4-9016-d3ab428976e8]
[2024/07/17 15:14:31.718 +08:00] [INFO] [wait_group_wrapper.go:137] ["background process exited"] [source=domain] [process=mdlCheckLoop]
[2024/07/17 15:14:31.718 +08:00] [INFO] [domain.go:856] ["loadSchemaInLoop exited."]
[2024/07/17 15:14:31.718 +08:00] [INFO] [wait_group_wrapper.go:137] ["background process exited"] [source=domain] [process=loadSchemaInLoop]
[2024/07/17 15:14:31.718 +08:00] [INFO] [domain.go:2677] ["serverIDKeeper exited."]
[2024/07/17 15:14:31.718 +08:00] [INFO] [domain.go:663] ["infoSyncerKeeper exited."]
[2024/07/17 15:14:31.718 +08:00] [INFO] [manager.go:282] ["failed to campaign"] ["owner info"="[stats] /tidb/stats/owner ownerManager 48098e9f-0139-4bf4-9016-d3ab428976e8"] [error="lost watcher waiting for delete"]
[2024/07/17 15:14:31.718 +08:00] [INFO] [wait_group_wrapper.go:137] ["background process exited"] [source=domain] [process=infoSyncerKeeper]
[2024/07/17 15:14:31.718 +08:00] [INFO] [advancer.go:272] ["[log backup advancer] Meet task event"] [event="Err(, err = EOF)"]
[2024/07/17 15:14:31.718 +08:00] [INFO] [manager.go:282] ["failed to campaign"] ["owner info"="[stats] /tidb/stats/owner ownerManager 48098e9f-0139-4bf4-9016-d3ab428976e8"] [error="lost watcher waiting for delete"]
[2024/07/17 15:14:31.718 +08:00] [INFO] [manager.go:282] ["failed to campaign"] ["owner info"="[log-backup] /tidb/br-stream/owner ownerManager f5594b7d-fdb1-4f05-b65e-3c598d7072c3"] [error="lost watcher waiting for delete"]
[2024/07/17 15:14:31.718 +08:00] [ERROR] [advancer.go:275] ["listen task meet error, would reopen."] [error=EOF] [stack="github.com/pingcap/tidb/br/pkg/streamhelper.(*CheckpointAdvancer).StartTaskListener.func1\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/br/pkg/streamhelper/advancer.go:275"]
[2024/07/17 15:14:31.718 +08:00] [INFO] [advancer.go:278] ["[log backup advancer] Task watcher exits due to some error."] [error=EOF]
[2024/07/17 15:14:31.718 +08:00] [INFO] [domain.go:689] ["globalConfigSyncerKeeper exited."]
[2024/07/17 15:14:31.718 +08:00] [INFO] [wait_group_wrapper.go:137] ["background process exited"] [source=domain] [process=globalConfigSyncerKeeper]
[2024/07/17 15:14:31.718 +08:00] [INFO] [manager.go:282] ["failed to campaign"] ["owner info"="[log-backup] /tidb/br-stream/owner ownerManager f5594b7d-fdb1-4f05-b65e-3c598d7072c3"] [error="context canceled"]
[2024/07/17 15:14:31.718 +08:00] [INFO] [manager.go:247] ["etcd session is done, creates a new one"] ["owner info"="[log-backup] /tidb/br-stream/owner ownerManager f5594b7d-fdb1-4f05-b65e-3c598d7072c3"]
[2024/07/17 15:14:31.718 +08:00] [INFO] [manager.go:251] ["break campaign loop, NewSession failed"] ["owner info"="[log-backup] /tidb/br-stream/owner ownerManager f5594b7d-fdb1-4f05-b65e-3c598d7072c3"] [error="context canceled"] [errorVerbose="context canceled\ngithub.com/pingcap/errors.AddStack\n\t/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20221009092201-b66cddb77c32/errors.go:174\ngithub.com/pingcap/errors.Trace\n\t/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20221009092201-b66cddb77c32/juju_adaptor.go:15\ngithub.com/pingcap/tidb/util.contextDone\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/util/etcd.go:90\ngithub.com/pingcap/tidb/util.NewSession\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/util/etcd.go:50\ngithub.com/pingcap/tidb/owner.(*ownerManager).campaignLoop\n\t/home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/br/owner/manager.go:249\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1598"]
[2024/07/17 15:14:31.718 +08:00] [INFO] [owner_daemon.go:87] ["daemon loop exits"] [id=f5594b7d-fdb1-4f05-b65e-3c598d7072c3] [daemon-id=LogBackup::Advancer]
[2024/07/17 15:14:31.718 +08:00] [INFO] [wait_group_wrapper.go:137] ["background process exited"] [source=domain] [process=logBackupAdvancer]
[2024/07/17 15:14:31.718 +08:00] [INFO] [domain.go:635] ["topNSlowQueryLoop exited."]
[2024/07/17 15:14:31.718 +08:00] [INFO] [domain.go:1249] ["closestReplicaReadCheckLoop exited."]
[2024/07/17 15:14:31.718 +08:00] [INFO] [manager.go:282] ["failed to campaign"] ["owner info"="[stats] /tidb/stats/owner ownerManager 48098e9f-0139-4bf4-9016-d3ab428976e8"] [error="rpc error: code = Unavailable desc = error reading from server: read tcp 172.20.1.128:37858->172.20.1.176:2379: use of closed network connection"]
[2024/07/17 15:14:31.718 +08:00] [INFO] [wait_group_wrapper.go:137] ["background process exited"] [source=domain] [process=closestReplicaReadCheckLoop]
[2024/07/17 15:14:31.718 +08:00] [INFO] [wait_group_wrapper.go:137] ["background process exited"] [source=domain] [process=topNSlowQueryLoop]
[2024/07/17 15:14:31.718 +08:00] [INFO] [manager.go:247] ["etcd session is done, creates a new one"] ["owner info"="[stats] /tidb/stats/owner ownerManager 48098e9f-0139-4bf4-9016-d3ab428976e8"]
[2024/07/17 15:14:31.718 +08:00] [INFO] [manager.go:282] ["failed to campaign"] ["owner info"="[stats] /tidb/stats/owner ownerManager 48098e9f-0139-4bf4-9016-d3ab428976e8"] [error="rpc error: code = Unavailable desc = error reading from server: read tcp 172.20.1.128:59694->172.20.1.175:2379: use of closed network connection"]
[2024/07/17 15:14:31.718 +08:00] [INFO] [domain.go:1007] ["domain closed"] ["take time"=878.723233ms]
[2024/07/17 15:14:31.718 +08:00] [INFO] [manager.go:247] ["etcd session is done, creates a new one"] ["owner info"="[stats] /tidb/stats/owner ownerManager 48098e9f-0139-4bf4-9016-d3ab428976e8"]
[2024/07/17 15:14:31.718 +08:00] [INFO] [tso_client.go:134] ["closing tso client"]
[2024/07/17 15:14:31.718 +08:00] [INFO] [tso_dispatcher.go:375] ["[tso] stop fetching the pending tso requests due to context canceled"] [dc-location=global]

Can anyone help explain why? CDC has been failing constantly lately after previously working fine. Before the problems started, we made a change to the single-row storage size limit, as follows:
MySQL [(none)]> show config where name like '%txn-entry-size-limit%';
+------+----------------------------------+----------------------------------+----------+
| Type | Instance                         | Name                             | Value    |
+------+----------------------------------+----------------------------------+----------+
| tidb | a-risk-us-east-tidb-q-tidb3:4000 | performance.txn-entry-size-limit | 67108864 |
| tidb | a-risk-us-east-tidb-q-tidb1:4000 | performance.txn-entry-size-limit | 67108864 |
| tidb | a-risk-us-east-tidb-q-tidb2:4000 | performance.txn-entry-size-limit | 67108864 |
+------+----------------------------------+----------------------------------+----------+
MySQL [(none)]> show config where name like '%raft-entry-max-size%';
+---------+-------------------------------------+-----------------------------------------------+-------+
| Type    | Instance                            | Name                                          | Value |
+---------+-------------------------------------+-----------------------------------------------+-------+
| tikv    | a-risk-us-east-tidb-q-tikv2:20160   | raftstore.raft-entry-max-size                 | 64MiB |
| tikv    | a-risk-us-east-tidb-q-tikv3:20160   | raftstore.raft-entry-max-size                 | 64MiB |
| tiflash | a-risk-us-east-tidb-q-tiflash1:3930 | raftstore-proxy.raftstore.raft-entry-max-size | 64MiB |
| tikv    | a-risk-us-east-tidb-q-tikv1:20160   | raftstore.raft-entry-max-size                 | 64MiB |
+---------+-------------------------------------+-----------------------------------------------+-------+
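As a quick cross-check of the two outputs above (they use different units), both limits come out to the same 64 MiB, so the TiDB transaction-entry limit was raised exactly to the TiKV raft-entry cap:

```python
# performance.txn-entry-size-limit is reported in bytes,
# raftstore.raft-entry-max-size in MiB; both equal 64 MiB.
txn_entry_size_limit = 67108864          # bytes, from the tidb output above
raft_entry_max_size = 64 * 1024 * 1024   # 64MiB, from the tikv/tiflash output

assert txn_entry_size_limit == raft_entry_max_size
print(txn_entry_size_limit // 2**20, "MiB")  # → 64 MiB
```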

Downstream cluster hardware:
tidb 8c32g *3
tikv 8c64g *3
pd 4c8g *3
tiflash 4c16g *1

Initially the upstream-to-downstream replication failed, and pausing/resuming the changefeed several times did not help, so I restored the data with a BR full backup plus incrementals and then recreated the CDC changefeed, but it still does not work.
CDC now reports:
[
  {
    "id": "tidb-to-tidb2",
    "namespace": "default",
    "summary": {
      "state": "warning",
      "tso": 451193290156408852,
      "checkpoint": "2024-07-17 05:36:50.228",
      "error": {
        "time": "2024-07-17T16:39:54.378219492+08:00",
        "addr": "172.20.1.187:8300",
        "code": "CDC:ErrRedoWriterStopped",
        "message": "[CDC:ErrRedoWriterStopped]redo manager is closed"
      }
    }
  }
]
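Incidentally, the checkpoint string in that output is just the TSO's physical timestamp rendered in local time, so the changefeed has been stuck since 05:36 while the error was reported at 16:39. A pure-Python sketch, assuming the standard TiDB TSO layout (physical milliseconds shifted left 18 bits):

```python
import json
from datetime import datetime, timedelta, timezone

# The changefeed status pasted above, abridged to the relevant fields.
status = json.loads("""
[{"id": "tidb-to-tidb2", "namespace": "default",
  "summary": {"state": "warning", "tso": 451193290156408852,
    "checkpoint": "2024-07-17 05:36:50.228",
    "error": {"addr": "172.20.1.187:8300",
      "code": "CDC:ErrRedoWriterStopped",
      "message": "[CDC:ErrRedoWriterStopped]redo manager is closed"}}}]
""")

cf = status[0]
tso = cf["summary"]["tso"]

# A TiDB TSO packs a physical timestamp in its high bits: physical_ms = tso >> 18.
physical_ms = tso >> 18
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
checkpoint = (epoch + timedelta(milliseconds=physical_ms)).astimezone(
    timezone(timedelta(hours=8)))

print(cf["id"], cf["summary"]["state"], cf["summary"]["error"]["code"])
print(checkpoint.strftime("%Y-%m-%d %H:%M:%S.%f")[:-3])  # → 2024-07-17 05:36:50.228
```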
It looks like the downstream TiDB cannot be written to. How should I adjust this?

Did you do an upgrade at some point?

No upgrade was done. All components of both the upstream and downstream TiDB clusters are v7.1.1; only the CDC component was separately upgraded to v7.1.5 when the replication problems appeared.
As for the invalid memory address or nil pointer dereference during the BR restore, I suspect it is also caused by very large single rows. Details below.

Also, my original problem was CDC breaking, which is why I used BR incremental restore and hit the panic above. As for the replication itself, I have now recovered it by splitting the CDC task into several changefeeds, and confirmed that one particular database triggers the [CDC:ErrRedoWriterStopped]redo manager is closed error. That database previously required support for single rows up to 64 MB, which is why raftstore.raft-entry-max-size and performance.txn-entry-size-limit were both raised to 64 MB on the upstream and downstream clusters. I suspect that extremely large rows are what makes CDC fail.
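If large rows are the suspect, it may help to sanity-check how close the biggest rows get to that cap. A rough, hypothetical sketch (the key/value sizes are placeholders, and real entries also carry encoding overhead that is not modeled here):

```python
# Hypothetical check: a row's encoded key+value must fit within both
# performance.txn-entry-size-limit (TiDB) and raftstore.raft-entry-max-size
# (TiKV). Rows close to the cap leave no headroom for encoding overhead.
TXN_ENTRY_SIZE_LIMIT = 64 * 1024 * 1024   # bytes, as configured in this cluster

def entry_fits(key: bytes, value: bytes) -> bool:
    """Rough size check for one row mutation (ignores batching/overhead)."""
    return len(key) + len(value) <= TXN_ENTRY_SIZE_LIMIT

# A 63 MiB value still fits; a full 64 MiB value plus any key does not.
assert entry_fits(b"t\x80row1", b"x" * (63 * 1024 * 1024))
assert not entry_fits(b"t\x80row1", b"x" * (64 * 1024 * 1024))
print("ok")
```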

:thinking: That could well be the cause.