After setting up resource control, a plan bound to TiFlash no longer takes effect

【TiDB Environment】Production
【TiDB Version】7.5.1
【OS】CentOS 7.9
【Deployment】Cloud deployment (which cloud) / on-prem (machine spec, disk type)
【Cluster data volume】
【Cluster node count】

CREATE RESOURCE GROUP IF NOT EXISTS rg1 RU_PER_SEC = 2000;
ALTER USER rc_gu@'10.%' RESOURCE GROUP rg1;
select * from table_name where id > 0 and date >= 20250716 and date <= 20250716 and org_id = 763214 and task_type != 4 order by id asc limit 10000;

After these steps, the execution plan bound to TiFlash no longer takes effect. Even after switching the user's resource group back, the query still does not use the TiFlash plan.

ALTER USER rc_gu@'10.%' RESOURCE GROUP default;
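Before digging deeper, it may help to confirm what the optimizer actually sees. A sketch of the checks (statement availability can vary by TiDB version; table and user names are the ones from this post):

```sql
-- Confirm which resource group the user is currently assigned to
SHOW CREATE USER rc_gu@'10.%';

-- List bindings and check whether the one for this query is still enabled
SHOW GLOBAL BINDINGS;

-- Immediately after running the query in the same session,
-- check whether the plan actually came from a binding
SELECT @@last_plan_from_binding;

-- Inspect the chosen plan; a TiFlash plan shows tiflash task types
EXPLAIN SELECT * FROM table_name
WHERE id > 0 AND date >= 20250716 AND date <= 20250716
  AND org_id = 763214 AND task_type != 4
ORDER BY id ASC LIMIT 10000;
```

If `@@last_plan_from_binding` is 1 but the plan still reads from TiKV, the binding is matching but the TiFlash path is being rejected for another reason; if it is 0, the binding itself is no longer matching.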

What does your binding look like?

Please share an up-to-date reproduction method.

The table has about 80 million rows.
Original_sql: select * from db . table_name where date >= ? and date <= ? and task_type != ? order by send_ts desc , id desc limit …
Bind_sql: SELECT /*+ read_from_storage(tiflash[table_name]) */ * FROM db.table_name WHERE date >= 20250601 AND date <= 20250630 AND task_type != 4 ORDER BY send_ts DESC, id DESC LIMIT 0,10
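It is worth verifying that the hint comment in the binding is well-formed as `/*+ ... */` (forum formatting often strips the asterisks; a malformed comment would make TiDB ignore the hint). A sketch of recreating the binding, using the statements quoted above:

```sql
-- Drop the old binding (matched by the normalized form of the statement)
DROP GLOBAL BINDING FOR
  SELECT * FROM db.table_name
  WHERE date >= 20250601 AND date <= 20250630 AND task_type != 4
  ORDER BY send_ts DESC, id DESC LIMIT 0,10;

-- Recreate it with a properly formed optimizer-hint comment
CREATE GLOBAL BINDING FOR
  SELECT * FROM db.table_name
  WHERE date >= 20250601 AND date <= 20250630 AND task_type != 4
  ORDER BY send_ts DESC, id DESC LIMIT 0,10
USING
  SELECT /*+ read_from_storage(tiflash[table_name]) */ * FROM db.table_name
  WHERE date >= 20250601 AND date <= 20250630 AND task_type != 4
  ORDER BY send_ts DESC, id DESC LIMIT 0,10;
```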

Queries from this account only started working again after the resource group was deleted, but that seems to have triggered a bug that caused one TiDB node to restart:
goroutine 58590452583 [running]:
github.com/prometheus/client_golang/prometheus.(*CounterVec).WithLabelValues(...)
/go/pkg/mod/github.com/prometheus/client_golang@v1.18.0/prometheus/counter.go:284
github.com/tikv/pd/client/resource_group/controller.(*ResourceGroupsController).IsBackgroundRequest(0x1c08425?, {0x633aa98, 0xc001622460}, {0xc44a2324e2, 0x3}, {0xc263f2a2c0, 0xf})
/go/pkg/mod/github.com/tikv/pd/client@v0.0.0-20240210135946-3488a653ddd9/resource_group/controller/controller.go:566 +0xd7
github.com/tikv/client-go/v2/internal/client.buildResourceControlInterceptor.func1.1({0xc000a87548, 0x11}, 0xc0efbec4e0)
/go/pkg/mod/github.com/tikv/client-go/v2@v2.0.8-0.20240219030752-98ed21b132fa/internal/client/client_interceptor.go:104 +0xc3
github.com/tikv/client-go/v2/internal/client.interceptedClient.SendRequest({{0x6324450?, 0xc0002fd9f0?}}, {0x633aa98, 0xc001622460}, {0xc000a87548, 0x11}, 0xc3529f8c40?, 0x6fc23ac00)
/go/pkg/mod/github.com/tikv/client-go/v2@v2.0.8-0.20240219030752-98ed21b132fa/internal/client/client_interceptor.go:58 +0x1cd
github.com/tikv/client-go/v2/internal/client.reqCollapse.SendRequest({{0x63247b0?, 0xc0016143d0?}}, {0x633aa98, 0xc001622460}, {0xc000a87548, 0x11}, 0xc3529f8cc0?, 0x1c9600f?)
/go/pkg/mod/github.com/tikv/client-go/v2@v2.0.8-0.20240219030752-98ed21b132fa/internal/client/client_collapse.go:74 +0xc3
github.com/tikv/client-go/v2/internal/locate.(*RegionRequestSender).sendReqToRegion(0xc0efbf0060, 0xc0efbd3a70, 0xc0efbd3b00, 0xc0efbec4e0, 0xc3529f9508?)
/go/pkg/mod/github.com/tikv/client-go/v2@v2.0.8-0.20240219030752-98ed21b132fa/internal/locate/region_request.go:1680 +0x89b
github.com/tikv/client-go/v2/internal/locate.(*RegionRequestSender).SendReqCtx(0xc0efbf0060, 0xc0efbd3a70, 0xc0efbec4e0, {0xe6692, 0x365f, 0x27bd}, 0x6fc23ac00, 0x0, {0x0, 0x0, …})
/go/pkg/mod/github.com/tikv/client-go/v2@v2.0.8-0.20240219030752-98ed21b132fa/internal/locate/region_request.go:1461 +0x14be
github.com/tikv/client-go/v2/internal/locate.(*RegionRequestSender).SendReq(...)
/go/pkg/mod/github.com/tikv/client-go/v2@v2.0.8-0.20240219030752-98ed21b132fa/internal/locate/region_request.go:243
github.com/tikv/client-go/v2/txnkv/transaction.actionCommit.handleSingleBatch({0x0?, 0x0?}, 0xc0c4cec8c0, 0xc0efbd3a70, {{0xe6692, 0x365f, 0x27bd}, {0x635ab90, 0xc287b91980}, 0x0})
/go/pkg/mod/github.com/tikv/client-go/v2@v2.0.8-0.20240219030752-98ed21b132fa/txnkv/transaction/commit.go:107 +0x805
github.com/tikv/client-go/v2/txnkv/transaction.(*batchExecutor).startWorker.func1()
/go/pkg/mod/github.com/tikv/client-go/v2@v2.0.8-0.20240219030752-98ed21b132fa/txnkv/transaction/2pc.go:1980 +0x10c
created by github.com/tikv/client-go/v2/txnkv/transaction.(*batchExecutor).startWorker in goroutine 58590452397
/go/pkg/mod/github.com/tikv/client-go/v2@v2.0.8-0.20240219030752-98ed21b132fa/txnkv/transaction/2pc.go:1963 +0x72

Could you try upgrading to 7.5.6 (released 2025-03-14)?

See whether that solves it.

Let me find a test environment and give it a try to see whether it resolves the issue; at the moment I don't know if there are any unknown problems.