【TiDB Environment】Production / Test / PoC
【TiDB Version】
【Reproduction Path】Operations performed that led to the issue
【Problem Encountered: Symptoms and Impact】
【Resource Configuration】
【TiDB Operator Version】: v1.5.2
【K8s Version】: v1.29.2
【Attachments: Screenshots / Logs / Monitoring】
helm list output for the tidb-operator release:
tidb-operator tidb-admin 1 2024-03-13 13:36:47.522594135 +0800 CST deployed tidb-operator-v1.5.2 v1.5.2
tidb-operator pod logs:
E0313 06:17:24.355669 1 reflector.go:138] k8s.io/client-go@v0.20.15/tools/cache/reflector.go:167: Failed to watch *v1alpha1.TidbDashboard: failed to list *v1alpha1.TidbDashboard: the server could not find the requested resource (get tidbdashboards.pingcap.com)
E0313 06:18:09.206798 1 reflector.go:138] k8s.io/client-go@v0.20.15/tools/cache/reflector.go:167: Failed to watch *v1alpha1.TidbDashboard: failed to list *v1alpha1.TidbDashboard: the server could not find the requested resource (get tidbdashboards.pingcap.com)
E0313 06:19:03.449916 1 reflector.go:138] k8s.io/client-go@v0.20.15/tools/cache/reflector.go:167: Failed to watch *v1alpha1.TidbDashboard: failed to list *v1alpha1.TidbDashboard: the server could not find the requested resource (get tidbdashboards.pingcap.com)
E0313 06:19:52.099712 1 reflector.go:138] k8s.io/client-go@v0.20.15/tools/cache/reflector.go:167: Failed to watch *v1alpha1.TidbDashboard: failed to list *v1alpha1.TidbDashboard: the server could not find the requested resource (get tidbdashboards.pingcap.com)
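The repeated "the server could not find the requested resource (get tidbdashboards.pingcap.com)" error usually means the TidbDashboard CRD is not installed in the cluster, which commonly happens when tidb-operator was upgraded via helm but the CRDs were not upgraded with it. As a sketch, assuming kubectl access to the cluster (the manifest URL follows the upstream tidb-operator repo layout; verify it against the install docs for your version):

```shell
# List the pingcap.com CRDs currently installed; tidbdashboards.pingcap.com
# should appear alongside tidbclusters.pingcap.com, tidbmonitors.pingcap.com, etc.
kubectl get crd | grep pingcap.com

# Install the CRDs shipped with tidb-operator v1.5.2
kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/crd.yaml

# If CRDs from an older operator version already exist, replace them instead:
kubectl replace -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.2/manifests/crd.yaml
```

Once the CRD is present, the operator's reflector retries should succeed on their own; restarting the tidb-operator controller-manager pod makes it re-establish its watches immediately.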
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic
spec:
  version: v7.1.1
  timezone: UTC
  pvReclaimPolicy: Retain
  enableDynamicConfiguration: true
  configUpdateStrategy: RollingUpdate
  discovery: {}
  helper:
    image: alpine:3.16.0
  pd:
    baseImage: pingcap/pd
    maxFailoverCount: 0
    replicas: 1
    # if storageClassName is not set, the default Storage Class of the Kubernetes cluster will be used
    storageClassName: nfs-client
    requests:
      storage: "1Gi"
    config: {}
  tikv:
    baseImage: pingcap/tikv
    maxFailoverCount: 0
    # If only 1 TiKV is deployed, the TiKV region leader
    # cannot be transferred during upgrade, so we have
    # to configure a short timeout
    evictLeaderTimeout: 1m
    replicas: 1
    # if storageClassName is not set, the default Storage Class of the Kubernetes cluster will be used
    storageClassName: nfs-client
    requests:
      storage: "1Gi"
    config:
      storage:
        # In basic examples, we set this to avoid using too much storage.
        reserve-space: "0MB"
      rocksdb:
        # In basic examples, we set this to avoid the following error in some Kubernetes clusters:
        # "the maximum number of open file descriptors is too small, got 1024, expect greater or equal to 82920"
        max-open-files: 256
      raftdb:
        max-open-files: 256
  tidb:
    baseImage: pingcap/tidb
    maxFailoverCount: 0
    replicas: 1
    service:
      type: ClusterIP
    config: {}
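With the CRDs in place, the manifest above can be applied and checked. A minimal sketch, assuming the manifest is saved as tidb-cluster.yaml (a hypothetical filename) and deployed to the current namespace:

```shell
# Create or update the TidbCluster resource
kubectl apply -f tidb-cluster.yaml

# Watch the cluster object and its component status
kubectl get tidbcluster basic

# tidb-operator labels managed pods with app.kubernetes.io/instance=<cluster name>
kubectl get pods -l app.kubernetes.io/instance=basic
```

If the PD/TiKV/TiDB pods do not appear, the tidb-operator pod logs (as attached above) are the first place to look.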