Enabling TLS for the tidb-operator components of TiDB

【TiDB Environment】Testing
【TiDB Version】6.5.0
【Reproduction Steps】Enabling TLS
【Problem Encountered: Symptoms and Impact】
【Resource Configuration】Go to TiDB Dashboard - Cluster Info - Hosts and attach a screenshot of that page
【Attachments: Screenshots/Logs/Monitoring】

The TiDB cluster is deployed with Helm, tidb-operator 1.4.3, and TiDB 6.5.0. TLS is already enabled between the TiDB components (PD/TiKV/TiDB) and for MySQL clients. My question: do the services started by tidb-operator itself, such as tidb-discovery, tidb-scheduler, and tidb-controller-manager, support enabling TLS?

TiDB Operator does support enabling TLS-encrypted communication between the components of the TiDB clusters it manages. This covers communication between the TiDB, TiKV, and PD components, as well as their internal cluster communication. However, for the tidb-discovery, tidb-scheduler, and tidb-controller-manager services specifically, the provided context does not directly state whether they support TLS.

That said, a few points can be inferred:

  1. TiDB Operator supports enabling TLS for the main components of a TiDB cluster, which shows it is able to handle TLS configuration.
  2. TiDB Operator allows different client certificates to be used for different components, which suggests flexible TLS configuration capabilities.
  3. The overall security policy of a TiDB cluster leans toward full encryption; enabling encryption for only specific components is currently not supported.

Based on this, it seems likely that TiDB Operator can enable TLS for services such as tidb-discovery, tidb-scheduler, and tidb-controller-manager. However, since there is no explicit documentation, I cannot answer this with certainty.
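For reference, the documented part is enabling TLS on the TiDB cluster itself through the TidbCluster spec. A minimal sketch, assuming a cluster named basic (the two spec fields below are the ones quoted from the docs later in this thread; the metadata is illustrative):

apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic          # illustrative cluster name
spec:
  tlsCluster:
    enabled: true      # TLS between PD/TiKV/TiDB components
  tidb:
    tlsClient:
      enabled: true    # TLS for MySQL clients connecting to TiDB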

https://docs.pingcap.com/zh/tidb-in-kubernetes/stable/restore-data-using-tidb-lightning#tls-配置
Take a look at this document.

After configuring it following [https://docs.pingcap.com/zh/tidb-in-kubernetes/stable/restore-data-using-tidb-lightning#tls-配置] and redeploying TiDB, it did not take effect.

TLS Configuration

If TLS is enabled between the components of the target TiDB cluster (spec.tlsCluster.enabled: true), you can refer to "Generate certificates for components of the TiDB cluster" to generate a server-side certificate for the TiDB Lightning component, and enable TLS support inside the cluster by setting tlsCluster.enabled: true in values.yaml.

If the target TiDB cluster has TLS enabled for MySQL clients (spec.tidb.tlsClient.enabled: true) and the corresponding client certificate is configured (the corresponding Kubernetes Secret object being ${cluster_name}-tidb-client-secret), you can set tlsClient.enabled: true in values.yaml so that TiDB Lightning connects to the TiDB server over TLS.

If TiDB Lightning needs to use a different client certificate to connect to the TiDB server, you can refer to "Issue two sets of certificates for the TiDB cluster" to generate a client-side certificate for the TiDB Lightning component, and specify the corresponding Kubernetes Secret object via tlsCluster.tlsClientSecretName in values.yaml.

values.yaml:

# Default values for tidb-operator

# clusterScoped is whether tidb-operator should manage kubernetes cluster wide tidb clusters
# Also see rbac.create, controllerManager.serviceAccount, scheduler.create and controllerManager.clusterPermissions.
clusterScoped: true

tlsCluster:
  enabled: true
  tlsClientSecretName: basic-tidb-client-secret
tlsClient.enabled: true
........
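One detail about the values.yaml above: in the referenced TiDB Lightning document, tlsCluster and tlsClient are keys of the tidb-lightning chart's values.yaml rather than the tidb-operator chart, and the dotted notation in the doc (tlsClient.enabled: true) refers to nested mappings. A sketch of how those keys nest, reusing the secret name shown above (this concerns the doc's chart, and is separate from the question about the operator components themselves):

tlsCluster:
  enabled: true
  tlsClientSecretName: basic-tidb-client-secret
tlsClient:
  enabled: true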

Log:

kubectl -n csc logs tidb-scheduler-f8dfb8d94-6fmh7 kube-scheduler
I0827 03:34:27.804436       1 flags.go:59] FLAG: --add-dir-header="false"
I0827 03:34:27.804606       1 flags.go:59] FLAG: --address="0.0.0.0"
I0827 03:34:27.804619       1 flags.go:59] FLAG: --algorithm-provider=""
I0827 03:34:27.804626       1 flags.go:59] FLAG: --alsologtostderr="false"
I0827 03:34:27.804633       1 flags.go:59] FLAG: --authentication-kubeconfig=""
I0827 03:34:27.804639       1 flags.go:59] FLAG: --authentication-skip-lookup="false"
I0827 03:34:27.804648       1 flags.go:59] FLAG: --authentication-token-webhook-cache-ttl="10s"
I0827 03:34:27.804657       1 flags.go:59] FLAG: --authentication-tolerate-lookup-failure="true"
I0827 03:34:27.804663       1 flags.go:59] FLAG: --authorization-always-allow-paths="[/healthz]"
I0827 03:34:27.804676       1 flags.go:59] FLAG: --authorization-kubeconfig=""
I0827 03:34:27.804682       1 flags.go:59] FLAG: --authorization-webhook-cache-authorized-ttl="10s"
I0827 03:34:27.804688       1 flags.go:59] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s"
I0827 03:34:27.804694       1 flags.go:59] FLAG: --bind-address="0.0.0.0"
I0827 03:34:27.804703       1 flags.go:59] FLAG: --cert-dir=""
I0827 03:34:27.804710       1 flags.go:59] FLAG: --client-ca-file=""
I0827 03:34:27.804716       1 flags.go:59] FLAG: --config="/etc/kubernetes/scheduler-config.yaml"
I0827 03:34:27.804723       1 flags.go:59] FLAG: --contention-profiling="true"
I0827 03:34:27.804729       1 flags.go:59] FLAG: --experimental-logging-sanitization="false"
I0827 03:34:27.804735       1 flags.go:59] FLAG: --feature-gates=""
I0827 03:34:27.804746       1 flags.go:59] FLAG: --hard-pod-affinity-symmetric-weight="1"
I0827 03:34:27.804829       1 flags.go:59] FLAG: --help="false"
I0827 03:34:27.804901       1 flags.go:59] FLAG: --http2-max-streams-per-connection="0"
I0827 03:34:27.804924       1 flags.go:59] FLAG: --kube-api-burst="100"
I0827 03:34:27.804991       1 flags.go:59] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0827 03:34:27.805010       1 flags.go:59] FLAG: --kube-api-qps="50"
I0827 03:34:27.805089       1 flags.go:59] FLAG: --kubeconfig=""
I0827 03:34:27.805156       1 flags.go:59] FLAG: --leader-elect="true"
I0827 03:34:27.805180       1 flags.go:59] FLAG: --leader-elect-lease-duration="15s"
I0827 03:34:27.805244       1 flags.go:59] FLAG: --leader-elect-renew-deadline="10s"
I0827 03:34:27.805267       1 flags.go:59] FLAG: --leader-elect-resource-lock="leases"
I0827 03:34:27.805351       1 flags.go:59] FLAG: --leader-elect-resource-name="kube-scheduler"
I0827 03:34:27.805373       1 flags.go:59] FLAG: --leader-elect-resource-namespace="kube-system"
I0827 03:34:27.805387       1 flags.go:59] FLAG: --leader-elect-retry-period="2s"
I0827 03:34:27.805400       1 flags.go:59] FLAG: --lock-object-name="kube-scheduler"
I0827 03:34:27.805415       1 flags.go:59] FLAG: --lock-object-namespace="kube-system"
I0827 03:34:27.805430       1 flags.go:59] FLAG: --log-backtrace-at=":0"
I0827 03:34:27.805464       1 flags.go:59] FLAG: --log-dir=""
I0827 03:34:27.805478       1 flags.go:59] FLAG: --log-file=""
I0827 03:34:27.805489       1 flags.go:59] FLAG: --log-file-max-size="1800"
I0827 03:34:27.805503       1 flags.go:59] FLAG: --log-flush-frequency="5s"
I0827 03:34:27.805518       1 flags.go:59] FLAG: --logging-format="text"
I0827 03:34:27.805530       1 flags.go:59] FLAG: --logtostderr="true"
I0827 03:34:27.805539       1 flags.go:59] FLAG: --master=""
I0827 03:34:27.805545       1 flags.go:59] FLAG: --one-output="false"
I0827 03:34:27.805551       1 flags.go:59] FLAG: --permit-port-sharing="false"
I0827 03:34:27.805558       1 flags.go:59] FLAG: --policy-config-file=""
I0827 03:34:27.805564       1 flags.go:59] FLAG: --policy-configmap=""
I0827 03:34:27.805570       1 flags.go:59] FLAG: --policy-configmap-namespace="kube-system"
I0827 03:34:27.805576       1 flags.go:59] FLAG: --port="10251"
I0827 03:34:27.805583       1 flags.go:59] FLAG: --profiling="true"
I0827 03:34:27.805588       1 flags.go:59] FLAG: --requestheader-allowed-names="[]"
I0827 03:34:27.805609       1 flags.go:59] FLAG: --requestheader-client-ca-file=""
I0827 03:34:27.805615       1 flags.go:59] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]"
I0827 03:34:27.805623       1 flags.go:59] FLAG: --requestheader-group-headers="[x-remote-group]"
I0827 03:34:27.805633       1 flags.go:59] FLAG: --requestheader-username-headers="[x-remote-user]"
I0827 03:34:27.805640       1 flags.go:59] FLAG: --scheduler-name="default-scheduler"
I0827 03:34:27.805647       1 flags.go:59] FLAG: --secure-port="10259"
I0827 03:34:27.805654       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=""
I0827 03:34:27.805659       1 flags.go:59] FLAG: --skip-headers="false"
I0827 03:34:27.805665       1 flags.go:59] FLAG: --skip-log-headers="false"
I0827 03:34:27.805671       1 flags.go:59] FLAG: --stderrthreshold="2"
I0827 03:34:27.805677       1 flags.go:59] FLAG: --tls-cert-file=""
I0827 03:34:27.805683       1 flags.go:59] FLAG: --tls-cipher-suites="[]"
I0827 03:34:27.805692       1 flags.go:59] FLAG: --tls-min-version=""
I0827 03:34:27.805698       1 flags.go:59] FLAG: --tls-private-key-file=""
I0827 03:34:27.805704       1 flags.go:59] FLAG: --tls-sni-cert-key="[]"
I0827 03:34:27.805713       1 flags.go:59] FLAG: --use-legacy-policy-config="false"
I0827 03:34:27.805719       1 flags.go:59] FLAG: --v="2"
I0827 03:34:27.805726       1 flags.go:59] FLAG: --version="false"
I0827 03:34:27.805742       1 flags.go:59] FLAG: --vmodule=""
I0827 03:34:27.805749       1 flags.go:59] FLAG: --write-config-to=""
I0827 03:34:33.353937       1 serving.go:331] Generated self-signed cert in-memory
I0827 03:34:40.649959       1 requestheader_controller.go:244] Loaded a new request header values for RequestHeaderAuthRequestController
W0827 03:34:40.655078       1 options.go:330] Neither --kubeconfig nor --master was specified. Using default API client. This might not work.
I0827 03:34:40.755939       1 factory.go:187] Creating scheduler from algorithm provider 'DefaultProvider'
I0827 03:34:40.756075       1 factory.go:95] Creating extender with config {URLPrefix:http://127.0.0.1:10262/scheduler FilterVerb:filter PreemptVerb:preempt PrioritizeVerb: Weight:1 BindVerb: EnableHTTPS:false TLSConfig:<nil> HTTPTimeout:{Duration:30s} NodeCacheCapable:false ManagedResources:[] Ignorable:false}
I0827 03:34:40.841461       1 configfile.go:72] Using component config:
apiVersion: kubescheduler.config.k8s.io/v1beta1
clientConnection:
  acceptContentTypes: ""
  burst: 100
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: ""
  qps: 50
enableContentionProfiling: true
enableProfiling: true
extenders:
- filterVerb: filter
  httpTimeout: 30s
  preemptVerb: preempt
  urlPrefix: http://127.0.0.1:10262/scheduler
  weight: 1
healthzBindAddress: 0.0.0.0:10261
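
The relevant part of this log is the extender configuration a few lines up: urlPrefix: http://127.0.0.1:10262/scheduler with EnableHTTPS:false, i.e. the kube-scheduler container talks to the tidb-scheduler extender over plain HTTP inside the pod. To double-check what was actually rendered and deployed, something along these lines can help (namespace and pod name are taken from the log; the release name tidb-operator is an assumption, replace it with your actual Helm release):

kubectl -n csc describe pod tidb-scheduler-f8dfb8d94-6fmh7
# shows the scheduler pod's container args and mounted ConfigMaps

helm -n csc get values tidb-operator
# shows the values the release was actually installed/upgraded with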

https://docs.pingcap.com/zh/tidb-in-kubernetes/stable/enable-tls-between-components
Try this document.

This document mainly describes how to enable TLS between TiDB cluster components on Kubernetes. TiDB Operator has supported enabling TLS between TiDB cluster components on Kubernetes since v1.1. The steps are:

    Generate a certificate for each component of the TiDB cluster that is about to be created:
        for the PD/TiKV/TiDB/Pump/Drainer/TiFlash/TiProxy/TiKV Importer/TiDB Lightning components...
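
For the certificate-generation step mentioned above, the linked document describes both a cfssl flow and a cert-manager flow. A rough sketch of the cert-manager variant for one component (cluster name basic, namespace csc, and the Issuer name are illustrative; follow the document for the exact DNS names each component requires):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: basic-pd-cluster-secret
  namespace: csc
spec:
  secretName: basic-pd-cluster-secret   # Secret that will hold the PD component certificate
  duration: 8760h
  issuerRef:
    name: basic-tidb-issuer             # illustrative Issuer/CA created beforehand
    kind: Issuer
  dnsNames:
    - "basic-pd"
    - "basic-pd.csc"
    - "basic-pd.csc.svc"
    - "*.basic-pd-peer.csc.svc"
  usages:
    - server auth
    - client auth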

My understanding is that this document covers enabling TLS between the TiDB business components themselves, i.e. PD/TiKV/TiDB. I have already implemented that part and have already gone through this document.
What I need now is to enable TLS for the tidb-operator operational components, meaning tidb-discovery, tidb-scheduler, and tidb-controller-manager; these don't feel like they belong to the same dimension as PD/TiKV/TiDB.

Which part of the document are you referring to, specifically? Thanks.

Oh, sorry, I misread.

These components only handle control and scheduling. They belong to the Kubernetes layer rather than the TiDB layer, so TLS probably cannot be enabled for them.