Error when importing data

To help us respond quickly, please provide the following information; a clearly described problem gets a faster response.

  • [TiDB version]: Image: pingcap/tidb:v4.0.4
  • [Problem description]: I can log in normally with the mysql client, but cannot import data with lightning.

hulk:~/下载/tidb-toolkit-v4.0.3-linux-amd64/bin# mysql -uroot -p123456 -h192.168.49.18 -P42937
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 52830
Server version: 5.7.25-TiDB-v4.0.4 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright © 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| METRICS_SCHEMA     |
| PERFORMANCE_SCHEMA |
| hello              |
| kk_test            |
| mysql              |
| test               |
+--------------------+
7 rows in set (0.09 sec)

mysql> exit
Bye
hulk:~/下载/tidb-toolkit-v4.0.3-linux-amd64/bin# cat tidb-lightning.toml
[lightning]
# Logging
level = "info"
file = "tidb-lightning.log"

[tikv-importer]
# Use the local backend
backend = "local"

# Temporary directory for the sorted key-value pairs; the target path must be an empty directory
sorted-kv-dir = "/tmp/sorted-kv-dir"

[mydumper]
# Mydumper source data directory.
data-source-dir = "/tmp/test/"

[tidb]
# Information about the target cluster
host = "192.168.49.18"
port = 42937
user = "root"
password = "123456"

# The table schema information is fetched from TiDB's "status port".
status-port = 8305

# Address of the cluster's PD
pd-addr = "192.168.49.18:55881"

hulk:~/下载/tidb-toolkit-v4.0.3-linux-amd64/bin# ./tidb-lightning -config ./tidb-lightning.toml
Verbose debug logs will be written to tidb-lightning.log.

tidb lightning encountered error: Error 1045: Access denied for user 'root'@'10.244.0.0' (using password: YES)
hulk:~/下载/tidb-toolkit-v4.0.3-linux-amd64/bin#


Adding the log information:

hulk:~/下载/tidb-toolkit-v4.0.3-linux-amd64/bin# cat tidb-lightning.log
[2020/08/31 18:08:45.091 +08:00] [WARN] [config.go:358] ["currently only per-task configuration can be applied, global configuration changes can only be made on startup"] ["global config changes"="[lightning.level,lightning.file]"]
[2020/08/31 18:08:45.091 +08:00] [INFO] [version.go:48] ["Welcome to lightning"] ["Release Version"=v4.0.3] ["Git Commit Hash"=26a0f7195bf463950d400857db49586cb9b471f0] ["Git Branch"=heads/refs/tags/v4.0.3] ["UTC Build Time"="2020-07-24 01:18:08"] ["Go Version"="go version go1.13 linux/amd64"]
[2020/08/31 18:08:45.091 +08:00] [INFO] [lightning.go:177] [cfg] [cfg="{"id":1598868525091711223,"lightning":{"table-concurrency":4,"index-concurrency":4,"region-concurrency":4,"io-concurrency":5,"check-requirements":true},"tidb":{"host":"192.168.49.18","port":42937,"user":"root","status-port":8305,"pd-addr":"192.168.49.18:55881","sql-mode":"ONLY_FULL_GROUP_BY,NO_AUTO_CREATE_USER","tls":"false","security":{"ca-path":"","cert-path":"","key-path":""},"max-allowed-packet":67108864,"distsql-scan-concurrency":100,"build-stats-concurrency":20,"index-serial-scan-concurrency":20,"checksum-table-concurrency":16},"checkpoint":{"enable":true,"schema":"tidb_lightning_checkpoint","driver":"file","keep-after-success":false},"mydumper":{"read-block-size":65536,"batch-size":107374182400,"batch-import-ratio":0,"data-source-dir":"/tmp/test/","no-schema":false,"character-set":"auto","csv":{"separator":",","delimiter":"\"","header":true,"trim-last-separator":false,"not-null":false,"null":"\\N","backslash-escape":true},"case-sensitive":false,"strict-format":false,"max-region-size":268435456,"filter":["*.*"]},"tikv-importer":{"addr":"","backend":"tidb","on-duplicate":"replace","max-kv-pairs":32,"send-kv-pairs":32768,"region-split-size":100663296,"sorted-kv-dir":"","range-concurrency":0},"post-restore":{"level-1-compact":false,"compact":false,"checksum":true,"analyze":true},"cron":{"switch-mode":"5m0s","log-progress":"5m0s"},"routes":null,"security":{"ca-path":"","cert-path":"","key-path":""},"black-white-list":{"do-tables":null,"do-dbs":null,"ignore-tables":null,"ignore-dbs":null}}"]
[2020/08/31 18:08:45.092 +08:00] [INFO] [lightning.go:208] ["load data source start"]
[2020/08/31 18:08:45.097 +08:00] [INFO] [lightning.go:211] ["load data source completed"] [takeTime=5.014939ms] []
[2020/08/31 18:08:45.097 +08:00] [INFO] [checkpoints.go:770] ["open checkpoint file failed, going to create a new one"] [path=/tmp/tidb_lightning_checkpoint.pb] [error="open /tmp/tidb_lightning_checkpoint.pb: no such file or directory"]
[2020/08/31 18:08:46.226 +08:00] [ERROR] [lightning.go:226] ["restore failed"] [error="Error 1045: Access denied for user 'root'@'10.244.0.0' (using password: YES)"]
[2020/08/31 18:08:46.227 +08:00] [ERROR] [main.go:82] ["tidb lightning encountered error stack info"] [error="Error 1045: Access denied for user 'root'@'10.244.0.0' (using password: YES)"] [errorVerbose="Error 1045: Access denied for user 'root'@'10.244.0.0' (using password: YES)\ngithub.com/pingcap/errors.AddStack\n\t/home/jenkins/agent/workspace/ld_lightning_multi_branch_v4.0.3/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20190809092503-95897b64e011/errors.go:174\ngithub.com/pingcap/errors.Trace\n\t/home/jenkins/agent/workspace/ld_lightning_multi_branch_v4.0.3/go/pkg/mod/github.com/pingcap/errors@v0.11.5-0.20190809092503-95897b64e011/juju_adaptor.go:15\ngithub.com/pingcap/tidb-lightning/lightning/common.(*MySQLConnectParam).Connect\n\t/home/jenkins/agent/workspace/ld_lightning_multi_branch_v4.0.3/go/src/github.com/pingcap/tidb-lightning/lightning/common/util.go:81\ngithub.com/pingcap/tidb-lightning/lightning/restore.NewTiDBManager\n\t/home/jenkins/agent/workspace/ld_lightning_multi_branch_v4.0.3/go/src/github.com/pingcap/tidb-lightning/lightning/restore/tidb.go:85\ngithub.com/pingcap/tidb-lightning/lightning/restore.NewRestoreControllerWithPauser\n\t/home/jenkins/agent/workspace/ld_lightning_multi_branch_v4.0.3/go/src/github.com/pingcap/tidb-lightning/lightning/restore/restore.go:168\ngithub.com/pingcap/tidb-lightning/lightning/restore.NewRestoreController\n\t/home/jenkins/agent/workspace/ld_lightning_multi_branch_v4.0.3/go/src/github.com/pingcap/tidb-lightning/lightning/restore/restore.go:151\ngithub.com/pingcap/tidb-lightning/lightning.(*Lightning).run\n\t/home/jenkins/agent/workspace/ld_lightning_multi_branch_v4.0.3/go/src/github.com/pingcap/tidb-lightning/lightning/lightning.go:224\ngithub.com/pingcap/tidb-lightning/lightning.(*Lightning).RunOnce\n\t/home/jenkins/agent/workspace/ld_lightning_multi_branch_v4.0.3/go/src/github.com/pingcap/tidb-lightning/lightning/lightning.go:150\nmain.main\n\t/home/jenkins/agent/workspace/ld_lightning_multi_branch_v4.0.3/go/src/github.com/pingcap/tidb-lightning/cmd/tidb-lightning/main.go:79\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:203\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357"]
[2020/08/31 18:08:46.227 +08:00] [ERROR] [main.go:83] ["tidb lightning encountered error"] [error="Error 1045: Access denied for user 'root'@'10.244.0.0' (using password: YES)"]

  1. The error points to a password problem, but you said the client can log in.
  2. Please share the output of `tiup cluster display <cluster-name>`.
  3. How did you deploy lightning? Manually?

2. I haven't installed TiUP.

3. My cluster is deployed in k8s with tidb-operator; lightning is a directly downloaded binary.
hulk:~/下载/tidb-toolkit-v4.0.3-linux-amd64/bin# pwd
/home/hulk/下载/tidb-toolkit-v4.0.3-linux-amd64/bin
hulk:~/下载/tidb-toolkit-v4.0.3-linux-amd64/bin# ./tidb-lightning -config ./tidb-lightning.toml -tidb-password 123456
Verbose debug logs will be written to tidb-lightning.log.

tidb lightning encountered error: Error 1045: Access denied for user 'root'@'10.244.0.0' (using password: YES)

  1. The config file has host = "192.168.49.18" and port = 42937, but the error reports 10.244.0.0. Is 10.244.0.0 the client?
  2. What is the topology of the importing host and the k8s cluster? Is the importing host outside the k8s cluster?
  3. Please check that the privileges are all correct against this document: https://docs.pingcap.com/zh/tidb/dev/privilege-management . Thanks.

1. 10.244.0.0 is an address inside the k8s cluster, not the client.
2. The importing host is outside the k8s cluster; the TiDB ports are all exposed via NodePort.

mysql> show grants for 'root'@'%';
+-------------------------------------------------------------+
| Grants for root@%                                           |
+-------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION |
+-------------------------------------------------------------+
1 row in set (0.02 sec)

mysql> select user,host from mysql.user;
+------+------+
| user | host |
+------+------+
| root | %    |
+------+------+
1 row in set (0.01 sec)

The output above shows `ON . TO`, but the actual grant you got back is `*.*`, right?

Yes, the asterisks were hidden by the forum formatting.

OK, I've fixed the formatting below. The problem is still being analyzed, thanks.
mysql> show grants for 'root'@'%';
+-------------------------------------------------------------+
| Grants for root@%                                           |
+-------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION |
+-------------------------------------------------------------+
1 row in set (0.02 sec)

mysql> select user,host from mysql.user;
+------+------+
| user | host |
+------+------+
| root | %    |
+------+------+
1 row in set (0.01 sec)

Please check:
(1) whether TiDB has TLS enabled; if so, Lightning also needs the certificates configured;
(2) whether the TiDB log contains any errors from the login attempts. Thanks.
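For reference, if TLS does turn out to be enabled, the certificate paths go in lightning's `[tidb.security]` section — the field names appear in the cfg dump earlier in this thread; the paths below are placeholders, not values from this cluster:

```toml
# Only needed when the TiDB endpoint requires TLS; all paths are placeholders.
[tidb.security]
ca-path = "/path/to/ca.pem"
cert-path = "/path/to/lightning.pem"
key-path = "/path/to/lightning.key"
```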

[root@bogon bin]# mysql -uroot -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3095239
Server version: 5.6.39-83.1 Percona Server (GPL), Release 83.1, Revision da5a1c2923f

Copyright © 2009-2018 Percona LLC and/or its affiliates
Copyright © 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> exit
Bye
[root@bogon bin]#
[root@bogon bin]#
[root@bogon bin]#
[root@bogon bin]# pwd
/root/tidb-toolkit-v4.0.3-linux-amd64/bin
[root@bogon bin]# ./bin/mydumper -h 127.0.0.1 -P 33061 -u root -p -t 16 -F 256 -B kk_test --skip-tz-utc -o /tmp/test/

Adding some information about the source database used for the export.

TiDB was installed with tidb-operator; the configuration has not been changed.

[root@kubemaster tidb]# cat install.sh

# install tidb-operator

kubectl apply -f crd.yaml
#kubectl apply -f local-volume-pvc.yml
helm install ./tidb-operator --name=tidb-operator --namespace=tidb-admin
kubectl apply -f tidb-cluster.yml

# create user root with password 123456

kubectl create secret generic tidb-secret --from-literal=root=123456
kubectl create -f tidb-initializer.yaml

[root@kubemaster tidb]# cat tidb-cluster.yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: demo
  namespace: kube-system
spec:
  configUpdateStrategy: RollingUpdate
  enablePVReclaim: false
  imagePullPolicy: IfNotPresent
  pd:
    baseImage: pingcap/pd
    config:
      log:
        level: info
      replication:
        #enable-placement-rules: "true"
        enable-placement-rules: "false"
        location-labels:
        - zone
        - host
        max-replicas: 3
    imagePullPolicy: IfNotPresent
    maxFailoverCount: 3
    replicas: 3
    requests:
      storage: 10Gi
    storageClassName: harbor-data
  pvReclaimPolicy: Delete
  schedulerName: tidb-scheduler
  services:
  - name: pd
    type: ClusterIP
  tidb:
    baseImage: pingcap/tidb
    config:
      log:
        file:
          max-backups: 3
        level: info
    imagePullPolicy: IfNotPresent
    maxFailoverCount: 3
    replicas: 2
    separateSlowLog: true
    service:
      type: NodePort
    slowLogTailer:
      image: busybox:1.26.2
      imagePullPolicy: IfNotPresent
      limits:
        cpu: 100m
        memory: 50Mi
      requests:
        cpu: 20m
        memory: 5Mi
  tiflash:
    baseImage: pingcap/tiflash
    maxFailoverCount: 3
    replicas: 2
    storageClaims:
    - resources:
        requests:
          storage: 10Gi
      storageClassName: harbor-data
  tikv:
    baseImage: pingcap/tikv
    config:
      log-level: info
    imagePullPolicy: IfNotPresent
    maxFailoverCount: 3
    replicas: 3
    requests:
      storage: 10Gi
    storageClassName: harbor-data
  timezone: UTC
  version: v4.0.4

[root@kubemaster tidb]# cat tidb-initializer.yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbInitializer
metadata:
  name: initialize-demo
spec:
  image: tnir/mysqlclient
  imagePullPolicy: IfNotPresent
  cluster:
    name: initialize-demo
  initSql: "create database hello;"
  initSqlConfigMap: tidb-initsql
  passwordSecret: "tidb-secret"
  permitHost: 172.6.5.8
  resources:
    limits:
      cpu: 1000m
      memory: 500Mi
    requests:
      cpu: 100m
      memory: 50Mi
  timezone: "Asia/Shanghai"

  1. Does the tidb log contain any errors?
  2. Could you try the import from a host inside the k8s cluster and see whether it succeeds? Thanks.

I ran the import on the k8s master machine; there is indeed no more access denied, but now it reports a timeout:
[root@kubemaster bin]# ./tidb-lightning -config ./tidb-lightning.toml
Verbose debug logs will be written to tidb-lightning.log.

tidb lightning encountered error: cannot fetch settings from TiDB, please manually fill in tidb.port and tidb.pd-addr: Get http://192.168.49.18:8305/settings: dial tcp 192.168.49.18:8305: i/o timeout
[root@kubemaster bin]#
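One way to narrow down a timeout like this: probe the status endpoint from the importing host itself. This is a sketch; the URL is copied from the error message above, and the actual curl probe is left commented out because it needs the live cluster:

```shell
# URL taken from lightning's error message above.
status_url="http://192.168.49.18:8305/settings"
echo "probing $status_url"
# Requires the live cluster, so commented out in this sketch:
# curl -sf --max-time 5 "$status_url" >/dev/null && echo reachable || echo unreachable
```

If curl also times out, the problem is network reachability (firewall, NodePort mapping), not lightning itself.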

kubectl logs -f demo-pd-0 -n kube-system
[2020/09/02 10:08:10.774 +00:00] [ERROR] [middleware.go:148] ["request failed"] [error="Get http://demo-pd-1.demo-pd-peer.kube-system.svc:2379/pd/api/v1/config: context canceled"]
[2020/09/02 10:08:12.330 +00:00] [ERROR] [middleware.go:148] ["request failed"] [error="Get http://demo-pd-1.demo-pd-peer.kube-system.svc:2379/pd/api/v1/ping: context canceled"]
[2020/09/02 10:08:15.829 +00:00] [ERROR] [middleware.go:148] ["request failed"] [error="Get http://demo-pd-1.demo-pd-peer.kube-system.svc:2379/pd/api/v1/stores: context canceled"]
[2020/09/02 10:08:22.449 +00:00] [INFO] [trace.go:145] ["trace[2010371912] put"] [detail="{key:/tidb/server/minstartts/ae1d3b90-783d-4c33-9d62-15837f5807cf; req_size:92; response_revision:391292; }"] [duration=127.048856ms] [start=2020/09/02 10:08:22.322 +00:00] [end=2020/09/02 10:08:22.449 +00:00] [steps="[\"trace[2010371912] 'process raft request' (duration: 65.129096ms)\",\"trace[2010371912] 'get key's previous created_revision and leaseID' (duration: 61.707463ms)\"]"]
[2020/09/02 10:08:25.055 +00:00] [ERROR] [middleware.go:148] ["request failed"] [error="Get http://demo-pd-1.demo-pd-peer.kube-system.svc:2379/pd/api/v1/ping: context canceled"]
[2020/09/02 10:08:37.275 +00:00] [ERROR] [middleware.go:148] ["request failed"] [error="Get http://demo-pd-1.demo-pd-peer.kube-system.svc:2379/pd/api/v1/ping: context canceled"]
^C

I see this error in the pd log:
[2020/09/02 10:06:25.088 +00:00] [ERROR] [middleware.go:148] ["request failed"] [error="Get http://demo-pd-1.demo-pd-peer.kube-system.svc:2379/pd/api/v1/ping: context canceled"]
[2020/09/02 10:06:32.330 +00:00] [ERROR] [middleware.go:148] ["request failed"] [error="Get http://demo-pd-1.demo-pd-peer.kube-system.svc:2379/pd/api/v1/ping: context canceled"]
[2020/09/02 10:06:33.613 +00:00] [ERROR] [middleware.go:148] ["request failed"] [error="Get http://demo-pd-1.demo-pd-peer.kube-system.svc:2379/pd/api/v1/stores: context canceled"]
[2020/09/02 10:06:41.398 +00:00] [ERROR] [middleware.go:148] ["request failed"] [error="Get http://demo-pd-1.demo-pd-peer.kube-system.svc:2379/pd/api/v1/ping: context canceled"]
[2020/09/02 10:06:42.330 +00:00] [ERROR] [middleware.go:148] ["request failed"] [error="Get http://demo-pd-1.demo-pd-peer.kube-system.svc:2379/pd/api/v1/ping: context canceled"]

This was run on the k8s master machine, so strictly speaking it still doesn't count as inside the cluster.

This looks like a k8s problem. Please refer to this issue and try disabling the firewall, etc.:
https://github.com/kubernetes/minikube/issues/1234

Where did status-port = 8305 come from? The TiDB status port defaults to 10080, so this should be the NodePort that maps to 10080.
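For example, the NodePort paired with 10080 can be read off the TiDB service. A sketch of extracting it — the service name in the comment and the sample PORT(S) string below are assumptions for illustration, not values observed in this cluster:

```shell
# On a live cluster you would read the PORT(S) column from, e.g.:
#   kubectl get svc demo-tidb -n kube-system
# Sample (hypothetical) PORT(S) value; 43125 is made up for the example:
ports='4000:42937/TCP,10080:43125/TCP'
# Pick the node port paired with the TiDB status port 10080:
status_nodeport=$(printf '%s' "$ports" | tr ',' '\n' | grep '^10080:' | cut -d: -f2 | cut -d/ -f1)
echo "$status_nodeport"   # → 43125
```

That extracted value is what belongs in `status-port` in tidb-lightning.toml when connecting from outside the cluster through NodePort.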