Dumpling export error (part 1)

Figure 1: export command (screenshot)

Figure 2: exported data volume (screenshot)

Figure 3: error log for the export command:
nohup.out_produce_param_20220708_10.log (8.8 KB)
Release version: v5.0.0
Git commit hash: 463e4cc96078fa532baeea05624d9b5ceb044fb1
Git branch: heads/refs/tags/v5.0.0
Build timestamp: 2021-04-06 04:42:26Z
Go version: go version go1.13 linux/amd64

[2022/07/08 17:04:13.134 +08:00] [INFO] [versions.go:55] ["Welcome to dumpling"] ["Release Version"=v5.0.0] ["Git Commit Hash"=463e4cc96078fa532baeea05624d9b5ceb044fb1] ["Git Branch"=heads/refs/tags/v5.0.0] ["Build timestamp"="2021-04-06 04:42:26"] ["Go Version"="go version go1.13 linux/amd64"]
[2022/07/08 17:04:13.138 +08:00] [INFO] [config.go:599] ["detect server type"] [type=TiDB]
[2022/07/08 17:04:13.138 +08:00] [INFO] [config.go:618] ["detect server version"] [version=5.1.0]
[2022/07/08 17:04:13.148 +08:00] [INFO] [client.go:193] ["[pd] create pd client with endpoints"] [pd-address="[x.x.x.134:2379,x.x.x.135:2379,x.x.x.136:2379]"]
[2022/07/08 17:04:13.150 +08:00] [INFO] [base_client.go:308] ["[pd] switch leader"] [new-leader=http://x.x.x.136:2379] [old-leader=]
[2022/07/08 17:04:13.150 +08:00] [INFO] [base_client.go:112] ["[pd] init cluster id"] [cluster-id=6989862638718844439]
[2022/07/08 17:04:13.151 +08:00] [INFO] [dump.go:921] ["generate dumpling gc safePoint id"] [id=dumpling_1657271053151646318]
[2022/07/08 17:04:13.156 +08:00] [INFO] [dump.go:82] ["begin to run Dump"] [conf="{"s3":{"endpoint":"","region":"","storage-class":"","sse":"","sse-kms-key-id":"","acl":"","access-key":"","secret-access-key":"","provider":"","force-path-style":true,"use-accelerate-endpoint":false},"gcs":{"endpoint":"","storage-class":"","predefined-acl":"","credentials-file":""},"AllowCleartextPasswords":false,"SortByPk":true,"NoViews":true,"NoHeader":true,"NoSchemas":true,"NoData":false,"CompleteInsert":false,"TransactionalConsistency":true,"EscapeBackslash":true,"DumpEmptyDatabase":true,"PosAfterConnect":false,"CompressType":0,"Host":"x.x.x.131","Port":4000,"Threads":32,"User":"root","Security":{"CAPath":"","CertPath":"","KeyPath":""},"LogLevel":"info","LogFile":"","LogFormat":"text","OutputDirPath":"/data/brbackup","StatusAddr":":8281","Snapshot":"434443662949875740","Consistency":"snapshot","CsvNullValue":"","SQL":"","CsvSeparator":"@@@","CsvDelimiter":"","Databases":[],"Where":"id \u003e= 27000000000 AND id \u003c 27962687973 and tenant_id in (1000019,1000000,1000024,1000026)","FileType":"csv","ServerInfo":{"ServerType":3,"ServerVersion":"5.1.0"},"Rows":0,"ReadTimeout":900000000000,"TiDBMemQuotaQuery":0,"FileSize":10737418240,"StatementSize":1000000,"SessionParams":{"tidb_snapshot":"434443662949875740"},"Tables":null}"]
[2022/07/08 17:06:13.355 +08:00] [INFO] [status.go:36] [progress] [tables="0/1 (0.0%)"] ["finished rows"=0] ["finished size"=0B] ["average speed(MiB/s)"=0]
... (the same 0% progress line repeats every 2 minutes from 17:06 to 17:44) ...
[2022/07/08 17:44:13.355 +08:00] [INFO] [status.go:36] [progress] [tables="0/1 (0.0%)"] ["finished rows"=0] ["finished size"=0B] ["average speed(MiB/s)"=0]
[mysql] 2022/07/08 17:44:43 packets.go:72: read tcp x.x.x.146:32882->x.x.x.131:4000: i/o timeout
[mysql] 2022/07/08 17:44:43 connection.go:371: invalid connection
[2022/07/08 17:44:43.298 +08:00] [INFO] [collector.go:188] ["backup Failed summary : total backup ranges: 0, total success: 0, total failed: 0"]
[2022/07/08 17:44:43.299 +08:00] [WARN] [writer.go:77] ["context has been done, the writer will exit"] ["writer ID"=2]
... (the same warning repeats for writer IDs 31, 30, 29, 27, 1, 28, 19, 15, 14, 13, 0, 12, 7) ...
[2022/07/08 17:44:43.299 +08:00] [ERROR] [main.go:77] ["dump failed error stack info"] [error="sql: SELECT COLUMN_NAME,EXTRA FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA=? AND TABLE_NAME=? ORDER BY ORDINAL_POSITION;: driver: bad connection"] [errorVerbose="driver: bad connection
sql: SELECT COLUMN_NAME,EXTRA FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA=? AND TABLE_NAME=? ORDER BY ORDINAL_POSITION;
github.com/pingcap/dumpling/v4/export.buildSelectField
\tgithub.com/pingcap/dumpling@/v4/export/sql.go:576
github.com/pingcap/dumpling/v4/export.(*Dumper).concurrentDumpTiDBTables
\tgithub.com/pingcap/dumpling@/v4/export/dump.go:578
github.com/pingcap/dumpling/v4/export.(*Dumper).concurrentDumpTable
\tgithub.com/pingcap/dumpling@/v4/export/dump.go:440
github.com/pingcap/dumpling/v4/export.(*Dumper).buildConcatTask.func1
\tgithub.com/pingcap/dumpling@/v4/export/dump.go:340
runtime.goexit
\truntime/asm_amd64.s:1357"]

dump failed: sql: SELECT COLUMN_NAME,EXTRA FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA=? AND TABLE_NAME=? ORDER BY ORDINAL_POSITION;: driver: bad connection

Figure 4: TiDB node 1 log:
131.log (2.6 MB)

Figure 5: TiDB node 2 log:
132.log (1.0 MB)

The partial export output can still be imported, by the way.


With no other option for now, we are temporarily considering mydumper, but we need CSV output, so that is still being tested. Thanks!

Below is my command, which works:

tiup dumpling -u bak_user -p '*****' --host ... -P 4000 --database test --filter test.wxbtable --no-schemas --no-header --where "id=1" -t 8 --filetype csv --consistency auto -o /tmp/ --logfile /tmp/test_bak.log

Try adding the --rows / -r parameter to limit the number of rows per query. Many "invalid connection" errors happen because the MySQL connection times out and is dropped before the data has been fully read.
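As a sketch of that suggestion (the host is taken from the log above; the password, `-r`, and `-F` values are placeholders, not recommendations from this thread):

```shell
# -r / --rows splits each table into chunks of at most this many rows,
# so no single query holds a connection long enough to time out.
# -F caps the size of each output file.
tiup dumpling -u root -p '*****' --host x.x.x.131 -P 4000 \
  --filetype csv --no-schemas --no-header \
  -r 200000 -F 256MiB \
  -o /data/brbackup --logfile dumpling.log
```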

For exporting large tables, you can refer to this: https://docs.pingcap.com/zh/tidb/dev/dumpling-overview#控制导出-tidb-大表超过-1-tb时的内存使用


Already tested, including setting threads to 4; the problem persists.


It looks like this SQL statement failed during execution. Try running it manually and check whether the table is in a normal state.

Oh, I hadn't scrolled back through the history... From the log, it fails on the SELECT COLUMN_NAME, EXTRA query, so it never even reaches the export stage. As suggested above, first run that statement by hand and see whether it succeeds.
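For reference, that metadata query can be replayed manually with the `?` placeholders filled in (the schema and table names below are taken from the error log posted later in this thread; substitute your own):

```shell
# Run the same query dumpling issues in buildSelectField, by hand.
mysql -h x.x.x.131 -P 4000 -u root -p -e \
  "SELECT COLUMN_NAME, EXTRA FROM INFORMATION_SCHEMA.COLUMNS
   WHERE TABLE_SCHEMA = 'yifu_produce' AND TABLE_NAME = 'produce_param'
   ORDER BY ORDINAL_POSITION;"
```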

Running it manually succeeds.

That's a bit mysterious. From the log, dumpling was able to fetch the TiDB version and so on earlier, so the connection itself should be fine. Maybe try exporting a different table? If that doesn't work, try a newer dumpling (for example, since the TiDB version is 5.1, use the 5.1 dumpling).


I'd suggest exporting a few different tables to see whether the table structure affects this command.


I've tested this. I also suspected the tool earlier, but switching versions made no difference. The query statement itself is fine: the same query has worked before, and smaller tables export without problems. I don't know whether it's related to this table's large data volume.

For this kind of issue you need to look at the table schema and find which column definition triggers the problem; I'd suggest checking them one by one.

Try specifying the output file size for the export.

Could it be related to these MySQL connection parameters? (screenshot)

Try increasing the two net_* parameters; they do have an effect on MySQL.


Adjusted to the following values, but the problem persists:
set global connect_timeout=3600;
set global net_read_timeout=3600;
set global net_write_timeout=3600;
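One thing worth noting: `SET GLOBAL` only applies to sessions opened after the change. A generic way to confirm the values a fresh connection actually sees (host and user are placeholders):

```shell
# Check the effective timeout-related variables from a new session.
mysql -h x.x.x.131 -P 4000 -u root -p -e \
  "SHOW GLOBAL VARIABLES LIKE 'net_%timeout';
   SHOW GLOBAL VARIABLES LIKE 'connect_timeout';"
```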

Hello, this problem is the same as "Dumpling disconnected during the chunk division stage without retrying, causing the backup to fail · Issue #29542 · pingcap/tidb · GitHub" and has been fixed in https://github.com/pingcap/tidb/pull/30673. Please try again with dumpling v6 or later.

Also, the file size is too large, which hurts export and import performance; set it smaller, for example 256MiB.


Hello, I downloaded the latest version and ran the export again; it still fails. The error details are as follows:
1. Dumpling version (screenshot)
2. Export command (screenshot)
3. Export error log:
nohup.out_produce_param_20220713_1.log (9.9 KB)

[2022/07/13 16:13:09.181 +08:00] [INFO] [conn.go:45] ["cannot execute query"] [retryTime=1] [sql="SELECT id FROM yifu_produce.produce_param TABLESAMPLE REGIONS() ORDER BY id"] [args=null] [error="sql: SELECT id FROM yifu_produce.produce_param TABLESAMPLE REGIONS() ORDER BY id, args: []: invalid connection"] [errorVerbose="invalid connection\nsql: SELECT id FROM yifu_produce.produce_param TABLESAMPLE REGIONS() ORDER BY id, args: []\ngithub.com/pingcap/tidb/dumpling/export.simpleQueryWithArgs\
This part of the log is where it errors out. Can you pick out that SQL and run it manually?
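If it helps, the exact statement from that log line can be replayed by hand (`TABLESAMPLE REGIONS()` is the TiDB syntax dumpling uses here to pick chunk boundaries; host and user are placeholders):

```shell
# Replay the chunk-division query from the error log manually.
mysql -h x.x.x.131 -P 4000 -u root -p -e \
  "SELECT id FROM yifu_produce.produce_param TABLESAMPLE REGIONS() ORDER BY id;"
```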