Data initialization failed

Importing data with sysbench fails with the following errors:

[2019/12/04 08:52:00.786 +08:00] [INFO] [region_cache.go:401] ["switch region peer to next due to send request fail"] [current="region ID: 894, meta: id:894 start_key:"t\200\000\000\000\000\000\0002_i\200\000\000\000\000\000\000\001\003\200\000\000\000\000e\330Z\003\200\000\000\000\000].\317" end_key:"t\200\000\000\000\000\000\0002_i\200\000\000\000\000\000\000\001\003\200\000\000\000\000qhT\003\200\000\000\000\0009\354>" region_epoch:<conf_ver:1 version:37 > peers:<id:895 store_id:1 > , peer: id:895 store_id:1 , addr: 127.0.0.1:20160, idx: 0"] [needReload=true] [error="no available connections"] [errorVerbose="no available connections github.com/pingcap/tidb/store/tikv.(*batchConn).getClientAndSend /home/jenkins/agent/workspace/release_tidb_3.0/go/src/github.com/pingcap/tidb/store/tikv/client_batch.go:472 github.com/pingcap/tidb/store/tikv.(*batchConn).batchSendLoop /home/jenkins/agent/workspace/release_tidb_3.0/go/src/github.com/pingcap/tidb/store/tikv/client_batch.go:451 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1357"]
[2019/12/04 08:52:00.799 +08:00] [INFO] [region_cache.go:295] ["invalidate current region, because others failed on same store"] [region=714] [store=127.0.0.1:20160]
[2019/12/04 08:52:00.801 +08:00] [WARN] [client_batch.go:469] ["no available connections"] [target=127.0.0.1:20160]
[2019/12/04 08:52:00.801 +08:00] [INFO] [region_cache.go:902] ["mark store's regions need be refill"] [store=127.0.0.1:20160]
[2019/12/04 08:52:00.801 +08:00] [INFO] [region_cache.go:401] ["switch region peer to next due to send request fail"] [current="region ID: 714, meta: id:714 start_key:"t\200\000\000\000\000\000\0002_r\200\000\000\000\000\270\314\031" end_key:"t\200\000\000\000\000\000\0002_r\200\000\000\000\000\276z\345" region_epoch:<conf_ver:1 version:62 > peers:<id:715 store_id:1 > , peer: id:715 store_id:1 , addr: 127.0.0.1:20160, idx: 0"] [needReload=true] [error="no available connections"] [errorVerbose="no available connections github.com/pingcap/tidb/store/tikv.(*batchConn).getClientAndSend /home/jenkins/agent/workspace/release_tidb_3.0/go/src/github.com/pingcap/tidb/store/tikv/client_batch.go:472 github.com/pingcap/tidb/store/tikv.(*batchConn).batchSendLoop /home/jenkins/agent/workspace/release_tidb_3.0/go/src/github.com/pingcap/tidb/store/tikv/client_batch.go:451 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1357"]
[2019/12/04 08:52:00.802 +08:00] [INFO] [region_cache.go:295] ["invalidate current region, because others failed on same store"] [region=4] [store=127.0.0.1:20160]
[2019/12/04 08:52:00.811 +08:00] [WARN] [client_batch.go:469] ["no available connections"] [target=127.0.0.1:20160]
[2019/12/04 08:52:00.811 +08:00] [INFO] [region_cache.go:902] ["mark store's regions need be refill"] [store=127.0.0.1:20160]
[2019/12/04 08:52:00.811 +08:00] [INFO] [region_cache.go:401] ["switch region peer to next due to send request fail"] [current="region ID: 4, meta: id:4 end_key:"t\200\000\000\000\000\000\000\005" region_epoch:<conf_ver:1 version:2 > peers:<id:5 store_id:1 > , peer: id:5 store_id:1 , addr: 127.0.0.1:20160, idx: 0"] [needReload=true] [error="no available connections"] [errorVerbose="no available connections github.com/pingcap/tidb/store/tikv.(*batchConn).getClientAndSend /home/jenkins/agent/workspace/release_tidb_3.0/go/src/github.com/pingcap/tidb/store/tikv/client_batch.go:472 github.com/pingcap/tidb/store/tikv.(*batchConn).batchSendLoop /home/jenkins/agent/workspace/release_tidb_3.0/go/src/github.com/pingcap/tidb/store/tikv/client_batch.go:451 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1357"]

System: 16 cores, 32 GB RAM. The failure happens when importing 50 GB of data. TiDB version: 3.0.6; the same problem also occurs on a TiDB alpha version. Single-machine deployment, installed from the tidb-v3.0.6 package.

Hello, when asking a question please make sure to file it under the correct category; otherwise it may affect how quickly it gets answered.

Hello: judging from the errors, when a region is accessed the region cache needs to reload a newer version of it. This looks like a capacity/performance issue. On a single machine it is best to limit testing to functionality. Could you try reducing the import volume and first confirm that a smaller import completes normally? Thanks.
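
For example, a smaller sysbench import along the following lines could be used to confirm the basic flow works before scaling up. This is only a sketch, assuming sysbench 1.0 with the bundled oltp Lua scripts; the host, port, user, table count, and table size are placeholders to adjust for your environment:

# prepare a deliberately small data set first (placeholder values)
sysbench oltp_read_write \
  --mysql-host=127.0.0.1 --mysql-port=4000 --mysql-user=root --mysql-db=sbtest \
  --tables=8 --table-size=1000000 --threads=4 \
  prepare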

Attaching the log. Please advise whether there are any settings that could be used to avoid this kind of problem. I only have a single machine to work with.

prepare.tikv.log.zip (1.5 MB)

Could you check whether the PD and TiDB logs have corresponding errors at the same point in time?

First, the final failure is that tikv exited abnormally, and the error appears to be Deadline Exceeded. Question: what causes Deadline Exceeded?

[ERROR] [util.rs:287] ["request failed, retry"] [err="Grpc(RpcFailure(RpcStatus { status: DeadlineExceeded, details: Some("Deadline Exceeded") }))"]

Second, tidb.log also shows errors: [2019/12/05 04:26:05.389 +08:00] [ERROR] [client.go:301] ["[pd] failed updateLeader"] [error="failed to get leader from [http://127.0.0.1:2379]"] [errorVerbose="failed to get leader from [http://127.0.0.1:2379] github.com/pingcap/pd/client.(*client).updateLeader /home/jenkins/agent/workspace/release_tidb_3.0/go/pkg/mod/github.com/pingcap/pd@v1.1.0-beta.0.20190912093418-dc03c839debd/client/client.go:225 github.com/pingcap/pd/client.(*client).leaderLoop /home/jenkins/agent/workspace/release_tidb_3.0/go/pkg/mod/github.com/pingcap/pd@v1.1.0-beta.0.20190912093418-dc03c839debd/client/client.go:300 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1357"] [stack="github.com/pingcap/log.Error /home/jenkins/agent/workspace/release_tidb_3.0/go/pkg/mod/github.com/pingcap/log@v0.0.0-20190715063458-479153f07ebd/global.go:42 github.com/pingcap/pd/client.(*client).leaderLoop /home/jenkins/agent/workspace/release_tidb_3.0/go/pkg/mod/github.com/pingcap/pd@v1.1.0-beta.0.20190912093418-dc03c839debd/client/client.go:301"]

Also, early on tidb.log contains many write-conflict errors: [WARN] [txn.go:69] [RunInNewTxn] ["retry txn"=413000934802849823] ["original txn"=413000934802849823] [error="[kv:9007]Write conflict, txnStartTS=413000934802849823, conflictStartTS=413000934802849821, conflictCommitTS=413000934815956993, key=[]byte{0x6d, 0x4e, 0x65, 0x78, 0x74, 0x47, 0x6c, 0x6f, 0x62, 0xff, 0x61, 0x6c, 0x49, 0x44, 0x0, 0x0, 0x0, 0x0, 0xfb, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x73} primary=[]byte{0x6d, 0x4e, 0x65, 0x78, 0x74, 0x47, 0x6c, 0x6f, 0x62, 0xff, 0x61, 0x6c, 0x49, 0x44, 0x0, 0x0, 0x0, 0x0, 0xfb, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x73} [try again later]"]

In tikv.log, before the exit there were quite a few locked primary_lock errors from time to time:

[ERROR] [endpoint.rs:454] [error-response] [err="locked primary_lock: 7480000000000000135F6980000000000000010405BB465BB314000303800000000000003F lock_version: 413001150534254595 key: 7480000000000000175F69800000000000000103800000000000002D038000000000000000038000000000000004038000000000000000 lock_ttl: 3055 txn_size: 750"]
[2019/12/04 23:45:59.874 +08:00] [ERROR] [endpoint.rs:454] [error-response] [err="locked primary_lock: 7480000000000000135F6980000000000000010405BB465BB314000303800000000000003F lock_version: 413001150534254595 key: 7480000000000000175F69800000000000000103800000000000002D038000000000000000038000000000000004038000000000000000 lock_ttl: 3055 txn_size: 750"]

Right now this looks like a problem caused by PD being unreachable. Please check the PD logs.

1. The write-conflict errors in the tidb log mean that transactions conflicted. TiDB uses an optimistic transaction model, so conflicts are only detected at commit time. See the transaction-related documentation on the official website:
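
For reference, the retry behaviour for such optimistic-transaction conflicts can be inspected (and, if appropriate, relaxed) through system variables. This is only a sketch via the MySQL client, assuming the standard tidb_disable_txn_auto_retry and tidb_retry_limit variables of TiDB 3.0 and a TiDB server on 127.0.0.1:4000; the values shown are examples, not recommendations:

# check the current conflict-retry settings
mysql -h 127.0.0.1 -P 4000 -u root -e "SHOW VARIABLES LIKE 'tidb_%retry%'"
# example only: let TiDB automatically retry conflicting optimistic transactions
mysql -h 127.0.0.1 -P 4000 -u root -e "SET GLOBAL tidb_disable_txn_auto_retry = 0; SET GLOBAL tidb_retry_limit = 10"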

2. The [error="failed to get leader from [http://127.0.0.1:2379]"] errors in tidb.log may indicate that something went wrong while requesting a TSO from pd (a typical DML transaction in tidb needs to request a TSO from pd twice).
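
A quick way to confirm from the TiDB host whether PD is reachable and has an elected leader is to query PD's HTTP API directly. Standard PD v1 API endpoints are assumed here; adjust the address if PD does not listen on 127.0.0.1:2379:

# should return the health status of the PD members and the current leader
curl http://127.0.0.1:2379/pd/api/v1/health
curl http://127.0.0.1:2379/pd/api/v1/leader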

3. The locked primary_lock errors in the tikv log appear because tikv defines a lock TTL; if the lock is not released within the TTL, it is forcibly resolved, which produces these errors.

4. The prepare.tikv.log you provided above shows a large number of the following errors:

[2019/12/05 05:13:38.109 +08:00] [ERROR] [util.rs:444] ["connect failed"] [err="Grpc(RpcFailure(RpcStatus { status: DeadlineExceeded, details: Some("Deadline Exceeded") }))"] [endpoints=http://127.0.0.1:2379]

Our guess is that tikv's requests to pd got no response and the connection failed. Please confirm whether the pd service was healthy at that time and check the load on the server where pd runs. Also use pd-ctl to check the status of the pd nodes:

./pd-ctl -u http://host1:2379 -i
>> member
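
If the interactive prompt only shows >> with nothing else, that is expected; type member there and press Enter. Alternatively, if your pd-ctl build supports detach mode, a single command can be run non-interactively, for example (assuming the pd-ctl shipped with the same 3.0 release and the default PD address):

./pd-ctl -u http://127.0.0.1:2379 -d member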

Also, we strongly discourage deploying other services together with tikv on the same machine; under high concurrency this can lead to resource contention.

Thanks! I looked at prepare.tidb.log and found that write-conflict appeared just 111 milliseconds after tidb started: [2019/12/05 12:49:58.522 +08:00] [WARN] [txn.go:69] [RunInNewTxn] ["retry txn"=413013481452994599] ["original txn"=413013481452994599] [error="[kv:9007]Write conflict, txnStartTS=413013481452994599, conflictStartTS=413013481452994589,

I was using 16 threads. Could this large number of write-conflicts be an indirect cause of tikv eventually exiting abnormally? My machine has 16 CPUs.

In principle, that should not cause this problem.

After I run ./pd-ctl -u http://127.0.0.1:2379 -i there is no output at all, apart from a red >> prompt.

I am hitting repeated abnormal exits of tikv. With the same TiDB configuration files, on two machines with identical specs (same memory and so on), one machine runs without any problem while on the other tikv keeps exiting abnormally. Five seconds before the exit, available memory was down to only 476 MB, and at the moment of exit about 34 GB of memory was released. The machine has 24 CPUs and 128 GB of RAM, but I limited tikv + tidb + pd to 64 GB. I set the same limit on the other identically configured machine and the run succeeded there, yet on this machine it keeps failing; it has already failed three times. I have attached the relevant logs, please take a look.
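
For what it's worth, a simple loop like the following can record how much memory tikv-server is holding in the run-up to the next exit, which helps tell a memory-exhaustion kill apart from other failures. This is only a sketch; the process name, sampling interval, and output file are assumptions to adapt:

# sample per-process and system-wide memory every 5 seconds
while true; do
  date >> tikv-mem.log
  ps -C tikv-server -o pid,rss,vsz,comm >> tikv-mem.log   # tikv-server memory (RSS/VSZ)
  free -m >> tikv-mem.log                                  # overall free memory
  sleep 5
done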

sysbench itself reported no error and just kept running; the errors appeared around 4:09 a.m. on the 21st. It is a single-machine deployment with no inventory.ini; it was installed directly from the package, not with the cluster installation method. The logs are already attached, please look back through them. Now attaching the pd log as a 7z archive: prepare.tidb.log.7z (2.9 MB)

What exactly is the sysbench error? What is the current symptom? In which time window did the errors occur? Is the single-machine deployment a single tikv instance (if convenient, please also share inventory.ini)? Please send the pd leader log, tidb log, and tikv log for the time window of the errors. Also, a single-instance deployment is not recommended.

It was installed following your official docs: https://pingcap.com/docs/stable/how-to/deploy/from-tarball/production-environment/. We don't have more machines at the moment, so we are running the single-machine version for now. The key question is that with the same configuration, one machine runs fine while another machine with the same specs does not, which is very strange.

No error? Then what do you mean by "data initialization failed"? You said tikv exited abnormally multiple times, can you be more specific? Is the symptom that the tikv process disappears and then gets restarted automatically?

The sysbench process was still running, but the screen was full of tidb Deadline Exceeded errors. When I checked, the pd-server and tidb-server processes were still there, but the tikv-server process was gone.

检查 tikv 日志应该是在 21 号凌晨 4 点左右出现 Panic 问题, TikV 进程异常终止: [2019/12/21 04:20:02.380 +08:00] [FATAL] [lib.rs:499] ["calledOption::unwrap()on aNonevalue"] [backtrace="stack backtrace:\n 0: 0x55fdb265c89d - backtrace::backtrace::libunwind::trace::h958f5f3eb75b2917\n at /rust/registry/src/github.com-1ecc6299db9ec823/backtrace-0.2.3/src/backtrace/libunwind.rs:54\n - backtrace::backtrace::trace::hdf994f7eb3c12b81\n at /rust/registry/src/github.com-1ecc6299db9ec823/backtrace-0.2.3/src/backtrace/mod.rs:70\n 1: 0x55fdb26524a0 - tikv_util::set_panic_hook::{{closure}}::hd5e8404ff92ff733\n at /home/jenkins/.target/release/build/backtrace-e20a32a05fd0b8fe/out/capture.rs:79\n 2: 0x55fdb27fb86f - std::panicking::rust_panic_with_hook::h8d2408723e9a2bd4\n at src/libstd/panicking.rs:479\n 3: 0x55fdb27fb64d - std::panicking::continue_panic_fmt::hb2aaa9386c4e5e80\n at src/libstd/panicking.rs:382\n 4: 0x55fdb280aeb5 - rust_begin_unwind\n at src/libstd/panicking.rs:309\n 5: 0x55fdb2815a5b - core::panicking::panic_fmt::h79e840586f23493b\n at src/libcore/panicking.rs:85\n 6: 0x55fdb281725a - core::panicking::panic::h8bb9a06d7b2ed3d1\n at src/libcore/panicking.rs:49\n 7: 0x55fdb1c6a491 - futures::future::chain::Chain<A,B,C>::poll::h674e7b4e2cb49aa7\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/future/chain.rs:0\n - <futures::future::and_then::AndThen<A,B,F> as futures::future::Future>::poll::h2175c23617ecd455\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/future/and_then.rs:32\n 8: 0x55fdb1c695c9 - <alloc::boxed::Box<F> as futures::future::Future>::poll::h3d746d300c26f017\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/future/mod.rs:113\n - futures::future::chain::Chain<A,B,C>::poll::hb19987206bf3e327\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/future/chain.rs:26\n - <futures::future::then::Then<A,B,F> as futures::future::Future>::poll::hf140120822e0374d\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/future/then.rs:32\n 9: 0x55fdb1c66cc9 - futures::future::chain::Chain<A,B,C>::poll::h14a96c578c02d5dd\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/future/chain.rs:32\n - <futures::future::and_then::AndThen<A,B,F> as futures::future::Future>::poll::hd48c90c58af66e3c\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/future/and_then.rs:32\n 10: 0x55fdb1c63fa2 - <alloc::boxed::Box<F> as futures::future::Future>::poll::h10de61395e462d82\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/future/mod.rs:113\n - futures::future::chain::Chain<A,B,C>::poll::hed0ee42f5b4b1189\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/future/chain.rs:32\n - <futures::future::and_then::AndThen<A,B,F> as futures::future::Future>::poll::ha10e2931e91b58ec\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/future/and_then.rs:32\n - futures::future::chain::Chain<A,B,C>::poll::hb79ec5b054af472c\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/future/chain.rs:26\n - <futures::future::then::Then<A,B,F> as futures::future::Future>::poll::h56eb5546fa3b113c\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/future/then.rs:32\n - <futures::future::loop_fn::LoopFn<A,F> as futures::future::Future>::poll::h639346d9f59042fe\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/future/loop_fn.rs:93\n - futures::future::chain::Chain<A,B,C>::poll::hcdcbefef1d03bf4d\n at 
/rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/future/chain.rs:26\n - <futures::future::then::Then<A,B,F> as futures::future::Future>::poll::h9434975c953d98db\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/future/then.rs:32\n 11: 0x55fdb1c6395e - <alloc::boxed::Box<F> as futures::future::Future>::poll::h3d746d300c26f017\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/future/mod.rs:113\n - <futures::future::map_err::MapErr<A,F> as futures::future::Future>::poll::hd7d6ed845be34a69\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/future/map_err.rs:30\n 12: 0x55fdb2669a4b - <alloc::boxed::Box<F> as futures::future::Future>::poll::h2a3b9c80db96acf9\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/future/mod.rs:113\n - futures::task_impl::Spawn<T>::poll_future_notify::{{closure}}::h33ba1742c245e772\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/task_impl/mod.rs:329\n - futures::task_impl::Spawn<T>::enter::{{closure}}::h116b4d045591632d\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/task_impl/mod.rs:399\n - futures::task_impl::std::set::h463cb97e96339e1e\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/task_impl/std/mod.rs:83\n - futures::task_impl::Spawn<T>::enter::h6ed16965cf57890f\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/task_impl/mod.rs:399\n - futures::task_impl::Spawn<T>::poll_fn_notify::hae18dabfabf026dc\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/task_impl/mod.rs:291\n - futures::task_impl::Spawn<T>::poll_future_notify::ha4d09a49c0457371\n at /rust/registry/src/github.com-1ecc6299db9ec823/futures-0.1.29/src/task_impl/mod.rs:329\n - tokio_current_thread::scheduler::Scheduled<U>::tick::h195a756a848f0a86\n at /rust/registry/src/github.com-1ecc6299db9ec823/tokio-current-thread-0.1.4/src/scheduler.rs:354\n - tokio_current_thread::scheduler::Scheduler<U>::tick::{{closure}}::h892f3b40d1e8a5b8\n at /rust/registry/src/github.com-1ecc6299db9ec823/tokio-current-thread-0.1.4/src/scheduler.rs:333\n - tokio_current_thread::Borrow<U>::enter::{{closure}}::{{closure}}::hb6a42ef102b8ecea\n at /rust/registry/src/github.com-1ecc6299db9ec823/tokio-current-thread-0.1.4/src/lib.rs:779\n - tokio_current_thread::CurrentRunner::set_spawn::he63a5614fa2c03be\n at /rust/registry/src/github.com-1ecc6299db9ec823/tokio-current-thread-0.1.4/src/lib.rs:816\n - tokio_current_thread::Borrow<U>::enter::{{closure}}::h0a76cad02a43041e\n at /rust/registry/src/github.com-1ecc6299db9ec823/tokio-current-thread-0.1.4/src/lib.rs:778\n - std::thread::local::LocalKey<T>::try_with::ha4194ae4af391780\n at /rustc/0e4a56b4b04ea98bb16caada30cb2418dd06e250/src/libstd/thread/local.rs:299\n - std::thread::local::LocalKey<T>::with::h45d11ed731a0d6ca\n at /rustc/0e4a56b4b04ea98bb16caada30cb2418dd06e250/src/libstd/thread/local.rs:245\n - tokio_current_thread::Borrow<U>::enter::hbe22e09dee9d218f\n at /rust/registry/src/github.com-1ecc6299db9ec823/tokio-current-thread-0.1.4/src/lib.rs:776\n - tokio_current_thread::scheduler::Scheduler<U>::tick::hbd3c5c5bf72ae1dc\n at /rust/registry/src/github.com-1ecc6299db9ec823/tokio-current-thread-0.1.4/src/scheduler.rs:333\n - tokio_current_thread::Entered<P>::tick::hd9a157d23401ed53\n at /rust/registry/src/github.com-1ecc6299db9ec823/tokio-current-thread-0.1.4/src/lib.rs:605\n 13: 0x55fdb2667aa5 - tokio_current_thread::Entered<P>::turn::h33dad3902441d64a\n at 
/rust/registry/src/github.com-1ecc6299db9ec823/tokio-current-thread-0.1.4/src/lib.rs:530\n - tokio_core::reactor::Core::poll::{{closure}}::{{closure}}::{{closure}}::{{closure}}::h4922d65d4680f981\n at /rust/registry/src/github.com-1ecc6299db9ec823/tokio-core-0.1.17/src/reactor/mod.rs:298\n - scoped_tls::ScopedKey<T>::set::hbcabe04be04df006\n at /rust/registry/src/github.com-1ecc6299db9ec823/scoped-tls-0.1.2/src/lib.rs:155\n - tokio_core::reactor::Core::poll::{{closure}}::{{closure}}::{{closure}}::h4adae9d51a1969ae\n at /rust/registry/src/github.com-1ecc6299db9ec823/tokio-core-0.1.17/src/reactor/mod.rs:297\n - tokio_timer::timer::handle::with_default::{{closure}}::h94ab8180039ef66f\n at /rust/registry/src/github.com-1ecc6299db9ec823/tokio-timer-0.2.8/src/timer/handle.rs:94\n - std::thread::local::LocalKey<T>::try_with::h104b555fa2307e17\n at /rustc/0e4a56b4b04ea98bb16caada30cb2418dd06e250/src/libstd/thread/local.rs:299\n - std::thread::local::LocalKey<T>::with::hdb8468c43ae74ffa\n at /rustc/0e4a56b4b04ea98bb16caada30cb2418dd06e250/src/libstd/thread/local.rs:245\n - tokio_timer::timer::handle::with_default::h21bf8e045ef1d80b\n at /rust/registry/src/github.com-1ecc6299db9ec823/tokio-timer-0.2.8/src/timer/handle.rs:81\n - tokio_core::reactor::Core::poll::{{closure}}::{{closure}}::h15168c6a9ced5e63\n at /rust/registry/src/github.com-1ecc6299db9ec823/tokio-core-0.1.17/src/reactor/mod.rs:275\n - tokio_executor::global::with_default::{{closure}}::hd52e06df026399c8\n at /rust/registry/src/github.com-1ecc6299db9ec823/tokio-executor-0.1.7/src/global.rs:209\n - std::thread::local::LocalKey<T>::try_with::h6f71bb8e22515b47\n at /rustc/0e4a56b4b04ea98bb16caada30cb2418dd06e250/src/libstd/thread/local.rs:299\n - std::thread::local::LocalKey<T>::with::h258650ceecacf761\n at /rustc/0e4a56b4b04ea98bb16caada30cb2418dd06e250/src/libstd/thread/local.rs:245\n - tokio_executor::global::with_default::hf2581dedb2b141b6\n at /rust/registry/src/github.com-1ecc6299db9ec823/tokio-executor-0.1.7/src/global.rs:178\n - tokio_core::reactor::Core::poll::{{closure}}::h15ea9c6ca8818c72\n at /rust/registry/src/github.com-1ecc6299db9ec823/tokio-core-0.1.17/src/reactor/mod.rs:274\n - tokio_reactor::with_default::{{closure}}::h03b70bd912075401\n at /rust/registry/src/github.com-1ecc6299db9ec823/tokio-reactor-0.1.7/src/lib.rs:229\n - std::thread::local::LocalKey<T>::try_with::h0535e93f78b261c8\n at /rustc/0e4a56b4b04ea98bb16caada30cb2418dd06e250/src/libstd/thread/local.rs:299\n - std::thread::local::LocalKey<T>::with::hb65b46823b36cfa5\n at /rustc/0e4a56b4b04ea98bb16caada30cb2418dd06e250/src/libstd/thread/local.rs:245\n - tokio_reactor::with_default::hb0c860d19d6c3b96\n at /rust/registry/src/github.com-1ecc6299db9ec823/tokio-reactor-0.1.7/src/lib.rs:212\n - tokio_core::reactor::Core::poll::hdcdf422dce6709d8\n at /rust/registry/src/github.com-1ecc6299db9ec823/tokio-core-0.1.17/src/reactor/mod.rs:273\n 14: 0x55fdb1c49d4d - tokio_core::reactor::Core::run::h0a9bb47567cb0beb\n at /rust/registry/src/github.com-1ecc6299db9ec823/tokio-core-0.1.17/src/reactor/mod.rs:248\n - tikv_util::worker::future::poll::h896781831e3924bd\n at /home/jenkins/agent/workspace/release_tidb_3.0/tikv/components/tikv_util/src/worker/future.rs:109\n - tikv_util::worker::future::Worker<T>::start::{{closure}}::hcf04658523ad8eef\n at /home/jenkins/agent/workspace/release_tidb_3.0/tikv/components/tikv_util/src/worker/future.rs:140\n - std::sys_common::backtrace::__rust_begin_short_backtrace::h8d52056ba3c276a3\n at 
/rustc/0e4a56b4b04ea98bb16caada30cb2418dd06e250/src/libstd/sys_common/backtrace.rs:77\n 15: 0x55fdb1c49835 - std::thread::Builder::spawn_unchecked::{{closure}}::{{closure}}::hc21a598794b32d74\n at /rustc/0e4a56b4b04ea98bb16caada30cb2418dd06e250/src/libstd/thread/mod.rs:470\n - <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once::h7abd431f0da61f95\n at /rustc/0e4a56b4b04ea98bb16caada30cb2418dd06e250/src/libstd/panic.rs:309\n - std::panicking::try::do_call::ha0a3746c94fb9fd3\n at /rustc/0e4a56b4b04ea98bb16caada30cb2418dd06e250/src/libstd/panicking.rs:294\n - std::panicking::try::heabea072dabec34d\n at /rustc/0e4a56b4b04ea98bb16caada30cb2418dd06e250//src/libpanic_abort/lib.rs:29\n - std::panic::catch_unwind::h2fb89e04d9cc0b71\n at /rustc/0e4a56b4b04ea98bb16caada30cb2418dd06e250/src/libstd/panic.rs:388\n - std::thread::Builder::spawn_unchecked::{{closure}}::hd5024f8a5e7b930f\n at /rustc/0e4a56b4b04ea98bb16caada30cb2418dd06e250/src/libstd/thread/mod.rs:469\n - core::ops::function::FnOnce::call_once{{vtable.shim}}::hbcd80c8139bd2636\n at /rustc/0e4a56b4b04ea98bb16caada30cb2418dd06e250/src/libcore/ops/function.rs:231\n 16: 0x55fdb2809ebe - <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once::he71721d2d956d451\n at /rustc/0e4a56b4b04ea98bb16caada30cb2418dd06e250/src/liballoc/boxed.rs:746\n 17: 0x55fdb280c1eb - <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once::he520045b8d28ce5c\n at /rustc/0e4a56b4b04ea98bb16caada30cb2418dd06e250/src/liballoc/boxed.rs:746\n - std::sys_common::thread::start_thread::h2e98d1272dc6d74b\n at src/libstd/sys_common/thread.rs:13\n - std::sys::unix::thread::Thread::new::thread_start::h18485805666ccd3c\n at src/libstd/sys/unix/thread.rs:79\n 18: 0x7f4b04ddedd4 - start_thread\n 19: 0x7f4b044e502c - __clone\n 20: 0x0 - <unknown>"] [location=src/libcore/option.rs:347] [thread_name=pd-worker]

Then how can this problem be avoided? Why does it not occur on one machine but does occur on another machine with the same configuration?

Judging from the tikv thread name pd-worker, this is probably related to pd. The pd log shows a large number of request timeouts at that time. What was the network like then? Also, judging from the logs, is there anything wrong with the pd server itself? Is ntpd (or something similar) configured? The request timeouts may be caused by the network, or by the clocks drifting too far apart.
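
To check the clock-sync side of that, the usual commands are below; which one applies depends on whether the host runs ntpd, chrony, or systemd-timesyncd, so run whichever is available on both machines and compare:

timedatectl status    # shows whether the system clock is reported as synchronized
ntpstat               # ntpd only
chronyc tracking      # chrony only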

The tidb log cannot be downloaded; the file is corrupted and cannot be opened.