【TiDB Environment】Testing
【TiDB Version】v5.3.0
【Problem: symptoms and impact】
An UPDATE fails with an `entry too large` error. According to the documentation this is caused by the transaction entry being too large, but the error persists even after adjusting the related parameters (`txn-entry-size-limit`, `txn-total-size-limit`, `max_allowed_packet`), because the BLOB written by the UPDATE already exceeds the 120 MB single-entry limit: 378224214 / (2 × 1024 × 1024) ≈ 180.3 MB.
How can this be worked around? Can it be solved by changing TiDB parameters?
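For reference, the parameter changes above were roughly of this form (values illustrative only; note that per the error message the hard cap on a single entry is 125829120 bytes = 120 MiB, so no configuration value can admit a ~180 MiB entry):

```toml
# tidb.toml -- illustrative values, not a recommendation
[performance]
# Maximum size of a single key-value entry in a transaction.
# Hard upper bound is 120 MiB (125829120 bytes), the value in the error log.
txn-entry-size-limit = 125829120
# Maximum total size of a single transaction.
txn-total-size-limit = 1073741824
```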
Also, while reading the MySQL documentation I came across this passage:
- The internal representation of a MySQL table has a maximum row size limit of 65,535 bytes, even if the storage engine is capable of supporting larger rows. BLOB and TEXT columns only contribute 9 to 12 bytes toward the row size limit because their contents are stored separately from the rest of the row.
It looks like, when computing the row length, a BLOB is reduced to something like a pointer. Does TiDB adopt the same strategy? If it does, then by that logic my operation should not be able to exceed the size limit (this is not a wide table); if it does not, is there a way to enable such a strategy?
【Attachments: screenshots/logs/monitoring】
Log:
[txn_mode=PESSIMISTIC] [err="[kv:8025]entry too large, the max entry size is 125829120, the size of data is 189200959
Table schema:
CREATE TABLE `table_optimizing_process` (
...
`rewrite_input` longblob DEFAULT NULL COMMENT 'rewrite files input',
...
PRIMARY KEY (`process_id`) /*T![clustered_index] CLUSTERED */,
KEY `table_index` (`table_id`,`plan_time`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin COMMENT='History of optimizing after each commit';
SQL statement:
UPDATE table_optimizing_process SET rewrite_input = ...(len: 378224214)
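One application-side workaround I am considering (a sketch under my own assumptions, not something from the docs) is to split the oversized BLOB into chunks stored as separate rows in a side table, so that each write stays well under `txn-entry-size-limit`. The table and column names (`rewrite_input_chunk`, `process_id`, `seq`, `chunk`) are hypothetical:

```python
# Sketch: split an oversized payload into pieces so that each row written
# to TiDB stays under the 120 MiB txn-entry-size-limit hard cap.
# The side table rewrite_input_chunk(process_id, seq, chunk) is hypothetical.

CHUNK_SIZE = 64 * 1024 * 1024  # 64 MiB, comfortably below the 120 MiB cap


def split_chunks(data: bytes, chunk_size: int = CHUNK_SIZE) -> list[bytes]:
    """Split data into consecutive pieces of at most chunk_size bytes."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]


def store_chunks(cursor, process_id: int, data: bytes) -> None:
    """Replace the single huge UPDATE with one small INSERT per chunk."""
    cursor.execute(
        "DELETE FROM rewrite_input_chunk WHERE process_id = %s", (process_id,)
    )
    for seq, piece in enumerate(split_chunks(data)):
        cursor.execute(
            "INSERT INTO rewrite_input_chunk (process_id, seq, chunk) "
            "VALUES (%s, %s, %s)",
            (process_id, seq, piece),
        )


def load_chunks(cursor, process_id: int) -> bytes:
    """Reassemble the payload by concatenating the chunks in order."""
    cursor.execute(
        "SELECT chunk FROM rewrite_input_chunk "
        "WHERE process_id = %s ORDER BY seq",
        (process_id,),
    )
    return b"".join(row[0] for row in cursor.fetchall())
```

The cost is a read-side concatenation instead of a single column read, but every individual entry stays far below the limit.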