The Structured Streaming execution plan:
(logged 12-09-2019 15:41:46 CST, StructuredStream, INFO; repeated log prefixes stripped)

WriteToDataSourceV2 org.apache.spark.sql.execution.streaming.sources.MicroBatchWriter@7dfa0785
+- *(1) BroadcastHashJoin [cast(id#28 as bigint)], [id#33L], LeftOuter, BuildRight
   :- *(1) SerializeFromObject [assertnotnull(input[0, com.hydee.entity.BinglogTest, true]).getId AS id#28, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(input[0, com.hydee.entity.BinglogTest, true]).getName, true, false) AS name#29]
   :  +- *(1) MapElements com.hydee.App$$Lambda$39/1684572536@46a97805, obj#27: com.hydee.entity.BinglogTest
   :     +- *(1) Filter com.hydee.App$$Lambda$38/2017890623@667dd150.call
   :        +- *(1) DeserializeToObject createexternalrow(value#328, staticinvoke(class org.apache.spark.sql.catalyst.util.DateTimeUtils$, ObjectType(class java.sql.Timestamp), toJavaTimestamp, timestamp#335, true, false), StructField(value,BinaryType,true), StructField(timestamp,TimestampType,true)), obj#26: org.apache.spark.sql.Row
   :           +- *(1) Project [value#328, timestamp#335]
   :              +- Scan ExistingRDD[key#326,value#328,topic#330,partition#332,offset#333L,timestamp#335,timestampType#337]
   +- BroadcastExchange HashedRelationBroadcastMode(List(input[0, bigint, true]))
      +- TiSpark CoprocessorRDD{[table: dm_test] TableScan, Columns: [id], [name], KeyRange: ([t200 00 00 00 00 00 03333_r 00 00 00 00 00 00 00 00], [t200 00 00 00 00 00 03333_s 00 00 00 00 00 00 00 00]), startTs: 411113641089695745}
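A job shaped like the plan above can be sketched as follows. This is a hypothetical reconstruction: the broker address, topic name, filter predicate, and parsing logic are assumptions; only the operator sequence visible in the plan (Kafka scan → Project [value, timestamp] → Filter → MapElements into `com.hydee.entity.BinglogTest` → broadcast left-outer join with the TiSpark table `dm_test` → streaming sink) is taken from the log. It requires a running Spark cluster with the Kafka and TiSpark connectors, so it is a sketch rather than a runnable snippet.

```java
package com.hydee;

// Assumed to exist: the author's bean with getId()/getName(),
// as referenced by SerializeFromObject in the plan.
import com.hydee.entity.BinglogTest;

import org.apache.spark.api.java.function.FilterFunction;
import org.apache.spark.api.java.function.MapFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class App {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("StructuredStream")
                .getOrCreate();

        // The Kafka source exposes key/value/topic/partition/offset/timestamp,
        // which is exactly the Scan ExistingRDD row at the bottom of the plan.
        Dataset<Row> kafka = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker:9092") // assumption
                .option("subscribe", "binlog-topic")              // assumption
                .load()
                .select("value", "timestamp"); // -> *(1) Project [value, timestamp]

        // A FilterFunction lambda compiles to App$$Lambda$38...call in the plan;
        // MapElements deserializes each record into the BinglogTest bean.
        Dataset<BinglogTest> events = kafka
                .filter((FilterFunction<Row>) row -> !row.isNullAt(0)) // assumed predicate
                .map((MapFunction<Row, BinglogTest>) App::parse,
                        Encoders.bean(BinglogTest.class));

        // Dimension table read through TiSpark (CoprocessorRDD over dm_test);
        // small enough that Spark plans BroadcastExchange + BroadcastHashJoin (BuildRight).
        Dataset<Row> dim = spark.sql("SELECT id, name FROM dm_test");

        // cast(id as bigint) matches the join keys [cast(id#28 as bigint)], [id#33L].
        Dataset<Row> joined = events.toDF().alias("e").join(
                dim.alias("d"),
                org.apache.spark.sql.functions.col("e.id").cast("bigint")
                        .equalTo(org.apache.spark.sql.functions.col("d.id")),
                "left_outer");

        // The actual sink is a DataSourceV2 MicroBatchWriter; console is a stand-in.
        StreamingQuery query = joined.writeStream()
                .format("console")
                .start();
        query.awaitTermination();
    }

    private static BinglogTest parse(Row row) {
        // Placeholder: decode the binlog payload from row.getAs("value").
        return new BinglogTest();
    }
}
```

Note the join direction: the small TiSpark table is on the build side (`BuildRight`), so each micro-batch streams Kafka rows past an in-memory broadcast of `dm_test` instead of shuffling the stream.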