His reply:
The ticket was accepted and a link to a document was provided. However, since my environment is an on-premises centralized deployment rather than one on Huawei Cloud, they suggested I either use the Huawei Cloud environment or the open-source openGauss. Reading between the lines, I take this to mean it falls outside their scope of responsibility and they cannot help any further.

Still, based on the document they provided, I ran a test and found that the hint "set(query_dop 2) scandop(t2 2) scandop(t1 1)" partially achieves my requirement. The test process and execution plan are shown below. What I still don't know is whether there is a way to make only the access to table t2 parallel, while keeping all the other steps serial.

testdb=> explain performance select /*+ set(query_dop 2) scandop(t2 2) scandop(t1 1) */ * from t1,t2 where t1.id=t2.id;
 id |                       operation                        |       A-time        | A-rows  | E-rows  | E-distinct |  Peak Memory   |  A-width  | E-width |        E-costs
----+--------------------------------------------------------+---------------------+---------+---------+------------+----------------+-----------+---------+------------------------
  1 | ->  Streaming(type: LOCAL GATHER dop: 1/2)             | [7058.094,7058.094] | 1000000 | 999993  |            | [294KB, 294KB] |           |    1048 | 229194.974..943552.538
  2 |   ->  Hash Join (3,5)                                  | [6175.942,6273.788] | 1000000 | 999993  | 1, 1000012 | [138KB, 138KB] |           |    1048 | 229194.974..694091.394
  3 |     ->  Streaming(type: LOCAL REDISTRIBUTE dop: 2/1)   | [2379.159,2421.134] | 1000000 | 999993  |            | [279KB, 279KB] |           |     524 | 0.000..260266.474
  4 |       ->  Seq Scan on test.t1                          | [1692.569,1692.569] | 1000000 | 999993  |            | [35KB, 35KB]   |           |     524 | 0.000..86923.930
  5 |     ->  Hash                                           | [1539.794,1544.034] | 1000000 | 1000012 |            | [17MB, 17MB]   | [544,544] |     524 | 157025.899..157025.899
  6 |       ->  Streaming(type: LOCAL REDISTRIBUTE dop: 2/2) | [795.568,818.567]   | 1000000 | 1000012 |            | [289KB, 289KB] |           |     524 | 0.000..157025.899
  7 |         ->  Seq Scan on test.t2                        | [784.204,802.118]   | 1000000 | 1000012 |            | [35KB, 35KB]   |           |     524 | 0.000..44462.060
(7 rows)

          Predicate Information (identified by plan id)
-------------------------------------------------------------------
   2 --Hash Join (3,5)
         Hash Cond: (t1.id = t2.id), (LLVM Optimized, Jit Execute)
(2 rows)

              Memory Information (identified by plan id)
---------------------------------------------------------------------------
   1 --Streaming(type: LOCAL GATHER dop: 1/2)
         Local Node Peak Memory: 294KB, Estimate Memory: 64MB
         Local Node Network Poll Time: 0.000; Data Deserialize Time: 0.000
   2 --Hash Join (3,5)
         Local Node[worker 0] Peak Memory: 138KB, Estimate Memory: 32MB
         Local Node[worker 1] Peak Memory: 138KB, Estimate Memory: 32MB
   3 --Streaming(type: LOCAL REDISTRIBUTE dop: 2/1)
         Local Node[worker 0] Peak Memory: 279KB, Estimate Memory: 32MB
         Local Node[worker 1] Peak Memory: 279KB, Estimate Memory: 32MB
         Local Node Network Poll Time: 0.000; Data Deserialize Time: 0.000
   4 --Seq Scan on test.t1
         Local Node Peak Memory: 35KB, Estimate Memory: 64MB
   5 --Hash
         Local Node[worker 0] Peak Memory: 17949KB, Width: 544
         Local Node[worker 1] Peak Memory: 18109KB, Width: 544
         Local Node Buckets: 65536 Batches: 16 Memory Usage: 17067kB
   6 --Streaming(type: LOCAL REDISTRIBUTE dop: 2/2)
         Local Node[worker 0] Peak Memory: 289KB, Estimate Memory: 32MB
         Local Node[worker 1] Peak Memory: 289KB, Estimate Memory: 32MB
         Local Node Network Poll Time: 0.000; Data Deserialize Time: 0.000
   7 --Seq Scan on test.t2
         Local Node[worker 0] Peak Memory: 35KB, Estimate Memory: 32MB
         Local Node[worker 1] Peak Memory: 35KB, Estimate Memory: 32MB
(23 rows)

              Targetlist Information (identified by plan id)
----------------------------------------------------------------------------
   1 --Streaming(type: LOCAL GATHER dop: 1/2)
         Output: t1.id, t1.c1, t1.dt, t1.mark, t2.id, t2.c1, t2.dt, t2.mark
   2 --Hash Join (3,5)
         Output: t1.id, t1.c1, t1.dt, t1.mark, t2.id, t2.c1, t2.dt, t2.mark
   3 --Streaming(type: LOCAL REDISTRIBUTE dop: 2/1)
         Output: t1.id, t1.c1, t1.dt, t1.mark
         Distribute Key: t1.id
   4 --Seq Scan on test.t1
         Output: t1.id, t1.c1, t1.dt, t1.mark
   5 --Hash
         Output: t2.id, t2.c1, t2.dt, t2.mark
   6 --Streaming(type: LOCAL REDISTRIBUTE dop: 2/2)
         Output: t2.id, t2.c1, t2.dt, t2.mark
         Distribute Key: t2.id
   7 --Seq Scan on test.t2
         Output: t2.id, t2.c1, t2.dt, t2.mark
(16 rows)

                                  Datanode Information (identified by plan id)
-----------------------------------------------------------------------------------------------------------------------------------
   1 --Streaming(type: LOCAL GATHER dop: 1/2)
         Local Node (actual time=1602.040..7058.094 rows=1000000 loops=1)
         Local Node (Buffers: 0)
         Local Node (CPU: ex c/r=16093100697471, ex row=1000000, ex cyc=16093100697471641600, inc cyc=16093100697471641600)
   2 --Hash Join (3,5)
         Local Node[worker 0] (actual time=1545.343..6273.788 rows=499741 loops=1)
         Local Node[worker 1] (actual time=1540.605..6175.942 rows=500259 loops=1)
         Local Node[worker 0] (Buffers: shared hit=1 read=1 temp read=62730 written=62700)
         Local Node[worker 1] (Buffers: shared hit=2 temp read=62754 written=62724)
         Local Node[worker 0] (CPU: ex c/r=3227145991, ex row=999482, ex cyc=3225474329898264, inc cyc=8042456055088553984)
         Local Node[worker 1] (CPU: ex c/r=3093384819, ex row=1000518, ex cyc=3094987192674350, inc cyc=8050658606824935424)
   3 --Streaming(type: LOCAL REDISTRIBUTE dop: 2/1)
         Local Node[worker 0] (actual time=0.109..2421.134 rows=499741 loops=1)
         Local Node[worker 1] (actual time=0.136..2379.159 rows=500259 loops=1)
         Local Node[worker 0] (Buffers: 0)
         Local Node[worker 1] (Buffers: 0)
         Local Node[worker 0] (CPU: ex c/r=16086761939650, ex row=499741, ex cyc=8039214498482812928, inc cyc=8039214498482812928)
         Local Node[worker 1] (CPU: ex c/r=16086762131928, ex row=500259, ex cyc=8047547537356172288, inc cyc=8047547537356172288)
   4 --Seq Scan on test.t1
         Local Node (actual time=0.059..1692.569 rows=1000000 loops=1)
         Local Node (Buffers: shared hit=242 read=76682)
         Local Node (CPU: ex c/r=16086738030209, ex row=1000000, ex cyc=16086738030209087488, inc cyc=16086738030209087488)
   5 --Hash
         Local Node[worker 0] (actual time=1544.034..1544.034 rows=499741 loops=1)
         Local Node[worker 1] (actual time=1539.794..1539.794 rows=500259 loops=1)
         Local Node[worker 0] (Buffers: temp written=31335)
         Local Node[worker 1] (Buffers: temp written=31347)
         Local Node[worker 0] (CPU: ex c/r=-16080346019145, ex row=499741, ex cyc=-8036008199954017280, inc cyc=16082275842792)
         Local Node[worker 1] (CPU: ex c/r=-16080350009104, ex row=500259, ex cyc=-8044339815204613120, inc cyc=16082276088786)
   6 --Streaming(type: LOCAL REDISTRIBUTE dop: 2/2)
         Local Node[worker 0] (actual time=0.182..795.568 rows=499741 loops=1)
         Local Node[worker 1] (actual time=0.101..818.567 rows=500259 loops=1)
         Local Node[worker 0] (Buffers: 0)
         Local Node[worker 1] (Buffers: 0)
         Local Node[worker 0] (CPU: ex c/r=16080378200367, ex row=499741, ex cyc=8036024282229860352, inc cyc=8036024282229860352)
         Local Node[worker 1] (CPU: ex c/r=16080382157004, ex row=500259, ex cyc=8044355897480701952, inc cyc=8044355897480701952)
   7 --Seq Scan on test.t2
         Local Node[worker 0] (actual time=0.044..802.118 rows=500500 loops=1)
         Local Node[worker 1] (actual time=0.127..784.204 rows=499500 loops=1)
         Local Node[worker 0] (Buffers: shared hit=110 read=38390)
         Local Node[worker 1] (Buffers: shared hit=311 read=38113)
         Local Node[worker 0] (CPU: ex c/r=16080358842043, ex row=500500, ex cyc=8048219600442777600, inc cyc=8048219600442777600)
         Local Node[worker 1] (CPU: ex c/r=16080396667236, ex row=499500, ex cyc=8032158135284789248, inc cyc=8032158135284789248)
(43 rows)

             User Define Profiling
--------------------------------------------------------------
 Segment Id: 1  Track name: Datanode build connection
        Local Node: (time=0.000 total_calls=1 loops=1)
 Plan Node id: 1  Track name: Datanode start up stream thread
        Local Node: (time=0.941 total_calls=1 loops=1)
(4 rows)

         ====== Query Summary =====
-----------------------------------------
 Datanode executor start time: 11.067 ms
 Datanode executor run time: 7157.832 ms
 Datanode executor end time: 0.173 ms
 Planner runtime: 0.626 ms
 Query Id: 7881299347900960
 Total runtime: 7169.125 ms
(6 rows)

testdb=>
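For anyone who wants to reproduce the experiment, here is a minimal setup sketch. The column types and the row generator are my own assumptions (the plan above only reveals the column names id, c1, dt, mark and roughly 1,000,000 rows per table); the hint syntax is copied verbatim from the test above.

```sql
-- Hypothetical reproduction of the test tables. Column types and the
-- generate_series() population are assumptions, not from the original post.
CREATE TABLE test.t1 (id int, c1 varchar(600), dt timestamp, mark varchar(20));
CREATE TABLE test.t2 (id int, c1 varchar(600), dt timestamp, mark varchar(20));

INSERT INTO test.t1
SELECT n, repeat('x', 500), now(), 'a' FROM generate_series(1, 1000000) AS n;
INSERT INTO test.t2
SELECT n, repeat('x', 500), now(), 'b' FROM generate_series(1, 1000000) AS n;
ANALYZE test.t1;
ANALYZE test.t2;

-- The hint combination that partially worked: session-level dop 2,
-- t2 scanned with dop 2, t1 scanned with dop 1.
EXPLAIN PERFORMANCE
SELECT /*+ set(query_dop 2) scandop(t2 2) scandop(t1 1) */ *
FROM t1, t2
WHERE t1.id = t2.id;
```

Note that even with scandop(t1 1), the plan still wraps the t1 scan in a Streaming (LOCAL REDISTRIBUTE dop: 2/1) operator so the serial scan can feed the dop-2 hash join; that is exactly the part I have not found a hint to suppress.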