Introduction: a hint is a suggestion or directive to the optimizer. In the database it changes the way a SQL statement is executed: with hints you can force SQL to run exactly the way you specify. Hints are generally used for performance diagnosis and tuning, and are not recommended in regular application code.
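For orientation, a hint is written as a /*+ ... */ comment placed right after the SQL verb. The sketch below gathers the hints exercised in this note in one place; the tables t1/t2 and index idx_t2 are throw-away objects invented purely for illustration and are not used in the exercises.
create table t1 (id number primary key, txt varchar2(30));
create table t2 (id number, txt varchar2(30));
create index idx_t2 on t2(id);
select /*+ full(t2) */ * from t2 where id > 0;                        -- force a full table scan
select /*+ index(t2 idx_t2) */ * from t2 where id > 0;                -- force access through idx_t2
select /*+ use_nl(t1 t2) */ t1.* from t1, t2 where t1.id = t2.id;     -- nested loop join
select /*+ use_hash(t1 t2) */ t1.* from t1, t2 where t1.id = t2.id;   -- hash join
select /*+ use_merge(t1 t2) */ t1.* from t1, t2 where t1.id = t2.id;  -- sort merge join
select /*+ leading(t2 t1) */ t1.* from t1, t2 where t1.id = t2.id;    -- fix the join order
select /*+ cardinality(t2 1000000) */ * from t2;                      -- override the row estimate for t2
insert /*+ append */ into t2 select * from t2;                        -- direct-path load
commit;                                                               -- required before t2 can be queried again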
1. Write a SQL statement for which a full table scan is more efficient than index access, and show the execution plan of each approach.
LEO1@LEO1> create table leo1 as select * from dba_objects;          create the leo1 table
Table created.
LEO1@LEO1> create index idx_leo1 on leo1(object_id);          create an index on the object_id column
Index created.
LEO1@LEO1> execute dbms_stats.gather_table_stats('LEO1','LEO1',cascade=>true);          gather statistics on the table and its index
PL/SQL procedure successfully completed.
LEO1@LEO1> select count(*) from leo1;          the table has 71958 rows
COUNT(*)
---------------
71958
LEO1@LEO1> select /*+ full(leo1) */ * from leo1 where object_id>100;
71859 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 2716644435
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 71862 | 6807K| 287 (1)| 00:00:04 |
|* 1 | TABLE ACCESS FULL| LEO1 | 71862 | 6807K| 287 (1)| 00:00:04 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("OBJECT_ID">100)
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
5762 consistent gets          full scan: 5762 consistent reads
0 physical reads
0 redo size
3715777 bytes sent via SQL*Net to client
53214 bytes received via SQL*Net from client
4792 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
71859 rows processed
LEO1@LEO1> select /*+ index(leo1 idx_leo1) */ * from leo1 where object_id>100;
71859 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 1434365503
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 71862 | 6807K| 1232 (1)| 00:00:15 |
| 1 | TABLE ACCESS BY INDEX ROWID| LEO1 | 71862 | 6807K| 1232 (1)| 00:00:15 |
|* 2 | INDEX RANGE SCAN | IDX_LEO1 | 71862 | | 160 (0)| 00:00:02 |
----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("OBJECT_ID">100)
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
10735 consistent gets          index access: 10735 consistent reads
0 physical reads
0 redo size
8241805 bytes sent via SQL*Net to client
53214 bytes received via SQL*Net from client
4792 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
71859 rows processed
Summary: as the comparison shows, for the same set of rows a full table scan is not always the worst performer. It depends on what fraction of the total rows the query retrieves. As a rule of thumb, an index tends to be faster when less than roughly 20% of the rows are fetched (this is not an absolute threshold). When the query retrieves a large share of the table there is no point in reading index blocks first and table blocks afterwards: index access touches an index block plus a table block for each visit, while a full scan reads each table block only once, so the full scan's cost is clearly lower in this case.
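For contrast, a hedged counter-example on the same table: with a highly selective predicate the index path would normally beat the full scan. The cut-off value object_id<100 is only illustrative, and the actual consistent-gets figures depend on the data, so no output is reproduced here.
-- only a handful of rows qualify, so index access should now be the cheaper plan
select /*+ index(leo1 idx_leo1) */ * from leo1 where object_id < 100;
select /*+ full(leo1) */ * from leo1 where object_id < 100;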
2. Construct three join queries, suited to nested loop join, hash join and merge join respectively; for each statement, use hints to produce the execution plans of the other two join methods and compare the performance differences.
Table join - Nested Loop Join
LEO1@LEO1> create table a as select * from dba_objects;          a is a large table
Table created.
LEO1@LEO1> create table b as select * from dba_objects where rownum<99;          b is a small table (the driving/outer table)
Table created.
LEO1@LEO1> create index idx_a on a(object_id);          create an index on a; the key has a low duplication rate
Index created.
LEO1@LEO1> execute dbms_stats.gather_table_stats('LEO1','a',cascade=>true);          gather statistics on table a and its index
PL/SQL procedure successfully completed.
LEO1@LEO1> execute dbms_stats.gather_table_stats('LEO1','b',cascade=>true);          gather statistics on table b as well
PL/SQL procedure successfully completed.
LEO1@LEO1> set autotrace trace explain;
LEO1@LEO1> select a.* from a,b where a.object_id=b.object_id;
Execution Plan
----------------------------------------------------------
Plan hash value: 3337251606
--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 98 | 9800 | 199 (0)| 00:00:03 |
| 1 | NESTED LOOPS | | | | | |
| 2 | NESTED LOOPS | | 98 | 9800 | 199 (0)| 00:00:03 |
| 3 | TABLE ACCESS FULL | B | 98 | 294 | 3 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | IDX_A | 1 | | 1 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| A | 1 | 97 | 2 (0)| 00:00:01 |
--------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
4 - access("A"."OBJECT_ID"="B"."OBJECT_ID") 谓词条件
数据访问:全表扫描小表b拿出一条记录,去大表a中匹配(索引扫描a表),嵌套循环遍历a,如果找到匹配记录,就去a表rowid所在的数据块上取出,最后需要的就是a表里面整个数据。
使用场景:1.外部表是一张小表 例 b表
2.关联的表是一张大表,并在关联字段上创建索引,最好是主键 例 a 表
3.索引键值重复率低
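Here the CBO picked the nested loop on its own. To pin the plan down explicitly one can combine leading() with use_nl(); a minimal sketch, reusing the a and b tables above (standard hint syntax, the exact plan still depends on the statistics):
select /*+ leading(b) use_nl(a) index(a idx_a) */ a.*
from   a, b
where  a.object_id = b.object_id;     -- drive from the small table b, probe a through idx_a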
Hash Join
LEO1@LEO1> select /*+ use_hash(a,b) */ a.* from a,b where a.object_id=b.object_id;
98 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 4090908061
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 98 | 9800 | 291 (1)| 00:00:04 |
|* 1 | HASH JOIN | | 98 | 9800 | 291 (1)| 00:00:04 |
| 2 | TABLE ACCESS FULL| B | 98 | 294 | 3 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL| A | 71955 | 6816K| 287 (1)| 00:00:04 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("A"."OBJECT_ID"="B"."OBJECT_ID")
Statistics
----------------------------------------------------------
176 recursive calls
0 db block gets
1060 consistent gets
2 physical reads
0 redo size
5504 bytes sent via SQL*Net to client
590 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
5 sorts (memory)
0 sorts (disk)
98 rows processed
Merge Join
LEO1@LEO1> select /*+ use_merge(a,b) */ a.* from a,b where a.object_id=b.object_id;
98 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 3307526271
--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 98 | 9800 | 1238 (1)| 00:00:15 |
| 1 | MERGE JOIN | | 98 | 9800 | 1238 (1) | 00:00:15 |
| 2 | TABLE ACCESS BY INDEX ROWID| A | 71955 | 6816K| 1234 (1) | 00:00:15 |
| 3 | INDEX FULL SCAN | IDX_A | 71955 | | 160 (0) | 00:00:02 |
|* 4 | SORT JOIN | | 98 | 294 | 4 (25)| 00:00:01 |
| 5 | TABLE ACCESS FULL | B | 98 | 294 | 3 (0) | 00:00:01 |
--------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
4 - access("A"."OBJECT_ID"="B"."OBJECT_ID")
filter("A"."OBJECT_ID"="B"."OBJECT_ID")
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
22 consistent gets
0 physical reads
0 redo size
5388 bytes sent via SQL*Net to client
590 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
98 rows processed
Comparison summary: judging by the cost values, the nested loops plan is cheaper than the other two join methods, which shows that the CBO's own choice was the right one.
Table join - Hash Join
Typical scenario: 1. one large table joined to one small table
2. neither table has a usable index on the join column; a hash match is then usually the most efficient
3. even if both tables are indexed, once a hash join is chosen the work is driven by the in-memory hash table, not by the indexes (a hedged note on the build input follows below)
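In a hash join the first row source in the join order becomes the build input, i.e. the one hashed into memory, so it should be the smaller table. A hedged sketch of stating that explicitly with standard hints, using the a and b tables of this section:
select /*+ leading(b) use_hash(a) */ a.*
from   a, b
where  a.object_id = b.object_id;     -- b is hashed (build input), a is full-scanned and probed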
LEO1@LEO1> drop table a purge;          drop table a so it can be recreated
Table dropped.
LEO1@LEO1> drop table b purge;          drop table b so it can be recreated
Table dropped.
LEO1@LEO1> create table a as select * from dba_objects;          a is a large table with no index
Table created.
LEO1@LEO1> create table b as select * from dba_objects where rownum<1000;          b is a small table with no index
Table created.
LEO1@LEO1> select a.* from a,b where a.object_id=b.object_id;          join table a with table b
999 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 4090908061
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 999 | 214K| 294 (1)| 00:00:04 |
|* 1 | HASH JOIN | | 999 | 214K| 294 (1)| 00:00:04 |
| 2 | TABLE ACCESS FULL| B | 999 | 12987 | 6 (0)| 00:00:01 | b is small, so its cost is small
| 3 | TABLE ACCESS FULL| A | 83813 | 16M| 287 (1)| 00:00:04 | a is large, so its cost is correspondingly higher
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("A"."OBJECT_ID"="B"."OBJECT_ID")
Note
-----
- dynamic sampling used for this statement (level=2)
Statistics
----------------------------------------------------------
532 recursive calls
0 db block gets
1261 consistent gets
1038 physical reads
0 redo size
51276 bytes sent via SQL*Net to client
1250 bytes received via SQL*Net from client
68 SQL*Net roundtrips to/from client
5 sorts (memory)
0 sorts (disk)
999 rows processed
Data access: tables a and b are both full-scanned. The small table b is hashed first and built into memory; the large table a is then hashed, its rows are probed against the in-memory table of b, and the matching rows are returned to the user. This hash matching is efficient. (You can also view it as comparing two data sets: after hashing, Oracle distributes the rows into hash buckets and compares the buckets of the large set against the buckets of the small set, an n-to-n comparison, unlike nested loops which compares 1-to-n, hence the better performance.)
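Autotrace shows only the optimizer's estimates plus session statistics. To compare estimated and actual row counts step by step, one can add the gather_plan_statistics hint and read the cursor back with dbms_xplan.display_cursor; a hedged sketch (standard package, 'ALLSTATS LAST' is one common format option):
set autotrace off
select /*+ gather_plan_statistics use_hash(a,b) */ a.*
from   a, b
where  a.object_id = b.object_id;
-- display the last executed plan with estimated (E-Rows) and actual (A-Rows) row counts
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));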
Nested loops
LEO1@LEO1> select /*+ use_nl(a,b) */ a.* from a,b where a.object_id=b.object_id;
999 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 4193326952
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 999 | 214K| 285K (1) |00:57:09 |
| 1 | NESTED LOOPS | | 999 | 214K| 285K (1) | 00:57:09 |
| 2 | TABLE ACCESS FULL| B | 999 | 12987 | 6 (0) | 00:00:01 |
|* 3 | TABLE ACCESS FULL| A | 1 | 207 | 286 (1) | 00:00:04 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter("A"."OBJECT_ID"="B"."OBJECT_ID")          a filter predicate: the join condition is applied while full-scanning A, with no index access path
Note
-----
- dynamic sampling used for this statement (level=2)          dynamic sampling: the higher the level, the more data is sampled and the more accurate the estimates, at the price of more resources
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
1029120 consistent gets          the nested loop does roughly 816 times more consistent reads than the hash join
0 physical reads
0 redo size
51276 bytes sent via SQL*Net to client
1250 bytes received via SQL*Net from client
68 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
999 rows processed
Merge Join
LEO1@LEO1> select /*+ use_merge(a,b) */ a.* from a,b where a.object_id=b.object_id;
999 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 3028542103
------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 999 | 214K| | 4066 (1)| 00:00:49 |
| 1 | MERGE JOIN | | 999 | 214K| | 4066 (1)| 00:00:49 |
| 2 | SORT JOIN | | 999 | 12987 | | 7 (15)| 00:00:01 |
| 3 | TABLE ACCESS FULL| B | 999 | 12987 | | 6 (0)| 00:00:01 |
|* 4 | SORT JOIN | | 83813 | 16M| 39M| 4059 (1)| 00:00:49 |
| 5 | TABLE ACCESS FULL| A | 83813 | 16M| | 287 (1)| 00:00:04 |
------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
4 - access("A"."OBJECT_ID"="B"."OBJECT_ID")
filter("A"."OBJECT_ID"="B"."OBJECT_ID")
Note
-----
- dynamic sampling used for this statement (level=2)
Statistics
----------------------------------------------------------
7 recursive calls
0 db block gets
1141 consistent gets
0 physical reads
0 redo size
51142 bytes sent via SQL*Net to client
1250 bytes received via SQL*Net from client
68 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
999 rows processed
Data access: tables a and b are first sorted in full and then compared row by row. The cost is about 13 times that of the hash join and the plan needs two extra SORT JOIN steps, which shows that on this query the merge join does not perform as well as the hash join.
Table join - Merge Join
When a merge join applies: if the two row sources are both sorted as a whole, the join between them is performed as a merge join.
We reuse tables a and b from above for the comparison.
LEO1@LEO1> select * from (select * from a order by object_id) a,(select * from b order by object_id) b where a.object_id=b.object_id;
999 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 2924767385
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 83813 | 33M| | 4066 (1)| 00:00:49 |
| 1 | MERGE JOIN | | 83813 | 33M| | 4066 (1)| 00:00:49 |
| 2 | VIEW | | 83813 | 16M| | 4059 (1)| 00:00:49 |
| 3 | SORT ORDER BY | | 83813 | 16M| 19M| 4059 (1)| 00:00:49 |
| 4 | TABLE ACCESS FULL| A | 83813 | 16M| | 287 (1)| 00:00:04 |
|* 5 | SORT JOIN | | 999 | 201K| | 7 (15)| 00:00:01 |
| 6 | TABLE ACCESS FULL | B | 999 | 201K| | 6 (0)| 00:00:01 |
-------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
5 - access("A"."OBJECT_ID"="B"."OBJECT_ID")
filter("A"."OBJECT_ID"="B"."OBJECT_ID")
Note
-----
- dynamic sampling used for this statement (level=2)
Statistics
----------------------------------------------------------
291 recursive calls
0 db block gets
1169 consistent gets
0 physical reads
0 redo size
85714 bytes sent via SQL*Net to client
1250 bytes received via SQL*Net from client
68 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
999 rows processed
Summary: combined with the results above, because the inputs have to be sorted first the optimizer's row estimate balloons (83813 here versus 999 before), which drives up the elapsed time and the cost. In general merge join is rarely the best performer because the sorting cost is too high.
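Merge join still has its niche: unlike a hash join it can handle non-equality join conditions, and it avoids a sort when the inputs are already ordered. A hedged sketch of a range join on the same tables (the condition is made up for illustration, and the optimizer may still prefer another plan):
-- a hash join supports only equality conditions; a range join like this
-- has to be executed as a nested loop or a sort merge join
select /*+ use_merge(a,b) */ count(*)
from   a, b
where  a.object_id < b.object_id;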
Table join - leading(): specifying the order in which tables are accessed
LEO1@LEO1> create table c as select * from dba_objects where rownum<100;          create table c
Table created.
LEO1@LEO1> select /*+ leading(c b a) */ * from a,b,c where a.object_id=b.object_id and b.object_id=c.object_id;          a three-table join, reusing tables a and b from above
99 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 455705007
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 99 | 61479 | 298 (2)| 00:00:04 |
|* 1 | HASH JOIN | | 99 | 61479 | 298 (2)| 00:00:04 |
|* 2 | HASH JOIN | | 99 | 40986 | 10 (10)| 00:00:01 |
| 3 | TABLE ACCESS FULL| C | 99 | 20493 | 3 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL| B | 999 | 201K| 6 (0)| 00:00:01 |
| 5 | TABLE ACCESS FULL | A | 83813 | 16M| 287 (1)| 00:00:04 |
----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("A"."OBJECT_ID"="B"."OBJECT_ID")
2 - access("B"."OBJECT_ID"="C"."OBJECT_ID")
Note
-----
- dynamic sampling used for this statement (level=2)
Summary: the table access order in the execution plan (C -> B -> A) is exactly the order specified in leading(c b a), so the hint took effect.
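A related hint is ordered, which tells the optimizer to join the tables in the order they appear in the FROM clause. The statement below is a hedged equivalent of the leading(c b a) example; the join order should match, while the join methods remain the optimizer's choice:
select /*+ ordered */ *
from   c, b, a                        -- FROM-clause order c -> b -> a replaces leading(c b a)
where  a.object_id = b.object_id
and    b.object_id = c.object_id;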
3. Insert data with the append hint and compare its performance with an ordinary insert.
LEO1@LEO1> set timing on          display elapsed time
LEO1@LEO1> insert into leo1 select * from leo1;          conventional load: free space in existing blocks is searched and reused
71958 rows created.
Elapsed: 00:00:00.89          the conventional insert took 0.89 seconds
Execution Plan
----------------------------------------------------------
Plan hash value: 2716644435
---------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 71958 | 6816K| 287 (1)| 00:00:04 |
| 1 | LOAD TABLE CONVENTIONAL | LEO1 | | | | |
| 2 | TABLE ACCESS FULL | LEO1 | 71958 | 6816K| 287 (1)| 00:00:04 |
---------------------------------------------------------------------------------
Statistics
----------------------------------------------------------
518 recursive calls
15560 db block gets
3693 consistent gets          conventional load: 3693 consistent reads
4 physical reads
13892928 redo size          conventional load: 13892928 bytes of redo generated
843 bytes sent via SQL*Net to client
792 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
71958 rows processed
LEO1@LEO1> rollback;          roll back
Rollback complete.
Elapsed: 00:00:00.10
LEO1@LEO1> insert /*+ append */ into leo1 select * from leo1;          direct-path load: free space is not searched; data goes straight above the HWM, which is more efficient
71958 rows created.
Elapsed: 00:00:00.36          the direct-path insert took 0.36 seconds
Execution Plan
----------------------------------------------------------
ERROR:
ORA-12838: cannot read/modify an object after modifying it in parallel          (after a direct-path insert the same transaction cannot query the table again, so autotrace fails here; a commit would be needed first)
SP2-0612: Error generating AUTOTRACE EXPLAIN report
Statistics
----------------------------------------------------------
340 recursive calls
2441 db block gets
2253 consistent gets          direct-path load: 2253 consistent reads
0 physical reads
2268672 redo size          direct-path load: 2268672 bytes of redo generated
829 bytes sent via SQL*Net to client
806 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
71958 rows processed
LEO1@LEO1> rollback;          roll back
Rollback complete.
Elapsed: 00:00:00.09
Summary: the comparison shows at a glance that the direct-path load is much faster than the conventional load, close to twice as fast here, for two reasons (a hedged follow-up sketch appears after them):
First: a conventional load searches the free space in existing blocks and reuses it, whereas a direct-path load skips that search and writes straight above the high-water mark (HWM), which makes it faster.
Second: the conventional load also does more consistent reads and generates far more redo; producing that extra volume consumes resources, so conventional loading cannot match direct-path loading.
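Direct-path redo can be reduced further by combining append with NOLOGGING on the table, and because of ORA-12838 the transaction must be committed before the session can query the table again. A hedged sketch (NOLOGGING trades away recoverability and is ignored under FORCE LOGGING, so it only suits loads that can be rerun):
alter table leo1 nologging;                          -- minimize redo for direct-path operations
insert /*+ append */ into leo1 select * from leo1;   -- direct-path load above the HWM
commit;                                              -- required before leo1 can be queried in this session
alter table leo1 logging;                            -- restore normal logging afterwards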
4. Use the cardinality hint to simulate the amount of data in a table; write a SQL statement and show its execution plan.
Terminology: in 10g execution plans the keyword cardinality was replaced by Rows; the two words refer to the same thing.
Cardinality in an execution plan is the number of rows each operation is estimated to return. It is an estimate computed by Oracle, not the actual number of rows returned; the CBO weighs these estimates to decide how to access the data.
Uses: 1. the cardinality hint is commonly used to compare how efficient the plan becomes for different assumed row counts.
2. when a special scenario is hard to set up with real data, the cardinality hint lets us simulate it easily (a statistics-based alternative is sketched after this list).
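Besides the hint, a documented way to make the optimizer believe a table is a different size is dbms_stats.set_table_stats; unlike the hint it affects every statement against the table until real statistics are gathered again. A hedged sketch with arbitrary figures:
-- pretend leo1 holds one million rows spread over 20000 blocks
execute dbms_stats.set_table_stats('LEO1','LEO1', numrows => 1000000, numblks => 20000);
-- undo the experiment by gathering real statistics again
execute dbms_stats.gather_table_stats('LEO1','LEO1', cascade => true);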
LEO1@LEO1> select count(*) from leo1;          leo1 has 71958 rows
COUNT(*)
----------------
71958
LEO1@LEO1> create table leo2 as select * from dba_objects;          create table leo2
Table created.
LEO1@LEO1> insert into leo2 select * from leo2;          insert the rows once more so leo2 has twice as many rows as leo1, for comparison
71960 rows created.
LEO1@LEO1> create index idx_leo2 on leo2(object_id);          create an index on the object_id column
Index created.
LEO1@LEO1> select count(*) from leo2;          now 143920 rows
COUNT(*)
----------------
143920
LEO1@LEO1> execute dbms_stats.gather_table_stats('LEO1','LEO2',cascade=>true);          gather statistics on leo2 and its index
PL/SQL procedure successfully completed.
LEO1@LEO1> set autotrace traceonly;
LEO1@LEO1> select * from leo1,leo2 where leo1.object_id=leo2.object_id;
143916 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 2436308224
-----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
-----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 141K| 26M| | 2291 (1) | 00:00:28 |
|* 1 | HASH JOIN | | 141K| 26M| 7664K| 2291 (1) | 00:00:28 |
| 2 | TABLE ACCESS FULL| LEO1 | 71958 | 6816K| | 587 (1) | 00:00:08 |
| 3 | TABLE ACCESS FULL| LEO2 | 143K| 13M| | 588 (1) | 00:00:08 |
-----------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("LEO1"."OBJECT_ID"="LEO2"."OBJECT_ID")          join predicate: the two indexed columns are equal
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
13672 consistent gets          the two full scans did 13672 consistent reads
2134 physical reads
0 redo size
12630296 bytes sent via SQL*Net to client
106058 bytes received via SQL*Net from client
9596 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
143916 rows processed
Now we force the optimizer to assume that leo1 returns only 100 rows and see how the execution plan chooses to access the data.
LEO1@LEO1> select /*+ cardinality(leo1 100) */ * from leo1,leo2 where leo1.object_id=leo2.object_id;
143916 rows selected.          still 143916 rows: the hint changes only the optimizer's estimate, not the actual result
Execution Plan
----------------------------------------------------------
Plan hash value: 2751515442
-----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 197 | 38218 | 887 (1)| 00:00:11 |
| 1 | NESTED LOOPS | | | | | |
| 2 | NESTED LOOPS | | 197 | 38218 | 887 (1)| 00:00:11 |
| 3 | TABLE ACCESS FULL | LEO1 | 100 | 9700 | 587 (1)| 00:00:08 |
|* 4 | INDEX RANGE SCAN | IDX_LEO2 | 2 | | 1 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| LEO2 | 2 | 194 | 3 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
4 - access("LEO1"."OBJECT_ID"="LEO2"."OBJECT_ID")
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
177213 consistent gets          the index-driven nested loop did 177213 consistent reads
2134 physical reads          physical reads are identical to before, so only the logical (in-memory) I/O increased
0 redo size
7727088 bytes sent via SQL*Net to client
106058 bytes received via SQL*Net from client
9596 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
143916 rows processed
Data access: the small table leo1 is full-scanned (we forced its estimate down to 100 rows, so the optimizer treats it as a small table); each of its rows probes the large table leo2 through the index (index range scan of idx_leo2), the nested loop repeats this for every leo1 row, and each matching row is fetched from the leo2 block its rowid points to. What the query finally needs is the data in leo2.
Typical scenario: 1. the outer (driving) table is small, e.g. leo1 here: with the assumed low row count it is simply full-scanned
2. the inner table is large with an index on the join column, so each probe only needs to read a few index and table blocks
3. when a special scenario is hard to reproduce with real data, the cardinality hint lets us simulate it easily (a closing sketch follows below).
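To close, a hedged sketch of the opposite adjustment: inflating leo1's estimate should push the optimizer back toward the hash join seen at the start of this exercise, while the actual result set stays exactly the same (the value is arbitrary and the plan is not reproduced here):
select /*+ cardinality(leo1 10000000) */ *
from   leo1, leo2
where  leo1.object_id = leo2.object_id;   -- inflate the estimate for leo1; expect the hash join plan again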