Wednesday, December 7, 2011

Table Functions And Join Cardinality Estimates

If you make use of Table Functions then you should be aware of some limitations in the optimizer's calculations, in particular when a Table Function is joined to other row sources.

As outlined in one of my previous posts, you can and should help the optimizer to arrive at a reasonable cardinality estimate when dealing with table functions. However, doing so still doesn't provide all the inputs to the join cardinality calculation that are available from the statistics when dealing with regular tables.

Therefore, even when following the recommended practice regarding cardinality estimates, it is possible to end up with some inaccuracies. This post explains why.

Join Cardinality Basics

In order to appreciate the problem, let's quickly walk through the basic information the optimizer uses to calculate a join cardinality. Here is a very simplified version of the join selectivity formula:

Join Selectivity = 1 / greater(num_distinct(t1.c1), num_distinct(t2.c2))

I've omitted the NULL (and histogram) handling and hence simplified the formula further. Furthermore I'll restrict the examples here to a single join column.

There is another piece of information that is evaluated but not obvious from the above formula: the low and high values of the join columns. If the join column value ranges do not overlap at all, the join cardinality will be calculated as 1.

Finally this join selectivity will be multiplied by the (filtered) cardinality of the two row sources to arrive at the join cardinality:

Join Cardinality = Join Selectivity * cardinality t1 * cardinality t2

So for this simplified basic join cardinality formula the following information is required from the statistics (if available):

- (filtered) num_rows of the row sources
- num_distinct of the join columns
- low/high values of the join columns

Here is an example of this calculation in action, using real tables with table and basic column statistics gathered, so that all of the information just mentioned is available:


drop table t1;

purge table t1;

drop table t2;

purge table t2;

create table t1
as
select
rownum as id
-- 10 distinct values 1..10
, mod(rownum, 10) + 1 as fk
, rpad('x', 100) as filler
from
dual
connect by
level <= 10000
;

exec dbms_stats.gather_table_stats(null, 't1')

create table t2
as
select
-- 20 distinct values 1..20
rownum as id
, rpad('x', 100) as filler
from
dual
connect by
level <= 20
;

exec dbms_stats.gather_table_stats(null, 't2')

explain plan for
select
count(*)
from
t1
, t2
where
t1.fk = t2.id
;

set linesize 200 pagesize 0 tab off

select * from table(dbms_xplan.display(null, null, 'BASIC ROWS NOTE'));

--------------------------------------------
| Id | Operation | Name | Rows |
--------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | HASH JOIN | | 10000 |
| 3 | TABLE ACCESS FULL| T2 | 20 |
| 4 | TABLE ACCESS FULL| T1 | 10000 |
--------------------------------------------


So we have:

Join Selectivity = 1 / greater(10, 20) = 1 / 20

Join Cardinality = 1 / 20 * 20 * 10000 = 10000

The join column values do overlap and there is no filter on the two row sources, so the result is as expected.
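
If you want to verify the inputs used in this calculation, they can be read from the dictionary once statistics have been gathered. Here is a minimal sketch using the standard USER_TABLES and USER_TAB_COL_STATISTICS views (LOW_VALUE and HIGH_VALUE are stored in an internal raw format and shown undecoded here):

select
  table_name
, num_rows
from
  user_tables
where
  table_name in ('T1', 'T2')
;

select
  table_name
, column_name
, num_distinct
, num_nulls
, low_value
, high_value
from
  user_tab_col_statistics
where
  (table_name = 'T1' and column_name = 'FK')
  or (table_name = 'T2' and column_name = 'ID')
;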

Now if the simple test case gets modified slightly, for example like this:


drop table t2;

purge table t2;

create table t2
as
select
-- 1 distinct value
1 as id
, rpad('x', 100) as filler
from
dual
connect by
level <= 20
;

exec dbms_stats.gather_table_stats(null, 't2')

explain plan for
select
count(*)
from
t1
, t2
where
t1.fk = t2.id
;

set linesize 200 pagesize 0 tab off

select * from table(dbms_xplan.display(null, null, 'BASIC ROWS NOTE'));

--------------------------------------------
| Id | Operation | Name | Rows |
--------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | HASH JOIN | | 20000 |
| 3 | TABLE ACCESS FULL| T2 | 20 |
| 4 | TABLE ACCESS FULL| T1 | 10000 |
--------------------------------------------


Join Selectivity = 1 / greater(10, 1) = 1 / 10

Join Cardinality = 1 / 10 * 20 * 10000 = 20000

So we can see that the number of distinct values is one of the crucial inputs to the join cardinality calculation (again: I deliberately keep things simple here and for example omit nulls). Another input is the pair of min and max values of the join columns - this can be seen by again slightly modifying the example:


drop table t2;

purge table t2;

create table t2
as
select
-- 20 distinct values 21..40
rownum + 20 as id
, rpad('x', 100) as filler
from
dual
connect by
level <= 20
;

exec dbms_stats.gather_table_stats(null, 't2')

explain plan for
select
count(*)
from
t1
, t2
where
t1.fk = t2.id
;

set linesize 200 pagesize 0 tab off

select * from table(dbms_xplan.display(null, null, 'BASIC ROWS NOTE'));

--------------------------------------------
| Id | Operation | Name | Rows |
--------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | HASH JOIN | | 1 |
| 3 | TABLE ACCESS FULL| T2 | 20 |
| 4 | TABLE ACCESS FULL| T1 | 10000 |
--------------------------------------------


We can immediately see that the optimizer detected that the join columns do not overlap and hence set the join cardinality to 1.
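
The low and high values are stored in the dictionary in an internal raw format. If you want to confirm for yourself that the two ranges (1..10 for T1.FK and 21..40 for T2.ID) really don't overlap, a small PL/SQL block along these lines can decode them for NUMBER columns - a sketch using DBMS_STATS.CONVERT_RAW_VALUE:

set serveroutput on

declare
  v_low  number;
  v_high number;
begin
  for rec in (
    select table_name, column_name, low_value, high_value
    from user_tab_col_statistics
    where (table_name = 'T1' and column_name = 'FK')
       or (table_name = 'T2' and column_name = 'ID')
  ) loop
    -- decode the raw representation into a NUMBER
    dbms_stats.convert_raw_value(rec.low_value, v_low);
    dbms_stats.convert_raw_value(rec.high_value, v_high);
    dbms_output.put_line(rec.table_name || '.' || rec.column_name || ': low = ' || v_low || ', high = ' || v_high);
  end loop;
end;
/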

Now let's move towards our Table Function case and see what happens if the information is missing from the table statistics and has to be supplied by dynamic sampling instead.

First let's start over again with the initial example data set of T2:


drop table t2;

purge table t2;

create table t2
as
select
-- 20 distinct values 1..20
rownum as id
, rpad('x', 100) as filler
from
dual
connect by
level <= 20
;

-- Do not gather statistics but use dynamic sampling instead
alter session set optimizer_dynamic_sampling = 2;

alter session set tracefile_identifier = 'dyn_sample_1';

alter session set events '10053 trace name context forever, level 1';

explain plan for
select
count(*)
from
t1
, t2
where
t1.fk = t2.id
;

set linesize 200 pagesize 0 tab off

select * from table(dbms_xplan.display(null, null, 'BASIC ROWS NOTE'));

--------------------------------------------
| Id | Operation | Name | Rows |
--------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | HASH JOIN | | 10000 |
| 3 | TABLE ACCESS FULL| T2 | 20 |
| 4 | TABLE ACCESS FULL| T1 | 10000 |
--------------------------------------------

Note
-----
- dynamic sampling used for this statement (level=2)


So we can see that dynamic sampling was used and the join cardinality estimate is correct. Let's check the CBO trace file to see which dynamic sampling query was executed:


SELECT /* OPT_DYN_SAMP */
/*+
ALL_ROWS
IGNORE_WHERE_CLAUSE
NO_PARALLEL(SAMPLESUB)
opt_param('parallel_execution_enabled', 'false')
NO_PARALLEL_INDEX(SAMPLESUB)
NO_SQL_TUNE
*/
NVL(SUM(C1),0)
, NVL(SUM(C2),0)
, COUNT(DISTINCT C3)
, NVL(SUM(CASE WHEN C3 IS NULL THEN 1 ELSE 0 END),0)
FROM
(
SELECT /*+
NO_PARALLEL("T2")
FULL("T2")
NO_PARALLEL_INDEX("T2")
*/
1 AS C1
, 1 AS C2
, "T2"."ID" AS C3
FROM
"T2" "T2"
) SAMPLESUB
;


So it's interesting to see that the dynamic sampling code detected that the column T2.ID is used as part of a join and therefore not only determined the cardinality of the table but also the num_distinct and num_nulls information of the join column.

Consequently we find the following information in the trace file:


.
.
.
ndv C3 : 20
scaled : 20.00
nulls C4 : 0
scaled : 0.00
min. sel. est. : -1.00000000
** Dynamic sampling col. stats.:
Column (#1): ID( Part#: 0
AvgLen: 22 NDV: 20 Nulls: 0 Density: 0.050000
** Using dynamic sampling NULLs estimates.
** Using dynamic sampling NDV estimates.
Scaled NDVs using cardinality = 20.
** Using dynamic sampling card. : 20
** Dynamic sampling updated table card.
.
.
.


Now if you've followed the description so far carefully, you'll notice that something is missing from the dynamic sampling information compared to what is usually available from the dictionary statistics.

Let's repeat the exercise and use the example data set of T2 where the join columns do not overlap:


drop table t2;

purge table t2;

create table t2
as
select
-- 20 distinct values 21..40
rownum + 20 as id
, rpad('x', 100) as filler
from
dual
connect by
level <= 20
;

-- Do not gather statistics but use dynamic sampling instead
alter session set optimizer_dynamic_sampling = 2;

alter session set tracefile_identifier = 'dyn_sample_2';

alter session set events '10053 trace name context forever, level 1';

explain plan for
select
count(*)
from
t1
, t2
where
t1.fk = t2.id
;

set linesize 200 pagesize 0 tab off

select * from table(dbms_xplan.display(null, null, 'BASIC ROWS NOTE'));

--------------------------------------------
| Id | Operation | Name | Rows |
--------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | HASH JOIN | | 10000 |
| 3 | TABLE ACCESS FULL| T2 | 20 |
| 4 | TABLE ACCESS FULL| T1 | 10000 |
--------------------------------------------

Note
-----
- dynamic sampling used for this statement (level=2)


So this is a case where even with dynamic sampling the join cardinality estimate is incorrect (10,000 instead of 1) and differs from what you get with actually gathered statistics, because the dynamic sampling query does not collect the low and high values of the join column. Whether this is a deliberate design decision to keep the footprint of the dynamic sampling query as low as possible, or an omission that could be addressed via a bug/enhancement request, I don't know. But if you were asking yourself whether dynamic sampling is a reasonable replacement for actual statistics - here is a case where it doesn't produce the same result as basic table and column statistics.

Joins With Table Functions

Now that the basics have been clarified, let's move on to table functions. As outlined in one of my previous posts, you shouldn't use table functions without helping the optimizer to come up with a reasonable cardinality estimate.
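
As a quick illustration (using the F_TAB_PIPELINED function that is created in a moment), one commonly used - although undocumented - way of supplying such an estimate is the CARDINALITY hint, so treat the following as an illustrative sketch rather than a recommendation:

select /*+ cardinality(t, 20) */
  count(*)
from
  table(f_tab_pipelined(1, 20, null)) t
;

Without such help the optimizer falls back to a block size based default estimate for the collection (for example 8168 rows with an 8KB block size), which can be orders of magnitude off.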

But as you have just seen, a proper join cardinality estimation requires more than a reasonable cardinality estimate of each row source. Let's see what this means when attempting to join table functions with other row sources.

For that purpose I create the following simple table function that generates a simple set of data controlled by the input parameters:


drop type t_num_list;

drop function f_tab_pipelined;

create or replace type t_num_list as table of number;

create or replace function
f_tab_pipelined(
in_start in number default 1
, in_end in number default 10
, in_val in number default null
) return t_num_list pipelined
is
begin
for i in in_start..in_end loop
pipe row(coalesce(in_val, i));
end loop;
return;
end f_tab_pipelined;
/


So I can define the start and end values, which will be returned with a step size of one. If the third parameter is provided, it will be returned instead of the current loop value, resulting in a single distinct value repeated end - start + 1 times; otherwise the number of distinct values will be equal to the number of rows generated.
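
A quick sanity check of the function's behaviour - the comments show the results the two calls should produce given the logic above:

select count(*) as cnt, count(distinct column_value) as num_distinct
from table(f_tab_pipelined(1, 20, null));
-- CNT = 20, NUM_DISTINCT = 20

select count(*) as cnt, count(distinct column_value) as num_distinct
from table(f_tab_pipelined(1, 20, 1));
-- CNT = 20, NUM_DISTINCT = 1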

From 11.1 on, dynamic sampling of table functions is supported, so let's reproduce with the table function the cases I've just demonstrated with real tables.

First the case with 20 distinct values of T2 and overlapping join column values:


alter session set tracefile_identifier = 'dyn_sample_table_func';

alter session set events '10053 trace name context lifetime 1, level 1';

explain plan for
select /*+ dynamic_sampling(t2, 2) */
count(*)
from
t1
, (
select
column_value as id
from
table(f_tab_pipelined(1, 20, null))
) t2
where
t1.fk = t2.id
;

set linesize 200 pagesize 0 tab off

select * from table(dbms_xplan.display(null, null, 'BASIC ROWS NOTE'));

-----------------------------------------------------------------------
| Id | Operation | Name | Rows |
-----------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | HASH JOIN | | 20000 |
| 3 | COLLECTION ITERATOR PICKLER FETCH| F_TAB_PIPELINED | 20 |
| 4 | TABLE ACCESS FULL | T1 | 10000 |
-----------------------------------------------------------------------

Note
-----
- dynamic sampling used for this statement (level=2)


So something is already going wrong here - with the real table and dynamic sampling we ended up with the correct join cardinality estimate of 10,000 rows, whereas here the estimate is 20,000.

Let's have a look at the dynamic sampling query generated:


SELECT
/* OPT_DYN_SAMP */
/*+
ALL_ROWS
IGNORE_WHERE_CLAUSE
NO_PARALLEL(SAMPLESUB)
opt_param('parallel_execution_enabled', 'false')
NO_PARALLEL_INDEX(SAMPLESUB)
NO_SQL_TUNE
*/
NVL(SUM(C1),0)
, NVL(SUM(C2),0)
FROM
(
SELECT
/*+
NO_PARALLEL("KOKBF$")
FULL("KOKBF$")
NO_PARALLEL_INDEX("KOKBF$")
*/
1 AS C1
, 1 AS C2
FROM
TABLE("CBO_TEST"."F_TAB_PIPELINED"(1,20,NULL)) "KOKBF$"
) SAMPLESUB
;


So that's a pity: the dynamic sampling code for table functions at present doesn't recognize the need to gather join column statistics - note the missing COUNT(DISTINCT ...) and NULL count expressions compared to the query generated for the regular table - so the optimizer simply doesn't know the num_distinct / num_nulls figures of the join column generated by the table function.

It certainly would be nice if Oracle enhanced the dynamic sampling code so that we have at least the same level of information available as with regular tables.

So what does that mean for the Join Cardinality estimate of the optimizer? It looks like the Join Selectivity formula, when dealing with Table Functions, could be extended like this:

Join Selectivity = 1 / coalesce(greater(num_distinct(t1.c1), num_distinct(t2.c2)), num_distinct(t1.c1), num_distinct(t2.c2), 100)

So the "greater" function will return NULL if any of the operands are NULL. In this case it seems to use a non-null num_distinct if found, and if none of them are defined, resort to a hard-coded default of 100 resulting in a default selectivity of 1/100.

In our case:

Join Selectivity = 1 / coalesce(greater(10, null), 10, null, 100) = 1 / 10

Join Cardinality = 1 / 10 * 20 * 10000 = 20000

Of course you'll appreciate that the same applies to Table Functions with regard to non-overlapping join columns - the dynamic sampling performed doesn't give the optimizer any clue about the low and high values, hence it cannot detect such a case.

If you would like to see the default case described in the above formula in action, you just need to join two table functions:


explain plan for
select /*+ dynamic_sampling(t1, 2) dynamic_sampling(t2, 2) */
count(*)
from
(
select
column_value as fk
from
table(f_tab_pipelined(1, 10000, null))
) t1
, (
select
column_value as id
from
table(f_tab_pipelined(1, 20, null))
) t2
where
t1.fk = t2.id
;

set linesize 200 pagesize 0 tab off

select * from table(dbms_xplan.display(null, null, 'BASIC ROWS NOTE'));

-----------------------------------------------------------------------
| Id | Operation | Name | Rows |
-----------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 |
| 1 | SORT AGGREGATE | | 1 |
| 2 | HASH JOIN | | 2000 |
| 3 | COLLECTION ITERATOR PICKLER FETCH| F_TAB_PIPELINED | 20 |
| 4 | COLLECTION ITERATOR PICKLER FETCH| F_TAB_PIPELINED | 10000 |
-----------------------------------------------------------------------

Note
-----
- dynamic sampling used for this statement (level=2)


So dynamic sampling gets the row source cardinalities right, but the join cardinality is way off. If you repeat the same exercise with regular tables and dynamic sampling, the correct join cardinality of 20 will be estimated, because Oracle detects the 10,000 distinct values of T1.FK.
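
By the way, the 2,000 rows estimated for the hash join are exactly what the extended formula outlined above predicts when the num_distinct of both join columns is unknown:

Join Selectivity = 1 / coalesce(greater(null, null), null, null, 100) = 1 / 100

Join Cardinality = 1 / 100 * 20 * 10000 = 2000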

If you really need to take corrective action, with regular tables you can resort to the special hints that Oracle uses in SQL Profiles, which are of course undocumented and therefore have to be used at your own risk. Christian Antognini published a very nice paper about SQL Profiles some time ago that also covers the internal details, including the hints introduced for that purpose.

In our case the hint that would allow us to provide the missing information is COLUMN_STATS.

So some variation of the following would be helpful if it worked:


select /*+ dynamic_sampling(t2, 2) column_stats(t2, id, scale, min=1 max=20 nulls=0 distinct=20) */
count(*)
from
t1
, (
select
column_value as id
from
table(f_tab_pipelined(1, 20, null))
) t2
where
t1.fk = t2.id
;


Unfortunately the optimizer obviously refuses to apply those hints to table functions - they only work with regular tables, so this doesn't help either.

This might also explain why the dynamic sampling code doesn't bother to query that kind of information for Table Functions - maybe the optimizer at present cannot process such information for them, and therefore it would be useless to gather it anyway.

Summary

If you would like to use Table Functions and join them to other row sources, you'll have to check the optimizer's join cardinality estimates carefully, because some crucial information required for a proper join cardinality calculation is not available when dealing with Table Functions. This is true even when using 11g features like Table Function dynamic sampling.
