Context and goal: Stack sampling and flame graph visualization are powerful techniques for probing the activity of OS processes, and in particular for advanced investigation and troubleshooting of Oracle execution (see additional information and references in the article Flame Graphs for Oracle). In the following we briefly illustrate the use of these techniques with three examples drilling down into the hard parse activity of the Oracle Optimizer.
Rule based optimizer
In Figure 1 below we can see the flame graph for the hard parsing of "select count(*) from dba_source" using rule-based optimization. For consistency with the following tests the database engine used is 12c, and rule-based optimization is forced by running alter session set optimizer_mode=rule prior to parsing. The captured workload shows that almost all the (CPU) time was spent under parsing (kksParseCursor) and in particular under hard parsing (kksLoadChild), as expected.
The actual parsing of the child cursor can be seen under kksParseChildCursor (and opi parse) and occupies a large fraction of the samples.
Figure 1: flame graph of select count(*) from dba_source in rule mode. See more details in the SVG version of the graph by hovering over the graph lines.
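For reference, here is a minimal sketch of the session setup used for the rule-based test of Figure 1 (the statement itself is then parsed with the DBMS_SQL script reported later in this post):

alter session set optimizer_mode=rule;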
Cost based optimizer 11g
In Figure 2 we can see the flame graph for parsing select count(*) from dba_source using the 11g cost based optimizer. This has been obtained running under 12c binaries (for consistency with the rest of the examples) by running alter session set optimizer_features_enable='11.2.0.4' prior to parsing. The first important difference between this CBO flame graph and the previous one for rule-based optimization is the number of additional data points that have been collected (about 10 times more). This shows that considerably more code has to be executed for cost-based optimization than for rule-based optimization, at least in this example, a result that was expected. The activity of parsing the child cursor under kksParseChildCursor that we highlighted in Figure 1 is still there in Figure 2, with about the same absolute weight in terms of samples, but relatively much less prominent due to the additional cost-based activities. Most of the captured samples, i.e. CPU time, are now spent under the function 'opitca'. We also see the appearance of many more functions related to query optimization, for example with prefixes 'apa' (Access Path Analysis?), 'kko' (Kernel Kompile Optimizer?), 'kkq' (Kernel Kompile Query?), or simply with the keyword "Cost" in their name. In particular, the 'kko*' functions appear from the documentation to be directly associated with the CBO.
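Similarly, a minimal sketch of the session setup used for the test of Figure 2:

alter session set optimizer_features_enable='11.2.0.4';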
Cost based optimizer and adaptive optimization in 12c
Figure 3 shows the flame graph for parsing select count(*) from dba_source in 12c, with optimizer_features_enable='12.1.0.1'. Additional relevant parameters are left at their default values: optimizer_adaptive_features=TRUE and optimizer_adaptive_reporting_only=FALSE. What we notice in Figure 3 is that the number of sampled points has increased further compared to the 11g CBO case, which shows that additional work was done. Next we notice that queries are executed during parsing. From the explain plan details of the statement we see that two SQL Plan Directives were used when parsing this statement. The directives that caused the additional dynamic sampling (or dynamic statistics, as it is called in this context in 12c) are related to SINGLE TABLE CARDINALITY MISESTIMATE for SYS.OBJ$. By performing a 10046 trace (not reported here for brevity) we can confirm that the additional recursive SQL is indeed related to sampling SYS.OBJ$. Another way to confirm that the additional work highlighted in Figure 3 comes from adaptive optimization is to turn the feature off by setting optimizer_adaptive_features=FALSE and then measuring the flame graph again (the results confirm the conclusions reported above; the graph is omitted from this article for brevity).
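Two hedged sketches of the checks mentioned above: one possible way to list the SQL Plan Directives on SYS.OBJ$ from the data dictionary (view names as available in 12.1), and turning adaptive optimization off at the session level before repeating the test:

select d.directive_id, d.type, d.state, d.reason
from dba_sql_plan_directives d
join dba_sql_plan_dir_objects o on o.directive_id = d.directive_id
where o.owner = 'SYS' and o.object_name = 'OBJ$';

alter session set optimizer_adaptive_features = false;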
How the graphs have been generated
- Stack trace data has been sampled at high frequency using perf (on RHEL6): /usr/bin/perf record -a -g -F 100000 -p <pid_of_oracle_session> sleep 5 (see the snippet after this list for one way to look up the OS pid of the session under study)
- The trace data is then dumped into a text file with: /usr/bin/perf script > perf_parse_12c.txt
- Finally the trace file is processed with the FlameGraph stack trace visualizer: grep -v 'cycles:' perf_parse_12c.txt | sed -f os_explain.sed | FlameGraph-master/stackcollapse-perf.pl | FlameGraph-master/flamegraph.pl --minwidth 4 --title "Flame Graph: parsing of select count(*) from dba_source, CBO 12.1.0.1 with adaptive optimizations" > perf_parse_12c.svg
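The OS pid to pass to perf -p is not shown above; a minimal sketch of how it could be looked up from a second session, assuming the SID of the session under study is known, is:

select p.spid  -- OS process id of the Oracle session, to pass to perf -p
from v$session s
join v$process p on p.addr = s.paddr
where s.sid = <sid_of_session_under_study>;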
On the Oracle side the parsing workload was generated with the script here below:
DECLARE
  v_mycursor number;
  v_mysql    varchar2(1000);
BEGIN
  -- a random comment makes the SQL text unique, so that each run triggers a hard parse
  v_mysql := 'select /*random'||dbms_random.string(null,6)||'*/ count(*) from dba_source';
  v_mycursor := dbms_sql.open_cursor;
  dbms_lock.sleep(5);
  -- now manually start perf stack trace collection at OS level
  DBMS_SQL.PARSE(v_mycursor, v_mysql, DBMS_SQL.NATIVE);
  dbms_lock.sleep(5);  -- keep the session idle while perf finishes collecting
  dbms_sql.close_cursor(v_mycursor);
END;
/
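After running the script, one possible way to confirm that the statement was indeed hard parsed is to look for the randomized SQL text in the shared pool, for example:

select sql_id, parse_calls, loads, sql_text
from v$sql
where sql_text like 'select /*random%';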
Note: a simpler alternative could be to just run an 'explain plan' of the statement under study (select count(*) from dba_source in this example); however that would also include the work done to insert into the plan_table:
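For reference, a minimal sketch of this simpler alternative; in 12c the note section of the plan output is also where the use of SQL Plan Directives is reported:

explain plan for select count(*) from dba_source;
select * from table(dbms_xplan.display);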
Some hints on how to read the graphs and interpret the Oracle function names can be found in the article "Flame Graphs for Oracle".
An additional link with pointers on how to understand stack traces for Oracle parsing operations can be found on Tanel Poder's blog.
Conclusions
We have demonstrated the use of stack sampling with perf and flame graph visualization for investigating the Oracle Optimizer and how its operations have evolved across major Oracle versions. Notably, in the example under study, the 12c Optimizer shows a longer and more complex code path for parsing than the 11g Optimizer. This has been traced and explained by the new 12c feature of adaptive optimization working together with SQL Plan Directives to provide dynamic statistics at parse time.
Credits to Brendan Gregg for Flame Graphs and to Tanel Poder for publishing extensive work and examples on using stack trace analysis for Oracle troubleshooting.