Monday, September 22, 2014

SystemTap into Oracle for Fun and Profit

Topic: This post is about using SystemTap for investigating and troubleshooting Oracle RDBMS. In particular you will learn how to probe Oracle processes and their userspace functions. These techniques aim to be useful, as well as fun to learn, for those keen on peeking under the hood of the technology and on improving their effectiveness in troubleshooting and performance investigations.


Userspace probing is a very powerful technique that can be used to complement the available Oracle performance and profiling data, such as V$ views and event 10046 trace files, for advanced troubleshooting and investigations into the Oracle kernel. These types of investigations have been possible for several years already, mostly on Solaris, with the introduction of DTrace (see for example Tanel's blog).
Dynamic tracing tools that allow kernel and userspace probing are also gaining momentum in the Linux world thanks to recent additions to the kernel. Several tools are available that provide the interface for collecting, aggregating and printing dynamic tracing data. SystemTap is one of the more mature tools; however more tools are now emerging, including two ports of DTrace. For a review of this topic see Brendan Gregg's presentation at LinuxCon 2014.
I first learned about the possibility of using SystemTap for userspace tracing from Frits Hoogland and have since been quite excited by the possibilities this opens for advanced troubleshooting, in particular for investigating Oracle workloads for performance tuning.
I was also quite happy to discover that the techniques described in this article work under RHEL 6.5 as well. This is very good news for me, as it means I can start using SystemTap on many existing systems without waiting for upgrades to the very latest kernels.

SystemTap documentation has additional details on the implementation of userspace probes: with Linux kernels higher than 3.5 (such as when using RHEL 7.0, OL 7.0 or OL 6.5 with UEK3) SystemTap will use the UPROBES module for userspace tracing; with older kernels SystemTap can use the UTRACE kernel extensions if available. SystemTap is able to provide userspace probing both with UTRACE and with UPROBES; most other dynamic tracing tools seem to work only with UPROBES and therefore only with more recent versions of Linux. As reported by Frits in the above-mentioned article, you will need SystemTap 2.5 or higher for userspace tracing. As I write this, SystemTap 2.6 has just come out.

This is how to check if UTRACE extensions are configured:
# grep CONFIG_UTRACE /boot/config-`uname -r`

This is how to check if UPROBES are available:
# grep CONFIG_UPROB /boot/config-`uname -r`

In the following you will find a few examples, ideas and test code to get started with SystemTap userspace probing of Oracle. The scripts discussed in this blog can be found at this link.

Deep Dive: SystemTap reads Oracle wait event interface data and X$ tables

You will see in this paragraph how to hook SystemTap into the Oracle wait event interface. This provides a way to link together the two types of instrumentation, SystemTap probe-based data and Oracle-based wait event interface data, and therefore opens the way for more advanced investigations.
The main implementation detail is to create a SystemTap probe into the Oracle kernel at the 'right point' and make it read the information we want.
As discussed in previous work I have published about DTrace and Oracle, "kskthewt" is an Oracle kernel function that can be used for this purpose: it is called by the Oracle kernel at the end of each wait event, presumably to update the relevant performance counters. Our first step is to create a probe that starts executing once the Oracle executable steps into the function kskthewt. The next step is to make SystemTap read the information we want. What is available are CPU registers and memory areas. Some CPU registers are used to pass arguments to the function, as detailed in the x86 calling conventions. In other cases the CPU registers can contain useful information 'left over' from the processing of parent functions. Reading directly from memory areas, and in particular from the SGA, opens the way to most of Oracle's instrumentation data.
Additional facts that I have found recently in collaboration with Frits Hoogland, and that help when building a probe on the Oracle function kskthewt, are:
  • When execution jumps into the function kskthewt, the CPU register R13 turns out to point into the X$KSUSE memory structure (with an offset). R12 seems to point to the same address most of the time but I have seen some cases when it didn't so I prefer to use R13.
  • X$KSUSE is a base fixed table for V$SESSION and therefore carries a lot of very interesting information. X$KSUSE is allocated in memory as a segmented array in the SGA, see also Julian Dyke's presentation on SGA internals for more details on this interesting topic.
  • V$SESSION.SADDR, the base address of a given session's V$SESSION data and of X$KSUSE, can be derived from the value of R13 by subtracting an offset. The value of the offset depends on the Oracle version and port and can be found 'experimentally' by using SystemTap or gdb (see more details in the script directory: trace_oracle_events_debug.stp).
  • The fields of interest of X$KSUSE (i.e. V$SESSION column values) can be found from the base value of X$KSUSE with an offset calculated with a query from X$KQFCO and X$KQFTA (see the script: ksuse_find_offsets.sql).
  • The register RDI (which is mapped as arg1 in the SystemTap probe) is found to be set to a timestamp of the wait event, the same value that can be seen in a trace file for event 10046, if activated.
  • The register RSI (that is arg2 in SystemTap probes) is found to be set to the wait event number.
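Putting these findings together, the skeleton of such a probe can be sketched as below. This is a minimal sketch only: the X$KSUSE offset used here is a made-up placeholder, as the real value is version- and port-dependent and must be determined as described above.

```systemtap
probe process("oracle").function("kskthewt") {
    # x86-64 calling convention: arg1 (RDI) = wait event timestamp,
    # arg2 (RSI) = wait event number
    printf("timestamp=%d, event#=%d\n", u64_arg(1), u64_arg(2))
    # R13 points into X$KSUSE with an offset; subtracting the offset gives
    # SADDR, the base address of this session's X$KSUSE/V$SESSION data.
    # 5936 below is a hypothetical example value: the real offset is
    # version- and port-dependent (see trace_oracle_events_debug.stp).
    saddr = register("r13") - 5936
    printf("saddr=%x\n", saddr)
}
```

From SADDR, individual V$SESSION column values can then be read with user_int32()/user_int64() at the offsets computed with ksuse_find_offsets.sql.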

If you put all of this together you can write a SystemTap script to collect and print wait event and selected V$SESSION details. An example of such a script is shown in the figure here below. The scripts can be downloaded from here as a zip file. Direct links: trace_oracle_events_11204.stp and trace_oracle_events_12102.stp, tested with Oracle 11.2.0.4 on RHEL 6.5 and with Oracle 12.1.0.2 on OEL 7 respectively.

Figure 1: Script and example output of a SystemTap probe that collects wait event and V$SESSION details from Oracle kernel functions and SGA memory.

Note: In the example above you can see that SystemTap output is piped to sed -f eventsname.sed. This is to transform event id numbers into strings. The file eventsname.sed is generated by running the script eventsname.sql.

Systemtap aggregations and filtering of wait event data

One of the strengths of SystemTap, as with other dynamic tracing tools, is the ability to do on-the-fly aggregations and filtering. This helps reduce the amount of data collected, minimizing the footprint of the tracing operations while still producing meaningful reports and real-time tracing. Building on the results discussed above on how to write SystemTap probes hooking into the wait event interface, you can now add aggregations and filtering to compute wait event histograms with microsecond precision.
One simple and powerful result that you can reproduce easily is a script that effectively reproduces/back-ports the 12c feature of microsecond wait event histograms. This can be very useful when studying events of short duration, such as random disk reads from solid state devices. In addition to aggregation, with custom filtering you can aggregate wait event details for a given user, for a given file, or for any other custom filter condition. Overall, SystemTap allows much freedom in programming our probes, which is quite empowering.
Two example scripts can be downloaded from here; direct links: histograms_oracle_events_11204.stp and histograms_oracle_events_12102.stp

Figure 2: Script and example output of a SystemTap probe that computes microsecond precision wait event histograms.

An alternative way to build a SystemTap probe for gathering, and optionally aggregating and filtering, Oracle wait event latency data is described in the following. For this you also need an additional piece of research into Oracle kernel function calls: the function kews_update_wait_time is called just before kskthewt, with the CPU register RSI set to the wait time in microseconds (register RSI is also available in SystemTap probes as arg2). See also the details reported in this previous work. The result is that you can build a probe computing event histograms that is independent of the Oracle version. An example script is: histograms_oracle_events_version_independent.stp
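Following this approach, a version-independent histogram probe can be sketched as below. This is a sketch under the assumptions stated above: kews_update_wait_time exposes the microsecond wait time in arg2, and kskthewt, called right after it, exposes the wait event number in arg2.

```systemtap
global waittime, hist

probe process("oracle").function("kews_update_wait_time") {
    waittime[pid()] = u64_arg(2)     # wait time in microseconds (register RSI)
}

probe process("oracle").function("kskthewt") {
    # arg2 (RSI) = wait event number; pair it with the latency captured
    # just before in kews_update_wait_time
    if ([pid()] in waittime)
        hist[u64_arg(2)] <<< waittime[pid()]
    delete waittime[pid()]
}

probe end {
    foreach (event in hist) {
        printf("event#=%d\n", event)
        print(@hist_log(hist[event]))
    }
}
```

As above, the output can be piped to sed -f eventsname.sed to translate event numbers into event names.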

Figure 2: Script and example output of a SystemTap probe that computes microsecond precision wait event histograms and is independent of the Oracle version.

Tracing Oracle logical I/O with SystemTap userspace probes

You can use SystemTap userspace probes to trace Oracle logical I/O too. The main research underlying this probe comes from Tanel Poder (search for his DTrace-based scripts) and from the extensive work of Alexander Anokhin.
By hooking into the Oracle kernel function kcbgtcr we can get the details of the logical reads, or rather of consistent reads, as explained by Alexander Anokhin: kcbgtcr = Kernel Cache Buffer Get Consistent Read, and the first argument of the function call points to a memory structure with the details of the tablespace, relative file number, block number and object number of the consistent read. We can use all this to build a profiler probe: trace_oracle_logical_io_basic.stp

probe process("oracle").function("kcbgtcr") {
    printf("tbs#=%d, rfile=%d, block#=%d, obj#=%d\n",
           user_int32(u64_arg(1)),
           user_int32(u64_arg(1)+4) >> 22 & 0x003FFFFF,
           user_int32(u64_arg(1)+4) & 0x003FFFFF,
           user_int32(u64_arg(1)+8))
}

From there you can also build probes that perform aggregates and compute statistics on the logical read operations, see for example: trace_oracle_logical_io_count.stp
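As a side note on the bit-field arithmetic used in the probe above: the second 4-byte word read from the structure packs the relative file number into the top 10 bits and the block number into the low 22 bits of the data block address. The shifts and masks can be checked in isolation with a few lines of Python. This is a sketch of the arithmetic only; encode_rdba is a hypothetical helper added here for the round-trip check:

```python
def decode_rdba(rdba):
    """Split a 32-bit relative data block address into (rfile#, block#)."""
    rfile = (rdba >> 22) & 0x3FF    # top 10 bits: relative file number
    block = rdba & 0x3FFFFF         # low 22 bits: block number
    return rfile, block

def encode_rdba(rfile, block):
    """Inverse of decode_rdba, for testing the bit layout."""
    return ((rfile & 0x3FF) << 22) | (block & 0x3FFFFF)

print(decode_rdba(encode_rdba(4, 171)))   # round-trips to (4, 171)
```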

Figure 3: Script and example output of a SystemTap probe to capture and aggregate details of logical I/O (for consistent read) in Oracle.

Systemtap can integrate Oracle wait event data with OS tracing

SystemTap's strength, and its first use case, is tracing system calls. You can use this to go deeper with Oracle tracing by combining Oracle wait event interface data (collected with SystemTap as shown above) with data coming from SystemTap OS probes.
In the following you can see an example based on Oracle using ASM, with storage made visible to the OS as block devices (asmlib is not used in this example). The I/O calls issued by Oracle processes in this case will typically be pread and pwrite for synchronous single-block I/O, and io_submit and io_getevents for multi-block and/or asynchronous I/O. At a lower level, all I/O to block devices can be traced as ioblock requests.
You can start exploring the relationship between Oracle wait events, I/O system calls and block I/O with this starter script: trace_oracle_iocalls.stp
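A minimal sketch of the OS-side probes involved is shown here, using probe points and variables from the standard SystemTap syscall and ioblock tapsets and filtering on the process name "oracle":

```systemtap
# trace Oracle synchronous reads and the block I/O requests they generate
probe syscall.pread* {
    if (execname() == "oracle")
        printf("%d: %s fd=%d count=%d\n", pid(), name, fd, count)
}

probe ioblock.request {
    if (execname() == "oracle")
        printf("%d: ioblock dev=%s sector=%d size=%d\n",
               pid(), devname, sector, size)
}
```

Combining probes like these with the wait event probes shown earlier gives the integrated view discussed below.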

Figure 4: SystemTap probes for OS I/O tracing and Oracle wait event tracing. This allows us to get an integrated view of an Oracle I/O event and of the underlying OS actions.

In Figure 5 here below you can see an example of tracing Oracle I/O where single-block (synchronous) I/O is performed via a pread call to the OS. This is a typical access path for Oracle random I/O. On the OS side the pread call is cascaded down to the block device stack as an io_block request. In the Oracle wait event interface the I/O operation is accounted for as a 'db file sequential read' wait, with an indication of the file number and block number (the p1 and p2 parameters).

Figure 5: Oracle performing single-block I/O, profiled using SystemTap probes. This provides end-to-end tracing of the I/O operation: we can see the OS call details and the corresponding Oracle wait event data.

The Linux asynchronous I/O interface is often used by Oracle for multi-block I/O. This is mainly for performance reasons and can only be done when the underlying filesystem allows it (as is the case in the examples reported here, since we are using ASM). A study of Oracle async I/O would take us too far from our main subject; we will limit the discussion to some main observations and an example of how you can use SystemTap probes to further investigate this topic.
With AIO the kernel is actually doing the I/O; the user process just asks for an I/O to be done and checks back to get the requested information. When investigating async I/O you need to trace at least two calls: one for submitting the I/O requests (io_submit) and one for reaping the results (io_getevents). I/O requests are then passed on to the block I/O layer. At this stage they can be split into more operations of smaller size, as is the case in the examples of Figure 6. See also Martin Bach's post on the topic of "Increasing the maximum I/O size in Linux". More details on tracing Oracle I/O can be found in Frits Hoogland's excellent work on profiling Oracle with gdb and his study on multiblock I/O.

Figure 6: SystemTap probe example showing the profiling of Oracle I/O in a case of multi-block read where Oracle has chosen to perform asynchronous I/O. The I/O requests at the OS level are submitted using io_submit and reaped with io_getevents. SystemTap also shows what happens at the block device level, where the I/Os are split into smaller chunks of a maximum size of 512 KB on the system used for this test.

Lab setup and documentation

Here below are some pointers on how to set up a test environment to experiment with the techniques and scripts discussed above:

- Download and install Oracle Virtual Box
- Download OEL 7.0 (or OEL 6.5/6.6) for example from
  - notably install also the kernel-devel rpm

If you want to use OS probes with SystemTap, download and install the debuginfo and debuginfo-common rpms, available for OEL 7 and OEL 6 respectively. Note that when using only userspace probes you can skip this step.

The scripts discussed here require SystemTap 2.5 or higher. Consider uninstalling systemtap first if an older version is already installed.
Download the latest version of systemtap from (version 2.6 as I write this).
To install SystemTap from the downloaded source run:

./configure   # the output will tell you if additional OS packages are needed; in that case install them and repeat
make
make install

As post-installation check run:
stap --help

A good starting point for the SystemTap documentation is the beginners guide; see also the language reference and the tapset reference manual.

Download and install Oracle, version 11.2.0.4 or 12.1.0.2.
The software can be downloaded from
Pre-installed VMs with Oracle database are also available for download from OTN (for example at this link)
Installation guides, besides the Oracle documentation, are also available in the racattack wiki book.


Dynamic tracing makes available userspace probes into the Oracle kernel and Linux probes at the OS kernel level, which can be used to complement Oracle's performance data, notably data from the wait event interface and the statistics counters. These techniques can considerably extend the amount of data available when investigating performance issues and for advanced troubleshooting, helping the troubleshooter to follow systematic approaches to problem solving and overall to be more successful and consistent.

You have seen in this short article a few pointers and get-started examples (download from this link). There is a lot of potential in applying and extending these techniques for investigating Oracle and for advanced troubleshooting of Oracle and other complex platforms. I hope to cover some more examples in future posts. For the time being, I hope you enjoyed going through the listed examples, and I wish all the readers who have been patient enough to make it this far fun and success in trying these techniques on their systems!


I would like to thank the three authors of many of the original ideas that have inspired the material discussed here: Brendan Gregg, Tanel Poder and Frits Hoogland. Additional thanks go to Frits Hoogland for reviewing this post.

Tuesday, August 5, 2014

Scaling up Cardinality Estimates in 12.1.0.2

Topic: Counting the number of distinct values (NDV) for a table column has important applications in the database domain, ranging from query optimization to optimizing reports for large data warehouses. However, the legacy SQL method of using SELECT COUNT(DISTINCT <COL>) can be very slow. This is a well-known problem and Oracle provides a new function, APPROX_COUNT_DISTINCT, implemented with a new-generation algorithm that addresses this issue by providing fast and scalable cardinality estimates.

Is SELECT COUNT( DISTINCT <col> ) slow? 

The answer, as is often the case, is 'it depends'. Let's start with an example to get an idea of the key variables in this context. I will describe a few simple tests here below using a table called TESTTABLE (see the SQL creation scripts later in this post) of 10M rows, 4 GB in size, with 3 columns: a unique numeric ID, a column of row size 200 bytes and cardinality 10, and another of row size 200 bytes and cardinality 10M. I have cached the table in the buffer cache to simplify the interpretation of the test results. As a reference, the sqlplus output here below shows that it takes about 1 second to count the number of rows in the table and that no physical reads are performed while doing so, because the table is cached in the buffer cache.

SQL> select count(*) from testtable; 


Elapsed: 00:00:00.98

     526343  consistent gets
          0  physical reads
        ...
          0  redo size

Let's now count the NDV (number of distinct values) for the 3 columns: SELECT COUNT(DISTINCT <col>) FROM TESTTABLE;
The table here below shows the results, rounded to the nearest integer:

Column name    Measured NDV    Avg Col Length (bytes)    Elapsed time (sec)
id             10M             6                         11
val_lowcard    10              202                       6
val_unique     10M             208                       124

The figure here below shows the execution plan and the tkprof report for the execution with the longest elapsed time (the NDV calculation for the column VAL_UNIQUE). We can see a pattern emerging: the algorithm used for count distinct needs large amounts of memory, and therefore also CPU; eventually processing spills out of the work areas and performs I/O operations to and from the temporary tablespace. This is why it can become very slow.

Meet the new algorithm in 12.1.0.2

APPROX_COUNT_DISTINCT is a new function available in Oracle's SQL and is implemented with an algorithm that has a low memory footprint and high scalability (more on that later in this post). The other difference from the legacy count distinct approach is that this new function provides estimates of the cardinality as opposed to exact counts; in many cases this is 'good enough', with the speed difference making up for the lack of precision. Let's see how this new approach performs on the same test system as the legacy SQL example above. I have run a similar set of tests as in the previous case; the SQL is: SELECT APPROX_COUNT_DISTINCT(<col>) FROM TESTTABLE;

Column name    Measured NDV    Avg Col Length (bytes)    Elapsed time (sec)
id             9705425         6                         2
val_lowcard    10              202                       5
val_unique     9971581         208                       6

Here below is the tkprof report and execution plan for the case with the longest elapsed time (the NDV calculation for the column VAL_UNIQUE). We can see that almost all the SQL execution time is accounted for as CPU by Oracle and no I/O is performed.

Legacy vs. New

The new algorithm and the legacy approach have similar performance when counting columns of low cardinality; this is illustrated in the example above when counting the column 'val_lowcard' (cardinality 10). In the simple test performed here the new algorithm was marginally faster than the legacy SQL for the low-cardinality example; however I can imagine that the opposite may also be true in some cases (this has been reported on a different test system, where the legacy SQL was faster than APPROX_COUNT_DISTINCT for counting NDV in a low-cardinality case).
We expect the elapsed time in the case of low NDV to be close to the time needed for a full scan of the column being measured: the most expensive operation when estimating the cardinality of columns with low NDV is typically reading the column data.

Let's now talk about the more interesting case of large NDV. Estimating columns of high cardinality is where the new algorithm shines. As the NDV increases, so does the 'volume of distinct values', that is the number of distinct values multiplied by their size. As this volume gets bigger, the legacy algorithm starts to fall behind the new function in performance. This is because the operations requiring sorting/hashing and temporary space take more and more resources: first additional CPU is needed to process the additional memory structures, then also I/O to read and write from the temporary tablespace. This can potentially make the legacy SQL with count distinct orders of magnitude slower than APPROX_COUNT_DISTINCT.


APPROX_COUNT_DISTINCT scales very well when using parallel query. This should appear clear from the execution plan and tkprof report above, as the processing is entirely on CPU. Moreover, the additional details discussed below on the algorithm used by APPROX_COUNT_DISTINCT confirm the scalability of this approach when using parallel query.
The legacy approach instead does not scale very well, in particular when multiple parallel query slaves have to handle a concurrent count distinct while also using TEMP space in the temporary tablespace. In the interest of brevity I will not add details of the tests performed for this case.


I first learned about the HyperLogLog (HLL) algorithm from Alex Fatkulin's blog and the references therein. I find the whole concept quite neat; in particular I enjoy how clever use of math can help tackle difficult problems when scaling up data processing to very large data sets. The reason I mention it here is that hints that HLL is used can be found by tracing Oracle process execution with stack profiling. For example see the flame graph below and the annotation of my guess on the function name qesandvHllAdd.
The flame graph data comes from the Linux perf command and the graph itself is generated using Brendan Gregg's FlameGraph tool. More details on this technique of stack profiling applied to investigating Oracle can be found at this blog entry.
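As a side note, the core idea of HLL, estimating cardinality from the maximum number of leading zero bits observed in hashed values, fits in a few lines of Python. This is a didactic sketch of the basic algorithm (including the linear-counting correction for small cardinalities), not a reconstruction of Oracle's implementation:

```python
import hashlib
import math

def hll_estimate(values, b=10):
    """HyperLogLog cardinality estimate using m = 2^b registers."""
    m = 1 << b
    regs = [0] * m
    for v in values:
        h = int.from_bytes(hashlib.md5(str(v).encode()).digest()[:8], 'big')
        idx = h & (m - 1)                     # low b bits select a register
        w = h >> b                            # remaining 64 - b bits
        rank = (64 - b) - w.bit_length() + 1  # position of the leftmost 1-bit
        regs[idx] = max(regs[idx], rank)
    alpha = 0.7213 / (1 + 1.079 / m)          # bias correction for large m
    est = alpha * m * m / sum(2.0 ** -r for r in regs)
    zeros = regs.count(0)
    if est <= 2.5 * m and zeros:              # small-range (linear counting) correction
        est = m * math.log(m / zeros)
    return est

print(round(hll_estimate(range(100000))))     # close to 100000, within a few percent
```

The memory footprint is fixed (m small registers) regardless of the number of distinct values, which is what makes the approach CPU-bound and scalable, consistent with the tkprof observations above.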

Here below are pointers to the commands used for perf data collection and flame graph generation:

# perf record -a -g -F 99999 -p <oracle_pid>
# perf script >perf.script
$ cat perf.script | sed -f os_explain.sed | FlameGraph-master/ | FlameGraph-master/ --title "Flamegraph" >Fig1.svg


Oracle makes available to its users a new and fast algorithm for computing cardinality estimates that outperforms the legacy SQL select count(distinct) in most cases of interest. As is often the case, the methods used for investigating the new feature can be more interesting than the actual results. Moreover, while investigating the new feature we learn (or re-learn) something important about the old one!
The test workload discussed in this post also shows that caching a table in RAM, while often fast, does not fix all performance use cases. Caching a table does not necessarily bring all I/O operations to zero, notably because of I/O for TEMP space. This is a well-known fact that appears in several contexts when doing performance tuning, although it may be overlooked at design time. Simplifying, for future reference: table caching is not always fast=true.


Alex Fatkulin has covered the topic of HyperLogLog in a series of very interesting blog posts. Particularly notable is the effort to explain the HyperLogLog algorithm in simple terms here.
It is also worth noting that Oracle introduced in 11g an algorithm for computing NDV restricted to the usage of the DBMS_STATS package. For additional details see Maria Colgan's blog post and Amit Poddar's presentation at Hotsos (currently hosted on Jonathan Lewis' blog).
Tuning of Oracle work areas is discussed in Oracle Support Note "How To Super-Size Work Area Memory Size Used By Sessions?" (Doc ID 453540.1). Alex Fatkulin has presented at Hotsos 2014 "Leveraging In-Memory Storage to Overcome Oracle Database PGA Memory Limits". 

Test definition scripts

Here below you can find the statements I used to create the test table for this post's examples. The test system where I ran the examples was configured with 5 GB of db_cache_size (6 GB for the whole SGA); automatic PGA management was used with pga_aggregate_target set to 3 GB. Different memory configurations for the buffer cache and for the PGA may produce different test results. In the tests discussed above, testtable is intended to fit in the buffer cache (and be cached there). Test results for select count distinct have a strong dependence on the size of the memory work areas allocated in the PGA and on the thresholds for spilling to disk (see also the references section above).
No competing/concurrent workload was run during the tests. I have also added some pointers on how I cached the table; note that table caching can be achieved with several alternative methods, such as defining and using a keep cache, or with one of the two newer methods: the in-memory option or the big table caching feature.

create table testtable pctfree 0 cache
tablespace DATA01 nologging
as
select rownum id, to_char(mod(rownum,10))||rpad('mystring',200,'P') val_lowcard, to_char(rownum)||rpad('mystring',200,'P') val_unique
from dba_source, dba_source where rownum<=1e7;

exec dbms_stats.gather_table_stats('SYSTEM','TESTTABLE')

--use this to avoid Oracle's direct reads bypassing the buffer cache 
alter session set "_serial_direct_read"=never;

--make sure that there is enough space in the buffer cache and clean it first
alter system flush buffer_cache;

set autotrace on -- allows to measure logical and physical IO
select count(*) from testtable;
select count(*) from testtable; --repeat if needed

Thursday, June 26, 2014

ASM Metadata, Internals and Diagnostic Utilities

If you use ASM on a regular basis, you may find that knowledge of its internals can be very useful for advanced troubleshooting and, in general, for acquiring familiarity with the technology. This is even more relevant when using ASM capabilities for data redundancy/mirroring (i.e. when deploying disk groups with normal or high redundancy).

I first started investigating this topic when deploying 10g RAC databases in 2005. From time to time I review this research, typically as new versions of Oracle come out. I recently went through my research material on ASM when preparing a presentation for Enkitec E4 2014. Here are three links to documents that I have authored on the subject and recently updated:

Two additional great sources of information on ASM internals are: 
  • by Bane Radulovic
  • Oracle Automatic Storage Management, Oracle Press, Nov 2007, N. Vengurlekar, M. Vallath, R. Long. (This book does not cover the changes introduced in the most recent versions of Oracle.)

Friday, May 30, 2014

Oracle Optimizer Investigated with Flame Graphs

Topic: This post is about investigating the Oracle Optimizer using stack sampling and flame graphs. We will see how the complexity of the Optimizer has increased over the years and in particular with the introduction of cost-based optimization and more recently with 12c adaptive optimization.

Context and goal: Stack sampling and visualization with flame graphs are powerful techniques to probe the activity of OS processes and in particular for advanced investigations and troubleshooting of Oracle execution (see additional information and references in the article Flame Graphs for Oracle). In the following we will briefly illustrate the use of these techniques with three examples drilling down into the Oracle Optimizer hard parse activity.

Rule based optimizer

In Figure 1 here below we can see the flame graph for the hard parsing of "select count(*) from dba_source" using rule-based optimization. For consistency with the following tests the database engine used is 12c, and rule-based optimization is forced by running alter session set optimizer_mode=rule prior to parsing.
The captured workload shows that almost all the (CPU) time was spent under parsing (kksParseCursor) and in particular under hard parsing (kksLoadChild), as expected.
The actual parsing of the child cursor can be seen under kksParseChildCursor (and opi parse) and occupies a large fraction of the samples.

Figure 1: flame graph of select count(*) from dba_source in rule mode. See more details in the SVG version of the graph by hovering over the graph lines.

Cost based optimizer 11g

In Figure 2 we can see the flame graph for parsing select count(*) from dba_source using the 11g cost based optimizer. This has been obtained running under 12c binaries (for consistency with the rest of the examples) by running alter session set optimizer_features_enable='' prior to parsing.
The first important difference between this CBO flame graph and the previous one for rule-based optimization is the number of data points collected (about 10 times more). This shows that considerably more code has to be executed for cost based optimization compared to rule, at least in this example; a result that was expected. The activity of parsing the child cursor under kksParseChildCursor, highlighted in Figure 1, is still there in Figure 2, with about the same absolute weight in terms of samples, but relatively much less prominent due to the additional cost-based activities. Most of the captured samples, i.e. CPU time, is now spent under the function 'opitca'. We also see the appearance of many more functions related to query optimization, for example with prefixes 'apa' (Access Path Analysis?), 'kko' (Kernel Kompile Optimizer?), 'kkq' (Kernel Kompile Query?), or simply with the keyword "Cost" in their name. In particular the 'kko*' functions appear to be directly associated with the CBO from the documentation.

Figure 2: flame graph of select count(*) from dba_source using the 11g CBO. See more details in the SVG version of the graph by hovering over the graph lines.

Cost based optimizer and adaptive optimization in 12c

Figure 3 shows the flame graph for select count(*) from dba_source parsed in 12c, with optimizer_features_enable=''. Additional relevant parameters are the default values optimizer_adaptive_features=TRUE and optimizer_adaptive_reporting_only=FALSE. What we notice in Figure 3 is that the number of sampled points has increased compared to the 11g CBO, showing that additional work was done. Next we notice that queries are executed during parsing. From the explain plan details of the statement we see that 2 SQL Plan Directives were used when parsing it. The directive that caused additional dynamic sampling (or 'dynamic statistics', as it is called in this context in 12c) is related to SINGLE TABLE CARDINALITY MISESTIMATE for SYS.OBJ$. By performing a 10046 trace (not reported here for brevity) we can confirm that the additional recursive SQL is indeed related to sampling SYS.OBJ$.
Another method to confirm that the additional work highlighted in Figure 3 comes from adaptive optimization is to turn the feature off by setting optimizer_adaptive_features=FALSE and then measure the flame graph again (the results confirm the conclusions reported above; the graph is omitted from this article for brevity).

Figure 3: flame graph of select count(*) from dba_source using 12c optimizer. See more details by downloading the SVG version of the graph and by hovering over the lines with the function names.

How the graphs have been generated

  • Stack trace data has been sampled at high frequency using perf (on RHEL6): /usr/bin/perf record -a -g -F 100000 -p <pid_of_oracle_session> sleep 5
  • Trace data is then dumped into a txt file with: /usr/bin/perf script > perf_parse_12c.txt
  • Finally the trace file is processed using the FlameGraph stack trace visualizer: grep -v 'cycles:' perf_parse_12c.txt | sed -f os_explain.sed | FlameGraph-master/stackcollapse-perf.pl | FlameGraph-master/flamegraph.pl --minwidth 4 --title "Flame Graph: parsing of select count(*) from dba_source, CBO with adaptive optimizations" > perf_parse_12c.svg
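The first FlameGraph script in the pipeline above folds each multi-line stack record from 'perf script' output into a single "root;...;leaf count" line. To make the idea concrete, here is a minimal awk sketch of that folding step, run on fabricated perf-script-style output (the function names and sample input are invented for illustration; the real stackcollapse-perf.pl from Brendan Gregg's repository handles many more cases):

```shell
# Fold perf-script-style stacks into "root;...;leaf count" lines.
# The input below is fabricated for illustration.
cat > sample_perf.txt <<'EOF'
oracle 1234 100.000: cycles:
    ffff opifch (/u01/oracle)
    ffff qergsFetch (/u01/oracle)
    ffff kcbgtcr (/u01/oracle)

oracle 1234 100.001: cycles:
    ffff opifch (/u01/oracle)
    ffff qergsFetch (/u01/oracle)
    ffff kcbgtcr (/u01/oracle)

oracle 1234 100.002: cycles:
    ffff opifch (/u01/oracle)
    ffff qertbFetch (/u01/oracle)

EOF
awk '
  /^[^ \t]/ { next }                    # event header line: skip
  /^[ \t]/  { stack[n++] = $2; next }   # frame line: keep the function name
  {                                     # blank line: emit this stack root-first
    s = ""
    for (i = n - 1; i >= 0; i--) s = s (s == "" ? "" : ";") stack[i]
    if (s != "") count[s]++
    n = 0
  }
  END { for (s in count) print s, count[s] }
' sample_perf.txt | sort > folded_demo.txt
cat folded_demo.txt
```

The folded output (one line per distinct stack, with its sample count) is exactly the input format that the flame graph renderer consumes.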
On the Oracle side the parsing workload was generated with the script here below:

   declare
      v_mycursor number;
      v_mysql    varchar2(1000);
   begin
      v_mysql := 'select /*random'||dbms_random.string(null,6)||'*/ count(*) from dba_source';
      v_mycursor := dbms_sql.open_cursor;
      -- now manually start perf stack trace collection at OS level
      dbms_sql.parse(v_mycursor, v_mysql, dbms_sql.native);
   end;
   /

Note: a simpler alternative would be to just run an 'explain plan' of the statement under study (select count(*) from dba_source in this example); however, that would also include the work done to insert into the plan_table.

Some hints on how to read the graphs and interpret the Oracle function names can be found in the article "Flame Graphs for Oracle".
An additional link with pointers on how to understand stack traces for Oracle parsing operations can be found in Tanel's blog.


In this article we have demonstrated the use of stack sampling with perf and flame graph visualization for investigating the Oracle optimizer and how its operations have evolved across major Oracle versions. Notably, in the example under study the 12c optimizer shows a longer and more complex code path for parsing than the 11g optimizer. This has been traced and explained by the new 12c feature of adaptive optimization working together with SQL Plan Directives to provide dynamic statistics at parse time.

Credits: Brendan Gregg for Flame Graphs and Tanel Poder for publishing extensive work and examples on using stack trace analysis for Oracle troubleshooting.

Friday, May 9, 2014

Flame Graphs for Oracle

Topic: This post is a hands-on introduction to using on-CPU Flame Graphs for investigating Oracle workloads. This technique is about collecting and visualizing sampled stack trace data to analyze and troubleshoot Oracle processes at the OS level (in particular applied to Linux).

Motivations: The techniques and tools described here can be used for performance investigations to complement wait-event based information gathered from the Oracle engine, such as information available with ASH and sql monitoring. They can also be used for complex troubleshooting when the wait interface is failing to instrument a particular problem under study. Last but not least they can be used as a learning aid to investigating Oracle internals.

Context and credits: The use of stack sampling has previously been covered for Oracle investigations by Tanel Poder in several articles (see Tanel's Advanced Oracle Troubleshooting Guide and for example this article).
Brendan Gregg has done original work on the collection and visualization of stack sampled data and by introducing the Flame Graphs visualization technique.
The techniques discussed in this post rely on perf for data collection. Perf is a relatively new addition to the performance tools available for Linux (requires kernel 2.6.31 or higher, for example RHEL6 or OL6). A few additional references on using perf for investigating Oracle are: Frits Hoogland, Kevin Closson, Craig Shallahamer.

This article: In this post we will approach the use of stack sampling with perf and visualization with Flame Graphs by introducing examples of growing complexities and by detailing the steps needed to reproduce the results. The main idea is to give a few hints of what this type of approach can do to help us analyze process execution and performance rather than producing a comprehensive guide.

Example 1: Flame Graph of query execution

In Figure 1 here below we can see a Flame Graph representation of the stack traces collected during the execution of "select count(*) from dba_source" (the details of how the graph was generated are described later in this article).

Figure 1: Flame Graph of "select count(*) from dba_source". See more details for each stack layer by hovering over the graph lines in the SVG version

The stack traces expose the name of many Oracle's internal functions. There is no public documentation of those internal functions, however much can be guessed by their name and from the documentation in the Oracle Support document "ORA-600 Lookup Error Categories" (formerly MOS note 175982.1, more recently found as an attachment to Note 1321720.1 and also in other links on the internet).
In addition to the support note, Tanel Poder has already done a great job of "decoding" an important class of internal functions, the rowsource functions with prefix qer (query execution rowsource) in his os_explain script. Rowsource functions expose execution plan details, similarly to what is exposed in ASH data (v$active_session_history) at the columns SQL_PLAN_OPERATION and SQL_PLAN_OPTIONS.
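The translation that os_explain performs can be illustrated with a toy version: a hand-made sed file containing a couple of mappings from rowsource function names to readable plan labels. The mappings below are an assumed, illustrative subset (Tanel Poder's real os_explain.sed covers many more functions, and the exact labels may differ):

```shell
# A tiny, illustrative subset of os_explain-style mappings; the real
# os_explain.sed is far more complete and authoritative.
cat > mini_os_explain.sed <<'EOF'
s/qertbFetch/TABLE ACCESS - qertbFetch/g
s/qergsFetch/SORT GROUP BY - qergsFetch/g
EOF
# Decode one folded-stack line (fabricated for this example):
echo "opifch;qergsFetch;qertbFetch 42" | sed -f mini_os_explain.sed > decoded_demo.txt
cat decoded_demo.txt
```

Running the decoded folded stacks through the flame graph renderer is what makes the execution-plan operations readable directly in the graph.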
Note also that in Flame Graphs the horizontal axis is not time: stack traces are simply sorted alphabetically (follow this link to Brendan Gregg's article on Flame Graphs).

If we extract only the rowsource entries from the stack trace data represented in Figure 1, we obtain a simplified Flame Graph which highlights the parts related to the query's execution plan: Figure 2. We can recognize there the execution plan operations and how they stack and nest on top of each other. Although SQL is mainly a declarative language, here we have a good view of how it is translated at run time into procedural execution: after all, the Oracle query engine is just another (complex) C program.

Figure 2: Flame Graph of "select count(*) from dba_source" where the stack data has been filtered to show only rowsources. See more details for each stack layer by hovering over the graph lines in the SVG version

The interpretation of the stack samples of a relatively complex execution plan like this one can be challenging. In particular we want to match the measured stack traces with the execution plan operations. This becomes easier if we cross-check the stack traces with the execution plan details, as shown in Figure 3 for this query.
One subtle but important point worth repeating is that the horizontal axis in the Flame Graph is not time (see Brendan Gregg's article on Flame Graphs). However, the length of each line in the graph is proportional to the number of captured stack frames and therefore to its execution time. This last statement does not hold in the general case, but it holds for this example, as the whole query execution time was 'on-CPU' (more on this later, when we discuss Flame Graphs for an I/O-bound workload).
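Since line width is proportional to sample count, a folded-stacks file can be turned directly into per-stack percentages of total samples. A small awk sketch on made-up folded data (the stack names and counts below are invented for illustration):

```shell
# Fabricated folded-stack data: "stack count" per line.
cat > folded_pct.txt <<'EOF'
opifch;qergsFetch;kcbgtcr 750
opifch;qergsFetch;kcbldrget 200
opifch;qertbFetch 50
EOF
# Two-pass awk: first pass (NR==FNR) sums the samples,
# second pass prints each stack's share of the total.
awk 'NR==FNR {tot += $2; next} {printf "%s %.1f%%\n", $1, 100 * $2 / tot}' \
    folded_pct.txt folded_pct.txt > pct_demo.txt
cat pct_demo.txt
```

These percentages correspond to the relative widths you would see in the rendered flame graph, valid as a time breakdown only when the process stays on-CPU.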

Figure 3: Execution plan and details of the rows processed by each step (Actual Rows, "A-Rows") and logical reads ("Buffers") for the query "select count(*) from dba_source" (Oracle version

How the Flame Graphs of Figure 1 and Figure 2 have been generated
  • Stack samples have been collected with perf running on RHEL6. This example will not work on older kernels; notably it will not work on RHEL5. Perf has been run as the root user on the DB server.
  • Flame Graphs have been generated using Brendan Gregg's FlameGraph scripts. You can download the latest version from the project page, for example as a zip (or clone the project locally using "git clone").
  • Download os_explain and save the string-replacement part into a file called os_explain.sed. Here below an example of these steps:

Steps for data collection:
  • Session1: run sqlplus and find the os pid of its server process. For example use this SQL: select spid from v$process where addr in (select  paddr from v$session where audsid =sys_context('USERENV','SESSIONID'));
  • Session1: run the test SQL to load data into the cache, for this example run: select count(*) from dba_source; 
  • Session 2: on a different window as root start recording the stack traces. As the execution is very short in duration we can afford high frequency sampling (100KHz). Example: perf record -a -g -F100000 -p <pid of sqlplus process>
  • Session 1: run the sql under study, in this example: select count(*) from dba_source; 
  • Session 2: CTRL-C to stop recording

Steps for data processing and visualization:

  • Generate the stack trace in a txt file: perf script >myperf_script.txt
  • Create the flame graph: grep -v 'cycles:' myperf_script.txt | sed -f os_explain.sed | ../FlameGraph-master/stackcollapse-perf.pl | ../FlameGraph-master/flamegraph.pl --title "Flame Graph: select count(*) from dba_source" > Figure1.svg
  • Create the flame graph with only rowsources: grep -i -e qer -e opifch -e '^$' myperf_script.txt | sed -f os_explain.sed | ../FlameGraph-master/stackcollapse-perf.pl | ../FlameGraph-master/flamegraph.pl --title "Flame Graph Rowsource: select count(*) from dba_source" > Figure2.svg
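The rowsource-only filter works because, besides keeping the qer* and opifch lines, it also keeps the empty lines (-e '^$') that separate one stack record from the next, which the collapsing step needs as record boundaries. A toy demonstration of the filter on fabricated frame lines:

```shell
# Fabricated perf-script frame lines (one stack record, then a blank line).
cat > frames_demo.txt <<'EOF'
    ffff kcbgtcr (/u01/oracle)
    ffff qertbFetch (/u01/oracle)
    ffff qergsFetch (/u01/oracle)
    ffff opifch (/u01/oracle)
    ffff ssthrdmain (/u01/oracle)

EOF
# Keep only rowsource (qer*) frames, opifch, and the blank record separator.
grep -i -e qer -e opifch -e '^$' frames_demo.txt > frames_filtered.txt
cat frames_filtered.txt
```

The filtered file keeps the two qer* frames, opifch and the blank line, while the generic kernel-cache and thread-main frames are dropped.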

Example 2: profiling multiple processes - logical IO

In this example we trace CPU workload running from multiple sessions and generate on-CPU Flame Graphs. We use SLOB for logical IO testing to generate the CPU-bound workload, that is SLOB queries run with their entire data set in the buffer cache in this example.

Figure 4: Flame Graph of SLOB logical IO (8 concurrent sessions). See more details in the SVG version

Similarly to what we discussed in Example 1 above, we can now filter the stack traces to highlight the rowsource data. In addition, in this example we also keep the details of the functions whose names start with the letter 'k', notably including the Oracle kernel function kcbgtcr (Kernel Cache Buffer Get Consistent Read).

Figure 5: Flame Graph of SLOB logical IO (8 concurrent sessions) where the stack data has been filtered to show only rowsource functions and functions starting with 'k'. See more details in the SVG version

We can correlate the rowsource execution steps as we see them in Figure 5 with the execution plan of the SLOB test query reported here below (Figure 6).

Figure 6: Execution plan of the SLOB query used for the tests described in this article.

How the Flame Graphs 4 and 5 have been generated

Test setup:
  • See example 1 above on the details about downloading the FlameGraph scripts and about the requirements for perf and os_explain.sed
  • Download SLOB and configure the test environment. 8 schemas have been used for the tests reported here. See also this post for additional details on testing with SLOB.
  • Make sure that SLOB runs with all data in the buffer cache. 8 schemas of 1M blocks each have been used for these tests, with a buffer cache of 70 GB.
  • To facilitate loading data into the buffer cache this SQL was run prior to testing:
    • alter session set "_serial_direct_read"=never;
    • alter table user1.cf1 cache;
    • select count(*) from user1.cf1;
    • repeat for the rest of the test users.
    • while a SLOB test is running we can check that the workload is not doing physical IO for example using Tanel's snapper. For example: snapper all 10 1 "select inst_id, sid from gv$session where username like 'USER%'"

Test execution:
  • Session 1: run the SLOB test. In the test case described here below I used 8 sessions (that is, I started the test with ./runit.sh 8).
  • Session 1: identify the PIDs of the server sessions running the SLOB workload. For example use: select spid from v$process where addr in (select paddr from v$session where username like 'USER%');
  • Session 2: gather perf data for 20 seconds at default sampling rate (1 KHz):  perf record -g -a -p 51929,51930,51931,51932,51933,51934,51962,51963 sleep 20

Data processing and visualization:
  • Generate the stack trace in a txt file: perf script >myperf_scriptSLOB.txt
  • Create the flame graph: grep -v 'cycles:' myperf_scriptSLOB.txt | sed -f os_explain.sed | ../FlameGraph-master/stackcollapse-perf.pl | ../FlameGraph-master/flamegraph.pl --title "Flame Graph: SLOB logical IO" > Figure1_SLOB.svg
  • Create the flame graph with only rowsource steps and k* functions: grep -i -e qer -e opifch -e ' k' -e '^$' myperf_scriptSLOB.txt | sed -f os_explain.sed | ../FlameGraph-master/stackcollapse-perf.pl | ../FlameGraph-master/flamegraph.pl --title "Flame Graph: SLOB logical IO - filtered" > Figure2_SLOB.svg

Example 3: profiling multiple processes - physical IO

We will use the same SLOB workload as example 2 but this time with a very small buffer cache therefore forcing the DB to do physical I/O to read SLOB data from the storage. Random I/O reads will be instrumented by Oracle with the db file sequential read wait event (see also this post for additional details and examples on testing random I/O with SLOB).

Figure 7: Flame Graph of SLOB physical IO (8 concurrent sessions). See more details in the SVG version

By comparing Figure 7 (SLOB physical I/O) with Figure 4 (SLOB logical I/O) we notice that

  • We have many more function calls in the case of physical I/O than in the corresponding graph with only logical I/O. Moreover, in Figure 7 we can see several stack traces that contain only OS kernel calls and that cannot be easily associated with a particular step in Oracle execution. Note that the system under test used NAS storage, so physical I/O appears as network traffic.
  • From the SVG graph versions we can count the number of sampled stack traces and notice that in the case of physical I/O we have considerably fewer samples. This is because perf only collects stack traces when the process is on-CPU.
  • Another important fact that we notice is the appearance of the Oracle kernel function kcbzib. This seems to be associated with reading blocks into the cache with physical reads.

Figure 8: Flame Graph of SLOB physical IO (8 concurrent sessions) where the stack data has been filtered to show only rowsources and functions starting with 'k'. See more details by hovering over the graph lines in the SVG version

Example 4: profiling the server workload

This example is about measuring and visualizing stack traces collected for an entire server. The interpretation becomes more complex as multiple workloads are measured together. Moreover, system calls for I/O and other activities are measured as well.
Another important fact to take into consideration is the footprint of the data collection. For this reason a reduced sampling frequency of 99 Hz was used.

Data gathering at 99HZ for 20 seconds for the whole instance: perf record -g -a -F99 sleep 20

The steps for data visualization are the same as was used in Examples 2 and 3. Here below in Figure 9 the Flame Graph of the measured data and a 'filtered version' in Figure 10.
  • Generate the stack trace in a txt file: perf script >perf_data.txt
  • The commands used to generate the 'filtered' Figure 10 are: grep -i -e qer -e opifch -e ' k' -e exe -e '^$' perf_data.txt | sed -f os_explain.sed | ../FlameGraph-master/stackcollapse-perf.pl | ../FlameGraph-master/flamegraph.pl --title "Flame Graph - filtered" > perf_Figure10.svg

From the rowsource names reported in Figure 10 below we can guess what the workload on the server was: which types of operations were executed, with details on the queries, such as the access paths used and the types of DML. We can also measure in which Oracle functions most of the CPU time was spent. It is worth mentioning again that only on-CPU workload was captured.

Figure 9: Flame Graph of the workload of an entire server for 20 seconds, stack traces sampled at 99 Hz. See more details in the SVG version

Figure 10: Flame Graph of the workload of an entire server for 20 seconds, stack traces sampled at 99 Hz, where the stack data has been filtered to show only rowsources plus functions starting with 'k' or containing 'exe' in their name. See more details in the SVG version


High-frequency sampling stack traces is a powerful technique to investigate process execution at the OS level, in particular for CPU-bound processes. This technique can be successfully applied to the study of Oracle performance and complex troubleshooting. It can complement and in some cases supplement the information exposed by Oracle's V$ views and wait event interface in general. Flame graphs are a visualization technique introduced by Brendan Gregg to facilitate the analysis of sampled stack data. The analysis of stack traces for Oracle processes currently relies on the little information available on the Oracle Support document "ORA-600 Lookup Error Categories" (formerly MOS note 175982.1, more recently seen with note 1321720.1) and on the published work of experts who have approached this field and published their findings, notably Tanel Poder. 
Much of the potential of these techniques is still untapped; in this article we have covered a few basic examples.
Additional resources on interpreting stack traces and on profiling Oracle rowsources and SQL execution can be found in Tanel's blog and Enkitec TV video and in Alexander Anokhin's blog. More info on the use of Flame Graphs can be found in Brendan Gregg's blog.

Friday, April 25, 2014

A Closer Look at CALIBRATE_IO

Topic: This blog entry is about investigating Oracle's DBMS_RESOURCE_MANAGER.CALIBRATE_IO

Spoiler: For quantitative analysis of storage performance, and in particular for measuring random read I/O in Oracle, I'd rather advise the use of tools that allow generating test workloads in a controlled manner, in a way that can be understood and measured, and in particular with latency details together with IOPS measurements. For example SLOB by Kevin Closson is an excellent tool that I have happily used on several occasions for this purpose (see also this example). Kyle Hailey has also written on the topic of storage testing, see for example this. Brendan Gregg has written on the advantages of using the active benchmarking method.

From the Oracle performance tuning manual: "The I/O calibration feature of Oracle Database enables you to assess the performance of the storage subsystem, and determine whether I/O performance problems are caused by the database or the storage subsystem."
This is promising and should definitely catch the DBA's attention. At a closer look it appears that CALIBRATE_IO can probably be better described as a simplified version of the Oracle ORION testing tool integrated inside the database with the goal of measuring key metrics of the database storage, namely the maximum throughput for read sequential IO and maximum IOPS for random read I/O. CALIBRATE_IO measures only read workload.

DBMS_RESOURCE_MANAGER.CALIBRATE_IO has a few advantages: it is very simple to use, it does not require any additional installation steps, it works in RAC and it produces a simple output.

CALIBRATE_IO attempts to measure random I/O performance with two numbers: the maximum number of IOPS and the average latency at which it is measured. Unfortunately this can often be misleading for many modern storage systems, where the IOPS latency distribution is multi-modal, that is, we have a fast speed for I/Os served by SSD (or RAM) cache and a slower speed for I/Os served by physical disks.
Another shortcoming is that the workload generated by CALIBRATE_IO 'is a black box', we don't know the details of how the numbers that are reported in the output are measured and how the testing workload is generated. We will investigate CALIBRATE_IO workload in the rest of this article.

How to run CALIBRATE_IO, a sqlplus example:

Let's start with an example of how to run the procedure (note: the code below will generate a high load on the storage for a few minutes, so you will probably want to be careful about where you run it).

   set serveroutput on
   declare
      lat  INTEGER;
      iops INTEGER;
      mbps INTEGER;
   begin
      DBMS_RESOURCE_MANAGER.CALIBRATE_IO (num_physical_disks=>100, max_latency=>100, max_iops=>iops, max_mbps=>mbps, actual_latency=>lat);
      DBMS_OUTPUT.PUT_LINE ('max_iops = ' || iops);
      DBMS_OUTPUT.PUT_LINE ('latency  = ' || lat);
      DBMS_OUTPUT.PUT_LINE ('max_mbps = ' || mbps);
   end;
   /

Input parameters:
The first input parameter, num_physical_disks, has been set to 100 in the example. From the documentation, this parameter should be set to the "..approximate number of physical disks in the database storage. This parameter is used to determine the initial I/O load for the calibration run". Based on the observation of a few tests, I noticed that the procedure auto-calibrates the load, so it does not seem to be important to get this value exactly right.
The second input parameter is max_latency: "Maximum tolerable latency in milliseconds for database-block-sized IO requests". For the tests described in this article I have typically used the value 100 (milliseconds). Lower values for this parameter may throttle the test workload as CALIBRATE_IO tries to meet this limit for the average I/O latency.

Output parameters:
The results of the measurements are three output parameters: max_iops, latency, max_mbps, as in the example above. The values are also made available in the view DBA_RSRC_IO_CALIBRATE (based on the underlying table SYS.RESOURCE_IO_CALIBRATE$). Notably an additional output parameter MAX_PMBPS "Maximum megabytes per second of large I/O requests that can be sustained by a single process" is only available in DBA_RSRC_IO_CALIBRATE.
MAX_IOPS and LATENCY output parameters are respectively "Maximum number of I/O requests per second that can be sustained. The I/O requests are randomly-distributed, database-block-sized reads." and "Average latency of database-block-sized I/O requests at MAX_IOPS rate, expressed in milliseconds".
Note: the results of the calibrate job can be deleted from the system with: delete SYS.RESOURCE_IO_CALIBRATE$ (and commit). See also this article by Gwen Shapira, where other oddities of CALIBRATE_IO and its relationship with the use of parallelism are also discussed.

Investigations of CALIBRATE_IO workload

In the following we will go through the exercise of investigating the workload generated by CALIBRATE_IO, taking it as a black box. It is a simple but instructive way to practice some techniques for investigating I/O in Oracle, and to further understand what CALIBRATE_IO can do for us and what its limitations are.

- Start calibrate I/O and observe the processes that are executing the workload.
For investigating this I will use a script that queries from V$SESSION or V$ACTIVE_SESSION_HISTORY. For example top.sql and ash.sql that I have described in the article at this link.

The result is that CALIBRATE_IO spawns a number of slaves called CS00, CS01, ... The CSXX slaves are spawned on each active Oracle RAC instance. The number of CALIBRATE_IO processes spawned per instance is equal to the value of the parameter CPU_COUNT, at least in the few tests I have run on servers with different numbers of CPUs (and consequently of CPU_COUNT), ranging from 8 to 32. The number of slaves active at a given time, however, fluctuates as the test progresses. Another fact we can immediately see is that there is a dedicated wait event triggered by the CALIBRATE_IO slaves when doing I/O: "Disk file I/O Calibration"
Here below an example taken from a 4-node RAC database:

- What type of I/O is issued by CALIBRATE_IO?
We can investigate this for example by querying GV$IOFUNCMETRIC with a script such as iometric_details.sql that I have also mentioned in the post at this link.

The result is that CALIBRATE_IO first generates random I/O for small block reads on all RAC nodes, then sequential large I/O reads from multiple sessions and nodes, and finally sequential large reads using a single session. This behavior is consistent with the documentation and the tool's output.
Here below an example of monitoring CALIBRATE_IO run on a 4-node RAC during the first execution phase (small random I/O reads). We can see that all nodes are performing I/O for small random read and that the grand total of IOPS is about 30K.

Another example here below is from the same 4-node RAC during the second execution phase: we can see that all four nodes are now executing large sequential reads for a grand total of about 2.5 GB/sec.

- How is the I/O distributed across the storage?
The workload from CALIBRATE_IO is a black box, however we can imagine that the tool uses the existing data files for generating the load. We want to confirm that with some direct investigations: we can query GV$IOSTAT_FILE to get the number of single block read requests at the start and at the end of the test workload. By calculating the differences we can see which files have been read. The result is that CALIBRATE_IO reads from all the files in the database. The amount of blocks read is approximately proportional to the file size, that is, CALIBRATE_IO spreads the load uniformly across the DB files and filesystems used.
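The before/after snapshot comparison described above is easy to script once the two snapshots have been taken: compute, per file, the difference in the small read request counters. The file names and counter values below are invented for illustration; in practice each snapshot would come from a query against GV$IOSTAT_FILE:

```shell
# Invented 'before' snapshot: file name, small read request count.
cat > iostat_before.txt <<'EOF'
/oradata/system01.dbf 500
/oradata/users01.dbf 1000
EOF
# Invented 'after' snapshot, taken at the end of the CALIBRATE_IO run.
cat > iostat_after.txt <<'EOF'
/oradata/system01.dbf 900
/oradata/users01.dbf 41000
EOF
# First pass (NR==FNR) loads the 'before' counters keyed by file name,
# second pass prints the per-file delta.
awk 'NR==FNR {before[$1] = $2; next} {print $1, $2 - before[$1]}' \
    iostat_before.txt iostat_after.txt > iostat_delta.txt
cat iostat_delta.txt
```

The deltas immediately show which files absorbed the test reads, and comparing them with the file sizes verifies the uniform spread of the load.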
An important fact to keep in mind is that if we use more filesystems/storage types for our database files, CALIBRATE_IO will measure all of them at the same time and produce a result that is an average value.
An example of the type of query that we can use to see which files are being read is:

- CALIBRATE_IO is expected to use asynchronous I/O, can we confirm it?
Using Linux strace on a database using ASM storage over block devices, we can see that I/O is performed using the async IO system calls: io_submit and io_getevents. For the random I/O workload reads are of 8KB, for the sequential I/O workload, reads are of 1MB. This was expected. 
However we can also see that each CSXX slave of CALIBRATE_IO submits its I/Os one by one, and most of the time an io_getevents follows immediately, reaping 1 I/O. There are exceptions where larger numbers of I/Os are reaped together (for example 128 8K I/Os); this latter case shows the true asynchronous nature of the I/O issued by CALIBRATE_IO.
In the second and third part of the workload, CALIBRATE_IO tests for large sequential reads, in this case we see io_submit calls with a payload of 1MB, as expected.

Given the asynchronous nature of the I/O, the latency details for the wait event "Disk file I/O Calibration" do not always give us a precise indication of the I/O service times. In some cases CALIBRATE_IO slaves will issue an I/O and wait until it has been reaped, providing the service time information as the wait event duration. For high-load tests CALIBRATE_IO may issue many I/O requests (typically one by one) but reap them in batches, as mentioned in the paragraph above. In that case the wait event duration becomes uncorrelated with the I/O service time. We can see here below a heat map of the waits, where most of the waits appear in the 1 ms bucket. This is an artifact of the way Oracle times this wait event (if you are interested in Oracle's timing of asynchronous I/O events, see the excellent investigations by Frits Hoogland in his blog).
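V$EVENT_HISTOGRAM, which we use later in this article to drill down into latency, groups wait times into power-of-two millisecond buckets. The bucketing logic can be sketched in a few lines of awk; the latency values below (in milliseconds) are invented for illustration:

```shell
# Invented list of wait durations in milliseconds.
cat > lat_demo.txt <<'EOF'
0.4
0.8
3
6
18
40
EOF
# Each wait lands in the smallest power-of-two bucket >= its latency,
# mimicking the V$EVENT_HISTOGRAM bucket boundaries.
awk '{b = 1; while ($1 > b) b *= 2; h[b]++}
     END {for (b = 1; b <= 64; b *= 2) if (b in h) print "<= " b " ms: " h[b]}' \
    lat_demo.txt > lat_hist.txt
cat lat_hist.txt
```

A histogram like this (or its heat-map rendering over time) exposes multi-modal latency distributions that a single average number, such as the one reported by CALIBRATE_IO, hides.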

These are some key metrics taken during the execution of CALIBRATE_IO for this system with 204 SATA disks, RAC, but running on a single node for this test:

- Can we trust the results from CALIBRATE_IO?
As a reference the result of the CALIBRATE_IO run on the system discussed above with 204 SATA disks in JBOD configuration (Oracle on Linux) was:
max_iops = 24211
latency  = 5
max_mbps = 780

The max IOPS and max MBPS numbers are in line with the expected values. This was a JBOD without any relevant amount of RAM or SSD cache. The expected value for max IOPS is basically just another way of counting the number of disks (something north of 100 IOPS per disk on this hardware). Also the expected value for sequential I/O is easy to estimate at about 800 MB/s (FC HBA, 2 ports at 4 Gbit each). Therefore the values for max IOPS and max MBPS measured by CALIBRATE_IO for this system agree with expectations.
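These expectations are simple back-of-envelope estimates that can be reproduced in a couple of lines: a per-spindle random IOPS figure (110 is an assumed mid-range value for a SATA disk, consistent with "north of 100") times 204 disks, and roughly 400 MB/s of usable payload per 4 Gbit FC port times 2 ports:

```shell
# Back-of-envelope estimates; the per-disk and per-port figures are
# assumptions, not measurements.
awk 'BEGIN {
    iops_per_disk = 110   # assumed random IOPS for one SATA spindle
    mbs_per_port  = 400   # assumed usable MB/s of a 4 Gbit FC port
    printf "estimated max IOPS: %d\n", 204 * iops_per_disk
    printf "estimated max seq throughput: %d MB/s\n", 2 * mbs_per_port
}' > estimates_demo.txt
cat estimates_demo.txt
```

The estimates land close to the measured max_iops = 24211 and max_mbps = 780, which is what makes the CALIBRATE_IO throughput figures credible for this JBOD, in contrast with its latency figure.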
However, the measured latency value appears to be too small and probably an artifact of using async I/O.
Here below we can see a test performed on the same system with SLOB, where we get a better drill-down of latency and understand what is happening (high latency exposed, effects of saturation due to high load and to aging disks resulting in I/O outliers, etc.). IOPS measurements without latency details do not give us the full picture!

In the figure here below we can see some key metrics taken during the execution of SLOB. In particular note the number of IOPS (21K), which is similar to the max IOPS value previously measured with CALIBRATE_IO. However the average synchronous I/O latency (35 ms in that particular snapshot) is much higher than what was reported by CALIBRATE_IO. More details of the measured synchronous I/O latency can be seen in the heat map above.

- Comments on comparing the two tests for the JBOD case: 
The maximum number of IOPS measured when running SLOB turns out to be about the same as what is reported by CALIBRATE_IO for this particular system. This can be explained as the storage system under study is a JBOD and does not have multi-modal response times (there is very little RAM cache and no SSD cache on the storage system, so almost all I/Os are served by spindles).
There is an important difference between the results of the two tests, and it is in the latency details. Latency tells us many details about how the I/O is being served by our storage. By measuring the latency using V$EVENT_HISTOGRAM and plotting it as a heat map we can see that the system under study suffers from very high latency. It was indeed a very old system with aging disks (being decommissioned, BTW). We did not get these latency details from the CALIBRATE_IO run, which actually told us the average latency was 5 ms at 21K IOPS! That would be quite a good result, however it is not the case in reality, and CALIBRATE_IO cannot be trusted on the latency measurement.

- More comments on CALIBRATE_IO and results for other systems types
I have run a few more measurements of CALIBRATE_IO on a NAS system equipped with SSD cache. I have compared the results of CALIBRATE_IO on those systems with values measured when running SLOB, with estimates based on the HW specs, and with estimates based on measurements on similar systems. The results have been quite mixed: in some cases CALIBRATE_IO would report a value of max IOPS in the range of the expected values, in other cases it would report higher values (up to a factor of 3 higher).
Latency values have proven once more to be quite important and possibly a key to understanding what is happening: in the cases where the max IOPS values were above the expected value, the reported latency was very low; in some cases the measured latency was reported to be 0!
What is likely happening is that the SSD cache of the controller is being exercised and CALIBRATE_IO does not account for that.
In conclusion, for the tests run on NAS storage with SSD cache, CALIBRATE_IO cannot be trusted on the measurement of IOPS and latency.
Also the measurement of the maximum throughput for sequential I/O on these systems has been consistently below the expected value. We expect 1 GB/sec of maximum sequential I/O throughput for this HW, and this is confirmed by some simple tests with parallel query; however in most cases CALIBRATE_IO would report only about 500 MB/sec, therefore for this system it cannot be trusted on the measurement of sequential I/O throughput either.


In this post we have investigated the use of CALIBRATE_IO to produce I/O-intensive workloads for testing storage for Oracle read workloads. CALIBRATE_IO is an easy tool to use and allows you to get started with testing very quickly. However, it fails to produce accurate results in many circumstances. In particular, some important shortcomings of CALIBRATE_IO are: the workload is a black box, so we cannot easily scale up the load or understand the effects of storage RAM or SSD cache on the results; it does not provide a latency breakdown of the I/O, therefore hiding some critical details of the storage behavior; and the reported values of max IOPS, max MBPS and latency are not always accurate. Finally, by design CALIBRATE_IO measures only a single value of max IOPS, and this can be misleading for storage systems with multi-modal response times (such as storage based on disks for capacity but with large amounts of SSD cache).

See also additional links to tools and methods for storage performance studies provided in the first part of the article.