Tuesday, February 26, 2013

In-memory PQ and physical reads

In my previous post I demonstrated how in-memory PQ can access a table directly from the buffer cache even when you're using Manual DOP.

One interesting question, however, is what happens when a PQ slave needs to read some blocks from disk given that the object has qualified for in-memory (cached) access. Would the slave do it using direct or buffered I/O?

The answer becomes somewhat clear once you realize that in-memory PQ is enabled simply by utilizing buffered reads. Since direct path reads cannot take advantage of any blocks stored in the buffer cache (local or remote), doing direct path reads would defeat the whole point of in-memory PQ.
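One way to see this for yourself (a sketch of my own, not part of the original test; it assumes SELECT access to the v$ views) is to check, while the query is running, which read-related wait events the slaves attached to your query coordinator have posted:

```sql
-- PQ slaves doing buffered reads wait on "db file scattered read";
-- direct path reads would show up as "direct path read" instead.
select se.sid, se.event, se.total_waits, se.time_waited
  from v$px_session px, v$session_event se
 where se.sid = px.sid
   and px.qcsid = userenv('sid')    -- slaves working for our coordinator
   and se.event like '%read%'
 order by se.sid, se.time_waited desc;
```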

To demonstrate the point, here is an excerpt from a tkprof output for one of the parallel query slaves:
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  SORT AGGREGATE (cr=0 pr=0 pw=0 time=0 us)
         0          0          0   PX COORDINATOR  (cr=0 pr=0 pw=0 time=0 us)
         0          0          0    PX SEND QC (RANDOM) :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
         1          1          1     SORT AGGREGATE (cr=62838 pr=62500 pw=0 time=7492751 us)
     57696      62500      72120      PX BLOCK ITERATOR (cr=62838 pr=62500 pw=0 time=24528381 us cost=18846 size=0 card=500000)
     57696      62500      72120       TABLE ACCESS FULL Z_TEST (cr=62838 pr=62500 pw=0 time=23944184 us cost=18846 size=0 card=500000)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  PX Deq: Execution Msg                         128        0.19          1.06
  db file scattered read                       4546        0.16         43.99
  latch free                                      1        0.00          0.00
  latch: cache buffers lru chain                 31        0.01          0.03
  latch: cache buffers chains                     3        0.00          0.00
  latch: object queue header operation            4        0.00          0.00
Note that the slave waited on the db file scattered read event, which is nothing other than buffered multiblock reads. If you were to run the same test on the Exadata platform you would see the cell multiblock physical read event instead, given that in-memory PQ was utilized. There are a couple of consequences of this:
  • There is no need to do an object-level checkpoint. This makes in-memory PQ somewhat more friendly to OLTP environments since it doesn't need to flush any dirty buffers to disk.
  • If you are running on the Exadata platform, none of the offloading will happen, even if you have to read a significant portion of the table from disk.
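On Exadata, one way to confirm the lack of offloading (an assumption on my part, not something from the original tests; assumes a reasonably quiet instance) is to snapshot the smart scan eligibility statistic before and after the parallel query and compare the delta:

```sql
-- If in-memory PQ served the scan from the buffer cache, this
-- statistic should barely move between the two snapshots.
select name, value
  from v$sysstat
 where name = 'cell physical IO bytes eligible for predicate offload';
```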

On another note, it looks like things have come full circle: serial sessions are now able to utilize direct path reads, while parallel query slaves are able to do buffered I/O.

All tests were done on 11.2.0.3 (both Exadata and non-Exadata).

Monday, February 18, 2013

Does in-memory PQ work with PARALLEL_DEGREE_POLICY=MANUAL?

In-memory parallel execution seems to be gaining popularity, especially among people running X2-8 and X3-8 Exadata systems or any other system that has large amounts of memory capable of caching lots of data.

Oracle documentation suggests that in order to utilize in-memory PQ, parallel_degree_policy needs to be set to auto.

_parallel_cluster_cache_policy

One of the parameters influenced by parallel_degree_policy is _parallel_cluster_cache_policy. When using Auto DOP, _parallel_cluster_cache_policy gets set to cached. The question then becomes: what happens if we set _parallel_cluster_cache_policy=cached while still keeping Manual DOP? Will the system use in-memory PQ?
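A quick way to see what this parameter is currently set to is to query the x$ fixed tables directly (a sketch; this requires SYS access, and the x$ tables are undocumented and unsupported):

```sql
-- Session-level value of the hidden parameter (join on indx).
select i.ksppinm name, v.ksppstvl value
  from x$ksppi i, x$ksppcv v
 where i.indx = v.indx
   and i.ksppinm = '_parallel_cluster_cache_policy';
```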

Test table

Below is a test table setup:
SQL> create table z_test tablespace data as
        select level n, rpad('*', 4000, '*') v
                from dual
                connect by level <= 500000;

Table created.

SQL> alter table z_test add constraint pk_z_test primary key (n);

Table altered.

SQL> select bytes/power(1024,2) mb
        from user_segments
        where segment_name='Z_TEST';

        MB
----------
      3968

SQL> exec dbms_stats.gather_table_stats(user, 'z_test');

PL/SQL procedure successfully completed.
The instance is running with a 12G buffer cache, so it'll have no problem fully caching the above table. All tests were done on my in-house test lab with Oracle 11.2.0.3.3 running inside a Linux VM.

Classic PQ #1

Without setting any additional parameters, PQ behaves the way it always has -- by utilizing direct path reads straight into the process' memory:
SQL> set timing on
SQL> set autot trace exp stat
SQL> select /*+ parallel(8) full(z_test) */ count(*) from z_test;

Elapsed: 00:00:02.86

Execution Plan
----------------------------------------------------------
Plan hash value: 2128527892

--------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |          |     1 | 18846   (1)| 00:00:01 |        |      |            |
|   1 |  SORT AGGREGATE        |          |     1 |            |          |        |      |            |
|   2 |   PX COORDINATOR       |          |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM) | :TQ10000 |     1 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE     |          |     1 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR |          |   500K| 18846   (1)| 00:00:01 |  Q1,00 | PCWC |            |
|   6 |       TABLE ACCESS FULL| Z_TEST   |   500K| 18846   (1)| 00:00:01 |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------------------------------------

Note
-----
   - Degree of Parallelism is 8 because of hint


Statistics
----------------------------------------------------------
         25  recursive calls
          0  db block gets
     500525  consistent gets
     500000  physical reads
          0  redo size
        526  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
It took us about 2.86 seconds to full scan the table, which equals roughly 1387MB/s throughput (my test lab storage setup is described here). The above clearly shows that we had to do physical reads in order to access the entire table.
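As a quick sanity check on that number, the arithmetic can be reproduced right in SQL*Plus (the 3968MB segment size comes from the earlier user_segments query):

```sql
-- Rough scan throughput: 3968 MB read in 2.86 seconds.
select round(3968 / 2.86) mb_per_sec from dual;  -- returns 1387
```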

Caching the table

Of course, before testing the in-memory PQ we need to make sure that our entire table sits in the buffer cache. The easiest way to do this is to read the entire table through the buffer cache by accessing it via an index (a plain full table scan could use direct path reads and bypass the cache):
SQL> select /*+ index(z_test,pk_z_test) */ v from z_test;

500000 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 579016438

-----------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |   500K|  1907M|   501K  (1)| 00:00:03 |
|   1 |  TABLE ACCESS BY INDEX ROWID| Z_TEST    |   500K|  1907M|   501K  (1)| 00:00:03 |
|   2 |   INDEX FULL SCAN           | PK_Z_TEST |   500K|       |  1052   (1)| 00:00:01 |
-----------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     534311  consistent gets
     501105  physical reads
          0  redo size
 2021185355  bytes sent via SQL*Net to client
     367187  bytes received via SQL*Net from client
      33335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     500000  rows processed
Let's check to make sure all table blocks are in the buffer cache:
SQL> set autot off
SQL> select count(*)
        from v$bh
        where objd= (select data_object_id from user_objects where object_name='Z_TEST')
                and status='xcur';

  COUNT(*)
----------
    500001
Now we're good to go!

Classic PQ #2

Even with the table entirely cached, it is still read using physical reads when utilizing classic PQ -- as it should be:
SQL> set autot trace exp stat
SQL> select /*+ parallel(8) full(z_test) */ count(*) from z_test;

Elapsed: 00:00:02.83

Execution Plan
----------------------------------------------------------
Plan hash value: 2128527892

--------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |          |     1 | 18846   (1)| 00:00:01 |        |      |            |
|   1 |  SORT AGGREGATE        |          |     1 |            |          |        |      |            |
|   2 |   PX COORDINATOR       |          |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM) | :TQ10000 |     1 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE     |          |     1 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR |          |   500K| 18846   (1)| 00:00:01 |  Q1,00 | PCWC |            |
|   6 |       TABLE ACCESS FULL| Z_TEST   |   500K| 18846   (1)| 00:00:01 |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------------------------------------

Note
-----
   - Degree of Parallelism is 8 because of hint


Statistics
----------------------------------------------------------
         24  recursive calls
          0  db block gets
     500525  consistent gets
     500000  physical reads
          0  redo size
        526  bytes sent via SQL*Net to client
        523  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
In-memory PQ

Let's flip the parameter responsible for in-memory PQ (while still keeping parallel_degree_policy=manual) and see what happens:
SQL> alter session set "_parallel_cluster_cache_policy"=cached;

Session altered.

Elapsed: 00:00:00.01
SQL> select /*+ parallel(8) full(z_test) */ count(*) from z_test;

Elapsed: 00:00:00.36

Execution Plan
----------------------------------------------------------
Plan hash value: 2128527892

--------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |          |     1 | 18846   (1)| 00:00:01 |        |      |            |
|   1 |  SORT AGGREGATE        |          |     1 |            |          |        |      |            |
|   2 |   PX COORDINATOR       |          |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM) | :TQ10000 |     1 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE     |          |     1 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR |          |   500K| 18846   (1)| 00:00:01 |  Q1,00 | PCWC |            |
|   6 |       TABLE ACCESS FULL| Z_TEST   |   500K| 18846   (1)| 00:00:01 |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------------------------------------

Note
-----
   - Degree of Parallelism is 8 because of hint


Statistics
----------------------------------------------------------
         24  recursive calls
          0  db block gets
     502709  consistent gets
          0  physical reads
          0  redo size
        526  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
As you can see, the entire table was read from the buffer cache this time, and much faster! The fact that we did zero physical I/Os shows in-memory PQ kicking in.

Conclusion

It is possible to use in-memory PQ with Manual DOP by setting _parallel_cluster_cache_policy=cached. Of course, always consult with Oracle Support before flipping any underscore parameters.