Thursday, February 20, 2014

AWR Top 5 Timed Foreground Events

I've noticed that from time to time people post SQL queries for getting the AWR Top 5 Timed Foreground Events over a range of snapshots. Since this is something I've done for years, here is the version of the SQL I use in case somebody finds it useful:
select case wait_rank when 1 then inst_id end "Inst Num",
 case wait_rank when 1 then snap_id end "Snap Id",
 case wait_rank when 1 then begin_snap end "Begin Snap",
 case wait_rank when 1 then end_snap end "End Snap",
 event_name "Event",
 total_waits "Waits",
 time_waited "Time(s)",
 round((time_waited/total_waits)*1000) "Avg wait(ms)",
 round((time_waited/db_time)*100, 2) "% DB time",
 substr(wait_class, 1, 15) "Wait Class"
from (
select
  inst_id,
  snap_id, to_char(begin_snap, 'DD-MM-YY hh24:mi:ss') begin_snap,
  to_char(end_snap, 'hh24:mi:ss') end_snap,
  event_name,
  wait_class,
  total_waits,
  time_waited,
  dense_rank() over (partition by inst_id, snap_id order by time_waited desc)-1 wait_rank,
  max(time_waited) over (partition by inst_id, snap_id) db_time
from (
select
  s.instance_number inst_id,
  s.snap_id,
  s.begin_interval_time begin_snap,
  s.end_interval_time end_snap,
  event_name,
  wait_class,
  total_waits-lag(total_waits, 1, total_waits) over
   (partition by s.startup_time, s.instance_number, stats.event_name order by s.snap_id) total_waits,
  time_waited-lag(time_waited, 1, time_waited) over
   (partition by s.startup_time, s.instance_number, stats.event_name order by s.snap_id) time_waited,
  min(s.snap_id) over (partition by s.startup_time, s.instance_number, stats.event_name) min_snap_id
from (
 select dbid, instance_number, snap_id, event_name, wait_class, total_waits_fg total_waits, round(time_waited_micro_fg/1000000, 2) time_waited
  from dba_hist_system_event
  where wait_class not in ('Idle', 'System I/O')
 union all
 select dbid, instance_number, snap_id, stat_name event_name, null wait_class, null total_waits, round(value/1000000, 2) time_waited
  from dba_hist_sys_time_model
  where stat_name in ('DB CPU', 'DB time')
) stats, dba_hist_snapshot s
 where stats.instance_number=s.instance_number
  and stats.snap_id=s.snap_id
  and stats.dbid=s.dbid
  and s.dbid=3870213301
  and s.instance_number=1
  and stats.snap_id between 190 and 195
) where snap_id > min_snap_id and nvl(total_waits,1) > 0
) where event_name!='DB time' and wait_rank <= 5
order by inst_id, snap_id;

Inst Snap  Begin Snap        End Snap Event                            Waits    Time(s) Avg wait(ms)  % DB time Wait Class
---- ----- ----------------- -------- --------------------------- ---------- ---------- ------------ ---------- ---------------
   1   191 20-02-14 14:10:10 14:20:10 cell smart table scan           631829    9569.43           15      79.03 User I/O
                                      DB CPU                                    1202.09                    9.93 
                                      direct path read temp            66074    1006.82           15       8.32 User I/O
                                      PX Deq: Slave Session Stats      11730     429.91           37       3.55 Other
                                      latch: shared pool               28134     162.47            6       1.34 Concurrency
   1   192 20-02-14 14:20:10 14:30:11 cell smart table scan          1391832    4620.11            3      67.39 User I/O
                                      DB CPU                                    1017.78                   14.85 
                                      direct path read temp            76329     977.95           13      14.26 User I/O
                                      PX Deq: Slave Session Stats      25043     401.53           16       5.86 Other
                                      latch free                       38836      214.1            6       3.12 Other
   1   193 20-02-14 14:30:11 14:40:14 cell smart table scan          2448539   11075.29            5       79.3 User I/O
                                      DB CPU                                    1529.93                   10.95 
                                      PX Deq: Slave Session Stats      44242    1520.01           34      10.88 Other
                                      direct path read temp            77583     985.65           13       7.06 User I/O
                                      latch free                       67518     376.52            6        2.7 Other
   1   194 20-02-14 14:40:14 14:50:15 direct path read temp            99224      857.3            9      71.63 User I/O
                                      DB CPU                                     328.78                   27.47 
                                      name-service call wait              91        5.4           59       0.45 Other
                                      PX Deq: Slave Session Stats         83       0.17            2       0.01 Other
                                      direct path write                  194       0.12            1       0.01 User I/O
   1   195 20-02-14 14:50:15 15:00:18 DB CPU                                    1188.84                   98.15 
                                      log switch/archive                   1      10.01        10010       0.83 Other
                                      direct path read temp              775       3.96            5       0.33 User I/O
                                      cell smart table scan             1393        1.1            1       0.09 User I/O
                                      cell single block physical         148        0.9            6       0.07 User I/O
                                      read                                                                      
 
 
25 rows selected
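The dbid, instance number and snapshot range in the query above are hard-coded for my system. One way to look up the values for your own environment (a sketch against v$database and dba_hist_snapshot; the one-day window is just an example) is:

select d.dbid, s.instance_number,
  min(s.snap_id) min_snap_id, max(s.snap_id) max_snap_id
 from dba_hist_snapshot s, v$database d
 where s.dbid=d.dbid
  and s.begin_interval_time >= sysdate-1
 group by d.dbid, s.instance_number
 order by s.instance_number;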
 

Friday, February 14, 2014

'active txn count during cleanout', part II

In part I I showed some interesting side effects that happen when you try to select from a table block which has an outstanding active transaction in it. In this post we're going to make things a little bit more interesting by introducing indexes into the picture.

Test Setup

I'll create a table with two rows and an index:
SQL> create table test as
  2   select level n
  3    from dual
  4    connect by level <= 2;
 
Table created
 
SQL> create index i_test on test (n);
 
Index created
Session 1

I'll update one row in my first session and leave the transaction open:
SQL> update test set n=3 where n=1;
 
1 row updated
Here is the xid for this transaction:
SQL> select '0x'||to_char(XIDUSN, 'fm000x')||'.'||
  2    to_char(XIDSLOT, 'fm00x')||'.'||
  3    to_char(XIDSQN, 'fm0000000x') xid
  4   from v$transaction
  5   where addr=(
  6    select taddr
  7     from v$session
  8     where sid=sys_context('userenv','sid')
  9    );
 
XID
----------------------
0x0004.01c.00001fd5
Index Block Dump 1

Since I have only two rows in the table I end up with a special case where the index root block is able to hold all the data, essentially playing the role of both the root block and a leaf block at the same time. This makes it easy to dump the relevant index block because I know there is only one index block to dump:
Block header dump:  0x0100008b
 Object id on Block? Y
 seg/obj: 0x12f46  csc: 0x00.1efcb3c  itc: 2  flg: E  typ: 2 - INDEX
     brn: 0  bdba: 0x1000088 ver: 0x01 opc: 0
     inc: 0  exflg: 0

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x02   0x0004.01c.00001fd5  0x00c011ac.09a5.0b  ----    2  fsc 0x000e.00000000
We have two rows locked in the index block because the index entry with value=1 got deleted and an entry with value=3 got inserted, as per our update. Note the block cleanout scn (csc) value: 0x00.1efcb3c
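For reference, a dump like the one above can be produced by locating the index root block and dumping it. A minimal sketch, assuming the index lives in the current schema and relying on the fact that the root block normally sits right after the segment header:

select header_file, header_block+1 root_block
 from dba_segments
 where owner=user
  and segment_name='I_TEST';

-- plug in the file/block numbers returned above
alter system dump datafile &header_file block &root_block;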

Session 2

I'll update another row in the second session leaving the transaction open as well:
SQL> update test set n=4 where n=2;
 
1 row updated

XID
----------------------
0x0003.01f.00001eab
Index Block Dump 2

Here is how the index block dump looks right now:
Block header dump:  0x0100008b
 Object id on Block? Y
 seg/obj: 0x12f46  csc: 0x00.1efcd0c  itc: 3  flg: E  typ: 2 - INDEX
     brn: 0  bdba: 0x1000088 ver: 0x01 opc: 0
     inc: 0  exflg: 0

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x02   0x0004.01c.00001fd5  0x00c011ac.09a5.0b  ----    2  fsc 0x000e.00000000
0x03   0x0003.01f.00001eab  0x00c00e73.0982.31  ----    2  fsc 0x000e.00000000
Notice that the csc value has changed from 0x00.1efcb3c to 0x00.1efcd0c.

What happened is just another variation of the theme we saw in part I -- when our second session updates the index block, it notices an active transaction in the ITL list and tries to perform a cleanout. It does the same for the table block as well, but since I've shown all the relevant mechanics in the previous post I'll leave it at that.

Undo Segment Header Checks

The important consequence of all the above is that a session which tries to perform a cleanout has to look into the other transaction's (or transactions') undo segment header block in order to find out whether that transaction has committed or not:
SQL> select
  2    trn.xidusn,
  3    rbs.file_id,
  4    rbs.block_id header_block,
  5    trn.ubablk undo_block,
  6    '0x'||to_char(trn.XIDUSN, 'fm000x')||'.'||
  7    to_char(trn.XIDSLOT, 'fm00x')||'.'||
  8    to_char(trn.XIDSQN, 'fm0000000x') xid
  9   from v$transaction trn, dba_rollback_segs rbs
 10   where trn.XIDUSN=rbs.segment_id
 11   order by 1;
 
    XIDUSN    FILE_ID HEADER_BLOCK UNDO_BLOCK XID
---------- ---------- ------------ ---------- ----------------------
         3          3          160       3699 0x0003.01f.00001eab
         4          3          176       4524 0x0004.01c.00001fd5
Our first session's xid was 0x0004.01c.00001fd5, so when our second session performed the update it had to look into block 176 (the undo segment header block) to check the other transaction's status and into block 4524 (the undo block) in order to roll back the other session's changes for write consistency checks:
WAIT #140055864053216: nam='db file sequential read' ela= 341 file#=3 block#=176 blocks=1 obj#=0 tim=1392400391392719
WAIT #140055864053216: nam='db file sequential read' ela= 675 file#=3 block#=4524 blocks=1 obj#=0 tim=1392400391393679
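The wait lines above come from an extended SQL trace of the second session. Something along these lines, enabled in that session before running the update, produces such a trace (the standard 10046 level 8 event):

alter session set events '10046 trace name context forever, level 8';
-- run the update here
alter session set events '10046 trace name context off';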
I'll continue setting the stage for a perfect disaster with delayed block cleanout and parallel DML in the upcoming posts of this series.

Tuesday, January 28, 2014

'active txn count during cleanout', part I

I was going to write a blog post about some peculiar side effects you can run into with delayed block cleanout when running parallel DML, but the entry soon grew so big that I decided to split it up into a series of more manageable posts.

For a good background on various themes of block cleanout check out Clean it up by Jonathan Lewis.

Active transactions, consistent reads and table scans

First I'm going to show you some interesting observations about what happens when you're trying to select from the block which has an outstanding open transaction in it. Let's create a test table with one row and update it leaving the transaction open. I'm using 11.2.0.4 here:
SQL> create table test (n number, m number);
 
Table created
 
SQL> insert into test values (1, 1);
 
1 row inserted
 
SQL> commit;
 
Commit complete

SQL> select dbms_rowid.rowid_relative_fno(rowid) file#,
  2    dbms_rowid.rowid_block_number(rowid) block#
  3    from test;
 
     FILE#     BLOCK#
---------- ----------
         4        134
 
SQL> update test set n=n;
 
1 row updated
If we were to look into the buffer headers view (x$bh) we would find the following:
     FILE#     DBABLK STATE CR_SCN_BAS CR_SCN_WRP
---------- ---------- ----- ---------- ----------
         4        134 xcur           0          0
         4        134 cur       595913          0
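The x$bh listing above can be produced with a query along these lines (a sketch to be run as SYS; the state decode is an assumption based on the commonly documented x$bh state values, and file# here is the absolute file number):

select file#, dbablk,
  decode(state, 1, 'xcur', 2, 'scur', 3, 'cur', to_char(state)) state,
  cr_scn_bas, cr_scn_wrp
 from x$bh
 where file#=4
  and dbablk=134
 order by cr_scn_bas;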
Now let's select from this table in a different session while checking a couple of stats at the same time:
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
immediate (CR) block cleanout applications                                0
active txn count during cleanout                                          0
 
SQL> select  * from test;
 
         N          M
---------- ----------
         1          1
 
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
immediate (CR) block cleanout applications                                1
active txn count during cleanout                                          1
 
SQL> select  * from test;
 
         N          M
---------- ----------
         1          1
 
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
immediate (CR) block cleanout applications                                2
active txn count during cleanout                                          2
 
SQL> select  * from test;
 
         N          M
---------- ----------
         1          1

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
immediate (CR) block cleanout applications                                3
active txn count during cleanout                                          3
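The statistic snapshots above can be taken with a query along these lines (a sketch using v$mystat from the selecting session itself; from a separate monitoring session you would use v$sesstat with the appropriate sid instead):

select sn.name, ms.value
 from v$mystat ms, v$statname sn
 where ms.statistic#=sn.statistic#
  and sn.name in ('immediate (CR) block cleanout applications',
                  'active txn count during cleanout');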
The first thing worth mentioning is that both the immediate (CR) block cleanout applications and active txn count during cleanout statistics increment every time we execute the select. Immediate (CR) block cleanout applications indicates that the session is performing a delayed block cleanout while doing a consistent read (CR). Active txn count during cleanout indicates how many currently active transactions the cleanout process encountered in each block. From one perspective this makes sense -- when our select reads the block and discovers that there are open transactions in it, it may not know upfront whether these transactions are still active or whether the block indeed requires a cleanout. However, something interesting shows up when we look into x$bh again:
     FILE#     DBABLK STATE CR_SCN_BAS CR_SCN_WRP
---------- ---------- ----- ---------- ----------
         4        134 xcur           0          0
         4        134 cur       595913          0
         4        134 cur       595922          0
         4        134 cur       595926          0
         4        134 cur       595940          0
Clearly each subsequent select generated a new consistent read version of the block at a different SCN. Indeed, if we were to dump the table's block before and after the last select here is what we would find:

Before:
buffer tsn: 4 rdba: 0x01000086 (4/134)
scn: 0x0000.000917d6 seq: 0x01 flg: 0x04 tail: 0x17d60601
frmt: 0x02 chkval: 0xf347 type: 0x06=trans data
...
Block header dump:  0x01000086
 Object id on Block? Y
 seg/obj: 0x113da  csc: 0x00.917d6  itc: 2  flg: E  typ: 1 - DATA
     brn: 0  bdba: 0x1000080 ver: 0x01 opc: 0
     inc: 0  exflg: 0

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0002.01f.00000242  0x00c0c19c.0035.19  C---    0  scn 0x0000.00090e7e
0x02   0x0008.00b.00000241  0x00c00989.0039.38  ----    1  fsc 0x0000.00000000
After:
buffer tsn: 4 rdba: 0x01000086 (4/134)
scn: 0x0000.000917e4 seq: 0x01 flg: 0x04 tail: 0x17e40601
frmt: 0x02 chkval: 0xf375 type: 0x06=trans data
...
Block header dump:  0x01000086
 Object id on Block? Y
 seg/obj: 0x113da  csc: 0x00.917e4  itc: 2  flg: E  typ: 1 - DATA
     brn: 0  bdba: 0x1000080 ver: 0x01 opc: 0
     inc: 0  exflg: 0

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0002.01f.00000242  0x00c0c19c.0035.19  C---    0  scn 0x0000.00090e7e
0x02   0x0008.00b.00000241  0x00c00989.0039.38  ----    1  fsc 0x0000.00000000
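For reference, both dumps can be produced with the standard data block dump command, using the file and block numbers from the dbms_rowid query earlier; the dump ends up in the session's trace file:

alter system dump datafile 4 block 134;

select value from v$diag_info where name='Default Trace File';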
Notice that the cleanout scn (csc) has changed and that the block scn got updated to the same value as well. As you have probably guessed by now, each select generates a block cleanout redo record too:
REDO RECORD - Thread:1 RBA: 0x00000c.00035faf.0010 LEN: 0x006c VLD: 0x05
SCN: 0x0000.000917e4 SUBSCN:  1 01/28/2014 00:16:44
(LWN RBA: 0x00000c.00035faf.0010 LEN: 0001 NST: 0001 SCN: 0x0000.000917e4)
CHANGE #1 TYP:0 CLS:1 AFN:4 DBA:0x01000086 OBJ:70618 SCN:0x0000.000917d6 SEQ:1 OP:4.1 ENC:0 RBL:0
Block cleanout record, scn:  0x0000.000917e4 ver: 0x01 opt: 0x01, entries follow...
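Redo records like the one above can be examined by dumping the current online redo log (a sketch; it requires the log member path and appropriate privileges, and the dump again goes to the session's trace file):

select lf.member
 from v$logfile lf, v$log l
 where lf.group#=l.group#
  and l.status='CURRENT';

alter system dump logfile '&member';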
From another perspective, why even bother doing any of this? It almost looks like the cleanout code starts tidying up the block, updates the scn plus csc, and only then discovers that there is really nothing to clean up because the other transaction is still active. But by that point it has already made changes to the current version of the block, which then results in a proliferation of CR copies from the selects and increased redo generation from the (essentially empty) shell block cleanout records. There are going to be more interesting details worth mentioning in the next post.

Saturday, January 25, 2014

crsd.bin core dumps

Core dump issues can sometimes be notoriously difficult to troubleshoot. I got a call this morning from one of my customers saying that after a power outage Grid Infrastructure was not able to fully come up on some nodes of their Exadata cluster. After further examining the situation it turned out that the crsd.bin binary was simply core dumping upon startup.

Troubleshooting Grid Infrastructure startup issues can sometimes be a chore even when nothing is core dumping, so what could be more fun than when it's unable to fully start because a major daemon is core dumping?

One of the useful things to do when a binary core dumps is to get a stack trace to see which function raised the exception (you can examine the core file with gdb, for example, in order to do that). Let's see what the stack trace holds for us:
Core was generated by `/u01/app/11.2.0.3/grid/bin/crsd.bin reboot'.
Program terminated with signal 6, Aborted.
#0  0x0000003ea3e30285 in raise () from /lib64/libc.so.6
(gdb) bt
#0  0x0000003ea3e30285 in raise () from /lib64/libc.so.6
#1  0x0000003ea3e31d30 in abort () from /lib64/libc.so.6
#2  0x0000003ea56bed94 in __gnu_cxx::__verbose_terminate_handler() ()
   from /usr/lib64/libstdc++.so.6
#3  0x0000003ea56bce46 in ?? () from /usr/lib64/libstdc++.so.6
#4  0x0000003ea56bce73 in std::terminate() () from /usr/lib64/libstdc++.so.6
#5  0x0000003ea56bcef9 in __cxa_rethrow () from /usr/lib64/libstdc++.so.6
#6  0x0000000000df8672 in Acl::Acl (this=0x4556d440, domain=..., resource=...,
    aclString=..., useOcr=true, $U7=,
    $U8=, $U9=,
    $V0=, $V1=) at acl.cpp:120
#7  0x0000000000df879c in Acl::_ZN3CAA3AclC1ERKSsS2_S2_b (this=0x4556d440,
    $U7=, $U8=,
    $U9=, $V0=,
    $V1=)
#8  0x0000000000a4d81e in SrvResource::initUserId (this=0x7f15803d7550,
    $1=) at clsAgfwSrvResource.cpp:204
We can see that the source of the exception is in Acl::Acl, from where it is then propagated through the standard libraries. Moreover, the function SrvResource::initUserId appears in the stack trace as well, which makes you wonder whether there is some issue with one of the resources' Access Control Lists, in particular with its user id setting.

Armed with that knowledge you can now sift through the Grid Infrastructure logs in a much more effective way, because these logs are notoriously big and "chatty" (I think my worst nightmare is the day the database alert log becomes like the GI alert log, thereby making it much less useful). And there we have it:
Exception: ACL entry creation failed for: owner:ggate:rwx
It turned out that the nodes which were core dumping had recently been added to the cluster and the user ggate, which is the owner of the GoldenGate resource, simply did not exist on these nodes. Apparently that was enough to cause crsd.bin to core dump. Yikes!

Tuesday, December 03, 2013

Oracle 12cR1, UDF Pragma and HyperLogLog

One interesting enhancement in 12cR1 PL/SQL is the UDF pragma, which has the following description:
The UDF pragma tells the compiler that the PL/SQL unit is a user defined function that is used primarily in SQL statements, which might improve its performance.
I thought it would be very cool to try it out with the HyperLogLog function from my recent post and see if it results in any measurable performance improvement.

Test Table

I'll use the same test table as I did in my original post:
SQL> create table z_hll_test as
  2   select dbms_random.string('x', 4)||rpad('x', 500, 'x') n
  3    from dual
  4    connect by level <= 1000000;
 
Table created

SQL> alter table z_hll_test cache;
 
Table altered
Note that I'm explicitly setting the table to cache in order to make in-memory PQ kick in and eliminate disk I/O as a factor in my test case. For each test I made sure that no physical I/O happened (including temp I/O for the native distinct test).
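On an otherwise idle test instance a quick sanity check is to compare v$sysstat deltas before and after each run (a sketch; on a busy system you would need session- or SQL-level statistics instead, since the I/O is done by the PX slaves rather than the query coordinator):

select name, value
 from v$sysstat
 where name in ('physical reads',
                'physical reads direct',
                'physical reads direct temporary tablespace');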

Regular Function
create or replace function num_zeroes(
 p_n binary_integer
 ) return binary_integer deterministic is
 l_t binary_integer;
 l_b binary_integer;
begin
 --assume 32-bit hash value, 10-bits for bucket
 if (p_n = 0) then return 22; end if;

 l_t := 1;
 l_b := 0;

 while ( bitand(p_n,l_t) = 0 )
 loop
  l_t := l_t*2;
  l_b := l_b+1;
 end loop;

  return l_b;

end num_zeroes;

SQL> select
  2    case
  3      when hll <= 2560 and zeroes > 0 then round(1024*ln(1024*1/zeroes))
  4      when hll >  1/30 * power(2,32) then round(-power(2,32) * ln(1 - hll/power(2,32)))
  5      else round(hll)
  6    end num_distinct
  7    from (
  8      select 0.72054 * (1048576/(current_sum+zeroes)) hll, zeroes
  9        from (
 10          select sum(1/power(2, val)) current_sum, (1024-count(*)) zeroes
 11            from (
 12              select /*+ parallel(z 4) */ mod(ora_hash(n), 1024) bucket,
 13                  max(num_zeroes(trunc(ora_hash(n)/1024)))+1 val
 14                from z_hll_test z
 15                group by mod(ora_hash(n), 1024)
 16            )
 17        )
 18  );
 
NUM_DISTINCT
------------
      748175
 
Executed in 0.889 seconds
Pragma UDF Function
create or replace function num_zeroes(
 p_n binary_integer
 ) return binary_integer deterministic is
 pragma udf;
 l_t binary_integer;
 l_b binary_integer;
begin
 --assume 32-bit hash value, 10-bits for bucket
 if (p_n = 0) then return 22; end if;

 l_t := 1;
 l_b := 0;

 while ( bitand(p_n,l_t) = 0 )
 loop
  l_t := l_t*2;
  l_b := l_b+1;
 end loop;

  return l_b;

end num_zeroes;

SQL> select
  2    case
  3      when hll <= 2560 and zeroes > 0 then round(1024*ln(1024*1/zeroes))
  4      when hll >  1/30 * power(2,32) then round(-power(2,32) * ln(1 - hll/power(2,32)))
  5      else round(hll)
  6    end num_distinct
  7    from (
  8      select 0.72054 * (1048576/(current_sum+zeroes)) hll, zeroes
  9        from (
 10          select sum(1/power(2, val)) current_sum, (1024-count(*)) zeroes
 11            from (
 12              select /*+ parallel(z 4) */ mod(ora_hash(n), 1024) bucket,
 13                  max(num_zeroes(trunc(ora_hash(n)/1024)))+1 val
 14                from z_hll_test z
 15                group by mod(ora_hash(n), 1024)
 16            )
 17        )
 18  );
 
NUM_DISTINCT
------------
      748175
 
Executed in 0.593 seconds
Pragma UDF gives us a ~33% performance boost, which is not too bad considering we didn't have to do anything else. However, that's not the most interesting part -- let's take a look at the native distinct next.

Native Distinct
SQL> select /*+ parallel(z 4) */ count(distinct n) from z_hll_test z;
 
COUNT(DISTINCTN)
----------------
          753204
 
Executed in 0.983 seconds

Note that this was an optimal execution!

Summary

Let's summarize results in a table:
Regular Function   Pragma UDF Function   Native Distinct
0.889 s            0.593 s               0.983 s

As you can see, pragma udf actually beats the native implementation by a considerable margin, which is very impressive given the fact that the native distinct had an optimal execution.

Tuesday, November 26, 2013

Result Cache Latch and PDBs

One interesting aspect of the Oracle 12cR1 database when it comes to PDBs is how latching is done. For example, if all PDBs have to work under the same latch, then contention in one PDB can easily affect users in other PDBs too.

Continuing my series of posts on the Result Cache latch, today I'll check what happens when you try to acquire the RC latch from two different PDBs.

Session 1

The first session is connected under container name TEST:
SQL> select sys_context('userenv', 'con_name') con_name from dual;

CON_NAME
--------------------
TEST

SQL> select addr from v$latch where name='Result Cache: RC Latch';

ADDR
----------------
0000000060041C78
Session 2

The second session is connected under container name TEST2:
SQL> select sys_context('userenv', 'con_name') con_name from dual;

CON_NAME
--------------------
TEST2

SQL> select addr from v$latch where name='Result Cache: RC Latch';

ADDR
----------------
0000000060041C78
As you can see, the address of the latch is exactly the same under both containers, so both PDBs appear to share exactly the same latch. Let's confirm it by trying to acquire the latch in exclusive mode in both sessions:

Session 1
SQL> oradebug setmypid
Statement processed.
SQL> oradebug call kslgetsl_w 0x0000000060041C78 1 1 1 16
Function returned 1
Session 2
SQL> oradebug setmypid
Statement processed.
SQL> oradebug call kslgetsl_w 0x0000000060041C78 1 1 1 16
...session hangs...
...and we can confirm that it waits on the RC latch:
SQL> select event, p1, to_char(p1, 'fmxxxxxxxxxxx') addr, state, seconds_in_wait
  2   from v$session_wait
  3   where sid=18;
 
EVENT              P1 ADDR         STATE               SECONDS_IN_WAIT
---------- ---------- ------------ ------------------- ---------------
latch free 1610882168 60041c78     WAITING                          25
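We can also check who is holding the latch (a sketch against v$latchholder; the holder should point back to the first session's process):

select pid, sid, laddr, name
 from v$latchholder
 where name='Result Cache: RC Latch';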
The bottom line is that the single RC latch appears to be shared by all PDBs.

Monday, November 18, 2013

How to use HyperLogLog to incrementally maintain number of distinct values

In this post I'll show how extremely easy it is to incrementally maintain the number of distinct values when using HyperLogLog. Please refer to my previous post for a description of how HyperLogLog works.

Let's assume we have a table with some existing data:
SQL> create table existing_data as
  2   select round(dbms_random.value(0, 77777)) n
  3    from dual
  4    connect by level <= 100000;
 
Table created
Precise number of distinct values:
SQL> select count(distinct n) from existing_data;
 
COUNT(DISTINCTN)
----------------
           56192
Now, in order for the incremental refresh to work, we first need to create a HyperLogLog synopsis:
SQL> create table existing_hll as
  2   select mod(ora_hash(n), 1024) bucket,
  3     max(num_zeroes(trunc(ora_hash(n)/1024)))+1 val
  4    from existing_data
  5    group by mod(ora_hash(n), 1024);
 
Table created
The table is extremely small as it contains only 1024 rows, yet that is enough to describe a data set with billions of rows in it. Of course, we can now use that synopsis to estimate the number of distinct values using HyperLogLog:
SQL> select
  2    case
  3     when hll <= 2560 and zeroes > 0 then round(1024*ln(1024*1/zeroes))
  4     when hll >  1/30 * power(2,32) then round(-power(2,32) * ln(1 - hll/power(2,32)))
  5     else round(hll)
  6   end num_distinct
  7   from (
  8    select 0.72054 * (1048576/(current_sum+zeroes)) hll, zeroes
  9     from (
 10      select sum(1/power(2, val)) current_sum, (1024-count(*)) zeroes
 11       from existing_hll
 12       )
 13   );
 
NUM_DISTINCT
------------
       57676
Again, the result is within about 3% of the precise distinct count, which is pretty good considering how little information we had to store about the entire data set.
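If you plan to run the estimate repeatedly, one option is to wrap the query above into a view over the synopsis table (just a convenience sketch; the view name is made up):

create or replace view existing_hll_ndv as
select
  case
   when hll <= 2560 and zeroes > 0 then round(1024*ln(1024*1/zeroes))
   when hll >  1/30 * power(2,32) then round(-power(2,32) * ln(1 - hll/power(2,32)))
   else round(hll)
 end num_distinct
 from (
  select 0.72054 * (1048576/(current_sum+zeroes)) hll, zeroes
   from (
    select sum(1/power(2, val)) current_sum, (1024-count(*)) zeroes
     from existing_hll
     )
 );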

Now let's say there is some new data you want to add into the existing table:
SQL> create table new_data as
  2   select round(dbms_random.value(5555, 111111)) n
  3    from dual
  4    connect by level <= 10000;
 
Table created
So what we're going to do is calculate a HyperLogLog synopsis for this new data and then merge it with the HyperLogLog synopsis for the existing data:
SQL> merge into existing_hll e
  2   using (
  3    select mod(ora_hash(n), 1024) bucket,
  4      max(num_zeroes(trunc(ora_hash(n)/1024)))+1 val
  5     from new_data
  6     group by mod(ora_hash(n), 1024)
  7   ) n on (e.bucket=n.bucket)
  8   when matched then update set e.val=greatest(e.val, n.val)
  9   when not matched then insert values (n.bucket, n.val);

1024 rows merged.

SQL> commit;

Commit complete.
Of course, the above is a lot more efficient than what you would have to do otherwise, i.e. recalculate the number of distinct values from scratch for the entire data set. Once the synopsis has been refreshed we can estimate the new number of distinct values:
SQL> select
  2    case
  3     when hll <= 2560 and zeroes > 0 then round(1024*ln(1024*1/zeroes))
  4     when hll >  1/30 * power(2,32) then round(-power(2,32) * ln(1 - hll/power(2,32)))
  5     else round(hll)
  6   end num_distinct
  7   from (
  8    select 0.72054 * (1048576/(current_sum+zeroes)) hll, zeroes
  9     from (
 10      select sum(1/power(2, val)) current_sum, (1024-count(*)) zeroes
 11       from existing_hll
 12       )
 13   );
 
NUM_DISTINCT
------------
       62288
And it's no different than if I had computed the HyperLogLog estimate for both data sets from scratch:
SQL> select
  2    case
  3     when hll <= 2560 and zeroes > 0 then round(1024*ln(1024*1/zeroes))
  4     when hll >  1/30 * power(2,32) then round(-power(2,32) * ln(1 - hll/power(2,32)))
  5     else round(hll)
  6   end num_distinct
  7   from (
  8    select 0.72054 * (1048576/(current_sum+zeroes)) hll, zeroes
  9     from (
 10      select sum(1/power(2, val)) current_sum, (1024-count(*)) zeroes
 11       from (
 12        select mod(ora_hash(n), 1024) bucket,
 13          max(num_zeroes(trunc(ora_hash(n)/1024)))+1 val
 14         from (
 15          select n from existing_data
 16          union all
 17          select n from new_data
 18         ) group by mod(ora_hash(n), 1024)
 19       )
 20       )
 21   );
 
NUM_DISTINCT
------------
       62288
And the precise distinct count:
SQL> select count(distinct n) from
  2   (
  3   select * from existing_data
  4   union all
  5   select * from new_data
  6   );
 
COUNT(DISTINCTN)
----------------
           60983
I think the ability to incrementally refresh the synopsis is the most powerful aspect of HyperLogLog.