This Is Really Not a Good Oracle Application (but it does work)

I ran into an interesting problem some time ago that I only recently decided to write about. It is a great example of an Oracle application that should not use Oracle at all, and it makes clear that too many programmers these days don’t know much about anything outside of their favorite computer language, which is almost always Java. On the other hand, it also shows that Oracle can be made to work in applications it is really unsuited for, if you throw enough hardware at the problem.

So, here is the application: a large company is taking in hundreds of thousands of transactions from many different sources at the same time. The application is not actually auditing the transactions; all it does is count them every minute and display a bar graph on a dashboard, along with some other statistics. In other words, it’s a very basic dashboard application.

The process is this: every transaction that comes in is written to an Oracle table with some basic information (none of it personally identifiable): the time, type, and location of the transaction, and how long the transaction took to run. At the end of each minute, a query runs against the table (qualifying on the minute) to count the transactions and display that information on the dashboard, and some basic statistics are calculated. The minute-by-minute transaction data is kept for six days, then deleted.

Obviously, this really isn’t a good use of Oracle. It would make much more sense to count the data as it comes in, send the totals to the dashboard at the end of each minute, and write only the summary information to the Oracle database. Writing individual transactions to the database for no reason other than displaying counts on a dashboard is pointless: it generates thousands more writes per minute than are really needed, plus one more read per minute. The entire operation could be done much more efficiently in a traditional programming language such as FORTRAN or C, or even Pascal or Ada; I suspect even a good Perl scripter could produce something that accomplishes the job far more effectively. All that is really needed is to count the required information as it comes in, add up the totals, write the totals to the database, send the summary to the dashboard, and throw the detail away. For this type of application, there is no good reason at all to store the individual transactions in the Oracle database, with all the overhead of maintaining read and write consistency. From the start, this looked like a classic circular queue design that could be handled very well by a third generation language, with much less overhead, as sketched below.
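To make that concrete, here is a minimal sketch of what the database traffic could have looked like. The table and column names here are my own invention, not anything from the actual application: a summary table that receives one row per application per minute, written from totals accumulated in memory, instead of one row per transaction:

Create table dbakerber.tranlog_summary
(summary_minute timestamp not null,
application_id number(2) not null,
tran_count number(9),
avg_elapsed_mils number(9,2)
) tablespace tranlog;

-- One insert per application per minute, from counters kept in memory:
insert into dbakerber.tranlog_summary
(summary_minute, application_id, tran_count, avg_elapsed_mils)
values (:minute_start, :app_id, :tran_count, :avg_elapsed);

That is the whole write workload: a handful of rows per minute instead of thousands per second, and the detail data never touches the database at all.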

What was produced instead was a purely database application, written in Java, that leverages none of the inherent advantages of a third generation language, and as a consequence requires a server of tremendous power to handle the load: 256G of RAM and 128 CPUs (I am not sure how that breaks out into physical cores).

Originally, the data was to be stored in a standard, un-partitioned Oracle table, with a regularly scheduled process to delete the old data. Needless to say, this resulted in a continuously growing table, large gaps in the indexes, and steadily deteriorating performance. Eventually, the design team got hold of a DBA to figure out what was going on with their performance. They should have had one involved from the start, instead of bringing one in at the end, a week before they planned to move to production.

Anyway, after some thought and discussion, it was pretty clear that their table design was about the worst possible for this sort of application (and it was too late to get the data out of the Oracle database entirely), so I designed what I hoped would get the best possible performance from it: a classic circular queue built on Oracle partitioning, with day of week and minute of day as the partitioning keys.

The DDL statement is below (column names and such have been changed, and some columns are left off; I also left off the create index statement on the TIME_IN column):
Create table dbakerber.tranlog_table
(time_in timestamp default systimestamp not null,
application_id number(2) not null,
transaction_type_id number(2),
transaction_id varchar2(250),
transaction_start_time timestamp with time zone,
elapsed_mils number(6),
day_of_week number(2) generated always as (to_number(to_char(time_in,'d'))),
min_of_day number(5) generated always as (60*extract(hour from time_in)+extract(minute from time_in))
) tablespace tranlog
partition by list (day_of_week)
subpartition by range (min_of_day)
(partition part_sunday values (1)
(subpartition partsun_1 values less than (1) tablespace tranlog_part_sun,
subpartition partsun_2 values less than (2) tablespace tranlog_part_sun,
subpartition partsun_3 values less than (3) tablespace tranlog_part_sun,
subpartition partsun_4 values less than (4) tablespace tranlog_part_sun,
subpartition partsun_5 values less than (5) tablespace tranlog_part_sun,
...
subpartition partsun_1440 values less than (1440) tablespace tranlog_part_sun,
subpartition partsunbad_values values less than (maxvalue) tablespace tranlog_part_sun
),
partition part_monday values (2)
(subpartition partmon_1 values less than (1) tablespace tranlog_part_mon,
subpartition partmon_2 values less than (2) tablespace tranlog_part_mon,
...)

The remainder of the create table statement is left off for the sake of space.
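The payoff of this layout is that the dashboard query only ever has to touch a single subpartition. Something along these lines (my illustration, not the application’s actual code) lets the optimizer prune straight down to the one minute of data being counted:

-- Count and average elapsed time for one completed minute; partition
-- pruning on day_of_week and min_of_day means only one subpartition is read.
select count(*), avg(elapsed_mils)
from dbakerber.tranlog_table
where day_of_week = :dow
and min_of_day = :min_of_day;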

We then wrote a stored procedure for truncating the partitions:

CREATE OR REPLACE
procedure del_old_partitions(trunc_date in date default sysdate-6)
is
  comline varchar2(400);
  part_name varchar2(30);
  part_day varchar2(30);
begin
  dbms_output.enable(1000);
  -- Build the partition name from the day of week, e.g. PART_SUNDAY.
  -- rtrim is needed because to_char(date,'DAY') blank-pads the day name.
  part_day:=rtrim(to_char(trunc_date,'DAY'));
  part_name:='PART_'||part_day;
  comline:='alter table dbakerber.tranlog_table truncate partition '||part_name||' update indexes';
--  dbms_output.put_line(comline);
  execute immediate comline;
exception
  when others then
    raise;  -- re-raise so the error surfaces in the calling script's log
end;
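On its own, the procedure could simply be scheduled inside the database. A sketch with DBMS_SCHEDULER (the job name and timing here are my own choices, not what we actually deployed) would look like this:

begin
  dbms_scheduler.create_job(
    job_name        => 'DBAKERBER.DEL_OLD_PARTITIONS_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'DBAKERBER.DEL_OLD_PARTITIONS',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=DAILY;BYHOUR=0;BYMINUTE=5',
    enabled         => TRUE);
end;
/

In practice, though, scheduling alone turned out not to be enough.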

We ran into an application problem whereby, for some reason, the application was not killing off its old processes, leaving locks on the partitions when it came time to truncate them. So we wrote a shell script to kill the locking processes prior to the partition truncation, and this is what we ended up with:

#!/bin/ksh
#
#############################################################################
export ORAENV_ASK=NO
export ORACLE_SID=ORASID1
export ORACLE_HOME=/oracle/product/11.2.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH

sqlplus "/ as sysdba" <<EOF
set verify off echo off pages 0 lines 300
set termout off feedback off heading off
spool /tmp/killprocs.ksh
select
'#!/bin/ksh'
from dual;
SELECT distinct
  'ssh '||i.host_name||' "kill -9 '||p.spid||'"'
FROM gv\$locked_object v,
     dba_objects d,
     gv\$lock l,
     gv\$session s,
     gv\$process p,
     gv\$instance i
WHERE v.object_id = d.object_id
AND v.object_id = l.id1
AND v.session_id = s.sid
AND v.inst_id = s.inst_id
AND l.sid = s.sid
AND l.inst_id = s.inst_id
AND s.paddr = p.addr
AND s.inst_id = p.inst_id
AND d.owner = 'DBAKERBER'
AND d.object_name = 'TRANLOG_TABLE'
AND s.logon_time < sysdate-1
AND d.object_type = 'TABLE PARTITION'
AND s.inst_id = i.inst_id;
exit;
EOF
chmod +x /tmp/killprocs.ksh
sh /tmp/killprocs.ksh
 
sqlplus "/ as sysdba" <128,cascade=>TRUE);
exit
EOF

wcount=`grep ORA- /tmp/clean_old_partitions.log | wc -l`
if [ $wcount -gt 0 ]; then
  mailx -s 'Error in **PROD*** Partition Cleanup Script' myemailaddress < /tmp/clean_old_partitions.log
fi

So, we finished up with this application, and the performance is quite acceptable. Some interesting notes, however. This runs on a two node cluster, with Oracle Data Guard to another two node cluster at a remote site. The redo logs are sized at 1G, and even at that size we are switching logs about every 1.5 minutes. So, think about this: they don’t need to keep this data, they never update it, it is only used to update a dashboard, and it’s thrown away after a week. But they are using four nodes, each with 256G of RAM and 128 CPUs, to keep up with the data, and the SAN behind it has to be pretty good also. Considering the amount of network traffic involved, they probably need some high quality network hardware as well.
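To put the redo volume in perspective: a 1G log switch roughly every 90 seconds works out to something like 11MB of redo per second, around the clock, all of it generated to record rows that exist only to be counted once and thrown away.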

After some thought, I strongly suspect that if this were written in a third generation language using flat files, this entire process could be run on four commodity nodes at about ten percent of the cost, and that assumes they really do need a primary and a standby cluster for a dashboard.

It has been a while since I did anything in Pascal or Ada, but it might be an interesting project.

One Response to “This Is Really Not a Good Oracle Application (but it does work)”

  1. Marko Sutic (@MarkoSutic) Says:

Interesting case, Andrew. I cannot even imagine how much it all costs.

It will be interesting when the managers realize that all that work could be done at ten percent of the cost.

    Thanks for sharing!

    Regards,
    Marko
