DBA Security Frustration

February 3, 2016

One of the most common frustrations for the Oracle DBA is dealing with security.  There are many common DBA tasks that require root or administrator access, and acquiring that access can be a frustrating and time-consuming process, especially in larger organizations.

In the world of virtualization (sometimes called the private cloud), those security concerns are much easier to handle, even though the security teams in most large organizations have not caught up with that fact.  Even VMware, the primary provider of the private cloud, seems to be unaware that a major side effect of its product is a real reduction in security concerns.

A connection to a Virtual Machine (VM) looks exactly like a connection to a physical machine, whether Unix or Windows based.  And when a user is logged into a VM, as with a physical machine, the user can only see and access what is on that particular VM.  This seems obvious, so what is my point, right? 

My point is this:  virtualization allows an organization to create a VM for each individual application within the organization.  The native networking components of virtualization products allow for even more isolation.  Typically, in a large organization, the intent of security is to prevent an individual from accessing data outside his or her particular application area.  For example, the DBA in charge of the medical records database should not be able to access a financial database.

Virtualization allows the database for health care records to reside on one VM, while financial data can reside on another VM, even on another network, without increasing the Oracle licensing costs. 

The use of VMs allows the DBA in charge of the health care records database to be restricted to the VM or VMs on which that data resides.  He or she can be granted full root and Oracle access on that server, without any risk of the DBA being able to access the financial database, which he is not authorized to see.

In order to access the financial data, the health care records DBA would need to acquire a completely different set of passwords and access, which is easy to restrict at the server level.  This means that the security team can grant the DBA full access to every server that his or her data is on, without risking access to data to which he is not entitled. Although the same strategy could be used in the physical world, the cost would soon become prohibitive and extremely inefficient in terms of hardware resources.

Of course this does not alleviate all security concerns. There is always the possibility of a fully compromised server. This is most often the result of an external attack rather than an internal one. However, a fully compromised server inside the company firewall can be a nightmare. And as the number of servers increases, whether physical or virtual, the chance of a compromised server increases. On the other hand, the chance that a virtual server becomes fully compromised is roughly the same as that of a physical server, regardless of the role-based security being used.

In addition, the risks of a compromised server can be minimized by the appropriate use of physical and virtual firewall appliances, and appropriate rules to segregate servers by function. VMware NSX is specifically designed for this sort of network micro-segmentation and isolation, and is the logical method for achieving the required network isolation within a VMware environment.

The use of virtualization as a security method in and of itself is a completely new paradigm for the typical security team.  Normally, separation of duties would require that the DBA either have no root access at all, or be restricted to specific commands, in order to ensure that they cannot access information to which they are not entitled.  But with the appropriate use of VMs, the DBA can have full and unfettered access to the server with their data, without putting any other data at risk.

This does not absolve the DBA of basic responsibilities, of course.  When installing Oracle patches that require root access, they must still take the basic precautions: backing up the database, snapshotting the operating system drive and perhaps the Oracle binaries installation, and taking any other precautions that are required.  The DBA should also take the time to learn some operating system basics, and perhaps chat with a system administrator if they are not sure of what they are doing.  At the same time, this high level of access should allow the DBA to get the job done more quickly, and with much less frustration.

In summary, VMware and other suppliers for the private cloud have unintentionally provided an easy and effective method of improving organizational security, with very little additional effort required.

Installing the Oracle Database (Basic Install)

December 16, 2015

In this post, I am going to cover the simplest of Oracle software installations: a single-instance Oracle installation on a file system. The first step is to download the Oracle software installation files. The Oracle database software can be downloaded from here: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html. Or, if you have an Oracle CSI and appropriate licensing, you can download the latest patchset from My Oracle Support (MOS).

For this installation, we are using a file system, so we need the first two files, linuxamd64_12c_database_1of2.zip and linuxamd64_12c_database_2of2.zip. Files 3 and 4 are required if you plan to use ASM, or are installing Oracle Grid Infrastructure (GI).

1. Download the files to your database server. I will be using a virtual server that was set up using the methodology described here: https://dbakerber.wordpress.com/2015/09/23/installing-oracle-software-part-1-prerequisites/.

2. Unzip the database files. This will create a directory named ‘database’ under your current directory.
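
A minimal sketch, run as the oracle user (/u01/stage is a hypothetical staging directory; use wherever you downloaded the files):

$ cd /u01/stage
$ unzip linuxamd64_12c_database_1of2.zip
$ unzip linuxamd64_12c_database_2of2.zip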

3. Start the installer:

cd database

./runInstaller

Uncheck the box about security updates; if you are already signed up, this will just add to your volume of emails, which you probably don’t want.

Then assure Oracle that you really do know what you are doing.

On the next screen, choose what you consider appropriate. For this example, we will go ahead and create and configure a starter database.

Next, choose server class, even if you are installing on a desktop-class VM. The server class option gives more flexibility in installation options.

On the next screen choose advanced install. Once again, this gives us more installation options.

Next choose the appropriate language. This is a US English blog, so that is what we choose.

Next, choose the appropriate edition. The particular download we are using only has Enterprise Edition, but a full download from otn.oracle.com will have both Enterprise and Standard Edition. For details about the advantages of each, I recommend a contact other than Oracle Corp., which will of course recommend the most costly option, Enterprise Edition. One possible source of information is the consulting firm House of Brick Technologies (http://houseofbrick.com/), though as a consulting firm they do charge for their time. They can discuss the advantages of the various Oracle editions, as well as virtualization of your database load.

I normally install in the default directory structure, /u01. Obviously this can change with your preference and company policies. But for this example, we will take the default file system.

On the next screen, also take the default.

Next, choose your database type. This screen will not appear if you did not choose to create a database earlier.

We are not going to choose the default Oracle database name, since well-known default names are one of those items that can lead to security holes.

We are going to name our instance dbatest. We will not create it as a container database; you can change this later if you want.

On the next screen, you choose the memory footprint. Note that by default, this installation will use Automatic Shared Memory Management (ASMM), not Automatic Memory Management (AMM). There is substantial ongoing debate among DBAs about which method is better, but if you may at some point want to use hugepages, you must use ASMM. ASMM also gives a little more control over memory usage, and most DBAs prefer it.

Click on the character set tab to choose the character set. I generally prefer to use the Unicode character set, AL32UTF8.

Since this is a training exercise, choose the install sample schemas option on the sample schemas tab.

Next, choose the location of the Oracle database files. I normally use ASM, but for this example we are using a file system, so the default file location is fine. Change it if you require.

On the next screen, if you choose to register the database with OEM Cloud Control, check the box. If you don’t want to register, or have no idea what it is, leave the box unchecked.

On the next screen, enable recovery if you choose. I am going to go ahead and enable recovery and choose a location, since I may want to use this instance when I discuss recovery in a later blog. Generally, however, we don’t bother with recovery on test instances.

On the password screen, it is generally easiest to choose a single password for all schemas, then change them individually per your company’s standards.

Use the default groups. Changing these can sometimes cause problems.

The installer will next verify that the installation requirements have been met, and come up with a summary screen.

I am going to save a response file here, because I plan to do another blog topic on silent installation at a later date.
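
As a preview, a saved response file can later drive a non-interactive installation with the standard -silent and -responseFile installer options (the path here is hypothetical):

$ ./runInstaller -silent -responseFile /u01/stage/db_install.rsp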

After clicking ‘install’, the installation commences.

After installation is complete, the prompt to run the root scripts will come up. If you have the appropriate access, run these scripts as the root user; otherwise you will need to have a system administrator or other person with root access run them.
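
With the default /u01 directory structure used in this example, the prompts typically point at scripts like the following (run as root; take the exact paths from the installer dialog, as they vary by inventory location and Oracle home):

# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/oracle/product/12.1.0/dbhome_1/root.sh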

After running the root scripts, click continue, and then the database configuration assistant will start running to create the database that you defined earlier.

After software installation and database creation are complete, a screen will pop up with the database information, including the EM Database Express URL.

Save the URL for EM Database Express, and click OK.

Then click on close.

At this point, your database is up and ready to use.
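
A quick sanity check from the server (assuming the oracle user’s environment is set up; dbatest is the instance name we chose earlier):

$ export ORACLE_SID=dbatest
$ sqlplus / as sysdba

SQL> select instance_name, status from v$instance;

This should report the dbatest instance with a status of OPEN.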

For an easy-to-use web-based management front end, Oracle supplies EM Database Express, whose URL you copied earlier: https://oel64node.localdomain:5500/em in my example.

Navigate to the database express URL using your web browser of choice.

If you are new to Oracle, I suggest that you explore this page to familiarize yourself with the Oracle database.

This, then, is the easiest and most basic method of installing the Oracle database.

Installing Oracle Software, Part 1 – Prerequisites

September 23, 2015

Foreword

The intent of this series of posts is to cover basic Oracle skills for the beginning DBA. These are follow-on posts to my previous article titled ‘Eleven Critical Skills for the Beginning DBA’  https://dbakerber.wordpress.com/2015/09/14/eleven-critical-skills-for-the-beginning-dba/.

I debated with myself for some time on how detailed I need to be, and at what point in the installation process I should begin this article. After some consideration, I have decided to start this entry with the installation of Oracle required packages.

This article assumes that the Oracle user has been created per the Oracle installation documentation, that transparent hugepages have been disabled, and that the required kernel setting changes have been made for an Oracle installation.

I considered it unlikely that the beginning DBA would be entrusted with those three preliminary tasks in a typical working environment.

Admittedly, the beginning DBA may well not be entrusted with installing packages either, but that is the point I chose as the start of this article.

Step 1 – Install the Oracle Required Packages

Installing the Oracle required packages is straightforward, and in fact remarkably easy if you are on Oracle Enterprise Linux (OEL). First of all, your database server must be configured with repository access for its Linux distribution. Without repository access, none of the yum commands below will work.

Next, if you are on Oracle Enterprise Linux, install this package: oracle-rdbms-server-12cR1-preinstall. Or, if you are installing 11g, install this package: oracle-rdbms-server-11gR2-preinstall. The command is simply (as root):

# yum install oracle-rdbms-server-12cR1-preinstall

or

# yum install oracle-rdbms-server-11gR2-preinstall

This will install all the Oracle required packages, configure the Oracle user, and set certain OS settings. This is the easiest way to do it. If you are using Oracle Enterprise Linux, you can skip to Part 2: Setting up X-Windows on Your Desktop.
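
If you want to spot-check what the preinstall package did, a few quick checks (as root; the exact kernel parameters it sets vary by package version):

# rpm -q oracle-rdbms-server-12cR1-preinstall
# id oracle
# sysctl -n kernel.shmmax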

If you are not using Oracle Enterprise Linux, you will need to install the individual packages by themselves, and configure the operating system as described in the remainder of this article.

The current list of required packages for 12c is below (note: tuned and xclock are not required, but are highly recommended):

binutils.x86_64
compat-db43.i686
compat-glibc-headers.x86_64
compat-glibc.x86_64
compat-libcap1.x86_64
compat-libstdc++-33.i686
compat-libstdc++-33.x86_64
elfutils-libelf-devel.x86_64
gcc-c++.x86_64
gcc.x86_64
gdbm.i686
gdbm.x86_64
glibc-common.x86_64
glibc-devel.i686
glibc-devel.x86_64
glibc-headers.x86_64
glibc.i686
glibc.x86_64
kernel-headers
ksh.x86_64
libaio-devel.i686
libaio-devel.x86_64
libaio.i686
libaio.x86_64
libgcc.i686
libgcc.x86_64
libgomp.x86_64
libstdc++-devel.i686
libstdc++-devel.x86_64
libstdc++.i686
libstdc++.x86_64
libXi.i686
libXp.i686
libXp.x86_64
libXtst
make.x86_64
mksh.x86_64
ncompress
nfs-utils
openmotif21.i686
portmap
sysstat.x86_64
unixODBC-devel.i686
unixODBC-devel.x86_64
unixODBC.i686
unixODBC.x86_64
unzip.x86_64
util-linux-ng.x86_64
xauth
xorg-x11-libs-compat.i386
xorg-x11-utils.x86_64
tuned
xclock

In order to install all of these packages quickly (as the root user), just build a shell script with the following lines and run it:

yum -y install binutils.x86_64
yum -y install compat-db43.i686
yum -y install compat-glibc-headers.x86_64
yum -y install compat-glibc.x86_64
yum -y install compat-libcap1.x86_64
yum -y install compat-libstdc++-33.i686
yum -y install compat-libstdc++-33.x86_64
yum -y install elfutils-libelf-devel.x86_64
yum -y install gcc-c++.x86_64
yum -y install gcc.x86_64
yum -y install gdbm.i686
yum -y install gdbm.x86_64
yum -y install glibc-common.x86_64
yum -y install glibc-devel.i686
yum -y install glibc-devel.x86_64
yum -y install glibc-headers.x86_64
yum -y install glibc.i686
yum -y install glibc.x86_64
yum -y install kernel-headers
yum -y install ksh.x86_64
yum -y install libaio-devel.i686
yum -y install libaio-devel.x86_64
yum -y install libaio.i686
yum -y install libaio.x86_64
yum -y install libgcc.i686
yum -y install libgcc.x86_64
yum -y install libgomp.x86_64
yum -y install libstdc++-devel.i686
yum -y install libstdc++-devel.x86_64
yum -y install libstdc++.i686
yum -y install libstdc++.x86_64
yum -y install libXi.i686
yum -y install libXp.i686
yum -y install libXp.x86_64
yum -y install libXtst
yum -y install make.x86_64
yum -y install mksh.x86_64
yum -y install ncompress
yum -y install nfs-utils
yum -y install openmotif21.i686
yum -y install portmap
yum -y install sysstat.x86_64
yum -y install unixODBC-devel.i686
yum -y install unixODBC-devel.x86_64
yum -y install unixODBC.i686
yum -y install unixODBC.x86_64
yum -y install unzip.x86_64
yum -y install util-linux-ng.x86_64
yum -y install xauth
yum -y install xorg-x11-libs-compat.i386
yum -y install xorg-x11-utils.x86_64
yum -y install tuned
yum -y install xclock
The above commands will install the required packages without prompting for further user input.

Part 2: Setting up X-Windows on Your Desktop

Thinking back to my early days as a DBA, I remember that getting the X-windows configuration figured out was one of the most irritating pieces of the process. After all, who uses X-windows anymore? The answer, of course, is Oracle. So here are the steps to set it up on the server and on your desktop.

First of all, make sure X-windows is installed on the Linux/Unix system. The easiest method for doing this is to check for the existence of the X11 packages on the system using the rpm command:

$ rpm -qa | grep X11
libX11-1.6.0-2.2.el6.x86_64
libX11-1.6.0-2.2.el6.i686
libX11-common-1.6.0-2.2.el6.noarch

If these packages do not show up, you will need to install X on the server. This requires root access, which may require discussion with the system administrator. The command to install X11 is this:

# yum install xorg-x11-apps

After identifying the package and all the required prerequisites, yum will prompt for an OK to install. Enter y, and the packages will be installed.

Step 2 – Installing an X-Server

Next, you will need an X-Server on your desktop. There are several available, the most common of which are Xming and Cygwin. Generally Xming has the smaller footprint, so it is what I recommend. Xming can be downloaded from here:

http://sourceforge.net/projects/xming/files/

Install the Xming version appropriate for your system onto your desktop, and also download and install the Xming fonts package.

Next, you will want SSH software that supports X11 forwarding. The most common such software is PuTTY, which can be downloaded from here:

http://www.putty.org/

Download the full PuTTY installer for your version of Windows, and install PuTTY using the default options.

Next, start up Xming.

Use the XLaunch icon under the Xming group and choose the defaults, but check the box for ‘No Access Control’. Later you can experiment with unchecking the box, but for now turn it on. Once XLaunch finishes, you will see an Xming icon in your running programs, which is what you want.

Next, start up PuTTY. Enter your hostname on the appropriate line.

Then on the left side, under Connection, click on SSH, then X11. In the X11 panel, check the ‘Enable X11 forwarding’ box. In the ‘X display location’ field, you can enter ‘localhost’, though that is not required.

Next, open your PuTTY session to the database server (click on ‘Open’ in the session window), and log in as the Oracle user, if possible.

Then, if you are logged in as the Oracle user, run this command:

$ xclock

At this point, a small clock should show up in a window showing the current time. This indicates that you have X-Windows properly configured on the Oracle server, and that your Xserver is running properly on your desktop, so you can run the Oracle installer.

If xclock does not start, run this command (on the server) and try again:

export DISPLAY=localhost:10.0

If you are not logged in as the Oracle user, but as yourself, do the following:

$ xauth list
myhost/unix:10 MIT-MAGIC-COOKIE-1 206e9b7808ad8cbf561dd6b2e42a29a7

Copy the above line (the one that begins with your host name); if the output has multiple lines, copy the last line to the clipboard. Then sudo or su to the Oracle user, whichever method is available to you.

After switching to the Oracle user, run this command (replacing everything after add with your output from the above xauth list command):

$ xauth add myhost/unix:10 MIT-MAGIC-COOKIE-1 206e9b7808ad8cbf561dd6b2e42a29a7

$ xclock

At this point, a small clock should show up in a window showing the current time.

Once again, if xclock does not start, run this command (on the server) and try again:

export DISPLAY=localhost:10.0

If the clock shows up, it indicates that you have X-Windows properly configured on the Oracle server, and that your Xserver is running properly on your desktop.

Now we are ready to install Oracle on Linux using the Oracle installer. Hit Ctrl-C to exit xclock.

For my next post, I will continue with the installation instructions using the Oracle GUI installer.

Eleven Critical Skills for the Beginning DBA

September 14, 2015

I have been in the Oracle database administration business since about the year 2000, so at this point I feel I am ready to write my first article ever on the basic skills for a beginning DBA.

These are eleven critical skills every beginning DBA should master quickly if she expects to continue in the business. Please note this is not intended to be comprehensive; no doubt there are other important skills. These are just eleven skills I see as very important.

  1. Install Oracle software.

This means install it using the GUI, install it using the silent install, install it on Windows, Linux, and AIX, and install it through OEM Cloud Control. There are several different methods that might be used to install the Oracle software, and you need to know all of them.

  2. Troubleshoot SQL Code.

You don’t need to be the SQL code guru; that’s for the senior DBA and senior developers. But you do need to be able to understand what the SQL is doing, and make suggestions on how to improve it. That does mean you have to be able to get an ‘explain plan’ and identify the problems.

  3. Back up a database

You should know how to back up a database through RMAN, and through at least one other method, whether it’s exp, expdp, or OS utilities. Every good DBA should have two good backups and something in his back pocket to recover data. A minimal sketch of two approaches is below.
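
A minimal sketch, assuming the database runs in archivelog mode and, for Data Pump, that the default DATA_PUMP_DIR directory object is usable:

$ rman target /
RMAN> backup database;

$ expdp system full=y directory=DATA_PUMP_DIR dumpfile=full_%U.dmp logfile=full_exp.log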

  4. Back up a table

You need to know how to back up a single table, just like you need to know how to back up a database. There are several different ways to do this; you should know at least three.

  5. Restore a database to current/alternate location

This is a critical skill. You must have a good backup, and you must be able to restore the database. When your hardware crashes, you as the DBA had better have a plan in place to restore your database. If you don’t, you should put a plan in place to get a new job.

  6. Restore a tablespace to an alternate location

People are fallible; they corrupt their data all the time. It will be your job as a DBA to restore that tablespace and reload those corrupted tables.

  7. Restore a table.

When you make a backup of a single table because you are making changes, you had better have a plan in place to restore it. There are several ways to do this; you need to know at least three.

  8. Create a user and grant appropriate privileges and quotas.

This is a basic skill, and not a hard one. But make sure you don’t default to giving everyone unlimited tablespace. Make sure you know how to grant users access to only the tablespaces they need, as in the sketch below.
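
A minimal sketch (the user name, password, and app_data tablespace are hypothetical):

SQL> create user appuser identified by "SomePassword#1"
  2  default tablespace app_data
  3  temporary tablespace temp
  4  quota 500m on app_data;

SQL> grant create session, create table to appuser;

Note the explicit quota on the one tablespace the user needs, rather than a blanket grant of unlimited tablespace.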

  9. Read an alert log.

Does that line in the alert log mean anything? I just had a log switch; is it switching logs too frequently? You need to be able to review the last hundred or so lines of the alert log and figure out right away whether you have a problem.

  10. Write a shell script.

This is probably the skill most beginning DBAs lack. You need to be able to write a shell script (most Oracle installations are on a Unix variant), put it in cron, and know how to send the output to the recipients who need to see it. Whether it is a report or an alert, you need this skill. A minimal sketch is below.
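
A minimal sketch (the alert log path, instance name, and recipient are hypothetical):

#!/bin/bash
# check_alert.sh - mail any ORA- errors found in the recent alert log
ALERT=/u01/app/oracle/diag/rdbms/dbatest/dbatest/trace/alert_dbatest.log
ERRS=/tmp/alert_errs.$$
if tail -1000 "$ALERT" | grep ORA- > "$ERRS"; then
    mail -s "ORA- errors on $(hostname)" dba-team@example.com < "$ERRS"
fi
rm -f "$ERRS"

And a crontab entry to run it hourly:

0 * * * * /home/oracle/scripts/check_alert.sh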

  11. Create a Data Guard standby

Failover/HA is big business these days, and Oracle Data Guard is a very common solution. Be familiar with it.

Those are the skills I see as most important. Some of these topics are discussed in my earlier blog entries. In the future, I will be covering each of these points in more detail, giving my viewpoints on each from the perspective of a DBA who has worked in consulting as well as in small and large companies.

Dave Welch at House of Brick Technologies Writes a Serious Takedown of a Gartner Article

July 13, 2015

I don’t normally discuss Oracle licensing on my blog, primarily because it is not an area I have spent a lot of time on. However, the licensing experts (note: they are not lawyers) over at House of Brick Technologies are very good at explaining licensing issues with Oracle on VMware.

In this paper, Dave Welch of House of Brick writes a fairly severe takedown of a Gartner article on Oracle licensing. And while I did not read the Gartner article ($195 seems a little pricey to me), Welch makes a point repeatedly that is good to remember when dealing with Oracle: Oracle statements, blogs, and other similar publications are not legally binding; only what is written in your contract or other supporting legal documents is legally binding. This is especially important when it comes to partitioning your Oracle installation.

Here is the article. http://houseofbrick.com/review-of-gartners-nov-2014-oracle-licensing-paper/

If you are looking for further information on Oracle licensing on VMware, there are several applicable articles on the House of Brick web site: http://houseofbrick.com/blog/

Running Jobs in Parallel using DBMS_SCHEDULER (part 2)

April 28, 2015

My second example of using DBMS_SCHEDULER to run jobs in parallel involves another common DBA task, rebuilding indices.

Under certain circumstances, deteriorating performance on a database can be traced to fragmented indices. In addition, direct path loads and certain other operations can invalidate indices, thus requiring them to be rebuilt.

In this example, indices that are invalid are identified, and jobs are submitted to rebuild them. Not only can the individual rebuild processes run in parallel, but multiple rebuild jobs can be run at the same time by using DBMS_SCHEDULER.

This example does not use a driver table, but rather looks at the Oracle data dictionary tables to identify candidates for rebuild. The changes required to use a driver table containing the indices to rebuild are fairly simple, and I leave that to the reader.

In this example, we create a program that calls a stored procedure, and then individual jobs that call the program.  This allows us to use the arguments of owner and index name to specify each index to be rebuilt.

Below is the driving script, which finds each index needing to be rebuilt and creates a job to rebuild it.  Typically we will rebuild two indexes at the same time, each with the degree of parallelism set to four.


CREATE OR REPLACE
procedure maint_enable_rebuild(num_simul in integer default 2,par_degree in integer default 4) authid current_user
is
owner varchar2(30);
index_name varchar2(30);
type index_record is record
( owner varchar2(30),
index_name varchar2(30));
type index_array_type is varray(10000) of index_record;
index_count integer :=0;
lcounter integer :=0;
sql_cmd varchar2(2000);
index_array index_array_type;
num_running integer :=0;
continue_next varchar2(5) :='FALSE';
exit_now varchar2(5) :='FALSE';
num_ran integer :=0;
num_not_ran integer :=0;
num_short_count integer := num_simul;
begin
begin
maint_drop_rebuild_jobs;
exception
when others then null;
end;
dbms_scheduler.create_program
(program_name=>'REBUILD_INDEX_PROGRAM',
program_type=>'STORED_PROCEDURE',
program_action=>'MAINT_RUN_REBUILD',
number_of_arguments=>3,
enabled=>false,
comments=>'Stored procedure to rebuild unusable indexes asynchronously'
);
dbms_scheduler.define_program_argument (
program_name=>'REBUILD_INDEX_PROGRAM',
argument_name=>'owner',
argument_position=>1,
argument_type=>'varchar2'
);
dbms_scheduler.define_program_argument (
program_name=>'REBUILD_INDEX_PROGRAM',
argument_name=>'index_name',
argument_position=>2,
argument_type=>'varchar2'
);
dbms_scheduler.define_program_argument (
program_name=>'REBUILD_INDEX_PROGRAM',
argument_name=>'par_degree',
argument_position=>3,
argument_type=>'integer',
default_value=>4
);
dbms_scheduler.enable('REBUILD_INDEX_PROGRAM');
dbms_output.enable(100000);
index_array:=index_array_type();
for icursor in
(select owner, index_name from all_indexes where status in ('INVALID','UNUSABLE') and index_type='NORMAL' and owner !='SYS')
--/* testing only */ (select owner, index_name from all_indexes where owner='SCOTT' and table_name like 'E%')
loop
if icursor.owner is not null
then
index_count:=index_count+1;
index_array.extend();
index_array(index_count).owner:=icursor.owner;
index_array(index_count).index_name:=icursor.index_name;
-- dbms_output.put_line(to_char(index_count)||' '||icursor.owner||' '||icursor.index_name);
end if;
end loop;
for lcounter in 1..index_count
-- for lcounter in 1..52
loop
while continue_next = 'FALSE'
loop
if num_short_count <= 0
then
dbms_lock.sleep(1); -- requires execute privilege on dbms_lock
select count(1) into num_running from all_scheduler_jobs where job_Name like 'REBUILD_UNUSUABLE_INDEXES%' and state='RUNNING';
num_short_count:=num_simul-num_running;
-- dbms_output.put_line(to_char(num_running));
end if;
if num_running <= num_simul
then
if trim(index_array(lcounter).owner) is not null
then
-- dbms_output.put_line(index_array(lcounter).owner||'.'||index_array(lcounter).index_name);
dbms_scheduler.create_job(
job_name=>'REBUILD_UNUSUABLE_INDEXES'||to_char(lcounter),
program_name=>'REBUILD_INDEX_PROGRAM',
comments=>'Unusable index asynch job',
start_date=>sysdate+1000,
enabled=>false,
auto_drop=>false);
dbms_scheduler.set_job_argument_value(
job_name=>'REBUILD_UNUSUABLE_INDEXES'||to_char(lcounter),
argument_position=>1,
argument_value=>index_array(lcounter).owner
);
dbms_scheduler.set_job_argument_value(
job_name=>'REBUILD_UNUSUABLE_INDEXES'||to_char(lcounter),
argument_position=>2,
argument_value=>index_array(lcounter).index_name
);
dbms_scheduler.set_job_argument_value(
job_name=>'REBUILD_UNUSUABLE_INDEXES'||to_char(lcounter),
argument_position=>3,
argument_value=>par_degree
);
dbms_scheduler.enable('REBUILD_UNUSUABLE_INDEXES'||to_char(lcounter));
dbms_scheduler.run_job(job_name=>'REBUILD_UNUSUABLE_INDEXES'||to_char(lcounter), use_current_session=>false);
end if;
num_short_count:=num_short_count-1;
continue_next:='TRUE';
else
dbms_lock.sleep(5);
continue_next:='FALSE';
end if;
end loop;
continue_next:='FALSE';
end loop;
while exit_now='FALSE'
loop
select count(1) into num_not_ran from all_scheduler_jobs where job_Name like 'REBUILD_UNUSUABLE_INDEXES%' and run_count=0;
if num_not_ran >0
then
dbms_lock.sleep(5);
else
exit_now:='TRUE';
select count(1) into num_ran from all_scheduler_jobs where job_Name like 'REBUILD_UNUSUABLE_INDEXES%' and run_count>0;
dbms_output.put_line('Total Indexes rebuilt: '||to_char(num_ran));
end if;
end loop;
end;
/

Note that in this example, we cannot create a job to run immediately, because we need to set the job arguments.  Because of this, the procedure DBMS_SCHEDULER.RUN_JOB is used to execute each job.  It may also be possible to use the DBMS_SCHEDULER.ENABLE procedure to execute them.
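
A typical invocation, assuming the procedures are installed in the current schema and the caller has execute privilege on dbms_lock (used for the sleep calls):

SQL> set serveroutput on
SQL> exec maint_enable_rebuild(num_simul=>2, par_degree=>4);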

At the start of the driving procedure, the procedure below is called to remove any old index rebuild jobs that may exist:


CREATE OR REPLACE
procedure maint_drop_rebuild_jobs  authid current_user
is
begin
for drop_job_cursor in
(select job_name from all_scheduler_jobs where job_name like 'REBUILD_UNUSUABLE_INDEXES%')
loop
begin
dbms_scheduler.drop_job(job_name=>drop_job_cursor.job_name,force=>true);
exception
when others then null;
end;
end loop;
begin
dbms_scheduler.drop_program('REBUILD_INDEX_PROGRAM');
exception
when others then null;
end;
end;
/

Finally, as you review the initial script, you will see the stored procedure that is called to rebuild the index. It is a very simple stored procedure:


CREATE OR REPLACE
procedure maint_run_rebuild(owner in varchar2, index_name in varchar2, par_degree in integer default 4) authid current_user is
sql_command varchar2(4000);
begin
sql_command:='alter index '||owner||'.'||index_name||' rebuild online parallel '||to_char(par_degree);
--  dbms_output.put_line(sql_command);
execute immediate sql_command;
end;
/

I definitely recommend keeping the number of indices rebuilt at any one time low until you have determined what kind of load your system can handle.  Needless to say, index rebuilds are highly I/O intensive, and an index rebuild will generate a lot of redo. The procedures in this blog do not use the nologging option, as we have a physical standby database configured. When a database has a physical standby, the force logging option is set, so any nologging options are ignored.
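
A quick way to confirm that force logging is in effect:

SQL> select force_logging from v$database;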

If you have many smaller indices to rebuild, set parallel to 1 or 2, and rebuild more indices at the same time.  If you have fewer, larger indices, rebuild fewer indices simultaneously at a higher degree of parallelism.  If you have many large indices to rebuild, well, just figure out what works best.

Running Jobs in Parallel Using DBMS_SCHEDULER (part 1)

April 24, 2015

DBMS_SCHEDULER is a fairly nice feature of the Oracle database that allows Oracle to schedule and run jobs in the background. This also allows the clever DBA to run multiple procedures at the same time.

The method I describe here can be used for most types of database processing. In the examples in this post and in part two, I will be demonstrating methods for running some standard database tasks in parallel: index rebuilds and manual statistics jobs.

We have run into certain very large tables in our database that require more attention than is typical when it comes to maintaining stats. Rather than spend the time to tweak the percentage required to mark the stats as stale, we determined that it is simpler to just add those tables requiring special attention to our special stats job, which runs weekly to gather statistics on those tables. Below is the layout of the driver table:

OWNER		VARCHAR2(60)
TABLE_NAME		VARCHAR2(60)
GROUP_NUMBER		NUMBER
PARALLEL_DEGREE		NUMBER
ESTIMATE_PERCENT		NUMBER
LAST_RUN_DATE		DATE
RUNNING_NOW		CHAR(1)
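
For reference, a minimal sketch of the DDL behind that layout (no constraints or defaults shown; add those per your own standards):

create table stats_groups (
  owner            varchar2(60),
  table_name       varchar2(60),
  group_number     number,
  parallel_degree  number,
  estimate_percent number,
  last_run_date    date,
  running_now      char(1)
);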

The intent of this design is to allow the prioritization of the tables by groups, with group one being the most important. Tables in group one are analyzed by themselves, with a high degree of parallelism, to ensure that these will always finish.  After completing all the tables in group one in real time, the code then proceeds through the other groups and analyzes the tables in those groups.

Group numbers larger than one allow the program to work through the individual groups until all are done. Right now we only have two groups, but the configuration allows for as many groups as desired. Next we have the driver program. This is a stored procedure with a single argument: the number of simultaneous detached jobs that can be run at once.

The procedure ANALYZE_BY_GROUP, below, is the driving procedure for this process.

CREATE OR REPLACE
procedure analyze_by_group(num_simul in number) authid current_user is
/* This Stored procedure is used to analyze only those tables whose statistics need special attention outside the normal
gather_stats_job. Do not place all tables in the stats_groups table, use this only for tables requiring special attention*/
num_running number;
job_exists number;
newstatid varchar2(30):=to_char(sysdate,'yymmddhh24mi');
begin
 dbms_output.enable(100000);
 for analyze_cursor in
 (select owner, table_name, parallel_degree, estimate_percent from stats_groups where group_number=1 and owner=(select user from dual))
 loop
 dbms_stats.export_table_stats(statown=>analyze_cursor.owner,stattab=>'SPECIAL_TAB_STATS',ownname=>analyze_cursor.owner,tabname=>analyze_cursor.table_name,statid=>'SPECIAL'||to_char(sysdate,'yyyymmddhh24mi'));
 if analyze_cursor.estimate_percent=0
 then
 update stats_groups set running_now='T' where owner=analyze_cursor.owner and table_name=analyze_cursor.table_name;
 commit;
-- dbms_output.put_line(analyze_cursor.owner||'.'||analyze_cursor.table_name);
 dbms_stats.gather_table_stats(ownname=>analyze_cursor.owner,tabname=>analyze_cursor.table_name,degree=>analyze_cursor.parallel_degree,
 estimate_percent=>dbms_stats.auto_sample_size,cascade=>true);
 else
 update stats_groups set running_now='T' where owner=analyze_cursor.owner and table_name=analyze_cursor.table_name;
 commit;
-- dbms_output.put_line(analyze_cursor.owner||'.'||analyze_cursor.table_name);
 dbms_stats.gather_table_stats(ownname=>analyze_cursor.owner,tabname=>analyze_cursor.table_name,degree=>analyze_cursor.parallel_degree,
 estimate_percent=>analyze_cursor.estimate_percent,cascade=>true);
 end if;
 update stats_groups set running_now='F', last_run_date=sysdate where owner=analyze_cursor.owner and table_name=analyze_cursor.table_name;
 commit;
 end loop;
 for analyze_cursor2 in
 (select distinct group_number from stats_groups where group_number >1 and owner=(select user from dual))
 loop
-- sleep(10);
 select count(1) into num_running from all_scheduler_jobs where job_name like 'ASYNC_ANALYZE%';
-- dbms_output.put_line(to_char(num_running));
 if (num_running <num_simul)
 then
-- dbms_output.put_line('submitting job '||to_Char(analyze_Cursor2.group_number));
 select count(1) into job_exists from all_scheduler_jobs where job_name='ASYNC_ANALYZE_'||to_char(analyze_cursor2.group_number);
 if job_exists >0
 then
 dbms_scheduler.drop_job(job_name=>'ASYNC_ANALYZE_'||to_char(analyze_cursor2.group_number),force=>true);
 end if;
 dbms_scheduler.create_job(
 job_name=>'ASYNC_ANALYZE_'||to_char(analyze_cursor2.group_number),
 job_type=>'PLSQL_BLOCK',
 job_action=>'BEGIN async_analyze_group('||analyze_cursor2.group_number||'); END;',
 comments=>'Asynch analyze job for tables requiring special attention. Argument is the group number from the stats_group table',
 auto_drop=>true, --set this to false for debugging purposes, job will remain on dba_scheduler_jobs view until manually removed
 enabled=>true);
 end if;
 end loop;
end;

Note that the code above simply executes the stored procedure ASYNC_ANALYZE_GROUP using DBMS_SCHEDULER, which allows us to run multiple detached jobs. Below is the ASYNC_ANALYZE_GROUP stored procedure:

CREATE OR REPLACE
procedure async_analyze_group(in_group_number in number) authid current_user
is
  newstatid varchar2(30):=to_char(sysdate,'yyyy-mm-dd hh24mi');
begin
  dbms_output.enable(100000);
  for analyze_cursor in
  (select owner, table_name, parallel_degree, estimate_percent from stats_groups where group_number=in_group_number and owner=(select user from dual))
  loop
    update stats_groups set running_now='T' where owner=analyze_cursor.owner and table_name=analyze_cursor.table_name;
    commit;
    dbms_stats.export_table_stats(statown=>analyze_cursor.owner,stattab=>'SPECIAL_TAB_STATS',ownname=>analyze_cursor.owner,tabname=>analyze_cursor.table_name,statid=>'SPECIAL'||to_char(sysdate,'yymmddhh24mi'));
    if analyze_cursor.estimate_percent=0
    then
      dbms_stats.gather_table_stats(ownname=>analyze_cursor.owner,tabname=>analyze_cursor.table_name,degree=>analyze_cursor.parallel_degree,
        estimate_percent=>dbms_stats.auto_sample_size,cascade=>true);
    dbms_output.put_line(analyze_cursor.owner||'.'||analyze_cursor.table_name||'    0');
    else
       dbms_stats.gather_table_stats(ownname=>analyze_cursor.owner,tabname=>analyze_cursor.table_name,degree=>analyze_cursor.parallel_degree,
        estimate_percent=>analyze_cursor.estimate_percent,cascade=>true);
    end if;
    update stats_groups set running_now='F',last_run_date=sysdate where owner=analyze_cursor.owner and table_name=analyze_cursor.table_name;
    commit;
  end loop;
  update stats_groups set running_now='F' where group_number=in_group_number;
  commit;
end;

The async analyze job is run asynchronously through the Oracle job scheduler.  Because it is asynchronous, we can start additional jobs to run at the same time. Note that the logic of this code divides the various tables into groups, and then gathers stats group by group.  Because of this layout, we can also put each table in its own group and then run multiple tables simultaneously.  Either method is legitimate, and the code was written to allow for both possibilities.
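
A typical invocation, assuming the procedures are installed in the current schema. Note that the SPECIAL_TAB_STATS statistics table referenced by the export calls must already exist:

SQL> exec dbms_stats.create_stat_table(ownname=>user, stattab=>'SPECIAL_TAB_STATS');
SQL> exec analyze_by_group(2);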

The procedures above are a fairly simplistic method for running large processes in parallel. Your comments are appreciated as always.

Update on Hugepages (rewrite to fix formatting issues)

March 11, 2015

Update on Hugepages Post

** UPDATED on Feb 14, 2014 **

As an update to the hugepages post, I have researched the requirements for enabling the equivalent of hugepages in AIX.

AIX uses the term largepages when referring to the closest equivalent of Linux hugepages. AIX large pages are 16 MB in size, and are implemented differently than Linux hugepages. In fact, my research showed that of all the operating systems I generally work with (Linux, Windows, AIX, and Sun SPARC), Linux is the most difficult platform on which to implement them.

Solaris will implement large pages automatically using ISM (intimate shared memory). Nothing special needs to be done.

Enabling Large Pages in AIX

In AIX, large pages are implemented as described below. In AIX and Solaris, as in Linux, large pages are not swappable, and the full amount is allocated when the instance starts:

  1. Calculate the total size of all SGAs you will need on the AIX server. This will be the SGA max size, since all memory will be allocated on startup. So if you have 3 instances, with SGA max sizes of 32G, 8G, and 4G, the total size is 44G. Add in a fudge factor of 2G to be safe, for a total of 46G.
  2. Divide the total SGA in bytes by 16777216 (the AIX large page size) to get the total number of large page segments and add 1:

    (46*1024*1024*1024)/(16777216)=2944+1=2945. So 2945 is the total number of large pages required.

  3. Use vmo to allocate the largepages. You will need to allocate them to take effect immediately, and to persist after reboot, so the vmo command is run twice:

    vmo -o lgpg_regions=2945 -o lgpg_size=16777216

    vmo -r -o lgpg_regions=2945 -o lgpg_size=16777216

  4. Grant the database installation owner (typically oracle) the required privileges to use large pages and to pin memory:
    # Show the current pinshm setting:
    vmo -o v_pinshm
    v_pinshm = 0
    # Set v_pinshm to 1 to allow shared memory to be pinned:
    vmo -o v_pinshm=1
    vmo -r -o v_pinshm=1 # persist the change across reboots

    chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle
    NOTE: after setting these, the oracle user will have to log out and back in for them to take effect.

  5. In the Oracle database, set sga_max_size and sga_target to the same value, set lock_sga=true, and restart the instance, e.g.:

    Alter system set sga_max_size=16g scope=spfile sid='*';
    Alter system set sga_target=16g scope=spfile sid='*';
    Alter system set lock_sga=true scope=spfile sid='*';
    Shutdown immediate;
    Startup;

To check large page usage, run the following command and look at the 16M row:

vmstat -P all

System configuration: mem=61440MB

pgsz           memory                         page
----- -------------------------- ------------------------------------
          siz      avm     fre   re  pi  po  fr   sr  cy
4K    5986176  1095680   61570    0   0   0  92  153   0
64K     35208    34938     270    0   0   0   0    0   0
16M      2241     2079     162    0   0   0   0    0   0

Addendum to Hugepages in Linux

Oracle has added a new ‘feature’ to OEL 6, called transparent hugepages. This is also in RHEL 6 and SLES 11 (SUSE Linux). Oracle note 1557478.1 says that transparent hugepages should be disabled on servers running the Oracle database. According to the note, the ‘feature’ can cause node reboots and performance problems. Transparent hugepages are allocated and deallocated like normal memory, so most likely the performance issues come from excessive CPU usage while managing the pages. In order to disable transparent hugepages, do the following:

  1. Verify that transparent hugepages are enabled.

    cat /sys/kernel/mm/transparent_hugepage/enabled

    [always] never

    If always is bracketed above, transparent hugepages are enabled.

    Alternate method:

    # grep AnonHugePages /proc/meminfo

    If this returns a value > 0 kB, the kernel is using Transparent HugePages.

  2. Preferred method to disable transparent hugepages:

    Add transparent_hugepage=never to /etc/grub.conf, and reboot the server, eg:

    title Oracle Linux Server (2.6.32-300.25.1.el6uek.x86_64)
    root (hd0,0)
    kernel /vmlinuz-2.6.32-300.25.1.el6uek.x86_64 ro root=LABEL=/ transparent_hugepage=never
    initrd /initramfs-2.6.32-300.25.1.el6uek.x86_64.img

  3. Alternate method: disable transparent hugepages in rc.local and reboot the server. Add the following section to rc.local and reboot:

    if test -f /sys/kernel/mm/transparent_hugepage/enabled; then

    echo never > /sys/kernel/mm/transparent_hugepage/enabled

    fi

    if test -f /sys/kernel/mm/transparent_hugepage/defrag; then

    echo never > /sys/kernel/mm/transparent_hugepage/defrag

    fi

Below is a link to spreadsheets containing common settings for hugepages (Linux) and largepages (AIX).

common settings
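
If you just need a quick estimate for Linux rather than the spreadsheet, a back-of-the-envelope sketch (assuming the default 2 MB Linux hugepage size):

#!/bin/bash
# rough hugepage count for a given total SGA size in GB
SGA_GB=46                            # total of all SGAs on the server
PAGES=$(( SGA_GB * 1024 / 2 + 2 ))   # 2 MB pages, plus a couple spare
echo "vm.nr_hugepages = $PAGES"      # add this line to /etc/sysctl.conf

Remember to also raise the oracle user’s memlock limit in /etc/security/limits.conf to cover the full SGA.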

DBMS_REDEFINITION Revisited

March 3, 2015

It has come to my attention that my last post on dbms_redefinition was really too simplistic to be useful. Therefore, I have set up a new example to show some of the true capabilities of dbms_redefinition. For this example, we will change column data types instead of just the column sizes. Below are the layouts of our source table (CUSTDATA) and our interim table (CUSTDATA_INT):


SQL> desc custdata
 Name                                      Null?    Type
 ----------------------------------------- -------- -------------
 CRDATTIM                                  NOT NULL CHAR(26)
 CASE_NUMBER                                        CHAR(10)
 LAST_NAME                                          CHAR(30)
 FIRST_NAME                                         CHAR(30)


SQL> desc custdata_int
 Name                                      Null?    Type
 ----------------------------------------- -------- -------------------
 CRDATTIM                                  NOT NULL TIMESTAMP(6)
 CASE_NUMBER                                        VARCHAR2(30)
 LAST_NAME                                          VARCHAR2(30)
 FIRST_NAME                                         VARCHAR2(30)

Note that the custdata table layout is not particularly good, but it is taken from an actual case: this is a table of legacy data that we are unable to get rid of.  I am working on getting our application team to redesign it to a more sensible layout, but that will take time. As can be seen, there are extensive data changes to go through.  As in our previous example, first we have to verify that the table can be redefined using DBMS_REDEFINITION:

SQL> SET SERVEROUTPUT ON
SQL> begin
  2  DBMS_REDEFINITION.CAN_REDEF_TABLE(
  3  uname=>'SCOTT',
  4  tname=>'CUSTDATA',
  5  options_flag=>DBMS_REDEFINITION.cons_use_rowid);
  6  end;
  7  /

PL/SQL procedure successfully completed.

Now that we have verified that the source table can be redefined using dbms_redefinition, we begin the process:


SQL> SET SERVEROUTPUT ON
SQL> begin
  2  DBMS_REDEFINITION.START_REDEF_TABLE(
  3  uname=>'SCOTT',
  4  orig_table=>'CUSTDATA',
  5  int_table=>'CUSTDATA_INT',
  6  col_mapping=>'to_timestamp(crdattim,'||''''||'yyyy-mm-dd-hh24.mi.ss.ff'||''
''||') crdattim,
  7  rtrim(ltrim(case_number)) case_number,
  8  rtrim(ltrim(first_name)) first_name,
  9  rtrim(ltrim(last_name)) last_name',
 10  options_flag=>DBMS_REDEFINITION.cons_use_rowid);
 11  end;
 12  /

PL/SQL procedure successfully completed.

SQL> SET SERVEROUTPUT ON
SQL> begin
  2  DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
  3  uname=>'SCOTT',
  4  orig_table=>'CUSTDATA',
  5  int_table=>'CUSTDATA_INT');
  6  end;
  7  /

PL/SQL procedure successfully completed.

So, everything is going well.  Let’s move on to the next step:

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  2  l_num_errors PLS_INTEGER;
  3  begin
  4  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
  5  uname=>'SCOTT',
  6  orig_table=>'CUSTDATA',
  7  int_table=>'CUSTDATA_INT',
  8  copy_indexes => DBMS_REDEFINITION.cons_orig_params,
  9  copy_triggers => TRUE,
 10  copy_constraints => TRUE,
 11  copy_privileges => TRUE,
 12  ignore_errors => FALSE,
 13  num_errors => l_num_errors);
 14  end;
 15  /
DECLARE
*
ERROR at line 1:
ORA-01442: column to be modified to NOT NULL is already NOT NULL
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1015
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1907
ORA-06512: at line 4

At this point, I abort the process and try to track down the problem.  To abort a DBMS_REDEFINITION process, the command is ABORT_REDEF_TABLE, like this:


SQL> begin
  2  DBMS_REDEFINITION.ABORT_REDEF_TABLE(uname=>'SCOTT',orig_table=>'CUSTDATA',int_table=>'CUSTDATA_INT');
  3  End;
  4  /

PL/SQL procedure successfully completed.

The error message does look like a possible Oracle bug.  Let’s see what the Metalink (My Oracle Support) says: I searched on ORA-01442 DBMS_REDEFINITION, and the first result is Document ID 1116785.1, ORA-1442 Error During Online Redefinition – DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS.  Oracle doesn’t actually say it is a bug, and it may not be.  It probably stems from the fact that I created my interim table using a create table as select, then made changes to it.  It looks like I just need to drop the not null constraint on CUSTDATA_INT and start over, so let’s try that. I ran this query:

SQL> select * from dba_constraints where table_name ='CUSTDATA_INT' and owner='SCOTT';
OWNER
-------------------------------------
CONSTRAINT_NAME                C
------------------------------ -
SCOTT
SYS_C0038648                   C

So there is only one constraint, a check constraint.  I will drop that, and see if that fixes the problem.

SQL> alter table custdata_int drop constraint SYS_C0038648;
Table altered.

Sure enough, that fixed it (NOTE: We have to restart at the beginning since we used the ABORT_REDEF_TABLE procedure.)

SQL> SET SERVEROUTPUT ON
SQL> begin
  2  DBMS_REDEFINITION.START_REDEF_TABLE(
  3  uname=>'SCOTT',
  4  orig_table=>'CUSTDATA',
  5  int_table=>'CUSTDATA_INT',
  6  col_mapping=>'to_timestamp(crdattim,'||''''||'yyyy-mm-dd-hh24.mi.ss.ff'||''
''||') crdattim,
  7  rtrim(ltrim(case_number)) case_number,
  8  rtrim(ltrim(first_name)) first_name,
  9  rtrim(ltrim(last_name)) last_name',
 10  options_flag=>DBMS_REDEFINITION.cons_use_rowid);
 11  end;
 12  /

PL/SQL procedure successfully completed.

SQL> SET SERVEROUTPUT ON
SQL> begin
  2  DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
  3  uname=>'SCOTT',
  4  orig_table=>'CUSTDATA',
  5  int_table=>'CUSTDATA_INT');
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  2  l_num_errors PLS_INTEGER;
  3  begin
  4  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
  5  uname=>'SCOTT',
  6  orig_table=>'CUSTDATA',
  7  int_table=>'CUSTDATA_INT',
  8  copy_indexes => DBMS_REDEFINITION.cons_orig_params,
  9  copy_triggers => TRUE,
 10  copy_constraints => TRUE,
 11  copy_privileges => TRUE,
 12  ignore_errors => FALSE,
 13  num_errors => l_num_errors);
 14  end;
 15  /

PL/SQL procedure successfully completed.

So, let’s finish the process:

SQL> SET SERVEROUTPUT ON
SQL> begin
  2  DBMS_REDEFINITION.FINISH_REDEF_TABLE(
  3  uname=>'SCOTT',
  4  orig_table=>'CUSTDATA',
  5  int_table=>'CUSTDATA_INT');
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> alter table custdata drop unused columns;
Table altered.

SQL> exec dbms_stats.gather_table_stats(ownname=>'SCOTT',tabname=>'CUSTDATA', ca
scade=>true);

PL/SQL procedure successfully completed.

SQL>

SQL> desc custdata;
 Name                                      Null?    Type
 ----------------------------------------- -------- -----------------
 CRDATTIM                                           TIMESTAMP(6)
 CASE_NUMBER                                        VARCHAR2(30)
 LAST_NAME                                          VARCHAR2(30)
 FIRST_NAME                                         VARCHAR2(30)

Note a couple of steps I left off in my earlier example: dropping unused columns and gathering stats.  The unused column comes from the redefinition process: because I was using the ROWID option (cons_use_rowid), Oracle added a hidden column at the start of the process to keep track of rows.  Oracle automatically marks that column as unused once the process is complete, but it is a best practice to drop it once the redefinition is done.