Kubernetes + ONTAP = DBaaS Part 2: Setup

In part 1 of this series, I discussed various reasons why you might want to run Oracle in containers. Now it’s time to start building, beginning with Docker configuration.

Installing Docker

My lab is currently a collection of RHEL 7.3 guests running under ESX. Everything is connected by 10Gb Ethernet with jumbo frames, and the main storage system is an FAS8060 with hybrid and SSD aggregates. It’s pretty powerful for a lab system. As far as I can tell, the “real” Docker Enterprise Edition requires paid support. I assumed someone at NetApp could get me access under a developer license, but then I found the Docker Community Edition link. That seemed easy enough, so I enabled the docker-ce YUM repository and tried to install.

I had a prerequisite problem related to docker-ce-selinux that took a while to work out. Apparently, the package I needed was in the rhel-7-server-extras-rpms repository and that wasn’t enabled by default. After I enabled it, I was able to install docker-ce-17.03.1.ce-1 without problems.
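For reference, the fix amounts to a repo enable followed by the install, shown here as a transcript (run as root on a subscription-registered RHEL 7 host; the exact package version string is the one from my lab):

```shell
#   subscription-manager repos --enable=rhel-7-server-extras-rpms
#   yum -y install docker-ce-17.03.1.ce-1
#   systemctl enable docker && systemctl start docker
```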


With Docker installed, my servers were ready to run containers, but how do I define a container?

This is where Oracle in a container differs from most container deployments. When you read about containers, it’s mostly about microapps. Let’s say you have a web-based application written in Java. You need security, and you want to scale up and down easily. How do you run 1000 instances of that application while making sure they don’t step on each other, that there are no data leaks between instances, and that you can easily increase and decrease the number of running apps?

That is the classic use case for containers. Sure, if you want to send VMware a lot of money, you could run 1000 virtual machines, each running one instance of the app. That would provide scalability and security, and you could automate that solution, but there’s a better way. Rather than building an ESX cluster, you can build a Kubernetes cluster. You install only one OS, and the overhead is just the one running kernel on that x86 server. You also save $$$ on ESX licenses, although there are legitimate reasons to run containers on a virtualized OS under a hypervisor like KVM or ESX too.

An app running in a container is running in a sort of lightweight VM. Like any VM, it needs an “operating system,” but that operating system is composed of only the files actually required to make the application work. A Java app might need just a handful of files and libraries; you might see only 20 files inside the container. Everything else is outside of that namespace.

This is where Oracle in a container looks weird to some people. A normal Oracle database installation integrates heavily with the operating system. There are critical files in /etc and /var, and the database has a lot of dependencies. I’m sure Oracle will eventually create a more container-friendly version of the database, but for now you pretty much need the entire OS.

So, how do you containerize your database when it needs an entire OS filesystem? You start by containerizing almost the entire OS.


A container is largely defined by a dockerfile. It’s a file that you feed to Docker to construct the basic container image, including its filesystems. For example, let’s say you want to run PostgreSQL in a container. Your dockerfile might look like this:

FROM centos
RUN yum -y update
RUN yum -y install sudo epel-release
RUN yum -y install postgresql-server postgresql postgresql-contrib supervisor pwgen

You then run “docker build” to create your image. That file tells the build process to start with a basic CentOS image that contains the things that almost any Linux application needs to run. You can find even more lightweight images, but this one is only 195MB.
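The build step itself is a one-liner against a directory containing the dockerfile. Here’s a sketch; the /tmp/pgimage path and the my-postgres tag are my own illustrative choices:

```shell
# Write the example dockerfile into an empty build context directory.
mkdir -p /tmp/pgimage
cat > /tmp/pgimage/Dockerfile <<'EOF'
FROM centos
RUN yum -y update
RUN yum -y install sudo epel-release
RUN yum -y install postgresql-server postgresql postgresql-contrib supervisor pwgen
EOF
# On a host with a running Docker daemon, this builds and tags the image:
#   docker build -t my-postgres /tmp/pgimage
```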

The build process then continues and runs the yum commands to install the postgresql binaries and dependencies. The end result is an image that is approximately 300MB in size, as you can see below:

[root@jfs4 dtest]# docker image ls postgres --format "{{.Repository}}:{{.Tag}} {{.Size}}"
postgres:latest 306MB

It looks something like this from within a running container:

[root@6ee8419e5617 /]# df -k
Filesystem                            1K-blocks    Used Available Use% Mounted on
overlay                               335360012 3457904 331902108   2% /
tmpfs                                     65536       0     65536   0% /dev
tmpfs                                   4005140       0   4005140   0% /sys/fs/cgroup
/dev/mapper/dockervg-var--lib--docker   335360012 3457904 331902108   2% /etc/hosts
shm                                       65536       0     65536   0% /dev/shm
tmpfs                                   4005140       0   4005140   0% /proc/scsi
tmpfs                                   4005140       0   4005140   0% /sys/firmware

If it looks like there’s an outrageously large amount of space available in my container, that’s because the space available reflects the space where Docker images reside, which is /var/lib/docker on RHEL7. There is another 3.5GB of data on /var/lib/docker on my host, and obviously you need to be careful not to fill up /var/lib/docker.
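Because every image and writable container layer lands under that same directory, it’s worth keeping an eye on its usage. A minimal check, assuming the RHEL 7 default path:

```shell
# How much space is Docker's storage area consuming? (RHEL 7 default path;
# adjust DOCKER_ROOT if your daemon uses a different graph root.)
DOCKER_ROOT=/var/lib/docker
if [ -d "$DOCKER_ROOT" ]; then
  usage=$(du -sk "$DOCKER_ROOT" 2>/dev/null | awk '{print $1}')
else
  usage=0   # no Docker storage on this host
fi
echo "KB used under $DOCKER_ROOT: $usage"
```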

[root@6ee8419e5617 /]# du -sk /*
12      /anaconda-post.log
0       /bin
0       /dev
9260    /etc
0       /home
0       /lib
0       /lib64
0       /media
0       /mnt
0       /opt
0       /proc
24      /root
0       /run
0       /sbin
0       /srv
0       /sys
4       /tmp
209040  /usr
23116   /var

Every time I make another container, I get another private mini-OS environment for that specific container. The process is efficient, meaning I don’t lose another 300MB of space for every database. It’s a sort of copy-on-write cloning process: initially, all containers point to the same files.
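You can see that layer sharing with a couple of Docker commands, shown here as an illustrative transcript (assumes a host with the postgres image from earlier):

```shell
#   docker history postgres:latest   # the read-only layers stacked into the image
#   docker system df                 # image space vs. per-container writable layers
# Each new container adds only a thin writable layer on top of the shared image.
```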

This example illustrates why I’m referring to a container as a lightweight VM. From the sysadmin and DBA point of view, that’s what it is. I have filesystems and processes. I can log in to the container from the outside world, and so on. The containers are all using the same OS kernel, however. That makes it more efficient overall, plus I can “boot” a container in milliseconds because the OS is already running. I just need to start a process in a private namespace to bring up my application.

Oracle Dockerfiles

Many container users rely on publicly available images: download the image you want and go, with no need to build anything. There are also Oracle database Docker images that contain everything you need, but Oracle in a container can be more complicated.

In many cases, you will need to build your own image, partially because of licensing. Oracle needs to protect their IP, and freely offering their product in a prebuilt image would lead to problems controlling who’s running what. I expect that they will eventually provide free Docker container images that come complete with Oracle binaries installed, but, even then, there are reasons to build your own. Versions change, patches are required, and so on.

I found a lot of Docker files at https://github.com/oracle/docker-images. It’s an impressive effort. Someone over at Oracle clearly thinks that containerization is the future, especially for Cloud. They have almost everything covered. I was particularly interested in the Oracle Database dockerfiles.

There’s a second reason I want to build my own Oracle-specific docker image. Customers buy NetApp products for the data management capabilities. I don’t just want an Oracle database in a container that is merely present on NetApp storage, I want a manageable container with integrated backup, recovery, provisioning, and cloning. Therefore, I built my own dockerfile.

DBAAS-NTAP Dockerfile

I called my dockerfile DBAAS-NTAP. I hope to post this dockerfile somewhere public soon. The dockerfile is pretty straightforward. However, it includes some complicated scripts that manage database creation. I have had a personal toolkit of scripts for years, and it only took some minor changes to integrate them with a container. The following examples are sometimes edited for clarity.

The dockerfile reads as follows, and contains the complete definition of the container. It is quite intuitive after you scan it a few times.

FROM oraclelinux:7-slim

MAINTAINER Jeffrey Steiner


   LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib \

COPY /orabin/install/
COPY NTAP.install. /orabin/install/NTAP.install.rsp
COPY NTAP.dbc. /orabin/NTAP.dbc
COPY NTAP.go NTAPlib.zip NTAP.renameDB NTAP.dbca.rsp.tmpl NTAP.createDB NTAP.startDB NTAP.setPassword /orabin/

RUN chmod ug+x /orabin/NTAP.setPassword && \
   chmod ug+x /orabin/NTAP.go && \
   chmod ug+x /orabin/NTAP.startDB && \
   chmod ug+x /orabin/NTAP.createDB && \
   chmod ug+x /orabin/NTAP.renameDB && \
   yum -y install oracle-database-server-12cR2-preinstall unzip wget tar openssl && \
   yum -y install openssh-server && \
   yum -y install vim && \
   yum clean all && \
   echo oracle:oracle | chpasswd && \
   chown -R oracle:dba /orabin

USER oracle
RUN cd /orabin/install && \
   unzip /orabin/install/ && \
   /orabin/install/database/runInstaller -silent -force -waitforcompletion -responsefile /orabin/install/NTAP.install.rsp -ignoresysprereqs -ignoreprereq && \
   ln -s /orabin/NTAP.setPassword /home/oracle/ && \
   echo "DEDICATED_THROUGH_BROKER_LISTENER=ON" >> $ORACLE_HOME/network/admin/listener.ora && \
   echo "DIAG_ADR_ENABLED = off" >> $ORACLE_HOME/network/admin/listener.ora;

USER root
RUN /orabin/oraInventory/orainstRoot.sh && \
    $ORACLE_HOME/root.sh && \
    rm -rf /orabin/install && \
    mkdir /oradata && \
    mkdir /logs && \
    unzip /orabin/NTAPlib.zip -d /orabin && \
    rm /orabin/NTAPlib.zip && \
    chmod ug+x /orabin/NTAPlib/* && \
    chown oracle:dba /orabin/NTAPlib && \
    chown oracle:dba /oradata && \
    chown oracle:dba /logs

USER oracle
RUN mkdir /home/oracle/.ssh && \
    chmod 700 /home/oracle/.ssh && \
    ssh-keygen -t rsa -f /home/oracle/.ssh/ssh_host_rsa_key
COPY NTAP.oracle.sshd_config /home/oracle/.ssh/sshd_config
COPY NTAP.authorized_keys /home/oracle/.ssh/authorized_keys

WORKDIR /home/oracle

EXPOSE 1521 2022

CMD exec /orabin/NTAP.go

Let’s break it down. It starts with the Oracle Linux 7 image.

FROM oraclelinux:7-slim

That oraclelinux:7-slim image is approximately 120MB in size. It contains the bare-bones binaries and libraries required for a process to run.

Next, I set the basic Oracle environment variables required to make the Oracle installation happy. That means an ORACLE_BASE and ORACLE_HOME. My approach to automation requires certain fixed paths so that the setup scripts know where to find various binaries and configuration files. I usually place ORACLE_BASE at /orabin and ORACLE_HOME at /orabin/product/(full version)/dbhome_1.

It’s very important to set all of these paths correctly.

   LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib \

Even small errors in paths such as CLASSPATH or LD_LIBRARY_PATH cause cryptic, frustrating failures during installation and startup. I was burned by this a number of times while I was developing this framework.
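For reference, an ENV block of this kind typically looks something like the following sketch. The paths follow the conventions described above; the version directory (12.2.0.1) and the exact CLASSPATH entries are illustrative assumptions, not my actual dockerfile:

```dockerfile
# Sketch only: the environment the rest of the dockerfile depends on
ENV ORACLE_BASE=/orabin \
    ORACLE_HOME=/orabin/product/12.2.0.1/dbhome_1
ENV PATH=$ORACLE_HOME/bin:$PATH \
    LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib \
    CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
```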

Next come the COPY instructions. As the image is built, they copy the specified files into the Docker image. The first file is the zipfile that contains the Oracle installer, which I placed in /orabin/install. The build process then copies my various utilities and files into the correct locations. I will explain what all those utilities and files do later.

COPY /orabin/install/
COPY NTAP.install. /orabin/install/NTAP.install.rsp
COPY NTAP.dbc. /orabin/NTAP.dbc
COPY NTAP.go NTAPlib.zip NTAP.renameDB NTAP.dbca.rsp.tmpl NTAP.createDB NTAP.startDB NTAP.setPassword /orabin/

Now we start executing commands. They run within a container, operating on the image that is being constructed. In effect, you are using a container to create the image that your containers will use. The first five commands are just chmod operations to set permissions correctly.

RUN chmod ug+x /orabin/NTAP.setPassword && \
   chmod ug+x /orabin/NTAP.go && \
   chmod ug+x /orabin/NTAP.startDB && \
   chmod ug+x /orabin/NTAP.createDB && \
   chmod ug+x /orabin/NTAP.renameDB && \
   yum -y install oracle-database-server-12cR2-preinstall unzip wget tar openssl && \
   yum -y install openssh-server && \
   yum -y install vim  && \
   yum clean all && \

The yum commands shown above illustrate an interesting point about how an image and container work. Remember that the image is based on the slim Oracle Linux 7 image. That’s only the basic OS components, and it’s not enough to make an Oracle database work. The yum commands are executing within a container that is based on the growing image. They pull down the preinstallation dependencies for an Oracle 12cR2 database. That’s why the yum package is called oracle-database-server-12cR2-preinstall. Oracle has helpfully made this available for Oracle Linux customers to simplify the installation process. I’m still using RHEL for the OS, but my container looks like OL7, and the yum command is using the yum configuration within that OL7 container.

In addition, I’m running this particular test on Red Hat Linux, but my container is an OL environment. The Oracle database processes will run as if they are running in OL, irrespective of the actual OS used. That’s another benefit of containers – portability. The container includes the complete runtime environment, so you don’t have to worry about the actual OS being used and whether you have the right dependencies on that OS. All you really need to provide to the container is the kernel.
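It’s easy to verify this yourself: the kernel belongs to the host, while everything above it belongs to the container (the database:12.2 image name below is illustrative):

```shell
# On the Docker host: the kernel version every container will share.
uname -r
# Inside a container built from oraclelinux:7-slim (requires Docker):
#   docker run --rm database:12.2 uname -r            # same kernel version
#   docker run --rm database:12.2 cat /etc/os-release # reports Oracle Linux 7
```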

I also added unzip, wget, tar, openssl, openssh, and vim to the image. There’s a reason for that, one that gives many container administrators heartburn unless they understand the demands of an Oracle DBA. More on that in a few paragraphs.

Next, I set the oracle user’s password, handed ownership of /orabin to oracle:dba, and ran some commands as oracle:

echo oracle:oracle | chpasswd && \
chown -R oracle:dba /orabin
USER oracle
RUN cd /orabin/install && \
   unzip /orabin/install/ && \
   /orabin/install/database/runInstaller -silent -force -waitforcompletion -responsefile /orabin/install/NTAP.install.rsp -ignoresysprereqs -ignoreprereq && \
   ln -s /orabin/NTAP.setPassword /home/oracle/ && \
   echo "DEDICATED_THROUGH_BROKER_LISTENER=ON" >> $ORACLE_HOME/network/admin/listener.ora && \
   echo "DIAG_ADR_ENABLED = off" >> $ORACLE_HOME/network/admin/listener.ora;

This is basic unattended installation work. An Oracle responsefile was copied in one of the earlier steps, and this was then fed to the installer running in silent mode. The result was installation of the Oracle RDBMS binaries at the specified ORACLE_HOME path. This also illustrates the value of building your own Oracle installation dockerfile. If I need to do something special with an installation, I can alter NTAP.install.rsp, which is my unattended install parameter file, and customize the way Oracle is installed. For example, I might want to skip some database components, or I might want both Enterprise Edition and Standard Edition variants.
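To give a flavor of that customization, here are the kinds of entries a silent-install response file carries. This is a hedged sketch rather than my actual NTAP.install.rsp: the parameter names follow Oracle’s 12cR2 response file format, and the paths echo the conventions described above (the version directory is illustrative):

```
oracle.install.option=INSTALL_DB_SWONLY
UNIX_GROUP_NAME=dba
ORACLE_BASE=/orabin
ORACLE_HOME=/orabin/product/12.2.0.1/dbhome_1
oracle.install.db.InstallEdition=EE
oracle.install.db.OSDBA_GROUP=dba
```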

The next step was running some operations as root rather than oracle. First, I ran the usual orainstRoot.sh and root.sh scripts that every DBA knows about. These are almost always required after a new installation of Oracle binaries.

Next, I deleted the installation files, set up the basic directories that are used for the Oracle database files, uncompressed some of my database management utilities, and then set ownership of all that to oracle:dba.

USER root
RUN /orabin/oraInventory/orainstRoot.sh && \
   $ORACLE_HOME/root.sh && \
   rm -rf /orabin/install && \
   mkdir /oradata && \
   mkdir /logs && \
   unzip /orabin/NTAPlib.zip -d /orabin && \
   rm /orabin/NTAPlib.zip && \
   chmod ug+x /orabin/NTAPlib/* && \
   chown oracle:dba /orabin/NTAPlib && \
   chown oracle:dba /oradata && \
   chown oracle:dba /logs

Remote Access

Here’s the bit that experienced container architects don’t like. I configured SSH.

USER oracle
RUN mkdir /home/oracle/.ssh && \
chmod 700 /home/oracle/.ssh && \
ssh-keygen -t rsa -f /home/oracle/.ssh/ssh_host_rsa_key
COPY NTAP.oracle.sshd_config /home/oracle/.ssh/sshd_config
COPY NTAP.authorized_keys /home/oracle/.ssh/authorized_keys

If you do a simple Google search about “Docker and SSH” you will find a lot of warnings. I agree with most of them, because most of the time you shouldn’t need SSH access to a container. If you are using SSH, then you are probably doing something wrong or at least doing something in a less-than-optimal way.

An Oracle DBaaS service is an exception for two reasons. First, Oracle DBA tasks frequently require full access to the database environment to collect diagnostic data, pick up logs, change parameters, apply patches, and so on. There must be a way to get shell access to the container environment.

Second, I’m aiming for a secure multitenant architecture. I do not want to grant anyone direct access for any reason to the host OS where the containers are running.

The best solution to this problem that I could find is to run an SSH daemon as the oracle user within the container. With this approach, I can manage access with the authorized_keys file, and I can turn SSH on and off at will too. The security risk can be mitigated by using a firewall to restrict which hosts are permitted to reach the SSHD listener port, or by enabling SSH only on demand. For example, a portal could offer a “request ssh access” option where a user enters their SSH key; the portal then pushes the key to the host and turns on SSH for a limited period of time. It’s easy to invoke such commands within a container from a central location.
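A sketch of that on/off switch, run from the management host. The container name (oradb1) and the file locations are assumptions for illustration:

```shell
container=${1:-oradb1}   # illustrative container name
# Command to start the per-user SSH daemon inside the container:
enable_cmd="docker exec -u oracle $container /usr/sbin/sshd -f /home/oracle/.ssh/sshd_config"
# Command to shut it back down when the access window closes:
disable_cmd="docker exec -u oracle $container pkill -u oracle sshd"
echo "enable:  $enable_cmd"
echo "disable: $disable_cmd"
```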

Kubernetes also offers a CLI capability. I don’t think it’s quite as secure as the SSH approach, but it’s close, and it’s easier to implement if you’re dealing with a strict need for total network and container isolation.
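The Kubernetes route is an exec session brokered through the API server rather than a network listener inside the container (pod name illustrative):

```shell
#   kubectl exec -it oradb-pod-0 -- bash
# Access is governed by Kubernetes RBAC instead of SSH keys.
```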

Finally, I defined a working directory, I defined which network ports were open into the container, and I specified the command to run when starting a container.

WORKDIR /home/oracle
EXPOSE 1521 2022
CMD exec /orabin/NTAP.go

Remember, the container’s process namespace starts with a single process. That process becomes PID 1 for the container, the parent of all other processes. In this case, the initial command is /orabin/NTAP.go, which spawns a Python script. This script performs various tasks, ending with the sqlplus command that starts the database.

Multiple Versions of Oracle

I created three variants of the dockerfile, one for each Oracle version; they are all nearly identical. I could have made one image with all three sets of binaries, but the image would have been a bit bloated. I might also have run into dependency issues if two versions of Oracle had differing needs for certain libraries.

In the end, my images looked like this:

[root@jfs5 kube]# docker image ls
REPOSITORY             TAG                 IMAGE ID           CREATED         SIZE
database            de5a7fb77912       30 days ago      18.9GB
database            06cb81ce600b       30 days ago      12.7GB
database            1e944ec99a18       30 days ago      14.4GB

The next post will cover storage integration with NetApp Trident. This is the key to DBaaS. Trident gives me policy-based, automated storage provisioning, integrated database backups, and the ability to clone a running database.
