Kubernetes+ONTAP = DBaaS Part 5: Manageability

Parts 1 to 4 of this series cover the general requirements for DBaaS, Docker setup, Kubernetes setup, the NetApp® Trident storage driver, and orchestration.

I could write a short novel on the many benefits of DBaaS. In particular, there are some manageability benefits that don’t quite fit into the earlier topics, so I am covering examples of those in this section.

Database Names

An interesting benefit of using containers is the ability to run multiple databases with identical names (ORACLE_SID) on the same server. This works because of namespaces. Each set of Oracle processes runs in an isolated namespace, so the databases can’t see one another and therefore don’t complain about naming conflicts.

This is helpful in application development, where an application is configured to look for a database with a specific SID.

The dbaas utility allows the user to set the SID for the database and, optionally, the name of a PDB. The scripts included with the Docker image handle the rename process. This adds about 30 seconds to the cloning operation, but it is optional.
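For the curious, here is a rough sketch of what the optional PDB rename step could look like under the hood. This is illustrative only; the rename_pdb helper is made up, and the actual scripts in my Docker image may do this differently. The key detail is that a PDB must be open in restricted mode before its global name can be changed.

import subprocess

def rename_pdb(old_name: str, new_name: str) -> None:
    # Hypothetical helper, not the actual script from the image.
    # A PDB rename requires the PDB to be open RESTRICTED.
    sql = f"""
alter pluggable database {old_name} close immediate;
alter pluggable database {old_name} open restricted;
alter session set container = {old_name};
alter pluggable database rename global_name to {new_name};
alter pluggable database close immediate;
alter pluggable database open;
exit
"""
    # Run the statements as SYSDBA through a non-interactive sqlplus session.
    subprocess.run(["sqlplus", "-s", "/ as sysdba"],
                   input=sql, text=True, check=True)

rename_pdb("NTAPPDB", "ERPPDB")   # target name is illustrative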

CLI Access

As mentioned in a previous post, CLI access is important for maintaining an Oracle database. Sometimes you must retrieve a log, run an OS command, or install a patch.

Security is a possible concern, though. The Dockerfile I described in this series of posts includes a Secure Shell (SSH) daemon running as the oracle user. That grants access only to the container, so even if SSH security were breached, the intruder would be confined to the container itself. Obviously nobody wants an intrusion at all, but if you need remote access, you have to make a door.

If security is too critical to permit any kind of SSH access, the daemon can be removed from the Dockerfile with a few quick edits. Then only the administrator can access the container.

There are a number of ways to access a container directly. For example, let’s say I want to access the container with the UUID ERPAPP. First, I list the containers with my utility:

[root@jfs4 kube]# ./dbaas.v5 show
UUID      NAME                       TYPE      STATUS  MANAGED DB     VERSION  NODE
--------- -------------------------- --------- ------- ------- ------ -------- ----
12cR2     12cr2-ntap-ntappdb-dbf     dbf       Bound   True    oracle 12.2.0.1
12cR2     12cr2-ntap-ntappdb-log     log       Bound   True    oracle 12.2.0.1
ERPAPP    erpapp-ntap-ntappdb        container Running True    oracle 12.2.0.1 jfs5
ERPAPP    erpapp-ntap-ntappdb-dbf    dbf       Bound   True    oracle 12.2.0.1
ERPAPP    erpapp-ntap-ntappdb-log    log       Bound   True    oracle 12.2.0.1
test1     test1-ntap-ntappdb         container Running True    oracle 12.2.0.1 jfs6
test1     test1-ntap-ntappdb-dbf     dbf       Bound   True    oracle 12.2.0.1
test1     test1-ntap-ntappdb-log     log       Bound   True    oracle 12.2.0.1
testclone testclone-ntap-ntappdb     container Running True    oracle 12.2.0.1 jfs5
testclone testclone-ntap-ntappdb-dbf dbf       Bound   True    oracle 12.2.0.1
testclone testclone-ntap-ntappdb-log log       Bound   True    oracle 12.2.0.1

The container itself is called erpapp-ntap-ntappdb, as shown above. I could just do this:

[root@jfs4 kube]# kubectl exec -ti erpapp-ntap-ntappdb -- /bin/bash
[oracle@erpapp-ntap-ntappdb ~]$

This command also leverages namespaces. kubectl performs an exec of /bin/bash using the namespaces defined for erpapp-ntap-ntappdb. When kubectl forks and replaces itself with /bin/bash in the new namespaces, the result is a bash shell running inside the container itself. I can see only the container’s processes, the container’s filesystems, and so on.
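If you want to see what this looks like beneath kubectl, here is a conceptual Python sketch using nsenter on the host node. It is an illustration of the namespace mechanics rather than kubectl’s actual code path, and the PID argument is whatever process you have identified as belonging to the container.

import subprocess
import sys

# PID of any process already running inside the container, e.g. found
# via ps or the container runtime. Run this on the hosting node as root.
container_pid = sys.argv[1]

# nsenter joins the target's mount, UTS, IPC, network, and PID
# namespaces before starting the shell, which is conceptually what
# happens for a kubectl exec.
subprocess.run(["nsenter", "--target", container_pid,
                "--mount", "--uts", "--ipc", "--net", "--pid",
                "/bin/bash"], check=True)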

I can easily add this ability to the existing dbaas utility. What happens here is that dbaas hands off to the kubectl command, which in turn launches the shell within the container.

[root@jfs4 kube]# ./dbaas.v5 show
UUID   NAME                    TYPE      STATUS  MANAGED DB     VERSION  NODE
------ ----------------------- --------- ------- ------- ------ -------- ----
12cR2  12cr2-ntap-ntappdb-dbf  dbf       Bound   True    oracle 12.2.0.1
12cR2  12cr2-ntap-ntappdb-log  log       Bound   True    oracle 12.2.0.1
ERPAPP erpapp-ntap-ntappdb     container Running True    oracle 12.2.0.1 jfs5
ERPAPP erpapp-ntap-ntappdb-dbf dbf       Bound   True    oracle 12.2.0.1
ERPAPP erpapp-ntap-ntappdb-log log       Bound   True    oracle 12.2.0.1
test1  test1-ntap-ntappdb      container Running True    oracle 12.2.0.1 jfs6
test1  test1-ntap-ntappdb-dbf  dbf       Bound   True    oracle 12.2.0.1
test1  test1-ntap-ntappdb-log  log       Bound   True    oracle 12.2.0.1

[root@jfs4 kube]# ./dbaas.v5 cli ERPAPP
[oracle@erpapp-ntap-ntappdb ~]$ ls /oradata/NTAP
NTAPPDB  sysaux01.dbf  temp01.dbf     users01.dbf
pdbseed  system01.dbf  undotbs01.dbf

[oracle@erpapp-ntap-ntappdb ~]$ ps -ef | grep smon
oracle    2298     1  0 09:36 ?        00:00:00 ora_smon_NTAP
oracle    4327  4305  0 12:27 pts/0    00:00:00 grep --color=auto smon

[oracle@erpapp-ntap-ntappdb ~]$ exit
exit
[root@jfs4 kube]#
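The cli subcommand barely needs any code. Here is a minimal sketch, assuming the <uuid>-ntap-ntappdb pod-naming convention shown in the listings above; the real dbaas.v5 internals may differ.

import os
import sys

def cli(uuid: str) -> None:
    # Naming convention from the listings above; assumed, not guaranteed.
    pod = uuid.lower() + "-ntap-ntappdb"
    # Replace the dbaas process with kubectl, which starts a bash shell
    # inside the container's namespaces.
    os.execvp("kubectl",
              ["kubectl", "exec", "-ti", pod, "--", "/bin/bash"])

if __name__ == "__main__":
    cli(sys.argv[1])   # e.g. ./dbaas.v5 cli ERPAPP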

That’s possibly safe enough to use for an internal cloud project, but it is not acceptable for a truly secure multitenant environment. One exception might be a portal that exposed the “kubectl exec /bin/bash” command to the end user through an interactive browser interface, similar to how vCenter allows the user to open a virtual console.

SSH Access

As mentioned above, I’m currently running an SSH daemon as the oracle user inside the container to allow easier access. As long as I have the SSH keys, I can access the container directly. To do that, I need a good way to find a container’s IP address.

Here’s another example of how easy it is to automate Kubernetes tasks.

The “dbaas show” command harvests container (pod) data that includes metadata starting with “ntap-dbaas-“. While I’m doing that, I can also retrieve the IP data by adding a few lines to the script. The kubectl command already provides that detail; I just need to parse the pod’s status.podIP field and display the result.
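Here is a sketch of that lookup, not the exact code in dbaas.v5: kubectl returns the pod object as JSON, and the address lives in the status.podIP field.

import json
import subprocess

def pod_ip(pod_name: str) -> str:
    # Ask kubectl for the full pod object in JSON form.
    out = subprocess.run(
        ["kubectl", "get", "pod", pod_name, "-o", "json"],
        capture_output=True, text=True, check=True).stdout
    # The pod's IP address is reported at .status.podIP.
    return json.loads(out).get("status", {}).get("podIP", "")

print(pod_ip("aaaa-ntap-ntappdb"))   # e.g. 192.168.244.34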

Now, “dbaas show” creates output like this:

[root@jfs4 kube]# ./dbaas.v5 show
UUID  NAME                   TYPE      STATUS  MANAGED DB     VERSION  NODE IP
----- ---------------------- --------- ------- ------- ------ -------- ---- --------------
12cR2 12cr2-ntap-ntappdb-dbf dbf       Bound   True    oracle 12.2.0.1
12cR2 12cr2-ntap-ntappdb-log log       Bound   True    oracle 12.2.0.1
aaaa  aaaa-ntap-ntappdb      container Running True    oracle 12.2.0.1 jfs6 192.168.244.34
aaaa  aaaa-ntap-ntappdb-dbf  dbf       Bound   True    oracle 12.2.0.1
aaaa  aaaa-ntap-ntappdb-log  log       Bound   True    oracle 12.2.0.1

Using SSH, I can access the container from the outside world:

[root@jfs5 oracle-ntap]# ssh oracle@192.168.244.34 -p 2022
[oracle@aaaa-ntap-ntappdb ~]$ ls /oradata
NTAP

Summary

The team that created the Trident driver did a brilliant job. It does everything necessary to meet the requirements of a DBaaS project as I outlined in Part 1 of this series of blog posts:

  • Fast provisioning of a database
  • Simple backup strategy
  • Rapid cloning of a database
  • Simple teardown of a database
  • Option for secure multitenancy, with each database running in isolation from the others
  • Everything must be easily automatable and be able to integrate with a variety of automation frameworks
  • No need for hypervisor licenses: this can run under virtualization, but it’s not required. Just install Linux on regular servers and let Kubernetes handle your clustering

What else could you want for DBaaS?

In my opinion, there is a need and an opportunity for someone to create a container management utility that is comparable to vSphere in terms of usability and intuitiveness. I’ve looked around and haven’t seen anything that really impresses me so far.

Still, the native Kubernetes functionality is good. I’m just a guy with a Python book and not much training in any of this, and I’m really impressed by just how easy it was to get Kubernetes up and running and delivering practical value for a common IT use case. If I were still in an IT management role, I’d absolutely be promoting a container strategy.

If anyone would like to talk further, please email me. I’m happy to discuss sharing the Dockerfile and associated Kubernetes management scripts, and if there’s something you’d like to see changed or enhanced, I’m happy to investigate that as well.