Oracle 19c on Docker: Install Oracle via RPM (Part II)
This is the second in a multi-part series on building Docker images for Oracle 19c. In the first installment I demonstrated a Dockerfile for building images using the Oracle 19c RPM and applying Release Updates (RU) to the new home. In this post I describe the remaining steps necessary for running Docker images using the RPM-based installation.
In the next part of the series I describe a method to merge multiple images together, creating containers with a low-version Oracle home and database (11g, 12c, 18c) running alongside a preconfigured 19c database home, ready for testing and practicing upgrades to Oracle 19c!
Differences in RPM Installations
If you’re not already familiar with RPM-based installations, there are a few unique elements to be aware of that bear on building the images.
Configuration
First, installation of the database software and configuration of a database are performed by the root user via a default initialization script found at /etc/init.d/oracledb_ORCLCDB-19c. Databases are managed by passing options to the script: configure, start, stop, etc. It reads a file created at /etc/sysconfig/oracledb_ORCLCDB-19c.conf that provides limited database configuration:
```
#This is a configuration file to setup the Oracle Database.
#It is used when running '/etc/init.d/oracledb_ORCLCDB configure'.
#Please use this file to modify the default listener port and the
#Oracle data location.

# LISTENER_PORT: Database listener
LISTENER_PORT=1521

# ORACLE_DATA_LOCATION: Database oradata location
ORACLE_DATA_LOCATION=/opt/oracle/oradata

# EM_EXPRESS_PORT: Oracle EM Express listener
EM_EXPRESS_PORT=5500
```
Additional database configuration is embedded in the init script itself:
```
# Setting the required environment variables
export ORACLE_HOME=/opt/oracle/product/19c/dbhome_1
export ORACLE_VERSION=19c
export ORACLE_SID=ORCLCDB
export TEMPLATE_NAME=General_Purpose.dbc
export CHARSET=AL32UTF8
export PDB_NAME=ORCLPDB1
export LISTENER_NAME=LISTENER
export NUMBER_OF_PDBS=1
export CREATE_AS_CDB=true
```
It seems odd for this information to be hardcoded in the script until you look a little deeper:
```
CONFIG_NAME="oracledb_$ORACLE_SID-$ORACLE_VERSION.conf"
CONFIGURATION="/etc/sysconfig/$CONFIG_NAME"
```
It accommodates multiple databases (even multiple homes) via separate files dedicated to and named for each database and database version.
This is the first hurdle when creating a 19c database in Docker using the RPM installer. What if we want to create a database under a different home path? SID? Pluggable database name? Character set? Correctly named initialization and configuration files must be created with appropriate entries for these values.
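To illustrate the naming convention, the snippet below derives the per-database file name the init script expects. The variable names mirror those in the script itself; the SID and version values here are arbitrary examples:

```shell
#!/bin/sh
# Example values standing in for a custom database; any SID/version
# pair follows the same pattern:
ORACLE_SID=TEST
ORACLE_VERSION=19c

# The init script derives its configuration file name from SID and version:
CONFIG_NAME="oracledb_$ORACLE_SID-$ORACLE_VERSION.conf"
CONFIGURATION="/etc/sysconfig/$CONFIG_NAME"

echo "$CONFIGURATION"   # /etc/sysconfig/oracledb_TEST-19c.conf
```

A database created as TEST therefore needs its own init script and a matching oracledb_TEST-19c.conf before the configure step will find it.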
The groundwork is laid in the Dockerfile by making these files part of the oinstall group and allowing its members to edit them:

```
chown root:oinstall /etc/sysconfig/$CONFIG_FILE /etc/init.d/$INIT_FILE
chmod 664 /etc/sysconfig/$CONFIG_FILE
```
When the image is run as a container, the oracle user will be able to edit these files with custom values defined in docker run. We’ll return to this in just a bit, but let’s first look at the changes needed in the runOracle.sh script.
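As a preview, a container start might pass those custom values along these lines. The image name is illustrative, and the environment variable names match those consumed by the setup scripts later in this post:

```shell
# Hypothetical invocation; adjust image name, ports, and paths to your build:
docker run -d --name oracle19c \
  -p 1521:1521 \
  -e ORACLE_SID=MYCDB \
  -e ORACLE_PDB=MYPDB \
  -e ORACLE_CHARACTERSET=AL32UTF8 \
  -e PDB_COUNT=3 \
  -v /oracle/oradata:/opt/oracle/oradata \
  oracle/database:19.3.0-ee
```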
Starting and Stopping Oracle
The init script in /etc/init.d handles startup and shutdown of Oracle database components installed via RPM as part of the operating system startup sequence. In a container there is no init process, so the start and stop commands need to move into the runOracle.sh script. In Oracle’s Docker images, runOracle.sh starts and stops the listener and database via lsnrctl and sqlplus. These are replaced with the corresponding init file commands:

```
sudo $ORACLE_BASE/oradata/dbconfig/$ORACLE_SID/$INIT_FILE [stop|start|config]
```
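In practice this can be wrapped in small helper functions inside runOracle.sh. A minimal sketch, assuming $ORACLE_BASE, $ORACLE_SID, and $INIT_FILE are already set by the surrounding script (the function names are my own, not from Oracle's scripts):

```shell
# Hypothetical helpers for runOracle.sh; the real script's structure may differ.
INIT_SCRIPT="$ORACLE_BASE/oradata/dbconfig/$ORACLE_SID/$INIT_FILE"

startDB() {
  # Start the listener and database via the RPM init script (requires sudo):
  sudo "$INIT_SCRIPT" start
}

stopDB() {
  # Shut down cleanly when the container stops:
  sudo "$INIT_SCRIPT" stop
}

# Trap container shutdown signals and stop the database gracefully:
trap stopDB INT TERM
```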
Hang on… I said the init file was under /etc/init.d, but this command runs it from a different path. Oracle’s convention for Docker image builds places database configuration files under the oradata directory. This allows all database content, both data and configuration, to exist (and persist) in a volume outside the container. For more information on leveraging this feature, including how to create database gold images and “instant” databases in Docker:

Placing the initialization file here follows the same convention.
In Part I, I noted that the oracle user was added to the sudoers file, and here we see why: the init script must be run as root.
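The corresponding sudoers entry might look something like the following. This is an illustrative example, not the exact rule from Part I; a rule limited to the init script alone would be tighter than a blanket grant:

```
# Allow the oracle user to run commands as root without a password:
oracle ALL=(ALL) NOPASSWD: ALL
```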
This covers the significant differences needed in the RPM-ready run script.
Creating a Database
The same script that starts and stops database services also runs the Database Configuration Assistant (DBCA) in silent mode. In a non-RPM Docker environment, DBCA is called with a response file (dbca.rsp.tmpl) for configuration. In an RPM install, database configuration is spread across multiple files.

As already mentioned, each database has its own init script and configuration file with hardcoded entries. Additional configuration can be added to the database template or to a response file. The native RPM configuration doesn’t call a response file, and in my initial development I opted to modify the template rather than change the existing DBCA command. This is arguably a more difficult and less obvious approach than putting the content in a response file. I’ll change this in an upcoming release, but for now the setupDB.sh script replaces placeholder text at runtime, allowing docker run to pass arguments for the SID, character set, PDB name, and PDB count to the init script:
```
# Update the configuration:
sed -i -e "s|export ORACLE_SID=ORCLCDB|export ORACLE_SID=$ORACLE_SID|g" \
       -e "s|General_Purpose.dbc|$INSTALL_TMP|g" \
       -e "s|export CHARSET=AL32UTF8|export CHARSET=$ORACLE_CHARACTERSET|g" \
       -e "s|export PDB_NAME=ORCLPDB1|export PDB_NAME=$ORACLE_PDB|g" \
       -e "s|export NUMBER_OF_PDBS=1|export NUMBER_OF_PDBS=$PDB_COUNT|g" \
       -e "s|oracledb_\$ORACLE_SID-\$ORACLE_VERSION.conf|$CONFIG_FILE|g" \
       -e "s|/etc/sysconfig/\$CONFIG_NAME|$ORACLE_BASE/oradata/dbconfig/$ORACLE_SID/$CONFIG_FILE|g" \
       $ORACLE_BASE/oradata/dbconfig/$ORACLE_SID/$INIT_FILE
```
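To see the substitution in action outside a container, here is a self-contained sketch that applies the same pattern to a throwaway file. The file contents and values are made up for demonstration; only the sed technique matches the script above:

```shell
#!/bin/sh
# Demonstration values standing in for docker run arguments:
ORACLE_SID=MYCDB
ORACLE_PDB=MYPDB

# A throwaway stand-in for the init script's hardcoded defaults:
INIT_COPY=$(mktemp)
cat << 'EOF' > "$INIT_COPY"
export ORACLE_SID=ORCLCDB
export PDB_NAME=ORCLPDB1
EOF

# The same placeholder-replacement pattern used at runtime:
sed -i -e "s|export ORACLE_SID=ORCLCDB|export ORACLE_SID=$ORACLE_SID|g" \
       -e "s|export PDB_NAME=ORCLPDB1|export PDB_NAME=$ORACLE_PDB|g" "$INIT_COPY"

cat "$INIT_COPY"   # the defaults are now replaced with the custom values
```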
I’ve also updated the setup to add entries for multiple pluggable databases. I frequently need multiple PDBs for labs, instruction, demonstrations, and feature and upgrade testing, and this pattern can be used to build multi-PDB databases by specifying the appropriate variables as part of a docker run command or docker-compose YAML:
```
# Add TNS entries for each PDB:
PDB_COUNT=${PDB_COUNT:-1}
for ((PDB_NUM=1; PDB_NUM<=PDB_COUNT; PDB_NUM++))
do
cat << EOF >> $ORACLE_HOME/network/admin/tnsnames.ora
${ORACLE_PDB}${PDB_NUM} =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 0.0.0.0)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ${ORACLE_PDB}${PDB_NUM})
    )
  )
EOF
done
```
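Run standalone with stand-in values, the loop produces one alias per PDB. Here is a quick portable check (a POSIX while loop replaces the bash-style for, and the file path and names are examples only):

```shell
#!/bin/sh
# Stand-in values; in the real script these come from docker run:
ORACLE_PDB=MYPDB
PDB_COUNT=3
TNSNAMES=$(mktemp)   # stand-in for $ORACLE_HOME/network/admin/tnsnames.ora

PDB_NUM=1
while [ "$PDB_NUM" -le "$PDB_COUNT" ]; do
cat << EOF >> "$TNSNAMES"
${ORACLE_PDB}${PDB_NUM} =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 0.0.0.0)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ${ORACLE_PDB}${PDB_NUM})
    )
  )
EOF
PDB_NUM=$((PDB_NUM + 1))
done

grep -c "DESCRIPTION" "$TNSNAMES"   # prints 3: one entry per PDB
```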
That represents the majority of the changes necessary for running an RPM installation. The only other script is the health check, which remains largely unchanged from the one included in the Oracle Docker repository.
Results
I find that my RPM-based images build in slightly less time and are a few MB smaller than images that use runInstaller. The resulting Oracle binaries are the same, so the size and time differences presumably come from the installation method itself. I haven’t yet done a deep analysis of the images to confirm this, but I expect there is less filesystem overlay overhead in the RPM images.
If the differences are so slight, why go through the trouble?
For one, because it’s there! I’ve played with a few RPM installations but never got too deeply into their mechanics. There’s a significant difference between running commands and building automation: the first can be as simple as following some directions, while automating a process requires a deeper understanding of its abstractions and variability. It means digging into how things actually work.
I also find it easier to add and apply Release Updates and patches to RPM images. This may be my own mindset, but the Dockerfiles “look” cleaner because the installation is part of the Dockerfile’s yum commands and runs as root, versus breaking it into separate scripts and mixing steps between the oracle and root users.
Extending RPM Images
In Part III of this series I’ll take an RPM installation and merge it with a 12c database image. The result produces a container with two Oracle home directories for practicing and working through upgrade scenarios. Stay tuned!