How to Use Docker to Update Old Production Applications


When working on long-running, continuously expanding projects, software developers often come across code that must be updated for the project to progress. If the code is outdated, preparing the environment and launching certain applications becomes problematic. Docker is an open-source containerization tool, primarily used on Linux and OS X, that lets developers update and deploy apps more quickly inside isolated software containers. These containers do not require separate operating systems: they share the host's Linux kernel, so a single server can run several containers simultaneously.


Docker further speeds up deployment by providing consistent development and production environments. All members of the team can use the same system libraries, language runtimes, and OS. It supports multiple language stacks, including Python, Ruby, Java, and Node, so developers can run a script inside a Docker image rather than installing each language and running scripts separately. Docker's copy-on-write mechanism shares unchanged image layers between containers and copies data only when it is modified. A container can start in less than 0.1 seconds, and its overhead can be under 1 MB of disk space. These are significant improvements over standard VMs.


To help developers take advantage of these many benefits, this article shows how to update an old production application with Docker. The app uses a non-standard Sphinx build, wkhtmltopdf, NPM, and Resque. They will initially be kept within the application and then moved to their own containers.


The goals are to:

  • Keep full compatibility with devs not using Docker.
  • Make the app use environment variables for configuration.
  • Build Sphinx, and install Node.js and wkhtmltopdf, in a base container.
  • Keep the app and Resque in the same container.
  • Simplify the Procfile.
  • Create a base docker-compose configuration file and extend it for the development and CI environments.
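
Concretely, the steps below add a handful of files to the repository (the paths are taken from the snippets later in this article):

```text
Dockerfile              # image definition (Steps 3-7)
docker-compose.yml      # service composition (Step 8)
.docker/.env            # app environment variables
.docker/db.env          # database environment variables
Procfile.docker         # processes foreman runs inside the container
```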


Step 1: Setup


Here is what you’ll need to install Docker and its NFS plug-in for OSX Native, OSX Brew and VirtualBox, or Ubuntu.


For OSX Native:


For OSX via Brew and VirtualBox:


brew install caskroom/cask/brew-cask
brew cask install virtualbox
brew install docker docker-machine docker-compose

# create the vm
docker-machine create -d virtualbox default
# import environment variables for the docker-cli
eval "$(docker-machine env default)"
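
The eval applies the variables that docker-machine env prints to the current shell so the docker CLI can reach the VM; the output is typically similar to the following (the IP and cert path will differ on your machine):

```shell
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://"
export DOCKER_CERT_PATH="/Users/you/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
```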


For Ubuntu: (install docker) (install docker-compose)


Now that you have Docker installed, you can run the application on it. Docker apps are configured via a Dockerfile, which defines how the container is built.
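
In its simplest form, a Dockerfile is just an ordered list of build instructions — a generic sketch (not this app's actual Dockerfile; `bin/start` is a hypothetical command):

```dockerfile
# Base image to build on
FROM ruby:2.1-slim
# Commands run while building the image
RUN apt-get update -qq
# Copy the application code into the image
ADD . /app
# Default command the container runs
CMD ["bin/start"]
```

The full Dockerfile for this app is built up across Steps 3 through 7.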


Step 2: Unify File Configuration


Assume the app you are updating has hard-coded config files under version control. When running the app in Docker, the secrets will no longer be stored inside the image.


When modifying these files to unify the configuration, Hash#fetch-style lookups help: ENV.fetch returns the environment variable's value when it is set, and falls back to the supplied default otherwise:


# database.yml 
development: &default
  username: <%= ENV.fetch('MYSQL_USER', 'foo') %>
  password: <%= ENV.fetch('MYSQL_PASSWORD', 'bar') %>
  host:     <%= ENV.fetch('MYSQL_HOST', '') %>
  database: app_development
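
ENV.fetch mirrors Hash#fetch, which is what makes the defaults above work — a minimal sketch using the same MYSQL_USER/MYSQL_PASSWORD names as database.yml:

```ruby
# ENV.fetch returns the variable's value when it is set...
ENV['MYSQL_USER'] = 'docker_user'
user = ENV.fetch('MYSQL_USER', 'foo')         # => "docker_user"

# ...and falls back to the default when it is not.
ENV.delete('MYSQL_PASSWORD')
password = ENV.fetch('MYSQL_PASSWORD', 'bar') # => "bar"

puts "#{user} / #{password}"                  # prints "docker_user / bar"
```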


# mongoid.yml
      database: app_development
      hosts:
        - <%= ENV.fetch('MONGO_HOST', 'localhost') %>:<%= ENV.fetch('MONGO_PORT', 27017) %>


# sphinx.yml 
development: &default
  address: <%= ENV.fetch('SPHINX_HOST', 'localhost') %>


Step 3: Run Dockerfile


This app uses the lightweight ruby:2.1-slim base image and changes BUNDLE_PATH so the gem bundle can be cached in a volume:


FROM ruby:2.1-slim


RUN apt-get update -qq && \
  apt-get install -y \
    build-essential git \
    libmysqlclient-dev mysql-client libxslt-dev libxml2-dev \
    curl net-tools --no-install-recommends
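
The BUNDLE_PATH change mentioned above does not appear in the snippet; a minimal sketch (/bundle matches the volume mounted in Step 6):

```dockerfile
# Install gems into a volume-backed path so the bundle survives rebuilds
ENV BUNDLE_PATH /bundle
```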


Step 4: Install NPM and Node.js via Node Version Manager


At this point, you explicitly specify the Node version. As mentioned, the application uses a non-standard Sphinx build, so you can't reuse an existing Docker image for this purpose; instead, install Node.js via nvm:


ENV NVM_DIR      "/root/.nvm"
# Example versions; pin the ones your app requires
ENV NVM_VERSION  "0.33.8"
ENV NODE_VERSION "6.11.0"

RUN curl -o- https://raw.githubusercontent.com/creationix/nvm/v${NVM_VERSION}/install.sh | bash \
    && . $NVM_DIR/nvm.sh \
    && nvm install $NODE_VERSION \
    && nvm alias default $NODE_VERSION \
    && nvm use default

ENV PATH      $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH


Step 5: Containerize the App


This step is a temporary solution to keep Sphinx within the application. When you are preparing for production release, Sphinx must be moved to a separate container:



# SPHINX_URL and LIBSTEMMER_URL stand in for the download URLs of the
# tarballs that match your build
RUN curl -o sphinx.tar.gz "$SPHINX_URL" \
    && tar -zxvf sphinx.tar.gz > /dev/null \
    && curl -o libstemmer_c.tgz "$LIBSTEMMER_URL" \
    && tar -xvzf libstemmer_c.tgz > /dev/null \
    && cd sphinx-*/ \
    && cp -R ../libstemmer_c/* ./libstemmer_c \
    && ./configure --with-mysql --with-libstemmer > /dev/null \
    && make > /dev/null && make install > /dev/null


During this step, you also need to clean up extraneous data to reduce the image size:


# Cleanup
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*


Step 6: Mount Bundled Gems Folder Inside Container


Docker supports mount points called “volumes” which let you access data from either the native host or another container. In this case, you can mount your bundled gems folder inside the container:


ENV APP /app
VOLUME /bundle

# Gems
WORKDIR $APP
RUN gem install bundler

ADD Gemfile .
ADD Gemfile.lock .
ADD vendor ./vendor
RUN bundle install --jobs 4 --retry 5


(Installing gems in one of the last layers keeps the earlier, more stable layers cacheable — a Gemfile change only invalidates the final layers — while BUNDLE_PATH points the installed gems at the mounted volume.)


Step 7: Install Node Packages


When Docker performs this step, npm is invoked with root privileges, so npm switches the UID for running package scripts to the one specified by the user config, which defaults to nobody.


Start by setting the unsafe-perm flag to run scripts with root privileges:


ADD package.json .
RUN npm install --unsafe-perm
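
Alternatively, the flag can be baked into the image's npm configuration once instead of being passed on every install — a sketch (equivalent for the npm versions of this era):

```dockerfile
RUN npm config set unsafe-perm true
ADD package.json .
RUN npm install
```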


(For more details on how npm handles scripts run as root, see the npm documentation on the unsafe-perm flag.)


During the last part of this step, you add the app's code, expose the internal port 3000, and process any incoming commands through the entrypoint script in bin/:


# App
RUN mkdir -p $APP
ADD . $APP
EXPOSE 3000

ENTRYPOINT bin/ $0 $@


Step 8: Create docker-compose.yml File


Before executing any code in the app’s container, you need to make sure that the old PID files have been deleted and the database is available:


bundle check || bundle install --jobs 4 --retry 5

# pid files to remove before boot (e.g., tmp/pids/server.pid)
pids=( )
for path in "${pids[@]}"; do
  if [ -f "$path" ]; then
    rm "$path"
  fi
done

echo "[INFO] Running in app: $@"
exec "$@"


Now you’ll use a small wait loop to hold execution until the DB is ready:


echo "[INFO] Waiting for mysql"

until mysql -h"$MYSQL_HOST" -P3306 -u"$MYSQL_ROOT_USER" -p"$MYSQL_ROOT_PASSWORD" -e 'show databases;'; do
  >&2 printf "."
  sleep 1
done

echo "[INFO] Mysql ready"
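
The same pattern can be factored into a reusable helper — a sketch (the retry function and its max/delay parameters are names introduced here, not from the original script):

```shell
#!/bin/sh
# retry MAX DELAY CMD...: run CMD until it succeeds, at most MAX times,
# sleeping DELAY seconds between attempts. Returns non-zero on exhaustion.
retry() {
  max="$1"; delay="$2"; shift 2
  n=1
  until "$@"; do
    [ "$n" -ge "$max" ] && return 1
    n=$((n + 1))
    sleep "$delay"
  done
}
```

This bounds the wait instead of looping forever, e.g. `retry 30 1 mysql -h"$MYSQL_HOST" -P3306 -u"$MYSQL_ROOT_USER" -p"$MYSQL_ROOT_PASSWORD" -e 'show databases;'`.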


Next, you need to integrate the app with MySQL, Mongo, and Redis, wire up the volumes, and expose the environment variables.


As an additional step, you can create a docker-compose-base.yml that the compose files for dev and CI/prod extend.


Many of the images on Docker Hub accept environment variables for configuration. In this case, you want to configure MySQL's root user and expose that data to the application container, along with the Redis and Mongo URLs:


# .docker/.env (example values; adjust for your app)
MYSQL_HOST=db
MYSQL_USER=root
MYSQL_PASSWORD=password
MONGO_HOST=mongo
MONGO_PORT=27017
REDIS_URL=redis://redis:6379

# .docker/db.env (example values)
MYSQL_ROOT_USER=root
MYSQL_ROOT_PASSWORD=password



The docker-compose.yml includes the application service, along with the images for MySQL, Redis, and Mongo:


# docker-compose.yml
version: '2'

services:
  app:
    build: .
    command: ./bin/
    volumes:
      - .:/app
      - bundle:/bundle
    ports:
      - '3000:3000'
      - '4000:4000'
    env_file:
      - ./.docker/db.env
      - ./.docker/.env
    depends_on:
      - db
      - redis
      - mongo

  db:
    image: mysql:5.7
    expose:
      - 3306
    env_file:
      - ./.docker/db.env
    volumes:
      - mysql:/var/lib/mysql

  redis:
    image: redis:latest
    volumes:
      - redis:/data

  mongo:
    image: mongo
    volumes:
      - mongo:/data/db

volumes:
  bundle:
  mysql:
  redis:
  mongo:


Procfile.docker lists the processes foreman runs inside the app container:


# Procfile.docker
web: bundle exec rails s -b -p 3000 -e ${RACK_ENV:-development}
react: npm start
resque: QUEUE=... bundle exec rake resque:work
sphinx: bundle exec rake ts:start NODETACH=true

The start script the app service runs (the ./bin/ command above) boots everything via foreman:


# exit whenever anything returns a non-zero exit code
set -e


# echo "[INFO] Precreate databases..."
# rake db:version || bundle exec rake db:setup

# echo "[INFO] Running db:migrate..."
# rake db:migrate

bundle exec foreman start -f Procfile.docker


At this point, you have completed all the steps needed to create your new Docker container and can move on to starting and running the application.


Step 9: Start and Run the Application


Your entry point:


docker-compose up


Point your browser to http://localhost:3000 (with native Docker), or to port 3000 on the Docker machine's IP (with VirtualBox). To make sure your IP is correct, run:


docker-machine ip default


If you want to open a bash shell in the app container or run a custom command, don’t forget to map the service’s ports to the host by passing the --service-ports option to docker-compose run:


docker-compose run --service-ports app bash


Here’s what you need to run the specs:


docker-compose run app bundle exec rspec


Step 10: Prepare to Release the App to Production


  1. Move application logs to STDOUT/STDERR. During the updating process, logs are saved to the container’s default file system.
  2. Integrate the STDOUT/STDERR logs into a centralized logging system, usually a third-party service.
  3. Move Sphinx, Node.js, Resque, wkhtmltopdf, and all schedule-based tasks and processes to their own containers.
  4. Create a compose file for CI and prod.


Containerizing outdated apps with Docker is becoming increasingly popular due to its many benefits. It speeds up the entire development process while improving portability, transparency, and security.
