In my last post, I talked about going with Docker hosted by DigitalOcean. This post will walk through the setup and initial design of my Docker app. I wanted to go with something that would support future growth and scalability as I play around with multiple languages and technologies. I’ll be building this from the perspective of working on a Mac, but it should be almost the same for Windows/Linux. Also, I’ll be using VS Code as my IDE, because IT’S AWESOME.

Design

I’ll be using Docker Compose for setting up the whole environment. This makes it really easy to set up each application and keep track of everything running in your environment. Docker Compose is like a checklist of your environment!

Each app will be set up via a Dockerfile and added to Docker Compose.

I’ll add a diagram here, soon.

Nginx

I’ll be using Nginx (pronounced “Engine-X”) as a Static Web Server, Reverse Proxy, Load Balancer, and for some basic security. If you’re not familiar, a reverse proxy is often used like a routing table. In this example, any /api/ requests will be sent to the Node servers; all other requests will be sent to the static content directories. This makes it easy to add any other fun stuff like .Net Core, Ruby, or Python servers with some simple routing! The load balancing support is pretty amazing as well. You can set different weights for each server, prioritizing based on load, connections, etc., all in Nginx. Piece of cake!
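For example, weighted load balancing in an upstream block looks roughly like this (just a sketch, not the exact config we’ll build later):

upstream nodeapp {
    server node1:8080 weight=3;    # receives roughly three times the traffic of node2
    server node2:8080 weight=1;
}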

NodeJs

Up next is NodeJs! Why?


In this example, I’ll be setting up three different server nodes to distribute the load. (Note: this is just on one VPS, so the nodes will be sharing server resources. I may put up an example of how to set up multiple servers/droplets and load balance them. Also, for anyone curious, from some benchmarking, it does offer better performance with multiple nodes even though they are on the same VPS.)

MongoDB and Redis

MongoDb will be the database of choice. NoSQL, baby! Personally, I come from a strong SQL background. If you’re also coming from a relational data background and grumble at these new ‘fads’, you should give it a chance. It’s not the perfect solution to every problem, but it’s definitely eye-opening! There are several other NoSQL options, but this one is pretty easy to use and will get the job done for this app.

Redis will be used for some basic caching. It’s really easy to use and ties in nicely with NodeJs. As a side note, for .NET apps, there are client libraries with LINQ-style query support!


Jekyll

I’ll be building the static content/blog using Jekyll. It’s based on Ruby, extremely fast, and very lightweight! There’s no database needed or dynamic content rendering.

I’ve been playing around with some Node based blogs such as Ghost and Hexo. Even though I’m more familiar with JavaScript than Ruby, Jekyll seemed simpler than Ghost, which has a database backend, and Hexo, which is still maturing.

Let’s get started. Docker up!

If you don’t have Docker setup yet, just go download the lovely client for your system. This will give you support for the docker commands and will start up your local environment. Next, let’s create a new app directory. On the Mac, you can navigate to where you want to create the directory. Personally, I’m going to create mine under ~/code/. Just pop open Terminal and type:

mkdir dockerdemo
cd dockerdemo

Now, let’s start up a basic Compose file. To create a new docker compose file, type:

touch docker-compose.yml

Ok, let’s run it! Type:

docker-compose up --build

Congratulations! You built a Docker app that does absolutely nothing! For future reference, this is the only command really needed to start up our whole environment.

Developers, start your Nginx’s!

For starters, let’s put all of the apps in a new sub-folder called docker-images:

mkdir docker-images
cd docker-images

This is the point where I open the folder with VS Code and create any new folders/files in the IDE (I’m personally too lazy to use the Terminal), but I’ll type out the commands so you can be lazy too. Now create another folder under docker-images called nginx.

mkdir nginx
cd nginx

At this point, we will create a Dockerfile. This is essentially a script with commands to build and set up a docker image. Important! There’s no extension; it’s just a file named Dockerfile.

touch Dockerfile

Now, let’s build out the Dockerfile! Just paste this into the file:

# Set nginx base image
FROM nginx:latest

# File Author / Maintainer
MAINTAINER Ermish

# Delete the default configuration
RUN rm -v /etc/nginx/nginx.conf

# Copy custom configuration file from the current directory
COPY nginx.conf /etc/nginx/nginx.conf

You can change the maintainer to your name, username, or your favorite superhero. Alright, let’s break this down. The FROM command will pull from Docker Hub by default, which is where most public images are located. These images are pre-built on a Linux distro and often configured as well as, or better than, you could do yourself. The :latest tag pulls the most recent version of the image; tags make it easy to control which version of the image you want to use and make it super easy to update your app, since someone else did the configuration and compatibility work for you! Thanks, bro!
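As a side note, if you want repeatable builds instead of always grabbing the newest image, you can pin a specific version tag instead (nginx:1.11 here is just an illustration):

# Pin a specific version instead of the moving :latest tag
FROM nginx:1.11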


The RUN command is used to remove the default Nginx configuration file. Finally, the COPY command is used to copy over our server configuration (hang on, it’s next…). If you’re curious why the config settings aren’t baked into the Dockerfile, this is how we can separate different environment configurations without having to change our Docker image.

Here’s the config file for nginx:

worker_processes 4;

events { worker_connections 1024; }

http {
        include  /etc/nginx/mime.types;

         upstream nodeapp {
               least_conn;
               server node1:8080 weight=10 max_fails=3 fail_timeout=5s;
               server node2:8080 weight=10 max_fails=3 fail_timeout=5s;
               server node3:8080 weight=10 max_fails=3 fail_timeout=5s;
         }
         
        server {
              listen 80;

              #root /src/static;
              root /data/jekyll_demo;

              location = /favicon.ico { access_log off; log_not_found off; }
              location = /robots.txt  { access_log off; log_not_found off; }
              location = /sitemap.xml { access_log off; log_not_found off; }

             # location ~* \.(png|jpg|jpeg|gif|ico|woff|ttf|svg|eot|otf)$ {
             #     add_header "Access-Control-Allow-Origin" "*";
             #     expires 1M;
             #     access_log off;
             #     add_header Cache-Control "public";
             # }

              location ~* /assets/ {
                  add_header "Access-Control-Allow-Origin" "*";
                  expires 1M;
                  access_log off;
                  add_header Cache-Control "public";
              }

               location /api/ {
                 proxy_pass http://nodeapp;
                 proxy_http_version 1.1;
                 proxy_set_header Upgrade $http_upgrade;
                 proxy_set_header Connection 'upgrade';
                 proxy_set_header Host $host;
                 proxy_cache_bypass $http_upgrade;
               }

              location ~* / {
                    try_files $uri $uri.html $uri/ =404;
              }
        }
}

This file allows us to do some serious magic! This will allow us to load balance between the three node servers, which we will spin up later. All requests will point to the static files unless the URL contains /api/, in which case they will be routed to the node servers. At this point there’s no security and everything is public for the static Jekyll content. For the API/data requests, additional security can be placed within the NodeJs API.
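To make the routing concrete, here’s roughly how requests will behave once the full stack is up (we’ll map port 5000 to Nginx in the compose file later; /api/users is just a hypothetical endpoint):

# Served by Nginx straight from the static Jekyll volume
curl http://localhost:5000/

# Matches the /api/ location, so Nginx proxies it to one of the node containers
curl http://localhost:5000/api/users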

Mongo and Redis, you’re up!

It’s time to make some more folders and Dockerfiles!

Let’s start with Redis and make another folder and Dockerfile under the docker-images folder:

cd ..
mkdir redis
cd redis
touch Dockerfile

Now, let’s build out this Dockerfile:

# Set the base image to Ubuntu
FROM        ubuntu

# File Author / Maintainer
MAINTAINER Ermish

# Update the repository and install Redis Server
RUN         apt-get update && apt-get install -y redis-server

# Expose Redis port 6379
EXPOSE      6379

# Run Redis Server
ENTRYPOINT  ["/usr/bin/redis-server"]

Hokay so, this is about as default as you can get. We’ll run Redis on the default port 6379 and use the default config. If you need custom settings, just COPY a redis.conf into the image and point redis-server at it in the ENTRYPOINT.
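Something like this (just a sketch, assuming a redis.conf sitting next to the Dockerfile):

# Copy a custom config into the image
COPY redis.conf /etc/redis/redis.conf

# Start Redis with the custom config
ENTRYPOINT  ["/usr/bin/redis-server", "/etc/redis/redis.conf"]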

Alright, same thing for MongoDB:

cd ..
mkdir mongodb
cd mongodb
touch Dockerfile

Here’s the Dockerfile:

# Set the base image to Ubuntu
FROM        ubuntu

# File Author / Maintainer
MAINTAINER Ermish

# Update the repository and install MongoDB
RUN         apt-get update && apt-get install -y mongodb

# Create the default data directory
RUN         mkdir -p /data/db

# Expose MongoDB port 27017
EXPOSE      27017

# Run MongoDB Server
ENTRYPOINT  ["/usr/bin/mongod"]

(I’ll talk more about MongoDB configuration here, soon.)
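In the meantime, custom MongoDB settings work the same way as Redis: COPY a config file into the image and point mongod at it. A sketch, assuming a mongodb.conf next to the Dockerfile:

# Copy a custom config into the image
COPY mongodb.conf /etc/mongodb.conf

# Start mongod with the custom config
ENTRYPOINT  ["/usr/bin/mongod", "--config", "/etc/mongodb.conf"]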

Node Nodes! Time to set up a NodeJs cluster.

Same old, same old:

cd ..
mkdir node
cd node
touch Dockerfile

Here’s the Dockerfile:

# latest official node image
FROM node:latest

MAINTAINER Ermish

# Install nodemon
RUN npm install -g nodemon 

# Expose port
EXPOSE  8080

WORKDIR /data/node_demo/

# Run app using nodemon
CMD ["nodemon", "server.js"]

So let’s break this dockerfile down:

  • Pull the latest Node image. This essentially installs Node and npm.
  • Expose our node server port.
  • Point to our code and spin up our Nodemon server! (If you’re wondering what to do with your source code, don’t worry, it’s coming up.)

Wait a minute…what’s Nodemon?

Nodemon is a node wrapper that is similar to, but can also complement, forever. They both essentially serve the purpose of keeping your node server alive. The main difference I’ve seen is that Nodemon can watch your files and rebuild the server when you make any changes. This is top-notch when you’re doing development! It will also ensure your app doesn’t spin down for any reason, including exceptions getting thrown. If the app crashes for any reason, it will retry starting it up to a certain fail count.

Forever, on the other hand, is closer to a production node server wrapper, I think. It does almost everything the same minus the file watching/rebuilding.

Let’s make a data container together, baby!

Data containers can be used several different ways. Deciding how to set up and use data containers has probably been the toughest decision throughout this setup. In this example, I’m using them as containers that hold source code that is compiled outside of the container and checked into source control already compiled. This is the way I can ensure I have the same consistent compiled code base throughout all environments. Environment variables, such as connection strings, can be passed into the container without having to change any code.

Another common data container design is to have the data container be the actual build server. It can have a Github hook and compile the code each time a checkin is made. Having a build server container is awesome, but I think the actual files should still be in a separate container from the build server and rebuilt each time. I’ll skip the build server in this example.

There are two common ways to build the data container. One is to create a dockerfile that simply copies the files and creates a docker volume from them. The other option is to create them directly in the docker compose file which is the method I will be going with. Some people say that the dockerfile is a good separation of concerns, but since these data containers are only built for this app, I think skipping the dockerfile simplifies things.
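For reference, the Dockerfile flavor (the option I’m skipping) would look something like this, assuming the pre-built site sits in ./dist next to the Dockerfile:

# Tiny base image; the container only exists to hold files
FROM busybox:latest

# Copy the pre-built static site into the image
COPY ./dist /data/jekyll_demo

# Expose the directory as a volume for other containers to mount
VOLUME /data/jekyll_demo

# The container doesn't need to do anything
CMD ["true"]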

Let’s do some composing


Remember that command that we ran in the beginning? It looked like this:

touch docker-compose.yml

Well, it’s time to make that file actually do something! Here’s the file completely built out:

version: "2"

services:
  node_demo:
    image: busybox:latest
    command: ["true"]
    volumes:
        - ../node_demo/dist:/data/node_demo #for node

  jekyll_demo:
    image: busybox:latest
    command: ["true"]
    volumes:
        - ../jekyll_demo/dist:/data/jekyll_demo #for jekyll

  nginx:
    build: ./docker-images/nginx
    image: dockerdemo_nginx
    env_file: docker_env.env
    ports:
        - 5000:80
    links:
        - node:node1
        - node2:node2
        - node3:node3
    volumes_from:
        - node_demo
        - jekyll_demo

  mongodb:
    build: ./docker-images/mongodb
    image: dockerdemo_mongodb
    env_file: docker_env.env
    ports:
        - 27017:27017
    volumes_from:
        - node_demo
        - jekyll_demo

  redis:
    build: ./docker-images/redis
    image: dockerdemo_redis
    env_file: docker_env.env
    ports:
        - 6379:6379
    volumes_from:
        - node_demo
        - jekyll_demo

  node:
    build: ./docker-images/node
    image: dockerdemo_node
    env_file: docker_env.env
    # ports:
    #      - 5060:8080
    links:
        - mongodb
        - redis
    volumes_from:
        - node_demo
        - jekyll_demo
  node2:
    build: ./docker-images/node
    env_file: docker_env.env
    # ports:
    #      - 5061:8080
    links:
        - mongodb
        - redis
    volumes_from:
        - node_demo
        - jekyll_demo
  node3:
    build: ./docker-images/node
    env_file: docker_env.env
    # ports:
    #      - 5062:8080
    links:
        - mongodb
        - redis
    volumes_from:
        - node_demo
        - jekyll_demo

So let’s break this compose file down:

  • Each of the dockerfiles that we have set up will now be built into an Image and hosted in a Service.
  • Let’s talk about those data containers some more. node_demo and jekyll_demo do not have dockerfiles, so essentially we use the same commands here to build an Image.
    • First we build from an image.
    • Docker requires the container to run a command or have an ENTRYPOINT. So here, we just pass in true so it does nothing.
    • Then we will create a docker Volume. I think the easiest way to explain this is to imagine a volume as a “network drive” for the services. All of the other services can read/write data from this volume. The way volumes handle data can be tricky and is better explained in the Docker documentation.
  • Up next are the rest of the services. These will use the dockerfiles that we built out earlier. I’ll explain these commands:
    • Build builds the dockerfiles we’ve been setting up.
    • env_file passes in any environment variables you have (connection strings, localization, etc.). There’s a sketch of what that file might look like after this list.
    • ports allows you to forward a port to listen on. Important! In the example 5062:8080, the value on the right (8080) is the port inside the container, set in the Dockerfile; the value on the left (5062) is the host port set in the compose file.
    • links allows one app/dockerfile to be aware of the other. In this example, we want the node servers to have access to the redis server.
    • volumes_from allows access to volumes created by other services/dockerfiles.
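As promised, here’s a sketch of what docker_env.env might contain. The key names below are just placeholders; the format is plain KEY=value lines:

NODE_ENV=production
MONGO_URL=mongodb://mongodb:27017/demo
REDIS_HOST=redis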

Ermergerd, it works!


That will achieve the same thing as using the docker CLI to individually build and set up each service manually, one at a time. Instead, the docker compose file allows us to “script” out all of these steps in one easy-to-read file.

To actually run the compose file do this:

docker-compose up --build

This will run the services, but soon you will run into issues with cached images and volumes being invalid. As a result, I have a few small shell scripts I run instead that will delete any containers, images, and volumes that exist. Be careful, as this essentially wipes anything Docker-related running on the server! However, this is my favorite part of Docker! I treat the docker compose file as the exact image of what will be on the server. If I want to spin up any additional servers, I can just run the same compose file again and KNOW it will be identical.

Here’s the scripts:

docker-rebuild.sh

This script essentially deletes everything, but does not start the server.

# Delete all containers
docker rm $(docker ps -a -q) --force
# Delete all images
docker rmi $(docker images -q) --force

# Remove any dangling images not in the compose file
docker rmi $(docker images -f "dangling=true" -q)

docker-compose pull
docker-compose build

docker-start.sh

This script just starts the server and builds any images.

docker-compose build
docker-compose up

docker-stop.sh

This script stops the server.

#stop containers
docker stop $(docker ps -a -q)

Awesome! What about CI? (continuous integration)

There are so many ways to achieve CI with docker. The way I ended up going was to use Docker Hub. They give you a free private repository! Since I had several different containers I wanted to use, I ended up using tags as a way to store different images in the same repository. Essentially, my CI ended up looking like this:

  • Run the docker-rebuild.sh script
  • Tag the images
  • Push these images into my docker hub repository
  • Have docker hub run a script on my server to:
    • stop the containers
    • pull down the newly tagged images in my repository
    • rebuild the server
    • start the services again

Here’s the script for pushing the images to Docker Hub:

docker tag mydemoapp_node_demo ermish/mydemoapp:mydemoapp_node_demo
docker tag mydemoapp_jekyll_demo ermish/mydemoapp:mydemoapp_jekyll_demo
docker tag mydemoapp_redis ermish/mydemoapp:mydemoapp_redis
docker tag mydemoapp_node ermish/mydemoapp:mydemoapp_node
docker tag mydemoapp_nginx ermish/mydemoapp:mydemoapp_nginx

docker push ermish/mydemoapp
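The server-side script that Docker Hub kicks off isn’t shown here, but a minimal sketch of it (following the steps in the list above; the exact commands are my assumption) might look like this:

# Stop the running containers
docker-compose stop

# Pull the freshly tagged images from the repository
docker pull ermish/mydemoapp:mydemoapp_nginx
docker pull ermish/mydemoapp:mydemoapp_node
docker pull ermish/mydemoapp:mydemoapp_redis
docker pull ermish/mydemoapp:mydemoapp_node_demo
docker pull ermish/mydemoapp:mydemoapp_jekyll_demo

# Rebuild and start the services again
docker-compose build
docker-compose up -d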

That’s all folks!


Hope you enjoyed the tutorial, and feel free to reach out with any questions!