Building a multi-container app
For development environment setup only
In real-world applications, more than one container is used to build a complete application, with each container assigned a specific function. One of the most common setups is one container for the front-end, another for the back-end, and a third for the database.
The image below illustrates the schema of the multi container app that we built through the course.
Dockerizing the MongoDB Service
We can dockerize the MongoDB database by using the official image found on Docker Hub. On the Windows command line, the ^ character allows us to write a command over multiple lines. This will create a MongoDB database.
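A minimal sketch of such a command, assuming we name the container mongodb (the ^ continuation works in the Windows command line; use \ on macOS/Linux):

```shell
docker run --name mongodb ^
  --rm -d ^
  -p 27017:27017 ^
  mongo
```

The -d flag runs the container detached, --rm removes it once stopped, and -p 27017:27017 publishes MongoDB's default port so applications on localhost can reach it.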
Dockerizing the back-end application
Step 1: Create a Dockerfile
The back-end application runs on Node.js. The first step is to create a Dockerfile and build the image from it. Our image will be based on the official node image.
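A sketch of such a Dockerfile, based on the official node image (the exposed port and file names are assumptions for a typical Node.js setup):

```dockerfile
FROM node
WORKDIR /app
# Copy package.json first so npm install is cached between builds
COPY package.json .
RUN npm install
COPY . .
EXPOSE 80
CMD ["node", "app.js"]
```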
If MongoDB is running on our localhost, we have to make some changes to the code in the app.js file for our back-end to be able to communicate with the database.
This line below:
'mongodb://localhost:27017/course-goals'
should be replaced by
'mongodb://host.docker.internal:27017/course-goals'
Step 2: Building the Dockerfile
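Assuming the Dockerfile sits in a backend folder and we tag the image goals-node (both names are assumptions), the build command could look like:

```shell
docker build -t goals-node ./backend
```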
Step 3: Running the docker container based on the image
Here, port 80 is published, which allows the front-end to communicate with the back-end.
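A sketch of the run command, assuming the image and container names used above:

```shell
docker run --name goals-backend --rm -d -p 80:80 goals-node
```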
Dockerizing the front-end application
Step 1: Create a Dockerfile
As for the back-end application, the front-end app requires a Dockerfile that will be used as a template for our image. Even if the front-end does not use Node.js explicitly, React relies on Node.js to be able to serve our SPA to the browser. This is why our front-end image is also based on an official node image. The npm start command will run the script specified in the package.json file, which starts React.
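A sketch of the front-end Dockerfile, assuming React's default development port 3000:

```dockerfile
FROM node
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
# Runs the "start" script from package.json (the React dev server)
CMD ["npm", "start"]
```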
Step 2: Building the image
We can build the front-end image by running:
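Assuming the code lives in a frontend folder and we tag the image goals-react (both names are assumptions):

```shell
docker build -t goals-react ./frontend
```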
Step 3: Running the front-end container
The configuration of the application requires the -it flag to be specified when running the container; otherwise, the container will start and then stop immediately.
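A sketch with the assumed names; -it keeps an interactive terminal attached, which the React dev server needs to stay alive, even when combined with -d:

```shell
docker run --name goals-frontend --rm -d -it -p 3000:3000 goals-react
```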
Live Source Code Update
We can create a bind mount mapping the local ./front-end/src folder to the /app/src folder in the container, so that our changes are reflected automatically within the application without having to manually rebuild the image.
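With the names assumed above, the run command with the bind mount could look like:

```shell
docker run --name goals-frontend --rm -d -it \
  -p 3000:3000 \
  -v "$(pwd)/front-end/src:/app/src" \
  goals-react
```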
Because the front-end is built upon React, all the modules needed for the application to pick up source-code modifications whenever they happen are already installed (no need for nodemon, for example).
If using WSL2 on Windows, our project files should be located inside WSL2 directories. See article.
Optimization
In the steps above, we have dockerized the three main components of our application (the database, the front-end and the back-end). However, our setup is not yet optimal, for two main reasons:
The containers communicate with each other through localhost, since we published specific ports.
We have not implemented any data persistence solution, meaning that our data will be lost each time a container gets removed.
Docker network creation
For our containers to be able to communicate with each other, we can create a network. Our containers will be part of that network.
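For example, assuming we call the network goals-net:

```shell
docker network create goals-net
```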
Database
Now, we can run our container again, but this time without needing to publish the port. Our containers will communicate with each other inside the network, but won't be able to communicate with our localhost. We specify the --network flag when running a container to state which network the container should be part of.
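For the database, assuming the network name goals-net:

```shell
docker run --name mongodb --rm -d --network goals-net mongo
```

Note there is no -p flag anymore: inside the network, other containers reach MongoDB directly on port 27017.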
Back-End
For the back-end, we cannot simply run the container again as we did for the MongoDB database. We first need to modify app.js, because the connection string used previously won't apply anymore: our back-end will now communicate with the database inside the network and not through localhost. We can then rebuild our back-end image, as we made some modifications to the code, and run a container based on that new image.
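Within a Docker network, containers can address each other by container name, so the connection string becomes 'mongodb://mongodb:27017/course-goals' (assuming the database container is named mongodb). The rebuild and run steps could then look like:

```shell
docker build -t goals-node ./backend
docker run --name goals-backend --rm -d --network goals-net goals-node
```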
Front-end
To add the front-end to the network, we also need to make some modifications to the app.js file of the front-end application, since the front-end will now communicate with the back-end inside the network and no longer via localhost.
Then, we can rebuild our front-end image, as we made some modifications to the code, and run the front-end container based on that new image. However, for the front-end, we still need to publish a port, since we want to communicate with the interface through our browser.
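A sketch with the names assumed so far:

```shell
docker build -t goals-react ./frontend
docker run --name goals-frontend --rm -d -it -p 3000:3000 --network goals-net goals-react
```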
We might expect our application to work correctly. However, we get an error!
Specificity for React JS Applications
This error is due to how React works. React runs the code entirely in the browser, not on the server in the container environment. Therefore, the browser is not able to resolve the goals-backend container address and cannot communicate with the back-end container application. Because of how React works, the front-end and back-end of this specific React application have to communicate through localhost, which is an address the browser can understand.
Then, we can rebuild the front-end image and run it again. This time, the front-end does not need the --network flag, since React applications do not care about what is happening in the container environment: they run exclusively in the browser.
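Consequently, the back-end must publish its port again so the browser can reach it via localhost, while the front-end can run without the network. A sketch with the assumed names:

```shell
docker run --name goals-backend --rm -d -p 80:80 --network goals-net goals-node
docker run --name goals-frontend --rm -d -it -p 3000:3000 goals-react
```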
Adding data persistence to the multi-container app
In our example so far, we have not implemented any data persistence solution yet. In this section, we want:
to save our data even if the mongodb container is removed.
to be able to change the source code of the application without having to rebuild the images every single time.
to restrict the access to the mongodb database.
Data persistence for the mongodb container
Looking into the documentation, we can learn that the mongodb container stores its data at the location /data/db. This path will be used to create our named volume. The -v flag in this command creates a named volume, mapping the volume name we specify to the internal path containing the data we want to save.
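Using the volume name mongo-volume shown below, the command could look like:

```shell
docker run --name mongodb --rm -d --network goals-net \
  -v mongo-volume:/data/db \
  mongo
```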
The image below shows that we were able to create the mongo-volume named volume.
If we stop the mongodb container (which will also remove it because of the --rm
flag), and re-run it specifying the same volume, our data will be restored.
Securing the database
For the mongodb
container, we can specify environment variables to secure our database with a username and password (see documentation).
MONGO_INITDB_ROOT_USERNAME, MONGO_INITDB_ROOT_PASSWORD
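A sketch of the run command with placeholder credentials:

```shell
docker run --name mongodb --rm -d --network goals-net \
  -v mongo-volume:/data/db \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=secret \
  mongo
```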
Then, we need to specify the database username and password to the back-end container app. We can do this by putting the authentication information in the connection string in the back-end app.js file.
Note that we should NEVER put credentials in our source code. The credentials should be stored in an environment variable file and when specifying the username and password the source code should point to this file. This example was only to show the syntax.
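For syntax illustration only, with placeholder credentials and assuming the database container is named mongodb (the ?authSource=admin query parameter is needed when authenticating as the root user), the connection string would look like:

'mongodb://admin:secret@mongodb:27017/course-goals?authSource=admin'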
Alternatively, we can store the default MongoDB username and password in environment variables in the Dockerfile, which will be available within the running container:
We can use these environment variables within our source code by dynamically setting the credentials inside the MongoDB connection string in app.js.
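A sketch of those Dockerfile lines (variable names and defaults are assumptions); in app.js they could then be read via process.env when building the connection string:

```dockerfile
ENV MONGODB_USERNAME=root
ENV MONGODB_PASSWORD=secret
```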
Once we have changed our connection string, we should rebuild our back-end image to pick up the changes and then re-run it. When running our container, we use the -e flag to set the environment variable values.
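For example, assuming the variable names above and placeholder values:

```shell
docker run --name goals-backend --rm -d --network goals-net \
  -e MONGODB_USERNAME=admin \
  -e MONGODB_PASSWORD=secret \
  goals-node
```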
Data persistence for the back-end's logs and source code
We want the log data to persist beyond the removal of the container. For that, we will create a named volume. We also want any modifications to the source code to be reflected automatically in the container. For that, we will create a bind mount.
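The two requirements above can be combined in one run command (names are assumptions; the extra anonymous volume on /app/node_modules prevents the bind mount from overwriting the node_modules folder installed inside the image):

```shell
docker run --name goals-backend --rm -d --network goals-net \
  -v logs:/app/logs \
  -v "$(pwd)/backend:/app" \
  -v /app/node_modules \
  goals-node
```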
Live source code update - Add nodemon dependency
Adding the nodemon dependency will restart the node server automatically whenever any change is made within the source code. This lets developers see the changes reflected in the app without having to manually restart the container.
The image below shows which two lines of code we added in the package.json
file from the backend folder.
On Windows, for nodemon to work we need to add "start":"nodemon -L app.js"
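A sketch of the relevant package.json fragment (the nodemon version is an assumption):

```json
{
  "scripts": {
    "start": "nodemon app.js"
  },
  "devDependencies": {
    "nodemon": "^2.0.4"
  }
}
```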
Then, in the Dockerfile inside the backend folder, we have to modify the CMD line so that this script is executed when a container is started from the image.
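The CMD line would then become:

```dockerfile
CMD ["npm", "start"]
```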
Since we changed the Dockerfile, we need to rebuild the image for this modification to apply and then, re-run the container with the same flags and options.
Adding a .dockerignore file
There are some files and folders that we may not want to be copied from the localhost to the image when building it. This is where the .dockerignore file comes in.
In the example below, when building our back-end image from the Dockerfile we created, the COPY instruction will copy all files and folders from the localhost backend folder. However, some files do not need to be copied into the newly created image. We can exclude files/folders by listing them in a .dockerignore file. This should also speed up image creation.
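A typical .dockerignore for this back-end could look like (entries are assumptions):

```
node_modules
Dockerfile
.git
```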
On the same basis, we can also add a .dockerignore
file in the frontend folder with the same specified files and folders.