Anyone who has had to collaborate with developers on different operating systems knows the strange bugs and issues that come up from OS-specific differences.
Furthermore, managing the dependencies and configurations needed for a production deployment can be extremely time-consuming.
Docker fixes these issues by packaging your application, along with its dependencies and configuration, into containers that run the same way on every machine.
There's also the benefit of the ecosystem of Docker images available on Docker Hub, which let you start from solid baselines.
You can learn more about the history of containers here:
Due to the aforementioned configurability of containers, and the power of Docker Compose, it's easier to isolate the different services of your application and integrate them together.
Docker Allows You To Bake In All Your Configurations
For instance, you could have an NGINX router that passes requests to either your Gatsby client or your API depending on the request type, and then serve your Gatsby client with another NGINX server configured to optimize serving static files (such as by leveraging Brotli compression).
This application utilizes the following containers:
These can be connected together via Docker Compose. Instead of having to create the NGINX setup via the terminal on your production/development/testing server, you can bake it into the containers.
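As a rough illustration, the router piece could be a config along these lines. This is only a sketch, not part of the app we build in this tutorial; the service names client and api, and their ports, are assumptions that would have to match your Compose services.

# nginx.conf for the router container
server {
    listen 80;

    # Send API traffic to the API container
    location /api {
        proxy_pass http://api:5000;
    }

    # Everything else goes to the Gatsby client container
    location / {
        proxy_pass http://client:8000;
    }
}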
You can learn more about Docker Compose here.
Docker Makes It Easy To Scale
Containerizing every piece of your application allows you to easily test and deploy them as part of a CI/CD pipeline, especially considering the support Docker has across all major cloud providers.
And if your application has intense requirements and needs to be highly scalable, Docker lets you leverage Kubernetes, which enables even more powerful composition and management of your services.
If you already have these installed, you can skip to the Gatsby Overview section.
The following section goes through the installation steps for everything we'll need, walking through the options available on all of the major operating systems.
On Windows, you can install Node and NPM either by stepping through the installer wizard or by installing packages that let you manage the specific versions of Node and NPM you're using.
There are two major packages that allow you to achieve this on Windows:
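Whichever version manager you pick, day-to-day usage looks roughly the same. For example, with nvm-windows (one commonly used option; this assumes you've already installed it, and the version number is just a placeholder):

nvm install 18.17.0    # download a specific Node version (NPM comes bundled with it)
nvm use 18.17.0        # switch the active version
node -v                # confirm which version is now active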
Then, you will have to install Docker on the computer you will be working from. On Windows, you can install the Docker Desktop application from the Docker Docs.
Note On Docker For Windows
The installation of Docker for Windows requires a Windows edition with Hyper-V support (such as Windows 10 Pro), which many developers do not have or do not want to pay for.
In this case, there are three options available to you to set up Docker on your computer:
On a Mac you can also step through the installer wizard, similarly to Windows, or install a Node version manager.
On UNIX-based systems (such as macOS and Linux), you also have access to package managers that aren't available on Windows.
Then you will have to install Docker, which depends heavily on the specific OS and distribution that you have.
Macs
If your working computer is a Mac, I would advise installing the Docker Desktop application; it is the easiest way to get set up and working.
Linux
If your working computer runs Linux, you can follow the installation steps for your specific Linux distribution.
Unlike on Mac and Windows, you won't have a Docker Desktop application; you'll only have the Docker engine, which offers the same functionality just without the GUI.
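As one example (and only an example: check the official Docker docs for your distribution), many distributions can install the engine with Docker's convenience script:

# Download and run Docker's convenience install script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Start the daemon on boot and verify the installation
sudo systemctl enable --now docker
sudo docker run hello-world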
Once you have Node and NPM installed on your computer, you can install the Gatsby CLI by running the command:
npm install -g gatsby-cli
Although we will run the actual Gatsby server on a Docker container, we will use the CLI to generate a basic site from the available Gatsby Starters.
We will be working off the default Gatsby Starter, which you can preview here.
First, we'll create a folder where we'll store the entire application and then generate the Gatsby Starter within that folder.
Open your terminal, move to your preferred working directory and type in the following commands.
Base Folder & Clone The Gatsby Starter
mkdir docker-gatsby
cd docker-gatsby   # generate the starter inside the new folder
gatsby new client https://github.com/gatsbyjs/gatsby-starter-default
You can test that the installation went properly by running the following commands; you should then see the default starter on localhost:8000.
cd client
npm install
npm start
For our purposes, we want our development container to support all of the features that Gatsby provides when running locally, while keeping the container as small as possible.
So we'll make sure to follow these guidelines:
Before we start, you'll have to make a small change to the develop script in your package.json.
Change the script to this:
1"develop": "gatsby develop -H 0.0.0.0"
The -H flag sets the host address that the Gatsby server will listen on. The default host is localhost (127.0.0.1), but since our Gatsby server will be inside a Docker container, it won't be accessible to the outside world.
Instead, the host address 0.0.0.0 will allow us to access the Gatsby server running inside the container. Similarly, it'll also allow other Docker containers to interact with the Gatsby server. You can read more about the difference between localhost and 0.0.0.0 here.
Here's the development Dockerfile that will serve our needs; we'll walk through each part and what it achieves.
# The minimal baseline we need for Node.js
FROM node:alpine

WORKDIR /app

# Copy the package.json file, update any deps and install them
COPY package.json .
RUN npm update
RUN npm install

# Copy the whole source folder (the path is relative to the Dockerfile)
COPY . .

CMD ["npm", "run", "start"]
You can read more about the differences between using the RUN command and the CMD command here.
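In short (a simplified sketch, not part of our actual Dockerfile): RUN executes while the image is being built and its result is baked into an image layer, while CMD only declares the default command to run when a container starts.

# RUN executes at build time; the resulting node_modules is stored in the image
RUN npm install

# CMD does not execute during the build; it only sets the default command for "docker run"
CMD ["npm", "run", "start"]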
We'll pair the above Dockerfile with Docker Compose for easy management of all of the Docker containers we'll use for our application.
1version: "3" 2services: 3 client: 4 build: 5 context: . 6 dockerfile: Dockerfile.dev 7 volumes: 8 - ./src:/app/src # Links the source files to the running container 9 ports: 10 - "3000:8000"
The build configuration allows you to specify exactly how you want Docker Compose to build the container. In the above example, we give it a context and the container's dockerfile.
The context specifies the build context, i.e. the directory Docker uses when building the image. The dockerfile option allows us to specify the filename (including extension) where we defined our Docker image.
The volumes section links the src folder inside the Docker container with your local version. This allows you to make live changes without having to restart your container.
Important Note On Volumes
There are a lot of different ways to setup a volume for development, with different trade-offs.
The configuration above only links the source files, so when you want to install a new package you will have to rebuild the container.
To do this you can simply run:
docker-compose up --build
This is because package.json (and every other file outside src) isn't linked, so any updates will not be reflected in the running container.
If you choose to instead make the whole client directory linked to the container, the package.json will be updated.
But in order to utilize the installed package you will have to run "npm install" inside the running container.
You can do this by running
docker exec -it <INSERT CONTAINER ID> sh
And then executing npm install within that shell.
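Alternatively, you can go through Docker Compose instead of raw docker commands. With the container running, something like this would work (client is the service name from our compose file, and the package name is just a placeholder):

docker-compose exec client npm install some-package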
Why Only Link the Source
After trying every approach I could find, I ultimately decided to link only the source code during development, for a few reasons.
There are valid reasons why you'd want to configure your volumes differently.
But I've found that the above configuration balances developer experience and best practices quite well when developing Gatsby applications.
If you think you've found a way to configure your Docker Compose setup in a smoother and more intuitive way, let me know! Either below in the comments or by reaching out to me personally.
The ports section maps the internal ports inside the Docker container to externally accessible ones. So in our case, we will access our Gatsby application through localhost:3000.
Once you have all of your files set up, you can start your containers with the following command:
docker-compose up
Or, if you need to rebuild your Docker image (the Dockerfile.dev):
docker-compose up --build
The first time you build the Docker image and run docker-compose up, it will take a long time. But every run afterward will be orders of magnitude faster.
A good heuristic when developing Docker containers is to assume that your host machine isn't able to either install the application's dependencies or build the application itself.
This will help you develop containers that can be easily re-used and deployed to production.
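One habit that supports this heuristic (my own addition, not something the setup above strictly requires) is a .dockerignore file next to the Dockerfiles, so host-side artifacts never leak into the build context and the container's own npm install is the only thing that produces node_modules:

# .dockerignore
node_modules
public
.cache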
In order to move our website into production, we will have to serve the output of the Gatsby build via NGINX. For this we'll make two new files to be used in production: a production Dockerfile and a docker-compose.prod.yml.
Our Docker Compose file will remain mostly unchanged; most of the changes will be to the Dockerfile.
FROM node:alpine as builder

WORKDIR /app

COPY package.json .
RUN npm install
COPY . .
RUN ["npm", "run", "build"]

FROM nginx
EXPOSE 80
# Copy the built site out of the builder stage into NGINX's default static directory
COPY --from=builder /app/public /usr/share/nginx/html
In Docker, you can specify different build stages that use different base images. These stages are introduced with the FROM keyword.
This allows us to easily build our website and then copy the output files over to the NGINX server.
Gatsby writes its build output to the public folder in the working directory. Hence, we can just copy this over to the default directory that NGINX checks to serve static files.
You can review more about the default directories (yes there are multiple) here.
We utilize NGINX since it's an incredibly fast web server that's great for request routing and serving static files. It's also incredibly configurable, allowing you to use compression algorithms to further optimize performance.
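As a taste of that configurability, here's a sketch of a custom config you could bake into the image (plain gzip is shown since Brotli requires an extra NGINX module; the file name and paths are assumptions):

# default.conf -- replaces the stock NGINX site config for serving the Gatsby build
server {
    listen 80;
    root /usr/share/nginx/html;

    # Compress text-based assets before sending them over the wire
    gzip on;
    gzip_types text/css application/javascript application/json image/svg+xml;

    location / {
        # Serve the requested file, or fall back to index.html
        try_files $uri $uri/ /index.html;
    }
}

If you go this route, you'd add a line like COPY default.conf /etc/nginx/conf.d/default.conf to the NGINX stage of the production Dockerfile. With that covered, here's the production Compose file, docker-compose.prod.yml: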
1version: "3" 2services: 3 client: 4 build: 5 context: . 6 dockerfile: Dockerfile 7 ports: 8 - "80:80"
There are very few changes to our Compose file, only the dockerfile and the port mapping.
Though for more complex applications, your production configuration may require more involved environment variable and logging changes.
There are more configurations that you can keep in mind when using Docker Compose in production, which can be found in the official docs.
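For instance (these options are illustrative, not something this tutorial's app needs), a production service definition often grows restart, environment, and logging settings:

version: "3"
services:
  client:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "80:80"
    restart: always             # bring the container back up if it crashes or the host reboots
    environment:
      - NODE_ENV=production     # example environment variable
    logging:
      driver: "json-file"
      options:
        max-size: "10m"         # cap log file size so logs don't fill the disk
        max-file: "3"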
The port mapping change is required since port 80 is the default HTTP port, which allows the website to be accessed without having to specify a port.
Once you're done working on your Gatsby website and finished with development, you can run the production version of your website via:
docker-compose -f docker-compose.prod.yml up
And make sure to add the --build flag if you have updated your Gatsby website.
docker-compose -f docker-compose.prod.yml up --build
Congratulations! Now you should have a smooth development environment for your Gatsby applications and a production configuration ready for deployment.
The following is the general structure you should have in the end:

client/
  ...
  Dockerfile.dev
  Dockerfile
docker-compose.yml
docker-compose.prod.yml
If you need more performance or have more complex applications, you should look into other services and configurations that you can add to serve your needs.
Here are some ways that you could expand on this setup: