Developing with Docker has many implications for the code and the repositories. This article suggests a pattern for resolving those issues.
If you are considering moving your development to Docker, or you are looking for a way to do it that serves both development and delivery to production harmoniously, this article is for you.
Motivation
Since I started playing with Docker several years ago and tried to develop with it at every level, I have faced many questions, such as how to reuse and share the Docker setup of my local environment to avoid repeating 1–2 days of setup again and again. That was my first question.
Another question came up when working with several repositories: how should I handle code spread across many repositories? As a developer, I want the whole team around me to look at and work on the same code.
Today, at Linnovate, we have many projects of different types, and these questions kept coming back to me. So I decided to invest a lot of energy in creating a method that would handle most of our cases (if not all of them). What I will show is the result of what we now use in all of our Docker-based projects, arrived at after many project setups and improvement iterations.
I will describe how to manage a microservices-based project with Docker in git, covering the git repository design, development, and delivery requirements.
Note: We work with GitLab, but the concept can be applied to other repository managers in a similar way.
Challenges
- Microservices integration — how do we manage the relations between all the microservices?
- Versioning — what represents the final code version now that there are many repositories?
- Sharing the local environment setup — working with Docker creates many assets that are not related to the code itself: Dockerfiles, configuration files, docker-compose files, scripts, and probably more. Where should we store them? How can we share them with others?
- DevOps — where do we store production assets, such as deployment Dockerfiles and scripts, which differ from the local development ones?
To handle these challenges, we decided to base our solution on git submodules, with docker-compose to describe and run the local environment.
The GS3D Pattern — Git Submodules and Dockers Driven Development
In order to fetch all the repositories, we will use git submodules. Git submodules are a bit tricky to work with, but there are well-established ways of handling them.
Besides fetching the code, each submodule references a specific commit, which indicates its version. We will use this in the integration repository.
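To make the pinning concrete, here is a minimal, self-contained sketch (throwaway repositories under /tmp; the names are made up for the demo) showing that the integration repository records an exact commit for each submodule:

```shell
# Self-contained sketch: a submodule pins an exact commit (demo paths in /tmp).
set -e
rm -rf /tmp/gs3d-demo && mkdir -p /tmp/gs3d-demo && cd /tmp/gs3d-demo
G="git -c user.email=dev@example.com -c user.name=dev"

# A stand-in microservice repository with one commit
git init -q ms1
$G -C ms1 commit -q --allow-empty -m "ms1: first version"

# The integration repository references ms1 as a submodule
git init -q integration
cd integration
git -c protocol.file.allow=always submodule add -q /tmp/gs3d-demo/ms1 ms1
$G commit -qm "Pin ms1"

# Each output line shows the exact commit the integration repo points at
git submodule status
```

In a real project the submodule URL would be the microservice's remote repository, and `protocol.file.allow` would not be needed; it is only required here because the demo clones from a local path.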
Principles:
- One main repository — one place to work with, which presents everything in a clear way.
- Consolidation — consolidation comes first. When you have many projects built with different technologies, a single concept focuses all developers on the same methodology of work. With an expected structure it is easy to jump between projects, easy to set up new ones, and there are many more advantages. That means keeping the same structure across all projects as much as possible.
- Easy setup — setting up the local environment should be easy.
- Separation — we wanted the option to reuse a microservice across several projects.
The Integration repository
This is actually the project's code repository — the final picture of the code. It references the code of each microservice at a specific version, and it also holds the development environment. This repo is what gets cloned in order to fetch the code and set up the local environment, so development happens with its files and containers.
- Each microservice is referenced as a git submodule of its own repo.
- The commit each submodule points at represents its version.
- The local environment will be docker-compose based and will use the submodules' code.
Structure:
- Submodules (ms1, ms2, ms3) — all microservices as submodules. As an example, there can be 3 submodules: WordPress, NodeJS, and React.
- Utils — all the files required to set up the local environment, such as Dockerfiles and configuration files.
- docker-compose.yml — describes all local services, where each service works with a cloned submodule's code. A service can also build from a custom Dockerfile in the utils directory.
- .env.example — an example of a working .env file.
- .data (ignored) — for DB files. A named Docker volume can be used instead.
- .env (ignored)
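As a sketch, a docker-compose.yml for such a structure might look like the following (the service names, images, and mount paths are illustrative assumptions, not part of the pattern itself):

```yaml
version: "3.8"
services:
  wordpress:                 # code from the ./wordpress submodule
    image: wordpress:6
    env_file: .env
    volumes:
      - ./wordpress:/var/www/html
  graphql:                   # built from a custom Dockerfile kept in utils/
    build: ./utils/graphql
    env_file: .env
    volumes:
      - ./graphql:/app       # submodule code mounted into the container
  react:                     # code from the ./react submodule
    image: node:18
    working_dir: /app
    command: npm start
    volumes:
      - ./react:/app
  db:
    image: mysql:8
    env_file: .env
    volumes:
      - ./.data/mysql:/var/lib/mysql   # the ignored .data directory
```

The key idea is that every service's code comes from a submodule directory mounted as a volume, so editing the submodule's files is immediately reflected in the running container.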
To set up the local environment
git clone --recursive link-to-integration-repo
cp .env.example .env
docker-compose up -d
The DevOps Repository
All deployment-related assets, such as k8s files, will be stored in another repository, since they are related neither to the code itself nor to the local environment. In this example, the remote environments will use docker-compose, and the code will be fetched by submoduling the integration repository.
With CI, we can push the microservices as images to a container registry and pull them on the remote environments.
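One way this could look in GitLab CI — a hypothetical .gitlab-ci.yml job, not the article's actual pipeline; the `$CI_*` variables are predefined by GitLab:

```yaml
# Hypothetical job: build one microservice image from its submodule
# directory and push it to the GitLab container registry, tagged by commit.
build-graphql:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/graphql:$CI_COMMIT_SHORT_SHA" graphql/
    - docker push "$CI_REGISTRY_IMAGE/graphql:$CI_COMMIT_SHORT_SHA"
```

The remote environment's docker-compose can then reference `$CI_REGISTRY_IMAGE/graphql:<tag>` instead of building from source.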
Real-life Example:
This example is a WordPress website with React, plus a NodeJS service with a GraphQL API alongside it that serves the React components.
Step 1 — Create the project and the microservices in it.
Create the project and the repositories with the right structure:
- Create a group named “GS3D” as a container for all microservices
- Create all microservice repositories in it — WordPress, React, GraphQL
- Create a project for the integration repository, also named “GS3D”
- Create the “DevOps” repository
Step 2 — Create the integration repository
The integration repository is actually the GS3D project you created inside the GS3D group.
- Add all submodules from the GS3D group (the integration repository should integrate all submodules)
git submodule add [wordpress-repository-url]
git submodule add [react-repository-url]
git submodule add [graphql-repository-url]
git commit
- Add a docker-compose file to describe the local environment. Use the submodules as the code for the containers and mount them into the containers with volumes.
- A utils directory can be created to hold custom microservice Dockerfiles (again, for the local environment only).
- Add .env and .env.example
- Add .gitignore with .env as an ignored file
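Once the integration repository is in place, releasing a new version of a microservice is just moving its submodule pointer and committing. A self-contained sketch of that workflow (again with throwaway repositories under /tmp standing in for real remotes):

```shell
# Self-contained sketch: bumping a submodule to a newer commit (demo in /tmp).
set -e
rm -rf /tmp/gs3d-bump && mkdir -p /tmp/gs3d-bump && cd /tmp/gs3d-bump
G="git -c user.email=dev@example.com -c user.name=dev"

git init -q ms1
$G -C ms1 commit -q --allow-empty -m "v1"

git init -q integration
cd integration
git -c protocol.file.allow=always submodule add -q /tmp/gs3d-bump/ms1 ms1
$G commit -qm "Pin ms1 at v1"

# A new version lands in the microservice repository...
$G -C /tmp/gs3d-bump/ms1 commit -q --allow-empty -m "v2"

# ...so the integration repo fast-forwards its checkout and commits the new pointer
git -C ms1 pull -q
$G add ms1
$G commit -qm "Bump ms1 to v2"
```

Nothing in the microservice's own history changes; only the integration repository's recorded commit moves, which is exactly what makes it "the final picture of the code".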
How to arrange the project in GitLab
The Group/Group/Project convention
Since we have several clients, and each client can have several projects, each client needs its own zone (a GitLab group). And since we are using microservices, each project should have its own microservices under it (another GitLab group). The hierarchy we use on GitLab is the following:
- Client name (GitLab group) — stores all the client's projects
- Project name (GitLab group) — stores all the microservices' code and the project itself
- Project name (GitLab project) — the project's main code — “The Integration repository”
The first level of the hierarchy is optional if you are the client managing your own projects.
Conclusion
Today we have a clear pattern for setting up our projects. The consolidation we achieved helps both our managers and developers understand every project right away. Our local environment setup time was reduced from 1–2 days to several minutes.
Based on that pattern, we implemented our CI on all our projects.
It also gives us the ability to integrate other products we have into a project by submoduling their repositories (easy reuse and integration).
I would love to hear whether you find this helpful.
Related Articles:
Mono VS Poly repo