
We've been growing as a company, our products are expanding, and our DevOps-related activities and efforts are growing as well: we've switched from Bamboo to the more flexible and configurable Jenkins, using deployment pipelines and other plugins; switched to Ansible; and started to use Docker here and there internally.

All of these things require some level of coding or configuration: Ansible scripts and configs, Jenkins Groovy scripts, Dockerfiles and YAML configs.

For now, we've created a separate "ops" repository with high-level directories for jenkins, ansible, docker and other (which is a terrible name, but for now all "other" DevOps automation things live there).
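Roughly:

ops/
  jenkins/
  ansible/
  docker/
  other/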

Our approach does not feel right and might not scale. What are the best practices and recommendations for keeping DevOps-related code in a code repository or repositories?

alecxe

4 Answers


I can answer about Docker. One of the best practices with Docker is to keep the Dockerfile and the Compose files in the same repository as the project, so that wherever you clone the project you can build the Docker image. It is also good to keep multiple versions of the Docker Compose file (e.g. prod, staging, dev), so that you can build the image and run the container with options specific to each environment; on a dev machine, for example, you can use a specific network and run additional dependency containers.
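As a minimal sketch, assuming a single app service, a base Compose file plus a dev-only override could look like this (the file names, port numbers and the mailcatcher image are illustrative):

# docker-compose.yml -- base definition shared by every environment
version: "3"
services:
  app:
    build: .
    ports:
      - "8080:8080"

# docker-compose.dev.yml -- dev-only additions: a dedicated network
# and an extra dependency container
version: "3"
services:
  app:
    networks:
      - devnet
  mailcatcher:
    image: schickling/mailcatcher
networks:
  devnet:

On a dev machine the two files are then combined with docker-compose -f docker-compose.yml -f docker-compose.dev.yml up, while production uses the base file alone.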

Wissam Roujoulah

Each tool's code goes into its own repo, e.g.:

  1. Jenkins Groovy templates in a Jenkins repo
  2. Ansible YAML playbooks in their own repo (with the roles, tasks and inventory sub-directories)
  3. CloudFormation / Terraform templates in their own repo
  4. Dockerfiles in their own repo
  5. And so on (see the sketch below)
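In other words, the result is a small set of repositories, one per tool; the repository names below are only illustrative:

jenkins-pipelines/   Groovy pipeline templates
ansible/             playbooks, with roles/, tasks/ and inventory/ sub-directories
terraform/           CloudFormation / Terraform templates
dockerfiles/         one sub-directory per image
db-scripts/          database and bash scripts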

This would help you scale better in terms of process orchestration and maintaining various branches for each environment.

This would give you more granular control and offload all your versioning overhead to the version control system. Also, create separate branches for each environment and tag the code for every production release (as we do for the application code base). Think of infra and processes in terms of code: any change in a process has to be codified and sent through QA, SIT and UAT, and then to PROD, just like the application.

For example, you may have v2.1 of your Ansible code running in production (master branch) but v2.0 of your Docker containers running in production (master branch).

Similarly, keep your DB scripts and bash scripts in their own repositories, and perhaps have a healthcheck file (JSON/YAML) configured to show the versions of all the tools/parts at each deployed URL, for tracking and automation purposes (so that your webhooks can read the URL and automate deployments).
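As a sketch, such a healthcheck file could be as simple as the following YAML, served at each environment's URL; the component names and version numbers are illustrative, echoing the example above:

# versions.yaml -- exposed at e.g. https://prod.example.com/healthcheck
environment: production
components:
  ansible-playbooks: "2.1"
  docker-images: "2.0"
  db-scripts: "1.4"

A webhook can then fetch this file, compare the versions against the latest tags in each repository, and trigger a deployment when they drift apart.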


The current organisation of the code and configuration you describe is structured by the technical solutions involved. This is a bad design that will add a lot of overhead to our maintenance activities and set a lot of traps in our way as well. Instead, that organisation should be structured around the artefacts we are deploying.

The reason for this is that we want to consider artefacts (e.g. a docker image or a software package) as the objects of the following verbs:

  • build
  • test
  • deploy

which together form a minimal set of automated tasks we want to perform. If we want to change something about how the test verb is implemented, it is easy to visit the folder corresponding to that artefact in the appropriate repository and discover there the Jenkins-specific automation items that need to be updated. If instead the automation recipes are structured around technical solutions, we need to figure out, out of the blue, that Jenkins is involved in the test procedures and then hunt down the artefact-related automation items there. In complex situations, organising around technical solutions makes updates very hard, because we have to know a priori all the technical solutions involved in a service in order to update them accordingly.

For instance, a repository containing the code for a website and a micro-service “a” could have the following sub-directories dedicated to operations:

./ops/website
./ops/micro-service-a

each having three scripts called build, test and deploy.
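Spelled out, and assuming the verbs are implemented as plain executable scripts, this gives:

./ops/website/build
./ops/website/test
./ops/website/deploy
./ops/micro-service-a/build
./ops/micro-service-a/test
./ops/micro-service-a/deploy

Now that the organisation of automation items has somehow been clarified, let's turn our attention to configuration.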

The main conditions and requirements about the configuration organisation are set by the deploy verb when applied on a service-like artefact. The deploy verb should have the following parameters:

  • the version of the artefact to deploy,
  • the deployment target of the artefact, which describes the concrete environment where the deployed artefact will run (e.g. a cluster and endpoints it should talk to)
  • the credentials it should use to connect to other endpoints (e.g. databases)
  • the runtime configuration of the artefact (like how long cache entries should live, etc.)

From the operational perspective, this breakdown of the parametrisation matches the natural degrees of freedom of the deployment problem. The credentials could be bundled with the runtime configuration, but it is better to keep them separate to avoid spreading them carelessly.
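As a concrete sketch, these parameters could be captured in a small file read by the deploy script; the file name, keys and values below are assumptions, not a prescribed format:

# deploy-production.yaml -- hypothetical input for ./ops/website/deploy
artefact: website
version: "2.1"
target:
  cluster: production
  endpoints:
    database: db.internal.example.org
credentials: vault:website/production   # a reference into a secret store, never inline values
runtime:
  cache-ttl-seconds: 300

Keeping credentials as a reference rather than as inline values is what makes the separation argued for above enforceable.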

Michaël Le Barbier

Making a distinction between Ops, Dev and DevOps promotes isolation and enforces a "throw it over the wall" mindset. To increase cooperation between teams, one should put everything that is required to build and deploy the project in one repository.

Having said that, the answer to the question:

How to structure DevOps related code and configs in a code repository?

is that if config is required to run the project, then one should put it in the same repository, next to the code it configures.
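As an illustrative sketch of such a repository, with everything needed to build and deploy living next to the application code:

my-project/
  src/                 application code
  Dockerfile
  docker-compose.yml
  Jenkinsfile
  ansible/             deployment playbooks and inventory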

030