Continuing my effort to explain DevOps scenarios, today I’d like to show you the simple way I set up integration tests in my sample repositories and in Marten.
What are my assumptions? It has to be straightforward, so anyone can run it easily (including me, I’m a simple guy); I wrote about why that’s important in How to get started with Open Source?. It has to be stable and allow changing or adding configuration without much additional code. I also like to play with different stacks and technologies, so I want easy, repeatable patterns when I pick up something new. It must also run in GitHub Actions, as that’s my current Continuous Integration (CI) runner. That’s essential, as those machines are underprovisioned, which puts limits on what we can run.
I’m running many of my tests as integration ones, as I’d like to trust that what I’m testing will actually work. You can read a longer explanation of my attitude in I tested it on production, and I’m not ashamed of it. Still, this post is not about why but how.
So, even though I’m not a massive fan of YAML, I like Docker Compose, as it lets me quickly set up advanced configurations in a declarative way. Usually, that’s something I change only occasionally. Once I have the default, recommended setup, I can just run docker-compose up and have my local environment ready.
The benefit of Docker Compose is that most tools provide official container images, you can run them on any operating system, and you can find a troubleshooting guide for almost anything on the Internet.
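As an illustration, a minimal local setup might look like this (the Postgres image, password, and port are just placeholders for whatever your application needs):

```yaml
version: "3.8"
services:
  postgres:
    image: postgres:15-alpine
    environment:
      # placeholder credentials for local development only
      POSTGRES_PASSWORD: Password12!
    ports:
      - "5432:5432"
```

With that in place, `docker-compose up -d` gives you a ready local environment.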
So, how to repeat that in GitHub Actions? It’s actually quite simple:
```yaml
name: Build and Test
on: [push, pull_request]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - name: Check Out Repo
        uses: actions/checkout@v4
      - name: Start containers
        run: docker-compose -f "docker-compose.ci.yml" up -d
      # Do the other steps
      - name: Stop containers
        if: always()
        run: docker-compose -f "docker-compose.ci.yml" down
```
It’s a simple template: we check out the code, start the containers, run our steps, and stop the containers at the end. GitHub Actions runners already have Docker and Docker Compose preinstalled. The if: always() on the cleanup step ensures that the containers are stopped even if an earlier step fails.
What to put in the Do the other steps part? In general: get dependencies, build, and run tests. Of course, the details will depend on your environment and application.
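For instance, for a .NET project (step names, SDK version, and flags here are just an illustration; adjust them to your setup), those middle steps could look like:

```yaml
      - name: Setup .NET
        uses: actions/setup-dotnet@v4
        with:
          dotnet-version: 8.0.x

      - name: Restore dependencies
        run: dotnet restore

      - name: Build
        run: dotnet build --no-restore

      - name: Test
        run: dotnet test --no-build
```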
See the real-world examples in:
An important note: trim down the Docker Compose configuration for CI. For local development, we might need containers with UIs like PgAdmin or Kibana, or tools like Open Telemetry collectors, etc. For CI, we don’t need them; they make the setup slower and can even break it by eating too many resources. We could trim it by preparing a dedicated Docker Compose config for CI, but then we’d need to keep those configs in sync. A better option is Docker Compose profiles, which let us exclude services by default while keeping the configuration in a single file.
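A sketch of how that could look (service names and credentials are placeholders): services assigned to a profile are only started when that profile is explicitly enabled, so CI can skip them without a separate file.

```yaml
version: "3.8"
services:
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_PASSWORD: Password12!

  pgadmin:
    image: dpage/pgadmin4
    profiles: [dev]  # excluded by default, so CI won't start it
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@example.com
      PGADMIN_DEFAULT_PASSWORD: Password12!
    ports:
      - "5050:80"
```

Locally you’d run `docker-compose --profile dev up -d` to get PgAdmin as well; in CI, a plain `docker-compose up -d` starts only Postgres.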
TestContainers is an intriguing tool. It tries to simplify container-based integration testing. The promise is that you can define your Docker configuration in code, and the tool will handle all the initialisation, cleanup, etc. It should also apply the needed optimisations and a recommended default setup.
That’s the promise, but my reality was a bit different.
Initialising Docker resources costs a lot. You need to start the image, set up networks, volumes, etc. If you try to do that for each test, your tests will run extremely slowly. You will get isolation, of course, but at a high cost. And you can achieve isolation differently and pay less. For instance, if you’re using a relational database, you can create a new schema for each test class, or even a new database. It’ll still be cheaper than spinning up a new container.
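As a sketch of that cheaper isolation, here’s a minimal Python example. It uses SQLite’s ATTACH DATABASE as a stand-in for Postgres’ CREATE SCHEMA (all names here are made up for illustration); the idea is the same: a cheap namespace per test class instead of a whole new container.

```python
import sqlite3
import uuid

def create_test_schema(conn: sqlite3.Connection) -> str:
    """Attach a fresh in-memory database as an isolated, per-test 'schema'.

    With Postgres you'd run CREATE SCHEMA instead; either way you get
    a cheap, disposable namespace without starting a new container.
    """
    schema = f"test_{uuid.uuid4().hex[:8]}"
    conn.execute(f"ATTACH DATABASE ':memory:' AS {schema}")
    conn.execute(f"CREATE TABLE {schema}.orders (id INTEGER PRIMARY KEY)")
    return schema

conn = sqlite3.connect(":memory:")

# Two "test classes", each with its own namespace, sharing one connection
schema_a = create_test_schema(conn)
schema_b = create_test_schema(conn)

conn.execute(f"INSERT INTO {schema_a}.orders (id) VALUES (1)")

# schema_b never sees schema_a's data: isolation without a new container
rows_a = conn.execute(f"SELECT count(*) FROM {schema_a}.orders").fetchone()[0]
rows_b = conn.execute(f"SELECT count(*) FROM {schema_b}.orders").fetchone()[0]
```

The same pattern works with any relational database client: generate a unique schema name per test class, create it in setup, and drop it in teardown.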
If you run too many containers on a GitHub Actions machine, it may hang indefinitely or die. So you must be careful with your setup, as you may accidentally start getting false positives after adding a new set of tests. Funnily enough, the more test isolation we have, the less isolated the test run becomes (because the tests eat the shared resources of the test runner).
Also, if you’re spinning up a fresh ephemeral container, and the test fails, you’ll want to get your hands on it to inspect the data as you troubleshoot. If a container is cleaned up automatically by TestContainers, that’s not going to be quite so easy.
Sometimes you might also want to use tests as the data setup for your local environment, instead of clicking through the UI.
Of course, TestContainers lets you do most of that, but then you need to learn the tool deeply, which is kind of the opposite of its promise of a seamless setup. Also, each language’s implementation supports a different feature set and has its own documentation, not always with detailed breakdowns.
So, TLDR: TestContainers is nice, but it doesn’t match my needs so far. I’ll keep going down that path and learning more to see if I can find a workable flow with it. I’ll keep you posted.
So far, vanilla Docker Compose works best for me; it’s simple, flexible, and causes the least friction.
Read also other articles around the DevOps process:
- How to build an optimal Docker image for your application?
- How to create a Docker image for the Marten application
- A few tricks on how to set up related Docker images with docker-compose
- How to build and push Docker image with GitHub actions?
- How to create a custom GitHub Action?
p.s. Ukraine is still under brutal Russian invasion. A lot of Ukrainian people are hurt, without shelter and need help. You can help in various ways, for instance, directly helping refugees, spreading awareness, putting pressure on your local government or companies. You can also support Ukraine by donating e.g. to Red Cross, Ukraine humanitarian organisation or donate Ambulances for Ukraine.