Oskar Dudycz

Pragmatic about programming

Docker Compose Profiles, one of the most useful and underrated features

2024-05-18 · Oskar Dudycz · DevOps


Erik Shafer asked me on the Emmett Discord if I could provide a sample of how to run a WebApi application using Emmett. Of course, I said: sure will! I already had a WebApi sample in the repository, which I also explained in How to build and push Docker image with GitHub actions?. Easy peasy, then, right?

Indeed, it wasn’t that hard. Of course, I had to fight with ES Modules quirks, but I also expanded the scope a bit, as I decided to use one of the most underrated Docker Compose features: Profiles.

What are Docker Compose Profiles?

They’re a way to logically group services inside your Docker Compose definition, allowing you to run a subset of services. My original Docker Compose definition contained the EventStoreDB startup, which I use in my Emmett samples as the real event store example.

version: '3.5'

services:
  eventstoredb:
    image: eventstore/eventstore:23.10.0-bookworm-slim
    container_name: eventstoredb
    environment:
      - EVENTSTORE_CLUSTER_SIZE=1
      - EVENTSTORE_RUN_PROJECTIONS=All
      - EVENTSTORE_START_STANDARD_PROJECTIONS=true
      - EVENTSTORE_EXT_TCP_PORT=1113
      - EVENTSTORE_HTTP_PORT=2113
      - EVENTSTORE_INSECURE=true
      - EVENTSTORE_ENABLE_EXTERNAL_TCP=true
      - EVENTSTORE_ENABLE_ATOM_PUB_OVER_HTTP=true
    ports:
      - '1113:1113'
      - '2113:2113'
    volumes:
      - type: volume
        source: eventstore-volume-data
        target: /var/lib/eventstore
      - type: volume
        source: eventstore-volume-logs
        target: /var/log/eventstore
    networks:
      - esdb_network

networks:
  esdb_network:
    driver: bridge

volumes:
  eventstore-volume-data:
  eventstore-volume-logs:

Nothing fancy here so far. You can just run it with:

docker compose up

It will start the database; then, you can run a sample application with

npm run start

And play with Emmett.

I wanted to keep the sample experience straightforward and use local development/debugging as the default. Docker image build and run would be optional (we could call it “Erik’s mode”!).

Now, profiles come in handy here, as they enable exactly that. I just had to add:

version: '3.5'

services:
  app:
    build:
      dockerfile: Dockerfile
      context: .
    container_name: emmett_api
    profiles: [app]
    environment:
      - ESDB_CONNECTION_STRING=esdb://eventstoredb:2113?tls=false
    networks:
      - esdb_network
    ports:
      - '3000:3000'

  # (...) EventStoreDB Definition

The setup is pretty straightforward.

We’re stating which Dockerfile to use and where it is located.

    build:
      dockerfile: Dockerfile
      context: .

The . means that the build context will be the same folder as the location of the docker-compose file. dockerfile tells where the Dockerfile is located. In our case, it’s the same folder as the docker-compose file, and the file is named Dockerfile. That also opens up more options. We could also define it as:

    build:
      dockerfile: src/app/Dockerfile
      context: .

That’d allow us to use some common dependencies located outside the project folder, e.g. in src/build. Essentially, we can expand the image-building process’s access to additional locations and allow project files to reach into parent folders to use, for instance, shared configurations or dependencies. Thanks go to Jakub Gutkowski for pointing that out!
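To make the wider-context idea concrete, here’s a hypothetical Dockerfile sketch; the src/shared folder and file names are assumptions for illustration, not part of the sample. Because the build context is the repository root, a Dockerfile placed in src/app can still copy files from sibling folders:

```dockerfile
# Hypothetical sketch: assumes a repo layout with src/app and src/shared.
# Build context is the repository root (context: .), so COPY paths
# are relative to the root, not to this Dockerfile's folder.
FROM node:20-alpine

WORKDIR /app

# Reachable only because the context is the repo root,
# not the src/app folder where this Dockerfile lives
COPY src/shared/ ./shared/
COPY src/app/ ./

RUN npm install

CMD ["npm", "run", "start"]
```

With context: src/app instead, those COPY src/shared/… lines would fail, as Docker forbids reaching outside the build context.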

We ensure that the app can connect to EventStoreDB by placing both in the same network and passing the connection string as an environment variable.

    environment:
      - ESDB_CONNECTION_STRING=esdb://eventstoredb:2113?tls=false
    networks:
      - esdb_network

The new thing is the profile definition:

    profiles: [app]

Thanks to that, we’re saying that this service will only be started if we explicitly request it on the command line. We can, for instance, build the image by running:

docker compose --profile app build

Or run both EventStoreDB and Emmett WebApi by calling:

docker compose --profile app up

And let’s stop here for a moment! Why both, if I specified the app profile? In this case, Docker Compose will run the specified profile AND all services that don’t have any profile specified. That’s quite neat, as we can define a set of default services (e.g. databases, messaging systems, etc.) and make the others optional. Ah, and you can specify multiple profiles, e.g.:

docker compose --profile backend --profile frontend up
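As a sketch of how that interacts with default services (the service and image names here are hypothetical, not from the sample): a service without a profiles entry always starts, while services sharing a profile start together when that profile is requested.

```yaml
# Hypothetical sketch: 'db' has no profiles, so it always starts.
# 'api' and 'worker' share the 'backend' profile, so
# `docker compose --profile backend up` starts all three services.
services:
  db:
    image: postgres:15.1-alpine

  api:
    image: my-api:latest # hypothetical image
    profiles: [backend]

  worker:
    image: my-worker:latest # hypothetical image
    profiles: [backend]
```

Running plain `docker compose up` here would start only db.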

You can also group multiple services into a single profile. Why would you do it? Let’s go to…

Docker Compose Profiles: an advanced scenario

In my Event Sourcing .NET samples repository, I’m trying to cover multiple aspects, tools, and ways to build Event Sourcing, CQRS and Event-Driven systems. I’m using:

  • Marten (so Postgres) and EventStoreDB as example event stores,
  • Postgres and Elasticsearch as read model stores,
  • Kafka for integration between services,
  • UI tools like PgAdmin and Kafka UI to make it easier to investigate sample data.

Multiple samples use those services in various configurations.

I’m also using them in my Event Sourcing workshops, so I’d like to ensure the setup is smooth and we can focus on learning and not fighting with Docker.

Initially, I kept multiple Docker Compose files for:

  • default configuration with all services,
  • continuous integration pipeline configuration without UI components, as they’re not needed for tests; they’d just eat resources and make pipeline runs longer. It also didn’t include Kafka, as I’m only testing the inner modules’ functionality,
  • sample web API Docker Image build (similar to the one explained above),
  • only containing Postgres-related configurations,
  • accordingly, only with EventStoreDB,
  • etc.

I’m sure you can relate to that in your projects. Now, how can Docker Compose Profiles help us with that? They can definitely help us merge multiple configurations into one and make it easier to manage version updates, etc.

Let’s see the config I ended up with and then explain the reasoning. I’ll trim the detailed service configuration; you can check the whole file here.

version: "3"
services:
    #######################################################
    #  Postgres
    #######################################################
    postgres:
        profiles: [postgres, postgres-all, all, all-no-ui, ci]
        image: postgres:15.1-alpine
        # (...) rest of the config

    pgadmin:
        profiles: [postgres-ui, postgres-all, all]
        image: dpage/pgadmin4
        # (...) rest of the config

    #######################################################
    #  EventStoreDB
    #######################################################
    eventstore.db:
        image: eventstore/eventstore:23.10.0-bookworm-slim
        profiles: [eventstoredb, eventstoredb-all, all, all-no-ui, ci]
        # (...) rest of the config

    #######################################################
    #  Elastic Search
    #######################################################
    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:8.13.2
        profiles: [elastic, elastic-all, all, all-no-ui, ci]
        # (...) rest of the config

    kibana:
        image: docker.elastic.co/kibana/kibana:8.13.2
        profiles: [elastic-ui, elastic-all, all]
        # (...) rest of the config

    #######################################################
    #  Kafka
    #######################################################
    kafka:
        image: confluentinc/confluent-local:7.6.1
        profiles: [kafka, kafka-all, all, all-no-ui]
        # (...) rest of the config

    init-kafka:
        image: confluentinc/confluent-local:7.6.1
        profiles: [kafka, kafka-all, all, all-no-ui]
        command: "#shell script to setup Kafka topics"
        # (...) rest of the config

    schema_registry:
        image: confluentinc/cp-schema-registry:7.6.1
        profiles: [kafka-ui, kafka-all, all]
        # (...) rest of the config

    kafka_topics_ui:
        image: provectuslabs/kafka-ui:latest
        profiles: [kafka-ui, kafka-all, all]
        depends_on:
            - kafka
        # (...) rest of the config

    #######################################################
    #  Open Telemetry
    #######################################################
    jaeger:
        image: jaegertracing/all-in-one:latest
        profiles: [otel, otel-all, all]
        # (...) rest of the config

    #######################################################
    #  Test Backend Service
    #######################################################
    backend:
        build:
            dockerfile: Dockerfile
            context: .
        profiles: [build]
        # (...) rest of the config

## (...) Network and Volumes config

As you can see, we have a few general profiles:

  • postgres
  • elastic
  • kafka
  • eventstoredb
  • otel
  • build

They group the needed tooling containers.

Each of them has additional profiles following these naming conventions:

  • {profile}-all (e.g. postgres-all) - will start all the needed tooling containers plus the supporting ones, like UI tools,
  • {profile}-ui (e.g. postgres-ui) - will start just the UI components, while the plain {profile} starts the tooling without UI. There’s no {profile}-all-no-ui, as the base {profile} already covers that.

I also defined additional profiles:

  • all - that’ll run all components,
  • ci - only the components needed for the CI pipeline (so no UI tools and no Kafka).

So by default, if I don’t mind my RAM being eaten by all containers, I’d run:

docker compose --profile all up

If I’d like to run the Marten sample with Elasticsearch read models, I could just run:

docker compose --profile postgres --profile elastic up

In the CI, I can run:

docker compose --profile ci up
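If you’re ever unsure which services a given profile combination will select, Docker Compose can tell you without starting anything. A quick sketch (the output depends on your compose file, and the --profiles flag requires a reasonably recent Docker Compose version):

```shell
# List the services selected by the given profiles, without starting them
docker compose --profile postgres --profile elastic config --services

# List all profiles defined across the compose file
docker compose config --profiles
```

This is handy both for sanity-checking a new profile layout and for documenting the available profiles for your team.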

It’s important to find a balance and conventions for profile names. If you have too many of them, it’ll be challenging for people to memorise them all. That’s why grouping them and adding standard conventions can be helpful. We should always consider the intended usage and make it accessible. I could, potentially, also provide profiles for dedicated samples.

Read more in the official Docker Compose Profiles guide.

See also the Pull Requests where I introduced the explained changes:

If you got to this point, then you may also like my other articles around Docker and Continuous Integration:

Cheers!

Oskar

p.s. Ukraine is still under brutal Russian invasion. A lot of Ukrainian people are hurt, without shelter and need help. You can help in various ways, for instance, directly helping refugees, spreading awareness, putting pressure on your local government or companies. You can also support Ukraine by donating e.g. to Red Cross, Ukraine humanitarian organisation or donate Ambulances for Ukraine.

👋 If you found this article helpful and want to get notified about the next one, subscribe to Architecture Weekly.

✉️ Join over 6500 subscribers, get the best resources to boost your skills, and stay updated with Software Architecture trends!

Event-Driven by Oskar Dudycz
For over 15 years, I have been creating IT systems close to the business. I started my career when StackOverflow didn’t exist yet. I am a programmer, technical leader, and architect. I like to create well-thought-out systems, tools, and frameworks that are used in production and make people’s lives easier. I believe Event Sourcing, CQRS and, in general, Event-Driven Architectures are a good foundation by which this can be achieved.