How to Optimize Docker-based CI Runners with Shared Package Caches

    Colin O'Dell

    At Unleashed Technologies we use GitLab CI with Docker runners for our continuous integration testing. We’ve put significant effort into speeding up build execution. One of the optimizations we made was to share a cache volume across all the CI jobs, allowing them to share files like package download caches.

    Configuring the Docker runner was simple: we added volumes = ["/srv/cache:/cache:rw"] to our config.toml file:

    concurrent = 6
    check_interval = 0

    [[runners]]
      name = "ut-ci01"
      url = ""
      token = "xxxxxxxxxxxxx"
      executor = "docker"
      [runners.docker]
        tls_verify = false
        image = "unleashed/php:7.1"
        privileged = false
        disable_cache = false
        volumes = ["/srv/cache:/cache:rw"]

    As a result, all CI jobs will have a /cache directory available (which is mapped to /srv/cache on the Docker host).
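    Inside a job, the mount can be sanity-checked with a small shell helper. This is a sketch rather than anything from the runner's tooling, and check_cache is a hypothetical name:

    ```shell
    #!/bin/sh
    # check_cache DIR: succeeds if DIR exists and is writable, i.e. the
    # runner's volume mapping actually exposed the shared cache to the job.
    check_cache() {
        [ -d "$1" ] && [ -w "$1" ]
    }

    # In a CI job script you would call: check_cache /cache || echo "cache missing"
    ```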

    The next step was making the package managers utilize this cache directory whenever jobs run commands like composer install or yarn install. Luckily, these package managers allow us to configure their cache directories using environment variables:

    • Composer: COMPOSER_CACHE_DIR
    • Yarn: YARN_CACHE_FOLDER
    • npm: NPM_CONFIG_CACHE
    • bower: bower_storage__packages
    • RubyGems: GEM_SPEC_CACHE

    So we added the corresponding ENV instructions to the Dockerfiles for our base images:

    ENV COMPOSER_CACHE_DIR /cache/composer
    ENV YARN_CACHE_FOLDER /cache/yarn
    ENV NPM_CONFIG_CACHE /cache/npm
    ENV bower_storage__packages /cache/bower
    ENV GEM_SPEC_CACHE /cache/gem
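    When rebuilding the base image isn't an option, the same variables can be exported in the job script instead. A minimal sketch, assuming the /cache mount from the runner config above; CACHE_ROOT is a hypothetical override added here so the snippet can be exercised outside a runner:

    ```shell
    #!/bin/sh
    # Point each package manager's cache at a subdirectory of the shared
    # mount. CACHE_ROOT defaults to /cache, the path mapped by the runner.
    CACHE_ROOT="${CACHE_ROOT:-/cache}"
    export COMPOSER_CACHE_DIR="$CACHE_ROOT/composer"
    export YARN_CACHE_FOLDER="$CACHE_ROOT/yarn"
    export NPM_CONFIG_CACHE="$CACHE_ROOT/npm"
    export bower_storage__packages="$CACHE_ROOT/bower"
    export GEM_SPEC_CACHE="$CACHE_ROOT/gem"
    ```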

    Now, whenever a job needs a package that's already in the cache, it'll pull from our local volume instead of downloading from a remote server! This provides a noticeable speed improvement for our builds.

    This quick tip was originally published on Colin’s blog, and republished here with the author’s permission.