
How to Speed-up Docker Image Builds on Linux

 

Article updated at 12 Feb 2021


Select the version of your OS from the tabs below. If you don't know the version you are using, run the command cat /etc/os-release or cat /etc/issue on the board.



Remember that you can always refer to the Torizon Documentation, where you can find many relevant articles to help you with application development.

Torizon 5.0.0

Introduction

Building Docker containers can be time-consuming, especially when a container downloads packages to set up a build or execution environment for your applications.

This article gathers tips and tricks to make your life easier - and faster:

  • Use a local proxy for Debian packages.
  • Dockerfile tricks on specific use-cases.

This article complies with the Typographic Conventions for Torizon Documentation.

Prerequisites

Run a Local Proxy

If you run apt-get or another package manager to download and install packages and then change the package list, Docker re-runs the whole command, repeating all the downloads and wasting time and connection bandwidth. You can mitigate this by configuring a proxy for your Docker containers: the proxy caches download requests and avoids downloading the same packages multiple times.
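
As a hypothetical illustration of the problem, consider a Dockerfile layer like the sketch below: adding or removing a single package from the list invalidates Docker's layer cache, and the next build downloads every package again.

```dockerfile
FROM debian:buster-slim

# Editing this package list invalidates the layer cache:
# the next build re-runs the whole RUN line and downloads
# every package from the Debian feeds again.
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    git \
    && rm -rf /var/lib/apt/lists/*
```

With a caching proxy in place, the RUN line still re-executes, but the downloads are served from the local cache instead of the internet.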

Since you already have Docker installed on your machine, it is easy to run the proxy inside a container:

  • We will use a pre-existing image and configure it to suit our needs.
  • We will configure a proxy that will not require authentication and will serve requests from all the clients on your local network.
  • The proxy used is Squid; feel free to customize the configuration to suit your needs.

Configure Your Local Proxy

Create a local folder on your machine to store the configuration files and the cache for your proxy and enter that folder:

$ mkdir squid && cd squid

Create two sub-folders named "cfg" and "cache":

$ mkdir cfg cache

Download the squid container:

$ docker pull woahbase/alpine-squid:x86_64

Run it for the first time to populate the configuration folder:

$ docker run -it --rm -v $(pwd)/cfg:/etc/squid -v $(pwd)/cache:/var/cache/squid woahbase/alpine-squid:x86_64

This will print out some messages. When Squid startup has completed, press Ctrl+C and the container will shut down. You will notice that a file cfg/squid.conf has been created and:

  • You can configure the proxy by editing this file.
  • You must edit it as root because it was created inside the container.

See a sample configuration for a proxy with:

  • No authentication
  • Using 20GB of cache
  • Accessible to all clients on the 192.168.1.xxx network.

It's a long file; click on the collapsible section to see it:

squid.conf
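
The generated file contains a large number of commented directives; the ones relevant to the setup described above look roughly like the illustrative excerpt below (not the full file; adjust the subnet and cache size to your network):

```
# Listen on the default proxy port
http_port 3128

# Allow clients from the local 192.168.1.xxx network, no authentication
acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all

# 20 GB of on-disk cache
cache_dir ufs /var/cache/squid 20000 16 256
maximum_object_size 512 MB
```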

Start Your Local Proxy

Run the proxy container:

$ docker run -d \
  --restart always \
  --name squid --hostname squid \
  -c 256 -m 256m \
  -e PGID=1000 -e PUID=1000 \
  -p 3128:3128 -p 3129:3129 \
  -v $(pwd)/cfg:/etc/squid \
  -v $(pwd)/cache:/var/cache/squid \
  -v /etc/hosts:/etc/hosts:ro \
  -v /etc/localtime:/etc/localtime:ro \
  woahbase/alpine-squid:x86_64

Configure Docker to Use Your New Proxy

Create a local file on your PC under $HOME/.docker/config.json:

$ touch $HOME/.docker/config.json

Add your proxy IP address to the file. Don't use your machine name because it may not be resolved correctly inside a container:

$HOME/.docker/config.json
{
    "proxies": {
        "default": {
            "httpProxy": "http://192.168.1.5:3128",
            "httpsProxy": "http://192.168.1.5:3128",
            "noProxy": "localhost,127.0.0.1,*.local,192.168.*"
        }
    }
}
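
Docker reads this file silently; a malformed file simply means the proxy settings are ignored. One way to sanity-check the JSON, sketched here against a temporary copy (the address 192.168.1.5 is an example):

```shell
# Write the proxy settings to a temporary file and check that they parse as JSON.
cat > /tmp/docker-proxy-test.json <<'EOF'
{
    "proxies": {
        "default": {
            "httpProxy": "http://192.168.1.5:3128",
            "httpsProxy": "http://192.168.1.5:3128",
            "noProxy": "localhost,127.0.0.1,*.local,192.168.*"
        }
    }
}
EOF

# json.tool exits with an error on invalid JSON
python3 -m json.tool /tmp/docker-proxy-test.json > /dev/null && echo "valid JSON"
```

If the file parses, copy the same content into $HOME/.docker/config.json.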

From now on, you should notice that packages are downloaded only once and your builds will be much faster.

Dockerfile Tip - Cache NPM Packages

When creating a Node.js project, you will most likely describe it in a package.json file, including the project's dependencies. Check out how to:

  • Prevent npm install from being run every time you modify your source-code.
  • Isolate the build of npm packages that require native build steps.

For this example, let's use the hypothetical package.json below, for an Express.js REST API that uses SQLite for storage:

package.json
{
  "name": "myproject",
  "version": "1.0.0",
  "description": "My own project",
  "main": "index.js",
  "dependencies": {
    "body-parser": "^1.18.3",
    "cookie-parser": "^1.4.5",
    "express": "^4.16.4",
    "jsonwebtoken": "^8.5.1",
    "sqlite3": "^4.0.4"
  }
}

Isolate npm install

Create a dependencies stage that installs the dependencies from the package.json before the deploy stage:

Dockerfile
# Install npm dependencies
ARG NODE_VERSION=14-buster
FROM arm64v8/node:${NODE_VERSION} AS dependencies
 
WORKDIR /app
COPY --chown=node:node ./package.json /app
RUN npm config set jobs max && npm install

Then just copy node_modules to your final deploy stage:

Dockerfile
# Prepare the final container
FROM arm64v8/node:${NODE_VERSION}-slim AS deploy
 
# Add application source-code
USER node
WORKDIR /home/node/app
COPY --from=dependencies --chown=node:node /app/node_modules /home/node/app/node_modules
COPY --chown=node:node . /home/node/app/
 
# Run Node.js app with Express listening on port 8000
EXPOSE 8000
CMD [ "node", "/home/node/app/index.js" ]
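
One related detail: the final COPY . sends the whole build context to the image, so a node_modules folder left over from running npm install locally would be copied over the one produced in the dependencies stage. A minimal .dockerignore (an illustrative sketch; extend it to match your project) avoids that and also keeps the build context small:

```
node_modules
npm-debug.log
.git
```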

Notice a few interesting points:

  • The dependencies stage uses the full Node.js Docker image, whereas the deploy stage uses the slim version. This is convenient:
    • You can easily run your npm install commands without having to figure out build dependencies, etc.
    • You have a lean image in the end, using less flash storage.
  • In the deploy stage, we choose to run as user node instead of root.
    • The user node is provided by default in the official Node.js Docker images.
    • You may need to copy some files explicitly using COPY --chown=node:node, depending on whether your app modifies any of those files or directory contents.

Isolate the Build of NPM Packages with Native Dependencies

Building native packages may take some time, especially if you opt to use Arm emulation (QEMU) instead of cross-builds. To prevent those packages from rebuilding whenever you change something in your package.json, you can add an extra native-deps stage before dependencies.

This is a good idea for the sqlite3 package from our example because, unless told otherwise, it builds libsqlite from source instead of linking against a system-installed version. Add the stage before dependencies:

Dockerfile
# Install sqlite3 from source, takes a while to build so doing it isolated
ARG NODE_VERSION=14-buster
FROM arm64v8/node:${NODE_VERSION} AS native-deps
 
# Build node module sqlite3
WORKDIR /app
RUN npm install sqlite3

Then, in the dependencies stage, copy the pre-built sqlite3 npm package before running npm install:

Dockerfile
# Install npm dependencies
ARG NODE_VERSION=14-buster
FROM arm64v8/node:${NODE_VERSION} AS dependencies
 
WORKDIR /app
COPY --from=native-deps /app/node_modules /app/node_modules
COPY --chown=node:node ./package.json /app
RUN npm config set jobs max && npm install

Build sqlite3 Linking to libsqlite from Debian Feeds

This is not exactly a tip for improving build speed, but you may be curious about how to do it.

In the native-deps stage, use the arguments --build-from-source --sqlite=/usr as described in the sqlite3 documentation. You don't need to install libsqlite using apt because it's there by default in the full version of the Node.js Docker image:

Dockerfile
# Only update this line
RUN npm install --build-from-source --sqlite=/usr sqlite3

In the deploy stage, install libsqlite, since it's not available in the slim version:

Dockerfile
# Install dependencies from Debian feeds
RUN apt-get update && apt-get install -y --no-install-recommends \
    libsqlite3-0 \
    && rm -rf /var/lib/apt/lists/*
