Version: 6

How to Speed-up Docker Image Builds on Linux


Building Docker containers can be time-consuming, especially when a container relies on packages to set up a build or execution environment for your applications.

This article gathers tips and tricks to make your builds easier and faster:

  • Use a local proxy for Debian packages.
  • Dockerfile tricks on specific use-cases.

This article complies with the Typographic Conventions for Torizon Documentation.


Run a Local Proxy

If you run apt-get or another package manager to download and install packages and later change the package list, Docker will re-run the whole command and repeat all the downloads, costing time and connection bandwidth. You can mitigate this by configuring a proxy for your Docker containers: the proxy caches download requests and avoids downloading the same packages multiple times.
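For illustration, consider a hypothetical layer like the one below: any edit to the package list invalidates Docker's cache for that layer, and all the downloads are repeated on the next build. The package names are only examples.

```dockerfile
# Hypothetical example - editing the package list below invalidates this
# layer, so the next build re-runs apt-get and re-downloads every package.
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    cmake \
    && rm -rf /var/lib/apt/lists/*
```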

Since you already have Docker installed on your machine, it is easy to run your proxy inside a container:

  • We will use a pre-existing image and configure it to suit our needs.
  • We will configure a proxy that will not require authentication and will serve requests from all the clients on your local network.
  • The proxy used is squid; feel free to customize the configuration to suit your needs.

Configure Your Local Proxy

Create a local folder on your machine to store the configuration files and the cache for your proxy and enter that folder:

$ mkdir squid && cd squid

Create two sub-folders named "cfg" and "cache":

$ mkdir cfg cache

Download the squid container:

$ docker pull woahbase/alpine-squid:x86_64

Run it for the first time to populate the configuration folder:

$ docker run -it --rm -v $(pwd)/cfg:/etc/squid -v $(pwd)/cache:/var/cache/squid woahbase/alpine-squid:x86_64

This will print out some messages. When squid startup has completed, press Ctrl+C and the container will shut down. You will notice that a file has been created under cfg/squid.conf and:

  • You can configure the proxy by editing this file.
  • You must edit it as root because it was created inside the container.

See a sample of the configuration for a proxy with:

  • No authentication
  • Using 20GB of cache
  • Accessible to all clients on the network.

It's a long file; click on the collapsible section to see it:

# Recommended minimum configuration:
visible_hostname squid

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
# acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
# acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
# acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
# acl localnet src fc00::/7 # RFC 4193 local private network range
# acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines

acl localnet src 172.17.0.0/16 # docker
acl localnet src 192.168.0.0/24 # internal (CHANGE IT TO MATCH YOUR LAN)

acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http

acl purge method PURGE

# authenticated proxy
# auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/.htpasswd
# auth_param basic realm proxy
# acl authenticated proxy_auth REQUIRED

# Recommended minimum Access Permission configuration:
# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager
http_access deny purge

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
http_access deny to_localhost


# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localhost

# enable this bit if using without authentication
http_access allow localnet
http_reply_access allow localnet
icp_access allow localnet
always_direct allow localnet

# otherwise use htpasswd authentication for hosts
#http_access allow authenticated localnet
#http_reply_access allow authenticated localnet
#icp_access allow authenticated localnet
#always_direct allow authenticated localnet

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 3128
http_port 3129 intercept

# Uncomment and adjust the following to add a disk cache directory.
cache_dir aufs /var/cache/squid 20000 16 256 # HERE WE CONFIGURE 20GB OF CACHE
cache_replacement_policy heap LFUDA
cache_mem 128 MB

maximum_object_size 1024 MB
maximum_object_size_in_memory 10240 KB

# Leave coredumps in the first cache dir
coredump_dir /var/cache/squid

allow_underscore on

dns_defnames on
dns_v4_first on

access_log /dev/stdout
cache_log /dev/stdout
cache_store_log /dev/stdout

httpd_suppress_version_string on
shutdown_lifetime 5 seconds

# forwarded_for transparent
forwarded_for delete
via off

# from
request_header_access Allow allow all
request_header_access Authorization allow all
request_header_access WWW-Authenticate allow all
request_header_access Proxy-Authorization allow all
request_header_access Proxy-Authenticate allow all
request_header_access Cache-Control allow all
request_header_access Content-Encoding allow all
request_header_access Content-Length allow all
request_header_access Content-Type allow all
request_header_access Date allow all
request_header_access Expires allow all
request_header_access Host allow all
request_header_access If-Modified-Since allow all
request_header_access Last-Modified allow all
request_header_access Location allow all
request_header_access Pragma allow all
request_header_access Accept allow all
request_header_access Accept-Charset allow all
request_header_access Accept-Encoding allow all
request_header_access Accept-Language allow all
request_header_access Content-Language allow all
request_header_access Mime-Version allow all
request_header_access Retry-After allow all
request_header_access Title allow all
request_header_access Connection allow all
request_header_access Proxy-Connection allow all
request_header_access User-Agent allow all
request_header_access Cookie allow all
request_header_access All deny all

# Response Headers Spoofing
reply_header_access Via deny all
reply_header_access X-Cache deny all
reply_header_access X-Cache-Lookup deny all
# Add any of your own refresh_pattern entries above these.
refresh_pattern -i \.rpm$ 129600 100% 129600 refresh-ims override-expire
refresh_pattern -i \.iso$ 129600 100% 129600 refresh-ims override-expire
refresh_pattern -i \.deb$ 129600 100% 129600 refresh-ims override-expire
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
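One detail worth noting from the configuration above: the size argument of cache_dir is expressed in megabytes, which is why 20000 corresponds to roughly 20GB:

```shell
# The third argument of "cache_dir aufs /var/cache/squid 20000 16 256"
# is the cache size in megabytes.
CACHE_MB=20000
CACHE_GB=$((CACHE_MB / 1000))
echo "${CACHE_GB} GB of disk cache"
```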

Start Your Local Proxy

Run the proxy container:

$ docker run -d \
--restart always \
--name squid --hostname squid \
-c 256 -m 256m \
-e PGID=1000 -e PUID=1000 \
-p 3128:3128 -p 3129:3129 \
-v $(pwd)/cfg:/etc/squid \
-v $(pwd)/cache:/var/cache/squid \
-v /etc/hosts:/etc/hosts:ro \
-v /etc/localtime:/etc/localtime:ro \
woahbase/alpine-squid:x86_64

Configure Docker to Use Your New Proxy

Create a local file on your PC under $HOME/.docker/config.json:

$ touch $HOME/.docker/config.json

Add your proxy IP address to the file. Don't use your machine name because it may not be resolved correctly inside a container:

{
    "proxies": {
        "default": {
            "httpProxy": "",
            "httpsProxy": "",
            "noProxy": "localhost,127.0.0.1,*.local,192.168.*"
        }
    }
}
From now on, you should notice that packages are downloaded only once and your builds will be much faster.
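If you want to verify that the cache is working, a minimal throwaway Dockerfile like the one below makes it easy to observe (the base image and package names are only examples): build it once, then rebuild with --no-cache and watch the apt downloads finish almost instantly, because they are now served by the proxy.

```dockerfile
# Throwaway test image - the package names are only examples.
# First build: packages come from the network and fill the proxy cache.
# Rebuild with "docker build --no-cache": apt re-runs, but the downloads
# are served from the local squid cache.
FROM debian:bookworm
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    make \
    && rm -rf /var/lib/apt/lists/*
```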

Dockerfile Tip - Cache NPM Packages

When creating a Node.js project you will most likely describe the project in a package.json file, including the project's dependencies. Check out how to:

  • Prevent npm install from being run every time you modify your source-code.
  • Isolate the build of npm packages that require native build steps.

For this example, let's use the theoretical package.json below. We plan to build a theoretical Express.js REST API + SQLite for storage:

{
    "name": "myproject",
    "version": "1.0.0",
    "description": "My own project",
    "main": "index.js",
    "dependencies": {
        "body-parser": "^1.18.3",
        "cookie-parser": "^1.4.5",
        "express": "^4.16.4",
        "jsonwebtoken": "^8.5.1",
        "sqlite3": "^4.0.4"
    }
}

Isolate npm install

Create a dependencies stage that installs the dependencies from the package.json before the deploy stage:

# Install npm dependencies
FROM arm64v8/node:${NODE_VERSION} AS dependencies

WORKDIR /app
COPY --chown=node:node ./package.json /app
RUN npm config set jobs max && npm install

Then just copy node_modules to your final deploy stage:

# Prepare the final container
FROM arm64v8/node:${NODE_VERSION}-slim AS deploy

# Add application source-code
USER node
WORKDIR /home/node/app
COPY --from=dependencies --chown=node:node /app/node_modules /home/node/app/node_modules
COPY --chown=node:node . /home/node/app/

# Run Node.js app with Express listening on port 8000
CMD [ "node", "/home/node/app/index.js" ]

Notice a few interesting points:

  • The dependencies stage uses the full Node.js Docker image, whereas the deploy stage uses the slim version. This is convenient:
    • You can easily run your npm install commands without having to figure out build dependencies, etc.
    • You have a lean image in the end, using less flash storage.
  • In the deploy stage, we choose to run as user node instead of root.
    • The user node is provided by default in the official Node.js Docker images.
    • You may need to copy some files explicitly using COPY --chown=node:node, depending on whether your app will modify any of those files or directory contents.
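One caveat with the final COPY --chown=node:node . /home/node/app/ step: if you have also run npm install locally, the node_modules directory from your working copy would be copied over the one prepared in the dependencies stage. A .dockerignore file avoids that; the entries below are a minimal sketch, adjust them to your project:

```text
node_modules
npm-debug.log
.git
```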

Isolate the Build of NPM Packages with Native Dependencies

Building native packages may take some time, especially if you opt to use Arm emulation (qemu) instead of cross-builds. To prevent those packages from being rebuilt whenever you change something in your package.json, you can add an extra native-deps stage before dependencies.

This is a good idea for the package sqlite3 from our example, because unless explicitly stated, it builds libsqlite from source instead of linking against a system-installed version. Add the stage before dependencies:

# Install sqlite3 from source, takes a while to build so doing it isolated
FROM arm64v8/node:${NODE_VERSION} AS native-deps

# Build node module sqlite3
WORKDIR /app
RUN npm install sqlite3

Then, in dependencies, copy the pre-built sqlite3 npm package before running npm install:

# Install npm dependencies
FROM arm64v8/node:${NODE_VERSION} AS dependencies

WORKDIR /app
COPY --from=native-deps /app/node_modules /app/node_modules
COPY --chown=node:node ./package.json /app
RUN npm config set jobs max && npm install

Build sqlite3 Linking to libsqlite from Debian Feeds

This is not exactly a tip on improving build speed, but you may be curious about how to do it.

In the native-deps stage, use the arguments --build-from-source --sqlite=/usr as described in its documentation. You don't need to install libsqlite using apt because it's there by default in the full version of the Node.js Docker image:

# Only update this line
RUN npm install --build-from-source --sqlite=/usr sqlite3

In the deploy stage install libsqlite, since it's not available in the slim version:

# Install dependencies from Debian feeds
RUN apt-get update && apt-get install -y --no-install-recommends \
libsqlite3-0 \
&& rm -rf /var/lib/apt/lists/*
