Getting started with Docker

You may have heard about Docker and how it’s a “container for applications and services”, as well as how it’s more efficient than using Virtual Machines. Of course, all the cool web tech people have adopted it, but what about everyday normal people?

Turns out docker is for everyone. The following is my journey with docker on a Debian VM.


I primarily use Windows… I only recently upgraded to Windows 10 from 7 in February because I was ‘forced’ to work from home due to Covid-19. After spending about a day reinstalling all my applications MANUALLY from exe and msi files downloaded off the web, I finally felt ready to do some work.

Automating the deployment and provisioning of machines can be done with a package manager (e.g. Chocolatey) or with a program like Ninite.

I’m pretty sure you could use Docker for Windows to do this (but that’s another story), but my initial experience with Docker on Windows was a bad one. Back then, Docker for Windows was just a Linux VM hosted by what looked to be a re-branded Oracle VirtualBox. It seemed like a really bad way to do it… fortunately, much has changed since, and I have heard that it is a lot better now.

That still didn’t convince me to use it on Windows though, so I run a Debian VM on a Windows 10 host using VirtualBox.


Docker’s official documentation has guides on installing it.

With Debian, there are several methods to install. Interestingly, the easiest option (install via the convenience script) is at the bottom of the page.

There’s probably a good reason for that, as piping a script straight into a shell can be seen as a security issue. It’s always important to look at the scripts before running them…

$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh

Once installed, you can invoke docker without arguments to get a list of commands and options:

$ sudo docker

As you know, having to type sudo all the time is a pain, so be sure to add your user to the docker group (-a appends, -G names the group):

$ sudo usermod -aG docker <your-user>
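Note that group changes only take effect for new login sessions. A quick way to check whether the group is active in your current shell:

```shell
# Group membership is read at login, so log out/in (or run 'newgrp docker')
# before expecting the change. List the groups your current session has:
id -nG
# If 'docker' isn't in that list yet, the daemon socket will still
# require sudo.
```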

Use Case 1 – Build Environment Image

It’s a pain to have to keep re-installing specific versions of gcc and cmake every time you deploy a build environment. We can use a Dockerfile to declare how a new image should be created.

Take a look at the Dockerfile below. Let’s examine each line:

FROM debian:bullseye-slim
LABEL Description="Image for building embedded ARM on metal"

RUN apt-get update && apt-get upgrade -y && \
    apt-get install -y curl build-essential

ARG ARMGCC_DIR="/opt"
ARG CMAKE_DIR="/usr/local"

RUN curl -L <arm-gcc-tarball-url> | tar -xjv -C ${ARMGCC_DIR}
RUN curl -L <cmake-installer-url> > cmake-install.sh \
            && chmod +x cmake-install.sh && \
            ./cmake-install.sh --skip-license --prefix=${CMAKE_DIR}

ENV PATH "${ARMGCC_DIR}/gcc-arm-none-eabi-9-2019-q4-major/bin:$PATH"

ENV ARMGCC_DIR="${ARMGCC_DIR}/gcc-arm-none-eabi-9-2019-q4-major"

We use debian as the base image. This comes with /bin, /sbin, /usr/bin etc. populated with sensible defaults.

FROM debian:bullseye-slim

We use the ‘slim’ version as it’s only 40MB or so. You might be thinking: why not use Alpine Linux as the base image, since that’s only about 6MB? I did initially try the Alpine Linux images, but they were so lightweight they did not have the lib64 shared libraries, so I couldn’t run any of the 64-bit binaries properly.

I did try installing libc6-compat; it got a bit further but failed at another point, at which point I decided that the 30MB saving was not worth it!

LABEL Description="Image for building embedded ARM on metal"

It’s always good to give your image a brief description of what it does.

RUN apt-get update && apt-get upgrade -y && \
    apt-get install -y curl build-essential

We need curl to download and install the 3rd party packages that we want. Sometimes we want to do this independently of the package manager so we have more control over the exact version.

build-essential contains gcc and make, as well as other libraries required to build and run compiled output.

ARG ARMGCC_DIR="/opt"
ARG CMAKE_DIR="/usr/local"

Because we are installing 3rd party packages, we want to place them in a specific location so our build scripts / makefiles can reference them. We place gcc-arm in the /opt directory as opposed to /usr/bin because we might want multiple versions installed, e.g. 2019-q4 alongside 2018-q2.

With CMake, however, we generally don’t have multiple versions installed, so it can go into /usr/local.
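Keeping toolchains versioned under /opt means switching versions later is just a PATH change. A minimal sketch (the second directory name below is hypothetical, just to show the layout):

```shell
# Two toolchains could sit side by side under /opt, e.g.:
#   /opt/gcc-arm-none-eabi-9-2019-q4-major
#   /opt/gcc-arm-none-eabi-7-2018-q2-update   (hypothetical)
# Selecting one is just a matter of which bin/ directory leads the PATH:
TOOLCHAIN=/opt/gcc-arm-none-eabi-9-2019-q4-major
PATH="${TOOLCHAIN}/bin:${PATH}"
echo "${PATH%%:*}"   # the first PATH entry wins binary lookup
```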

RUN curl -L <arm-gcc-tarball-url> | tar -xjv -C ${ARMGCC_DIR}
RUN curl -L <cmake-installer-url> > cmake-install.sh \
            && chmod +x cmake-install.sh && \
            ./cmake-install.sh --skip-license --prefix=${CMAKE_DIR}

These commands download and extract the packages to the locations we gave before.

Executing downloaded shell scripts is always risky, so be aware!
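One mitigation is to check the download against a published checksum before running it. A sketch, where the installer file and expected hash are stand-ins (real releases publish their checksums next to the download):

```shell
# Stand-in for a downloaded installer: an empty file, whose SHA-256 is
# a well-known constant.
: > installer.sh
EXPECTED="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
ACTUAL=$(sha256sum installer.sh | cut -d' ' -f1)
if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH - do not run" >&2
fi
```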

Finally, we add our ARM GCC’s bin directory to the environment PATH:

ENV PATH "${ARMGCC_DIR}/gcc-arm-none-eabi-9-2019-q4-major/bin:$PATH"


Build the Image

We build the Dockerfile image with this command:

docker build -f Dockerfile -t gcc-arm-none-build:latest .

-f – specify the Dockerfile
-t – give the output image the name ‘gcc-arm-none-build:latest’
. – provide the current directory as the build ‘context’ – we don’t really use this

In some cases the context is used so the image build can access certain files (e.g. via COPY). However, when we run the container we will mount a folder so it can access our files.
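If you do start using the context with COPY instructions, a .dockerignore file keeps the context that gets sent to the daemon small. A sketch with assumed file patterns for a project like this:

```shell
# Hypothetical .dockerignore for this project; everything listed is
# excluded from the build context sent to the Docker daemon.
cat > .dockerignore <<'EOF'
.git/
build/
*.o
*.elf
EOF
cat .dockerignore
```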

You should see docker step through each instruction and finish by tagging the image.


Upload the Image to Dockerhub

If you sign up for a Docker Hub account, you can upload your image to their registry. That way your image will be available to everyone on the internet.

First, create a Docker Hub account, then run the command:

docker login

We need to rename (tag) the image we built to include the name of the personal repo that we have write access to.

docker tag gcc-arm-none-build:latest adriangin/gcc-arm-none-build:latest
docker push adriangin/gcc-arm-none-build:latest

Run a container based off the Image

We don’t execute images directly per se. We instead start a container based off an image. This way we can deploy multiple containers of the SAME image to scale operations if needed.

In our use case we just want a build environment, but we could run a separate container for each project.

To run the container use:

docker run --rm -it -v ~:/home/$USER -v /etc/group:/etc/group:ro -v /etc/passwd:/etc/passwd:ro --workdir="/home/$USER" --user $(id -u):$(id -g) gcc-arm-none-build:latest

--rm – delete the container after we exit / are done with it
-it – start the container interactively and allow us to use stdin/stdout via the terminal – this effectively gives us a command prompt
-v ~:/home/$USER – mount our home directory at /home/<your user> in the container; this makes us feel at home, with access to all our files
-v /etc/group:/etc/group:ro -v /etc/passwd:/etc/passwd:ro – mount the host’s user and group databases read-only, so the container knows about our host’s users and groups
--workdir="/home/$USER" – assign the ‘default’ working directory for the container
--user $(id -u):$(id -g) – run as our current user rather than root
gcc-arm-none-build:latest – use this image (previously created) to start the container
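That’s a lot to type each time; one option is a small wrapper script kept alongside the project (the script name here is my own invention). This sketch only prints the assembled command so you can inspect it; swap echo for exec to actually launch the container:

```shell
#!/bin/sh
# Hypothetical dev-shell.sh: assembles the 'docker run' invocation above
# so the whole team launches the build container the same way.
USER_NAME="${USER:-$(id -un)}"
set -- docker run --rm -it \
    -v "$HOME:/home/$USER_NAME" \
    -v /etc/group:/etc/group:ro \
    -v /etc/passwd:/etc/passwd:ro \
    --workdir="/home/$USER_NAME" \
    --user "$(id -u):$(id -g)" \
    gcc-arm-none-build:latest
echo "$@"            # inspect the command; use exec "$@" to run it
```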

Test the container

After running the container, you should be able to see all your home directory files as well as have access to cmake and arm-gcc. Doing a whereis will show that arm-gcc is indeed installed at the location we specified in our Dockerfile.
You can then stop the container with ‘exit’, and because we started the container with --rm, it will be removed once we quit the interactive session.


Now you can share this Dockerfile with your dev team and be sure that you’re all working off the same build environment. Or better yet, because you uploaded your image to Docker Hub, they can just start a container from your image.





