You may have heard about Docker and how it’s a “container” for applications and services, as well as how it’s more efficient than using Virtual Machines. Of course, all the cool web tech people have adopted it, but what about everyday normal people?
Turns out Docker is for everyone. The following is my journey with Docker on a Debian VM.
Environment
I primarily use Windows… I only upgraded from Windows 7 to Windows 10 in February because I was ‘forced’ to work from home due to Covid-19. After spending about a day MANUALLY reinstalling all my applications from exe and msi files downloaded off the web, I finally felt ready to do some work.
Automating the deployment and provisioning of machines like this can be done with a package manager (e.g. Chocolatey) or with a program like Ninite.
I’m pretty sure you could use Docker for Windows to do this (but that’s another story), but my first experience with Docker on Windows was a bad one. Back then, Docker for Windows was just a Linux VM hosted by what looked to be a re-branded Oracle VirtualBox. It seemed like a really bad way to do it… fortunately, much has changed since, and I hear it is a lot better now.
That still didn’t convince me to use it on Windows, though. So I run a Debian VM on a Windows 10 host using VirtualBox.
Installing
There are some guides on installing docker here:
https://docs.docker.com/engine/install/
With Debian, there are several methods to install. Interestingly, the easiest option (install via the convenience script https://docs.docker.com/engine/install/debian/#install-using-the-convenience-script) is at the bottom of the page.
There’s probably a good reason for that, as it can be seen as a security issue. It’s always important to look at a script before running it…
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
Once installed, you can invoke docker and get a list of available commands and ‘options’:
$ sudo docker
As you know, having to type sudo all the time is a pain, so be sure to add your user to the docker group (-aG appends the user to the group). You’ll need to log out and back in for the change to take effect.
$ sudo usermod -aG docker <your-user>
Use Case 1 – Build Environment Image
It’s a pain to have to keep re-installing versions of gcc and cmake each time you deploy a build environment. We can use Dockerfile to declare how a new image should be created.
Take a look at:
https://github.com/AdrianGin/dockerfiles/tree/master/arm-gcc-build
Let’s examine each line in the Dockerfile:
FROM debian:bullseye-slim

LABEL Description="Image for building embedded ARM on metal"

RUN apt-get update && apt-get upgrade -y && \
    apt-get install -y build-essential \
    curl

ARG ARMGCC_DIR="/opt"
ARG CMAKE_DIR="/usr/local"

RUN curl -L https://developer.arm.com/-/media/Files/downloads/gnu-rm/9-2019q4/gcc-arm-none-eabi-9-2019-q4-major-x86_64-linux.tar.bz2 | tar -xjv -C ${ARMGCC_DIR}

RUN curl -L https://github.com/Kitware/CMake/releases/download/v3.17.2/cmake-3.17.2-Linux-x86_64.sh > cmake-install.sh \
    && chmod +x cmake-install.sh && \
    ./cmake-install.sh --skip-license --prefix=${CMAKE_DIR}

ENV PATH "${ARMGCC_DIR}/gcc-arm-none-eabi-9-2019-q4-major/bin:$PATH"
ENV ARMGCC_DIR="${ARMGCC_DIR}/gcc-arm-none-eabi-9-2019-q4-major"
FROM debian:bullseye-slim
We use the ‘slim’ version as it’s only 40MB or so. You might be thinking: why don’t I use Alpine Linux as the base image, as that’s only 6MB? I did initially try the Alpine Linux images, but they were so lightweight they did not ship the glibc shared libraries, so I couldn’t run the prebuilt 64-bit binaries properly.
I did try installing libc6-compat, which got a bit further but failed at another point, at which point I decided that the ~30MB saving was not worth it!
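For the curious, the abandoned Alpine attempt looked roughly like this (a sketch from memory; the package names are assumptions on my part):

```dockerfile
# Sketch of the abandoned Alpine attempt – not the final image.
FROM alpine:3.11
# build-base is Alpine's rough equivalent of build-essential;
# libc6-compat provides a partial glibc compatibility layer.
RUN apk add --no-cache build-base curl libc6-compat
# The prebuilt arm-none-eabi binaries are linked against glibc,
# so even with libc6-compat they eventually failed to run.
```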
LABEL Description="Image for building embedded ARM on metal"
It’s always good to give your image a brief description of what it does.
RUN apt-get update && apt-get upgrade -y && \
    apt-get install -y build-essential \
    curl
We need curl to download and install the third-party packages that we want. Sometimes we want to do this independently of the package manager so we have more control over the exact version.
build-essential contains gcc and make, as well as other libraries required to build and run compiled output.
ARG ARMGCC_DIR="/opt"
ARG CMAKE_DIR="/usr/local"
Because we are installing third-party packages, we want to place them in a specific location so our build scripts / makefiles can reference them. We place gcc-arm in /opt as opposed to /usr/bin because we might want to have multiple versions installed, e.g. 2019-q4 alongside 2018-q2.
With CMake, however, we generally don’t have multiple versions installed, so it can go into /usr/local.
RUN curl -L https://developer.arm.com/-/media/Files/downloads/gnu-rm/9-2019q4/gcc-arm-none-eabi-9-2019-q4-major-x86_64-linux.tar.bz2 | tar -xjv -C ${ARMGCC_DIR}

RUN curl -L https://github.com/Kitware/CMake/releases/download/v3.17.2/cmake-3.17.2-Linux-x86_64.sh > cmake-install.sh \
    && chmod +x cmake-install.sh && \
    ./cmake-install.sh --skip-license --prefix=${CMAKE_DIR}
These commands download the packages and install them to the locations we defined above.
Executing downloaded shell scripts is always risky, so be aware!
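The `curl -L … | tar -xjv -C` pattern streams the download straight into tar without ever writing the archive to disk: -x extract, -j bzip2, -v verbose, -C change to the target directory first. A minimal local sketch of the same extraction, using a throwaway archive in place of the real toolchain:

```shell
#!/bin/sh
set -e
# Build a tiny .tar.bz2 that mimics the toolchain layout, then extract
# it with the same flags the Dockerfile uses (-f - reads from stdin,
# which is what the curl pipe feeds into tar).
workdir=$(mktemp -d)
mkdir -p "$workdir/gcc-demo/bin"
echo 'fake-toolchain' > "$workdir/gcc-demo/bin/tool"
tar -cjf "$workdir/pkg.tar.bz2" -C "$workdir" gcc-demo

mkdir -p "$workdir/opt"
# Stand-in for: curl -L <url> | tar -xj -C ${ARMGCC_DIR}
cat "$workdir/pkg.tar.bz2" | tar -xjf - -C "$workdir/opt"
cat "$workdir/opt/gcc-demo/bin/tool"
```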
Finally, add the path of our ARM GCC to the environment PATH:
ENV PATH "${ARMGCC_DIR}/gcc-arm-none-eabi-9-2019-q4-major/bin:$PATH"
Build the Image
We build the Dockerfile image with this command:
docker build -f Dockerfile -t gcc-arm-none-build:latest .
-f specifies the Dockerfile to use
-t tags the output image as ‘gcc-arm-none-build:latest’ (the same name we tag and push later)
. provides the current directory as the build ‘context’ – we don’t actually use it here
In some cases the context is useful so the image can access certain files at build time. However, when we run the container we will mount a folder instead, so it can access our files.
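If we did want files baked into the image at build time, the context is where they would come from – a hypothetical addition (my-project is a made-up name for illustration):

```dockerfile
# Hypothetical: copy a project from the build context into the image.
# We don't do this here; the run step bind-mounts the project instead.
COPY ./my-project /opt/my-project
```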
You should see docker step through each instruction in the Dockerfile and finish by reporting the new image ID.
Upload the Image to Docker Hub
If you sign up for a Docker Hub account, you can upload your image to their registry. That way your image will be available to everyone on the internet.
First, create a Docker Hub account, then run the command:
docker login
We need to re-tag the image we built to include the name of the personal repo we have write access to.
docker tag gcc-arm-none-build:latest adriangin/gcc-arm-none-build:latest
docker push adriangin/gcc-arm-none-build:latest
Run a container based off the Image
We don’t execute images directly, per se. Instead, we start a container based on an image. This way we can deploy multiple containers from the SAME image to scale operations if needed.
In our use case we want a single build environment, but we could run a separate container for each project.
To run the container use:
docker run --rm -it \
  -v ~:/home/$USER \
  -v /etc/group:/etc/group:ro \
  -v /etc/passwd:/etc/passwd:ro \
  --workdir="/home/$USER" \
  --user $(id -u):$(id -g) \
  gcc-arm-none-build:latest
--rm – delete the container after we exit / are done with it
-it – interactive with a terminal attached, so we get a usable shell inside the container
-v ~:/home/$USER – mount our home directory into the container so it can see our project files
-v /etc/group and /etc/passwd (read-only) – so the container knows about our host users and groups
--workdir – start the shell in our mounted home directory
--user $(id -u):$(id -g) – run as our host user and group, so build output isn’t owned by root
Test the container

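A quick way to smoke-test the image is to ask each tool for its version, using the image name from the build step:

```shell
docker run --rm gcc-arm-none-build:latest arm-none-eabi-gcc --version
docker run --rm gcc-arm-none-build:latest cmake --version
```

Both should print their version banners (the 9-2019-q4 toolchain and CMake 3.17.2 respectively). If they do, the install locations and the PATH we set in the Dockerfile are wired up correctly.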
Summary
Now you can share this Dockerfile with your dev team and be sure that you’re all working off the same build environment. Or better yet, because you uploaded your image to Docker Hub, they can just start a container from your image.