Getting started with Containers — Part-II

PARAB
8 min read · Aug 5, 2021

Setting up a raw Container

Hello guys 🙋🏻‍♂️🙋🏻‍♂️…

In this article, we will set up a raw container of our own, so that you can see how services like ‘Docker’ have made our lives super-easy by abstracting away the plethora of repetitive work developers used to do just to set up even a simple container. Docker has also made it easy to manage containers in a very efficient manner.

But again, here we will not be using Docker; we are setting up our container from scratch 😄

Note :- This article is a continuation of my previous article on the Basics of Containers. If you are new and want to know the What-Why-How of Containers, then I would advise you to take a look at my previous article by clicking here.

Let’s first understand: how are we gonna set up our new containers?

This can broadly be seen in 3 steps (which are the basis of the concept of Containers):

  1. Use change root (chroot) to isolate the root directory of our new process: in this case, our Container (which, you will see, is nothing but a folder).
  2. Use namespaces to hide processes from other processes.
  3. Use control groups for resource planning.

Now It’s time to see all steps one-by-one in action 🙌

1. Change Root

P.S. - Run every command as root.
P.S. - All commands are for a Linux environment.

→ Make a new folder.

  • This will eventually become our container.
  • This folder will act as “root” folder for our container.
mkdir raw-container

But this folder has nothing in it, so if we chroot into it right now, it will show an error, because it is just empty … it doesn’t even have bash….

Try to access bash in our “chroot-ed” environment…
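A quick sanity check (a sketch: run it from the directory where you created raw-container; the exact error wording may differ by distro):

chroot raw-container bash
# typically fails with something like:
# chroot: failed to run command 'bash': No such file or directory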

Let’s solve this problem…

→ Copy bash and its dependency libraries into our new folder.

All the commands that you run are really just programs stored in the /bin folder, and to execute, these programs depend on libraries stored in the /lib folder … So we have to copy those programs, as well as their libraries, into our container’s /bin and /lib folders.

Let’s first try setting up ‘bash’ and ‘ls’ in our container.

— Make ‘bin’, ‘lib’ and ‘lib64’ folders in our container folder

mkdir raw-container/bin raw-container/lib raw-container/lib64

— Copy bash and ls programs to our container

cp /bin/bash /bin/ls raw-container/bin

To see which libraries bash and ls depend on, execute

ldd /bin/bash /bin/ls
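On an Ubuntu 18.04-era machine the output looks roughly like this (load addresses trimmed; the exact names and versions will differ on your distro):

/bin/bash:
        linux-vdso.so.1 (0x…)
        libtinfo.so.5 => /lib/x86_64-linux-gnu/libtinfo.so.5 (0x…)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x…)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x…)
        /lib64/ld-linux-x86-64.so.2 (0x…)
/bin/ls:
        linux-vdso.so.1 (0x…)
        libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x…)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x…)
        libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x…)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x…)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x…)
        /lib64/ld-linux-x86-64.so.2 (0x…)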

We will only copy the libraries for which ldd prints a full path (those are the actual shared-object files that are required; entries like linux-vdso are provided by the kernel and can be ignored). The exact names depend on your distro, so copy whatever ldd lists on your machine.

cp /lib/x86_64-linux-gnu/libtinfo.so.5 /lib/x86_64-linux-gnu/libdl.so.2 /lib/x86_64-linux-gnu/libc.so.6 /lib/x86_64-linux-gnu/libselinux.so.1 /lib/x86_64-linux-gnu/libpcre.so.3 /lib/x86_64-linux-gnu/libpthread.so.0 raw-container/lib/

For lib64

cp /lib64/ld-linux-x86-64.so.2 raw-container/lib64

Now you can chroot your container by using the command

chroot raw-container bash

Now, this environment only knows the bash and ls commands (plus bash’s built-ins like pwd, cd and echo for navigation), because those are the only programs we copied into this folder.

Make a file

bash-4.4# echo "Hello Container" >> FirstFile.txt

— Now try to cat it.

If you then run the pwd command, you will see something like this:
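Inside the chroot, the session looks roughly like this (a sketch: cat fails because we never copied it in, and pwd confirms we are sitting at the new root):

bash-4.4# cat FirstFile.txt
bash: cat: command not found
bash-4.4# pwd
/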

This confirms that you are now in the root directory of your new environment. It also explains why bash (or anything else) was not working before we copied the programs and libraries: when we chroot into this folder and ask it to run bash, it looks for /bin/bash inside that directory, because it has no access to directories or files outside this individual container. Since the folder was empty, it showed an error.

Fun_Fact — Before being called Containers, processes run in ‘chroot-ed’ environments were called jailed processes (and we can now totally understand why😁).

Now, make some more programs work in your environment, like cat; it will give you hands-on experience with the /bin and /lib stuff. A rough sketch is given below.
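A sketch of wiring up cat (run these on the host, outside the chroot; the library list comes from ldd and will differ on your machine):

cp /bin/cat raw-container/bin
ldd /bin/cat          # see which libraries cat needs
# copy every entry that ldd prints with a full path, for example:
cp /lib/x86_64-linux-gnu/libc.so.6 raw-container/lib/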

At this point, you have successfully isolated a process in our chroot-ed environment so that it can’t see the outside world (meaning it cannot access files or directories outside its root folder, which is our raw-container folder).

But let’s say we are running multiple chroot environments. Even though they cannot see outside their respective root folders, they can still see all the processes running on the host machine (because, after all, the host is the same for all of them). This can be a huge security issue, and in the next step we will see how namespaces solve this problem for us.

2. Hiding processes from other processes (Implementing namespaces)

Namespaces are the Linux feature that allows us to make processes believe they have their own isolated instance of the global resources. Examples of such resources are process IDs, hostnames, user IDs, file names, network access, and inter-process communication.
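You can see these namespaces for yourself on any modern Linux machine; every process has a set of namespace links under /proc:

ls -l /proc/$$/ns
# shows entries like mnt, uts, ipc, net, pid and user; two processes that share a
# namespace point to the same id, e.g. pid -> 'pid:[<inode number>]'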

Note :- Now that we know how a chroot environment is made, it is quite tedious to copy every program and every one of its dependencies one by one. So, by God’s grace, we have tools for our chores. One of them is debootstrap. It will give us a bare-minimum chroot environment, so that we can proceed further.

  • Installing debootstrap
apt-get update -y
apt-get install debootstrap -y
  • Create Container
debootstrap --variant=minbase bionic /good-container

This will give us a ready-made, minimal Ubuntu 18.04 (‘bionic’) root filesystem that we can ‘chroot’ into.

But we will not chroot directly into it….

We will use the ‘unshare’ command to create new namespaces for our chroot environment, so that our “Container” thinks it has its own global resources.

unshare --mount --uts --ipc --net --pid --fork --user --map-root-user chroot /good-container bash 

— mount - to unshare the filesystem mount points
— uts - UTS namespaces provide isolation of two system identifiers: the hostname and the NIS domain name
— ipc - to unshare inter-process communication
— net - to unshare the network
— pid - PID namespaces isolate the process ID number space
— fork - creates a new process by duplicating the calling process
— user - a user namespace allows a process (that is unprivileged outside the namespace) to have root privileges while at the same time limiting the scope of that privilege to the namespace
— map-root-user - maps the current user and group to the root user inside the new user namespace

Now, we are in our “good-container”. As you can see, debootstrap has done its job so well that our container looks just like a normal Linux environment.
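A quick way to convince yourself that the namespaces are really new (run these inside the container you just entered; this assumes the hostname binary made it into the minbase image, which it normally does):

echo $$                   # prints 1: our bash is PID 1 in its new PID namespace
hostname changed-inside   # allowed here thanks to --map-root-user
hostname                  # prints changed-inside; the host's hostname is untouched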

At this stage, if you try to see the processes running on the machine (with ps, for example), it will show an error asking you to mount the proc filesystem in the container, and it cannot see the host’s processes at all. Hurray !!! our problem is solved…🙌🙌

But we do need to mount a few filesystems to make our container work nicely. Therefore,

mount -t proc none /proc   # process information, so tools like ps work
mount -t sysfs none /sys   # kernel and device information
mount -t tmpfs none /tmp   # a fresh, in-memory /tmp
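With /proc mounted, process listing works again, and because we are inside a fresh PID namespace it only shows this container’s own processes:

ps aux   # lists just a handful of processes (bash, ps), not the host's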

Hence, we used namespaces to limit the capabilities of our containers. Now our processes cannot see each other, cannot talk to each other, and cannot interfere with each other.

But, there is one more problem …🤔🤔🤔

The problem is that even though our containers can’t interact with each other, they still share all the physical resources of the host, with no caps set on them. So if one container runs a heavy CPU-intensive task and the host hands most of its resources to it, another container that suddenly needs more CPU will not be able to get it, because those resources are already allocated.

This can cause some serious issues, like DoS attacks. So, to solve this problem, we will be using “cgroups” in the next section.

3. Resource allocation (Using cgroups)

Note:- For making control groups (cgroups) we will be using a tool named cgroup-tools, plus htop to see the resources being used with nice visuals.

As this is a complex process and I don’t want to confuse beginners, we will not be going through all the commands in detail.

  • Install these tools at the host level (outside the unshare chroot environment).
apt-get install -y cgroup-tools htop
  • Follow the steps below to limit the memory and CPU available to your container.
# To create a control group named "limited-container"
cgcreate -g cpu,memory:/limited-container

# To see the processes running (find the PID of the process named unshare)
ps aux

# To move that process into the control group (7479 is the unshare PID found above)
cgclassify -g cpu,memory:limited-container 7479

# To see which process you have selected; this should show the PID of the unshare process
cat /sys/fs/cgroup/cpu/limited-container/tasks

# To see the cpu shares
cat /sys/fs/cgroup/cpu/limited-container/cpu.shares

# To set the cpu share to 5% of the available cpu
cgset -r cpu.cfs_period_us=100000 -r cpu.cfs_quota_us=$[ 5000 * $(getconf _NPROCESSORS_ONLN) ] limited-container

# To set the memory limit
cgset -r memory.limit_in_bytes=80M limited-container
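To double-check that the limits took effect, you can read the values straight back from the cgroup filesystem (paths assume cgroup v1, as used above):

cat /sys/fs/cgroup/cpu/limited-container/cpu.cfs_quota_us
cat /sys/fs/cgroup/memory/limited-container/memory.limit_in_bytes   # 80M is reported as 83886080 bytes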

And Finally you’ve done it.

Congratulations 🥂🥂 ……. You have built your container with your bare hands in this article. We have not dived into deeper topics like networking or setting up a proper file system because, as you can now imagine, even this small piece of work takes a lot of time and is already complex enough for a developer who just wants to ship code.

Wouldn’t it be great if someone did this repetitive work for us? That is exactly what Docker and the other open-source container players provide: they abstract away all of this core complexity so that we can make containers in an easy and simple manner.

Hence, in a future article we will be setting up a simple container using Docker, which will contain a simple Node server application.

Till then Bye 🙋🏻‍♂️

And Happy Learning ✌✌

Connect with me -

GitHub — @Parab-Mishra

LinkedIn — @parab-mishra

Twitter — @ParvMishra
