IOT (Inception Of Things)
https://cdn.intra.42.fr/pdf/pdf/122428/en.subject.pdf
The system administration project Inception of Things uses K3S and K3D to deepen our knowledge and provide an excellent introduction to K8S.
In my opinion, the prerequisites to get the most out of this project are:
Basic networking knowledge
K8S concepts -> https://kubernetes.io/docs/concepts/
Vagrant manipulation -> https://developer.hashicorp.com/vagrant/docs
A hypervisor that supports nested virtualization, if you want to follow my way of doing it; this prerequisite is not mandatory.
The IOT subject talks about server and worker nodes; here these terms are used interchangeably with master (for server) and agent (for worker), in line with traditional Kubernetes (K8S) vocabulary.
As a Vagrant box I use CentOS (a derivative of RHEL), a popular choice for servers and production environments: these distributions are thoroughly tested before each release and are valued for their stability, security and long-term support.
I started this project using 'centos/7' as the Vagrant box, but I just learned that its EOL was June 30, 2024, and consequently the mirror no longer exists. We therefore upgrade to the Vagrant box 'generic/centos8', which is compatible with our configuration and dependencies. For those who want to stay on centos/7, there is a helpful ticket: https://serverfault.com/questions/1161816/mirrorlist-centos-org-no-longer-resolve.
PART 1
The first part of the project requires us to set up a cluster with a master node (called the K3S server in this context) and an agent node. Here is the K3S architecture.
Master Node: The master nodes host the control plane components of Kubernetes, which are responsible for the overall management of the cluster.
Agent Node: The agent nodes execute the cluster workloads, that is to say the containers grouped in pods.
Both of them run a kubelet instance, a node-level agent that is in charge of executing pod requirements, managing resources, and guaranteeing node health. We can see in this diagram that it relies on containerd, a low-level container runtime used by both Kubernetes and Docker. Containerd is the software responsible for running and managing containers on a host system: it manages container processes, images, snapshots, container metadata and their dependencies.

In order to create the expected environment, we will instantiate VMs which will act as cluster nodes with a Vagrantfile.
I'll let you discover the configuration specifics in the Vagrant documentation. As you can see, I provide some scripts to the servers in control.vm.provision; I mainly use it to run the necessary configuration on the respective nodes. Let's do a breakdown to see things more clearly.
On the master node: s_provisioner.sh.
On the agent node: sw_provisioner.sh.
In these scripts we do a very classic configuration recommended by the K3S documentation. What is interesting here is how the agent node connects to the master. We pass a seed to the master node with --token 12345 so that it generates a token, which is then copied into a folder shared between the host and my VMs. This way, the agent node has access to the token and can join the cluster.
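As a rough illustration of that mechanism, here is a minimal sketch of what the two provisioning scripts could look like. The node IPs and the shared token path are assumptions; only the script names and the --token 12345 seed come from the project itself.

```shell
# s_provisioner.sh -- master node (sketch; IPs and paths are assumptions)
# Install K3S in server mode with a fixed token seed.
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server \
    --node-ip 192.168.56.110 \
    --token 12345 \
    --write-kubeconfig-mode 644" sh -

# Copy the generated node token into the host-shared folder
# so the agent VM can read it.
cp /var/lib/rancher/k3s/server/node-token /vagrant/token

# sw_provisioner.sh -- agent node
# Join the cluster using the server URL and the shared token.
curl -sfL https://get.k3s.io | K3S_URL="https://192.168.56.110:6443" \
    K3S_TOKEN="$(cat /vagrant/token)" sh -
```

The key point is that the token never travels over the network by hand: the shared folder acts as the exchange channel between the two VMs.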
By default, Vagrant shares your project directory (the directory containing the Vagrantfile) at /vagrant, but for learning purposes we used https://github.com/dotless-de/vagrant-vbguest for more modularity. In this first part we configured the most trivial cluster possible to grasp the K3S architecture.
PART 2
This second part asks us to set up 3 web services in the K3S instance. For this we only use the master node, since it has access to kubelet, and we will instantiate one pod per web service: two of them will have a single replica, and the third will have several. Replicas are crucial for ensuring high availability, load balancing and scalability.

I chose NGINX as a web server for its robustness. Here are the K8S resources that I used to offer a web service. It consists of the following manifests :
configMap (Stores configuration data as key-value pairs; it will be used to store the HTML page to be rendered. Be careful: a configMap is scoped to its namespace within the cluster)
deployment (Manages the desired state of pod replicas)
ingress (Manages external access to services)
service (Defines a logical set of pods and a policy to access them)
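To make the wiring between these four resources concrete, here is a hedged sketch of the manifests for one application, applied from the master node. The names, image and ports are assumptions; adapt them to your own apps.

```shell
# Sketch of the four manifests for one app (names/image are assumptions).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: app1-html
data:
  index.html: "<h1>Hello from app1</h1>"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
spec:
  replicas: 1
  selector:
    matchLabels: { app: app1 }
  template:
    metadata:
      labels: { app: app1 }
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports: [{ containerPort: 80 }]
          # Mount the configMap so NGINX serves our HTML page.
          volumeMounts:
            - { name: html, mountPath: /usr/share/nginx/html }
      volumes:
        - name: html
          configMap: { name: app1-html }
---
apiVersion: v1
kind: Service
metadata:
  name: app1-svc
spec:
  selector: { app: app1 }
  ports: [{ port: 80, targetPort: 80 }]
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps-ingress
spec:
  rules:
    # Route requests whose Host header is app1.com to app1's service.
    - host: app1.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service: { name: app1-svc, port: { number: 80 } }
EOF
```

The configMap feeds the pod, the deployment keeps the replicas alive, the service gives them a stable address, and the ingress routes by hostname.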
Don't forget to add these lines to your /etc/hosts so you can access your web services. The purpose of DNS is to translate a domain name into the appropriate IP address; try to access app1.com without this override to see the difference.
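For example, assuming the master node's IP and the hostnames below (both are assumptions, use your own values):

```shell
# Map the app hostnames to the master node's IP.
echo "192.168.56.110 app1.com app2.com app3.com" | sudo tee -a /etc/hosts

# The ingress routes on the Host header, so both of these reach app1:
curl http://app1.com
curl -H "Host: app1.com" http://192.168.56.110
```

The second curl shows that the /etc/hosts entry is only there for convenience: the ingress only cares about the Host header.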
Now we have 3 web services that can be accessed from outside the cluster, and traffic is properly directed to the right service through the ingress.
PART 3
This third part focuses on the GitOps practice, which we will put into action with ArgoCD.
In a traditional project, you have a base repository holding your project's source code. CI/CD software like Jenkins detects changes over time and triggers the appropriate pipelines, which mainly generate an artifact of your project, such as a Docker image, and deploy it to a private or public hub. Moreover, if we follow good practice (keeping the source code and the configuration files in separate repos), the pipeline also updates the configuration file that specifies the new image to pull from the hub.
This is exactly where the ArgoCD agent comes into play: it runs within our cluster and is configured to stay in sync with the configuration repo, applying its manifests to keep the deployed application up to date.

In this section, we will use k3d, a lightweight tool for running k3s (Rancher Labs’ minimal Kubernetes distribution) within Docker containers. Essentially, instead of relying on virtual machines (VMs) and hypervisors to create nodes, we will use Docker containers that will function as nodes within our Kubernetes clusters.
The idea here is not to re-simulate the generation of the artifact as explained in my traditional example, but to simulate the modification of the repo, which applies the concept of IaC; you can take a look at https://www.redhat.com/en/topics/automation/what-is-infrastructure-as-code-iac.
Therefore ArgoCD will detect the change on this repo to deploy the right infrastructure.
In order to configure the architecture of this diagram, we will dissect the essential commands together and understand what they do. To reproduce the entire project, you will need the complete code with dependencies, available at https://github.com/vmmon-th0/Inception_OT/blob/main/p3/scripts/init.sh.
Init Cluster & Argo CD
This will create a cluster and map port 8081 on the host to port 30080 on the cluster's load balancer (this port mapping will be used to access the deployed application "WIL APP" in the diagram).
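A sketch of that creation step; the cluster name is an assumption, only the port mapping comes from the text above.

```shell
# Create the cluster; -p maps host port 8081 to port 30080 on k3d's
# load balancer container (the @loadbalancer node filter selects it).
k3d cluster create iot-cluster -p "8081:30080@loadbalancer"

# Observe the nodes: with k3d, each one is a Docker container, not a VM.
kubectl get nodes -o wide
docker ps --filter "name=k3d-iot-cluster"
```

Comparing `kubectl get nodes` with `docker ps` makes the k3d idea tangible: the "nodes" of part 1 were VMs, here they are containers.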
Here is our newly created cluster; we can also observe our nodes.
Here we create the argocd namespace and install ArgoCD into this scope.
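This step follows the official ArgoCD getting-started procedure:

```shell
# Create a dedicated namespace and install ArgoCD into it.
kubectl create namespace argocd
kubectl apply -n argocd \
    -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Wait until the API server is available before going further.
kubectl wait -n argocd --for=condition=Available \
    deployment/argocd-server --timeout=300s
```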
We then apply the ArgoCD application.yaml; you can see that we have configured a source pointing to the repo containing the IaC. This is the repo that will be kept permanently in sync.
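Here is a hedged sketch of what such an Application manifest can look like; the repo URL, path and target namespace are placeholders to replace with your own IaC repository.

```shell
# Sketch of the ArgoCD Application (repo URL/path/namespace are placeholders).
cat <<'EOF' | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: wil-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<user>/<iac-repo>.git
    targetRevision: HEAD
    path: .
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
  syncPolicy:
    automated:
      prune: true      # delete resources removed from the repo
      selfHeal: true   # revert manual drift in the cluster
EOF
```

With `automated` sync, ArgoCD does not wait for a manual "Sync" click: any commit on the source repo is applied to the cluster.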
Now that everything is ready, we can test this simple system. Go to localhost:8080 and log in with:
login: admin
password: should be generated in /tmp/argo-credentials.txt
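If that credentials file is missing, the initial admin password can be read from the secret ArgoCD creates at install time, and the UI can be reached through a port-forward (the 8080 port matches the URL above):

```shell
# Read the auto-generated admin password.
kubectl -n argocd get secret argocd-initial-admin-secret \
    -o jsonpath="{.data.password}" | base64 -d; echo

# Expose the ArgoCD UI on localhost:8080.
kubectl -n argocd port-forward svc/argocd-server 8080:443
```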
Arriving on this beautiful interface, you can notice several components that represent the workflow. Check https://argo-cd.readthedocs.io/en/stable/ for more information.

Let's test what we saw above by modifying the repo acting as infrastructure as code. By switching the image hosted on Docker Hub from v2 to v1, you will notice that ArgoCD takes care of the appropriate deployment while keeping the old version in cache for optimization purposes.
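Concretely, the test amounts to a commit on the IaC repo; the repo URL, file name and image tags below are assumptions to adapt to your own setup.

```shell
# Simulate the GitOps change: bump the image tag in the IaC repo.
git clone https://github.com/<user>/<iac-repo>.git && cd <iac-repo>
sed -i 's/:v2/:v1/' deployment.yaml
git commit -am "switch wil app from v2 to v1"
git push

# ArgoCD detects the commit and rolls the Deployment to v1; watch it happen:
kubectl -n dev get pods -w
```

No kubectl apply is run by hand here: the cluster converges on its own, which is the whole point of GitOps.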

Conclusion
In short, we have gone over 3 concepts that help us become more familiar with the use cases Kubernetes offers, or in which it is involved; it is a very interesting technology.