Hello everyone, welcome to my latest article about Kubernetes, often referred to as “K8s”.
I understand that the world of container orchestration can be a bit intimidating, but don’t worry, I’m here to break it down for you in simple terms. In this article, I’ll be explaining what Kubernetes is and how it can help you manage your containers. We’ll also take a look at Rancher and how it can be used in conjunction with Kubernetes to make managing your containers even easier. So, whether you’re new to the world of container orchestration or just looking to learn more, grab a cup of coffee and let’s dive in!
Okay, so you may have heard of Docker before and know that it’s a way to run containers. (See my article about Docker.) Kubernetes is a step up from Docker: Docker runs containers on a single machine, while Kubernetes is all about managing and coordinating those containers across a whole fleet of machines.
Think of it like this: imagine you have a bunch of different applications running in their own containers, and you want to make sure they’re all running smoothly and communicating with each other properly. That’s where Kubernetes comes in – it’s a tool that helps you manage and organize all of those containers, so you can make sure everything is running the way it should be. It also helps with scaling your applications and self-healing in case of any failures. It’s kind of like a traffic controller for your containers, making sure they’re all running efficiently and effectively together.
Kubernetes has a pretty interesting history. It all started back in the early days of Google, when the company was looking for a way to manage its massive fleet of containers. They needed a system that could scale easily and handle all the different types of applications they were running. So, a group of engineers at Google began developing a container orchestration system that would eventually become Kubernetes. As the project grew and matured, Google decided to open source the technology, making it available for anyone to use in 2014. And that’s how Kubernetes came to be – born out of a need at Google, and now widely used by companies and organizations all around the world.
At the heart of Kubernetes are these core concepts: nodes, clusters, pods, services, and replication controllers.
Nodes & clusters
In Kubernetes, a node is a single machine in a cluster. Think of it like a single computer that is part of a larger network. A cluster is a group of nodes working together to run your containers.
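Once kubectl is set up (more on that later in this article), you can see the machines that make up your cluster. A quick sketch – the node names and output depend entirely on your cluster, and “node-1” is a placeholder:

```shell
# List every node (machine) in the cluster, with roles and Kubernetes version
kubectl get nodes -o wide

# Show detailed info about one node: capacity, conditions, running pods
# ("node-1" is a placeholder; use a name from the previous command)
kubectl describe node node-1
```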
Pods
Pods are the smallest deployable units in Kubernetes and the basic building block for running applications. A pod is a group of one or more containers that are deployed together on the same host. Pods run the containers that make up your application.
Pods are created and managed by the Kubernetes API and can be defined using YAML or JSON configuration files.
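For illustration, a minimal standalone Pod manifest might look like this (the name and image are placeholders of my choosing; in practice you’ll more often create pods indirectly through a Deployment, as in the next example):

```yaml
# A bare Pod: one container, scheduled onto some node in the cluster
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod          # placeholder name
  labels:
    app: my-app
spec:
  containers:
  - name: my-app
    image: my-app-docker-image:latest   # placeholder image
    ports:
    - containerPort: 80
```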
Here’s an example of a simple app deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app-docker-image:latest
        ports:
        - containerPort: 80
        resources:
          limits:
            memory: "256Mi"
            cpu: "500m"
The Deployment resource in this YAML file creates 3 replicas of the “my-app” pod.
Services
Services are a way to expose a set of pods (in other words, an application) to the network. A Service gives you a stable IP address and DNS name for those pods, even if the pods themselves are created and destroyed dynamically. This means you can access your application without worrying about individual pod IP addresses changing. Services also load balance traffic between multiple pods, and can be used to expose your application to the internet or to other parts of your infrastructure.
There are several types of Services, each with its own characteristics. For example, a ClusterIP service exposes the pods only within the cluster, a NodePort service exposes them on a specific port on each node, and a LoadBalancer service provisions a load balancer at your cloud provider.
Here is an example of a Kubernetes Service in YAML:
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
  - name: http
    port: 80
    targetPort: 80
  type: NodePort
The service type is NodePort, which creates a static endpoint for the service on each node in the cluster, so you can access the service using any node’s IP address and the assigned port (by default Kubernetes picks one from the 30000–32767 range).
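Assuming the service above has been applied, you could reach it like this – just a sketch, since the node IP and the auto-assigned port will differ on your cluster (192.168.1.10 and 30080 are placeholder values):

```shell
# Show the service; the PORT(S) column reads something like 80:3XXXX/TCP,
# where the 3XXXX part is the auto-assigned NodePort
kubectl get service my-app-service

# Then hit the app through any node's IP address
curl http://192.168.1.10:30080/
```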
Scaling & self-healing
Scaling and self-healing are two important features that Kubernetes provides to help you manage your containers. Scaling allows you to increase or decrease the number of replicas of your application depending on how much traffic it’s receiving. If you have a sudden spike in traffic, you can easily scale up your application to handle it, and then scale back down once the traffic subsides.
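For example, assuming the my-app-deployment from earlier, scaling is a one-liner (the deployment name must of course match yours):

```shell
# Scale out to 5 replicas to absorb a traffic spike
kubectl scale deployment my-app-deployment --replicas=5

# ...and back down to 3 once things calm down
kubectl scale deployment my-app-deployment --replicas=3
```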
Self-healing is another important feature that ensures that your application is always running as expected. If for any reason a pod (a running instance of your application) goes down, Kubernetes will automatically create a new one to replace it. This means that you don’t have to constantly monitor your application to make sure it’s running, Kubernetes will do it for you.
Both of these features are controlled through a Replication Controller, a Kubernetes resource that ensures a specified number of replicas of a pod are running at any given time. (In modern clusters this job is usually handled by a ReplicaSet, which the Deployment from earlier creates and manages for you.)
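You can watch self-healing in action by deleting a pod and seeing a replacement appear. Pod names are generated, so “my-app-deployment-abc123” below is a placeholder:

```shell
# List the pods belonging to the deployment
kubectl get pods -l app=my-app

# Delete one of them
kubectl delete pod my-app-deployment-abc123

# List again: Kubernetes has already started a new pod to restore 3 replicas
kubectl get pods -l app=my-app
```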
The remote control
Kubernetes has a command-line tool that allows you to interact with your Kubernetes cluster: kubectl. It allows you to control and manage your containers, pods, and services. With kubectl, you can create, update, and delete resources, check the status of your cluster, and even troubleshoot issues.
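To give you a feel for it, here are a few everyday kubectl commands (the resource and file names are placeholders):

```shell
kubectl get pods                     # list pods in the current namespace
kubectl describe pod my-app-pod     # detailed status and events for one pod
kubectl logs my-app-pod             # container logs
kubectl apply -f deployment.yaml    # create or update resources from a file
kubectl delete -f deployment.yaml   # remove them again
```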
Installing and configuring kubectl is quite simple. First, download the appropriate version for your operating system from the Kubernetes website and run the install command (the example below is for Linux on amd64):
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
You will also need to configure kubectl to communicate with your cluster by providing it with the appropriate credentials and endpoint. This is done with a kubeconfig file, which typically includes the cluster’s API endpoint, credentials for an account that has access to the cluster (usually a client certificate or token rather than a plain username and password), and the cluster’s certificate authority data.
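A stripped-down kubeconfig looks roughly like this – every value here is a placeholder; your real file will contain your own cluster’s endpoint, CA data, and credentials:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://kubernetes.example.com:6443   # the API endpoint
    certificate-authority-data: <base64-encoded-CA-cert>
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
current-context: my-context
users:
- name: my-user
  user:
    token: <your-access-token>   # or a client certificate/key pair
```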
If you are running Kubernetes under Rancher, you can download the kubeconfig file directly from the web interface.
Ok, so what is Rancher?
Rancher is a management tool that sits on top of Kubernetes and provides additional functionality. It provides a simple web-based interface for creating and managing clusters, deploying applications, and monitoring the health of your containers. Think of it as a user-friendly front end for Kubernetes, one that makes it much easier to get started.
Installing Rancher can be done with a single command:
sudo docker run --privileged -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
And voilà, the Rancher UI is accessible on port 443. However, I would suggest changing the port mapping if you have a reverse proxy in front of it, and make sure WebSocket support is enabled.
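For instance, to put Rancher behind a reverse proxy you might remap the ports like this (8080 and 8443 are arbitrary choices on my part; pick whatever fits your setup):

```shell
# Bind Rancher to high ports so the reverse proxy can own 80/443
sudo docker run --privileged -d --restart=unless-stopped \
  -p 8080:80 -p 8443:443 rancher/rancher
```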
Thank you for taking the time to read this article. I hope you found it informative and helpful in understanding the basics of Kubernetes and Rancher. If you have any further questions or want to learn more, feel free to reach out to me. I’d be happy to help!