
Kubernetes

What is kubernetes?

need for kubernetes? what problems does it solve? features?

what are the tasks of an orchestration tool?

k8s components

💡
you can use data from a ConfigMap or Secret inside your app's pod, either as environment variables or as a properties file (mounted as a volume)
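a minimal sketch of both options (the ConfigMap name app-config, the key database_url, and the mount path are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: app
    image: nginx
    env:
    # option 1: a single ConfigMap key as an environment variable
    - name: DATABASE_URL
      valueFrom:
        configMapKeyRef:
          name: app-config        # hypothetical ConfigMap
          key: database_url
    volumeMounts:
    # option 2: the whole ConfigMap mounted as files (e.g. a properties file)
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: app-config
```

secretKeyRef and secret volumes work the same way for Secrets.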

volumes

deployment

StatefulSet

💡
Deployment for stateLESS apps and StatefulSet for stateFUL apps or databases

Kubernetes architecture

Q. So, how do you interact with this cluster

how to:

  • schedule pod?
  • monitor?
  • re-schedule/restart pod?
  • join a new node?

Answer: all the managing processes are done by the master nodes

master node / master processes

4 processes run on every master node! - api server, scheduler, controller manager, etcd

example cluster setup

a basic cluster with 2 masters and 3 workers

as complexity grows, we can add new masters and workers

Minikube and kubectl local setup

set up a minikube cluster

but how to test locally?

use minikube: a one-node cluster where the master and worker processes run on ONE machine (Docker pre-installed)

what is kubectl?

  • helps with interacting with the cluster
  • command line tool for k8s cluster

kubectl is not just for minikube; it is used with any type of k8s cluster setup, e.g. cloud clusters

installing minikube [kubectl is a dependency]

follow the official docs or any blog on the internet. it’s easy

some commands

→ create minikube cluster

minikube start

minikube start --vm-driver=hyperkit (newer minikube versions use --driver=hyperkit instead)

kubectl get nodes

minikube status

kubectl version

for some commands you might need to run minikube kubectl -- instead of plain kubectl, or simply set an alias
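a sketch of the alias option (assumes minikube is installed; minikube kubectl proxies a matching kubectl version):

```shell
# make plain `kubectl` go through the kubectl bundled with minikube
alias kubectl="minikube kubectl --"
```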

basic kubectl commands

kubectl get nodes

kubectl get pod

kubectl get pod --all-namespaces

kubectl get services

kubectl get all

kubectl get configmap

kubectl get secrets

kubectl get secrets --all-namespaces

kubectl create deployment NAME --image=image

eg

kubectl create deployment nginx-deploy --image=nginx

kubectl delete deployment nginx-deploy

kubectl get deployment

kubectl get deployment --all-namespaces

kubectl get replicaset

kubectl get events

kubectl cluster-info

for getting running pods

kubectl get pods --field-selector=status.phase=Running

create a new namespace with unique name

kubectl create ns hello-there

layers of abstraction:

everything below deployment is handled by kubernetes

deployment manages a … → replicaset

replicaset manages a … → pod

pod is an abstraction of … → container
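the generated resource names reflect this chain; a sketch (the hash suffixes are illustrative):

```
deployment:  nginx-deploy
replicaset:  nginx-deploy-99976564d         <- deployment name + pod-template hash
pod:         nginx-deploy-99976564d-dc6hv   <- replicaset name + random suffix
container:   nginx (inside the pod)
```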

kubectl edit deployment [NAME]

eg.

kubectl edit deployment nginx-deploy

sample deployment configuration

A sample auto-generated configuration file (with default values) of a deployment created with the kubectl create deployment nginx-deploy --image=nginx command.

Go through all the properties/options/whatever to understand.

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2022-08-16T08:22:45Z"
  generation: 1
  labels:
    app: nginx-deploy
  name: nginx-deploy
  namespace: default
  resourceVersion: "12513"
  uid: de257b93-7c4d-4ee2-a099-cf873c89be7c
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx-deploy
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-deploy
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2022-08-16T08:23:07Z"
    lastUpdateTime: "2022-08-16T08:23:07Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2022-08-16T08:22:45Z"
    lastUpdateTime: "2022-08-16T08:23:07Z"
    message: ReplicaSet "nginx-deploy-99976564d" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

debugging pods

  • logs

    kubectl logs [podName]

    e.g

    kubectl logs nginx-deploy-6b8ccdcfd4-dc6hv

    this might not give any result in our case as nginx in our test didn’t log anything

    let’s create a mongo deployment to see logs

    kubectl create deployment mongo-depl --image=mongo

    and now we can hopefully see some logs

    kubectl logs mongo-depl-85dcbc595b-2fbls

  • describe pods

    additional information about a pod

    kubectl describe pod [podName]

    kubectl describe pod mongo-depl-85dcbc595b-2fbls

  • describe node

    additional information about a node

    kubectl describe node [nodeName]

    eg

    kubectl describe node minikube

  • get pods on a specific node

    kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=[nodeName]

    eg

    kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=minikube

  • enter inside a container

    kubectl exec -it [podName] -- [command]

    e.g

    kubectl exec -it nginx-deploy-6b8ccdcfd4-dc6hv -- /bin/bash

  • delete a deployment

    kubectl delete deployment [deploymentName]

    e.g

    kubectl delete deployment nginx-deploy

    note: deleting the ReplicaSet directly (kubectl delete replicaset nginx-deploy-99976564d) won’t remove the app; the Deployment immediately recreates it

    kubectl apply

    • apply manages applications through files defining Kubernetes resources. It creates and updates resources in a cluster by running kubectl apply. This is the recommended way of managing Kubernetes applications in production.

    kubectl apply -f [fileName]

    e.g kubectl apply -f config-file.yaml

    we can also delete using yaml like this[pods , replicasets will also be deleted]

    kubectl delete -f config-file.yaml

    Let’s take this as an example of a manifest for an Nginx deployment into the k8s cluster:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-nginx
      labels:
        app: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.16
            ports:
            - containerPort: 80

    As we can see it has 4 main top-level fields: apiVersion, kind, metadata & spec. But how do we find these fields, and where do we get the values corresponding to each? This is where the kubectl api-versions, kubectl api-resources & kubectl explain commands come into play (e.g. kubectl explain --api-version=apps/v1 Deployment)

    k8s YAML configuration file

    YAML - strict indentation syntax!

    • the three parts of the configuration file
    • connecting deployments to services to pods
    • each configuration file has 3 parts
      1. metadata
      2. specification - attributes of the “spec” are specific to the kind
      3. status - inserted by kubernetes itself. kubernetes compares the desired state and the actual state; if the desired state is not equal to the actual state, kubernetes knows something is not correct and tries to fix it. this is the basis of the self-healing feature that kubernetes provides.

        where does k8s get this status data? etcd of course!

    • store the config file with your code
    • template has its own “metadata” and “spec” section - a configuration within a configuration. this template configuration applies to a pod.

connecting components(labels & selectors & ports)

metadata part contains labels and the spec part contains selectors

  • e.g. connecting deployment to pods
    • any key-value pair can be set on a component as a label
    • and it is matched by a selector
    • pods get the label through the template blueprint

  • e.g. connecting services to deployments
    • in the spec of the service we define a selector, which makes the connection between the service and the deployment (or its pods): a service must know which pods are registered with it, i.e. which pods belong to that service. this connection is made by matching the selector against the label.

    ports

    a service has a port at which the service itself is accessible: if another service sends a request to the nginx service here, it sends it on port 80. the service also needs to know to which pod it should forward the request and on which port that pod is listening; that is the target port.

    nginx-deployment.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
        name: nginx-deployment
        labels:
            app: nginx
    spec:
        replicas: 2
        selector:
            matchLabels: 
                app: nginx
        template:
            metadata:
                labels:
                    app: nginx
            spec:
                containers:
                - name: nginx
                  image: nginx:1.16
                  ports:
                  - containerPort: 8080

    nginx-service.yaml

    apiVersion: v1
    kind: Service
    metadata: 
        name: nginx-service
    spec:
        selector:
            app: nginx
        ports:
            - protocol: TCP
              port: 80
              targetPort: 8080

    kubectl apply -f nginx-deployment.yaml

    kubectl apply -f nginx-service.yaml

    kubectl get pod

    check port of service

    kubectl get service

    check endpoints with this command

    kubectl describe service nginx-service

    verify endpoint / IP

    kubectl get pod -o wide

    get deployment in a YAML format (resides in etcd!)

    kubectl get deployment [deploymentName] -o yaml

    e.g

    kubectl get deployment nginx-deployment -o yaml

    similarly,

    get service in a YAML format( resides in etcd!)

    kubectl get service nginx-service -o yaml

Complete application setup with kubernetes components - mongo and mongo-express with secret and configmap

Timestamp - start: https://youtu.be/X48VuDVv0do?t=4593

Timestamp - end: https://youtu.be/X48VuDVv0do?t=6374

git repo for code : https://gitlab.com/nanuchi/youtube-tutorial-series/-/tree/master/demo-kubernetes-components

basic flow of our setup:

💡
to run these deployments, services, configmap and secret, see below after the code

mongo.yaml - contains deployment and service for mongo db

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom: 
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017

mongo-secret.yaml

the username and password values are base64-encoded

apiVersion: v1
kind: Secret
metadata:
    name: mongodb-secret
type: Opaque
data:
    mongo-root-username: dXNlcm5hbWU=
    mongo-root-password: cGFzc3dvcmQ=
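the encoded values above can be produced (and verified) with base64; a quick sketch:

```shell
# encode the plaintext credentials for the Secret's data fields
echo -n 'username' | base64    # dXNlcm5hbWU=
echo -n 'password' | base64    # cGFzc3dvcmQ=

# decode to double-check (prints: username)
echo 'dXNlcm5hbWU=' | base64 --decode
```

note the -n flag: without it, echo appends a newline and the encoded value will not match.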

mongo-express.yaml

deployment and service for mongo-express

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
  labels:
    app: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
      - name: mongo-express
        image: mongo-express
        ports:
        - containerPort: 8081
        env:
        - name: ME_CONFIG_MONGODB_ADMINUSERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: ME_CONFIG_MONGODB_ADMINPASSWORD
          valueFrom: 
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
        - name: ME_CONFIG_MONGODB_SERVER
          valueFrom: 
            configMapKeyRef:
              name: mongodb-configmap
              key: database_url
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer  
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
      nodePort: 30000
💡
ports available to nodePort are in the 30,000 to 32,767 range

mongo-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-configmap
data:
  database_url: mongodb-service

working:

kubectl apply commands in order

order is important, as the secret and configmap must be applied/present before they can be referenced in deployments/services etc.
kubectl apply -f mongo-secret.yaml
kubectl apply -f mongo.yaml
kubectl apply -f mongo-configmap.yaml
kubectl apply -f mongo-express.yaml

kubectl get commands

kubectl get pod
kubectl get pod --watch
kubectl get pod -o wide
kubectl get service
kubectl get secret
kubectl get all | grep mongodb

kubectl debugging commands

kubectl describe pod mongodb-deployment-xxxxxx
kubectl describe service mongodb-service
kubectl logs mongo-express-xxxxxx

because we are working with minikube, the external IP of the service will be shown as <pending>. we can give a URL to the external service in minikube using this command

give a URL to external service in minikube

minikube service mongo-express-service

we can also run kubectl describe pod [mongo-express-pod-name], e.g. kubectl describe pod mongo-express-98c6ff4b4-9v88d in my case, and check the node there

result/output to check:

Node:         minikube/192.168.49.2

if you now go to 192.168.49.2:[nodeport] → nodeport is 30000 in our case

then we will see our mongo-express web app

Organizing your components with K8s Namespaces

In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces. Namespace-based scoping is applicable only for namespaced objects (e.g. Deployments, Services, etc) and not for cluster-wide objects (e.g. StorageClass, Nodes, PersistentVolumes, etc).

Note: Avoid creating namespaces with the prefix kube-, since it is reserved for Kubernetes system namespaces.

You can list the current namespaces in a cluster using:

kubectl get namespace

output:

NAME              STATUS   AGE
default           Active   1d
kube-node-lease   Active   1d
kube-public       Active   1d
kube-system       Active   1d

Kubernetes starts with four initial namespaces: default, kube-node-lease, kube-public and kube-system (listed above).

the kubernetes-dashboard namespace is shipped only with minikube, not with a standard cluster.

Setting the namespace for a request

To set the namespace for a current request, use the --namespace flag.

For example:

kubectl run nginx --image=nginx --namespace=<insert-namespace-name-here>

kubectl get pods --namespace=<insert-namespace-name-here>

create namespace with cli

kubectl create namespace my-namespace

kubectl get namespace

create components in a namespace with a configuration file

e.g

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-configmap
  namespace: my-namespace
data:
  db_url: mysql-service.database

kubectl apply -f mysql-configmap.yaml

kubectl get configmap checks only the default namespace; to see ours, we need to check in my-namespace

kubectl get configmap -n my-namespace

use cases of namespaces:

characteristics of namespaces

to access a database in another namespace, we append the namespace name after the service name (here it is database, after the service name)

so in db_url: mysql-service.database

mysql-service is the service name and database is the namespace in which the db is present
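with the default cluster DNS settings (cluster domain cluster.local), that short name expands to the fully-qualified service DNS name:

```
mysql-service.database.svc.cluster.local
```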

we can change the active namespace with kubens, which needs to be installed separately

K8s ingress explained

https://youtu.be/X48VuDVv0do?t=7312

external service vs ingress?

with an external service, the my-app service is external and a port is opened to the outside.

with ingress, the my-app service stays internal and no port needs to be opened.

when a user visits my-app.com, the request hits the my-app ingress, which forwards it to the internal my-app service, which forwards it to the my-app pod.

example YAML file: External service

apiVersion: v1
kind: Service
metadata:
  name: myapp-external-service
spec:
  selector:
    app: myapp
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30010

someone will have to visit http://ip:30010 to browse it. that’s not very user-friendly, but it’s okay for testing purposes (remember nodePort must be in the 30000-32767 range)

example YAML file: ingress

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: myapp.com
    http:
      paths:
      - backend:
          serviceName: myapp-internal-service
          servicePort: 8080

(networking.k8s.io/v1beta1 with serviceName/servicePort is the older schema; current clusters use networking.k8s.io/v1, as in the dashboard-ingress example later)

internal service to which ingress forwards request

example internal service:

apiVersion: v1
kind: Service
metadata:
  name: myapp-internal-service
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080

note the difference between the external service and ingress: there is no nodePort in the internal service, and instead of LoadBalancer the type is the default, i.e. ClusterIP

if you configure a server outside of the kubernetes cluster that acts as the entrypoint to your cluster, then you should map the hostname to the IP address of that server

How to configure ingress in your K8s cluster?

you need an implementation for ingress, which is an ingress controller. so step one is to install an ingress controller: basically another pod (or set of pods) that runs on a node in your kubernetes cluster, evaluates and processes ingress rules, manages redirections, and acts as the entrypoint to the cluster.

the ingress controller is what you install; there are many third-party implementations, like the k8s nginx ingress controller etc.

the my-app ingress (the YAML rules) is the part you write yourself

configure ingress controller in minikube

the actual ingress controller setup on eks etc will be different

step 1. install ingress controller in minikube

minikube addons enable ingress

this automatically starts the k8s nginx implementation of ingress controller

verify with

kubectl get pod -n kube-system

if you can’t see it there, newer versions namespace the ingress controller into ingress-nginx or similar

check namespaces with

kubectl get namespaces
kubectl get pod -n ingress-nginx

step 2. now create ingress rule

for ease we will use the kubernetes-dashboard namespace, as it exists out of the box in minikube but is not accessible externally! it already has an internal service and pod.

kubectl get all -n kubernetes-dashboard

note the service - kubernetes-dashboard and pod kubernetes-dashboard-5fd5574d9f-jw525

now let’s create ingress rule for dashboard

dashboard-ingress.yaml

namespace will be same as service and pod!

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: dashboard.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service: 
            name: kubernetes-dashboard
            port: 
              number: 80

kubectl apply -f dashboard-ingress.yaml

kubectl get ingress -n kubernetes-dashboard

copy the ip address of dashboard-ingress and add it to /etc/hosts

sudo vim /etc/hosts

192.168.49.2 dashboard.com

now you can visit dashboard.com from your browser

ingress default backend

kubectl describe ingress dashboard-ingress -n kubernetes-dashboard


note Default backend: <default> in the output; requests that match no ingress rule are sent to this default backend, so you can define a custom default backend service to e.g. show a friendly error page

multiple path for same host with ingress

example: google.com serves many paths under one domain

when a user visits myapp.com/analytics, the request is forwarded to the analytics service and then to the analytics pod

when a user visits myapp.com/shopping, the request is forwarded to the shopping service and then to the shopping pod

multiple sub-domains or domains with ingress

configuring TLS certificate - https://

💡
the values of tls.crt and tls.key need to be the actual file contents, not file paths/locations
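a sketch of the two pieces involved, assuming a Secret named myapp-secret-tls and host myapp.com (names are placeholders; the data values stand for base64-encoded certificate/key file contents):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secret-tls
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: base64-encoded-certificate-contents
  tls.key: base64-encoded-key-contents
```

which is then referenced from the ingress spec:

```yaml
spec:
  tls:
  - hosts:
    - myapp.com
    secretName: myapp-secret-tls
```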

Helm package manager

k8s statefulSet

better to watch video as it is complex topic : https://youtu.be/X48VuDVv0do?t=10724

what is statefulSet?

why statefulSet is used?

How statefulSet works and how it’s different from Deployment?

kubernetes services

different service types in kubernetes:

when we create a service, kubernetes creates an endpoints object with the same name as the service; it keeps track of which pods are members/endpoints of the service. the endpoints get updated whenever a pod dies or is recreated.

kubectl get endpoints

new worker added

Port forwarding

Use Port Forwarding to Access Applications in a Cluster

💡
kubectl port-forward forwards connections from a local port to a port on a pod. Compared to kubectl proxy, kubectl port-forward is more generic as it can forward TCP traffic while kubectl proxy can only forward HTTP traffic.

Creating MongoDB deployment and service

  1. Create a Deployment that runs MongoDB:

    kubectl apply -f https://k8s.io/examples/application/mongodb/mongo-deployment.yaml

    The output of a successful command verifies that the deployment was created:

    deployment.apps/mongo created
    

    View the pod status to check that it is ready:

    kubectl get pods

    The output displays the pod created:

    NAME                     READY   STATUS    RESTARTS   AGE
    mongo-75f59d57f4-4nd6q   1/1     Running   0          2m4s
    

    View the Deployment's status:

    kubectl get deployment

    The output displays that the Deployment was created:

    NAME    READY   UP-TO-DATE   AVAILABLE   AGE
    mongo   1/1     1            1           2m21s
    

    The Deployment automatically manages a ReplicaSet. View the ReplicaSet status using:

    kubectl get replicaset

    The output displays that the ReplicaSet was created:

    NAME               DESIRED   CURRENT   READY   AGE
    mongo-75f59d57f4   1         1         1       3m12s
    
  2. Create a Service to expose MongoDB on the network:

    kubectl apply -f https://k8s.io/examples/application/mongodb/mongo-service.yaml

    The output of a successful command verifies that the Service was created:

    service/mongo created
    

    Check the Service created:

    kubectl get service mongo

    The output displays the service created:

    NAME    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
    mongo   ClusterIP   10.96.41.183   <none>        27017/TCP   11s
    
  3. Verify that the MongoDB server is running in the Pod, and listening on port 27017:

    # Change mongo-75f59d57f4-4nd6q to the name of the Pod
    kubectl get pod mongo-75f59d57f4-4nd6q --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'

    The output displays the port for MongoDB in that Pod:

    27017
    

    27017 is the TCP port allocated to MongoDB on the internet.

Forward a local port to a port on the Pod

  1. kubectl port-forward allows using resource name, such as a pod name, to select a matching pod to port forward to.

    # Change mongo-75f59d57f4-4nd6q to the name of the Pod
    kubectl port-forward mongo-75f59d57f4-4nd6q 28015:27017

    which is the same as

    kubectl port-forward pods/mongo-75f59d57f4-4nd6q 28015:27017

    or

    kubectl port-forward deployment/mongo 28015:27017

    or

    kubectl port-forward replicaset/mongo-75f59d57f4 28015:27017

    or

    kubectl port-forward service/mongo 28015:27017

    Any of the above commands works. The output is similar to this:

    Forwarding from 127.0.0.1:28015 -> 27017
    Forwarding from [::1]:28015 -> 27017
    

    Note: kubectl port-forward does not return. To continue with the exercises, you will need to open another terminal.

  2. Start the MongoDB command line interface:

    mongosh --port 28015

  3. At the MongoDB command line prompt, enter the ping command:
    db.runCommand( { ping: 1 } )
    

    A successful ping request returns:

    { ok: 1 }
    

Optionally let kubectl choose the local port

If you don't need a specific local port, you can let kubectl choose and allocate the local port and thus relieve you from having to manage local port conflicts, with the slightly simpler syntax:

kubectl port-forward deployment/mongo :27017

The kubectl tool finds a local port number that is not in use (avoiding low port numbers, because these might be used by other applications). The output is similar to:

Forwarding from 127.0.0.1:63753 -> 27017
Forwarding from [::1]:63753 -> 27017