Kubernetes knowledge

 


Master + Worker Nodes (Node 1, Node 2, …)


Master:

1. Kube-apiserver

2. Etcd - key value store

3. Scheduler - distribute work

4. Controller Manager

a. Node controller - monitors node health and evicts pods from failed nodes

b. Replication controller - ensures the desired number of pods is always running


Worker Nodes

1. Kubelet - node agent; registers the node and reports container/pod status

2. Kube-proxy - maintains network rules so pods and services can communicate across nodes


Etcd 

Key-value store


Default port: 2379


Download: https://github.com/etcd-io/etcd/releases/tag/v3.3.13

Extract #tar xzvf <file>

Start #./etcd

Set #./etcdctl set key1 value1

Retrieve #./etcdctl get key1
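The set/get commands above use the etcd v2 API, which etcdctl defaults to in v3.3. With the v3 API, the equivalents are:

#ETCDCTL_API=3 ./etcdctl put key1 value1

#ETCDCTL_API=3 ./etcdctl get key1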


Etcd in K8s

Stores the entire state of the K8S cluster

(Nodes / Pods / Configs / Secrets / Accounts / Roles / Bindings / Others)


Check #kubectl get pods -n kube-system
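To browse what K8S actually stores, one option is to run etcdctl inside the etcd pod. A sketch, assuming a kubeadm-style cluster where the pod is named etcd-master (TLS-secured setups may additionally need the --cacert/--cert/--key flags):

#kubectl exec etcd-master -n kube-system -- sh -c "ETCDCTL_API=3 etcdctl get / --prefix --keys-only"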


Kube - apiserver

Only the kube-apiserver talks to etcd directly; all other components read and write through the apiserver.


Create pod process

1. Authenticate user

2. Validate request

3. Retrieve data

4. Update ETCD

5. Scheduler assigns the pod to a node

6. Kubelet on that node creates the pod
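A quick way to watch this flow end to end (nginx is just an example name/image):

#kubectl run nginx --image=nginx

#kubectl get events --watch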


Check #kubectl get pods --namespace kube-system


Controller Manager

Watches the status of cluster objects

Remediates the situation to reach the desired state


Node controller --> kube-apiserver --> kubelet (monitors nodes)


Node Monitor Period = 5s

Node Monitor Grace Period = 40s (wait 40s to mark node as unreachable)

POD Eviction Timeout = 5m (wait 5m to remove the pods in the node)
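These intervals map to kube-controller-manager flags and can be tuned; a sketch with the defaults spelled out:

kube-controller-manager --node-monitor-period=5s \
    --node-monitor-grace-period=40s \
    --pod-eviction-timeout=5m0s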


Replication Controller / Replica Set --> kube-apiserver --> kubelet (monitors pods/replicasets)

Uses labels/selectors to monitor the ReplicaSet's pods


Scheduler 

Decides which pod should be deployed to which node

1. Filter nodes (drop nodes that do not meet hardware/software requirements)

2. Rank nodes (score the remaining nodes and pick the best)


Assign a pod directly to a specific node

In pod.yml, add the following under spec:

nodeName: <specific node name>

https://github.com/tomshenhao/kubernetes-learning/blob/master/pod_define_node.yml
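A minimal sketch of such a file (node01 is a hypothetical node name):

apiVersion: v1
kind: Pod
metadata:
    name: nginx
spec:
    nodeName: node01
    containers:
    - name: nginx
      image: nginx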


Default resource request assumed for a pod:

CPU: 0.5

Memory: 256Mi


Manually define the initial request (resources sits under each container entry, not directly under spec; app is a placeholder name):

spec:
    containers:
    - name: app
      resources:
          requests:
              memory: "1Gi"
              cpu: 1


CPU can be set as low as 0.1 (equivalent to 100m, i.e. 100 millicores)

Memory accepts suffixes such as M/G (decimal) and Mi/Gi (binary)



K8S resource limits by default

CPU: 1

Memory: 512Mi


Manually define the maximum (limits, also per container):

spec:
    containers:
    - name: app
      resources:
          limits:
              memory: "2Gi"
              cpu: 2


If a pod uses more memory than its limit, it is terminated by K8S (OOMKilled); CPU usage over the limit is throttled rather than killed.
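Putting requests and limits together in one runnable sketch (names, image, and values are illustrative):

apiVersion: v1
kind: Pod
metadata:
    name: app
spec:
    containers:
    - name: app
      image: nginx
      resources:
          requests:
              memory: "1Gi"
              cpu: 1
          limits:
              memory: "2Gi"
              cpu: 2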


Monitor

Metrics Server - in-memory solution (metrics are not persisted, so no historical data)


Setup

#git clone https://github.com/kodekloudhub/kubernetes-metrics-server.git

#cd kubernetes-metrics-server

#kubectl create -f .


Check

#kubectl top node

#kubectl top pod


Logs

Check events #kubectl get events

Check logs #kubectl logs -f <pod name> <container name> (the container name is only needed for multi-container pods)


Label and selector

Define labels in the YAML file

Search #kubectl get pods --selector <label>=<value>

Search with multiple conditions #kubectl get pods --selector <label>=<value>,<label>=<value>


For example

Yaml file

labels:

     app: App1


Selector

#kubectl get pods --selector app=App1


Daemon sets 

Similar to a ReplicaSet, but one pod is created on each node automatically and removed when the node is removed.

Use cases: monitoring agents / log collectors / kube-proxy


Check #kubectl get daemonsets

Create #kubectl create -f <yaml file>


https://github.com/tomshenhao/kubernetes-learning/blob/master/daemonset.yml
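A minimal sketch along those lines (the monitoring-agent name and image are hypothetical):

apiVersion: apps/v1
kind: DaemonSet
metadata:
    name: monitoring-agent
spec:
    selector:
        matchLabels:
            app: monitoring-agent
    template:
        metadata:
            labels:
                app: monitoring-agent
        spec:
            containers:
            - name: monitoring-agent
              image: monitoring-agent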


Kubelet

Registers the node with the cluster

Creates pods

Monitors nodes/pods


Kubeadm does not deploy the kubelet; it must be installed manually on each node.


Kube-proxy

Runs on each node

Watches Services and creates forwarding rules (e.g. iptables) on each node so service traffic reaches the backend pods.


Kubectl

1. Use --dry-run to test a command without creating the object (newer versions: --dry-run=client)

2. -o yaml: output the resource definition in YAML
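Combining the two is a handy way to generate a manifest (pod.yaml is just an example filename):

#kubectl run nginx --image=nginx --dry-run -o yaml > pod.yaml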

Yaml File 


Get the YAML file from an existing object

kubectl get <item type> <item name> -o yaml --export

Example: kubectl get deploy webapp -o yaml --export

(--export is deprecated and removed in recent kubectl versions; omit it and strip the status/runtime fields by hand instead)


Yaml Lint to verify yaml

www.yamllint.com


Every K8S definition file must contain

1. apiVersion: string

2. kind: string

3. metadata: dictionary

4. spec: dictionary


Dictionary - unordered

List  - ordered
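A minimal file showing all four required fields (metadata and spec are dictionaries, containers is a list):

apiVersion: v1
kind: Pod
metadata:
    name: myapp-pod
    labels:
        app: myapp
spec:
    containers:
    - name: nginx
      image: nginx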


Use the kubectl run command to generate an example YAML file

#kubectl run nginx --image=nginx -o yaml

#kubectl run nginx --image=nginx --replicas=3 -o yaml

(--replicas only works with older kubectl where run created a Deployment; newer versions create a plain pod, so use kubectl create deployment instead)


Create yaml file from existing pod

#kubectl get pod <pod name> -o yaml > <yaml file>


Replication Controller  (Replica Set)

1. high availability

2. Load balancing & Scaling


https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/


Replication Controller: the older API

Replica Set: the newer replacement (requires an explicit selector)


Create #kubectl create -f <yaml file>

Check #kubectl get replicaset

Check #kubectl describe replicaset

Scale #kubectl scale --replicas=6 -f <yaml file>

Delete #kubectl delete replicaset <replicaset name>

Edit replicaset  directly #kubectl edit replicaset <replicaset name>


If a ReplicaSet is broken and there is no YAML file:

1. Edit the ReplicaSet directly

2. Manually delete the pods so they are recreated from the updated template


When you delete a pod, the ReplicaSet automatically creates a replacement.

If you manually create an extra pod matching the selector, the ReplicaSet terminates it to hold the desired count.


Example yaml file

https://github.com/tomshenhao/kubernetes-learning/blob/master/replica.yml
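A minimal sketch along those lines:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
    name: myapp-replicaset
    labels:
        app: myapp
spec:
    replicas: 3
    selector:
        matchLabels:
            app: myapp
    template:
        metadata:
            labels:
                app: myapp
        spec:
            containers:
            - name: nginx
              image: nginx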


Commands & Arguments


Dockerfile: ubuntu-sleeper

FROM ubuntu

ENTRYPOINT ["sleep"]

CMD ["5"]


Pod yaml file

spec:

    containers:

    - name: ubuntu-sleeper

      image: ubuntu-sleeper

      command:

      - "sleep"

      - "5000"




