Posts

Kubernetes Mock Exam 3

Task 1
1. Create the service account
#kubectl create serviceaccount pvviewer
2. Create a cluster role that can list persistent volumes
#kubectl create clusterrole pvviewer-role --resource=persistentvolumes --verb=list
3. Create the cluster role binding
#kubectl create clusterrolebinding pvviewer-role-binding --clusterrole=pvviewer-role --serviceaccount=default:pvviewer
4. Create a pod that uses the service account (see the pod.yaml sketch below)
#kubectl run --generator=run-pod/v1 pvviewer --image=redis --dry-run -o yaml > pod.yaml
Add serviceAccountName: pvviewer to the yaml file
#kubectl create -f pod.yaml
Document: configure service account

Task 2
Document: cheatsheet - get the InternalIP of all nodes
#kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' > /root/node_ips

Task 3
Create a pod with two containers using different images
#kubectl run --generator=run-pod/v1 multi-pod --image=nginx --dry-run -o yaml > multi-pod.yaml
Change the yaml file. Add second pod an...
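For Task 1 step 4, a minimal sketch of what the edited pod.yaml could look like (the real file is whatever kubectl generated, with only serviceAccountName added):

apiVersion: v1
kind: Pod
metadata:
  name: pvviewer
spec:
  serviceAccountName: pvviewer
  containers:
  - image: redis
    name: pvviewer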

Kubernetes Mock Exam 2

Etcd db backup
1. Check the etcd version:
#ETCDCTL_API=3 etcdctl version
2. Check the etcd config file under /etc/kubernetes/manifests
3. Copy the key information (endpoints and certificate paths) from the file
4. Make the backup (see the full command sketch below):
#ETCDCTL_API=3 etcdctl <key information> snapshot save /tmp/etcd-backup.db
5. Verify the backup:
#ETCDCTL_API=3 etcdctl <key information> snapshot status <destination> -w table

Create pod with specific CPU and Memory
#kubectl run --generator=run-pod/v1 elephant --image=redis --dry-run -o yaml > elephant.yaml
#vi elephant.yaml
Add the resources section (requests shown here; use limits instead if the task asks for limits):
containers:
- image: redis
  name: elephant
  resources:
    requests:
      cpu: "1"
      memory: "200Mi"
#kubectl create -f elephant.yaml
#kubectl describe pod elephant

Create pod with volume
#kubectl run --generator=run-pod/v1 redis-storage --image=redis --dry-run -o yaml > redis.yaml
#vi redis.yaml
Add the volume section
spec:
  containers:
  - image: redis:alpi...
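For step 4, on a kubeadm cluster the "key information" is usually the endpoint plus the certificate paths taken from the etcd static pod manifest. A sketch of the full backup command, assuming the default kubeadm paths:

#ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /tmp/etcd-backup.db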

Kubernetes Mock Exam 1

Use the command line to create a pod
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run
#kubectl run --generator=run-pod/v1 nginx-pod --image=nginx:alpine

Use the command line to create a pod with the label tier=msg (see the yaml sketch below)
#kubectl run --generator=run-pod/v1 messaging --image=redis:alpine --dry-run -o yaml > messaging.yaml
Edit the yaml file and add the label:
metadata:
  labels:
    tier: msg
#kubectl create -f messaging.yaml

Create a namespace with the name apx-x9984574
https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/#create-new-namespaces
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "apx-x9984574"
  }
}

List nodes in JSON format
https://kubernetes.io/docs/reference/kubectl/jsonpath/
#kubectl get nodes -o json > /opt/outputs/nodes-z3444kd9.json

Create a service with the command line
https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster...
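A rough sketch of messaging.yaml after adding the label (the generated file may also carry extra fields such as the default run: messaging label, resources: {}, and dnsPolicy):

apiVersion: v1
kind: Pod
metadata:
  name: messaging
  labels:
    tier: msg
spec:
  containers:
  - image: redis:alpine
    name: messaging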

Kubernetes json path

Use JSONPath to filter data.
#kubectl get nodes
#kubectl get nodes -o wide
1. Identify the kubectl command
2. Familiarize yourself with the json output
#kubectl get nodes -o json
#kubectl get pods -o json
3. Form the JSONPath query, e.g. .items[0].spec.containers[0].image
4. Use the JSONPath query with the kubectl command
#kubectl get pods -o jsonpath='{.items[0].spec.containers[0].image}'

Combine two queries and separate them with {"\n"} or {"\t"}
#kubectl get nodes -o jsonpath='{.items[*].metadata.name}{"\n"}{.items[*].status.capacity.cpu}'
Result:
master node01
4      4

Format the display
Start loop --> '{range .items[*]}
Operation  --> {.metadata.name}{"\t"}{.status.capacity.cpu}{"\n"}
End loop   --> {end}'
#kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\n"}{end}'
Result:
master  4
node01  4

Format the display with custom column...
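For reference, a hedged sketch of the custom-columns form of the same name/cpu display (the column headers NODE and CPU are just illustrative):

#kubectl get nodes -o custom-columns=NODE:.metadata.name,CPU:.status.capacity.cpu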

Kubernetes troubleshooting

Application Failure
Check the service
#curl http://<appname>
#kubectl describe service <service name>
Check the pod
#kubectl get pod
#kubectl describe pod <pod name>
#kubectl logs <pod name>
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/

Control Plane Failure
Cluster set up with kubeadm:
#kubectl get pod -n kube-system
#kubectl logs <pod name> -n kube-system
Components deployed as services:
#service kube-apiserver status
#service kube-controller-manager status
#service kube-scheduler status
#service kubelet status
#service kube-proxy status
#journalctl -u kube-apiserver
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/

Worker Node Failure
#kubectl get node
#kubectl describe node <node name>
ssh to the node, then:
#service kubelet status
#systemctl start kubelet.service
#journalctl -u kubelet
Check the kubelet config file: /var/lib/kubelet/config.yaml
Check the certificate (see the example below)
#openssl x509 -in <certificate crt file> -text
Network ...
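As an example of the certificate check, assuming a kubeadm cluster where the API server certificate sits at the default path /etc/kubernetes/pki/apiserver.crt:

#openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout
The -text output includes the issuer, the subject alternative names, and the validity dates, which is usually what needs verifying.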

Kubernetes test

Manual test (see the smoke-test sketch below)
1. Nodes status
2. Pods status
3. System pod status
4. Service status
5. Try to deploy a pod / replicaset ...

Automatic test
test-infra: ~1000 test cases
https://github.com/kubernetes/test-infra
The full suite needs about 12 hours; the conformance suite needs about 1.5 hours.
#go get -u k8s.io/test-infra/kubetest
#kubetest --extract=v1.11.3
#cd kubernetes
#export KUBE_MASTER_IP="192.168.26.10:6443"
#export KUBE_MASTER=kube-master
Full test
#kubetest --test --provider=skeleton > testout.txt
Conformance test
#kubetest --test --provider=skeleton --test-args="--ginkgo.focus=\[Conformance\]" > testout.txt
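A minimal sketch of the manual checks as kubectl commands (the deployment name smoke-test is just illustrative):

#kubectl get nodes
#kubectl get pods --all-namespaces
#kubectl get pods -n kube-system
#kubectl get services --all-namespaces
#kubectl create deployment smoke-test --image=nginx
#kubectl get deployment smoke-test
#kubectl delete deployment smoke-test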

Kubernetes master node HA

Master node HA covers ETCD, the API Server, the Controller Manager, and the Scheduler.

API Server: active - active, with a load balancer in front of it.
Controller manager / Scheduler: active - standby, chosen through leader election.
kube-controller-manager --leader-elect true
Other options:
1. --leader-elect-lease-duration 15s
2. --leader-elect-renew-deadline 10s
3. --leader-elect-retry-period 2s

ETCD: active - standby for writes. It is distributed, and RAFT is used to choose the leader in the cluster.
The leader handles write requests and syncs them to the other members.
Quorum is defined as N/2 + 1. A write is successful only once at least a quorum of members has acknowledged it.
When defining the cluster, it is better to use an odd number of members, e.g. 3 or 5.
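A quick worked example of the quorum formula (integer division implied):

N = 3 members: quorum = 3/2 + 1 = 2, tolerates 1 failure
N = 5 members: quorum = 5/2 + 1 = 3, tolerates 2 failures
N = 4 members: quorum = 4/2 + 1 = 3, still tolerates only 1 failure, which is why odd sizes are preferred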