Posts

Showing posts from August, 2021

Kubernetes Mock Exam 3

Task 1
1. Create service account
#kubectl create serviceaccount pvviewer
2. Create cluster role - list pv
#kubectl create clusterrole pvviewer-role --resource=persistentvolumes --verb=list
3. Create cluster rolebinding
#kubectl create clusterrolebinding pvviewer-role-binding --clusterrole=pvviewer-role --serviceaccount=default:pvviewer
4. Create pod with service account
#kubectl run --generator=run-pod/v1 pvviewer --image=redis --dry-run -o yaml > pod.yaml
Edit the yaml file and add serviceAccountName: pvviewer (see the sketch below)
#kubectl create -f pod.yaml
Document: configure service account
Task 2
Document: cheatsheet - get InternalIP of all nodes
#kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' > /root/node_ips
Task 3
Create pod with two containers with different images
#kubectl run --generator=run-pod/v1 multi-pod --image=nginx --dry-run -o yaml > multi-pod.yaml
Change the yaml file
Add second container an...
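A minimal sketch of pod.yaml after adding the service account (the names and image follow Task 1 above; the rest is standard pod boilerplate):

apiVersion: v1
kind: Pod
metadata:
  name: pvviewer
spec:
  serviceAccountName: pvviewer    # the service account created in step 1
  containers:
  - name: pvviewer
    image: redis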

Kubernetes Mock Exam 2

Etcd db backup
1. Check etcd version: #ETCDCTL_API=3 etcdctl version
2. Check the etcd config: the static pod manifest in /etc/kubernetes/manifests
3. Copy the key information (endpoints and certificate flags) from the manifest
4. Make the backup: #ETCDCTL_API=3 etcdctl <key information> snapshot save /tmp/etcd-backup.db
5. Verify the backup: #ETCDCTL_API=3 etcdctl <key information> snapshot status <destination> -w table
Create pod with specific CPU and Memory
#kubectl run --generator=run-pod/v1 elephant --image=redis --dry-run -o yaml > elephant.yaml
#vi elephant.yaml
Add resources section (note: cpu and memory must go under requests or limits, not directly under resources)
containers:
- image: redis
  name: elephant
  resources:
    requests:
      cpu: "1"
      memory: "200Mi"
#kubectl create -f elephant.yaml
#kubectl describe pod elephant
Create pod with volume
#kubectl run --generator=run-pod/v1 redis-storage --image=redis --dry-run -o yaml > redis.yaml
#vi redis.yaml
Add volume section
spec:
  containers:
  - image: redis:alpi...
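The truncated volume section above follows the usual pod-volume pattern; a minimal sketch using an emptyDir volume (the redis:alpine image is inferred from the truncated text, and the mount path /data/redis and volume name data are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: redis-storage
spec:
  containers:
  - name: redis-storage
    image: redis:alpine
    volumeMounts:
    - mountPath: /data/redis   # assumed mount path for illustration
      name: data
  volumes:
  - name: data
    emptyDir: {}               # node-local scratch volume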

Kubernetes Mock Exam 1

use command line create pod
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run
#kubectl run --generator=run-pod/v1 nginx-pod --image=nginx:alpine
use command line create pod with labels tier=msg
#kubectl run --generator=run-pod/v1 messaging --image=redis:alpine --dry-run -o yaml > messaging.yaml
edit the yaml file, add the label (see the sketch below)
metadata:
  labels:
    tier: msg
#kubectl create -f messaging.yaml
create namespace with name apx-x9984574
https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/#create-new-namespaces
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "apx-x9984574"
  }
}
list nodes in json format
https://kubernetes.io/docs/reference/kubectl/jsonpath/
#kubectl get nodes -o json > /opt/outputs/nodes-z3444kd9.json
create service with command line
https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster...
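For reference, a minimal sketch of messaging.yaml after adding the label (name, image and label follow the task above):

apiVersion: v1
kind: Pod
metadata:
  name: messaging
  labels:
    tier: msg        # label required by the task
spec:
  containers:
  - name: messaging
    image: redis:alpine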

Kubernetes json path

Use json path to filter data.
#kubectl get nodes
#kubectl get nodes -o wide
1. Identify the kubectl command
2. Familiarize with the json output
#kubectl get nodes -o json
#kubectl get pods -o json
3. Form the json path query
.items[0].spec.containers[0].image
4. Use the json path query with the kubectl command
#kubectl get pods -o jsonpath='{.items[0].spec.containers[0].image}'
Combine two queries, separated by {"\n"} or {"\t"}
#kubectl get nodes -o jsonpath='{.items[*].metadata.name}{"\n"}{.items[*].status.capacity.cpu}'
Result:
master node01
4      4
Format the display
Start loop --> '{range .items[*]}
Operation  --> {.metadata.name}{"\t"}{.status.capacity.cpu}{"\n"}
End loop   --> {end}'
#kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\n"}{end}'
Result
master  4
node01  4
Format the display with custom column...

Kubernetes troubleshooting

Application Failure
Check service
#curl http://<appname>
#kubectl describe service <service name>
Check pod
#kubectl get pod
#kubectl describe pod <pod name>
#kubectl logs <pod name>
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/
Control Plane Failure
If the cluster was set up with kubeadm
#kubectl get pod -n kube-system
#kubectl logs <pod name> -n kube-system
If the control plane runs as native services
#service kube-apiserver status
#service kube-controller-manager status
#service kube-scheduler status
#service kubelet status
#service kube-proxy status
#journalctl -u kube-apiserver
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/
Worker Node Failure
#kubectl get node
#kubectl describe node <node name>
ssh to the node
#service kubelet status
#systemctl start kubelet.service
#journalctl -u kubelet
Check the kubelet config file /var/lib/kubelet/config.yaml
Check certificate
#openssl x509 -in <certificate crt file> -text
Network ...

Kubernetes test

Manual test
1. Nodes status
2. Pods status
3. System pod status
4. Service status
5. Try to deploy a pod / replicaset ...
Automatic test
test-infra: 1000 test cases
https://github.com/kubernetes/test-infra
Full test needs ~12 hours; conformance test needs ~1.5 hours
#go get -u k8s.io/test-infra/kubetest
#kubetest --extract=v1.11.3
#cd kubernetes
#export KUBE_MASTER_IP="192.168.26.10:6443"
#export KUBE_MASTER=kube-master
Full test
#kubetest --test --provider=skeleton > testout.txt
Conformance test
#kubetest --test --provider=skeleton --test-args="--ginkgo.focus=\[Conformance\]" > testout.txt

Kubernetes master node HA

Master node HA: ETCD / API Server / Controller manager / Scheduler
API Server: active - active, with a loadbalancer in front
Controller manager / scheduler: active - standby
kube-controller-manager --leader-elect true
Other options
1. --leader-elect-lease-duration 15s
2. --leader-elect-renew-deadline 10s
3. --leader-elect-retry-period 2s
ETCD: active - standby
Distributed: uses RAFT to choose the leader in the cluster
The leader handles write requests and syncs them to the other members
Quorum = n/2 + 1. A write is successful only once at least a quorum of members have it.
Prefer an odd cluster size such as 3 or 5: a 3-node cluster has quorum 2 and tolerates 1 failure; a 5-node cluster has quorum 3 and tolerates 2.

Kubernetes environment parameter

Environment parameter
1. Plain key value: put the environment parameter directly in the pod yaml file
pod yaml file
spec:
  containers:
  - env:
    - name: APP_COLOR
      value: pink
2. configMap: put the environment parameter in a config map
create #kubectl create -f <configMap file>
check #kubectl get configmap
check #kubectl describe configmap <configmap name>
Example yaml file: https://github.com/tomshenhao/kubernetes-learning/blob/master/configmap.yaml
inject the config map into the pod (full sketch below)
pod yaml file
spec:
  containers:
  - name: webapp-color
    image: kodekloud/webapp-color
    envFrom:
    - configMapRef:
        name: webapp-config-map
3. Secrets: put the environment parameter in a secret (values are stored base64-encoded)
create #kubectl create -f <secret file>
check #kubectl get secret...
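A minimal sketch of a config map and a pod consuming it via envFrom, matching the names used above (the APP_COLOR key/value is illustrative; the linked configmap.yaml may differ):

apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-config-map
data:
  APP_COLOR: blue               # illustrative key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: webapp-color
spec:
  containers:
  - name: webapp-color
    image: kodekloud/webapp-color
    envFrom:
    - configMapRef:
        name: webapp-config-map   # every key in the config map becomes an env var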

Kubernetes deployment

Deployment
Similar to the replicaset yaml file; kind changes to Deployment (see the sketch below)
Deploy #kubectl create -f <yaml file> --record
Check #kubectl get deployments
Check #kubectl get all
Check #kubectl describe deployment
Example yaml file
https://github.com/tomshenhao/kubernetes-learning/blob/master/deployment.yml
Create a deployment without a file
#kubectl run <pod name> --image=<image name>
Example: kubectl run nginx --image=nginx
Deployment - update / rollback
Check #kubectl rollout status deployment/<deployment name>
Check #kubectl rollout history deployment/<deployment name>
Deployment strategy
1. Recreate
2. Rolling update - default
Update #kubectl apply -f <yaml file>
Rollback #kubectl rollout undo deployment/<deployment name>
Nginx as example to test
https://github.com/tomshenhao/kubernetes-learning/blob/master/deployment_1.0.yml
https://github.com/tomshenhao/kubernetes-learning/blob/master/deployment_1.1.yml
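For reference, a minimal Deployment sketch in the shape these commands expect (nginx is used as an illustrative image; the linked deployment.yml may differ):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: RollingUpdate        # the default; the other option is Recreate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx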

Kubernetes setup

Setup K8S
Test environments: https://labs.play-with-k8s.com / google cloud
1. Click left side "ADD NEW INSTANCE" - create the master
2. Follow the guide and run the commands
a. Initialize the cluster master node:
#kubeadm init --apiserver-advertise-address $(hostname -i)
After this finishes, it prints the command to add nodes:
#kubeadm join 192.168.0.18:6443 --token 4kr7q0.ryhgmuy4bdlbm6rg \
    --discovery-token-ca-cert-hash sha256:5bc5c451e627851321bcce694c486a7aa0852103c72d19cdcd327b4f01d8a127
b. Initialize cluster networking:
#kubectl apply -n kube-system -f \
    "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
3. Click left side "ADD NEW INSTANCE" to add nodes
4. Run the command to add them to the cluster
#kubeadm join 192.168.0.18:6443 --token 4kr7q0.ryhgmuy4bdlbm6rg \
    --discovery-token-ca-cert-hash sha256:5bc5c451e627851321bcce694c486a7aa0852103c72d19cdcd327b4f01d8...

Kubernetes namespace

Namespace
Cluster start auto-creates the following namespaces
1. default
2. kube-system
3. kube-public
Use namespaces to separate environments
connect to mysql
default namespace: mysql.connect("db-service")
other namespace - dev: mysql.connect("db-service.dev.svc.cluster.local")
list pods
check default namespace #kubectl get pods
check other namespace #kubectl get pods --namespace=kube-system
Create pod
create in default namespace #kubectl create -f <yaml file>
create in other namespace #kubectl create -f <yaml file> --namespace=dev
Or define the namespace in the yaml file in the metadata part: namespace: <namespace>
create namespace #kubectl create -f <yaml file> (see the sketch below)
switch namespace #kubectl config set-context $(kubectl config current-context) --namespace=<namespace>
check namespace list #kubectl get namespaces
check items in all namespaces #kubectl get all --all-namespaces
https://github.com/tomshenhao/kubernetes-learni...
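A minimal sketch of such a namespace yaml file (using the dev namespace from the examples above):

apiVersion: v1
kind: Namespace
metadata:
  name: dev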

Kubernetes storage

Volumes
A pod on one node mounts a volume
https://github.com/tomshenhao/kubernetes-learning/blob/master/pod_hostpath.yaml
A replicaset / deployment across the cluster mounts a volume such as AWS EBS
Persistent Volume
The admin creates the persistent volume
create pv #kubectl create -f <yaml file>
check pv #kubectl get persistentvolume
https://github.com/tomshenhao/kubernetes-learning/blob/master/persistentvolume.yaml
Persistent Volume Claims
The user creates a persistent volume claim to use a persistent volume
create pvc #kubectl create -f <yaml file>
check pvc #kubectl get persistentvolumeclaim
delete pvc #kubectl delete persistentvolumeclaim <name>
https://github.com/tomshenhao/kubernetes-learning/blob/master/persistentvolumeclaim.yaml
Pod uses the persistent volume claim
https://github.com/tomshenhao/kubernetes-learning/blob/master/pod_pvc.yaml
(see the sketch below for all three objects together)
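A minimal end-to-end sketch of the three objects above - PV, PVC, and a pod using the claim (hostPath, size and paths are illustrative; the linked yaml files may differ):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example
spec:
  capacity:
    storage: 1Gi               # illustrative size
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/pv-example     # illustrative host path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-example
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi             # bound to a PV that can satisfy this request
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-pvc-example
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-example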

Kubernetes security

Security primitives - Authentication
Users - managed by kube-apiserver
1. Admins
2. Developers
Static file methods - not recommended for authentication
1. static password file
2. static token file (bearer tokens)
Example static password file
# User File Contents
password123,user1,u0001
password123,user2,u0002
password123,user3,u0003
password123,user4,u0004
password123,user5,u0005
Authenticate as a specific user with curl:
curl -v -k https://<service> -u "user1:password"
TLS Certificates
CSR: certificate signing request
Public key: *.crt / *.pem
Private key: *.key / *-key.pem
CA: root certificate - 10 years
Server: server certificate - 1 year
Client: client certificate - 1 year
CA certificate: ca.crt / ca.key
Server certificates
1. Master node
Kube-apiserver: apiserver.crt / apiserver.key
Etcd server: etcdserver.crt / etcdserver.key
2. Worker node
kubelet server: kubelet.crt / kubelet.key
Client certificates
1. admin: admin.crt / admin.key
2. kube-scheduler: schedule...

Kubernetes system upgrade

Node upgrade
If a node is down for more than 5 minutes, the pods on it are terminated.
Pods managed by a replicaset are automatically recreated on other nodes; a standalone pod is simply lost.
If node down < 5m: do the maintenance directly
If node down > 5m:
Before #kubectl drain <node name> --ignore-daemonsets
After #kubectl uncordon <node name>
Description:
#kubectl drain <node name>: evicts pods so they are recreated on other nodes (replicaset-managed only, not standalone pods) and marks the node as unschedulable for new pods
#kubectl cordon <node name>: marks the node as unschedulable for new pods
#kubectl uncordon <node name>: marks the node as schedulable again
Best solution:
1. Check pod type: standalone pod / replicaset
2. If standalone pod, try to recreate it on another node (cordon the node planned for maintenance)
3. Drain the node for maintenance
4. After the change finishes
5. Uncordon the node
K8S upgrade
Release: https://github.com/kubernetes/kubernetes/releases
Cluster upgrade
Master
1. kube-apiserver shou...

Kubernetes Network

CNI (Container Network Interface)
Steps of container network setup
1. Create network namespace
2. Create bridge network/interface
3. Create vEth pairs (pipe, virtual cable)
4. Attach one vEth to the namespace
5. Attach the other vEth to the bridge
6. Assign IP address
7. Bring the interfaces up
8. Enable NAT - IP masquerade
All container runtimes use the same logic, so it was standardized as CNI, used by rkt / Mesos / k8s
1. Container runtime must create the network namespace
2. Identify the network the container must attach to
3. Container runtime invokes the network plugin (bridge) when a container is added
4. Container runtime invokes the network plugin (bridge) when a container is deleted
5. Network configuration is in JSON format (see the sketch below)
6. Plugin must support command line arguments add / del / check
7. Plugin must support parameters container id, network ns etc...
8. Plugin must manage IP address assignment to pods
9. Plugin must return results in a specific format
K8S Cluster Network
Master + W...
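A minimal sketch of such a JSON network configuration for the CNI bridge plugin (the network name, bridge device and subnet are illustrative):

{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}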

Kubernetes pod

Static pods
Put the pod yaml file in the /etc/kubernetes/manifests folder
The kubelet creates the pod automatically from the file; if the file is deleted, the pod is deleted too
1. Search for "kubelet" in the system: find / -name kubelet
2. Go to that path and find the config.yaml file
3. In the file, find "staticPodPath" (see the sketch below)
check #docker ps
check #kubectl get pods --all-namespaces (static pod names include the node name)
Create pod by run
#kubectl run --restart=Never --image=busybox static-busybox --dry-run -o yaml --command -- sleep 1000 > /etc/kubernetes/manifests/static-busybox.yaml
Create pod by run
#kubectl run --generator=run-pod/v1 nginx-pod --image=nginx:alpine
Create service by run
#kubectl expose pod redis --port=6379 --name redis-service
Create deployment by run
#kubectl create deployment webapp --image=kodekloud/webapp-color
#kubectl scale deployment/webapp --replicas=3
Search pods by label
#kubectl get all --selector env=prod,bu=finance,tier=frontend
Use case: create master host ...
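For reference, the relevant part of the kubelet config file usually looks like this (the path below is the common kubeadm default):

# /var/lib/kubelet/config.yaml (kubeadm default location)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /etc/kubernetes/manifests   # kubelet watches this folder for static pod manifests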

Kubernetes knowledge

Master + Cluster (Node 1 + Node 2 ....)
Master:
1. Kube-apiserver
2. Etcd - key value store
3. Scheduler - distributes work
4. Controllers
a. Node controller - starts new containers and stops broken ones
b. Replication controller
Worker nodes:
1. Kubelet - agent that checks container status
2. Kube-proxy - communication between nodes
Etcd
key - value store
Default port: 2379
Download: https://github.com/etcd-io/etcd/releases/tag/v3.3.13
Extract #tar xzvf <file>
Start #./etcd
Set #./etcdctl set key1 value1
Retrieve #./etcdctl get key1
Etcd in K8s
Saves all information about the K8S cluster (Nodes / Pods / Configs / Secrets / Accounts / Roles / Bindings / Others)
Check #kubectl get pods -n kube-system
Kube-apiserver
Only the kube-apiserver updates ETCD
Create pod process
1. Authenticate user
2. Validate request
3. Retrieve data
4. Update ETCD
5. Scheduler
6. Node - Kubelet
Check #kubectl get pods --namespace kube-system
Controller Manager ...

Docker cmd and entrypoint

Docker - a container only stays running while its main process is alive
Example: ubuntu
If you run docker run ubuntu and then check docker ps -a, you find it stopped. The reason is that the default CMD is just bash, which exits immediately because no terminal is attached.
To keep it alive, create another dockerfile, ubuntu-sleeper
FROM ubuntu
CMD sleep 5
#docker build -t ubuntu-sleeper .
#docker run ubuntu-sleeper
If you want to change the sleep time, you need to run
#docker run ubuntu-sleeper sleep 10
If you want to drop the word sleep from the command, change to ENTRYPOINT
FROM ubuntu
ENTRYPOINT ["sleep"]
#docker build -t ubuntu-sleeper .
#docker run ubuntu-sleeper 10
If you want to set a default value for sleep
FROM ubuntu
ENTRYPOINT ["sleep"]
CMD ["5"]

Docker Network

Single docker host
host ip: 192.168.1.10
None network
#docker run --network none nginx
the container gets no network access
Host network
#docker run --network host nginx
the container uses the host's port 80 directly, so only one nginx can be started
Bridge network (default)
#docker run nginx
every container gets its own network namespace attached to docker0 (172.17.0.1)
Bridge network
check the network after starting the docker service: #systemctl start docker.service
start a docker container: #docker run nginx
check the docker container id #docker ps -a
inspect the container to find the network information #docker inspect <container id>
from the docker host this works #curl http://<containerip>:80
from outside the docker host you need a port mapping when starting the container
#docker run -p 8080:80 nginx
#curl http://<hostip>:8080

Docker Compose

Docker compose file for LNMP
1. Prepare the images (Reference Docker - LNMP)
tom/centos7.4
tom/php-fpm:1.0
tom/mysql:1.0
tom/wordpress:1.0
2. Docker compose file
version: "2"
services:
  db:
    image: "tom/mysql:1.0"
    container_name: "mydb"
    ports:
      - "3306:3306"
    networks:
      static-network:
        ipv4_address: 10.0.0.2
  web:
    image: "tom/wordpress:1.0"
    container_name: "wordpress"
    ports:
      - "80:80"
    environment:
      - WORDPRESS_DB_HOST=10.0.0.2
      - WORDPRESS_DB_USER=admin
      - WORDPRESS_DB_PASSWORD=admin
    networks:
      static-network:
        ipv4_address: 10.0.0.3
networks:
  static-network:
  ...
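The truncated top-level networks section needs an ipam block for the static addresses to resolve; a plausible sketch, assuming a 10.0.0.0/24 subnet that covers the addresses used above:

networks:
  static-network:
    driver: bridge
    ipam:
      config:
        - subnet: 10.0.0.0/24   # assumed subnet covering 10.0.0.2 / 10.0.0.3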

Docker Registry

1. Docker registry setup
docker pull registry:2.6.2
docker run -d -p 5000:5000 --name registry registry:2.6.2
2. Verify
Visit link http://localhost:5000/v2/_catalog
3. Registry web console setup
docker pull hyper/docker-registry-web
docker run -d -p 8080:8080 --name registry-web --link registry -e REGISTRY_URL=http://registry:5000/v2 -e REGISTRY_NAME=localhost:5000 hyper/docker-registry-web
4. Verify
Visit link http://localhost:8080
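The same two containers can also be expressed as a docker-compose file; a sketch based on the commands above (compose's built-in service DNS replaces --link, so the hostname registry still resolves):

version: "2"
services:
  registry:
    image: registry:2.6.2
    ports:
      - "5000:5000"
  registry-web:
    image: hyper/docker-registry-web
    ports:
      - "8080:8080"
    environment:
      - REGISTRY_URL=http://registry:5000/v2
      - REGISTRY_NAME=localhost:5000
    depends_on:
      - registry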

Docker - LNMP

Material: https://github.com/tomshenhao/docker-training
Create centos as base
cd <path>/docker-training/first/centos7
docker build -t tom/centos:7.4 .
Create php-fpm
cd <path>/docker-training/first/php-fpm
docker build -t tom/php-fpm:1.0 .
Start php-fpm
docker run -d -p 8080:80 --name webserver tom/php-fpm:1.0
Verify
open a browser and visit 127.0.0.1:8080
Create mysql
cd <path>/docker-training/first/mysql
docker build -t tom/mysql:1.0 .
Start mysql
docker run -d -p 3306:3306 -v <host path>:<docker container path> --name mydb tom/mysql:1.0
Find the mysql container IP address
ss -tnlop | grep 3306 --> pid of the mysql container
ps -ef | grep <pid> --> find the related ip address, for example 172.17.0.2
Create wordpress
cd <path>/docker-training/first/wordpress
docker build -t tom/wordpress:1.0 .
Start wordpress
docker run -d -p 80:80 -e WORDPRESS_DB_HOST=172.17.0.2 -e WORDPRESS_DB_USER=admin -e WORDPRESS_DB_PASSWORD=admin --name wordpress tom/wordp...

Docker - Tomcat

Need to download apache-tomcat-8.5.24.tar.gz
Dockerfile
FROM tom/jdk:8u141
ENV CATALINA_HOME /appl/online/products/apache/tomcat8
ENV PATH $CATALINA_HOME/bin:$PATH
COPY apache-tomcat-8.5.24.tar.gz /tmp
RUN mkdir -p /appl/online/products/apache && \
    cd /tmp && \
    tar xvf apache-tomcat-8.5.24.tar.gz -C /appl/online/products/apache && \
    rm -rf apache-tomcat-8.5.24.tar.gz && \
    cd /appl/online/products/apache && \
    mv apache-tomcat-8.5.24 tomcat8
EXPOSE 8080
ENTRYPOINT ["catalina.sh","run"]

Docker - JDK

Need to download jdk-8u141-linux-x64.tar.gz
Dockerfile
FROM tom/centos:7.4.1708
# Install oracle java 1.8 build 141
COPY jdk-8u141-linux-x64.tar.gz /tmp
RUN cd /tmp && \
    tar -xvf jdk-8u141-linux-x64.tar.gz -C /usr/local && \
    rm -rf /tmp/jdk-8u141-linux-x64.tar.gz && \
    alternatives --install /usr/bin/java java /usr/local/jdk1.8.0_141/bin/java 2
ENTRYPOINT ["/usr/sbin/init"]

Docker - Apache

Dockerfile
FROM tom/centos:7.4.1708
RUN yum install -y httpd && \
    yum clean all
COPY httpd-foreground /usr/local/bin
RUN chmod +x /usr/local/bin/httpd-foreground
EXPOSE 80
ENTRYPOINT ["httpd-foreground"]

Docker - CentOS

Dockerfile
FROM centos:latest
# Install base packages
RUN yum install -y curl wget tar bzip2 unzip vim-enhanced passwd sudo yum-utils hostname net-tools rsync man && \
    yum install -y gcc gcc-c++ git make automake cmake patch logrotate python-devel libpng-devel libjpeg-devel && \
    yum clean all
ENTRYPOINT ["/usr/sbin/init"]

Docker Command

Manage images: docker image
Manage containers: docker container
List all containers (including stopped ones): docker container ls -a
Delete an image: docker image rm <tag>
Delete a container: docker container rm <tag>
Delete all containers: docker rm -f $(docker ps -a -q)
Delete all images: docker rmi -f $(docker images -q)

Docker Setup

Centos 7
yum remove docker
yum remove docker-common
yum remove docker-selinux
yum remove docker-engine
yum install -y yum-utils
yum install -y device-mapper-persistent-data
yum install -y lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum-config-manager --enable docker-ce-edge
yum-config-manager --enable docker-ce-test
yum install docker-ce
systemctl start docker
systemctl status docker
docker run hello-world
docker --version