[CKA] Certification Exam Questions
Obtained the CKA certification through the November 2024 exam.
The material below is a collection of the questions that most commonly come up when earning this certification.
* The CKA exam questions are scheduled to be renewed.
* This post will be removed if it causes any issues.
#1 ETCD Data Restore
# Problem
First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379, saving the snapshot to /data/etcd-snapshot.db
Next, restore an existing, previous snapshot located at /data/etcd-snapshot-previous.db
The following TLS certificates/key are supplied for connecting to the server with etcdctl:
CA certificate: /etc/kubernetes/pki/etcd/ca.crt
Client certificate: /etc/kubernetes/pki/etcd/server.crt
Client key: /etc/kubernetes/pki/etcd/server.key
# Answer
$ ETCDCTL_API=3 etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
snapshot save /data/etcd-snapshot.db
$ ETCDCTL_API=3 etcdctl \
--data-dir /var/lib/etcd-previous \
snapshot restore /data/etcd-snapshot-previous.db
# Check that etcd is back up
$ sudo docker ps -a | grep etcd
# Edit /etc/kubernetes/manifests/etcd.yaml and change the hostPath path (data-dir) to the restored directory
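For reference, a minimal sketch of the etcd.yaml change, assuming the snapshot was restored into /var/lib/etcd-previous as above (the kubeadm manifest has many more fields and names may vary slightly by cluster; only the relevant part is shown):
  volumes:
  - hostPath:
      path: /var/lib/etcd-previous    # was /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
# The --data-dir flag in the etcd container command should point to the same directory.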
#2 Create a Namespace & Pod
# Problem
Create a new namespace and create a pod in the namespace
cluster: k8s
namespace name: ecommerce
pod Name: eshop-main
image: nginx:1.17
env: DB=mysql
# Answer
$ kubectl config use-context k8s
$ sudo kubectl create namespace ecommerce
$ sudo kubectl run eshop-main --image=nginx:1.17 --namespace ecommerce --env="DB=mysql"
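Optional verification, not required by the task, to confirm the Pod is running and the environment variable is set (env is available in the nginx image):
$ sudo kubectl get pod eshop-main -n ecommerce
$ sudo kubectl exec -n ecommerce eshop-main -- env | grep DB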
#3 kubelet Hosting (Static Pod)
# Problem
Configure kubelet hosting to start a pod on the node
Node: k8s-w1
pod Name: web
image: nginx
# Answer
$ kubectl run web --image=nginx --dry-run=client -o yaml
$ ssh k8s-w1   # connect to the node via ssh
$ sudo cat /var/lib/kubelet/config.yaml   # check the static pod path
-> staticPodPath: /etc/kubernetes/manifests
$ cd /etc/kubernetes/manifests
$ sudo vi web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - image: nginx
    name: web
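After the manifest is saved, the kubelet creates the static Pod on its own; seen from the control plane, its name has the node name appended. A quick check (assuming the node is k8s-w1):
$ sudo kubectl get pods -o wide | grep web
-> web-k8s-w1 should be in Running state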
#4 Create a Pod (Multi-Container)
# Problem
Create pod
Working cluster: hk8s
Create a pod named lab004 with 3 containers running nginx, redis, and memcached
# Answer
$ kubectl config use-context hk8s
$ sudo kubectl run lab004 --image=nginx --dry-run=client -o yaml > multi.yaml
$ sudo vi ./multi.yaml
apiVersion: v1
kind: Pod
metadata:
  name: lab004
spec:
  containers:
  - image: nginx
    name: nginx
  - image: redis
    name: redis
  - image: memcached
    name: memcached
$ sudo kubectl apply -f multi.yaml
$ sudo kubectl get pods
$ sudo kubectl describe pods lab004
#5 Create a Sidecar Container
# Problem
# An existing Pod needs to be integrated into the Kubernetes built-in logging architecture (e.g. kubectl logs).
Adding a streaming sidecar container is a good and common way to accomplish this requirement.
- Add a sidecar container named sidecar, using busybox image, to existing Pod eshop-cart-app
- The new sidecar container has to run the following command: /bin/sh, -c, "tail -n+1 -F /var/log/cart-app.log"
- Use a volume, mounted at /var/log, to make the log file cart-app.log available to the sidecar container.
- Don't modify the cart-app
# Search keyword: logging
# Dump the existing Pod definition to a YAML file
$ sudo kubectl get pods eshop-cart-app -o yaml > eshop.yaml
# Edit eshop.yaml
$ sudo vi ./eshop.yaml   # the current spec looks like the following; modify it as shown further below
apiVersion: v1
kind: Pod
metadata:
  name: eshop-cart-app
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - 'i=1;while :;do echo -e "$i: Price: $((RANDOM % 10000 + 1))" >> /var/log/cart-app.log;
      i=$((i+1)); sleep 2; done'
    image: busybox
    name: cart-app
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
# Official documentation for this solution
# https://kubernetes.io/docs/concepts/cluster-administration/logging/#sidecar-container-with-logging-agent
# Modify the spec as follows
apiVersion: v1
kind: Pod
metadata:
  name: eshop-cart-app
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - 'i=1;while :;do echo -e "$i: Price: $((RANDOM % 10000 + 1))" >> /var/log/cart-app.log;
      i=$((i+1)); sleep 2; done'
    image: busybox
    name: cart-app
    volumeMounts:
    - mountPath: /var/log
      name: varlog
  - name: sidecar
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/cart-app.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - emptyDir: {}
    name: varlog
# Delete the existing running Pod (delete first, then recreate)
$ sudo kubectl delete pods eshop-cart-app --force
# Create the Pod from the newly written YAML file
$ sudo kubectl apply -f eshop.yaml
# Check that it is running normally
$ sudo kubectl get pods
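To confirm the sidecar is integrated with the built-in logging, the log stream can be read through kubectl logs using the sidecar container name defined above:
$ sudo kubectl logs eshop-cart-app -c sidecar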
#6 Increase the Number of Replicas
# Problem
Working cluster: k8s
Expand the number of running Pods in "eshop-order" to 5
namespace : devops
deployment : eshop-order
# Answer
$ kubectl config use-context k8s
$ sudo kubectl scale deployment eshop-order -n devops --replicas=5
# Create & Scale a Deployment
# Problem
Create a deployment as follows:
Task:
name : webserver
2 replicas
label : app_env_stage=dev
container name : webserver
container image : nginx:1.14
Scale Out Deployment
Scale the deployment webserver to 3 pods
# Answer
$ sudo kubectl create deployment webserver --image=nginx:1.14 --replicas=2 --dry-run=client -o yaml > webserver.yaml
$ sudo vi webserver.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 2
  selector:
    matchLabels:
      app_env_stage: dev
  template:
    metadata:
      labels:
        app_env_stage: dev
    spec:
      containers:
      - image: nginx:1.14
        name: webserver
$ sudo kubectl apply -f webserver.yaml
$ sudo kubectl scale deployment webserver --replicas=3
#7 Rolling Update & Rollback
# Problem
Working cluster: k8s
Create a deployment as follows:
TASK:
name : nginx-app
Using container nginx with version 1.11.10-alpine
The deployment should contain 3 replicas
Next, deploy the application with new version 1.11.13-alpine, by performing a rolling update
Finally, rollback that update to the previous version 1.11.10-alpine
# Answer
$ sudo kubectl config use-context k8s
$ sudo kubectl create deployment nginx-app --image=nginx:1.11.10-alpine --replicas=3
$ sudo kubectl set image deployment nginx-app nginx=nginx:1.11.13-alpine --record
$ sudo kubectl rollout status deployment nginx-app
$ sudo kubectl rollout history deployment nginx-app
$ sudo kubectl rollout undo deployment nginx-app
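Optional check that the rollback brought the image back to 1.11.10-alpine (jsonpath is just one way to inspect it):
$ sudo kubectl get deployment nginx-app -o jsonpath='{.spec.template.spec.containers[0].image}'
-> nginx:1.11.10-alpine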
#8 Node Selector
# Run a specific Pod on a specific node
# After labeling a node, setting nodeSelector in the Pod spec schedules the Pod onto the desired node
# Search keyword: assign-pods-nodes or disktype
# Reference: https://kubernetes.io/ko/docs/concepts/scheduling-eviction/assign-pod-node/
# Problem
Working cluster: k8s
Schedule a pod as follows:
Name : eshop-store
Image : nginx
Node Selector : disktype=ssd
# Answer
$ sudo kubectl get nodes -L disktype
$ sudo kubectl run eshop-store --image=nginx --dry-run=client -o yaml > eshop.yaml
$ sudo vi eshop.yaml
apiVersion: v1
kind: Pod
metadata:
  name: eshop-store
spec:
  containers:
  - image: nginx
    name: eshop-store
  nodeSelector:
    disktype: ssd
$ sudo kubectl create -f eshop.yaml
$ sudo kubectl get pods -o wide eshop-store
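If no node carries the disktype=ssd label yet, it can be added first; the node name k8s-w1 below is only an example:
$ sudo kubectl label nodes k8s-w1 disktype=ssd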
#9 Node Management (drain & cordon)
# Problem
Working cluster: node1.example.com
Set the node named node1.example.com as unavailable and reschedule all the pods running on it.
# Answer
$ sudo kubectl drain node1.example.com --ignore-daemonsets --force --delete-emptydir-data
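Optional verification; the node should report SchedulingDisabled, and uncordon (not part of this task) would make it schedulable again:
$ sudo kubectl get nodes
$ sudo kubectl uncordon node1.example.com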
#10 Gather Node Information
# Problem 1
Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /var/CKA2024/RN0001
# Answer
$ sudo kubectl get nodes | grep -iw ready
$ sudo kubectl describe node [NODE_NAME] | grep -i noschedule
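The count still has to be written to the answer file. A sketch of the final step, assuming the commands above showed, for example, 3 Ready nodes without a NoSchedule taint (replace 3 with the number actually counted):
$ sudo sh -c 'echo 3 > /var/CKA2024/RN0001'
$ cat /var/CKA2024/RN0001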
# Problem 2
Determine how many nodes in the cluster are ready to run normal workloads (i.e. workloads that do not have any special tolerations).
Output this number to the file /var/CKA2024/NODE-Count
# Answer
$ sudo kubectl get nodes | grep -iw ready | wc -l > /var/CKA2024/NODE-Count
#11 Deployment & Expose the service
# Problem
Reconfigure the existing deployment front-end and add a port specification named http exposing port 80/tcp of the existing container nginx.
Create a new service named front-end-svc exposing the container port http
Configure the new service to also expose the individual Pods via a NodePort on the nodes on which they are scheduled
# Answer
$ sudo kubectl config use-context k8s
$ sudo kubectl get deployments.apps front-end -o yaml > front-end.yaml
$ sudo vi front-end.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-end
spec:
  replicas: 2
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
          name: http
---
apiVersion: v1
kind: Service
metadata:
  name: front-end-svc
spec:
  type: NodePort
  selector:
    run: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: http
$ sudo kubectl delete deployments.apps front-end
$ sudo kubectl apply -f front-end.yaml
#12 Extract Pod Logs & Find the Pod with the Highest CPU Usage
# Problem 1
Monitor the logs of pod custom-app and: Extract log lines corresponding to error 'file not found'. Write them to /var/CKA2024/podlog.
# Answer
$ sudo kubectl logs custom-app | grep 'file not found' > /var/CKA2024/podlog
$ cat /var/CKA2024/podlog
# Problem 2
From the pod label name=overloaded-cpu, find pods running high CPU workloads and write the name of the pod consuming the most CPU to the file /var/CKA2024/cpu_load_pod.txt
# Answer
$ sudo kubectl top nodes
$ sudo kubectl top pods
$ sudo kubectl top pods -l name=overloaded-cpu --sort-by=cpu
$ sudo echo "POD_NAME" > /var/CKA2024/cpu_load_pod.txt
#13 Init Container
# Search keyword: sidecar container
# Problem
Tasks: Add an init container to web-pod (which has been defined in the spec file /data/cka/webpod.yaml).
The init container should create an empty file named /workdir/data.txt.
If /workdir/data.txt is not detected the Pod should exit.
Once the spec file has been updated with the init container definition, the Pod should be created.
# Answer
$ sudo vi /data/cka/webpod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - image: busybox:1.28
    name: main
    command: ['sh', '-c', 'if [ ! -f /workdir/data.txt ]; then exit 1; else sleep 300; fi']
    volumeMounts:
    - name: workdir
      mountPath: "/workdir"
  initContainers:
  - name: init
    image: busybox:1.28
    command: ['sh', '-c', "touch /workdir/data.txt"]
    volumeMounts:
    - name: workdir
      mountPath: "/workdir"
  volumes:
  - name: workdir
    emptyDir: {}
$ sudo kubectl apply -f /data/cka/webpod.yaml
$ sudo kubectl get pods
$ sudo kubectl exec web-pod -c main -- ls -l /workdir/data.txt
#14 Create a NodePort Service
# Reference: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
# Problem
Create a service of type NodePort with the port 32767 for the nginx pod with the pod selector app: webui
# Answer
$ sudo vi myservice.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: webui
  ports:
  - port: 80
    targetPort: 80
    nodePort: 32767
$ sudo kubectl apply -f myservice.yaml
$ curl node1.example.com:32767
#15 ConfigMap
# Problem
Expose Configuration settings
Task:
1. All operations in this question should be performed in the ckad namespace
2. Create a ConfigMap called web-config that contains the following two entries:
- connection_string=localhost:80
- external_url=cncf.io
3. Run a pod called web-pod with a single container running the nginx:1.19.8-alpine image, and expose these configuration settings as environment variables inside the container.
# Answer
$ sudo kubectl create namespace ckad
$ sudo kubectl create configmap web-config -n ckad --from-literal=connection_string=localhost:80 --from-literal=external_url=cncf.io
$ sudo kubectl run web-pod --image=nginx:1.19.8-alpine --port=80 --dry-run=client -o yaml > web-pod.yaml
$ sudo vi web-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  namespace: ckad
spec:
  containers:
  - image: nginx:1.19.8-alpine
    name: web-pod
    envFrom:
    - configMapRef:
        name: web-config
$ sudo kubectl apply -f web-pod.yaml
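Optional check that the ConfigMap entries are exposed as environment variables inside the container:
$ sudo kubectl exec -n ckad web-pod -- env | grep -E 'connection_string|external_url'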
#16 Secret
# Search keyword: Optional Secrets
# Reference: https://kubernetes.io/docs/concepts/configuration/secret/#use-cases
# Problem
Create a kubernetes secret and expose using a file in the pod.
1. Create a kubernetes Secret as follows:
- Name : super-secret
- DATA : password=secretpass
2. Create a Pod named pod-secrets-via-file, using the redis image, which mounts a secret named super-secret at /secrets.
3. Create a second Pod named pod-secrets-via-env, using the redis image, which exports password as PASSWORD
# Answer
$ sudo kubectl create secret generic super-secret --from-literal=password=secretpass
$ sudo vi pod-secret-via-file.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-via-file
spec:
  volumes:
  - name: foo
    secret:
      secretName: super-secret
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/secrets"
$ sudo kubectl apply -f pod-secret-via-file.yaml
$ sudo vi pod-secret-via-env.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-via-env
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
    - name: PASSWORD
      valueFrom:
        secretKeyRef:
          name: super-secret
          key: password
$ sudo kubectl apply -f pod-secret-via-env.yaml
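Optional verification of both Pods; a secret mounted as a volume creates one file per key, so password should appear under /secrets:
$ sudo kubectl exec pod-secrets-via-file -- ls /secrets
$ sudo kubectl exec pod-secrets-via-env -- env | grep PASSWORD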
#17 Ingress
# Problem
1. Operate an application Service
In the ingress-nginx namespace, create an nginx Pod that runs the nginx image with the label app=nginx.
Create a Service named nginx-service that serves the nginx Pod created above.
The appjs Service is already running; no additional configuration is needed.
2. Configure an Ingress
Create a file named app-ingress.yaml and configure an Ingress with the following conditions:
ingress name: app-ingress
Accessing NODE_PORT:30080/ routes to the nginx service
Accessing NODE_PORT:30080/app routes to the appjs service
Include the following annotations in the Ingress configuration:
annotations:
  kubernetes.io/ingress.class: nginx
# Answer
$ sudo kubectl run nginx --image=nginx --labels=app=nginx -n ingress-nginx
$ sudo kubectl expose -n ingress-nginx pod nginx --name=nginx-service --port=80 --target-port=80
$ sudo vi app-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: ingress-nginx
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
      - path: /app
        pathType: Prefix
        backend:
          service:
            name: appjs
            port:
              number: 80
$ sudo kubectl apply -f app-ingress.yaml
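Optional verification, assuming the ingress-nginx controller is already published on NodePort 30080 as stated in the problem (node1.example.com is only an example hostname):
$ curl node1.example.com:30080/
$ curl node1.example.com:30080/app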
#18 Create a PV
# Problem
Create a persistent volume with name app-data, of capacity 1Gi and access mode ReadOnlyMany.
The type of volume is hostPath and its location is /var/app-data.
# Answer
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadOnlyMany
  hostPath:
    path: /var/app-data
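Apply and verify (the file name app-data-pv.yaml is arbitrary):
$ sudo vi app-data-pv.yaml   # paste the manifest above
$ sudo kubectl apply -f app-data-pv.yaml
$ sudo kubectl get pv app-data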
# Original reference manifest from the official documentation
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"