Inflearn Community Q&A

최의상


Practicing along with 조훈's book, 컨테이너를 다루는 표준 아키텍처: 쿠버네티스.


Hello, experts.

I'm currently studying Kubernetes and have hit a wall, so I'm posting this question in the hope of getting some help from those with more experience.

Following 조훈's book, I set up a learning environment on Google GCP with 3 VMs (1 master, 2 workers).

 

The studying had been going well, but I got stuck setting up the ingress-nginx controller service, so I'm asking here.

 

Symptoms

root@k8s-m:/home/rsa-key-20220321# kubectl apply -f /home/rsa-key-20220321/_Book_k8sInfra/ch3/3.3.2/ingress-nginx.yaml

namespace/ingress-nginx created

configmap/nginx-configuration created

configmap/tcp-services created

configmap/udp-services created

serviceaccount/nginx-ingress-serviceaccount created

deployment.apps/nginx-ingress-controller created

limitrange/ingress-nginx created

unable to recognize "/home/rsa-key-20220321/_Book_k8sInfra/ch3/3.3.2/ingress-nginx.yaml": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1beta1"

unable to recognize "/home/rsa-key-20220321/_Book_k8sInfra/ch3/3.3.2/ingress-nginx.yaml": no matches for kind "Role" in version "rbac.authorization.k8s.io/v1beta1"

unable to recognize "/home/rsa-key-20220321/_Book_k8sInfra/ch3/3.3.2/ingress-nginx.yaml": no matches for kind "RoleBinding" in version "rbac.authorization.k8s.io/v1beta1"

unable to recognize "/home/rsa-key-20220321/_Book_k8sInfra/ch3/3.3.2/ingress-nginx.yaml": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"

root@k8s-m:/home/rsa-key-20220321# kubectl get pods -n ingress-nginx

NAME                                        READY   STATUS             RESTARTS      AGE

nginx-ingress-controller-668959df88-8hmt6   0/1     CrashLoopBackOff   35 (6s ago)   101m

 

 

root@k8s-m:/home/rsa-key-20220321# kubectl get pod -n ingress-nginx

NAME                                        READY   STATUS             RESTARTS       AGE

nginx-ingress-controller-668959df88-8hmt6   0/1     CrashLoopBackOff   35 (25s ago)   101m

root@k8s-m:/home/rsa-key-20220321# kubectl describe pod nginx-ingress-controller-668959df88-8hmt6 -n ingress-nginx

Name:         nginx-ingress-controller-668959df88-8hmt6

Namespace:    ingress-nginx

Priority:     0

Node:         k8s-w3/10.178.0.5

Start Time:   Wed, 30 Mar 2022 06:07:05 +0000

Labels:       app.kubernetes.io/name=ingress-nginx

              app.kubernetes.io/part-of=ingress-nginx

              pod-template-hash=668959df88

Annotations:  cni.projectcalico.org/containerID: fab04986c5e06c07191e376ab04b5ebc7c66ba3a92e4ee393c6dfa01bedbb38d

              cni.projectcalico.org/podIP: 10.233.84.34/32

              cni.projectcalico.org/podIPs: 10.233.84.34/32

              kubernetes.io/limit-ranger: LimitRanger plugin set: cpu, memory request for container nginx-ingress-controller

              prometheus.io/port: 10254

              prometheus.io/scrape: true

Status:       Running

IP:           10.233.84.34

IPs:

  IP:           10.233.84.34

Controlled By:  ReplicaSet/nginx-ingress-controller-668959df88

Containers:

  nginx-ingress-controller:

    Container ID:  containerd://208fdba282a51fc6b5f3b5e2fbb0e660e0f99622547e1ce8ee63fe834b5e7571

    Image:         quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0

    Image ID:      quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:b312c91d0de688a21075078982b5e3a48b13b46eda4df743317d3059fc3ca0d9

    Ports:         80/TCP, 443/TCP

    Host Ports:    0/TCP, 0/TCP

    Args:

      /nginx-ingress-controller

      --configmap=$(POD_NAMESPACE)/nginx-configuration

      --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services

      --udp-services-configmap=$(POD_NAMESPACE)/udp-services

      --publish-service=$(POD_NAMESPACE)/ingress-nginx

      --annotations-prefix=nginx.ingress.kubernetes.io

    State:          Waiting

      Reason:       CrashLoopBackOff

    Last State:     Terminated

      Reason:       Error

      Exit Code:    1

      Started:      Wed, 30 Mar 2022 07:47:25 +0000

      Finished:     Wed, 30 Mar 2022 07:48:05 +0000

    Ready:          False

    Restart Count:  35

    Requests:

      cpu:      100m

      memory:   90Mi

    Liveness:   http-get http://:10254/healthz delay=10s timeout=10s period=10s #success=1 #failure=3

    Readiness:  http-get http://:10254/healthz delay=0s timeout=10s period=10s #success=1 #failure=3

    Environment:

      POD_NAME:       nginx-ingress-controller-668959df88-8hmt6 (v1:metadata.name)

      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6f8wg (ro)

Conditions:

  Type              Status

  Initialized       True

  Ready             False

  ContainersReady   False

  PodScheduled      True

Volumes:

  kube-api-access-6f8wg:

    Type:                    Projected (a volume that contains injected data from multiple sources)

    TokenExpirationSeconds:  3607

    ConfigMapName:           kube-root-ca.crt

    ConfigMapOptional:       <nil>

    DownwardAPI:             true

QoS Class:                   Burstable

Node-Selectors:              kubernetes.io/os=linux

Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s

                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s

Events:

  Type     Reason            Age                     From     Message

  ----     ------            ----                    ----     -------

  Normal   Started           46m (x22 over 101m)     kubelet  Started container nginx-ingress-controller

  Warning  DNSConfigForming  6m34s (x459 over 101m)  kubelet  Search Line limits were exceeded, some search paths have been omitted, the applied search line is: ingress-nginx.svc.cluster.local svc.cluster.local cluster.local default.svc.cluster.local asia-northeast3-a.c.master-plane-344801.internal c.master-plane-344801.internal

  Warning  Unhealthy         92s (x247 over 101m)    kubelet  Readiness probe failed: HTTP probe failed with statuscode: 500
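When the controller keeps restarting with exit code 1 and the readiness probe returns 500, the container's own log is usually more informative than `describe`. A hedged sketch of the usual next diagnostic step (pod name taken from the output above; this only runs against a live cluster):

```shell
# Show the log of the previously crashed container instance to see why it
# exited with code 1 before the probes ever had a chance to pass.
kubectl logs -n ingress-nginx nginx-ingress-controller-668959df88-8hmt6 --previous

# Note: the DNSConfigForming warning about search-line limits is common on
# GCP and is usually harmless by itself; the probe failure is the real issue.
```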

 

root@k8s-m:/home/rsa-key-20220321# kubectl apply -f /home/rsa-key-20220321/_Book_k8sInfra/ch3/3.3.2/ingress-config.yaml

error: error validating "/home/rsa-key-20220321/_Book_k8sInfra/ch3/3.3.2/ingress-config.yaml": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[0].backend.service.port): invalid type for io.k8s.api.networking.v1.ServiceBackendPort: got "string", expected "map", ValidationError(Ingress.spec.rules[0].http.paths[1].backend.service.port): invalid type for io.k8s.api.networking.v1.ServiceBackendPort: got "string", expected "map", ValidationError(Ingress.spec.rules[0].http.paths[2].backend.service.port): invalid type for io.k8s.api.networking.v1.ServiceBackendPort: got "integer", expected "map"]; if you choose to ignore these errors, turn validation off with --validate=false
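The validation error says `backend.service.port` must be an object ("map") in `networking.k8s.io/v1`, while the manifest still uses the older scalar form. A hedged sketch of the required shape (the service name below is a placeholder, not taken from the book's file):

```yaml
# networking.k8s.io/v1 Ingress backend: port is a map containing either
# `number` or `name`, never a bare string/integer.
backend:
  service:
    name: my-service        # placeholder; use the name from ingress-config.yaml
    port:
      number: 80            # was e.g. `servicePort: 80` (scalar) in v1beta1
```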

 

 

I'm a beginner and really struggling, so any help would be much appreciated.

Thanks in advance.


2 Answers


최의상 (asker)

I checked whether the NGINX ingress controller pod had been deployed, and it isn't coming up.

root@k8s-m:/home/rsa-key-20220321/_Book_k8sInfra/ch3/3.3.2# kubectl get pods -n ingress-nginx

NAME                                        READY   STATUS    RESTARTS   AGE

nginx-ingress-controller-668959df88-5s9xd   0/1     Pending   0          3m6s

 

Checked with describe:

 

root@k8s-m:/home/rsa-key-20220321/_Book_k8sInfra/ch3/3.3.2# kubectl describe pod nginx-ingress-controller-668959df88-5s9xd -n ingress-nginx

Name:           nginx-ingress-controller-668959df88-5s9xd

Namespace:      ingress-nginx

Priority:       0

Node:           <none>

Labels:         app.kubernetes.io/name=ingress-nginx

                app.kubernetes.io/part-of=ingress-nginx

                pod-template-hash=668959df88

Annotations:    kubernetes.io/limit-ranger: LimitRanger plugin set: cpu, memory request for container nginx-ingress-controller

                prometheus.io/port: 10254

                prometheus.io/scrape: true

Status:         Pending

IP:

IPs:            <none>

Controlled By:  ReplicaSet/nginx-ingress-controller-668959df88

Containers:

  nginx-ingress-controller:

    Image:       quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0

    Ports:       80/TCP, 443/TCP

    Host Ports:  0/TCP, 0/TCP

    Args:

      /nginx-ingress-controller

      --configmap=$(POD_NAMESPACE)/nginx-configuration

      --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services

      --udp-services-configmap=$(POD_NAMESPACE)/udp-services

      --publish-service=$(POD_NAMESPACE)/ingress-nginx

      --annotations-prefix=nginx.ingress.kubernetes.io

    Requests:

      cpu:      100m

      memory:   90Mi

    Liveness:   http-get http://:10254/healthz delay=10s timeout=10s period=10s #success=1 #failure=3

    Readiness:  http-get http://:10254/healthz delay=0s timeout=10s period=10s #success=1 #failure=3

    Environment:

      POD_NAME:       nginx-ingress-controller-668959df88-5s9xd (v1:metadata.name)

      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)

    Mounts:

      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kvcht (ro)

Conditions:

  Type           Status

  PodScheduled   False

Volumes:

  kube-api-access-kvcht:

    Type:                    Projected (a volume that contains injected data from multiple sources)

    TokenExpirationSeconds:  3607

    ConfigMapName:           kube-root-ca.crt

    ConfigMapOptional:       <nil>

    DownwardAPI:             true

QoS Class:                   Burstable

Node-Selectors:              kubernetes.io/os=linux

Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s

                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s

Events:

  Type     Reason            Age                    From               Message

  ----     ------            ----                   ----               -------

  Warning  FailedScheduling  4m54s                  default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) had taint {node.kubernetes.io/disk-pressure: }, that the pod didn't tolerate.

  Warning  FailedScheduling  2m43s (x1 over 3m43s)  default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) had taint {node.kubernetes.io/disk-pressure: }, that the pod didn't tolerate.
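The events show the pod is unschedulable: the master's taint is expected, but both workers carry `node.kubernetes.io/disk-pressure`, which kubelet adds automatically when free disk drops below its eviction threshold. A hedged sketch of how one might confirm and clear it (cluster commands, run on the master and on each worker):

```shell
# Confirm which nodes are tainted and with what.
kubectl describe nodes | grep -A 3 'Taints:'

# On each worker, check root-filesystem usage; freeing space lets kubelet
# remove the disk-pressure taint on its own once usage drops back down.
df -h /

# One common way to reclaim space on a worker (assumption: the node uses
# containerd with crictl installed).
crictl rmi --prune
```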

 

 

 

 

 


최의상 (asker)

First, in ingress-nginx.yaml I changed every occurrence of v1beta1 (e.g. "rbac.authorization.k8s.io/v1beta1") to v1.

As a result, the errors below no longer occur:

 

unable to recognize "ingress-nginx.yaml": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1beta1"

unable to recognize "ingress-nginx.yaml": no matches for kind "Role" in version "rbac.authorization.k8s.io/v1beta1"

unable to recognize "ingress-nginx.yaml": no matches for kind "RoleBinding" in version "rbac.authorization.k8s.io/v1beta1"

 

unable to recognize "ingress-nginx.yaml": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
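That bulk replacement is needed because the `rbac.authorization.k8s.io/v1beta1` API group was removed in Kubernetes 1.22, leaving only `v1`. It can be done in one pass with sed; a sketch, demonstrated here on a throwaway sample file rather than the book's actual manifest:

```shell
# Create a minimal sample manifest using the removed API version.
cat > /tmp/sample-rbac.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
EOF

# Replace every v1beta1 rbac apiVersion with v1 in place (make a backup
# copy first when running this against the real ingress-nginx.yaml).
sed -i 's#rbac.authorization.k8s.io/v1beta1#rbac.authorization.k8s.io/v1#g' /tmp/sample-rbac.yaml

grep apiVersion /tmp/sample-rbac.yaml
# → apiVersion: rbac.authorization.k8s.io/v1
```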

 

 
