Posts
Q&A
CRI error when building a Kubernetes cluster
I judged that the error happens because, even when I set the kubelet cgroup driver to systemd, it switches back to cgroupfs when the kubelet runs, so I edited the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file. For now I matched Docker and Kubernetes both on cgroupfs. Then, on a VM set up with a bridged network, I ran kubeadm init --pod-network-cidr=20.96.0.0/12 and got this:

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

If I run just kubelet on the CLI, the log shows:

Failed to get system container stats for "/user.slice/user-0.slice/session-3180.scope": failed to get cgroup stats for "/user.slice/user-0.slice/session-3180.scope": failed to get container info for "/user.slice/user-0.slice/session-3180.scope": unknown container "/user.slice/user-0.slice/session-3180.scope"

I'm asking here because I can't tell whether this is also part of the problem, and Googling turned up nothing.
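For reference, the direction usually recommended on a systemd-based distro like CentOS is the opposite of pinning everything to cgroupfs: switch Docker to the systemd cgroup driver and let kubeadm match the kubelet to it (kubeadm init detects Docker's driver when it writes the kubelet flags). A minimal sketch, assuming the default Docker paths:

# /etc/docker/daemon.json -- switch Docker to the systemd cgroup driver
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

# apply and verify
systemctl daemon-reload
systemctl restart docker
docker info --format '{{.CgroupDriver}}'   # should now print: systemd

After that, a fresh kubeadm init should write a matching --cgroup-driver for the kubelet on its own.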
Q&A
CRI error when building a Kubernetes cluster
It's CentOS 7.8; the master and worker nodes are on the same version. These are the versions in use:

            master       worker
OS          CentOS 7.8   CentOS 7.8
docker      19.03.12     19.03.12
minikube    1.9.0        1.11
kubectl     1.18.4       1.18.4
kubelet     1.18.4       1.18.4

I'm not sure which CentOS detail you were asking about, so I'm posting these for now. When I studied this before I used minikube, and I'm wondering whether that could also be a source of the problem. Even after editing the config file with vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, the kubelet settings revert, which seems to be the problem: when I change cgroupfs to systemd, it switches back to cgroupfs and the kubelet errors out.
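To pin down which side is mismatched, and where the kubelet actually gets its flags from, checks along these lines can help. Note that 10-kubeadm.conf only sources /var/lib/kubelet/kubeadm-flags.env, and that env file is regenerated by kubeadm (and by minikube), which would explain why hand edits appear to revert:

docker info --format '{{.CgroupDriver}}'   # the driver Docker is using
cat /var/lib/kubelet/kubeadm-flags.env     # the flags last written for the kubelet
ps -ef | grep '[k]ubelet'                  # the flags the running kubelet was actually started with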
Q&A
Hello, when I run kubelet the cgroup driver shows as cgroupfs
Thank you for the answer. Running systemctl status kubelet gives the output below, and the 10-kubeadm.conf file was indeed at the directory location you mentioned. Thanks; it looks like I'll have to wipe everything and redo it step by step. Really appreciate the answer!

[root@k8shost kubelet.service.d]# systemctl status kubelet
* kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           `-10-kubeadm.conf
   Active: active (running) since Mon 2020-06-29 18:17:48 KST; 5ms ago
     Docs: http://kubernetes.io/docs/
 Main PID: 14214 (kubelet)
    Tasks: 1
   Memory: 364.0K
   CGroup: /system.slice/kubelet.service
           `-14214 /var/lib/minikube/binaries/v1.18.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-...
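The CGroup line above is the telling part: the kubelet binary is running out of /var/lib/minikube/binaries/, and it is being passed --cgroup-driver=cgroupfs, so this service was set up by minikube --driver=none rather than by the kubeadm packages. To see the full unit plus every drop-in that contributes flags, a quick check like this works:

systemctl cat kubelet                        # unit file plus all drop-ins, in order
ls /etc/systemd/system/kubelet.service.d/    # drop-ins added by minikube or kubeadm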
Q&A
Hello, when I run kubelet the cgroup driver shows as cgroupfs
Thank you for the answer! I followed the steps you posted on the blog in order and it worked fine. Ah, the guide I used was the "my PC + VirtualBox (Network: Bridge)" one. To try creating the worker and master nodes, I ran kubeadm reset and then kubeadm init --pod-network-cidr=20.96.0.0/12 again, and got the log below. It looked like a kubelet problem, and when I checked, the cgroup driver settings were different. I matched them and retried, but got the same error, which is why I'm asking. After changing to systemd in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, if I then run minikube start --driver=none and check again, it seems to switch back to cgroupfs.
---------------------------------------------------------------------------------
[root@k8shost etcd]# kubeadm init --pod-network-cidr=20.96.0.0/12
W0629 14:31:27.403504   13298 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8shost kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.53]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8shost localhost] and IPs [192.168.0.53 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8shost localhost] and IPs [192.168.0.53 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0629 14:31:32.543009   13298 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0629 14:31:32.544957   13298 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
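Since minikube start --driver=none keeps rewriting the kubelet flags back to cgroupfs, one way out is to remove the minikube-managed setup entirely before retrying kubeadm. A hedged sketch of that clean-up sequence (destructive: it wipes the existing cluster state):

minikube delete                                # remove the minikube-managed control plane and its kubelet flags
kubeadm reset -f                               # tear down any half-initialized kubeadm state
systemctl daemon-reload
systemctl restart docker
kubeadm init --pod-network-cidr=20.96.0.0/12   # retry with a consistent cgroup driver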