Hello, I'm a student who has been working hard through the course.
Since installing and testing in my local environment is difficult because of security restrictions at my company, I'm trying to set up the master server and worker nodes on VMs in Google Cloud Platform.
While following the procedure you showed, running kubeadm init fails with the error below.
I've tried it several times, but the error keeps occurring. If there is anything that might help, please let me know.
PS) Because of our internal security I suspect it may be a network problem, but I have no idea how to deal with it.
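For reference, this is the kind of firewall check I am thinking of trying on the GCP side (the rule name, network, and port list below are only my guesses based on the commonly listed control-plane ports, not something from the course):

gcloud compute firewall-rules list
# allow the usual control-plane ports between my VMs (placeholder rule name and source range)
gcloud compute firewall-rules create allow-k8s-control-plane \
    --network=default \
    --allow=tcp:6443,tcp:2379-2380,tcp:10250-10252 \
    --source-ranges=10.0.0.0/8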
Thank you.
========================================================================
Hit:2 http://asia-northeast3.gce.archive.ubuntu.com/ubuntu bionic InRelease
Hit:3 http://asia-northeast3.gce.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:4 http://asia-northeast3.gce.archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Get:5 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Fetched 88.7 kB in 1s (72.9 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
curl is already the newest version (7.58.0-2ubuntu3.14).
apt-transport-https is already the newest version (1.6.14).
The following package was automatically installed and is no longer required:
  libnuma1
Use 'sudo apt autoremove' to remove it.
0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
OK
deb https://apt.kubernetes.io/ kubernetes-xenial main
Hit:1 http://asia-northeast3.gce.archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://asia-northeast3.gce.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:3 http://asia-northeast3.gce.archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:4 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
root@kube-master-1:~#
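If it helps, these are the follow-up checks I plan to run on the master VM, based on the commands suggested in the error output above (the swap and cgroup-driver checks at the end are only my own guesses about common causes, not something from the course):

sudo systemctl status kubelet
sudo journalctl -xeu kubelet
sudo docker ps -a | grep kube | grep -v pause
# my guesses: the kubelet refuses to start while swap is enabled, and a Docker/kubelet cgroup-driver mismatch can also cause this
sudo swapoff -a
sudo docker info | grep -i cgroup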