[preflight] Running pre-flight checks.
[preflight] Some fatal errors occurred:
[ERROR CRI]: unable to check if the container runtime at "/var/run/dockershim.sock" is running: fork/exec /usr/bin/crictl -r /var/run/dockershim.sock info: no such file or directory
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
I'm on CentOS 7, building the cluster with VMs. I ran `kubeadm join` on the machine I'm trying to turn into a worker node and got the error above (the log was copied from the worker node). I tried again with the `--ignore-preflight-errors` option, but when I run `kubectl get nodes` on the master node the worker doesn't show up, so I think I need to solve this error and am asking here.
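One note on the CRI preflight failure, separate from the cgroup issue below: kubeadm shells out to `crictl`, and the "no such file or directory" refers to `/usr/bin/crictl` itself being absent, not to the socket. A minimal sketch of what to check (the check name in the skip option is an assumption based on the `[ERROR CRI]` tag):

```shell
# Reproduce what the preflight check does: it execs crictl against the
# dockershim socket. "no such file or directory" means crictl is missing.
if [ -x /usr/bin/crictl ]; then
    /usr/bin/crictl -r /var/run/dockershim.sock info
else
    echo "crictl not installed"
fi

# Either install crictl from the kubernetes-sigs/cri-tools releases
# (pick the release matching your cluster version), or, since Docker is
# the runtime anyway, skip only this one check instead of ignoring all
# preflight errors:
#   kubeadm join <master>:6443 --token ... --ignore-preflight-errors=CRI
```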
When I run `kubelet` directly, the error is:
I0625 11:25:10.209563 4571 feature_gate.go:226] feature gates: &{{} map[]}
W0625 11:25:10.214818 4571 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
I0625 11:25:10.235874 4571 server.go:376] Version: v1.10.0
I0625 11:25:10.235929 4571 feature_gate.go:226] feature gates: &{{} map[]}
I0625 11:25:10.236103 4571 plugins.go:89] No cloud provider specified.
W0625 11:25:10.236145 4571 server.go:517] standalone mode, no API client
W0625 11:25:10.266236 4571 server.go:433] No api server defined - no events will be sent to API server.
I0625 11:25:10.266260 4571 server.go:613] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
I0625 11:25:10.266507 4571 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
I0625 11:25:10.266519 4571 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true}
I0625 11:25:10.266628 4571 container_manager_linux.go:266] Creating device plugin manager: true
I0625 11:25:10.266672 4571 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0625 11:25:10.266735 4571 state_mem.go:87] [cpumanager] updated default cpuset: ""
I0625 11:25:10.266749 4571 state_mem.go:95] [cpumanager] updated cpuset assignments: "map[]"
W0625 11:25:10.269731 4571 kubelet_network.go:139] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0625 11:25:10.269757 4571 kubelet.go:556] Hairpin mode set to "hairpin-veth"
I0625 11:25:10.271450 4571 client.go:75] Connecting to docker on unix:///var/run/docker.sock
I0625 11:25:10.271468 4571 client.go:104] Start docker client with request timeout=2m0s
W0625 11:25:10.272624 4571 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
I0625 11:25:10.276863 4571 docker_service.go:244] Docker cri networking managed by kubernetes.io/no-op
I0625 11:25:10.287597 4571 docker_service.go:249] Docker Info: &{ID:2WIO:IMHB:ODVY:WZQO:O2HO:LBN5:WS6G:HGBI:Q2VE:LKYE:PD4P:UIOZ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:16 OomKillDisable:true NGoroutines:46 SystemTime:2020-06-25T11:25:10.279440086+09:00 LoggingDriver:journald CgroupDriver:systemd NEventsListener:0 KernelVersion:3.10.0-1062.18.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc4206e09a0 NCPU:2 MemTotal:2405302272 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:k8s-node1 Labels:[] ExperimentalBuild:false ServerVersion:1.13.1 ClusterStore: ClusterAdvertise: Runtimes:map[docker-runc:{Path:/usr/libexec/docker/docker-runc-current Args:[]} runc:{Path:docker-runc Args:[]}] DefaultRuntime:docker-runc Swarm:{NodeID:m2c9t50qh7aa0yopj7inyr4zu NodeAddr:192.168.0.54 LocalNodeState:pending ControlAvailable:false Error: RemoteManagers:[{NodeID:kjtuefhbkp88pi4ln1h6aog8s Addr:192.168.0.53:2377}] Nodes:0 Managers:0 Cluster:0xc4206d08c0} LiveRestoreEnabled:false Isolation: InitBinary:/usr/libexec/docker/docker-init-current ContainerdCommit:{ID: Expected:aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1} RuncCommit:{ID:66aedde759f33c190954815fb765eedc1d782dd9 Expected:9df8b306d01f59d3a8029be411de015b7304dd8f} InitCommit:{ID:fec3683b971d9c3ef73f284f176672c44b448662 Expected:949e6facb77383876aeff8a6944dde66b3089574} SecurityOptions:[name=seccomp,profile=/etc/docker/seccomp.json]}
F0625 11:25:10.287744 4571 server.go:233] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
That error led me to:
1. Add
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
2. Confirm it with
systemctl show --property=Environment kubelet | cat
But even after that the error still comes up. I changed the other places to systemd too, yet it still reports the drivers as different, and since I'm hitting both the CRI error and the kubelet error I'm asking here.
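For the driver mismatch itself, editing only the kubelet side means fighting the tide; the usual direction is to move Docker onto systemd as well, so both sides agree. A sketch (overwriting /etc/docker/daemon.json is an assumption; merge the key in if you already have one):

```shell
# 1) Tell Docker to use the systemd cgroup driver:
cat <<'EOF' >/etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# 2) Restart Docker, then reload unit files so the kubelet drop-in you
#    edited is actually re-read, and restart kubelet:
systemctl restart docker
systemctl daemon-reload
systemctl restart kubelet

# 3) Verify both now report the same driver:
docker info 2>/dev/null | grep -i cgroup
```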
5 Answers
I figured the error happened because even when I set kubelet's cgroup driver to systemd, it kept switching back to cgroupfs, which is why I had edited /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
For now I went the other way and matched both Docker and Kubernetes to cgroupfs.
Then I ran kubeadm init --pod-network-cidr=20.96.0.0/12 (on a VM with bridged networking), and got:
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
If I just run `kubelet` from the CLI, the log shows:
Failed to get system container stats for "/user.slice/user-0.slice/session-3180.scope": failed to get cgroup stats for "/user.slice/user-0.slice/session-3180.scope": failed to get container info for "/user.slice/user-0.slice/session-3180.scope": unknown container "/user.slice/user-0.slice/session-3180.scope"
I couldn't find anything about this on Google either, so I'm asking whether this is also part of the problem.
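About that /user.slice message: running `kubelet` by hand puts it in your SSH session's cgroup, which the stats code can't read, so this error is often just an artifact of launching it from the shell rather than via systemd. Before treating it as a real fault, check the service itself:

```shell
# Look at the systemd-managed service, not the hand-launched copy:
systemctl status kubelet --no-pager
journalctl -u kubelet --no-pager -n 50

# If the same message shows up in the service logs too, one commonly cited
# workaround (flag availability depends on your kubelet version) is to pin
# kubelet and the runtime into system.slice via the drop-in, e.g.:
#   Environment="KUBELET_EXTRA_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
```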
I just spun up CentOS 7 on Google Cloud and ran through the install, and it all worked fine for me. For version info, the command output below should help:
[root@instance-1 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:45:16Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
[root@instance-1 ~]# uname -a
Linux instance-1 3.10.0-1127.10.1.el7.x86_64 #1 SMP Wed Jun 3 14:28:03 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
[root@instance-1 ~]# docker -v
Docker version 19.03.12, build 48a66213fe
[root@instance-1 ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
instance-1   Ready    master   14m     v1.18.5
instance-2   Ready    <none>   9m37s   v1.18.5
The pages I followed were:
Docker install:
https://docs.docker.com/engine/install/centos/
kubeadm install and join:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
You probably already know this, but the kubeadm install page does mention some CentOS-specific issues; for me everything went through without any of them. As for the worker node not showing up: a worker may not appear if it has the same hostname as the master node, so please check that.
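The hostname point above can be checked quickly on both nodes; a sketch (the new worker name is just an example):

```shell
# Run on both master and worker; the two static hostnames must differ:
hostnamectl status | grep -i "static hostname"

# If they collide, rename the worker, then redo kubeadm join:
hostnamectl set-hostname k8s-worker1
```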
It's CentOS 7.8; master and worker are on the same version.

           master      worker
os         centos7.8   centos7.8
docker     19.03.12    19.03.12
minikube   1.9.0       1.11
kubectl    1.18.4      1.18.4
kubelet    1.18.4      1.18.4

Those are the versions I'm using. I'm not sure exactly which CentOS detail you're asking about, so I'm posting this for now.
When I studied this before I used minikube; could that also be part of the problem?
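On the minikube question: if minikube was ever run with the `none` driver on these hosts, it configures kubelet on the host directly, and leftover state can conflict with kubeadm. A hedged way to check and clear it (destructive; assumes nothing else on the node is minikube-managed):

```shell
# See whether minikube thinks it owns anything on this host:
minikube status || true

# If so, tear its state down before a fresh kubeadm init/join:
minikube delete

# kubeadm's own reset also clears /etc/kubernetes and kubelet state:
kubeadm reset
```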
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Even when I edit this config file, the kubelet setting gets reset again, which seems to be the problem: even if I change cgroupfs to systemd, it goes back to cgroupfs and the kubelet error comes back.
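The "reverting" behaviour has a likely explanation on the v1.18 tooling in the version table: kubeadm-managed kubelets read the cgroup driver from /var/lib/kubelet/config.yaml (written at init/join time), plus flags from /var/lib/kubelet/kubeadm-flags.env, so an edit to the systemd drop-in alone can appear to be overridden. Worth checking (paths are the kubeadm defaults):

```shell
# Where kubeadm actually records the driver on v1.11+:
grep -i cgroupDriver /var/lib/kubelet/config.yaml
cat /var/lib/kubelet/kubeadm-flags.env

# After editing either file (or the systemd drop-in), reload and restart
# so the change is actually picked up instead of the old environment:
systemctl daemon-reload
systemctl restart kubelet
```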
Since you posted this I've also been looking for other possible solutions; I'll keep searching and follow up. In the meantime, could you tell me specifically which CentOS image each of your nodes is running?