Posts
Q&A
2024.11.16
Lecture 9.6: source fix request and error inquiry
Hello, and my apologies. The fix is complete; in the process of upgrading to v1.30, something else got copied over incorrectly.

As for the error you quoted, nfs_exporter.sh needs to have been run so that NFS is configured. Could you check whether that was done? If it is configured, you will see the following:

root@cp-k8s:~/_Lecture_k8s_learning.kit/ch9/9.6# exportfs
/nfs_shared/dynamic-vol 192.168.1.0/24

To set it up, go to ch5/5.6 and run ./nfs_exporter.sh dynamic-vol:

root@cp-k8s:~/_Lecture_k8s_learning.kit/ch5/5.6# ./nfs_exporter.sh dynamic-vol
Check created configurations
=== cat /etc/exports:
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
#
/nfs_shared/dynamic-vol 192.168.1.0/24(rw,sync,no_root_squash)
--- ls /nfs_shared:
dynamic-vol

I believe this is the cause, but if the above had already been run, please let me know and I will take another look.
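For reference, a minimal sketch of what nfs_exporter.sh effectively sets up, in case you want to verify or repair the export by hand. The directory path and network range are taken from the output above; adjust them to your environment.

# create the directory that will back the dynamic volumes
mkdir -p /nfs_shared/dynamic-vol

# export it to the lab network; the options match the script's output above
echo '/nfs_shared/dynamic-vol 192.168.1.0/24(rw,sync,no_root_squash)' >> /etc/exports

# apply /etc/exports and confirm the export is active
exportfs -ra
exportfs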
- 0
- 2
- 9
Q&A
2024.11.13
Lecture 8.3: set-ctx-pod-admin.sh fix request
Ah... I missed this spot while fixing dev1 and dev2. It has been updated now; sorry for the trouble. https://github.com/sysnet4admin/_Lecture_k8s_learning.kit/commit/306eafb281ea4b57b2e1f8e73bbb551e53ba6207 (image attached)
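For anyone curious, a set-ctx-*.sh script boils down to kubectl context wiring; a hedged sketch follows (the context, cluster, user, and namespace names here are illustrative, not necessarily the exact values in the fixed script):

# bind a context named pod-admin to the cluster and user it should act as
kubectl config set-context pod-admin --cluster=kubernetes --user=pod-admin --namespace=default

# switch to it and confirm
kubectl config use-context pod-admin
kubectl config current-context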
- 0
- 3
- 22
Q&A
2024.11.13
Lecture 8.6: bash issue in the sysnet4admin/chk-info image
That part has also been fixed. Once again, my apologies for the trouble.
- 0
- 3
- 20
Q&A
2024.11.13
Lecture 8.3: set-ctx-pod-admin.sh fix request
Ah... I will check and update it. Thank you for reporting this.
- 0
- 3
- 22
Q&A
2024.11.13
VirtualBox is finally supported on Mac, too.
Hello. I did look into this, but personal matters are keeping me busy at the moment, and I also have a talk in India coming up, so I do not expect to get to an update in the near future. Once things settle down, I should be able to plan the update. Please keep that in mind for your own schedule. Thank you.
- 0
- 2
- 35
Q&A
2024.11.13
Lecture 8.6: bash issue in the sysnet4admin/chk-info image
Hello, and my apologies. I will move toward including bash in all of the images. As you pointed out, it does work with sh, but I expect that would confuse most people, so I will add bash everywhere. Once again, I am sorry for the trouble.
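Until the rebuilt images land, the workaround mentioned above is to fall back to sh, which BusyBox/Alpine-based images always ship. A quick sketch (the pod name chk-info is illustrative):

# exec with sh instead of bash
kubectl exec -it chk-info -- sh

# check from outside whether the container has bash at all
kubectl exec chk-info -- sh -c 'command -v bash || echo "bash not found"'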
- 0
- 3
- 20
Q&A
2024.11.12
Lecture 7.5: tardy-nginx image problem
Here is the exercise re-run after the fix.
- 0
- 3
- 3.2K
Q&A
2024.11.12
Lecture 7.5: tardy-nginx image problem
root@cp-k8s:~/_Lecture_k8s_learning.kit/ch7/7.5# k logs readiness-exec
env: can't execute 'bash': No such file or directory
root@cp-k8s:~/_Lecture_k8s_learning.kit/ch7/7.5# k get po -w
NAME             READY   STATUS              RESTARTS   AGE
readiness-exec   0/1     ContainerCreating   0          3s
readiness-exec   0/1     Running             0          11s
readiness-exec   1/1     Running             0          71s
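If you hit something similar, two quick ways to tell whether it is the container's own command or a readiness probe that is failing (pod name taken from the log above):

# container stdout/stderr: errors here come from the container's command itself
kubectl logs readiness-exec

# probe failures and restarts show up in the Events section here
kubectl describe pod readiness-exec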
- 0
- 3
- 3.2K
Q&A
2024.11.12
Lecture 7.5: tardy-nginx image problem
Hello... ah. I switched the image to alpine-slim to make the exercise run faster, and since that image has no bash, it failed to run. I have built and pushed a new image; apologies for the trouble. Please try the exercise again.
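Before re-running the manifest, you can confirm the rebuilt image now ships bash with a throwaway pod; a hedged sketch (the image name is inferred from the thread title and may differ from the exact tag the lecture uses):

# run a one-off pod from the image and look for bash; the pod is removed on exit
kubectl run bash-check --rm -it --restart=Never \
  --image=sysnet4admin/tardy-nginx -- sh -c 'command -v bash && echo "bash is present"'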
- 0
- 3
- 3.2K
Q&A
2024.11.11
ch1: error when running controlplane_node.sh
[ Host ] (image attached)

[ cp-k8s ]
root@cp-k8s:~# cd _Lecture_k8s_learning.kit.git/ch1/1.5
root@cp-k8s:~/_Lecture_k8s_learning.kit.git/ch1/1.5# ll
total 44
drwxr-xr-x 5 root root 4096 Nov 11 06:30 ./
drwxr-xr-x 3 root root 4096 Nov 11 06:30 ../
-rw-r--r-- 1 root root 1656 Nov 11 06:30 .cmd
-rwxr-xr-x 1 root root 1822 Nov 11 06:30 controlplane_node.sh*
-rwxr-xr-x 1 root root 1456 Nov 11 06:30 k8s_env_build.sh*
-rwxr-xr-x 1 root root 1045 Nov 11 06:30 k8s_pkg_cfg.sh*
drwxr-xr-x 2 root root 4096 Nov 11 06:30 tabby-v1.0.207/
-rw-r--r-- 1 root root 2365 Nov 11 06:30 Vagrantfile
drwxr-xr-x 2 root root 4096 Nov 11 06:30 vagrant-v2.4.1/
drwxr-xr-x 2 root root 4096 Nov 11 06:30 virtualbox-v7.0.18/
-rwxr-xr-x 1 root root 177 Nov 11 06:30 worker_nodes.sh*
root@cp-k8s:~/_Lecture_k8s_learning.kit.git/ch1/1.5# ./controlplane_node.sh
I1111 06:31:51.974839 4697 version.go:256] remote version is much newer: v1.31.2; falling back to: stable-1.30
[init] Using Kubernetes version: v1.30.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W1111 06:31:53.165929 4697 checks.go:844] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [cp-k8s kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [cp-k8s localhost] and IPs [192.168.1.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [cp-k8s localhost] and IPs [192.168.1.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.280027ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 7.501705997s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node cp-k8s as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node cp-k8s as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 123456.1234567890123456
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.10:6443 --token 123456.1234567890123456 \
        --discovery-token-ca-cert-hash sha256:695121635b0c28de725d5f4c5436e8d34740b0532161def178bd06329ecf6261
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
Cloning into '_Lecture_k8s_starter.kit'...
remote: Enumerating objects: 872, done.
remote: Counting objects: 100% (96/96), done.
remote: Compressing objects: 100% (74/74), done.
remote: Total 872 (delta 40), reused 56 (delta 19), pack-reused 776 (from 1)
Receiving objects: 100% (872/872), 164.93 KiB | 5.32 MiB/s, done.
Resolving deltas: 100% (387/387), done.
mv: cannot stat '/home/vagrant/_Lecture_k8s_starter.kit': No such file or directory
find: ‘/root/_Lecture_k8s_starter.kit’: No such file or directory
Cloning into '/tmp/update-kube-cert'...
remote: Enumerating objects: 166, done.
remote: Counting objects: 100% (54/54), done.
remote: Compressing objects: 100% (45/45), done.
remote: Total 166 (delta 18), reused 20 (delta 8), pack-reused 112 (from 1)
Receiving objects: 100% (166/166), 63.56 KiB | 4.54 MiB/s, done.
Resolving deltas: 100% (81/81), done.
CERTIFICATE                                       EXPIRES
/etc/kubernetes/controller-manager.config         Nov 10 21:33:56 2025 GMT
/etc/kubernetes/scheduler.config                  Nov 10 21:33:56 2025 GMT
/etc/kubernetes/admin.config                      Nov 10 21:33:56 2025 GMT
/etc/kubernetes/super-admin.config                Nov 10 21:33:56 2025 GMT
/etc/kubernetes/pki/ca.crt                        Nov  8 21:33:55 2034 GMT
/etc/kubernetes/pki/apiserver.crt                 Nov 10 21:33:55 2025 GMT
/etc/kubernetes/pki/apiserver-kubelet-client.crt  Nov 10 21:33:55 2025 GMT
/etc/kubernetes/pki/front-proxy-ca.crt            Nov  8 21:33:55 2034 GMT
/etc/kubernetes/pki/front-proxy-client.crt        Nov 10 21:33:55 2025 GMT
/etc/kubernetes/pki/etcd/ca.crt                   Nov  8 21:33:55 2034 GMT
/etc/kubernetes/pki/etcd/server.crt               Nov 10 21:33:55 2025 GMT
/etc/kubernetes/pki/etcd/peer.crt                 Nov 10 21:33:55 2025 GMT
/etc/kubernetes/pki/etcd/healthcheck-client.crt   Nov 10 21:33:55 2025 GMT
/etc/kubernetes/pki/apiserver-etcd-client.crt     Nov 10 21:33:55 2025 GMT
[2024-11-11T06:34:09.05+0900][INFO] backup /etc/kubernetes to /etc/kubernetes.old-20241111
[2024-11-11T06:34:09.06+0900][INFO] updating...
[2024-11-11T06:34:09.07+0900][INFO] updated /etc/kubernetes/pki/etcd/server.conf
[2024-11-11T06:34:09.08+0900][INFO] updated /etc/kubernetes/pki/etcd/peer.conf
[2024-11-11T06:34:09.09+0900][INFO] updated /etc/kubernetes/pki/etcd/healthcheck-client.conf
[2024-11-11T06:34:09.11+0900][INFO] updated /etc/kubernetes/pki/apiserver-etcd-client.conf
[2024-11-11T06:34:09.25+0900][INFO] restarted etcd with containerd
[2024-11-11T06:34:09.27+0900][INFO] updated /etc/kubernetes/pki/apiserver.crt
[2024-11-11T06:34:09.28+0900][INFO] updated /etc/kubernetes/pki/apiserver-kubelet-client.crt
[2024-11-11T06:34:09.29+0900][INFO] updated /etc/kubernetes/controller-manager.conf
[2024-11-11T06:34:09.31+0900][INFO] updated /etc/kubernetes/scheduler.conf
[2024-11-11T06:34:09.32+0900][INFO] updated /etc/kubernetes/admin.conf
[2024-11-11T06:34:09.32+0900][INFO] backup /root/.kube/config to /root/.kube/config.old-20241111
[2024-11-11T06:34:09.32+0900][INFO] copy the admin.conf to /root/.kube/config
[2024-11-11T06:34:09.32+0900][INFO] does not need to update kubelet.conf
[2024-11-11T06:34:09.34+0900][INFO] updated /etc/kubernetes/super-admin.conf
[2024-11-11T06:34:09.35+0900][INFO] updated /etc/kubernetes/pki/front-proxy-client.crt
[2024-11-11T06:34:09.52+0900][INFO] restarted apiserver with containerd
[2024-11-11T06:34:09.61+0900][INFO] restarted controller-manager with containerd
[2024-11-11T06:34:09.74+0900][INFO] restarted scheduler with containerd
[2024-11-11T06:34:09.84+0900][INFO] restarted kubelet
[2024-11-11T06:34:09.85+0900][INFO] done!!!
CERTIFICATE                                       EXPIRES
/etc/kubernetes/controller-manager.config         Nov  8 21:34:09 2034 GMT
/etc/kubernetes/scheduler.config                  Nov  8 21:34:09 2034 GMT
/etc/kubernetes/admin.config                      Nov  8 21:34:09 2034 GMT
/etc/kubernetes/super-admin.config                Nov  8 21:34:09 2034 GMT
/etc/kubernetes/pki/ca.crt                        Nov  8 21:33:55 2034 GMT
/etc/kubernetes/pki/apiserver.crt                 Nov  8 21:34:09 2034 GMT
/etc/kubernetes/pki/apiserver-kubelet-client.crt  Nov  8 21:34:09 2034 GMT
/etc/kubernetes/pki/front-proxy-ca.crt            Nov  8 21:33:55 2034 GMT
/etc/kubernetes/pki/front-proxy-client.crt        Nov  8 21:34:09 2034 GMT
/etc/kubernetes/pki/etcd/ca.crt                   Nov  8 21:33:55 2034 GMT
/etc/kubernetes/pki/etcd/server.crt               Nov  8 21:34:09 2034 GMT
/etc/kubernetes/pki/etcd/peer.crt                 Nov  8 21:34:09 2034 GMT
/etc/kubernetes/pki/etcd/healthcheck-client.crt   Nov  8 21:34:09 2034 GMT
/etc/kubernetes/pki/apiserver-etcd-client.crt     Nov  8 21:34:09 2034 GMT
Wait 30 seconds for restarting the Control-Plane Node...
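Since the tail of the log prints the certificate expirations twice (before and after the update-kube-cert run), one way to re-verify the renewed lifetimes at any later point is kubeadm's built-in check, available on any kubeadm-managed control plane:

# print the expiration date of every kubeadm-managed certificate
kubeadm certs check-expiration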
- 0
- 3
- 40