
I set up a 3-node cluster following http://docs.projectcalico.org/v2.1/getting-started/kubernetes/installation/vagrant/.

After rebooting the nodes, the KubeDNS service does not start, and the logs don't show much information.

This is the message I get:

$ kubectl logs --namespace=kube-system kube-dns-v19-sqx9q -c kubedns 
Error from server (BadRequest): container "kubedns" in pod "kube-dns-v19-sqx9q" is waiting to start: ContainerCreating 
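
Because the container is still waiting to start, kubectl logs has nothing to return, so the pod's events are the next place to look. A minimal check (pod name taken from the output above):

$ kubectl describe pod kube-dns-v19-sqx9q --namespace=kube-system 
$ kubectl get events --namespace=kube-system 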

The nodes are up and running:

$ kubectl get nodes 
NAME            STATUS                     AGE   VERSION 
172.18.18.101   Ready,SchedulingDisabled   2d    v1.6.0 
172.18.18.102   Ready                      2d    v1.6.0 
172.18.18.103   Ready                      2d    v1.6.0 


$ kubectl get pods --namespace=kube-system 
NAME                                        READY   STATUS              RESTARTS   AGE 
calico-node-6rhb9                           2/2     Running             4          2d 
calico-node-mbhk7                           2/2     Running             93         2d 
calico-node-w9sjq                           2/2     Running             6          2d 
calico-policy-controller-2425378810-rd9h7   1/1     Running             0          25m 
kube-dns-v19-sqx9q                          0/3     ContainerCreating   0          25m 
kubernetes-dashboard-2457468166-rs0tn       0/1     ContainerCreating   0          25m 
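
One more data point: calico-node-mbhk7 has restarted 93 times, far more than its peers, which points at the CNI layer on that node. Its logs may explain the sandbox failures (the container name calico-node is assumed from the stock Calico manifest):

$ kubectl logs --namespace=kube-system calico-node-mbhk7 -c calico-node 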

How can I find out what is wrong with the DNS service?

Thanks, SR

Some more details:

Events: 
    FirstSeen LastSeen Count From   SubObjectPath Type  Reason  Message 
    --------- -------- ----- ----   ------------- -------- ------  ------- 
    31m  31m  1 kubelet, 172.18.18.102   Warning  FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 87bd5c4bc5b9d81468170cc840ba9203988bb259aa0c025372ee02303d9e8d4b" 

    31m 31m 1 kubelet, 172.18.18.102  Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: d091593b55eb9e16e09c5bc47f4701015839d83d23546c4c6adc070bc37ad60d" 

    30m 30m 1 kubelet, 172.18.18.102  Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 69a1fa33f26b851664b2ad10def1eb37b5e5391ca33dad2551a2f98c52e05d0d 
    30m 30m 1 kubelet, 172.18.18.102  Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: c3b7c06df3bea90e4d12c0b7f1a03077edf5836407206038223967488b279d3d" 

    28m 28m 1 kubelet, 172.18.18.102  Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 467d54496eb5665c5c7c20b1adb0cc0f01987a83901e4b54c1dc9ccb4860f16d" 

    28m 28m 1 kubelet, 172.18.18.102  Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 1cd8022c9309205e61d7e593bc7ff3248af17d731e2a4d55e74b488cbc115162 
    27m 27m 1 kubelet, 172.18.18.102  Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 1ed4174aba86124055981b7888c9d048d784e98cef5f2763fd1352532a0ba85d 
    26m 26m 1 kubelet, 172.18.18.102  Warning FailedSync Error syncing pod, skipping: failed to "KillPodSandbox" for "d7c71007-2933-11e7-9bbd-08002774bad8" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"_\" network: CNI failed to retrieve network namespace path: Error: No such container: 444693b4ce06eb25f3dbd00aebef922b72b291598fec11083cb233a0f9d5e92d" 

    25m 25m 1 kubelet, 172.18.18.102  Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 736df24a9a6640300d62d542e5098e03a5a9fde4f361926e2672880b43384516 
    8m 8m 1 kubelet, 172.18.18.102  Warning FailedSync Error syncing pod, skipping: rpc error: code = 2 desc = Error: No such container: 8424dbdf92b16602c7d5a4f61d21cd602c5da449c6ec3449dafbff80ff5e72c4 
    2h 1m 49 kubelet, 172.18.18.102  Warning FailedSync (events with common reason combined) 
    2h 2s 361 kubelet, 172.18.18.102  Warning FailedSync Error syncing pod, skipping: failed to "CreatePodSandbox" for "kube-dns-v19-sqx9q_kube-system(d7c71007-2933-11e7-9bbd-08002774bad8)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-v19-sqx9q_kube-system(d7c71007-2933-11e7-9bbd-08002774bad8)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"kube-dns-v19-sqx9q_kube-system\" network: the server has asked for the client to provide credentials (get pods kube-dns-v19-sqx9q)" 

    2h 1s 406 kubelet, 172.18.18.102  Normal SandboxChanged Pod sandbox changed, it will be killed and re-created. 
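
The repeated CreatePodSandbox failure looks like the key line: the API server is rejecting the CNI plugin with "the server has asked for the client to provide credentials". One thing to check, assuming the default Calico CNI install paths from the Vagrant guide, is the CNI configuration and the kubeconfig it references on the affected node:

$ # run on 172.18.18.102; paths assume a default Calico CNI install 
$ cat /etc/cni/net.d/10-calico.conf 
$ cat /etc/cni/net.d/calico-kubeconfig 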

Pod describe output:

Name:  kube-dns-v19-sqx9q 
Namespace: kube-system 
Node:  172.18.18.102/172.18.18.102 
Start Time: Mon, 24 Apr 2017 17:34:22 -0400 
Labels:  k8s-app=kube-dns 
     kubernetes.io/cluster-service=true 
     version=v19 
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"kube-system","name":"kube-dns-v19","uid":"dac3d892-278c-11e7-b2b5-0800... 
     scheduler.alpha.kubernetes.io/critical-pod= 
     scheduler.alpha.kubernetes.io/tolerations=[{"key":"CriticalAddonsOnly", "operator":"Exists"}] 
Status:  Pending 
IP:  
Controllers: ReplicationController/kube-dns-v19 
Containers: 
    kubedns: 
    Container ID: 
    Image:  gcr.io/google_containers/kubedns-amd64:1.7 
    Image ID:  
    Ports:  10053/UDP, 10053/TCP 
    Args: 
     --domain=cluster.local 
     --dns-port=10053 
    State:  Waiting 
     Reason:  ContainerCreating 
    Ready:  False 
    Restart Count: 0 
    Limits: 
     cpu: 100m 
     memory: 170Mi 
    Requests: 
     cpu:  100m 
     memory:  70Mi 
    Liveness:  http-get http://:8080/healthz delay=60s timeout=5s period=10s #success=1 #failure=5 
    Readiness:  http-get http://:8081/readiness delay=30s timeout=5s period=10s #success=1 #failure=3 
    Environment: <none> 
    Mounts: 
     /var/run/secrets/kubernetes.io/serviceaccount from default-token-r5xws (ro) 
    dnsmasq: 
    Container ID: 
    Image:  gcr.io/google_containers/kube-dnsmasq-amd64:1.3 
    Image ID:  
    Ports:  53/UDP, 53/TCP 
    Args: 
     --cache-size=1000 
     --no-resolv 
     --server=127.0.0.1#10053 
    State:  Waiting 
     Reason:  ContainerCreating 
    Ready:  False 
    Restart Count: 0 
    Environment: <none> 
    Mounts: 
     /var/run/secrets/kubernetes.io/serviceaccount from default-token-r5xws (ro) 
    healthz: 
    Container ID: 
    Image:  gcr.io/google_containers/exechealthz-amd64:1.1 
    Image ID:  
    Port:  8080/TCP 
    Args: 
     -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null 
     -port=8080 
     -quiet 
    State:  Waiting 
     Reason:  ContainerCreating 
    Ready:  False 
    Restart Count: 0 
    Limits: 
     cpu: 10m 
     memory: 50Mi 
    Requests: 
     cpu:  10m 
     memory:  50Mi 
    Environment: <none> 
    Mounts: 
     /var/run/secrets/kubernetes.io/serviceaccount from default-token-r5xws (ro) 
Conditions: 
    Type  Status 
    Initialized True 
    Ready  False 
    PodScheduled True 
Volumes: 
    default-token-r5xws: 
    Type: Secret (a volume populated by a Secret) 
    SecretName: default-token-r5xws 
    Optional: false 
QoS Class: Burstable 
Node-Selectors: <none> 
Tolerations: <none> 

'the server has asked for the client to provide credentials (get pods kube-dns-v19-sqx9q)' is your hint... –


I deleted and recreated the DNS pod. It looks like one of the nodes cannot mount the secret volume: kubelet, 172.18.18.103 Warning FailedMount MountVolume.SetUp failed for volume "kubernetes.io/secret/32a98bf6-2a1d-11e7-b43a-08002774bad8-default-token-r5xws" (spec.Name: "default-token-r5xws") pod "32a98bf6-2a1d-11e7-b43a-08002774bad8" (UID: "32a98bf6-2a1d-11e7-b43a-08002774bad8"): secret "default-token-r5xws" not found – sfgroups


It looks like your cluster bootstrapping is broken: check the logs for the creation of that secret. It should have been created in the kube-system namespace. –

Answer:


The mount of the service account token at /var/run/secrets/kubernetes.io/serviceaccount from the secret default-token-r5xws failed. Check the logs to find out why the creation of this secret failed.
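
A minimal way to verify this, assuming the token controller inside kube-controller-manager is what failed to create the secret (the secret and service account names come from the describe output above; where the controller-manager logs live depends on how the cluster was bootstrapped):

$ # does the token secret exist in the kube-system namespace? 
$ kubectl get secret default-token-r5xws --namespace=kube-system 
$ kubectl get serviceaccount default --namespace=kube-system -o yaml 
$ # the token controller runs inside kube-controller-manager; check 
$ # its logs, e.g. via journald if it runs as a systemd unit: 
$ sudo journalctl -u kube-controller-manager 

If the secret is missing, deleting and recreating the kube-dns pod once the token controller has recreated the secret should let the mount succeed.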
