2016-06-12

I cannot deploy Kubernetes with DNS on a local Ubuntu cluster (a single node). I suspect it may have something to do with flannel, but I am not sure, and more importantly I don't understand why the output refers to coreos when I am deploying on Ubuntu. I had to change a few things in config-default.sh under cluster/ubuntu to get this far, but I have not been able to resolve this one error, and in the end I could not bring up Kubernetes with DNS on the local Ubuntu cluster (one node).

Below is my error trace. When I run kube-up.sh, the following lines appear in the output:

Error: 100: Key not found (/coreos.com) [1] 
{"Network":"172.16.0.0/16", "Backend": {"Type": "vxlan"}} 
{"Network":"172.16.0.0/16", "Backend": {"Type": "vxlan"}} 

ERROR TRACE 

$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh  # ran this command in the terminal 
... Starting cluster using provider: ubuntu 
... calling verify-prereqs 
... calling kube-up 
~/kubernetes/cluster/ubuntu ~/kubernetes/cluster 
Prepare flannel 0.5.0 release ... 
% Total % Received % Xferd Average Speed Time Time Time Current 
Dload Upload Total Spent Left Speed 
100 608 0 608 0 0 102 0 --:--:-- 0:00:05 --:--:-- 138 
100 2757k 100 2757k 0 0 194k 0 0:00:14 0:00:14 --:--:-- 739k 
Prepare etcd 2.2.0 release ... 
% Total % Received % Xferd Average Speed Time Time Time Current 
Dload Upload Total Spent Left Speed 
100 606 0 606 0 0 101 0 --:--:-- 0:00:05 --:--:-- 175 
100 7183k 100 7183k 0 0 468k 0 0:00:15 0:00:15 --:--:-- 1871k 
Prepare kubernetes 1.2.4 release ... 
Done! All your binaries locate in kubernetes/cluster/ubuntu/binaries directory 
~/kubernetes/cluster 

Deploying master and node on machine 192.168.245.244 
make-ca-cert.sh 100% 4028 3.9KB/s 00:00 

easy-rsa.tar.gz 100% 42KB 42.4KB/s 00:00 

config-default.sh 100% 5419 5.3KB/s 00:00 

util.sh 100% 29KB 28.6KB/s 00:00 

kubelet.conf 100% 644 0.6KB/s 00:00 

kube-proxy.conf 100% 684 0.7KB/s 00:00 

kubelet 100% 2158 2.1KB/s 00:00 

kube-proxy 100% 2233 2.2KB/s 00:00 

kube-scheduler.conf 100% 674 0.7KB/s 00:00 

etcd.conf 100% 709 0.7KB/s 00:00 

kube-controller-manager.conf 100% 744 0.7KB/s 00:00 

kube-apiserver.conf 100% 674 0.7KB/s 00:00 

kube-apiserver 100% 2358 2.3KB/s 00:00 

kube-scheduler 100% 2360 2.3KB/s 00:00 

kube-controller-manager 100% 2672 2.6KB/s 00:00 

etcd 100% 2073 2.0KB/s 00:00 

reconfDocker.sh 100% 2094 2.0KB/s 00:00 

kube-apiserver 100% 58MB 58.2MB/s 00:00 

kube-scheduler 100% 42MB 42.0MB/s 00:00 

kube-controller-manager 100% 52MB 51.8MB/s 00:00 

etcdctl 100% 12MB 12.3MB/s 00:00 

etcd 100% 14MB 13.8MB/s 00:00 

flanneld 100% 11MB 10.8MB/s 00:00 

kubelet 100% 60MB 60.3MB/s 00:01 

kube-proxy 100% 35MB 34.8MB/s 00:00 

flanneld 100% 11MB 10.8MB/s 00:00 

flanneld.conf 100% 577 0.6KB/s 00:00 

flanneld 100% 2121 2.1KB/s 00:00 

flanneld.conf 100% 568 0.6KB/s 00:00 

flanneld 100% 2131 2.1KB/s 00:00 

[sudo] password to start master:  # I entered my password manually 
etcd start/running, process 100639 
Error: 100: Key not found (/coreos.com) [1] 
{"Network":"172.16.0.0/16", "Backend": {"Type": "vxlan"}} 
{"Network":"172.16.0.0/16", "Backend": {"Type": "vxlan"}} 
docker stop/waiting 
docker start/running, process 101035 
Connection to 192.168.245.244 closed. 
Validating master 
Validating [email protected] 
Using master 192.168.245.244 
cluster "ubuntu" set. 
user "ubuntu" set. 
context "ubuntu" set. 
switched to context "ubuntu". 
Wrote config for ubuntu to /home/kant/.kube/config 
... calling validate-cluster 
Waiting for 1 ready nodes. 0 ready nodes, 0 registered. Retrying. 
(the line above repeats until validate-cluster gives up)

I am not sure whether this is the reason the deployment failed. Since it was suggested that I might have an incorrect setting in config-default.sh, here is the error trace again with the debug flag turned on:

$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh 
... Starting cluster using provider: ubuntu 
... calling verify-prereqs 
... calling kube-up 
~/kubernetes/cluster/ubuntu ~/kubernetes/cluster 
Prepare flannel 0.5.5 release ... 
Prepare etcd 2.3.1 release ... 
Prepare kubernetes 1.2.4 release ... 
Done! All your binaries locate in kubernetes/cluster/ubuntu/binaries directory 
~/kubernetes/cluster 

Deploying master and node on machine 192.168.245.237 
make-ca-cert.sh                     100% 4028  3.9KB/s 00:00  
easy-rsa.tar.gz                     100% 42KB 42.4KB/s 00:00  
config-default.sh                    100% 5474  5.4KB/s 00:00  
util.sh                       100% 29KB 28.6KB/s 00:00  
kubelet.conf                     100% 644  0.6KB/s 00:00  
kube-proxy.conf                     100% 684  0.7KB/s 00:00  
kubelet                       100% 2158  2.1KB/s 00:00  
kube-proxy                      100% 2233  2.2KB/s 00:00  
kube-scheduler.conf                    100% 674  0.7KB/s 00:00  
etcd.conf                      100% 709  0.7KB/s 00:00  
kube-controller-manager.conf                 100% 744  0.7KB/s 00:00  
kube-apiserver.conf                    100% 674  0.7KB/s 00:00  
kube-apiserver                     100% 2358  2.3KB/s 00:00  
kube-scheduler                     100% 2360  2.3KB/s 00:00  
kube-controller-manager                   100% 2672  2.6KB/s 00:00  
etcd                       100% 2073  2.0KB/s 00:00  
reconfDocker.sh                     100% 2094  2.0KB/s 00:00  
kube-apiserver                     100% 58MB 58.2MB/s 00:01  
kube-scheduler                     100% 42MB 42.0MB/s 00:00  
kube-controller-manager                   100% 52MB 51.8MB/s 00:00  
etcdctl                       100% 14MB 13.7MB/s 00:00  
etcd                       100% 16MB 15.9MB/s 00:00  
flanneld                      100% 16MB 15.8MB/s 00:00  
kubelet                       100% 60MB 60.3MB/s 00:01  
kube-proxy                      100% 35MB 34.8MB/s 00:00  
flanneld                      100% 16MB 15.8MB/s 00:00  
flanneld.conf                     100% 577  0.6KB/s 00:00  
flanneld                      100% 2121  2.1KB/s 00:00  
flanneld.conf                     100% 568  0.6KB/s 00:00  
flanneld                      100% 2131  2.1KB/s 00:00  
+ source /home/kant/kube/util.sh 
++ set -e 
++ SSH_OPTS='-oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oLogLevel=ERROR' 
++ MASTER= 
++ MASTER_IP= 
++ NODE_IPS= 
+ setClusterInfo 
+ NODE_IPS= 
+ local ii=0 
+ create-etcd-opts 192.168.245.237 
+ cat 
+ create-kube-apiserver-opts 192.168.3.0/24 NamespaceLifecycle,LimitRanger,ServiceAccount,SecurityContextDeny,ResourceQuota 30000-32767 192.168.245.237 
+ cat 
+ create-kube-controller-manager-opts 192.168.245.237 
+ cat 
+ create-kube-scheduler-opts 
+ cat 
+ create-kubelet-opts 192.168.245.237 192.168.245.237 192.168.3.10 cluster.local '' '' 
+ '[' -n '' ']' 
+ cni_opts= 
+ cat 
+ create-kube-proxy-opts 192.168.245.237 192.168.245.237 '' 
+ cat 
+ create-flanneld-opts 127.0.0.1 192.168.245.237 
+ cat 
+ FLANNEL_OTHER_NET_CONFIG= 
+ sudo -E -p '[sudo] password to start master: ' -- /bin/bash -ce ' 
     set -x 
     cp ~/kube/default/* /etc/default/ 
     cp ~/kube/init_conf/* /etc/init/ 
     cp ~/kube/init_scripts/* /etc/init.d/ 

     groupadd -f -r kube-cert 
     DEBUG=true ~/kube/make-ca-cert.sh "192.168.245.237" "IP:192.168.245.237,IP:192.168.3.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local" 
     mkdir -p /opt/bin/ 
     cp ~/kube/master/* /opt/bin/ 
     cp ~/kube/minion/* /opt/bin/ 

     service etcd start 
     if true; then FLANNEL_NET="172.16.0.0/16" KUBE_CONFIG_FILE="./../cluster/../cluster/ubuntu/config-default.sh" DOCKER_OPTS="" ~/kube/reconfDocker.sh ai; fi 
     ' 
[sudo] password to start master: 
+ cp /home/kant/kube/default/etcd /home/kant/kube/default/flanneld /home/kant/kube/default/kube-apiserver /home/kant/kube/default/kube-controller-manager /home/kant/kube/default/kubelet /home/kant/kube/default/kube-proxy /home/kant/kube/default/kube-scheduler /etc/default/ 
+ cp /home/kant/kube/init_conf/etcd.conf /home/kant/kube/init_conf/flanneld.conf /home/kant/kube/init_conf/kube-apiserver.conf /home/kant/kube/init_conf/kube-controller-manager.conf /home/kant/kube/init_conf/kubelet.conf /home/kant/kube/init_conf/kube-proxy.conf /home/kant/kube/init_conf/kube-scheduler.conf /etc/init/ 
+ cp /home/kant/kube/init_scripts/etcd /home/kant/kube/init_scripts/flanneld /home/kant/kube/init_scripts/kube-apiserver /home/kant/kube/init_scripts/kube-controller-manager /home/kant/kube/init_scripts/kubelet /home/kant/kube/init_scripts/kube-proxy /home/kant/kube/init_scripts/kube-scheduler /etc/init.d/ 
+ groupadd -f -r kube-cert 
+ DEBUG=true 
+ /home/kant/kube/make-ca-cert.sh 192.168.245.237 IP:192.168.245.237,IP:192.168.3.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local 
+ cert_ip=192.168.245.237 
+ extra_sans=IP:192.168.245.237,IP:192.168.3.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local 
+ cert_dir=/srv/kubernetes 
+ cert_group=kube-cert 
+ mkdir -p /srv/kubernetes 
+ use_cn=false 
+ '[' 192.168.245.237 == _use_gce_external_ip_ ']' 
+ '[' 192.168.245.237 == _use_aws_external_ip_ ']' 
+ sans=IP:192.168.245.237 
+ [[ -n IP:192.168.245.237,IP:192.168.3.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local ]] 
+ sans=IP:192.168.245.237,IP:192.168.245.237,IP:192.168.3.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local 
++ mktemp -d -t kubernetes_cacert.XXXXXX 
+ tmpdir=/tmp/kubernetes_cacert.YAN8Jg 
+ trap 'rm -rf "${tmpdir}"' EXIT 
+ cd /tmp/kubernetes_cacert.YAN8Jg 
+ '[' -f /home/kant/kube/easy-rsa.tar.gz ']' 
+ ln -s /home/kant/kube/easy-rsa.tar.gz . 
+ tar xzf easy-rsa.tar.gz 
+ cd easy-rsa-master/easyrsa3 
+ ./easyrsa init-pki 
++ date +%s 
+ ./easyrsa --batch [email protected] build-ca nopass 
+ '[' false = true ']' 
+ ./easyrsa --subject-alt-name=IP:192.168.245.237,IP:192.168.245.237,IP:192.168.3.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local build-server-full kubernetes-master nopass 
+ cp -p pki/issued/kubernetes-master.crt /srv/kubernetes/server.cert 
+ cp -p pki/private/kubernetes-master.key /srv/kubernetes/server.key 
+ ./easyrsa build-client-full kubecfg nopass 
+ cp -p pki/ca.crt /srv/kubernetes/ca.crt 
+ cp -p pki/issued/kubecfg.crt /srv/kubernetes/kubecfg.crt 
+ cp -p pki/private/kubecfg.key /srv/kubernetes/kubecfg.key 
+ chgrp kube-cert /srv/kubernetes/server.key /srv/kubernetes/server.cert /srv/kubernetes/ca.crt 
+ chmod 660 /srv/kubernetes/server.key /srv/kubernetes/server.cert /srv/kubernetes/ca.crt 
+ rm -rf /tmp/kubernetes_cacert.YAN8Jg 
+ mkdir -p /opt/bin/ 
+ cp /home/kant/kube/master/etcd /home/kant/kube/master/etcdctl /home/kant/kube/master/flanneld /home/kant/kube/master/kube-apiserver /home/kant/kube/master/kube-controller-manager /home/kant/kube/master/kube-scheduler /opt/bin/ 
+ cp /home/kant/kube/minion/flanneld /home/kant/kube/minion/kubelet /home/kant/kube/minion/kube-proxy /opt/bin/ 
+ service etcd start 
etcd start/running, process 74611 
+ true 
+ FLANNEL_NET=172.16.0.0/16 
+ KUBE_CONFIG_FILE=./../cluster/../cluster/ubuntu/config-default.sh 
+ DOCKER_OPTS= 
+ /home/kant/kube/reconfDocker.sh ai 
Error: 100: Key not found (/coreos.com) [1] 
{"Network":"172.16.0.0/16", "Backend": {"Type": "vxlan"}} 
{"Network":"172.16.0.0/16", "Backend": {"Type": "vxlan"}} 
docker stop/waiting 
docker start/running, process 75022 
Connection to 192.168.245.237 closed. 
Validating master 
Validating [email protected] 
Using master 192.168.245.237 
cluster "ubuntu" set. 
user "ubuntu" set. 
context "ubuntu" set. 
switched to context "ubuntu". 
Wrote config for ubuntu to /home/kant/.kube/config 
... calling validate-cluster 
Waiting for 1 ready nodes. 0 ready nodes, 0 registered. Retrying. 
(the line above keeps repeating)

Answers


Check with the debug flag set to true. If you want to deploy a local cluster on a single node (acting as both master and worker), you should have the following in config-default.sh:

roles=${roles:-"ai"} 

export NUM_NODES=${NUM_NODES:-1} 

The value of NUM_NODES is the number of nodes whose role includes `i` (worker).
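For a single-node cluster the relevant part of cluster/ubuntu/config-default.sh would look roughly like this (a sketch; the IP and ranges are taken from the question's traces, so adjust them to your machine):

```shell
# Sketch of cluster/ubuntu/config-default.sh for a single-node cluster.
export nodes="[email protected]"       # ssh user@ip of the machine
export roles="ai"                         # "a" = master, "i" = node, "ai" = both
export NUM_NODES=${NUM_NODES:-1}          # number of nodes with role "i"
export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24
export FLANNEL_NET=172.16.0.0/16          # must not overlap the host network
```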


Hi, I already had exactly that setting. This time I ran it with the debug flag turned on and added the extra details to my question. I have been trying to solve this for a week and still can't get it right. I think it has something to do with flannel, but I am new to Kubernetes too, so I am not 100% sure. – user1870400


Ah, I had the same problem as you the first time I deployed a Kubernetes cluster, but after changing the value of $NUM_NODES to 1 it worked fine. So I don't know why you are still getting the error. – luanbuingoc
