
I am trying to run a microservice application on Kubernetes. I have rabbitmq, elasticsearch, and a eureka discovery service running on Kubernetes, and besides those, three microservice applications. When I run two of them, everything is fine, but when I start the third one, its pods restart again and again for no apparent reason.

kubectl describe pod hrm shows:

State:  Running 
     Started:  Mon, 12 Jun 2017 12:08:28 +0300 
    Last State:  Terminated 
     Reason:  Error 
     Exit Code: 137 
     Started:  Mon, 01 Jan 0001 00:00:00 +0000 
     Finished:  Mon, 12 Jun 2017 12:07:05 +0300 
    Ready:  True 
    Restart Count: 5 
    18m  18m  1 kubelet, minikube    Warning  FailedSync Error syncing pod, skipping: failed to "StartContainer" for "hrm" with CrashLoopBackOff: "Back-off 10s restarting failed container=hrm pod=hrm-3288407936-cwvgz_default(915fb55c-4f4a-11e7-9240-080027ccf1c3)" 

kubectl get pods:

NAME      READY  STATUS RESTARTS AGE 
discserv-189146465-s599x 1/1  Running 0   2d 
esearch-3913228203-9sm72 1/1  Running 0   2d 
hrm-3288407936-cwvgz  1/1  Running 6   46m 
parabot-1262887100-6098j 1/1  Running 9   2d 
rabbitmq-279796448-9qls3 1/1  Running 0   2d 
suite-ui-1725964700-clvbd 1/1  Running 3   2d 

kubectl version:

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"} 
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"dirty", BuildDate:"2017-04-07T20:43:50Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"} 
My config file for hrm:

apiVersion: v1
kind: Service
metadata:
  name: hrm
  labels:
    app: suite
spec:
  type: NodePort
  ports:
  - port: 8086
    nodePort: 30001
  selector:
    app: suite
    tier: hrm-core
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hrm
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: suite
        tier: hrm-core
    spec:
      containers:
      - image: privaterepo/hrm-core
        name: hrm
        ports:
        - containerPort: 8086
      imagePullSecrets:
      - name: regsecret


minikube version:

minikube version: v0.18.0 

When I look at the pod's logs, there are no errors; it seems to start without any problem. What could be the problem?
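(For reference, one way to check this, using the pod name from the kubectl get pods output above; kubectl's --previous flag fetches the logs of the last terminated container, which is the one that exited with code 137:)

kubectl logs hrm-3288407936-cwvgz
kubectl logs hrm-3288407936-cwvgz --previous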

Edit: kubectl get events output:

19m  19m   1   discserv-189146465-lk3sm Pod          Normal SandboxChanged   kubelet, minikube  Pod sandbox changed, it will be killed and re-created. 
19m  19m   1   discserv-189146465-lk3sm Pod   spec.containers{discserv} Normal Pulling     kubelet, minikube  pulling image "private repo" 
19m  19m   1   discserv-189146465-lk3sm Pod   spec.containers{discserv} Normal Pulled     kubelet, minikube  Successfully pulled image "private repo" 
19m  19m   1   discserv-189146465-lk3sm Pod   spec.containers{discserv} Normal Created     kubelet, minikube  Created container with id 1607af1a7d217a6c9c91c1061f6b2148dd830a525b4fb02e9c6d71e8932c9f67 
19m  19m   1   discserv-189146465-lk3sm Pod   spec.containers{discserv} Normal Started     kubelet, minikube  Started container with id 1607af1a7d217a6c9c91c1061f6b2148dd830a525b4fb02e9c6d71e8932c9f67 
19m  19m   1   esearch-3913228203-6l3t7 Pod          Normal SandboxChanged   kubelet, minikube  Pod sandbox changed, it will be killed and re-created. 
19m  19m   1   esearch-3913228203-6l3t7 Pod   spec.containers{esearch} Normal Pulled     kubelet, minikube  Container image "elasticsearch:2.4" already present on machine 
19m  19m   1   esearch-3913228203-6l3t7 Pod   spec.containers{esearch} Normal Created     kubelet, minikube  Created container with id db30f7190fec4643b0ee7f9e211fa92572ff24a7d934e312a97e0a08bb1ccd60 
19m  19m   1   esearch-3913228203-6l3t7 Pod   spec.containers{esearch} Normal Started     kubelet, minikube  Started container with id db30f7190fec4643b0ee7f9e211fa92572ff24a7d934e312a97e0a08bb1ccd60 
18m  18m   1   hrm-3288407936-d2vhh  Pod          Normal Scheduled     default-scheduler  Successfully assigned hrm-3288407936-d2vhh to minikube 
18m  18m   1   hrm-3288407936-d2vhh  Pod   spec.containers{hrm}  Normal Pulling     kubelet, minikube  pulling image "private repo" 
18m  18m   1   hrm-3288407936-d2vhh  Pod   spec.containers{hrm}  Normal Pulled     kubelet, minikube  Successfully pulled image "private repo" 
18m  18m   1   hrm-3288407936-d2vhh  Pod   spec.containers{hrm}  Normal Created     kubelet, minikube  Created container with id 34d1f35fc68ed64e5415e9339405847d496e48ad60eb7b08e864ee0f5b87516e 
18m  18m   1   hrm-3288407936-d2vhh  Pod   spec.containers{hrm}  Normal Started     kubelet, minikube  Started container with id 34d1f35fc68ed64e5415e9339405847d496e48ad60eb7b08e864ee0f5b87516e 
18m  18m   1   hrm-3288407936    ReplicaSet        Normal SuccessfulCreate   replicaset-controller Created pod: hrm-3288407936-d2vhh 
18m  18m   1   hrm       Deployment        Normal ScalingReplicaSet   deployment-controller Scaled up replica set hrm-3288407936 to 1 
19m  19m   1   minikube     Node          Normal RegisteredNode   controllermanager  Node minikube event: Registered Node minikube in NodeController 
19m  19m   1   minikube     Node          Normal Starting     kubelet, minikube  Starting kubelet. 
19m  19m   1   minikube     Node          Warning ImageGCFailed    kubelet, minikube  unable to find data for container/
19m  19m   1   minikube     Node          Normal NodeAllocatableEnforced kubelet, minikube  Updated Node Allocatable limit across pods 
19m  19m   1   minikube     Node          Normal NodeHasSufficientDisk  kubelet, minikube  Node minikube status is now: NodeHasSufficientDisk 
19m  19m   1   minikube     Node          Normal NodeHasSufficientMemory kubelet, minikube  Node minikube status is now: NodeHasSufficientMemory 
19m  19m   1   minikube     Node          Normal NodeHasNoDiskPressure  kubelet, minikube  Node minikube status is now: NodeHasNoDiskPressure 
19m  19m   1   minikube     Node          Warning Rebooted     kubelet, minikube  Node minikube has been rebooted, boot id: f66e28f9-62b3-4066-9e18-33b152fa1300 
19m  19m   1   minikube     Node          Normal NodeNotReady    kubelet, minikube  Node minikube status is now: NodeNotReady 
19m  19m   1   minikube     Node          Normal Starting     kube-proxy, minikube Starting kube-proxy. 
19m  19m   1   minikube     Node          Normal NodeReady     kubelet, minikube  Node minikube status is now: NodeReady 
8m   8m   1   minikube     Node          Warning SystemOOM     kubelet, minikube  System OOM encountered 
18m  18m   1   parabot-1262887100-r84kf Pod          Normal Scheduled     default-scheduler  Successfully assigned parabot-1262887100-r84kf to minikube 
8m   18m   2   parabot-1262887100-r84kf Pod   spec.containers{parabot} Normal Pulling     kubelet, minikube  pulling image "private repo" 
8m   18m   2   parabot-1262887100-r84kf Pod   spec.containers{parabot} Normal Pulled     kubelet, minikube  Successfully pulled image "private repo" 
18m  18m   1   parabot-1262887100-r84kf Pod   spec.containers{parabot} Normal Created     kubelet, minikube  Created container with id ed8b5c19a2ad3729015f20707b6b4d4132f86bd8a3f8db1d8d79381200c63045 
18m  18m   1   parabot-1262887100-r84kf Pod   spec.containers{parabot} Normal Started     kubelet, minikube  Started container with id ed8b5c19a2ad3729015f20707b6b4d4132f86bd8a3f8db1d8d79381200c63045 
8m   8m   1   parabot-1262887100-r84kf Pod   spec.containers{parabot} Normal Created     kubelet, minikube  Created container with id 664931f24e482310e1f66dcb230c9a2a4d11aae8d4b3866bcbd084b19d3d7b2b 
8m   8m   1   parabot-1262887100-r84kf Pod   spec.containers{parabot} Normal Started     kubelet, minikube  Started container with id 664931f24e482310e1f66dcb230c9a2a4d11aae8d4b3866bcbd084b19d3d7b2b 
18m  18m   1   parabot-1262887100   ReplicaSet        Normal SuccessfulCreate   replicaset-controller Created pod: parabot-1262887100-r84kf 
18m  18m   1   parabot      Deployment        Normal ScalingReplicaSet   deployment-controller Scaled up replica set parabot-1262887100 to 1 
19m  19m   1   rabbitmq-279796448-pcqqh Pod          Normal SandboxChanged   kubelet, minikube  Pod sandbox changed, it will be killed and re-created. 
19m  19m   1   rabbitmq-279796448-pcqqh Pod   spec.containers{rabbitmq} Normal Pulling     kubelet, minikube  pulling image "rabbitmq" 
19m  19m   1   rabbitmq-279796448-pcqqh Pod   spec.containers{rabbitmq} Normal Pulled     kubelet, minikube  Successfully pulled image "rabbitmq" 
19m  19m   1   rabbitmq-279796448-pcqqh Pod   spec.containers{rabbitmq} Normal Created     kubelet, minikube  Created container with id 155e900afaa00952e4bb9a7a8b282d2c26004d187aa727201bab596465f0ea50 
19m  19m   1   rabbitmq-279796448-pcqqh Pod   spec.containers{rabbitmq} Normal Started     kubelet, minikube  Started container with id 155e900afaa00952e4bb9a7a8b282d2c26004d187aa727201bab596465f0ea50 
19m  19m   1   suite-ui-1725964700-ssshn Pod          Normal SandboxChanged   kubelet, minikube  Pod sandbox changed, it will be killed and re-created. 
19m  19m   1   suite-ui-1725964700-ssshn Pod   spec.containers{suite-ui} Normal Pulling     kubelet, minikube  pulling image "private repo" 
19m  19m   1   suite-ui-1725964700-ssshn Pod   spec.containers{suite-ui} Normal Pulled     kubelet, minikube  Successfully pulled image "private repo" 
19m  19m   1   suite-ui-1725964700-ssshn Pod   spec.containers{suite-ui} Normal Created     kubelet, minikube  Created container with id bcaa7d96e3b0e574cd48641a633eb36c5d938f5fad41d44db425dd02da63ba3a 
19m  19m   1   suite-ui-1725964700-ssshn Pod   spec.containers{suite-ui} Normal Started     kubelet, minikube  Started container with id bcaa7d96e3b0e574cd48641a633eb36c5d938f5fad41d44db425dd02da63ba3a 

Just an optimistic guess, but exit code 137 means signal 9 (137 minus 128), so there may not be enough memory on the node and the process may be getting killed by the OS. Try increasing the number of nodes, or reducing the number of other services, and see if that helps. – hurturk
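(To unpack that arithmetic: 137 - 128 = 9, and signal 9 is SIGKILL, which is what the kernel's OOM killer sends. A rough way to confirm an OOM kill from inside the minikube VM is sketched below; the exact dmesg wording is an assumption about the kernel's log format:)

minikube ssh
dmesg | grep -i "killed process"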


I was thinking the same thing, but when I describe the node it looks like there is enough memory. It says: OutOfDisk False, MemoryPressure False, DiskPressure False, Ready True. Now I'm thinking the problem might be with the discovery service. –
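(Those conditions come from describing the node, which on minikube is:)

kubectl describe node minikube    # check the Conditions and Allocatable sections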


Does the order in which you start them matter? For example, is it always 'hrm' that fails to start, or is it always whichever one is started third, regardless of order? Like the other comments, that would point to a resource problem. I also noticed the server is 1.6.0, which was the first 1.6 release; it may be worth trying against a 1.6.4 server. –

Answer:


Check kubectl logs and see if there are any obvious errors there. In this case it looks like an insufficient-resource problem (or a service with a resource leak). If possible, try increasing the resources and see if that helps.
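(For example, a minimal sketch of what adding a memory request and limit to the hrm container from the question could look like; the 512Mi/1Gi values are assumed placeholders to be tuned, not measured numbers:)

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hrm
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: suite
        tier: hrm-core
    spec:
      containers:
      - image: privaterepo/hrm-core
        name: hrm
        ports:
        - containerPort: 8086
        resources:
          requests:
            memory: "512Mi"   # assumed starting point
          limits:
            memory: "1Gi"     # assumed ceiling; raise it if the app legitimately needs more
      imagePullSecrets:
      - name: regsecret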


I'll try starting minikube with more memory and update the question after that. Thanks! –
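(On minikube that would be along these lines; the VM's memory size is fixed when the VM is created, so it has to be recreated, and the 4096 MB figure is just an example value:)

minikube stop
minikube delete                  # the memory setting only applies to a newly created VM
minikube start --memory 4096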


Hi @AshishVyas, it was indeed a memory problem. Everything seems to run without issues now. Thank you very much. –


Glad I could help. –
