
I am using Spring XD 1.3.1 on a local Windows Enterprise PC, with Hortonworks configured on the Microsoft Azure cloud. On the Hortonworks cluster on Azure I created the /xd directory and gave it the required permissions as described in the Spring XD docs. The problem: even though the stream deploys successfully, the file in HDFS stays empty. Here is my setup, step by step.
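For reference, creating the directory went along these lines (a sketch, assuming shell access to the cluster; the wide-open permissions match the listing further down):

hadoop fs -mkdir /xd 
hadoop fs -chmod 777 /xd 

I then made the following entry in the config/hadoop.properties file: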

fs.default.name=hdfs://13.92.199.104:8020 

With that entry in place I started Spring XD with the xd-singlenode.bat command, and then made the following entries in the config/server.yml file:

spring: 
    profiles: singlenode 
    hadoop: 
        fsUri: hdfs://13.92.199.104:8020 
        resourceManagerHost: 13.92.199.104 
        resourceManagerPort: 8050 

Then I started the shell with xd-shell.bat and, at the shell console, ran a command like:

hadoop config fs --namenode hdfs://13.92.199.104:8020 
At this point everything seemed fine and the Hadoop environment on Azure appeared to be configured correctly; that is, after running the command hadoop fs ls /xd I got the following result:

Found 5 items 
drwxrwxrwx - jitendra.kumar.singh hdfs   0 2016-04-15 15:42 /xd/asdsadasdsad 
drwxrwxrwx - jitendra.kumar.singh hdfs   0 2016-04-15 14:30 /xd/fsd 
drwxrwxrwx - jitendra.kumar.singh hdfs   0 2016-04-19 12:53 /xd/jitendra 
drwxrwxrwx - jitendra.kumar.singh hdfs   0 2016-04-15 14:34 /xd/timeLogHdfs 
drwxrwxrwx - jitendra.kumar.singh hdfs   0 2016-04-19 12:22 /xd/zzzz 
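The stream was then created and deployed from the xd-shell, roughly like this (a sketch reconstructed from the stream definition and the stream name 'nnnnnn' that appears in the logs below):

stream create --name nnnnnn --definition "time | hdfs --fsUri=hdfs://13.92.199.104/" --deploy 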

So I created a stream with the definition time | hdfs --fsUri=hdfs://13.92.199.104/, as sketched above. It deployed successfully and a file nnnnn.txt.tmp was created in HDFS on Azure. Up to this point everything on the Spring XD server was fine. Then I undeployed the stream and found that nothing had been written to the nnnnn.txt.tmp file in HDFS, and the Spring XD server logged the following error:

2016-04-19T15:14:49+0530 1.3.1.RELEASE INFO DeploymentsPathChildrenCache-0 container.DeploymentListener - Deploying module 'time' for stream 'nnnnnn' 
2016-04-19T15:14:49+0530 1.3.1.RELEASE INFO DeploymentsPathChildrenCache-0 container.DeploymentListener - Deploying module [ModuleDescriptor@… moduleName = 'time', moduleLabel = 'time', group = 'nnnnnn', sourceChannelName = [null], sinkChannelName = [null], index = 0, type = source, parameters = map[[empty]], children = list[[empty]]] 
2016-04-19T15:14:50+0530 1.3.1.RELEASE INFO DeploymentSupervisor-0 zk.ZKStreamDeploymentHandler - Deployment status for stream 'nnnnnn': DeploymentStatus{state=deployed} 
2016-04-19T15:17:22+0530 1.3.1.RELEASE INFO main-EventThread container.DeploymentListener - Undeploying module [ModuleDescriptor@… moduleName = 'time', moduleLabel = 'time', group = 'nnnnnn', sourceChannelName = [null], sinkChannelName = [null], index = 0, type = source, parameters = map[[empty]], children = list[[empty]]] 
2016-04-19T15:17:22+0530 1.3.1.RELEASE INFO main-EventThread container.DeploymentListener - Undeploying module [ModuleDescriptor@… moduleName = 'hdfs', moduleLabel = 'hdfs', group = 'nnnnnn', sourceChannelName = [null], sinkChannelName = [null], index = 1, type = sink, parameters = map['fsUri' -> 'hdfs://13.92.199.104/'], children = list[[empty]]] 
2016-04-19T15:17:46+0530 1.3.1.RELEASE WARN Thread-19 hdfs.DFSClient - DataStreamer Exception 
org.apache.hadoop.ipc.RemoteException: File /xd/nnnnnn/nnnnnn-0.txt.tmp could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation. 
     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1588) 
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3116) 
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3040) 
     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:789) 
     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492) 
     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) 
     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) 
     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969) 
     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151) 
     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147) 
     at java.security.AccessController.doPrivileged(Native Method) 
     at javax.security.auth.Subject.doAs(Subject.java:415) 
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) 
     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145) 

     at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.1.jar:na] 
     at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[hadoop-common-2.7.1.jar:na] 
     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.1.jar:na] 
     at com.sun.proxy.$Proxy135.addBlock(Unknown Source) ~[na:na] 
     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418) ~[hadoop-hdfs-2.7.1.jar:na] 
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_77] 
     at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) ~[na:1.8.0_77] 
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[na:1.8.0_77] 
     at java.lang.reflect.Method.invoke(Unknown Source) ~[na:1.8.0_77] 
     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar:na] 
     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.1.jar:na] 
     at com.sun.proxy.$Proxy136.addBlock(Unknown Source) ~[na:na] 
     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1430) ~[hadoop-hdfs-2.7.1.jar:na] 
     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1226) ~[hadoop-hdfs-2.7.1.jar:na] 
     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449) ~[hadoop-hdfs-2.7.1.jar:na] 
2016-04-19T15:17:47+0530 1.3.1.RELEASE ERROR main-EventThread output.TextFileWriter - error in close 
org.apache.hadoop.ipc.RemoteException: File /xd/nnnnnn/nnnnnn-0.txt.tmp could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation. 
     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1588) 
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3116) 
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3040) 
     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:789) 
     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492) 
     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) 
     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) 
     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969) 
     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151) 
     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147) 
     at java.security.AccessController.doPrivileged(Native Method) 
     at javax.security.auth.Subject.doAs(Subject.java:415) 
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) 
     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145) 

     at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.1.jar:na] 
     at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[hadoop-common-2.7.1.jar:na] 
     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.1.jar:na] 
     at com.sun.proxy.$Proxy135.addBlock(Unknown Source) ~[na:na] 
     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418) ~[hadoop-hdfs-2.7.1.jar:na] 
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_77] 
     at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) ~[na:1.8.0_77] 
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[na:1.8.0_77] 
     at java.lang.reflect.Method.invoke(Unknown Source) ~[na:1.8.0_77] 
     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar:na] 
     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.1.jar:na] 
     at com.sun.proxy.$Proxy136.addBlock(Unknown Source) ~[na:na] 
     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1430) ~[hadoop-hdfs-2.7.1.jar:na] 
     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1226) ~[hadoop-hdfs-2.7.1.jar:na] 
     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449) ~[hadoop-hdfs-2.7.1.jar:na] 
2016-04-19T15:17:47+0530 1.3.1.RELEASE ERROR main-EventThread outbound.HdfsDataStoreMessageHandler - Error closing writer 
org.apache.hadoop.ipc.RemoteException: File /xd/nnnnnn/nnnnnn-0.txt.tmp could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation. 
     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1588) 
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3116) 
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3040) 
     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:789) 
     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492) 
     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) 
     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) 
     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969) 
     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151) 
     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2147) 
     at java.security.AccessController.doPrivileged(Native Method) 
     at javax.security.auth.Subject.doAs(Subject.java:415) 
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) 
     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2145) 

     at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.1.jar:na] 
     at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[hadoop-common-2.7.1.jar:na] 
     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.1.jar:na] 
     at com.sun.proxy.$Proxy135.addBlock(Unknown Source) ~[na:na] 
     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418) ~[hadoop-hdfs-2.7.1.jar:na] 
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_77] 
     at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) ~[na:1.8.0_77] 
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[na:1.8.0_77] 
     at java.lang.reflect.Method.invoke(Unknown Source) ~[na:1.8.0_77] 
     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar:na] 
     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.1.jar:na] 
     at com.sun.proxy.$Proxy136.addBlock(Unknown Source) ~[na:na] 
     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1430) ~[hadoop-hdfs-2.7.1.jar:na] 
     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1226) ~[hadoop-hdfs-2.7.1.jar:na] 
     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449) ~[hadoop-hdfs-2.7.1.jar:na] 
2016-04-19T15:17:47+0530 1.3.1.RELEASE INFO DeploymentsPathChildrenCache-0 container.DeploymentListener - Path cache event: path=/deployments/modules/allocated/6b31fe38-f07d-4f75-ad60-fd7c56aca843/nnnnnn.source.time.1, type=CHILD_REMOVED 
2016-04-19T15:17:47+0530 1.3.1.RELEASE INFO DeploymentsPathChildrenCache-0 container.DeploymentListener - Path cache event: path=/deployments/modules/allocated/6b31fe38-f07d-4f75-ad60-fd7c56aca843/nnnnnn.sink.hdfs.1, type=CHILD_REMOVED 

My guess is that the HDFS sink cannot talk to the datanode, which is quite common in cloud setups where you try to reach HDFS from a different network. Check core-site.xml and the datanode logs to verify which IP addresses/hosts the services are bound to. –
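A quick way to check this (a sketch, assuming shell access to the cluster; the property and port are standard Hadoop 2.x defaults, not taken from the original post): first list the addresses the datanodes registered with:

hdfs dfsadmin -report 

If the namenode advertises private Azure IPs that are unreachable from the local PC, make the client connect to datanodes by hostname in the client-side hdfs-site.xml, and open the datanode transfer port (50010 by default) in the Azure network security group:

<property> 
  <name>dfs.client.use.datanode.hostname</name> 
  <value>true</value> 
</property> 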

Answers


It could be that your file rollover size has not been reached yet. You need the --rollover option to roll over to a new file once the specified size is reached.
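For example, a definition along these lines (a sketch; the stream name and values are illustrative, not from the original post) rolls the file over after roughly 1 MB, or closes it after 60 seconds of inactivity, so data becomes visible without waiting for the sink's default 1 GB rollover:

stream create --name timehdfs --definition "time | hdfs --fsUri=hdfs://13.92.199.104:8020 --rollover=1M --idleTimeout=60000" --deploy 

Note that the in-progress file keeps the .tmp suffix until the writer closes it, which is why a stream that is undeployed too early can leave an empty .tmp file behind.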

You can find more information here: http://docs.spring.io/spring-xd/docs/current-SNAPSHOT/reference/html/#hadoop-hdfs
