
I run select * from customers in Hive and it returns results. But when I run select count(*) from customers, the job fails. In JobHistory I found 4 failed map tasks, and in a map task's log file I see this: Error: java.net.NoRouteToHostException: No route to host

2016-10-19 12:47:09,725 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties 
2016-10-19 12:47:09,786 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s). 
2016-10-19 12:47:09,786 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started 
2016-10-19 12:47:09,796 INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens: 
2016-10-19 12:47:09,796 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: mapreduce.job, Service: job_1476893269614_0006, Ident: ([email protected]aabe9c) 
2016-10-19 12:47:09,878 INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now. 
2016-10-19 12:47:29,958 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 0 time(s); maxRetries=45 
2016-10-19 12:47:30,961 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-10-19 12:47:31,962 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-10-19 12:47:32,963 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-10-19 12:47:36,971 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-10-19 12:47:37,975 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-10-19 12:47:38,976 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-10-19 12:47:46,992 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-10-19 12:47:47,993 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-10-19 12:47:48,994 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-10-19 12:47:50,999 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS) 
2016-10-19 12:47:51,002 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.net.NoRouteToHostException: No Route to Host from master1/192.168.1.30 to slave1:37159 failed on socket timeout exception: java.net.NoRouteToHostException: Aucun chemin d'accès pour atteindre l'hôte cible; For more details see: http://wiki.apache.org/hadoop/NoRouteToHost 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526) 
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791) 
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:757) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1475) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1408) 
    at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:243) 
    at com.sun.proxy.$Proxy9.getTask(Unknown Source) 
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:132) 
Caused by: java.net.NoRouteToHostException: Aucun chemin d'accès pour atteindre l'hôte cible 
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739) 
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494) 
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614) 
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713) 
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375) 
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1524) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1447) 
    ... 4 more 

And in Cloudera Manager, under Hosts > slave1 > Processes > YARN NodeManager > Log File, I found two warnings:

WARN org.apache.hadoop.hdfs.BlockReaderFactory:

I/O error constructing remote block reader. 
java.io.IOException: Got error for OP_READ_BLOCK, status=ERROR, self=/192.168.1.33:56208, remote=/192.168.1.30:50010, for file /user/admin/.staging/job_1476893269614_0001/libjars/hive-hbase-handler-1.1.0-cdh5.8.2.jar, for pool BP-1641388066-192.168.1.30-1476615377122 block 1073751347_10539 
    at org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:467) 
    at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:432) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:881) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:759) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:376) 
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:662) 
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:889) 
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:942) 
    at java.io.DataInputStream.read(DataInputStream.java:100) 
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85) 
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59) 
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119) 
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:369) 
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:265) 
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61) 
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359) 
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:357) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:415) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693) 
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356) 
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:745) 

WARN org.apache.hadoop.hdfs.DFSClient:

Failed to connect to /192.168.1.30:50010 for block, add to deadNodes and continue. java.io.IOException: Got error for OP_READ_BLOCK, status=ERROR, self=/192.168.1.33:56208, remote=/192.168.1.30:50010, for file /user/admin/.staging/job_1476893269614_0001/libjars/hive-hbase-handler-1.1.0-cdh5.8.2.jar, for pool BP-1641388066-192.168.1.30-1476615377122 block 1073751347_10539 
java.io.IOException: Got error for OP_READ_BLOCK, status=ERROR, self=/192.168.1.33:56208, remote=/192.168.1.30:50010, for file /user/admin/.staging/job_1476893269614_0001/libjars/hive-hbase-handler-1.1.0-cdh5.8.2.jar, for pool BP-1641388066-192.168.1.30-1476615377122 block 1073751347_10539 
    at org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:467) 
    at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:432) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:881) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:759) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:376) 
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:662) 
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:889) 
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:942) 
    at java.io.DataInputStream.read(DataInputStream.java:100) 
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85) 
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59) 
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119) 
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:369) 
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:265) 
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61) 
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359) 
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:357) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:415) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693) 
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356) 
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:745) 

Please help; I have been stuck on this problem for a long time. Thank you!

Answers

Answer 1 (score 0):

You get results from your first query because Hive does not use map/reduce for select * from customers, whereas select count(*) runs as a MapReduce job. Are you sure about your Hadoop configuration? Did you configure your hosts file?

How to configure hosts file for Hadoop ecosystem
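
As a minimal sketch (assuming a three-node cluster; the slave2 address is a guess, since only 192.168.1.30 for master1 and 192.168.1.33 for slave1 appear in the logs above), every node's /etc/hosts could look like:

# map each hostname to its static LAN IP; do not map the machine's own
# hostname to 127.0.0.1, or daemons may bind only to the loopback interface
127.0.0.1      localhost
192.168.1.30   master1
192.168.1.33   slave1
192.168.1.34   slave2   # assumed address - adjust to your network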

Comment:

Yes, I did that. I checked my hosts file and my hostname file; they are consistent with each other, otherwise I would see errors. I have a master and two slaves (slave1, slave2), and when I disable the firewall on slave1, **select count(*) from customers** works fine. My problem is with the port: **slave1/192.168.1.33:port** is dynamic. For this job the port was 37159, but another job gets a new port. –
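
If the dynamic Application Master port is the blocker, one possible approach (an assumption based on the standard mapred-default.xml property, not something confirmed in this thread) is to pin the AM's client port range in mapred-site.xml and open only that range in the firewall; whether this range also covers the task umbilical port seen in these logs can depend on the Hadoop version:

<!-- mapred-site.xml: restrict the ports the MR ApplicationMaster may bind,
     so the firewall only needs to allow a fixed range (example range) -->
<property>
  <name>yarn.app.mapreduce.am.job.client.port-range</name>
  <value>50100-50200</value>
</property>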

Comment:

How can I solve this problem without disabling the firewall? –

Comment:

You need to unblock those specific IPs. To do that, configure the firewall on both sides to accept each other. I recommend looking at this detailed answer on how to configure iptables to allow your IPs: http://serverfault.com/questions/30026/whitelist-allowed-ips-in-out-using-iptables/30031#30031 – ouphi
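
A minimal sketch of what that could look like with iptables (rule order and the persistence command vary by distro; these are assumptions for a RHEL/CentOS-style setup):

# on slave1: accept all traffic from master1
sudo iptables -I INPUT -s 192.168.1.30 -j ACCEPT
# on master1: accept all traffic from slave1
sudo iptables -I INPUT -s 192.168.1.33 -j ACCEPT
# persist the rules (RHEL/CentOS 6 style; other distros differ)
sudo service iptables save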

Answer 2 (score -1):

Hope this helps you. Set up your /etc/hosts and /etc/hostname like this:

sudo gedit /etc/hosts 
====== 
hosts 
====== 
127.0.0.1 localhost 
127.0.0.1 orienit 


sudo gedit /etc/hostname 

hostname 
======== 
orienit 
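
After editing those files, it may help to verify name resolution and reachability from each node; a quick check using the hostnames and port from the logs above (getent and nc are assumed to be available):

# does slave1 resolve to its LAN address rather than 127.0.0.1?
getent hosts slave1
# is the host reachable, and is the job's AM port open?
ping -c 3 slave1
nc -zv slave1 37159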