2017-04-05

Under a regular account, the NameNode is not formatted in Hadoop.

Using a regular account, I created the following directories:

/usr/local/hadoop-2.7.3/data/dfs/namenode 
/usr/local/hadoop-2.7.3/data/dfs/namesecondary 
/usr/local/hadoop-2.7.3/data/dfs/datanode 
/usr/local/hadoop-2.7.3/data/yarn/nm-local-dir 
/usr/local/hadoop-2.7.3/data/yarn/system/rmstore 

Then I ran these commands:

bin/hdfs namenode -format 
sudo sbin/start-all.sh 
jps 

After entering those commands, I could see only Jps.

Under the root account, I could see Jps, DataNode, SecondaryNameNode, NodeManager, and ResourceManager.

I have two questions:

  1. Why does the regular account show only jps?
  2. Why isn't the NameNode starting?

Thank you for reading. I would appreciate any help.

NameNode log file:

2017-04-06 01:16:15,217 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2017-04-06 01:16:15,220 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2017-04-06 01:16:15,680 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2017-04-06 01:16:15,843 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2017-04-06 01:16:15,843 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2017-04-06 01:16:15,845 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://localhost:9010
2017-04-06 01:16:15,846 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use localhost:9010 to access this namenode/service.
2017-04-06 01:16:16,070 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://localhost:50070
2017-04-06 01:16:16,152 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2017-04-06 01:16:16,158 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2017-04-06 01:16:16,165 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2017-04-06 01:16:16,169 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-04-06 01:16:16,171 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2017-04-06 01:16:16,171 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-04-06 01:16:16,171 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-04-06 01:16:16,300 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2017-04-06 01:16:16,303 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2017-04-06 01:16:16,330 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2017-04-06 01:16:16,330 INFO org.mortbay.log: jetty-6.1.26
2017-04-06 01:16:16,581 INFO org.mortbay.log: Started [email protected]:50070
2017-04-06 01:16:16,612 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop-2.7.3/data/dfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-04-06 01:16:16,612 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop-2.7.3/data/dfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-04-06 01:16:16,613 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2017-04-06 01:16:16,613 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2017-04-06 01:16:16,617 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop-2.7.3/data/dfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-04-06 01:16:16,617 WARN org.apache.hadoop.hdfs.server.common.Util: Path /usr/local/hadoop-2.7.3/data/dfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-04-06 01:16:16,639 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2017-04-06 01:16:16,639 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2017-04-06 01:16:16,668 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2017-04-06 01:16:16,668 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2017-04-06 01:16:16,669 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2017-04-06 01:16:16,669 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2017 Apr 06 01:16:16
2017-04-06 01:16:16,670 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2017-04-06 01:16:16,670 INFO org.apache.hadoop.util.GSet: VM type  = 64-bit
2017-04-06 01:16:16,671 INFO org.apache.hadoop.util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
2017-04-06 01:16:16,671 INFO org.apache.hadoop.util.GSet: capacity  = 2^21 = 2097152 entries
2017-04-06 01:16:16,690 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication   = 1
2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication    = 512
2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication    = 1
2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams  = 2
2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer  = false
2017-04-06 01:16:16,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog   = 1000
2017-04-06 01:16:16,706 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner    = root (auth:SIMPLE)
2017-04-06 01:16:16,707 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup   = supergroup
2017-04-06 01:16:16,707 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2017-04-06 01:16:16,707 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2017-04-06 01:16:16,708 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2017-04-06 01:16:16,963 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2017-04-06 01:16:16,963 INFO org.apache.hadoop.util.GSet: VM type  = 64-bit
2017-04-06 01:16:16,970 INFO org.apache.hadoop.util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
2017-04-06 01:16:16,970 INFO org.apache.hadoop.util.GSet: capacity  = 2^20 = 1048576 entries
2017-04-06 01:16:16,971 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2017-04-06 01:16:16,971 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2017-04-06 01:16:16,971 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 16384
2017-04-06 01:16:16,971 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2017-04-06 01:16:16,977 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2017-04-06 01:16:16,977 INFO org.apache.hadoop.util.GSet: VM type  = 64-bit
2017-04-06 01:16:16,977 INFO org.apache.hadoop.util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
2017-04-06 01:16:16,977 INFO org.apache.hadoop.util.GSet: capacity  = 2^18 = 262144 entries
2017-04-06 01:16:16,978 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2017-04-06 01:16:16,978 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2017-04-06 01:16:16,978 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension  = 30000
2017-04-06 01:16:16,980 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2017-04-06 01:16:16,980 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2017-04-06 01:16:16,980 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2017-04-06 01:16:16,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2017-04-06 01:16:16,983 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2017-04-06 01:16:16,984 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2017-04-06 01:16:16,984 INFO org.apache.hadoop.util.GSet: VM type  = 64-bit
2017-04-06 01:16:16,984 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
2017-04-06 01:16:16,984 INFO org.apache.hadoop.util.GSet: capacity  = 2^15 = 32768 entries
2017-04-06 01:16:17,005 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hadoop-2.7.3/data/dfs/namenode/in_use.lock acquired by nodename [email protected]
2017-04-06 01:16:17,007 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: NameNode is not formatted.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:225)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:585)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:645)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
2017-04-06 01:16:17,032 INFO org.mortbay.log: Stopped [email protected]:50070
2017-04-06 01:16:17,035 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false. Rechecking.
2017-04-06 01:16:17,035 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false
2017-04-06 01:16:17,035 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2017-04-06 01:16:17,035 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2017-04-06 01:16:17,035 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2017-04-06 01:16:17,035 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: NameNode is not formatted.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:225)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:585)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:645)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
2017-04-06 01:16:17,036 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-04-06 01:16:17,040 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:

Answer

Since you started the daemons with sudo, the root user owns those processes. The jps command reports only the JVMs the current user has access rights to, and a regular account has no access to processes owned by root.
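This per-user behavior can be checked directly, assuming jps is on the PATH (a sketch, not part of the original answer):

```shell
jps        # as the regular user: lists only that user's JVMs
sudo jps   # as root: also lists the root-owned Hadoop daemons
```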

Why isn't the NameNode starting?

java.io.IOException: NameNode is not formatted. 

The NameNode has not been formatted yet. You may not have entered Y when the format command showed its (Y/N) prompt.


Is that the problem? I use Hadoop with a regular account; Hadoop was started with sudo –


Start it without 'sudo'. Then it will be listed under 'jps'. Before starting, make sure the namenode was formatted successfully. – franklinsijo
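Putting that advice together, a minimal sketch of the fix, assuming the Hadoop 2.7.3 layout from the question and that everything runs as the regular account that owns /usr/local/hadoop-2.7.3:

```shell
# Format the NameNode as the regular user; answer Y at the (Y/N) prompt.
bin/hdfs namenode -format

# Start the daemons WITHOUT sudo so they run under the same user.
sbin/start-dfs.sh
sbin/start-yarn.sh

# jps now runs as the daemon owner, so NameNode, DataNode,
# SecondaryNameNode, ResourceManager and NodeManager should all appear.
jps
```

If the data directories were previously written as root, ownership may also need fixing first, e.g. with chown on the data directory from the question.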
